Sonic Thinking


by Bernd Herzogenrath


  By contrast, sound is motion, and heard sound or thought sound contracts that motion into singular qualities of timbre, volume, pitch, and rhythm. Sound carries with it the displacement at its origin. And it implies through its motion its potential, a modulation of the vibration that constitutes sound to begin with. Thought sound is about anticipation, expectation, implication. Sound is directed, it’s going somewhere. Standing apart but tied to its past and future, thought sound is an event.

  Digital sound, like everything in the digital, is simulated, a motion represented but not actually included in the digital: stop-motion sonography. Does simulation preclude the event? If digital sound has a past and future, these are not implicated dimensions of what is foremost a motion, but disconnected posits, lacking a common internal sense. The motion of digital sound cannot be found in the datum currently directing the processor, but exists separately, future data to be generated or read from a digital storage surface. In place of an event, the digital refers to the possible; digital sound is the sound of possibility. A sound made digital (or born digital) can be copied, measured, tweaked, sampled, reversed, compressed, etc.; it becomes available for a great set of possible manipulations. Precisely the opposite obtains for thought sound. Thought sound, on the order of the event, is a happening that cannot be reproduced, that does not even admit a concept of the same. Tied deeply into its milieu, thought sound defies representation that would purport to capture it as a separable and discrete occurrence.

  So is digital sound, as a product of digital culture, the sound that we hear today, sound that might be captured, reproduced, remixed, mashed up, filtered, recontextualized, played ironically? Is this what it means to hear digital sound as possibility, to hear all sound as digital possibility? A digital sound is a sound that reveals itself as possibility, the sound of the possible. It would be an uncomplicated distinction in principle: thought sound is sound heard in and of its milieu; digital sound is sound heard in terms of its possible redeployments, its value as a building block of a digital culture.

  It is therefore a trade-off: sound captured digitally discards its history and its implicated future but becomes available for other operations, piecemeal and predictable manipulation, plugging in rather than being heard. An event is singular but the singular can never be adequately represented; thus digital sound discards that portion of sound that is an event, removing sound from its milieu. The discarded portions of sound are not restored, but some history, some implicated future is reinjected into the digital sound when it is heard, when it is thought, and the digital’s pendant reliance on its outside ensures this opportunity. But what is restored or regiven to sound? Its origin as digital? An implication gathered from its data? Must we be satisfied with a sonic world in which sound presents its many possibilities at the cost of its potential for thought?

  Notes

  1Lest this sound somehow condemnatory, I should point out that every media technology operates in accord with a set of assumptions—norms or conventions. LP records, which have the signal literally inscribed (scratched) into the spiral groove of their soft surfaces, house a deliberately altered version of the “original” signal, with low pitches attenuated and higher pitches emphasized. This contrivance, the RIAA equalization curve, is designed to make LP sound storage and playback more accurate and efficient: loud, low pitches would require very wide groove excursions to inscribe accurately and would lead to distortion, while soft higher pitches would be buried under the surface noise of the record. The RIAA curve thus compensates for material deficiencies in vinyl records, but on playback the signal must be decoded by applying the inverse equalization curve, usually in the preamplification stage.
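  The encode/decode symmetry can be sketched crudely in a few lines of Python. This is not the actual RIAA curve; the per-band gains below are invented stand-ins, meant only to show how a deliberate tilt applied at recording is undone by its inverse at playback.

```python
# A toy stand-in for the encode/decode symmetry of note 1, not the real RIAA
# curve: three invented per-band gains tilt the signal for "cutting", and the
# inverse gains restore the original balance at "playback".
RECORD_GAIN = {"low": 0.25, "mid": 1.0, "high": 4.0}   # attenuate bass, boost treble

original = {"low": 1.0, "mid": 1.0, "high": 1.0}        # a flat reference spectrum

cut = {band: level * RECORD_GAIN[band] for band, level in original.items()}
print(cut)            # what sits in the groove is deliberately tilted

played_back = {band: level / RECORD_GAIN[band] for band, level in cut.items()}
print(played_back)    # the preamplifier's inverse curve undoes the tilt exactly
```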

  2There are multiple ways to represent a sound in digital code. There are methods, for example, of taking a sequence of numbers representing the oscillating amplitude of a signal and making the sequence shorter by eliminating some of the numbers to compress the data. It can be played back by decompressing the signal to regenerate an approximation of the discarded data, reconstituting (inexactly) the original sequence of numbers. Or consider the possibility of writing coded instructions to produce a sequence of numbers that could be turned into heard sound; is this not also a digital representation of sound? The point is that there is no bottom line, no “true” representation of a sound in the digital domain. The digital is already a mediation, and so imposes its medial character on its objects and actions. Moreover, its representations rely on a model of what sound is and what hearing is, theories of acoustics and psychoacoustics.
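  Both representations mentioned in this note can be sketched, very schematically, in a few lines of Python; the sample rate, tone frequency, and the crude keep-every-other-sample “compression” are assumptions chosen only for illustration.

```python
# A minimal sketch of the two representations note 2 describes, using plain
# Python and no audio library.
import math

SAMPLE_RATE = 8000          # samples per second (an assumed, modest rate)
FREQ_HZ = 440.0             # pitch of the illustrative tone
DURATION_S = 0.01           # a few dozen samples is enough to show the idea

# (b) "Coded instructions to produce a sequence of numbers": the tone exists
# here only as a rule for generating amplitudes, not as stored data.
samples = [math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
           for n in range(int(SAMPLE_RATE * DURATION_S))]

# (a) Crude "compression" by discarding every second number...
compressed = samples[::2]

# ...and "decompression" by interpolating approximations of what was dropped.
restored = []
for i, value in enumerate(compressed):
    restored.append(value)
    nxt = compressed[i + 1] if i + 1 < len(compressed) else value
    restored.append((value + nxt) / 2)   # guessed, not recovered, data

print(len(samples), len(restored))       # same length, only approximately equal
```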

  3“Maps should be made of these things, organic, ecological, and technological maps one can lay out on the plane of consistency” (Deleuze and Guattari 1987: 61).

  4The digital does not altogether dematerialize of course. Aside from the peripheral hardware that actualizes the central layer of the interface, a privileged plane of encounter between user and digital, even the computer processors, a churn of bits, remain solid pieces of recalcitrant materiality, demanding power, limiting speed, taking up space and heft. But this hardware is designed to get out of the way to the greatest extent possible, a rule that has almost unfailingly governed the historical progress of digital technologies. These traces of materiality in the machine do not betray the division between the formal value of the bit and the localized material property that represents that value, like a window that allows one to see whatever is on its other side without prejudice.

  5Compare this to Kirschenbaum’s notion of formal materiality. Perhaps these are analogous or at least complementary notions, formal materiality and materialized abstraction, but the noun in each case shows where the emphasis is placed: Kirschenbaum wishes to highlight the materiality of digital form, while the present study names abstraction the distinctive “formation of power” of the digital, which minimizes the drag of the digital material as much as possible.

  6“For, in their combination on chip, silicon and its oxide provide for perfect hardware architectures. That is to say that the millions of basic elements work under almost the same physical conditions, especially as regards the most critical, namely temperature dependent degradations, and yet, electrically, all of them are highly isolated from each other. Only this paradoxical relation between two physical parameters, thermal continuity and electrical discretization on chip, allows integrated circuits to be not only finite state machines like so many other devices on earth, but to approximate that Universal Discrete Machine into which its inventor’s name has long disappeared” (Kittler 1997: 153–54).

  7There are no exact values in the real world, only degrees of precision. Thus in order to ensure that every bit is read by the system as either 0 or 1, the values—of magnetic field strength or electrical potential or reflectivity or some other property of a substrate used to encode bit values—are treated as 0 when they are within a range of value close to the nominal designated value, and likewise for values of 1. So, if a system uses the convention that a voltage of +2V is considered a value of 0, then it will likely also treat a bit whose voltage reads +1.8V as a 0, using a “close enough” principle.
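  As a minimal sketch of this “close enough” principle, in Python, one might write something like the following; the nominal voltages and the tolerance are assumed values for the purpose of the example.

```python
# A minimal sketch of the "close enough" reading note 7 describes. The
# nominal voltages and the tolerance are assumptions for illustration only.
NOMINAL = {0: 2.0, 1: 5.0}   # convention: +2V means 0, +5V means 1
TOLERANCE = 0.5              # how far a reading may drift from the nominal value

def read_bit(voltage: float) -> int:
    """Map a measured voltage to a bit value, or fail if it is out of range."""
    for bit, nominal in NOMINAL.items():
        if abs(voltage - nominal) <= TOLERANCE:
            return bit
    raise ValueError(f"{voltage} V is not close enough to any nominal value")

print(read_bit(1.8))   # -> 0, per the example in the note
print(read_bit(4.7))   # -> 1
```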

  8Consider the asymmetry between plus-one and the n that it increments. The expression is itself misleading, because it does not indicate with sufficient intensity its own asymmetry. The increment is an operation, whereas n is just a static point, equal to itself. In fact this means that there is a certain irony in the pseudo-mathematical expression, n+1, in that the variable is the fixed part while the seemingly more concrete addend is the dynamic element. Renaissance mathematicians distinguished between the thing to be increased, the augend or first term of the sum, and the thing that increases, the addend or second term of the sum, preserving an asymmetry in the operation of addition, one that takes one element as given and the other as to be given.

  9To establish the bit’s inclusion of dynamic difference it would be enough to identify a single bit bearing the same value over time, implying that the concept of the bit by itself already includes both vectors of abstraction.

  10The place of a bit often determines its power: when used to represent integer numbers, for example, bits are ordered from least significant to most significant, where a bit’s significance refers to what power of two (2^n) it determines.
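  A short Python sketch of this positional weighting, with an arbitrary eight-bit example value:

```python
# Each bit of an integer carries a weight of 2**n, where n is its place,
# counted from the least significant bit.
def bit_weights(value: int, width: int = 8):
    """Return (place, bit, weight) triples from least to most significant."""
    return [(n, (value >> n) & 1, 2 ** n) for n in range(width)]

for place, bit, weight in bit_weights(0b1011_0010):
    print(f"place {place}: bit {bit} contributes {bit * weight}")
# The contributions sum back to the original integer, 178.
```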

  11“We may therefore use the term central layer, or central ring, for the following aggregate comprising the unity of composition of a stratum: exterior molecular materials, interior substantial elements, and the limit or membrane conveying the formal relations. There is a single abstract machine that is enveloped by the stratum and constitutes its unity. This is the Ecumenon, as opposed to the Planomenon of the plane of consistency” (Deleuze and Guattari 1987: 50).

  12Bits are individually accessed using a numerical index. It is a powerful feature of the digital that its code can operate on the elements of that very code; the code can point to a bit with trivial simplicity. In the digital, the binary code thereby has the power of self-reference built into it.
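  A minimal Python sketch of such indexing; the byte values and the particular bit index are arbitrary examples.

```python
# Code addressing a single bit of its own data by a plain numerical index.
data = bytearray(b"\x00\x10\xff")     # 24 bits of arbitrary content

def get_bit(buf: bytearray, index: int) -> int:
    """Read bit `index`, counting from bit 0 of the first byte."""
    byte, offset = divmod(index, 8)
    return (buf[byte] >> offset) & 1

def set_bit(buf: bytearray, index: int, value: int) -> None:
    """Point at bit `index` and overwrite it in place."""
    byte, offset = divmod(index, 8)
    buf[byte] = (buf[byte] & ~(1 << offset)) | ((value & 1) << offset)

print(get_bit(data, 12))   # reads a bit "with trivial simplicity"
set_bit(data, 12, 0)       # and the code can just as easily rewrite it
```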

  13“It goes from a center to a periphery, at the same time as the periphery reacts back upon the center to form a new center in relation to a new periphery. Flows constantly radiate outward, then turn back. There is an outgrowth and multiplication of intermediate states, and this process is one of the local conditions of the central ring (different concentrations, variations that are tolerated below a certain threshold of identity). […] We will use the term epistrata for these intermediaries and superpositions, these outgrowths, these levels” (Deleuze and Guattari 1987: 50).

  14“We will apply the term ‘parastrata’ to the second way in which the central belt fragments into sides and ‘besides,’ and the irreducible forms and milieus associated with them. This time, it is at the level of the limit or membrane of the central belt that the formal relations or traits common to all of the strata necessarily assume entirely different forms or types of forms corresponding to the parastrata” (Deleuze and Guattari 1987: 52).

  15Digital code allows access to multiple planes, provides ways of referencing and manipulating structured data of greater and lesser complexity, and such access is explicitly termed an interface to the data structure in question.

  16Structured programming reveals this nested hierarchy of layers. Objects, their parts, and their parts’ parts are often explicitly specified in the code text: the name of an object, followed by the name of one of its parts, followed by the name of a subpart, with a lexical mark such as a period or colon separating them.
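  For instance, in Python (the class and field names here are invented purely for illustration):

```python
# Dot-separated naming of an object, one of its parts, and a part of that part.
from dataclasses import dataclass

@dataclass
class Envelope:
    attack_ms: int
    release_ms: int

@dataclass
class Voice:
    envelope: Envelope
    gain: float

@dataclass
class Synth:
    voice: Voice

synth = Synth(voice=Voice(envelope=Envelope(attack_ms=5, release_ms=200), gain=0.8))

# Object, part, and subpart, named left to right and separated by periods:
print(synth.voice.envelope.attack_ms)
```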

  17The distinction between run-time and compile-time reinforces the two-sidedness; in compile-time, the programmer designs structures using coded instructions. In run-time, the user determines the specific nature of those structures, actualizing possibilities proffered by the executing software, by providing values for the bits that constitute the outlined structures. (Though it is a reasonable gloss of the difference between programming and executing a program, this distinction between compile-time and run-time is undoubtedly an oversimplification, which ignores, for one thing, the lack of any real distinction between data, structure, and instruction in the digital.)
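  A small Python sketch of this two-sidedness, with invented field names: the structure below is fixed when the program is written, while its values are supplied only when a user runs it.

```python
# The outlined structure belongs to the "compile-time" side; the values that
# fill its bits arrive only at run time, from the user.
from dataclasses import dataclass

@dataclass
class Note:              # outlined by the programmer
    pitch: int
    velocity: int

def run() -> Note:       # actualized by the user
    pitch = int(input("MIDI pitch (0-127): "))
    velocity = int(input("velocity (0-127): "))
    return Note(pitch=pitch, velocity=velocity)

if __name__ == "__main__":
    print(run())
```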

  18“Nomadic waves or flows of deterritorialization go from the central layer to the periphery, then from the new center to the new periphery, falling back to the old center and launching forth to the new” (Deleuze and Guattari 1987: 53).

  19Hubert Dreyfus (2009) writes plainly and compellingly on the limited investment in the digital in On the Internet.

  20Thus does Hansen include in his understanding of the digital image the possibilities for manipulating or altering that image.

  21Certainly some mathematical statements feel more closely bound to the actual world than do others; whole number arithmetic, 5+7=12, seems somehow unassailable, at least to many thinkers (and adders), whereas statements about transfinite numbers and infinite-dimensional spaces feel more closely tethered to the axioms and definitions that give rise to them, and a bit farther from our living world.

  Works cited

  Deleuze, G. and F. Guattari (1987), A Thousand Plateaus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press.

  Dreyfus, H. (2009), On the Internet. New York: Routledge.

  Galloway, A. (2004), Protocol: How Control Exists After Decentralization. Cambridge, MA: MIT Press.

  Hansen, M. (2004), New Philosophy for New Media. Cambridge, MA: MIT Press.

  Kirschenbaum, M. (2008), Mechanisms: New Media and the Forensic Imagination. Cambridge, MA: MIT Press.

  Kittler, F. (1997), “There Is No Software,” in J. Johnston (ed.), Literature, Media, Information Systems: Essays. Amsterdam: G+B Arts International.

  sonic thought iv

  Sonotypes

  Sebastian Scherer

  CAVEAT: The following is more than a text; rather, it comes in different forms. It was originally delivered as a talk / performance at the 2013 sound|thinking conference in Frankfurt, Germany; various other incarnations of it will be explored in what is to come here.

  This is normally not the way I do it when giving talks; today, however, I will read out my text word by word. This is an experiment. I will try not to make any mistakes or digressions.

  A couple of months ago, when I was thinking about this conference, and my possible part in it, these humble 20 minutes, I soon decided to take things very literally: sound thinking. This talk is going to be about a possible interrelation of sound and thought—and the transition from thought to sound.

  What interested me were different ways of actualizing thought, especially what I was doing when formulating this talk: manifesting the results of my thought process in a written form. This process of writing, of bringing these ephemeral thoughts down to paper, the becoming of text, entails yet another thought process, in refining and completing my ideas by finding appropriate, unequivocal expressions for them.

  So when I tried to bring these two spheres—thinking and sound, research and art—closer together, and into a potential dialogue, I had to find a viable method of expressing the results of my thought process sonically. I wanted to devise a concept to translate this process of me thinking and writing—letter after letter, word after word—into a form that is audible. I wanted to turn thought into sound, I wanted to sound out my thoughts—to eventually think sound.

  Of course there is the most common form of sonically decoding written information, which is speech. But I wondered if there was a more abstract way of tapping into this signal chain, of reinterpreting and bypassing the apparent conventions and constraints of speech in relation to thought.

  Maybe this would permit novel ways of comprehension. Maybe this would have the potential to open up alternative perspectives on the very thought process itself and disclose implicit patterns and structures that are not apparent when reading a text in its written form, or when listening to it being presented orally.

  And indeed, related strategies are employed in other scientific disciplines: for instance by astrophysicists, who use methods of sonification to perceptualize rather large bodies of data. Last year, at the Media-Matter conference we had Frank Scherbaum here in Frankfurt, a geophysicist from Potsdam University, who transposed the infrasonic rumblings of seismic activity into the audible range of the frequency spectrum in order to get better analytical leverage on the readings of his sensors and probes. And a couple of weeks ago I talked to a neurolinguist from the University of Mainz, who similarly listens to the waveform output of his EEG machines, which record the brain activity of test subjects confronted with non-standard or irregular syntax structures.
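  (The transposition strategy itself is simple enough to sketch in a few lines of Python: if a record sampled at a low rate is written out at a standard audio rate, every frequency in it is multiplied by the ratio of the two rates. The sensor rate, file name, and synthetic 0.2 Hz “rumble” below are assumptions for illustration only.)

```python
# Transposing an infrasonic signal into hearing range by declaring a faster
# playback rate: a seismic trace sampled at 100 Hz, written out at 44,100 Hz,
# has every frequency multiplied by 441.
import math, struct, wave

RECORD_RATE = 100          # assumed sampling rate of the seismic sensor (Hz)
PLAYBACK_RATE = 44_100     # standard audio rate; the speed-up factor is 441x

# Stand-in for a seismic trace: a 0.2 Hz rumble, far below hearing.
trace = [math.sin(2 * math.pi * 0.2 * n / RECORD_RATE)
         for n in range(RECORD_RATE * 600)]          # ten minutes of data

with wave.open("rumble_audible.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)                              # 16-bit samples
    out.setframerate(PLAYBACK_RATE)                  # the transposition happens here
    out.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in trace))
# Played back, the 0.2 Hz rumble sounds as a tone of roughly 88 Hz.
```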

  So I thought: since the natural sciences are doing it, why not in the humanities?

  When writing nowadays, the tools of choice are oftentimes no longer pen and paper, or the trusty typewriter, but rather a computer and a keyboard. The term “keyboard”, however, is rather ambiguous: it bears a double meaning, denoting not only an input device for the computer but also the manual interface of many musical instruments.

  A first idea was to approach this at the machine level: I tried to solder an output jack to a standard computer keyboard and plug that into a preamplifier in order to make audible the scancode signals that the keyboard produces when different keys are pressed. This, however, is easier said than done. I had the naive notion that listening to the output of the keyboard would produce a stream of audio information not unlike the acoustic couplers of old modems or fax machines. And I cracked an old computer keyboard open and took it apart (which is a rather disgusting thing to do). But when fumbling around with the delicate electronic innards of the keyboard, I soon decided that this endeavor would exceed my hacking and engineering capabilities by far.

  Doing further research in the direction of sonification, I soon stumbled across a rather unsuccessful invention by the French-Irish physicist Edmund Edward Fournier d’Albe from the year 1913, called the Optophone. This device was originally conceived as a reading aid for blind people. It translated printed letters via photoelectric sensors into correspondingly pitched acoustic signals. In practice, however, its application turned out to be rather difficult, because the interpretation of the resulting tone clusters was a delicate and cumbersome process, permitting a reading speed of only a few words per minute—and only blind persons specifically trained in the process could accomplish even that rate accurately.1
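  (The principle can be caricatured in a few lines of Python, under the simplifying assumption of a five-row bitmap glyph and five fixed scanning pitches; the real device scanned print with beams of light, and the output here is only a list of sounding pitches, not audio.)

```python
# A schematic caricature of the Optophone: scan a glyph column by column and
# sound one assumed pitch for every inked row.
PITCHES_HZ = [392, 523, 587, 659, 784]     # one illustrative pitch per row

GLYPH_T = [                                # a crude 5x5 bitmap of the letter "T"
    "#####",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
]

for column in range(5):                    # the device scans across the letter
    chord = [PITCHES_HZ[row] for row in range(5) if GLYPH_T[row][column] == "#"]
    print(f"step {column}: sounding {chord} Hz")
# Reading means decoding these shifting tone clusters back into letters by ear.
```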

  So sitting at my desk, where I was reading, researching, and writing these lines, I realized I had everything I needed already right in front of me: next to my computer keyboard sits a MIDI-controller musical keyboard (Figure iv.1).

  Figure iv.1

  When we look at the octave in the standard western musical notation system, it is obvious that the naming of the musical notes is indeed derived from the alphabet. When reciting one of the most common musical scales, C major, we start out with a C. So the sequence of steps in the octave would be CDEFGABC. If, however, we start with the A, as the alphabet does, we get the equally familiar sequence ABCDEFGA.
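  (A trivial Python sketch of these two orderings, rotating the same seven note names:)

```python
# The same seven note names, read once from C (the major scale's conventional
# starting point) and once from A (the alphabet's).
NOTE_NAMES = ["A", "B", "C", "D", "E", "F", "G"]

def octave_from(start: str) -> str:
    i = NOTE_NAMES.index(start)
    rotated = NOTE_NAMES[i:] + NOTE_NAMES[:i]
    return "".join(rotated + [start])      # close the octave on the starting name

print(octave_from("C"))   # CDEFGABC
print(octave_from("A"))   # ABCDEFGA
```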

 
