Thus somewhere in the evolutionary history of animals, after they diverged from fungi some seven hundred million years ago—and according to James Watson, 40 percent of yeast proteins are still homologous to ours—there came a point where, with their sensory organs concentrated at one end, they recognized their fellows as a rich energy gradient. Of course they didn’t at first realize, as we vegetarian, vegan, Jain, and pepperoni pizza eaters do, that those fellows were feeling beings like themselves. But this primordial carnophagy, as Derrida calls it, set up the conditions for an ethical crisis from which we still have not recovered.
We come from a long line of naturally self-centered ancestors. As Alan Watts, the greatest popularizer of Eastern philosophy we have ever had in the West, puts it, the “shape alone is stable. The substance is a stream of energy going in at one end and out at the other.” The tubes “put things in at one end and let them out at the other. . . . [This] both keeps them doing it and in the long run wears them out. [But this part isn’t true, as I will explain.] So to keep the farce going, the tubes find ways of making new tubes, which also put things in at one end and let them out at the other. At the input end they even develop ganglia of nerves called brains, with eyes and ears, so that they can more easily scrounge around for things to swallow. As and when they get enough to eat, they use up their surplus energy by wiggling in complicated patterns, making all sorts of noises by blowing air in and out of the input hole, and gathering together in groups to fight with other groups. In time, the tubes grow such an abundance of attached appliances that they are hardly recognizable as mere tubes, and they manage to do this in a staggering variety of forms. There is a vague rule not to eat tubes of your own form, but in general there is serious competition as to who is going to be the top type of tube.”
All this seems “marvelously futile,” says Watts, adding that it is “more marvelous than futile.”3 Watts is right, although it is actually a bit worse from an ethical standpoint because his assumption that the tubes must wear out turns out to be wrong. In fact, they are killed off by what we could call an “inside job”: multiple redundant systems, working from deep within the genome, ensure that in many species, including our own, organisms die by aging. Genetic assassins—to use colorful language—include apoptosis (programmed cell death), telomere shortening, and glucose-mediated mechanisms that ensure we age relatively quickly if we are well fed.4
Of course to say “assassin” and “inside job” is to beg the biopolitical question. For example, Ed Cohen in A Body Worth Defending excavates the political roots of the seemingly self-evident biological idea of immunity. And he does a splendid job, beginning his book with the striking image of Élie Metchnikoff, who introduced the term immunity into biology. One day Metchnikoff, when his “family had gone to a circus to see some extraordinary performing apes, remained alone with [his] microscope, observing the life in the mobile cells of a transparent star-fish larva [at which point] a new thought suddenly flashed across [his] brain.”
As the scientist described his finding, “It struck me that similar cells might serve in the defense of the organism against intruders . . . I said to myself that if my supposition was true, a splinter introduced into the body of a star fish larva, devoid of blood vessels or a nervous system, should soon be surrounded by mobile cells as is to be observed in a man who runs a splinter into his finger. This was no sooner said than done.
“There was a small garden to our dwelling . . . [and] I fetched from it a few rose thorns and introduced them at once under the skin of the beautiful star-fish larvae as transparent as water.
“I was too excited to sleep that night in the expectation of the results of my experiment, and very early the next morning I ascertained that it had fully succeeded.”5
COHEN CORRECTLY POINTS OUT that Derrida, before he died, tended to speak of 9/11 in terms of global autoimmunity. Cohen criticizes Derrida for conflating immunity and autoimmunity, but as I read, it occurred to me that this conflation may reflect our own geopolitical confusion. Whether or not Derrida was aware of insufficiently investigated physical anomalies on September 11, 2001, his conflating immunity and autoimmunity acts like an unwritten Wiki entry that can absorb future discourse. Speaking plainly, if 9/11 is indeed a false flag (one of many and not necessarily the most recent in a long military history), then Derrida’s “conflation” is quite cagey.6 Immunity would refer to the rapid spread of national security control apparatuses in the global body politic, whereas autoimmunity would refer to the purposeful introduction of terror, a state crime against democracy, to initiate a global response.7
As I read I also worried about what we might call Foucauldianism or even Foucaultitis (pronounce it how you like)—the tendency to pore through documents and identify a concept or reality with its historical introduction into texts. Sex began in such and such a century, immunity evolved in the court system, and so on.
As Cohen relentlessly pushed for the political, social, and cultural roots of this seemingly innocent concept of immunity for three-hundred-plus pages, I kept asking myself, yes, but if the starfish, and we, are not in some sense immune, what would you call it? What is the transparent starfish larva doing with its mobile cells extracting the rose thorn if not being in some sense immune to it?
THERE IS OBVIOUSLY SOMETHING to be said for this modern form of scholasticism, as it forces us to disinter concepts we take for granted. However, I believe Alfred North Whitehead had it right when he credited the brilliance of science to a deep strain not of conceptual gymnastics but of “anti-intellectualism.”8 The excesses of Aristotelian scholasticism served as a counterexample, helping scientists leave the musty room of involuting ideas with no love lost, as they stepped forward, with just the Greek genius for bold lucid speculation, into the fresh air of nonhuman things, which they measured and observed. A good example, of course, is Metchnikoff himself boldly applying the juridical concept of immunity to starfish larvae, and finding that it worked.
By book’s end, however, Cohen answered, or acknowledged, my question. He discussed AIDS as an insufficiently problematized diagnosis, one dependent on the very notion of immunity, which provides the “I” in the middle of both AIDS and HIV. “Might biological community,” he writes, “enable us to appreciate healing. . . . There may be more to immunity than we currently know, or are indeed even capable of knowing, so long as we remain infected by the biopolitical perspectives that it defensively defines as the apotheosis of the modern body.”9
Here it is worth a return to Margaret McFall-Ngai’s startling but sensible surmise that the immune system, appearing first in marine metazoans surrounded by seawater infused with one hundred million microbes per liter, started not by weeding out or destroying pathogens but by engaging the most helpful symbionts10—producing, as Cohen suggests, a community.
STILL AND ALL THE SAME, metacaspases, T cells, nitric oxide, telomere rationing, apoptosis, and thymic involution (which progressively weakens the immune system) seem part of cell regimes that not only shape and protect but eventually kill the body via aging. While Latour recommends a constructivism in working for world peace because naturalism has already been tried and didn’t work, this recommendation runs the risk of derailing the sort of critical thinking necessary to see the depths of problems before beginning to fabricate novel solutions. We don’t have to be mononaturalists to see that culture is grounded in a nature or that nature is not our organic fairy godmother.
Facing the problem of culture’s natural roots squarely, Roberto Esposito writes, “Anything but the negation of nature, the political is nothing else but the continuation of nature at another level and therefore destined to incorporate and reproduce nature’s original characteristics.”11 It is tempting but insufficient to argue that something like the general violence that war epitomizes for us is a contingent peculiarity of human history that we’ve prematurely naturalized in concepts like immunity and survival of the fittest. We need not paint with as broad a brush as Nietzsche: “Every moment devours the preceding one, every birth is the death of innumerable beings; begetting, living, murdering, all is one.” But Spencer had a point. “Now that moral injunctions are losing the authority given by their supposed sacred origin, the secularization of morals is becoming imperative.”12 This paves the way for religionists to contemplate the moral vacuum that threatens them with “fear” and “dread.” On the other hand, it emboldens political movers and shakers, not to mention sociopaths, to seize the opportunity in a world without moral law. However, I would add that it’s much harder to break natural laws than human ones.
In the gruesome feast at the end of Zarathustra, the Overman sets fire to his texts and dances. This is aktive Vergesslichkeit—active forgetting. I think we actively forget when we are confronted with life’s ethical abyss. The presence of this abyss reminds me of the Sun for Georges Bataille. In Paris in 1929, Bataille read Vladimir Vernadsky’s La Biosphère in translation. Vernadsky, as famous in Russia as Darwin is in the West, argued that we are children of the Sun, that life is a solar phenomenon, that the Earth system is really the Earth–solar system. Accumulating light energy in green material through photosynthesis, life creates problems, especially for itself. I would liken life’s ethical abyss to the Sun as described by Bataille: although it is real, our great source of excess and possibility, we tend to abstract it or look away, because we cannot gaze on it too long without going blind.
CHAPTER 3
THE POST-MAN ALREADY ALWAYS RINGS TWICE
THE FUTURE TIMES
“Today” I received this strange news item, but it had no address on it, so I am passing it on to lucky you. By post-man I will have meant (mostly) posthuman. And he (so to speak) rings twice. At least.
The first ringing is literal and refers to what comes after humans in evolution. This announcement that the posthuman has arrived has to do with speciation, guesswork, machines; with loose predictions that fall off a cliff of accuracy as we extrapolate physically nonextrapolatable trends into the future. The classic example of such a trend is the graph of human transportation modes over time. I discovered this in John Platt’s futurism class at Harvard, which I audited in the 1980s (after 1984, before 2001). Extrapolated, the hyperbolic curve of increasing velocity made by peripatetic pedestrian philosopher, horse-drawn carriage, automobile, airplane, and rocket ship does not take long to exceed the speed of light, a progression that is impossible under Albert Einstein’s special theory of relativity. Another example—more pertinent and potentially horrible—is an extrapolation of human population growth. I committed the “fact” of total world population to memory in grade school. It was three billion. Now it’s more than doubled. We know from the fossil record and observation of bacterial growth in petri dishes that the greatest species abundance sometimes occurs in generations immediately prior to population collapse. The first ringing of the posthuman is thus linear or literally futuristic, and as such speculative, most readily explorable by the science fiction imagination.
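For readers who want Platt’s curve worked out rather than merely asserted, here is a minimal sketch of the extrapolation argument. The speeds and dates are assumed, illustrative values, not historical data, and the exponential fit stands in for whatever hyperbolic trend line one prefers; the point is only that the naive model sails smoothly past a physical impossibility.

```python
import math

# Rough, assumed (year, top travel speed in km/h) pairs -- for illustration only.
speeds = [
    (1800, 6),      # walking philosopher
    (1850, 20),     # horse-drawn carriage
    (1920, 150),    # automobile
    (1960, 1000),   # airliner
    (1970, 40000),  # rocket near escape velocity
]

# Least-squares fit of log(speed) = a + b * year, i.e., an exponential trend.
n = len(speeds)
xs = [year for year, _ in speeds]
ys = [math.log(v) for _, v in speeds]
x_mean = sum(xs) / n
y_mean = sum(ys) / n
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum(
    (x - x_mean) ** 2 for x in xs
)
a = y_mean - b * x_mean

LIGHT_SPEED_KMH = 1.079e9  # speed of light in km/h

# Solve a + b * year = log(c) for the year the trend line crosses light speed.
year_crossing = (math.log(LIGHT_SPEED_KMH) - a) / b
print(f"Naive exponential trend crosses light speed around the year {year_crossing:.0f}")
```

Run as written, the fitted trend crosses light speed within a few centuries. The particular year is meaningless; what matters is that the model, like the petri-dish population curve, extrapolates cheerfully through a wall that physics says cannot be breached.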
Unfortunately for us as humans, the majority of fictive posthuman realities are characterized by the total absence of Man, except of course as the imaginer. (I mean “Man” in the old, general, sexist sense of the term. Of course, some futures include men without women and women without men. In some futures there will be only women, cloned lesbians more satisfied and less ecologically destructive without the Y chromosome. Perhaps they will replicate Arnold Schwarzenegger’s disembodied pectoral muscles for white breast-meat in annual Thanksgiving to their triumph. Nonetheless, my meaning of posthuman is inclusive.)
THE FUTURE UNTIMES
The second ringing, or meaning, of the posthuman is for me both more interesting and more accurate with regard to the after-Man. Because I take it as axiomatic that linear time may be (sometimes is) an illusion, the second, nonlinear or metaphysical aspect of the posthuman operates not in the science fiction future but in the present. Logically, of course, the future is a construct. As Saint Augustine of Hippo reportedly said, “I know what time is until you ask me.” The future is always to come; we never get there except in our imaginations, as with the past. It is always now. Mystics and ecodelic-ingestors sometimes report the experience of seeing time “end-on,” although the word experience here is inadequate insofar as it connotes precisely that duration in linear time that is being called into question. Alan Watts has stressed that while ecologists, biologists, and physicists know that the organism and the environment are not two things separated by a subject–object relationship—but are rather a single process, a unified field—they don’t necessarily feel that this is so. This argument is similar to the idea that time is possibly illusory or insubstantial, a conclusion that can be logically arrived at by, say, asking you to point in the direction of the future. According to Benjamin Whorf, the Hopi locate time behind them, because it cannot be seen. The past is solid, visible, because we can see what happened (think of “hindsight”); the future is liquid, open, black, and frightening, but wonderfully full of potential. Outside time, such differences blend or further differentiate. Glossing Martin Heidegger in a footnote, Jacques Derrida describes how, in the nonvulgar or “Greek” conception of time, times past, present, and future converge and diverge; they are at once touching and infinitely distant. Perhaps the founding example of such posthuman nonlinearity is Friedrich Nietzsche’s discussion of the last man and the Übermensch (the “Overman”) in Thus Spake Zarathustra. The last man, the overcoming of man, and the superman are, clearly, not just literal (as in Adolf Hitler’s interpretation) but allegorical. On the one hand, Nietzsche, thinking of Charles Darwin, complains that these English are no philosophers. On the other hand, Zarathustra—his prophet up in the mountains who talks to the animals and informs humanity that God is dead (but the news has taken a long time to reach us)—espouses the theory of eternal recurrence as his most important doctrine. At the very least, the idea that everything that can happen will happen, and not once but an infinite number of times, does not go along with simple, linear, evolutionary scenarios.
So we have then a double meaning, a double entendre, of the future: one linear, one nonlinear. The post-man’s first ring announces the future of man; the second is uncannily silent, more a buzz or a beep. This is the human future that never arrives because it is already always here.
There are problems with both ringings. But in the end I think the nonlinear, atemporal or polymorphically temporal posthuman version is the more resonant. Ironically, the playful and metaphorical view of the human future is not only broader but more literally true than what Heidegger calls the received or “vulgar” view of orderly, progressive, linear time.
But before I veer off into unreconstructed mysticism, let me say that the fragmentary, the fallen, and the derivative do not belong only to the present age—as Derrida argues, against Jean-Jacques Rousseau, in Of Grammatology. Language itself, the splintering of things into signs, into alphanumeric representations, calls into question our idealized image of the past: the golden age, our happy childhood, or, cosmologically, the unity of the singularity at the big bang. This seems to be the same absent sense of wholeness, of the halcyon, that we project into the future: heaven, merging with light, a dissolution into the ecstasy of the sexual or the neural that relieves our constitutive sense of loss. Sigmund Freud’s student Otto Rank located this loss in the original trauma of birth,1 the loss of the womb—the experience of which might, if we could remember it, be compared with floating in space or lying on a raft on opiates in a sunlit pool. To be alive is to be deprived of this Edenic yesteryear for which we long, and this utopian future for which we strive. Jacques Lacan suggests that both are based on the illusion of the uncoordinated, wobbly toddler looking in the mirror, or at its mother, and hallucinating its own unity. Locating the mirror stage in the register of the Imaginary, Lacan marks it as the gateway into language, into the splintering of signification. The idea of wholeness in the past as well as in the future is thus a narcissistic illusion.
One familiar way to illustrate this doubleness, this insistent and split entity arriving at our threshold with the news that never quite comes but is already always here, is to look at radical science fiction scenarios for the far future and to see how they stack up against the present. A visitor from humanity’s future is projected back in time, landing, for example, in lower Manhattan, in the middle of rush hour. “Ah, what peace!” he sighs. The witty geneticist J. B. S. Haldane said the universe is not only queerer than we suppose; it is queerer than we can suppose. Truth, or what Lacan calls the Real, is stranger than fiction, including science fiction. (Granted, not all agree. The fiction guru and Cornell professor Paul West scoffs at the idea that anything is stranger than fiction.2 But I guess that, for me, fiction is a subclass of nonfiction, just as the linear is a subset of the nonlinear.)
There is a branch of science fiction concerned with the idea that life will evolve into robots, that consciousness will be based on silicon or metal rather than carbon. If an MIT graduate student could build such a silica-based robot—one that could make more of itself, a real nanotechnological von Neumann machine—he or she would get a Nobel Prize. But these nanobots already exist. We call them diatoms and radiolarians, and they build their beautiful tiny skeletons from silica, which they take from the ocean. With regard to metal, magnetobacteria have tiny magnets in them that they use to orient themselves to Earth’s magnetic poles, and chitons scrape corals with their iron teeth. Life is ahead of the human game of fiction.