The Information

by James Gleick


  Well, now, Walton’s own viral text, as you can see here before your eyes, has managed to commandeer the facilities of a very powerful host—an entire magazine and printing press and distribution service. It has leapt aboard and is now—even as you read this viral sentence—propagating itself madly throughout the ideosphere!

  (In the early 1980s, a magazine with a print circulation of 700,000 still seemed like a powerful communications platform.) Hofstadter gaily declared himself infected by the meme meme.

  One source of resistance—or at least unease—was the shoving of us humans toward the wings. It was bad enough to say that a person is merely a gene’s way of making more genes. Now humans are to be considered as vehicles for the propagation of memes, too. No one likes to be called a puppet. Dennett summed up the problem this way: “I don’t know about you, but I am not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other people’s ideas renew themselves, before sending out copies of themselves in an informational diaspora.… Who’s in charge, according to this vision—we or our memes?”♦

  He answered his own question by reminding us that, like it or not, we are seldom “in charge” of our own minds. He might have quoted Freud; instead he quoted Mozart (or so he thought):

  In the night when I cannot sleep, thoughts crowd into my mind.… Whence and how do they come? I do not know and I have nothing to do with it. Those which please me I keep in my head and hum them.

  Later Dennett was informed that this well-known quotation was not Mozart’s after all. It had taken on a life of its own; it was a fairly successful meme.

  For anyone taken with the idea of memes, the landscape was changing faster than Dawkins had imagined possible in 1976, when he wrote, “The computers in which memes live are human brains.”♦ By 1989, the time of the second edition of The Selfish Gene, having become an adept programmer himself, he had to amend that: “It was obviously predictable that manufactured electronic computers, too, would eventually play host to self-replicating patterns of information.”♦ Information was passing from one computer to another “when their owners pass floppy discs around,” and he could see another phenomenon on the near horizon: computers connected in networks. “Many of them,” he wrote, “are literally wired up together in electronic mail exchange.… It is a perfect milieu for self-replicating programs to flourish.” Indeed, the Internet was in its birth throes. Not only did it provide memes with a nutrient-rich culture medium; it also gave wings to the idea of memes. Meme itself quickly became an Internet buzzword. Awareness of memes fostered their spread.

  A notorious example of a meme that could not have emerged in pre-Internet culture was the phrase “jumped the shark.” Loopy self-reference characterized every phase of its existence. To jump the shark means to pass a peak of quality or popularity and begin an irreversible decline. The phrase is said to have been coined in 1985 by a college student named Sean J. Connolly, in reference to an episode of the television series Happy Days in which the character Fonzie, on water skis, jumps over a shark. The origin of the phrase requires a certain amount of explanation without which it could not have been initially understood. Perhaps for that reason, there is no recorded usage until 1997, when Connolly’s roommate, Jon Hein, registered the domain name jumptheshark.com and created a web site devoted to its promotion. The web site soon featured a list of frequently asked questions:

  Q. Did “jump the shark” originate from this web site, or did you create the site to capitalize on the phrase?

  A. This site went up December 24, 1997 and gave birth to the phrase “jump the shark.” As the site continues to grow in popularity, the term has become more commonplace. The site is the chicken, the egg, and now a Catch-22.

  It spread to more traditional media in the next year; Maureen Dowd devoted a column to explaining it in The New York Times in 2001; in 2003 the same newspaper’s “On Language” columnist, William Safire, called it “the popular culture’s phrase of the year”; soon after that, people were using the phrase in speech and in print without self-consciousness—no quotation marks or explanation—and eventually, inevitably, various cultural observers asked, “Has ‘jump the shark’ jumped the shark?” (“Granted, Jump the Shark is a brilliant cultural concept.… But now the damn thing is everywhere.”) Like any good meme, it spawned mutations. The “jumping the shark” entry in Wikipedia advised in 2009, “See also: jumping the couch; nuking the fridge.”

  Is this science? In his 1983 column, Hofstadter proposed the obvious memetic label for such a discipline: memetics. The study of memes has attracted researchers from fields as far apart as computer science and microbiology. In bioinformatics, chain letters are an object of study. They are memes; they have evolutionary histories. The very purpose of a chain letter is replication; whatever else a chain letter may say, it embodies one message: Copy me. One student of chain-letter evolution, Daniel W. VanArsdale, listed many variants, in chain letters and even earlier texts: “Make seven copies of it exactly as it is written” [1902]; “Copy this in full and send to nine friends” [1923]; “And if any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life” [Revelation 22:19].♦ Chain letters flourished with the help of a new nineteenth-century technology: “carbonic paper,” sandwiched between sheets of writing paper in stacks. Then carbon paper made a symbiotic partnership with another technology, the typewriter. Viral outbreaks of chain letters occurred all through the early twentieth century.

  “An unusual chain-letter reached Quincy during the latter part of 1933,”♦ wrote a local Illinois historian. “So rapidly did the chain-letter fad develop symptoms of mass hysteria and spread throughout the United States, that by 1935–1936 the Post Office Department, as well as agencies of public opinion, had to take a hand in suppressing the movement.” He provided a sample—a meme motivating its human carriers with promises and threats:

  We trust in God. He supplies our needs.

  Mrs. F. Streuzel........Mich.

  Mrs. A. Ford............Chicago, Ill.

  Mrs. K. Adkins..........Chicago, Ill.

  etc.

  Copy the above names, omitting the first. Add your name last. Mail it to five persons who you wish prosperity to. The chain was started by an American Colonel and must be mailed 24 hours after receiving it. This will bring prosperity within 9 days after mailing it.

  Mrs. Sanford won $3,000. Mrs. Andres won $1,000.

  Mrs. Howe who broke the chain lost everything she possessed.

  The chain grows a definite power over the expected word.

  DO NOT BREAK THE CHAIN.

  Two subsequent technologies, when their use became widespread, provided orders-of-magnitude boosts in chain-letter fecundity: photocopying (c. 1950) and e-mail (c. 1995). One team of information scientists—Charles H. Bennett from IBM in New York and Ming Li and Bin Ma from Ontario, Canada—inspired by a chance conversation on a hike in the Hong Kong mountains, began an analysis of a set of chain letters collected during the photocopier era. They had thirty-three, all variants of a single letter, with mutations in the form of misspellings, omissions, and transposed words and phrases. “These letters have passed from host to host, mutating and evolving,”♦ they reported.

  Like a gene, their average length is about 2,000 characters. Like a potent virus, the letter threatens to kill you and induces you to pass it on to your “friends and associates”—some variation of this letter has probably reached millions of people. Like an inheritable trait, it promises benefits for you and the people you pass it on to. Like genomes, chain letters undergo natural selection and sometimes parts even get transferred between coexisting “species.”

  Reaching beyond these appealing metaphors, they set out to use the letters as a “test bed” for algorithms used in evolutionary biology. The algorithms were designed to take the genomes of various modern creatures and work backward, by inference and deduction, to reconstruct their phylogeny—their evolutionary trees. If these mathematical methods worked with genes, the scientists suggested, they should work with chain letters, too. In both cases the researchers were able to verify mutation rates and relatedness measures.
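
  The flavor of such a reconstruction can be sketched in a few lines of code. The example below is only an illustration, not the method Bennett, Li, and Ma actually used: the three short “letters” are invented stand-ins, the similarity score is a standard compression-based distance (texts that compress well together share more information), and the tree is built by the simplest possible rule, repeatedly joining the closest pair of clusters.

```python
import zlib


def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a rough proxy for shared information."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)


# Hypothetical stand-ins for a few photocopied chain-letter variants.
letters = {
    "A": b"Copy this letter and send it to five friends within 24 hours or bad luck will follow.",
    "B": b"Copy this leter and send it to five freinds within 24 hours or bad luck will follow.",
    "C": b"Make seven copies of it exactly as it is written and mail them to seven friends.",
}

# Crude agglomerative clustering: repeatedly join the two closest clusters.
# The order of the joins sketches a family tree (dendrogram) of the variants.
clusters = list(letters.items())
while len(clusters) > 1:
    d, i, j = min(
        (ncd(clusters[i][1], clusters[j][1]), i, j)
        for i in range(len(clusters))
        for j in range(i + 1, len(clusters))
    )
    merged = (
        f"({clusters[i][0]},{clusters[j][0]})",
        clusters[i][1] + clusters[j][1],  # concatenation as a crude cluster representative
    )
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    print(f"join {merged[0]} at distance {d:.3f}")
```

  However crude, the join order already encodes a hypothesis about ancestry: the variants that differ only by a few misspellings are grouped first, just as closely related genomes are.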

  Still, most of the elements of culture change and blur too easily to qualify as stable replicators. They are rarely as neatly fixed as a sequence of DNA. Dawkins himself emphasized that he had never imagined founding anything like a new science of memetics. A peer-reviewed Journal of Memetics came to life in 1997—published online, naturally—and then faded away after eight years partly spent in self-conscious debate over status, mission, and terminology. Even compared with genes, memes are hard to mathematize or even to define rigorously. So the gene-meme analogy causes uneasiness and the genetics-memetics analogy even more.

  Genes at least have a grounding in physical substance. Memes are abstract, intangible, and unmeasurable. Genes replicate with near-perfect fidelity, and evolution depends on that: some variation is essential, but mutations need to be rare. Memes are seldom copied exactly; their boundaries are always fuzzy, and they mutate with a wild flexibility that would be fatal in biology. The term meme could be applied to a suspicious cornucopia of entities, from small to large. For Dennett, the first four notes of Beethoven’s Fifth Symphony were “clearly” a meme, along with Homer’s Odyssey (or at least the idea of the Odyssey), the wheel, anti-Semitism, and writing.♦ “Memes have not yet found their Watson and Crick,” said Dawkins; “they even lack their Mendel.”♦

  Yet here they are. As the arc of information flow bends toward ever greater connectivity, memes evolve faster and spread farther. Their presence is felt if not seen in herd behavior, bank runs, informational cascades, and financial bubbles. Diets rise and fall in popularity, their very names becoming catchphrases—the South Beach Diet and the Atkins Diet, the Scarsdale Diet, the Cookie Diet and the Drinking Man’s Diet all replicating according to a dynamic about which the science of nutrition has nothing to say. Medical practice, too, experiences “surgical fads” and “iatroepidemics”—epidemics caused by fashions in treatment—like the iatroepidemic of children’s tonsillectomies that swept the United States and parts of Europe in the mid-twentieth century, with no more medical benefit than ritual circumcision. Memes were seen through car windows when yellow diamond-shaped BABY ON BOARD signs appeared as if in an instant of mass panic in 1984, in the United States and then Europe and Japan, followed an instant later by a spawn of ironic mutations (BABY I’M BOARD, EX IN TRUNK). Memes were felt when global discourse was dominated in the last year of the millennium by the belief that the world’s computers would stammer or choke when their internal clocks reached a special round number.

  In the competition for space in our brains and in the culture, the effective combatants are the messages. The new, oblique, looping views of genes and memes have enriched us. They give us paradoxes to write on Möbius strips. “The human world is made of stories, not people,”♦ writes David Mitchell. “The people the stories use to tell themselves are not to be blamed.” Margaret Atwood writes: “As with all knowledge, once you knew it, you couldn’t imagine how it was that you hadn’t known it before. Like stage magic, knowledge before you knew it took place before your very eyes, but you were looking elsewhere.”♦ Nearing death, John Updike reflects on

  A life poured into words—apparent waste

  intended to preserve the thing consumed.♦

  Fred Dretske, a philosopher of mind and knowledge, wrote in 1981: “In the beginning there was information. The word came later.”♦ He added this explanation: “The transition was achieved by the development of organisms with the capacity for selectively exploiting this information in order to survive and perpetuate their kind.” Now we might add, thanks to Dawkins, that the transition was achieved by the information itself, surviving and perpetuating its kind and selectively exploiting organisms.

  Most of the biosphere cannot see the infosphere; it is invisible, a parallel universe humming with ghostly inhabitants. But they are not ghosts to us—not anymore. We humans, alone among the earth’s organic creatures, live in both worlds at once. It is as though, having long coexisted with the unseen, we have begun to develop the needed extrasensory perception. We are aware of the many species of information. We name their types sardonically, as though to reassure ourselves that we understand: urban myths and zombie lies. We keep them alive in air-conditioned server farms. But we cannot own them. When a jingle lingers in our ears, or a fad turns fashion upside down, or a hoax dominates the global chatter for months and vanishes as swiftly as it came, who is master and who is slave?

  12 | THE SENSE OF RANDOMNESS

  (In a State of Sin)

  “I wonder,” she said. “It’s getting harder to see the patterns, don’t you think?”

  —Michael Cunningham (2005)♦

  IN 1958, GREGORY CHAITIN, a precocious eleven-year-old New Yorker, the son of Argentine émigrés, found a magical little book in the library and carried it around with him for a while trying to explain it to other children—and then, he had to admit, trying to understand it himself.♦ It was Gödel’s Proof, by Ernest Nagel and James R. Newman. Expanded from an article in Scientific American, it reviewed the renaissance in logic that began with George Boole; the process of “mapping,” encoding statements about mathematics in the form of symbols and even integers; and the idea of metamathematics, systematized language about mathematics and therefore beyond mathematics. This was heady stuff for the boy, who followed the authors through their simplified but rigorous exposition of Gödel’s “astounding and melancholy” demonstration that formal mathematics can never be free of self-contradiction.♦

  The vast bulk of mathematics as practiced at this time cared not at all for Gödel’s proof. Startling though incompleteness surely was, it seemed incidental somehow—contributing nothing to the useful work of mathematicians, who went on making discoveries and proving theorems. But philosophically minded souls remained deeply disturbed by it, and these were the sorts of people Chaitin liked to read. One was John von Neumann—who had been there at the start, in Königsberg, 1930, and then in the United States took the central role in the development of computation and computing theory. For von Neumann, Gödel’s proof was a point of no return:

  It was a very serious conceptual crisis, dealing with rigor and the proper way to carry out a correct mathematical proof. In view of the earlier notions of the absolute rigor of mathematics, it is surprising that such a thing could have happened, and even more surprising that it could have happened in these latter days when miracles are not supposed to take place. Yet it did happen.♦

  Why? Chaitin asked. He wondered if at some level Gödel’s incompleteness could be connected to that new principle of quantum physics, uncertainty, which smelled similar somehow.♦ Later, the adult Chaitin had a chance to put this question to the oracular John Archibald Wheeler. Was Gödel incompleteness related to Heisenberg uncertainty? Wheeler answered by saying he had once posed that very question to Gödel himself, in his office at the Institute for Advanced Study—Gödel with his legs wrapped in a blanket, an electric heater glowing warm against the wintry drafts. Gödel refused to answer. In this way, Wheeler refused to answer Chaitin.

  When Chaitin came upon Turing’s proof of uncomputability, he thought this must be the key. He also found Shannon and Weaver’s book, The Mathematical Theory of Communication, and was struck by its upside-down-seeming reformulation of entropy: an entropy of bits, measuring information on the one hand and disorder on the other. The common element was randomness, Chaitin suddenly thought. Shannon linked randomness, perversely, to information. Physicists had found randomness inside the atom—the kind of randomness that Einstein deplored by complaining about God and dice. All these heroes of science were talking about or around randomness.
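
  (Concretely, Shannon’s measure assigns to a source whose symbols appear with probabilities p_i an entropy of H = -\sum_i p_i \log_2 p_i bits per symbol: zero when the next symbol is perfectly predictable, and maximal when every symbol is equally likely.)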

  It is a simple word, random, and everyone knows what it means. Everyone, that is, and no one. Philosophers and mathematicians struggled endlessly. Wheeler said this much, at least: “Probability, like time, is a concept invented by humans, and humans have to bear the responsibility for the obscurities that attend it.”♦ The toss of a fair coin is random, though every detail of the coin’s trajectory may be determined à la Newton. Whether the population of France is an even or odd number at any given instant is random, but the population of France itself is surely not random: it is a definite fact, even if not knowable.♦ John Maynard Keynes tackled randomness in terms of its opposites, and he chose three: knowledge, causality, and design.♦ What is known in advance, determined by a cause, or organized according to plan cannot be random.

  “Chance is only the measure of our ignorance,”♦ Henri Poincaré famously said. “Fortuitous phenomena are by definition those whose laws we do not know.” Immediately he recanted: “Is this definition very satisfactory? When the first Chaldean shepherds watched the movements of the stars, they did not yet know the laws of astronomy, but would they have dreamed of saying that the stars move at random?” For Poincaré, who understood chaos long before it became a science, examples of randomness included such phenomena as the scattering of raindrops, their causes physically determined but so numerous and complex as to be unpredictable. In physics—or wherever natural processes seem unpredictable—apparent randomness may be noise or may arise from deeply complex dynamics.

  Ignorance is subjective. It is a quality of the observer. Presumably randomness—if it exists at all—should be a quality of the thing itself. Leaving humans out of the picture, one would like to say that an event, a choice, a distribution, a game, or, most simply, a number is random.

 
