Mathematicians and logicians had developed a tendency to think of information processing as free—not like pumping water or carrying stones. In our time, it certainly has gotten cheap. But it embodies work after all, and Bennett suggests that we recognize this work, reckon its expense in understanding complexity. “The more subtle something is, the harder it is to discover,” Bennett says. He applied the idea of logical depth to the problem of self-organization: the question of how complex structures develop in nature. Evolution starts with simple initial conditions; complexity arises, apparently building on itself. Whatever the basic processes involved, physical or biological, something is under way that begins to resemble computation.
* * *
♦ “Our definition of the quantity of information has the advantage that it refers to individual objects and not to objects treated as members of a set of objects with a probability distribution given on it. The probabilistic definition can be convincingly applied to the information contained, for example, in a stream of congratulatory telegrams. But it would not be clear how to apply it, for example, to an estimate of the quantity of information contained in a novel or in the translation of a novel into another language relative to the original.”
♦ 1729 = 1³ + 12³ = 9³ + 10³
♦ More precisely, it looked like this: “The finite binary sequence S with the first proof that S cannot be described by a Turing machine with n states or less” is a (log₂ n + c_F)–state description of S.
13 | INFORMATION IS PHYSICAL
(It from Bit)
The more energy, the faster the bits flip. Earth, air, fire, and water in the end are all made of energy, but the different forms they take are determined by information. To do anything requires energy. To specify what is done requires information.
—Seth Lloyd (2006)♦
QUANTUM MECHANICS HAS WEATHERED in its short history more crises, controversies, interpretations (the Copenhagen, the Bohm, the Many Worlds, the Many Minds), factional implosions, and general philosophical breast-beating than any other science. It is happily riddled with mysteries. It blithely disregards human intuition. Albert Einstein died unreconciled to its consequences, and Richard Feynman was not joking when he said no one understands it. Perhaps arguments about the nature of reality are to be expected; quantum physics, so uncannily successful in practice, deals in theory with the foundations of all things, and its own foundations are continually being rebuilt. Even so, the ferment sometimes seems more religious than scientific.
“How did this come about?”♦ asks Christopher Fuchs, a quantum theorist at Bell Labs and then the Perimeter Institute in Canada.
Go to any meeting, and it is like being in a holy city in great tumult. You will find all the religions with all their priests pitted in holy war—the Bohmians, the Consistent Historians, the Transactionalists, the Spontaneous Collapseans, the Einselectionists, the Contextual Objectivists, the outright Everettics, and many more beyond that. They all declare to see the light, the ultimate light. Each tells us that if we will accept their solution as our savior, then we too will see the light.
It is time, he says, to start fresh. Throw away the existing quantum axioms, exquisite and mathematical as they are, and turn to deep physical principles. “Those principles should be crisp; they should be compelling. They should stir the soul.” And where should these physical principles be found? Fuchs answers his own question: in quantum information theory.
“The reason is simple, and I think inescapable,”♦ he declares. “Quantum mechanics has always been about information; it is just that the physics community has forgotten this.”
[Figure: visual aid by Christopher Fuchs (Illustration credit 13.1)]
One who did not forget—or who rediscovered it—was John Archibald Wheeler, pioneer of nuclear fission, student of Bohr and teacher of Feynman, namer of black holes, the last giant of twentieth-century physics. Wheeler was given to epigrams and gnomic utterances. A black hole has no hair was his famous way of stating that nothing but mass, charge, and spin can be perceived from outside. “It teaches us,” he wrote, “that space can be crumpled like a piece of paper into an infinitesimal dot, that time can be extinguished like a blown-out flame, and that the laws of physics that we regard as ‘sacred,’ as immutable, are anything but.”♦ In 1989 he offered his final catchphrase: It from Bit. His view was extreme. It was immaterialist: information first, everything else later. “Otherwise put,”♦ he said,
every it—every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence … from bits.
Why does nature appear quantized? Because information is quantized. The bit is the ultimate unsplittable particle.
Among the physical phenomena that pushed information front and center, none were more spectacular than black holes. At first, of course, they had not seemed to involve information at all.
Black holes were the brainchild of Einstein’s theory, though he did not live to hear the name. He established by 1915 that light must submit to the pull of gravity; that gravity curves the fabric of spacetime; and that a sufficient mass, compacted together, as in a dense star, would collapse utterly, intensifying its own gravity and contracting without limit. It took almost half a century more to face up to the consequences, because they are strange. Anything goes in, nothing comes out. At the center lies the singularity. Density becomes infinite; gravity becomes infinite; spacetime curves infinitely. Time and space are interchanged. Because no light, no signal of any kind, can escape the interior, such things are quintessentially invisible. Wheeler began calling them “black holes” in 1967. Astronomers are sure they have found some, by gravitational inference, and no one can ever know what is inside.
At first astrophysicists focused on matter and energy falling in. Later they began to worry about the information. A problem arose when Stephen Hawking, adding quantum effects to the usual calculations of general relativity, argued in 1974 that black holes should, after all, radiate particles—a consequence of quantum fluctuations near the event horizon.♦ Black holes slowly evaporate, in other words. The problem was that Hawking radiation is featureless and dull. It is thermal radiation—heat. But matter falling into the black hole carries information, in its very structure, its organization, its quantum states—in terms of statistical mechanics, its accessible microstates. As long as the missing information stayed out of reach beyond the event horizon, physicists did not have to worry about it. They could say it was inaccessible but not obliterated. “All colours will agree in the dark,” as Francis Bacon said in 1625.
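The sense in which the radiation is featureless can be stated exactly, though the text above does not: to a distant observer, an uncharged, nonrotating black hole radiates as a blackbody whose temperature depends on nothing but its mass,

\[
T_H = \frac{\hbar c^{3}}{8\pi G M k_{B}}.
\]

One number fixes the entire spectrum; no trace of the structure of what fell in survives.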
The outbound Hawking radiation carries no information, however. If the black hole evaporates, where does the information go? According to quantum mechanics, information may never be destroyed. The deterministic laws of physics require the states of a physical system at one instant to determine the states at the next instant; in microscopic detail, the laws are reversible, and information must be preserved. Hawking was the first to state firmly—even alarmingly—that this was a problem challenging the very foundations of quantum mechanics. The loss of information would violate unitarity, the principle that probabilities must add up to one. “God not only plays dice, He sometimes throws the dice where they cannot be seen,” Hawking said. In the summer of 1975, he submitted a paper to the Physical Review with a dramatic headline, “The Breakdown of Physics in Gravitational Collapse.” The journal held it for more than a year before publishing it with a milder title.♦
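Unitarity fits in one line of standard quantum mechanics, nothing specific to black holes: states evolve by a unitary operator,

\[
|\psi_{t_2}\rangle = U\,|\psi_{t_1}\rangle, \qquad U^{\dagger}U = I,
\]

so the total probability \(\langle \psi | \psi \rangle = 1\) is conserved, and the evolution can always be run backward by applying \(U^{\dagger}\); that is the precise sense in which information may never be destroyed.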
As Hawking expected, other physicists objected vehemently. Among them was John Preskill at the California Institute of Technology, who continued to believe in the principle that information cannot be lost: even when a book goes up in flames, in physicists’ terms, if you could track every photon and every fragment of ash, you should be able to integrate backward and reconstruct the book. “Information loss is highly infectious,”♦ warned Preskill at a Caltech Theory Seminar. “It is very hard to modify quantum theory so as to accommodate a little bit of information loss without it leaking into all processes.” In 1997 he made a much-publicized wager with Hawking that the information must be escaping the black hole somehow. They bet an encyclopedia of the winner’s choice. “Some physicists feel the question of what happens in a black hole is academic or even theological, like counting angels on pinheads,”♦ said Leonard Susskind of Stanford, siding with Preskill. “But it is not so at all: at stake are the future rules of physics.” Over the next few years a cornucopia of solutions was proposed. Hawking himself said at one point: “I think the information probably goes off into another universe. I have not been able to show it yet mathematically.”♦
It was not until 2004 that Hawking, then sixty-two, reversed himself and conceded the bet. He announced that he had found a way to show that quantum gravity is unitary after all and that information is preserved. He applied a formalism of quantum indeterminacy—the “sum over histories” path integrals of Richard Feynman—to the very topology of spacetime and declared, in effect, that black holes are never unambiguously black. “The confusion and paradox arose because people thought classically in terms of a single topology for space-time,” he wrote.♦ His new formulation struck some physicists as cloudy and left many questions unanswered, but he was firm on one point. “There is no baby universe branching off, as I once thought,”♦ he wrote. “The information remains firmly in our universe. I’m sorry to disappoint science fiction fans.” He gave Preskill a copy of Total Baseball: The Ultimate Baseball Encyclopedia, weighing in at 2,688 pages—“from which information can be recovered with ease,” he said. “But maybe I should have just given him the ashes.”
Charles Bennett came to quantum information theory by a very different route. Long before he developed his idea of logical depth, he was thinking about the “thermodynamics of computation”♦—a peculiar topic, because information processing was mostly treated as disembodied. “The thermodynamics of computation, if anyone had stopped to wonder about it, would probably have seemed no more urgent as a topic of scientific inquiry than, say, the thermodynamics of love,” says Bennett. It is like the energy of thought. Calories may be expended, but no one is counting.
Stranger still, Bennett tried investigating the thermodynamics of the least thermodynamic computer of all—the nonexistent, abstract, idealized Turing machine. Turing himself never worried about his thought experiment consuming any energy or radiating any heat as it goes about its business of marching up and down imaginary paper tapes. Yet in the early 1980s Bennett was talking about using Turing-machine tapes for fuel, their caloric content to be measured in bits. Still a thought experiment, of course, meant to focus on a very real question: What is the physical cost of logical work? “Computers,” he wrote provocatively, “may be thought of as engines for transforming free energy into waste heat and mathematical work.”♦ Entropy surfaced again. A tape full of zeroes, or a tape encoding the works of Shakespeare, or a tape rehearsing the digits of π, has “fuel value.” A random tape has none.
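There is a way to make the tape arithmetic vivid, though it is a toy, not Bennett’s calculation: the work extractable from a tape tracks its redundancy, the gap between its length and its shortest description. In the sketch below, zlib compression stands in for that shortest description, and the kT ln 2 conversion comes from the erasure bound discussed later in this chapter; both are illustrative assumptions. (zlib would also fail to spot the pattern in the digits of π, which is exactly why fuel value is an algorithmic notion, not a statistical one.)

```python
import math
import os
import zlib

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

def fuel_value_estimate(tape: bytes) -> float:
    """Rough estimate of a tape's 'fuel value' in joules.

    A predictable (compressible) tape can in principle be randomized
    reversibly while yielding up to kT ln 2 of work per bit of
    redundancy; a random tape has no redundancy to spend.
    """
    total_bits = 8 * len(tape)
    compressed_bits = 8 * len(zlib.compress(tape, 9))
    redundant_bits = max(0, total_bits - compressed_bits)
    return redundant_bits * K_B * T * math.log(2)

zeros = bytes(10_000)                   # a tape full of zeroes
verse = b"To be, or not to be: " * 500  # highly patterned text
noise = os.urandom(10_000)              # an incompressible tape

for name, tape in (("zeros", zeros), ("verse", verse), ("noise", noise)):
    print(f"{name}: ~{fuel_value_estimate(tape):.2e} J extractable")
```

The zero tape and the patterned one show a positive (if minuscule) fuel value; the random tape, compressing to slightly more than its own size, shows none.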
Bennett, the son of two music teachers, grew up in the Westchester suburbs of New York; he studied chemistry at Brandeis and then Harvard in the 1960s. James Watson was at Harvard then, teaching about the genetic code, and Bennett worked for him one year as a teaching assistant. He got his doctorate in molecular dynamics, doing computer simulations that ran overnight on a machine with a memory of about twenty thousand decimal digits and generated output on pages and pages of fan-fold paper. Looking for more computing power to continue his molecular-motion research, he went to the Lawrence Livermore Laboratory in California and Argonne National Laboratory in Illinois, and then joined IBM Research in 1972.
IBM did not manufacture Turing machines, of course. But at some point it dawned on Bennett that a special-purpose Turing machine had already been found in nature: namely RNA polymerase. He had learned about polymerase directly from Watson; it is the enzyme that crawls along a gene—its “tape”—transcribing the DNA. It steps left and right; its logical state changes according to the chemical information written in sequence; and its thermodynamic behavior can be measured.
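The analogy is concrete enough to run. A Turing machine, like the polymerase, carries one internal state and reads one symbol at a time, and that pair alone determines what it writes, which way it steps, and what state comes next. A minimal sketch in Python; the particular machine (a unary incrementer) is an arbitrary choice for illustration, not anything from Bennett’s analysis:

```python
# A minimal Turing machine: (state, symbol) -> (write, move, next_state).
RULES = {
    ("scan", "1"): ("1", +1, "scan"),  # walk right across the 1s
    ("scan", "_"): ("1", 0, "halt"),   # write one more 1, then stop
}

def run(tape: list[str], state: str = "scan", head: int = 0) -> list[str]:
    # Like the polymerase, the head reads the symbol beneath it, and its
    # internal state plus that symbol alone dictate what it writes,
    # where it steps, and what state it enters next.
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if head == len(tape):
            tape.append(write)
        else:
            tape[head] = write
        head += move
    return tape

print(run(list("111")))  # ['1', '1', '1', '1']: unary 3 incremented to 4
```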
In the real world of 1970s computing, hardware had rapidly grown thousands of times more energy-efficient than during the early vacuum-tube era. Nonetheless, electronic computers dissipate considerable energy in the form of waste heat. The closer they come to their theoretical minimum of energy use, the more urgently scientists want to know just what that theoretical minimum is. Von Neumann, working with his big computers, made a back-of-the-envelope calculation as early as 1949, proposing an amount of heat that must be dissipated “per elementary act of information, that is per elementary decision of a two-way alternative and per elementary transmittal of one unit of information.”♦ He based it on the molecular work done in a model thermodynamic system by Maxwell’s demon, as reimagined by Leó Szilárd.♦ Von Neumann said the price is paid by every elementary act of information processing, every choice between two alternatives. By the 1970s this was generally accepted. But it was wrong.
Von Neumann’s error was discovered by the scientist who became Bennett’s mentor at IBM, Rolf Landauer, an exile from Nazi Germany.♦ Landauer devoted his career to establishing the physical basis of information. “Information Is Physical” was the title of one famous paper, meant to remind the community that computation requires physical objects and obeys the laws of physics. Lest anyone forget, he titled a later essay—his last, it turned out—“Information Is Inevitably Physical.” Whether a bit is a mark on a stone tablet or a hole in a punched card or a particle with spin up or down, he insisted that it could not exist without some embodiment. Landauer tried in 1961 to prove von Neumann’s formula for the cost of information processing and discovered that he could not. On the contrary, it seemed that most logical operations have no entropy cost at all. When a bit flips from zero to one, or vice-versa, the information is preserved. The process is reversible. Entropy is unchanged; no heat needs to be dissipated. Only an irreversible operation, he argued, increases entropy.
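The number at stake, not stated above: the cost von Neumann proposed for every elementary act, and that Landauer’s analysis attaches to erasure alone, is

\[
E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J}
\]

per bit at room temperature (T = 300 K): vanishingly small next to what 1970s hardware actually dissipated, but a floor in principle.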
Landauer and Bennett were a double act: a straight and narrow old IBM type and a scruffy hippie (in Bennett’s view, anyway).♦ The younger man pursued Landauer’s principle by analyzing every kind of computer he could imagine, real and abstract, from Turing machines and messenger RNA to “ballistic” computers, carrying signals via something like billiard balls. He confirmed that a great deal of computation can be done with no energy cost at all. In every case, Bennett found, heat dissipation occurs only when information is erased. Erasure is the irreversible logical operation. When the head on a Turing machine erases one square of the tape, or when an electronic computer clears a capacitor, a bit is lost, and then heat must be dissipated. In Szilárd’s thought experiment, the demon does not incur an entropy cost when it observes or chooses a molecule. The payback comes at the moment of clearing the record, when the demon erases one observation to make room for the next.
Forgetting takes work.
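The line between free and costly operations is just invertibility, and it can be exhibited in miniature. In the sketch below (illustrative gate choices, uniformly random inputs assumed), a gate dissipates only insofar as distinct inputs collapse onto the same output, erasing bits of history; the joule figures use the kT ln 2 bound above.

```python
import math
from itertools import product

K_B, T = 1.380649e-23, 300.0  # Boltzmann constant (J/K), room temp (K)

def bits_erased(gate, n_inputs: int) -> float:
    """Average bits of input history a gate loses per use, assuming
    uniformly random inputs; 0.0 exactly when the gate is invertible."""
    outputs = [gate(*bits) for bits in product((0, 1), repeat=n_inputs)]
    lost = 0.0
    for out in set(outputs):
        k = outputs.count(out)    # k distinct inputs merged into this output
        lost += k * math.log2(k)  # each such merge forgets log2(k) bits
    return lost / len(outputs)

NOT   = lambda a: (1 - a,)       # invertible: one-to-one
CNOT  = lambda a, b: (a, a ^ b)  # invertible: one-to-one on bit pairs
AND   = lambda a, b: (a & b,)    # (0,0), (0,1), (1,0) all collapse to (0,)
ERASE = lambda a: (0,)           # resets the bit whatever it held

for name, gate, n in [("NOT", NOT, 1), ("CNOT", CNOT, 2),
                      ("AND", AND, 2), ("ERASE", ERASE, 1)]:
    b = bits_erased(gate, n)
    heat = b * K_B * T * math.log(2)
    print(f"{name}: {b:.2f} bits erased per use -> at least {heat:.2e} J")
```

NOT and CNOT come out free; AND merges three input pairs into one output and pays; ERASE loses exactly one bit per use, the demon’s record-clearing in miniature.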
“You might say this is the revenge of information theory on quantum mechanics,”♦ Bennett says. Sometimes a successful idea in one field can impede progress in another. In this case the successful idea was the uncertainty principle, which brought home the central role played by the measurement process itself. One can no longer talk simply about “looking” at a molecule; the observer needs to employ photons, and the photons must be more energetic than the thermal background, and complications ensue. In quantum mechanics the act of observation has consequences of its own, whether performed by a laboratory scientist or by Maxwell’s demon. Nature is sensitive to our experiments.
“The quantum theory of radiation helped people come to the incorrect conclusion that computing had an irreducible thermodynamic cost per step,” Bennett says. “In the other case, the success of Shannon’s theory of information processing led people to abstract away all of the physics from information processing and think of it as a totally mathematical thing.” As communications engineers and chip designers came closer and closer to atomic levels, they worried increasingly about quantum limitations interfering with their clean, classical ability to distinguish zero and one states. But now they looked again—and this, finally, is where quantum information science is born. Bennett and others began to think differently: that quantum effects, rather than being a nuisance, might be turned to advantage.
Wedged like a hope chest against a wall of his office at IBM’s research laboratory in the wooded hills of Westchester is a light-sealed device called Aunt Martha (short for Aunt Martha’s coffin). Bennett and his research assistant John Smolin jury-rigged it in 1988 and 1989 with a little help from the machine shop: an aluminum box spray-painted dull black on the inside and further sealed with rubber stoppers and black velvet.♦ With a helium-neon laser for alignment and high-voltage cells to polarize the photons, they sent the first message ever to be encoded by quantum cryptography. It was a demonstration of an information-processing task that could be effectively accomplished only via a quantum system. Quantum error correction, quantum teleportation, and quantum computers followed shortly behind.
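The protocol inside Aunt Martha was the quantum key distribution scheme Bennett had devised with Gilles Brassard in 1984, now called BB84. A toy software rendering of the exchange (no real photons, no eavesdropper or noise modeled) shows the logic:

```python
import random

def bb84_key(n_photons: int, seed: int = 0) -> str:
    """Toy BB84 key exchange: no eavesdropper, no channel noise."""
    rng = random.Random(seed)
    # Alice encodes each random bit in a randomly chosen basis:
    # "+" (rectilinear) or "x" (diagonal) polarization.
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.choice("+x") for _ in range(n_photons)]
    # Bob measures each photon in his own randomly chosen basis.
    bob_bases = [rng.choice("+x") for _ in range(n_photons)]
    # Matching basis: Bob reads Alice's bit exactly.  Mismatched basis:
    # quantum mechanics hands him a 50/50 coin flip.
    bob_bits = [bit if a == b else rng.randint(0, 1)
                for bit, a, b in zip(alice_bits, alice_bases, bob_bases)]
    # Over the public channel they reveal bases (never bits) and keep
    # only the positions where the bases agreed: the shared secret key.
    return "".join(str(bit) for bit, a, b
                   in zip(bob_bits, alice_bases, bob_bases) if a == b)

print(bb84_key(32))  # about half the photons survive the basis sifting
```

The quantum part is the line where mismatched bases yield a coin flip: an eavesdropper measuring in the wrong basis disturbs the photons the same way, so Alice and Bob can detect her by comparing a sample of key bits over the public channel.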