The Best American Science and Nature Writing 2012

by Rivka Galchen


  A number of physics journals rejected some of Deutsch’s early quantum-computing work, saying it was “too philosophical.” When it was finally published, he said, “a handful of people kind of got it.” One of them was the physicist Artur Ekert, who had come to Oxford as a graduate student and who told me, “David was really the first one who formulated the concept of a quantum computer.”

  Other important figures early in the field included the reclusive physicist Stephen J. Wiesner, who, with Bennett’s encouragement, developed ideas like quantum money (uncounterfeitable!) and quantum cryptography, and the philosopher of physics David Albert, whose imagining of introspective quantum automata (think robots in analysis) Deutsch describes in his 1985 paper as an example of “a true quantum computer.” Ekert says of the field, “We’re a bunch of odd ducks.”

  Although Deutsch was not formally Ekert’s adviser, Ekert studied with him. “He kind of adopted me,” Ekert recalled, “and then, afterward, I kind of adopted him. My tutorials at his place would start at around eight P.M., when David would be having his lunch. We’d stay talking and working until the wee hours of the morning. He likes just talking things over. I would leave at three or four A.M., and then David would start properly working afterward. If we came up with something, we would write the paper, but sometimes we wouldn’t write the paper, and if someone else also came up with the solution we’d say, ‘Good, now we don’t have to write it up.’” It was not yet clear, even in theory, what a quantum computer might be better at than a classical computer, so Deutsch and Ekert tried to develop algorithms for problems that were intractable on a classical computer but that might be tractable on a quantum one.

  One such problem is prime factorization. A holy grail of mathematics for centuries, it is the basis of much current cryptography. It’s easy to take two large prime numbers and multiply them, but it’s very difficult to take a large number that is the product of two primes and then deduce what the original prime factors are. To factor a number of two hundred digits or more would take a regular computer many lifetimes. Prime factorization is an example of a process that is easy one way (easy to scramble eggs) and very difficult the other (nearly impossible to unscramble them). In cryptography, two large prime numbers are multiplied to create a security key. Unlocking that key would be the equivalent of unscrambling an egg. Using prime factorization in this way is called RSA encryption (named for the scientists who proposed it, Rivest, Shamir, and Adleman), and it’s how most everything is kept secret on the Internet, from your credit card information to IRS records.
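
  The asymmetry is easy to see in a few lines of code. Here is a toy sketch in Python: the primes are tiny, illustrative values, nowhere near the hundreds of digits real RSA uses, but even at this scale multiplying takes a single step while undoing the multiplication takes a search.

```python
# Toy illustration of the one-way asymmetry behind RSA.
# These primes are tiny, illustrative values; real RSA moduli
# run to hundreds of digits, where trial division is hopeless.

p, q = 104723, 104729        # two primes
n = p * q                    # the easy direction: one multiplication
print(n)                     # an 11-digit product, computed instantly

def factor(n):
    """The hard direction: recover p and q by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1              # n itself was prime

print(factor(n))             # (104723, 104729): feasible here, not at 200 digits
```

  The search runs to roughly the square root of n, so each extra digit on the primes multiplies the work; at cryptographic sizes the easy direction stays instant while the hard direction outlasts the hardware.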

  In 1992 the mathematician Peter Shor (then at Bell Labs, now at MIT) heard a talk about theoretical quantum computing, which brought to his attention the work of Deutsch and other foundational thinkers in what was then still an obscure field. Shor worked on the factorization problem in private. “I wasn’t sure anything would come of it,” Shor explained. But about a year later, he emerged with an algorithm that (a) could only be run on a quantum computer and (b) could quickly find the prime factors of a very large number—the grail! With Shor’s algorithm, calculations that would take a normal computer longer than the history of the universe would take a sufficiently powerful quantum computer an afternoon. “Shor’s work was the biggest jump,” the physicist David DiVincenzo, who is considered among the most knowledgeable about the history of quantum computing, says. “It was the moment when we were, like, Oh, now we see what it would be good for.”
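
  Shor’s insight was to reduce factoring to finding the period of modular exponentiation; only that one step needs a quantum computer, and the rest is ordinary classical number theory. The sketch below shows the classical skeleton, with the period found by brute force where the quantum hardware would supply the exponential speedup. It is an illustration of the structure of the algorithm, not of Shor’s quantum circuit.

```python
from math import gcd
from random import randrange

def find_period(a, N):
    """Brute-force the order r of a modulo N: the smallest r with a^r = 1 (mod N).
    This is the step a quantum computer does exponentially faster; everything
    else in Shor's algorithm is classical arithmetic."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N):
    """Classical skeleton of Shor's algorithm (assumes N is an odd composite,
    as in RSA; the period finding here is naive rather than quantum)."""
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:                    # lucky guess: a already shares a factor
            return d, N // d
        r = find_period(a, N)
        if r % 2:                    # need an even period; retry otherwise
            continue
        y = pow(a, r // 2, N)
        if y == N - 1:               # trivial square root of 1; retry
            continue
        return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(15))            # e.g. (3, 5)
```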

  Today quantum computation has the sustained attention of experimentalists; it also has serious public and private funding. Venture capital companies are already investing in quantum encryption devices, and university research groups around the world have large teams working both to build hardware and to develop quantum-computer applications—for example, to model proteins or to better understand the properties of superconductors.

  Artur Ekert became a key figure in the transition from pure theory to building machines. He founded the quantum computation center at Oxford, as well as a similar center a few years later at Cambridge. He now leads a center in Singapore, where the government has made quantum-computing research one of its top goals. “Today in the field there’s a lot of focus on lab implementation, on how and from what you could actually build a quantum computer,” DiVincenzo said. “From the perspective of just counting, you can say that the majority of the field now is involved in trying to build some hardware. That’s a result of the success of the field.” In 2009 Google announced that it had been working on quantum-computing algorithms for three years, with the aim of having a computer that could quickly identify particular things or people from among vast stores of video and images—David Deutsch, say, from among millions of untagged photographs.

  In the early nineteenth century, a “computer” was any person who computed: someone who did the math for building a bridge, for example. Around 1830, the English mathematician and inventor Charles Babbage worked out the idea for his Analytical Engine, a machine that would remove the human from computing and thus bypass human error. Nearly no one imagined that an analytical engine would be of much use, and in Babbage’s time no such machine was ever built to completion. Though Babbage was prone to serious mental breakdowns, and though his bent of mind was so odd that he once wrote to Alfred, Lord Tennyson, correcting his math (Babbage suggested rewriting “Every minute dies a man / Every minute one is born” as “Every moment dies a man / Every moment one and a sixteenth is born,” further noting that although the exact figure was 1.167, “something must, of course, be conceded to the laws of meter”), we can now say the guy was on to something.

  A classical computer—any computer we know today—transforms an input into an output through nothing more than the manipulation of binary bits, units of information that can be either zero or one. A quantum computer is in many ways like a regular computer, but instead of bits it uses qubits. Each qubit (pronounced “q-bit”) can be zero or one, like a bit, but a qubit can also be zero and one—the quantum-mechanical quirk known as superposition. It is the state that the cat in the classic example of Schrödinger’s closed box is stuck in: dead and alive at the same time. If one reads quantum-mechanical equations literally, superposition is ontological, not epistemological; it’s not that we don’t know which state the cat is in but that the cat really is in both states at once. Superposition is like Freud’s description of true ambivalence: not feeling unsure, but feeling opposing extremes of conviction at once. And, just as ambivalence holds more information than any single emotion, a qubit holds more information than a bit.
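
  A standard way to make superposition concrete is the state-vector formalism: a qubit is a pair of complex amplitudes, one for each basis state, and the squared magnitudes give the odds of reading zero or one. A minimal numpy sketch, illustrative and tied to no particular hardware:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a unit vector of two complex
# amplitudes over the basis states |0> and |1>; squared magnitudes
# give the probabilities of reading 0 or 1.

zero = np.array([1, 0], dtype=complex)   # |0>
one  = np.array([0, 1], dtype=complex)   # |1>

# An equal superposition: zero AND one at once.
plus = (zero + one) / np.sqrt(2)

print(np.abs(plus) ** 2)   # [0.5 0.5] -- a 50/50 split on measurement
```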

  What quantum mechanics calls entanglement also contributes to the singular powers of qubits. Entangled particles have a kind of ESP: regardless of distance, they can instantly share information that an observer cannot even perceive is there. Input into a quantum computer can thus be dispersed among entangled qubits, which lets the processing of that information be spread out as well: tell one particle something, and it can instantly spread the word among all the other particles with which it’s entangled.
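
  In the same formalism, entanglement can be sketched by building the textbook Bell state with a Hadamard gate followed by a controlled-NOT. The resulting four amplitudes cannot be factored into separate one-qubit states, which is precisely what “entangled” means. Again an illustrative numpy sketch:

```python
import numpy as np

# Build the entangled Bell state (|00> + |11>)/sqrt(2) from two
# unentangled qubits via the standard Hadamard-then-CNOT circuit.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(np.array([1, 0]), np.array([1, 0]))  # start in |00>
state = CNOT @ (np.kron(H, I) @ state)               # entangle the pair

print(state)   # [0.707 0 0 0.707]: only 00 and 11 remain, perfectly correlated
```

  No assignment of a separate state to each qubit reproduces those four numbers; the information lives only in the pair taken together.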

  There’s information that we can’t perceive when it’s held among entangled particles; that information is their collective secret. As quantum mechanics has taught us, things are inexorably changed by our trying to ascertain anything about them. Once observed, qubits are no longer in a state of entanglement or of superposition: the cat commits irrevocably to life or death, and this ruins the quantum computer’s distinct calculating power. A quantum computer is the pot that, if watched, really won’t boil. Charles Bennett described quantum information as being “like the information of a dream—we can’t show it to others, and when we try to describe it we change the memory of it.”

  But once the work on the problem has been done among the entangled particles, then we can look. When one turns to a quantum computer for an “answer,” that answer, from having been held in that strange entangled way among many particles, needs then to surface in just one ordinary, unentangled place. That transition from entanglement to nonentanglement is sometimes termed “collapse.” Once the system has collapsed, the information it holds is no longer a dream or a secret or a strange cat at once alive and dead; the answer is then just an ordinary thing we can read off a screen.
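
  Simulating a measurement makes the collapse concrete: each look at the Bell state returns one definite outcome, drawn with probability equal to the squared amplitude, and the correlated readings 00 and 11 are the only ones that ever appear. A toy simulation, not a claim about any specific device:

```python
import numpy as np

rng = np.random.default_rng()

# Measuring the Bell state (|00> + |11>)/sqrt(2): each look yields a
# single definite outcome with probability |amplitude|^2, and the
# superposition is gone afterward ("collapse").

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2                        # [0.5, 0, 0, 0.5]

outcome = rng.choice(['00', '01', '10', '11'], p=probs)
print(outcome)                                   # '00' or '11', never mixed

# After the measurement, the pair sits in the ordinary, unentangled
# state |00> or |11>; its distinct calculating power is spent.
```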

  Qubits are not merely theoretical. Early work in quantum-computer hardware built qubits by manipulating the magnetic nuclei of atoms in a liquid soup with electrical impulses. Later teams, such as the one at Oxford, developed qubits using single trapped ions, a method that confines charged atomic particles to a particular space. These qubits are very precise, though delicate; protecting them from interference is quite difficult. More easily manipulated, albeit less precise, qubits have been built from superconducting materials arranged to model an atom. Typically, the fabrication of a qubit is not all that different from that of a regular chip. At Oxford I saw something that resembled an oversize air-hockey table chaotically populated with a specialty Lego set, with what looked like a salad-bar sneeze guard hovering over it; this extended apparatus comprised lasers and magnetic-field generators and optical cavities, all arranged at just the right angles to manipulate and protect from interference the eight tiny qubits housed in a steel tube at the table’s center.

  Oxford’s eight-qubit quantum computer has significantly less computational power than an abacus, but fifty to a hundred qubits could make something as powerful as any laptop. A team in Bristol, England, has a small, four-qubit quantum computer that can factor the number 15. A Canadian company claims to have built one that can do Sudoku, though that has been questioned by some who say that the processing is effectively being done by normal bits, without any superposition or entanglement.

  Increasing the number of qubits, and thus the computer’s power, is more than a simple matter of stacking. “One of the main problems with scaling up is a qubit’s fidelity,” Robert Schoelkopf, a physics professor at Yale who leads a quantum-computing team, explained. By fidelity, he refers to the fact that qubits “decohere”—fall out of their information-holding state—very easily. “Right now, qubits can be faithful for about a microsecond. And our calculations take about one hundred nanoseconds. Either calculations need to go faster or qubits need to be made more faithful.”
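
  Schoelkopf’s numbers imply a budget of roughly ten operations per qubit lifetime. A back-of-envelope check, assuming for illustration a simple exponential decay of fidelity (the decay model is my simplification, not something Schoelkopf specifies):

```python
import math

# Back-of-envelope version of Schoelkopf's figures, assuming
# exponential decoherence with time constant T2 (a simplification).
T2 = 1e-6        # qubit stays faithful for ~1 microsecond
t_op = 100e-9    # one calculation takes ~100 nanoseconds

print(T2 / t_op)                 # ~10 operations fit inside the lifetime
print(math.exp(-t_op / T2))      # ~0.905 fidelity remaining after one operation
```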

  What qubits are doing as we avert our gaze is a matter of some dispute and occasionally—“shut up and calculate”—of some determined indifference, especially for more pragmatically minded physicists. For Deutsch, to really understand the workings of a quantum computer necessitates subscribing to Hugh Everett’s Many Worlds Interpretation of quantum mechanics.

  Everett’s theory was neglected upon its publication in 1957 and is still a minority view. It entails the following counterintuitive reasoning: every time there is more than one possible outcome, all of them occur. So if a radioactive atom might or might not decay at any given second, it both does and doesn’t; in one universe it does, and in another it doesn’t. These small branchings of possibility then ripple out until everything that is possible in fact is. According to Many Worlds theory, instead of a single history there are innumerable branchings. In one universe your cat has died, in another he hasn’t, in a third you died in a sledding accident at age seven and never put your cat in the box in the first place, and so on.

  Many Worlds is an ontologically extravagant proposition. But it also bears some comfortingly prosaic implications: in Many Worlds theory, science’s aspiration to explain the world fully remains intact. The strangeness of superposition is, as Deutsch explains it, simply “the phenomenon of physical variables having different values in different universes.” And entanglement, which so bothered Einstein and others, especially for its implication that particles could instantly communicate regardless of their distance in space or time, is also resolved. Information that seemed to travel faster than the speed of light and along no detectable pathway—spookily transmitted as if via ESP—can, in Many Worlds theory, be understood to move differently. Information still spreads through direct contact—the “ordinary” way; it’s just that we need to adjust to that contact being via the tangencies of abutting universes. As a further bonus, in Many Worlds theory, randomness goes away, too. A 10 percent chance of an atom decaying is not arbitrary at all, but rather refers to the certainty that the atom will decay in 10 percent of the universes branching from that point. (This being science, there’s the glory of nuanced dissent around the precise meaning of each descriptive term, from “chance” to “branching” to “universe.”)

  In the 1970s, Everett’s theory received some of the serious attention it missed at its conception, but today the majority of physicists are not much compelled. “I’ve never myself subscribed to that view,” DiVincenzo says, “but it’s not a harmful view.” Another quantum-computing physicist called it “completely ridiculous,” but Ekert said, “Of all the weird theories out there, I would say Many Worlds is the least weird.” In Deutsch’s view, “Everett’s approach was to look at quantum theory and see what it actually said, rather than hope it said certain things. What we want is for a theory to conform to reality, and in order to find out whether it does, you need to see what the theory actually says. Which with the deepest theories is actually quite difficult, because they violate our intuitions.”

  I told Deutsch that I’d heard that even Everett thought his theory could never be tested.

  “That was a catastrophic mistake,” Deutsch said. “Every innovator starts out with the world view of the subject as it was before his innovation. So he can’t be blamed for regarding his theory as an interpretation. But”—and here he paused for a moment—“I proposed a test of the Everett theory.”

  Deutsch posited an artificial-intelligence program run on a computer that could be used in a quantum-mechanics experiment as an “observer”; the AI program, rather than a scientist, would be doing the problematic “looking,” and, by means of a clever idea that Deutsch came up with, a physicist looking at the AI observer would see one result if Everett’s theory was right and another if the theory was wrong.

  It was a thought experiment, though. No AI program existed that was anywhere near sophisticated enough to act as the observer. Deutsch argued that theoretically there could be such a program, though it could only be run on radically more advanced hardware—hardware that could model any other hardware, including that of the human brain. The computer on which the AI program would run “had to have the property of being universal . . . so I had to postulate this quantum-coherent universal computer, and that was really my first proposal for a quantum computer. Though I didn’t think of it as that. And I didn’t call it a quantum computer. But that’s what it was.” Deutsch had, it seems, come up with the idea for a quantum computer twice: once in devising a way to test the validity of the Many Worlds Interpretation, and a second time, emerging from the complexity-theory conversation, with an argument that supported Many Worlds as a consequence.

  To those who find the Many Worlds Interpretation needlessly baroque, Deutsch writes, “the quantum theory of parallel universes is not the problem—it is the solution. . . . It is the explanation—the only one that is tenable—of a remarkable and counterintuitive reality.” The theory also explains how quantum computers might work. Deutsch told me that a quantum computer would be “the first technology that allows useful tasks to be performed in collaboration between parallel universes.” The quantum computer’s processing power would come from a kind of outsourcing of work, in which calculations literally take place in other universes. Entangled particles would function as paths of communication among different universes, sharing information and gathering the results. So, for example, with the case of Shor’s algorithm, Deutsch said, “When we run such an algorithm, countless instances of us are also running it in other universes. The computer then differentiates some of those universes (by creating a superposition), and as a result they perform part of the computation on a huge variety of different inputs. Later those values affect each other, and thereby all contribute to the final answer, in just such a way that the same answer appears in all the universes.”

 
Deutsch is mainly interested in the building of a quantum computer for its implications for fundamental physics, including the Many Worlds Interpretation, which would be a victory for the argument that science can explain the world and that consequently reality is knowable. (“House cures people,” Deutsch said to me when discussing Hugh Laurie, “because he’s interested in solving problems, not because he’s interested in people.”) Shor’s algorithm excites Deutsch, but here is how his excitement comes through in his book The Fabric of Reality:

  To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10⁵⁰⁰ or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10⁸⁰ atoms in the entire visible universe, an utterly minuscule number compared with 10⁵⁰⁰. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?

 
