Computing with Quantum Cats

by John Gribbin


  But I don't want to end this chapter on such a daunting note. The optimistic news is that quantum computers have already been built and have used Shor's algorithm to solve a factorization problem. To be sure, only in a modest way; but it's a beginning. In 2001, a team led by Isaac Chuang at the IBM Almaden Research Center in San Jose, California, used a different method of correcting—or rather, compensating for—errors to find the factors of the number 15. The essence of their approach was to work with a molecule which contains five fluorine atoms and two carbon atoms, each of which has its own nuclear spin state.30 This means that each single molecule can in effect be used as a seven-qubit quantum computer, equivalent to a classical computer with 2⁷ bits (128 bits). But they didn't work with just a single molecule. You can't clone a quantum entity into multiple copies of itself, but you can prepare a multitude of quantum entities all in the same state. They used about a thimbleful of liquid containing about a billion billion molecules, probed with pulses of radio-frequency electromagnetic waves and monitored using the technique of nuclear magnetic resonance (NMR) familiar today from hospital scanning systems (where it is known as magnetic resonance imaging, or MRI, because the word “nuclear” frightens some people).
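  The quantum computer's contribution to Shor's algorithm is the period-finding step; the rest is ordinary number theory. As a rough illustration (not a description of the IBM team's actual procedure), here is a minimal Python sketch that finds the period of aˣ mod 15 by brute force, which is the part a quantum computer would do, and then applies Shor's classical post-processing to recover the factors; the choice of the base a = 7 is purely for illustration.

```python
from math import gcd

def shor_sketch(N=15, a=7):
    """Classical sketch of the number-theoretic steps wrapped around Shor's algorithm.

    The quantum hardware's job is the middle step: finding the period r of
    a^x mod N. Here r is found by brute force, which is only practical for
    tiny numbers such as 15.
    """
    assert gcd(a, N) == 1                      # a must share no factor with N
    r = 1
    while pow(a, r, N) != 1:                   # smallest r > 0 with a^r = 1 (mod N)
        r += 1
    if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
        return r, gcd(a ** (r // 2) - 1, N), gcd(a ** (r // 2) + 1, N)
    return r, None, None                       # unlucky choice of a: try another base

print(shor_sketch())                           # for a = 7: period 4, factors 3 and 5
```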

  Left to their own devices, the spins of the nuclei of the atoms in all those molecules “point” in different directions. For computational purposes they can be regarded as a random string of 0s and 1s. Applying the appropriate magnetic field changes the spins of all the nuclei inside some of the molecules, so that in about one in a hundred million of the molecules all seven nuclei are in the same state—say, 1. That's like 10 billion identical computers all in the state 1111111. More subtle applications of the magnetic influence can flip one particular nucleus in each molecule, so that all 10 billion of them now read, say, 1111011. The pattern can be read by NMR because 10 billion molecules are giving out the same signal against a background of random noise produced by all the other molecules. And if, say, 10 percent of those 10 billion are in the “wrong” state because of errors, that just gets lost in the noise. The experimenters can flip the spin of each type of atom in all the molecules, effectively at will; but the atoms in each molecule interact with their neighbors, and the molecule used was chosen to have properties such that, for example, one of its nuclei will flip only if its neighbor is in the spin state 1, forming a CNOT gate. That is how the “computer” was made to factorize the number 15.
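  A CNOT (“controlled-NOT”) gate flips its target bit only when its control bit is 1, which is exactly the conditional nuclear flip just described. Here is a minimal sketch of its action on definite bit values; a genuine quantum CNOT does the same to these four basis states, and in addition acts on superpositions of them.

```python
def cnot(control, target):
    """Flip the target bit only if the control bit is 1 (classical view of a CNOT)."""
    return control, target ^ control           # XOR with the control does the conditional flip

for control in (0, 1):
    for target in (0, 1):
        print((control, target), "->", cnot(control, target))
# (0, 0) -> (0, 0)   (0, 1) -> (0, 1)   (1, 0) -> (1, 1)   (1, 1) -> (1, 0)
```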

  Of course, the experimenters were careful not to read the patterns in the molecules during the computation because that would make them decohere. They only looked at the pattern when the computation was complete. In effect, the “readout” from the computer averaged over all the molecules, with the huge number of right answers effectively swamping the much smaller number of errors introduced by decoherence and other difficulties. You will not be surprised to learn that the computer found that the factors of 15 are 3 and 5. But still, it worked. It even made the pages of the Guinness Book of Records.

  Unfortunately, because of the limitations of the monitoring technique, the method will not work for molecules with more than about ten qubits, so at present there is no prospect of building a larger quantum computer using this method. But there are, as we shall see, realistic prospects of building bigger quantum computers using other techniques.

  Everything I have described so far demonstrates that quantum computers work—not just in principle, but at a practical level. Individual qubits can be prepared and manipulated, with the aid of individual logic gates, including the vital CNOT gate. But the enormous challenge remains of constructing a quantum computer on a scale large enough to beat classical computers at the range of tasks, such as factorization of large numbers, for which quantum computers are suited. Even given the power of superposition, this will (conservatively) involve manipulating at the very least hundreds of qubits using dozens of gates, within the time limits set by decoherence and with inbuilt error correction. It's a sign of the immaturity of the field that many competing approaches are being tried out in an attempt to find one that works on the scale required. And it's a sign of how fast the field is developing that while I was writing this book a technology that had seemed like an also-ran when I started had emerged as one of the favorites by the time I got to this chapter. I've no idea what will seem the best bet by the time you read these words, so I shall simply set out a selection of the various stalls to give you a flavor of what is going on. The techniques include the trapped ion and nuclear magnetic resonance approaches that we have already met, superconductors and a quantum phenomenon known as the Josephson junction, so-called “quantum dots,” using photons of visible light as qubits, and a technique called cavity quantum electrodynamics (involving atoms).

  THE KEY CRITERIA

  If any of these techniques is to work as a “proper” quantum computer, as opposed to the “toy” versions so far constructed, it will have to satisfy five criteria, spelled out by David DiVincenzo, of IBM's Physics of Information group, at the beginning of the present century:1

  1. Each qubit has to be well defined (“well characterized” in quantum jargon) and the whole system must be scalable to sufficiently large numbers. In computer jargon, each qubit must be separately addressable. It is also desirable, if possible, to have a single quantum system acting as different kinds of qubits, as with the single ion we met in the previous chapter that stores 1 from the energy mode and 0 from the rocking mode, giving a two-bit register.

  2. There has to be a way of initializing the computer by setting all the qubits to zero at the start of a computation (re-setting the register). This may sound trivial, but it is a big problem for some techniques, including the NMR system that has proved so effective on a small scale. In addition, quantum error correction needs a continuous supply of fresh qubits in the 0 state, requiring what DiVincenzo calls a “qubit conveyor belt” carrying qubits in need of initialization away from the region where computation is being carried out to be initialized, then bringing them back when they have been set to 0.

  3. There has to be a way of dealing with the old problem of decoherence, or, more specifically, with the shortness of the decoherence time. A classical computer is good for as long as the hardware lasts, and my wife is not alone in having a computer nearly ten years old that she is still entirely happy with. By contrast, a quantum computer—the virtual Turing machine inside the hardware—“lasts” for about a millionth of a second. In fact, this is not quite the whole story. What really matters are the relative values of the decoherence time and the time it takes for a gate to operate. The gate operation time may be pushed to a millionth of a millionth of a second, allowing for a million operations before complete decoherence occurs. Putting it another way, during the course of the operation of a gate, only one in a million qubits will “dephase.” This just about makes quantum computing feasible; or, in DiVincenzo's words, it is, “to tell the truth, a rather stringent condition.”

  4. As I have already discussed, we need reversible gates. In particular, we need to incorporate CNOT gates into the “circuitry” of the quantum computer, but these gates have short decoherence times and are difficult to construct. (Incidentally, if 3-bit gates, equivalent to Fredkin gates, could be made out of qubits, quantum computers would be more efficient; 2-bit gates are the minimum requirement, not the best in computational terms.) It is also necessary, of course, to turn the gates on and off as required. Quantum computer scientists have identified two potential problems that might be encountered. The first involves gates which are switched on by natural processes, as the computation proceeds, but which are hard to switch off; the second involves gates which have to be switched on (and off) from outside as required. “Outside” to the gate would be a “bus qubit” which would have to be able to interact with each of the qubits in the computer, and which would itself be prone to decoherence. But the bottom line is that quantum gates cannot be implemented perfectly, and errors are inevitable. The trick will be to minimize the errors and find ways of working around them.

  5. Finally, it has to be possible to measure the qubits in order to read out the “answer” to a problem. Inevitably, because this involves quantum processes, the measurement cannot give a unique answer with 100 percent accuracy, so it also has to be possible to repeat the computation as many times as is required to achieve the desired level of accuracy. This may not be too arduous. DiVincenzo points out that if the “quantum efficiency” is 90 percent, meaning that the “answer” is right nine times out of ten, then 97 percent reliability can be achieved just by running the calculation three times (a quick check of that arithmetic is sketched below).
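  The figure of 97 percent follows from elementary probability, provided the three runs are independent and the answer given by the majority of them is accepted; that majority-vote reading is an editorial gloss on DiVincenzo's claim rather than something spelled out here. A one-line check:

```python
# Probability that at least 2 of 3 independent runs are correct, each right with p = 0.9:
p = 0.9
majority_correct = p**3 + 3 * p**2 * (1 - p)   # all three right, or exactly two right
print(round(majority_correct, 3))              # 0.972, i.e. roughly 97 percent
```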

  DiVincenzo also added two other criteria, not strictly relevant to computation itself, but important in any practical quantum computer. They are a result of the need for communication, by which, says DiVincenzo, “we mean quantum communication: the transmission of intact qubits from place to place.” This is achieved with so-called “flying qubits” that carry information from one part of a quantum computer to another part. The first criterion is the ability to convert stationary qubits into flying qubits, and vice versa; the second is to ensure that the flying qubits fly to the right places, faithfully linking specified locations inside the computer.

  Nobody has yet found a system which achieves a satisfactory level for all five criteria at the same time, although ion trap devices come closest, having fulfilled all the criteria separately and several in conjunction with one another.2 Some systems are (potentially) good in one or two departments, others are good in other departments. It may be that the best path will involve combining different techniques in some sort of hybrid quantum computer, to get around all these difficulties. But here are some of the contenders, as of the end of 2012, starting with one of my favorites.

  JOSEPHSON AND THE JUNCTION

  When I first started writing about quantum physics, I was particularly intrigued by work being carried out at Sussex University on Superconducting Quantum Interference Devices, or SQUIDs. These are, by the standards of quantum physics, very large (macroscopic) objects, a bit smaller than a wedding ring. Yet they can behave, under the right circumstances, like single quantum entities, which is what makes them so fascinating. Now, three decades after I first wrote about them, they have the potential to contribute to the construction of quantum computers. They are based on a phenomenon known as the Josephson effect, discovered in 1962 by a 22-year-old student, who later received a Nobel Prize for the work.

  Brian Josephson was born in 1940 in Cardiff. He was educated at the Cardiff High School for Boys, at the time a grammar school (it has since merged with two other schools to form the modern comprehensive Cardiff High School). From there he went on to Trinity College, Cambridge, at the age of seventeen, graduating in 1960. As an undergraduate he had already published a significant scientific paper concerning a phenomenon known as the Mössbauer effect, and was marked out as a high flier. Josephson stayed on in Cambridge to work for a PhD, awarded in 1964, two years after his Nobel Prize–winning breakthrough; also in 1962, while still a student, he was elected a Fellow of Trinity. After completing his PhD, Josephson spent a year as a visitor at the University of Illinois before returning to Cambridge, where he stayed for the rest of his career (apart from brief visits to universities around the world), becoming a professor in 1974 and retiring in 2007. But after he encountered Bell's theorem in the mid-1960s, Josephson drifted away from mainstream physics and became increasingly intrigued by “mind-matter interactions,” directing the Mind-Matter Unification Project at the Cavendish Laboratory (a Nobel Prize allows researchers considerable leeway in later life), studying Eastern mysticism, and becoming convinced that entanglement provides an explanation for telepathy. Most physicists regard this as complete rubbish, and feel that Josephson's brilliant mind was essentially lost to physics by the time of the award of his Nobel Prize in 1973.

  When the Royal Mail produced a set of stamps to mark the centenary, in 2001, of the Nobel Prizes, they asked laureates, including Josephson, to contribute their thoughts on their own field of study. In his comments, Josephson referred to the possibility of “an explanation of processes still not understood within conventional science, such as telepathy.” This provoked a fierce response from several physicists, including David Deutsch, who said: “It is utter rubbish. Telepathy simply does not exist…complete nonsense.”3

  But none of this detracts from the importance of the discovery Josephson made in 1962, which is straightforward to describe but runs completely counter to common sense. He was studying the phenomenon of superconductivity, which had fascinated him since he was an undergraduate. This happens in some materials when they are cooled to very low temperatures, below their appropriate “critical temperature,” at which point they have no electrical resistance at all. It was discovered in 1911 by Kamerlingh Onnes, in Leiden; but although clearly a quantum effect, it was still not fully understood in 1962.

  Josephson found that, according to the equations of quantum physics, under the right conditions a current, once started, would flow forever through a superconductor without any further voltage being applied. The “right conditions” involve what have become known as Josephson junctions: two superconductors joined by a “weak link” of another kind of material, through which electrons4 can tunnel. There are three possible forms of this junction: first, superconductor-insulator-superconductor, or S-I-S; secondly, superconductor-nonsuperconductor-superconductor, or S-N-S; and finally one with a literally weak link in the form of a thin neck of the superconductor itself, known as S-s-S.

  The story of how Josephson came up with his insight has been well documented, notably by Josephson himself in his Nobel lecture, and by Philip Anderson (himself a later Nobel Prize winner), who was visiting Cambridge from Bell Labs in 1962. Anderson wrote about the discovery of the Josephson effect in an article in the November 1970 issue of Physics Today, recounting how he met Josephson when the student—nearly seventeen years his junior—attended a course he gave on solid-state and many-body theory: “This was a disconcerting experience for a lecturer, I can assure you, because everything had to be right or he would come up and explain it to me after class.” Josephson had learned about experiments involving tunneling in superconductors, and was working on the underlying theory when “one day Anderson showed me a preprint he had just received from Chicago in which Cohen, Falicov and Phillips calculated the current flowing in a superconductor-barrier–normal-metal system…. I immediately set to work to extend the calculation to a system in which both sides of the barrier were superconducting.” Josephson discussed his work with Anderson, who was encouraging but, he emphasizes, made no direct contribution: “I want to emphasize that the whole achievement, from the conception to the explicit calculation in the publication, was entirely Josephson's…this young man of twenty-two conceived the entire thing and carried it through to a successful conclusion.”

  Anderson returned to Bell Labs, where he and John Rowell made the first working Josephson junction and confirmed the reality of the Josephson effect. Meanwhile, in August 1962 Josephson wrote up his work as a “fellowship thesis” in support of his (successful) application for a research fellowship at Trinity College; this may be a unique example of such a thesis being worthy of a Nobel Prize! There were originally just two copies of this masterpiece, one submitted to Trinity and one kept by Josephson; a third (a photocopy) turned up in Chicago, but Anderson does not know how it got there. More formal publication came in the journal Physics Letters later in the year (volume 1, page 251). But in accordance with the regulations of Cambridge University, Josephson still had to remain “in residence” for another two years before he could be awarded his doctorate.

  Josephson's paper was as comprehensive as it could possibly have been. It was clear from the outset that the Josephson effect had many practical applications, some of which I will describe. At Bell, Anderson and Rowell consulted their resident patent lawyer about the possibilities: “In his opinion, Josephson's paper was so complete that no one else was ever going to be very successful in patenting any substantial aspect of the Josephson effect.” Nevertheless, they took out patents, but these were never enforced and so have never been challenged.

  Many of the practical applications of Josephson effect devices depend on their extreme sensitivity to magnetic fields—so extreme that in some cases the Earth's magnetic field has to be compensated for. In one of their early experiments, Anderson and Rowell found a “supercurrent” of 0.30 milliamps (mA) in the Earth's magnetic field, which increased to 0.65 mA when the field was compensated for. (Less field means more current.) Probably the most widespread application is in calibrating and controlling voltage. A Josephson junction exhibits a precise relationship between frequency and voltage. Since frequency is defined in terms of standards such as the emission from atoms of cesium, in effect this leads to a definition of the volt; turning this around, the effect is used to ensure the accuracy of DC voltage standards. Another application, the superconducting single-electron transistor, is a charge amplifying device with widespread potential uses; Josephson devices can also be used as fast, ultra-sensitive switches which can be turned on and off using light, increasing the processing speed of classical computers, for example, a hundredfold. A group at the University of Sussex, headed by Terry Clark, has developed a device based on this technology which is so sensitive that it can monitor a person's heartbeat from a meter away, without the need for any electrical connections; the technique has been extended to monitoring brain activity.

  This list is by no means exhaustive; but what we are interested in here is the application of the Josephson effect to quantum computing, where the key (as with some of the other applications) is the superconducting quantum interference device, or SQUID. A SQUID is a ring of superconducting material incorporating a single Josephson junction, so that electric current can flow endlessly around the ring without any voltage being applied. But, in an echo of Schrödinger's famous cat “experiment,” the same electric current can flow both ways around the ring at once. Quantum superposition can be demonstrated in a macroscopic object, one big enough to see and feel—I've held one in my own hand, courtesy of Terry Clark.
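  The precise frequency–voltage relationship mentioned above is the AC Josephson relation f = 2eV/h: a junction held at a steady voltage V oscillates at a frequency f set by nothing but fundamental constants, with 2e/h (the Josephson constant) close to 483.6 GHz per millivolt. The relation itself is textbook physics rather than something stated explicitly here; a quick numerical sketch:

```python
# AC Josephson relation f = 2eV/h, using exact SI values of the constants.
e = 1.602176634e-19                     # elementary charge, coulombs
h = 6.62607015e-34                      # Planck constant, joule-seconds

K_J = 2 * e / h                         # Josephson constant, hertz per volt
print(f"K_J = {K_J:.6e} Hz/V")          # ~4.835978e14 Hz/V, i.e. ~483.6 GHz per millivolt

V = 1e-6                                # a bias of one microvolt...
print(f"f = {K_J * V / 1e9:.3f} GHz")   # ...corresponds to about 0.484 GHz
```

  This is why Josephson junctions can serve as voltage standards: pin down the frequency, which atomic clocks do superbly, and the voltage follows.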

 
