Computing with Quantum Cats

by John Gribbin


  THE NUCLEAR OPTION

  In 2012, news came of two major developments in nuclear-spin quantum memory, reported in the same issue of the journal Science.[11] Both are based on the kind of solid-state technology familiar to manufacturers of classical semiconducting computer chips. The first involves ultra-pure samples of silicon-28, an isotope which has an even number of nucleons (protons and neutrons) in each atomic nucleus, so that overall there is no nuclear spin. The samples are 99.995 percent pure. This provides a background which the researchers, a team from Canada, Britain and Germany, describe as a “semiconductor vacuum.” It has no spins to interact with the nuclei of interest, which greatly reduces the likelihood of decoherence. With this material as a background, the silicon can be doped with donor atoms such as phosphorus[12] (just as in conventional chips), each of which does have spin.

  The term “donor” means that the phosphorus atom has an electron which it can give up. Each silicon atom can be thought of as having four electronic bonds, one to each of its four nearest neighbors in a crystal lattice; an occasional phosphorus atom can fit into the lattice, also forming four bonds with its neighbors, but with one electron left over. Such a doped silicon lattice forms an n-type semiconductor. Using a coupling which is known as the hyperfine interaction, the spin state of the donor electron (which can itself, in principle, act as a qubit) can be transferred to the nucleus of the phosphorus atom, stored there for a while, then transferred back to the electron. All of this involves manipulating the nuclei with magnetic fields, running the experiment at temperatures only a few degrees above absolute zero, and monitoring what is going on using optical spectroscopy. But crucially, although they appear daunting to the layman, the techniques used to monitor hyperfine transitions optically are already well established as standard for monitoring ion qubits in a vacuum.

  In the experiments reported so far, ensembles of nuclei, rather than individual phosphorus nuclei, were monitored. But the decoherence time was 192 seconds, or as the team prefer to point out, “more than three minutes.” We have already arrived in the era of decoherence times measured in minutes, rather than seconds or fractions of a second, which is a huge and valuable step towards a practical working quantum computer. And the technique should be extensible to reading out the state of single phosphorus atoms, as well as being applicable to other donor atoms.

  Compared with this, the achievement of the other team reported in the same issue of Science may seem at first sight less impressive. Using a sample of pure carbon (essentially, diamond) rather than silicon, a joint US-German-British team achieved a decoherence time of just over one second. But they did so at room temperature, reading out from a single quantum system, and they make a reliable estimate, based on cautious extrapolation from the existing technology, that “quantum storage times” exceeding a day should be possible. That really would be a game changer.

  In these experiments, crystals of diamond made from 99.99 percent pure carbon-12 (which, like silicon-28, has no net nuclear spin) were grown by depositing them from a vapor. Like a silicon atom, each carbon atom can bond with four neighbors. But such crystals contain a few defects known as nitrogen-vacancy (N-V) centers. In such a defect, one carbon atom is replaced by a nitrogen atom, which comes from the air; but since each nitrogen atom can only bond with three carbon atoms there is a gap (the vacancy) where the bond to the fourth next-door carbon atom ought to be. In effect, this vacancy contains two electrons from the nitrogen atom and one from a nearby carbon atom, which exist in an electron spin resonance (ESR) state. N-V centers absorb and emit red light, so they interact with the outside world and can be used as readouts of the quantum state of anything they interact with at the quantum level—the fifth of the DiVincenzo criteria—or as a means of making inputs to the system. The bright red light associated with N-V centers also makes it easy to locate them in the crystal.

  The “anything” the N-V center interacts with in these experiments is a single atom of carbon-13, which has overall nuclear spin, located one or two nanometers away. At this distance, the coupling between the carbon-13 nucleus and the ESR associated with the N-V center (another example of the hyperfine interaction at work) is strong enough to make it possible to prepare the nucleus in a specified quantum spin state and to read the state back, but not strong enough to cause rapid decoherence. For the concentration of carbon-13 used in the experiments, about 10 percent of all the naturally occurring N-V centers had a carbon-13 nucleus the right distance away to be useful; but each measurement involved just a single N-V center interacting with a single carbon-13 nucleus. Other experiments have shown that it is possible to entangle photons with the electronic spin state of N-V centers, providing another way of linking the nuclear memory to the outside world, potentially over long distances.

  The storage time achieved in these experiments was 1.4 seconds. But even using simple refinements, such as reducing the concentration of carbon-13 to decrease the interference caused by unwanted interactions, it should be possible to extend this by more than 2,500 times, to an hour or so; from there it will be a relatively small step, using techniques pioneered in other fields, to go up by another factor of 25 to get nuclear spin memories that last for more than a day, at room temperature. But proponents of quantum computing are still far from putting all their eggs in one basket, attractive as this one might be. Even the NMR approach, which is now almost ancient history by the standards of the field, is still providing potentially useful insights.
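The arithmetic behind that extrapolation is easy to check. A short calculation, using only the numbers quoted above (1.4 seconds and the improvement factors of 2,500 and 25), confirms that the two steps really do reach "an hour or so" and then "more than a day":

```python
# Checking the storage-time extrapolation quoted in the text.
base = 1.4                        # seconds, as demonstrated at room temperature
one_hour_ish = base * 2500        # first improvement factor from the text
beyond_a_day = one_hour_ish * 25  # second improvement factor from the text

print(one_hour_ish / 60)    # ~58 minutes -- "an hour or so"
print(beyond_a_day / 3600)  # ~24.3 hours -- just over a day
```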

  THE NUTS AND BOLTS OF NMR

  In the previous chapter, I got a little ahead of myself by describing the exciting first results of quantum computation using nuclear magnetic resonance, the first successful quantum computation technique, without really explaining the fundamental basis of NMR. It is time to redress the balance.

  Atomic nuclei are made up of protons and neutrons.[13] The simplest nucleus, that of hydrogen, consists of a single proton; the next element, helium, always has two protons in the nucleus, but may have either one or two neutrons; these varieties are known as helium-3 and helium-4, respectively, from the total number of particles (nucleons) in the nucleus. Going on up through heavier chemical elements, the very pure form of silicon used by Michelle Simmons and her colleagues is a variety (isotope) known as silicon-28, because it has 28 nucleons (14 protons and 14 neutrons) in the nucleus of each atom. Another isotope, silicon-29, has 14 protons and 15 neutrons in each nucleus. The crucial distinction, for the purposes of quantum computation, is the difference in spin between nuclei with an even number of nucleons and nuclei with an odd number of nucleons.

  Neutrons and protons are both so-called “spin-½” particles. This means that they can exist in either of two spin states, +½ or –½, also known as “up” and “down,” which can be equivalent, as we have seen, to 0 and 1 in binary code. You might think that this would mean that the overall spin of a nucleus of silicon-28 would be anything up to 14, depending on how the spins of individual nucleons add up or cancel out; but the quantum world doesn't work like that. Instead, each pair of protons aligns so that the spins cancel out, and the same is true for each pair of neutrons. So nuclei with even numbers of both protons and neutrons have zero overall spin, but other nuclei have non-zero spin. Thus silicon-28 has zero spin, but silicon-29 has an overall spin of ±½. This is what makes pure silicon-28 such a perfect background material against which to monitor the spins of atoms used to dope the crystal lattice.
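The pairing rule just described reduces to a one-line predicate: a nucleus has zero net spin exactly when it contains an even number of protons and an even number of neutrons, because each like pair cancels. A minimal sketch, with the isotopes mentioned in this chapter as examples:

```python
# The spin-pairing rule from the text: paired protons cancel, paired
# neutrons cancel, so zero net spin requires both counts to be even.
def zero_net_spin(protons: int, neutrons: int) -> bool:
    return protons % 2 == 0 and neutrons % 2 == 0

print(zero_net_spin(14, 14))  # silicon-28: True (the "semiconductor vacuum")
print(zero_net_spin(14, 15))  # silicon-29: False (net spin of +-1/2)
print(zero_net_spin(6, 6))    # carbon-12:  True (spinless diamond background)
print(zero_net_spin(6, 7))    # carbon-13:  False (the memory nucleus)
```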

  NMR, though, doesn't use nuclei as complicated as those of silicon-28. It depends on the fact that there is an interaction between magnetism and nuclear spin, so that applying the right kind of alternating magnetic field to a nucleus can make it jump between energy levels corresponding to different spin states. This is the resonance in nuclear magnetic resonance, and it shows up as an absorption of energy at a precise frequency of oscillation, the resonance frequency. The simplest nucleus to work with is the hydrogen nucleus, which is a single proton. The exact response of the proton to the oscillating magnetic field depends on its chemical environment—which molecules the hydrogen atoms are part of—so by sweeping a varying magnetic field across the human body and measuring the resonance at different locations it is possible to get a map which reveals the chemical environment of the hydrogen atoms in different parts of the body. That's what we know as an MRI scan.

  The curious feature of quantum computing using NMR, though, is that we are dealing not with individual spins, but with some kind of average of billions and billions of essentially identical states—typically involving 10²⁰ nuclei. In a fluid[14] being used for quantum computation in this way, the energy difference between the two spin states of the proton is very small, and this means that although nuclei prefer to be in the lower energy level, it is easy for them to get knocked up into the upper level by random interactions (literally, by neighbor atoms bumping into them). Once there, they will fall back down again; but meanwhile, other nuclei have been bumped up to take their place. At any one time, for every million nuclei in the upper level there may be a million and one in the lower energy level. In effect, the NMR computing technique is working with the one in a million “surplus” nuclei in the lower level, getting them to jump up to the higher level. But it is working with all of them at once. And all of those “one in a million” nuclei, billions of them, jumping together between energy levels, have to be regarded as a single qubit, switching between the states 0 and 1.
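That "one in a million" surplus comes straight from the Boltzmann distribution: the population ratio of the two spin levels is set by the tiny energy gap compared with the thermal energy at room temperature. A rough sketch of the calculation follows; the 1-tesla field strength is an illustrative assumption (not a figure from the text), chosen to show the order of magnitude:

```python
import math

# Boltzmann populations of the two proton spin states in a magnetic field.
# The 1 T field is an assumed, illustrative value; the physical constants
# are standard.
h = 6.626e-34              # Planck constant, J*s
k = 1.381e-23              # Boltzmann constant, J/K
gamma = 42.58e6            # proton gyromagnetic ratio / 2*pi, Hz per tesla

B = 1.0                    # tesla (assumption for illustration)
T = 300.0                  # room temperature, K

delta_e = h * gamma * B                # energy gap between the spin states
ratio = math.exp(-delta_e / (k * T))   # upper/lower population ratio
excess = (1 - ratio) / (1 + ratio)     # fractional surplus in the lower level

print(f"population ratio  = {ratio:.8f}")   # barely below 1
print(f"surplus fraction  = {excess:.2e}")  # a few parts per million
```

The surplus fraction comes out at a few parts per million, the same order of magnitude as the book's "one in a million" figure; stronger magnets or lower temperatures enlarge it.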

  I explained in the previous chapter how effective the technique has been in demonstrating the techniques of quantum computing with small numbers of qubits (up to 10 or so). But as I also mentioned, there are severe scaling problems with the technique, and it has already been pushed about as far as it can go. Even so, it isn't quite ready to be consigned to the dustbin of history. There is something very odd about NMR computation, which has set people thinking along completely different lines—as I shall discuss in the Coda.

  Meanwhile, another old warhorse of a quantum computation technique, the ion trap approach, which is scalable, has quietly made steady progress. This was recognized in 2012 by the award of a half-share of the Nobel Prize in physics to David Wineland, of NIST, whom we met in Chapter 5.

  TRAPPED IONS TAKE A BOW

  Wineland's Nobel citation specified that the award was for “ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems.” In the words of the Royal Swedish Academy of Sciences, which administers the awards, if the quantum computer is built in the near future it “will change our everyday lives in this century in the same radical way as the classical computer did in the last century.” In this connection, there is one important feature of the ion trap approach which should be borne in mind when considering all the possibilities for computing with quanta. It is the only method in which all of the physics involved uses standard techniques that have been tried, tested and proven to work. All of the other approaches, even though they are based on sound theoretical principles, rely on some kind of practical physics breakthrough in the not too distant future if they are to maintain momentum. So while one or another of them may seem to spurt ahead for a time, like the fabled hare, so far they have each ground to a halt after a while, while the trapped ion technique continues to plod along, tortoise-like, improving the technology but always using the same physics.[15] Winfried Hensinger says that the first working quantum computer, in about the middle of the 2020s, is likely to be based on the trapped ion technique, and to be as big as a house. But you only have to compare the size of Colossus with the size of a modern smartphone to realize that this will be far from the end of the story.

  Wineland has helped the trapped ion tortoise to take a few more steps down that road. Born in Milwaukee, Wisconsin, on February 24, 1944, he moved to California as a child and attended high school in Sacramento. He took his first degree at the University of California, Berkeley, graduating in 1965, received his PhD from Harvard in 1970, and worked at the University of Washington in Hans Dehmelt's group before joining the National Bureau of Standards in 1975. One focus of his work with trapped ions there has been the development of more accurate clocks—better timekeeping devices than the atomic clocks which are now standard. In 1979 Wineland founded the ion storage group of the Bureau, now based at NIST in Boulder, Colorado, using the techniques which I described in the previous chapter.

  As I have explained, the problem with developing the trapped ion technique into a practical quantum computer is that it is extremely difficult to control strings of trapped ions containing more than about 20 qubits. Wineland and his colleagues have proposed getting around this difficulty by dividing the “quantum hardware” up into chunks, carrying out calculations using short chains of ions that are shuffled about on a quantum computer chip by electric forces which do not disturb the internal quantum states of the strings. According to Wineland and Monroe,[16] “the resulting architecture would somewhat resemble the familiar charge-coupled device (CCD) used in digital cameras; just as a CCD can move electric charge across an array of capacitors, a quantum chip could propel strings of individual ions through a grid of linear traps.” In 2005, Hensinger and his team at the University of Michigan managed to demonstrate reliable transport of ions “round the corner” in a T-shaped ion trap array. Since then, even more complicated ion trap CCDs have been developed.

  This is an active line of research today at NIST, where the researchers work with beryllium ions. Although the electrodes used to guide the ions in a practicable quantum computer would have to be very small—perhaps as little as 10 millionths of a meter across—Monroe and Wineland emphasize that the engineering involved uses just the same kind of micro-fabrication technologies that are already used in the manufacture of conventional computer chips. Other groups are also working along these lines, undaunted by the need to reduce noise by cooling the electrodes with liquid nitrogen or even liquid helium. But there is another way to combine information from different strings of ions in a quantum computer—using light.

  In this approach, instead of using the oscillatory motion of the ions (or ion strings), photons are used to link the qubits together. Ions emit light, and it is possible to set up situations in which the properties of the emitted photons, such as their polarization or their color, are entangled with the internal quantum states of the ion that is doing the emitting. Photons from two different ions are directed down optical fibers towards a device like the beam-splitting mirrors I described in Chapter 5, but working in reverse. With this setup, the photons enter the “splitter” from opposite sides, and are given the opportunity to interact with one another. If they have the same appropriate quantum property (the same polarization, for example), they will interact with one another, become entangled, and leave the beam splitter together along the same optical fiber. But if they have different quantum properties—different polarizations, or different colors, or whatever—they will ignore one another and leave the splitter along different optical fibers. Simple photon detectors placed at the end of each fiber tell the experimenters whether entanglement has occurred or not. Crucially, though, there is no way to determine which ion has emitted which photon; but if the detectors reveal that the photons are now entangled with one another, the ions they came from have also become entangled. Although ion-photon entanglement is tricky to work with, the incentive is that it allows for the possibility of a modular quantum ion processor, built up from many smaller processors linked by photons. Eventually, this could lead to a quantum Internet.

  As is so often the case with quantum experiments, most of the time the emitted photons are never gathered up by the beam splitter, and the entanglement does not occur. But, as ever, the solution is simply to keep trying until the experimenters do find photons being detected simultaneously at the appropriate detectors. Once the detectors show that there is entanglement between the two ions—the two qubits—the experimenters also know that manipulating one of the qubits will affect the other one—the basis of the CNOT gate. This is not just abstract theorizing. A team at the University of Michigan who later moved to the University of Maryland have successfully entangled two qubits in this way in the form of trapped ions separated by a distance of roughly a meter. This is Einstein's “spooky action at a distance” put to practical use.
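The keep-trying bookkeeping just described can be caricatured in a few lines. This is only a toy sketch of the classical herald-and-retry logic, not of the quantum interference itself: the 10 percent collection probability is an invented illustrative number, and the rule "same property, photons leave together, ions entangled" is taken directly from the description above.

```python
import random

# Toy sketch of heralded ion-ion entanglement, following the text's rule.
# The real beam-splitter interaction is quantum interference, NOT a
# classical comparison; this only models the repeat-until-success loop.

def herald(pol_a: str, pol_b: str) -> bool:
    """Text's rule: photons with the same property exit the splitter
    together, heralding entanglement of their parent ions."""
    return pol_a == pol_b

random.seed(1)
attempts = 0
entangled = False
while not entangled:
    attempts += 1
    # most attempts fail because the photons are never collected at all
    if random.random() < 0.10:          # assumed collection probability
        entangled = herald("H", "H")    # identically polarized photons
print(f"entanglement heralded after {attempts} attempts")
```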

  In these first experiments, the rate at which ion pairs were entangled was only a few per minute. There is a possible way to make the process more efficient, by surrounding each ion with highly reflective mirrors to make what is known as an optical cavity, in which photons bounce around before being captured in an optical fiber. The technology is tricky; but intriguingly it is closely related to the work for which the other half of the 2012 Nobel Prize in physics was awarded, to the Frenchman Serge Haroche, a good friend of Wineland's who was born in the same year, 1944.

  But before I describe the work for which Haroche received the Nobel Prize, a little diversion, into the world of quantum teleportation. It sounds like science fiction, but it is sober science fact; and it turns out to be highly relevant to one of the most promising approaches to making computing with quanta practicable.

  THE TELEPORTATION TANGO

  Quantum teleportation is based on the spooky action at a distance that so disgusted Einstein but is demonstrated to be real in tests of the EPR “paradox” and measurements of Bell's inequality. It rests on the fact—confirmed in those experiments—that if two quantum entities, let's say two photons, are entangled, then no matter how far apart they are, what happens to one of those two photons instantly affects the state of the other photon. The key refinement is that, by tweaking the first photon in the appropriate way (called a “Bell-state measurement”), its quantum state can be transferred to the second photon, while the state of the first photon is, of course, changed by being tweaked. In effect, the first photon has been destroyed and the second photon has become what is termed in common parlance a clone of the first photon. Since the original has been destroyed, however, for all practical purposes the first photon has been teleported to the location of the second photon, instantly. It is not a duplication process (and it has also been done with trapped ions!).
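The dance described above can be checked with a small state-vector simulation. This is a generic sketch of the textbook teleportation protocol, not of any particular experiment Gribbin mentions: qubit 0 holds the state to be teleported, qubits 1 and 2 hold the shared entangled pair, and the amplitudes 0.6 and 0.8 are arbitrary choices for illustration.

```python
import numpy as np

# single-qubit gates and projectors
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]])
P1 = np.array([[0, 0], [0, 1]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# CNOT with control qubit 0, target qubit 1 (qubit 0 most significant)
CNOT01 = kron(P0, I, I) + kron(P1, X, I)

psi = np.array([0.6, 0.8])                 # arbitrary state to teleport
bell = np.array([1, 0, 0, 1]) / np.sqrt(2) # entangled pair on qubits 1, 2
state = np.kron(psi, bell)                 # full 3-qubit state vector

# Bell-state measurement on qubits 0 and 1: CNOT, then H on qubit 0
state = kron(H, I, I) @ (CNOT01 @ state)

# simulate measuring qubits 0 and 1: pick an outcome, project, renormalize
rng = np.random.default_rng(0)
probs = np.array([np.sum(np.abs(state[m * 2:(m + 1) * 2]) ** 2)
                  for m in range(4)])
m = rng.choice(4, p=probs / probs.sum())
m0, m1 = m >> 1, m & 1
receiver = state[m * 2:(m + 1) * 2]        # qubit 2's conditional amplitudes
receiver = receiver / np.linalg.norm(receiver)

# classical correction sent to the receiver: X if m1 = 1, then Z if m0 = 1
if m1:
    receiver = X @ receiver
if m0:
    receiver = Z @ receiver

print(np.allclose(receiver, psi))  # True: qubit 2 now carries the state
```

Whichever of the four measurement outcomes occurs, the correction step leaves qubit 2 in exactly the original state, while qubits 0 and 1 are left in a definite measured state: the original has been destroyed, just as the text says.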

 
