The Universe Within
THE STORY OF HOW physics created the information age begins at the turn of the twentieth century, when electricity was becoming the lifeblood of modern society. There was galloping demand to move current around — quickly, predictably, safely — in light bulbs, radios, telegraphs, and telephones. Joseph John (J. J.) Thomson’s discovery of the electron in 1897 had explained the nature of electricity and launched the development of vacuum tubes.
For most of the twentieth century, amplifying vacuum tubes were essential components of radios, telephone equipment, and many other electrical devices. Each consists of a sealed glass tube with a metal filament inside that releases lots of electrons when it is heated up. The negatively charged electrons stream towards a positively charged metal plate at the other end of the tube, carrying the electrical current. This simple arrangement, called a “diode,” allows current to flow only one way. In more complicated arrangements, one or more electrical grids are inserted between the filament (the cathode) and the plate (the anode). By varying the voltage on the grids, the flow of electrons can be controlled: if things are arranged carefully, tiny changes in the grid voltage result in large changes in the current. This is an amplifier: it is like controlling the flow of water from a tap. Gently twiddling the tap back and forth leads to big changes in the flow of water.
Vacuum tubes were used everywhere — in radios, in telephone and telegraph exchanges, in televisions and the first computers. However, they have many limitations. They are large and have to be warmed up. They use lots of power, and they run hot. Made of glass, they are heavy, fragile, and expensive to manufacture. They are also noisy, creating a background “hum” of electrical noise in any device using them.
In Chapter One, I described the Scottish Enlightenment and how it led to a flowering of education, literature, and science in Scotland. James Clerk Maxwell was one of the products of this period, as were the famous engineers James Watt, William Murdoch, and Thomas Telford; the mathematical physicists Peter Guthrie Tait and William Thomson (Lord Kelvin); and the writer Sir Walter Scott. Another was Alexander Graham Bell, who followed Maxwell to Edinburgh University before emigrating to Canada, where he invented the telephone in Brantford, Ontario — and in so doing, launched global telecommunications.
Bell believed in the profound importance of scientific research, and just as his company was taking off in the 1880s, he founded a research laboratory. Eventually christened Bell Labs, this evolved into the research and development wing of the U.S. telecommunications company AT&T, becoming one of the most successful physics centres of all time, with its scientists winning no fewer than seven Nobel Prizes.83
At Bell Labs, the scientists were given enormous freedom, with no teaching duties, and were challenged to do exceptional science. They were led by a visionary, Mervin Kelly, who framed Bell Labs as an “institute for creative technology,” housing physicists, engineers, chemists, and mathematicians together and allowing them to pursue investigations “sometimes without concrete goals, for years on end.”84 Their discoveries ranged from the basic theory of information and communication and the first cellular telephones to the first detection of the radiation from the big bang; they invented lasers, computers, solar cells, CCDs, and the first quantum materials.
One of quantum theory’s successes was to explain why some materials conduct electricity while others do not. A solid material consists of atoms stacked together. Each atom consists of a cloud of negatively charged electrons orbiting a positively charged nucleus. The outermost electrons are farthest from the nucleus and the least tightly bound to it — in conducting materials like metals, they are free to wander around. Like the molecules of air in a room, the free electrons bounce around continuously inside a piece of metal. If you connect a battery across the metal, the free electrons drift through it in one direction, forming an electrical current. In insulating materials, there are no free electrons, and no electrical currents can flow.
Shortly after the Second World War, Kelly formed a research group in solid state physics, under William Shockley. Their goal was to develop a cheaper alternative to vacuum tubes, using semiconductors — materials whose ability to conduct electricity lies somewhere between that of metals and insulators. Semiconductors were already being used, for example, in “point-contact” electrical diodes, where a thin needle of metal, called a “cat’s whisker,” was placed in contact with a piece of semiconductor crystal (usually galena, a lead sulphide mineral). At certain special points on the surface, the contact acts like a diode, allowing current to flow only one way. Early “crystal” radio sets used these diodes to convert “amplitude modulated” AM radio signals into DC currents, which then drove a headset or earphone. In the 1930s, Bell scientists explored using crystal diodes for very high frequency telephone communications.
During the war, lots of effort had gone into purifying semiconductors like germanium and silicon, on the theory that removing impurities would reduce the electrical noise.85 But it was eventually realized that the magic spots where the crystal diode effect works best correspond to impurities in the material. This was a key insight — that controlling the impurities is the secret to the fine control of electrical current.
Just after the war, Shockley had tried to build a semiconductor transistor, but had failed. When Kelly asked Shockley to lead the Solid State Physics group, he placed the theorist John Bardeen and the experimentalist Walter Brattain under his supervision. The two then attempted to develop the “point-contact” idea, using two gold contacts on a piece of germanium which had been “doped” — seeded with a very low concentration of impurities to allow charge to flow through the crystal.
They were confounded by surface effects, which they initially overcame only through the drastic step of immersing the transistor in water, hardly ideal for an electrical device. After two years’ work, their breakthrough came in the “miracle month” of November–December 1947, when they wrapped a ribbon of gold foil around a plastic triangle and sliced the ribbon through one of the triangle’s points. They then pushed the gold-wrapped tip into the germanium to enable a flow of current through the bulk of the semiconductor. A voltage applied to one of the two gold contacts was then found to amplify the electric current flowing from the other contact into the germanium, like a tap being twiddled to control the flow of water.86
Bardeen, Brattain, and Shockley shared the 1956 Nobel Prize in Physics for their discovery of the transistor, which launched the modern electronics age. Their “point contact” transistor was quickly superseded by “junction” transistors, eventually to be made from silicon. Soon after, the team split up. Bardeen left for the University of Illinois, where he later won a second Nobel Prize. Shockley moved out to California, where he founded Shockley Semiconductor. He recruited eight talented young co-workers who, after falling out with him, left to start Fairchild Semiconductor; two of them, Robert Noyce and Gordon Moore, later founded Intel, thereby launching Silicon Valley.
Transistors can control the flow of electricity intricately, accurately, and dependably. They are cheap to manufacture and have become easier and easier to miniaturize. Indeed, to date, making computers faster and more powerful has almost entirely been a matter of packing more and more transistors onto a single microprocessor chip.
For the past forty years, the number of transistors that can be packed onto a one-square-centimetre chip has doubled every two years — an effect known as Moore’s law, which is the basis for the information and communication industry’s explosive growth. There are now billions of transistors in a typical smartphone or computer CPU. But there are also fundamental limits, set by the size of the atom and by Heisenberg’s uncertainty principle. Extrapolating Moore’s law, transistors will hit these ultimate limits one or two decades from now.
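To see what that doubling implies, here is a back-of-envelope sketch in Python; the starting count of five billion transistors per chip is an illustrative assumption, not a figure from the text:

```python
# Moore's law as arithmetic: the transistor count doubles every two years.
# The starting count below is an illustrative assumption.
def projected_transistors(years_from_now, count_today=5e9, doubling_period=2):
    return count_today * 2 ** (years_from_now / doubling_period)

for years in (0, 10, 20):
    print(f"in {years:>2} years: ~{projected_transistors(years):.1e} transistors per chip")
```

Twenty years of doubling every two years multiplies the count by about a thousand, which is why the atomic-scale limits mentioned above arrive so soon.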
In modern computers, information consists of strings of 0s and 1s stored in a pattern of electrical charges or currents or magnetized states of matter, and then processed via electrical signals according to the computer program’s instructions. Typically, billions of operations are performed per second upon billions of memory elements. It is crucial to the computer’s operation that the 0s and 1s are stored and changed accurately and not in unpredictable ways.
The problem is that the moving parts of a computer’s memory — in particular, the electrons — are not easy to hold still. Heisenberg’s uncertainty principle says that if we fix an electron’s position, its velocity becomes uncertain and we cannot predict where it will move next. If we fix its velocity, and therefore the electrical current it carries, its position becomes uncertain and we don’t know where it is. This problem becomes unimportant when large numbers of electrons are involved, because to operate a device one only needs the average charge or current, and for many electrons these can be predicted with great accuracy. However, when circuits get so tiny that only a few electrons are involved in any process, then their quantum, unpredictable nature becomes the main source of error, or “noise,” in the computer’s operations. Today’s computers typically store one bit of data in about a million atoms and electrons, although scientists at IBM Labs have made a twelve-atom bit register called “atomic-scale memory.”87
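In symbols, Heisenberg’s relation takes the standard form (the paragraph above states it in words only):

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\]

where \(\Delta x\) is the uncertainty in the electron’s position, \(\Delta p\) the uncertainty in its momentum, and \(\hbar\) the reduced Planck constant. Squeezing \(\Delta x\) down to atomic scales forces \(\Delta p\) up, which is why a memory cell built from only a few electrons is unavoidably noisy.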
QUANTUM UNCERTAINTY IS THE modern version of the impurities in semiconductors. Initially impurities were seen as a nuisance, and large sums of money were spent trying to clean them away, before it was realized that the ability to manipulate and make use of them was the key to the development of cheap, reliable transistors. The same story is now repeating itself with “quantum uncertainty.” As far as classical computers are concerned, quantum uncertainty is an unremovable source of noise, and nothing but a nuisance. But once we understand how to use quantum uncertainty instead of trying to fight it, it opens entirely new horizons.
In 1984, I was a post-doctoral researcher at the University of California, Santa Barbara. It was announced that the great Richard Feynman was going to come and give a talk about quantum computers. Feynman was one of our heroes, and this was an opportunity to see him first-hand. Feynman’s talk focused on the question of whether there are ultimate limits to computation. Some scientists had speculated that each operation of a computer inevitably consumes a certain amount of energy, and that ultimately this would limit the size and power of any computer. Feynman’s interest was piqued by this challenge, and he came up with a design that overcame any such limit.
There were several aspects to his argument. One was the idea of a “reversible” computer that never erased (or overwrote) anything stored in its memory. It turns out that this is enough to overcome the energy limit. The other new idea was how to perform computations in truly quantum ways. I vividly remember him waving his arms (he was a great showman), explaining how the quantum processes ran forwards and backwards and gave you just what you needed and no more.
Feynman’s talk was entirely theoretical. He didn’t speak at all about building such a device. Nor did he give any specific examples of what a quantum computer would be able to do that a classical computer could not. His discussion of the theory was quite basic, and most of the ingredients could be found in any modern textbook. In fact, there was really no reason why all of this couldn’t have been said decades earlier. This is entirely characteristic of quantum theory: precisely because it is so counterintuitive, new and unexpected implications are still being worked out today. Although he did not have any specific examples of the uses of a quantum computer, Feynman got people thinking just by raising the possibility. Gradually, more and more people started working on the idea.
In 1994, there came a “bolt from the blue.” U.S. mathematician Peter Shor, working at Bell Labs (perhaps unsurprisingly!), showed mathematically that a quantum computer would be able to find the prime factors of large numbers much faster than any known method on a classical computer. The result caused a shockwave, because the secure encryption of data (vital to the security systems of government, banks, and the internet) most commonly relies on the fact that it is very difficult to find the prime factors of large numbers. For example, if you write down a random 400-digit number (which might take you five minutes), then even with the best known algorithm and the most powerful conceivable classical computer, it would take longer than the age of the universe to discover the number’s prime factors. Shor’s work showed that a quantum computer could, in principle, perform the same task in a flash.
What makes a quantum computer so much more powerful than a classical one? A classical computer is an automatic information-processing machine. Information is stored in the computer’s memory and then read and manipulated according to pre-specified instructions — the program — also stored in the computer’s memory. The main difference between a classical and a quantum computer is the way information is stored. In a classical computer, information is stored in a series of “bits,” each one of which can take just two values: either 0 or 1. The number of arrangements of the bits grows exponentially with the length of the string. So whereas there are only two arrangements for a single bit, there are four for two bits, eight for three, and there are roughly 10^90 ways of arranging three hundred bits, a number approaching a googol (a one with a hundred zeros after it). You need five bits to encode a letter of the alphabet and about two million bits to encode all of the information in a book like this. Today, a typical laptop has a memory capacity measured in gigabytes, where a gigabyte is eight billion bits (a byte is eight bits) and can store roughly four thousand such books.
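The counting in this paragraph is easy to verify in a few lines of Python (the books-per-gigabyte figure simply reuses the text’s own estimate of about two million bits per book):

```python
# n bits admit 2**n distinct arrangements.
for n in (1, 2, 3):
    print(f"{n} bits -> {2**n} arrangements")
print(f"300 bits -> about {2**300:.1e} arrangements")  # ~2 x 10^90

# The text's estimate: about two million bits encode one book.
bits_per_book = 2_000_000
gigabyte_bits = 8 * 10**9
print(f"books per gigabyte: about {gigabyte_bits // bits_per_book:,}")  # 4,000
```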
A quantum computer works in an entirely different way. Its memory is composed of qubits, short for quantum bits. Qubits are somewhat like classical bits in that when you read them out, you get either 0 or 1. However, the resemblance ends there. According to quantum theory, the typical state for a qubit is to be in a superposition — a state consisting of 0 and 1 at the same time. The amount of 0 or 1 in the state indicates how probable it is to obtain 0 or 1 when the qubit is read.
The fact that the state of a qubit is specified by a continuous quantity — the proportion of 0 or 1 in the state — is a clue that it can store infinitely more information than a classical bit ever can.88 The situation gets even more interesting when you have more than one qubit and their states are entangled. This means that, unlike classical bits, qubits cannot be read independently: what you measure for one of them will influence what you measure for the other. For example, if two qubits are entangled, then the result you obtain when you measure one of them will completely determine the result you obtain if you measure the other. A collection of entangled qubits forms a whole that is very much greater than the sum of its parts.
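As a rough illustration, here is a toy model of a qubit in Python. It is a sketch of the bookkeeping only — real quantum hardware is not programmed this way — and the function names and the entangled-pair helper are invented for illustration:

```python
import random

# Toy qubit: a pair of amplitudes (alpha, beta) with
# |alpha|**2 + |beta|**2 = 1. Reading it yields 0 with probability
# |alpha|**2 and 1 with probability |beta|**2.
def measure(alpha, beta):
    return 0 if random.random() < abs(alpha) ** 2 else 1

# An equal superposition: 0 and 1 each come up about half the time.
alpha = beta = 2 ** -0.5
counts = [0, 0]
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
print(counts)  # roughly [5000, 5000]

# A maximally entangled pair, as in the example above: once the first
# qubit is read, the second readout is completely determined.
def measure_entangled_pair():
    outcome = measure(2 ** -0.5, 2 ** -0.5)
    return outcome, outcome

print(measure_entangled_pair())  # (0, 0) or (1, 1), never mixed
```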
Shor used these features to make prime-number factoring go fast. Classically, if you tried to find the prime factors of a large number,89 the brute force method would be to divide it by two as many times as you could, then three, then five, and so on, and keep going until no further divisions worked. However, what Shor realized, in essence, is that a quantum computer can perform all of these operations at the same time. Because the quantum state of the qubits in the computer simultaneously encodes many different classical states, the computations can all occur “in parallel,” dramatically speeding up the operation.
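For comparison, the classical brute-force method just described takes only a few lines of Python; it works fine for small numbers and is hopeless for 400-digit ones:

```python
def prime_factors(n):
    """Trial division: divide out 2 as many times as possible,
    then odd candidates, until nothing is left."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2  # after 2, only odd divisors
    if n > 1:
        factors.append(n)        # whatever remains is itself prime
    return factors

print(prime_factors(2**32 + 1))  # [641, 6700417]
```

The number of candidate divisors grows exponentially with the number of digits, which is exactly the wall that Shor’s quantum approach sidesteps.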
Shor’s discovery launched a global race to build a quantum computer, using a wide range of quantum technologies: atomic and nuclear spins, the polarization states of light, the current-carrying states of superconducting rings, and many other incarnations of qubits. In recent years, the race has reached fever pitch. At the time of writing, researchers at IBM are claiming they are close to producing a “scalable” quantum computing technology.
What will this vast increase in our information-handling capabilities mean? It is striking to compare our situation today, with the vast libraries at our fingertips and far vaster ones to come, with that of the authors of the modern scientific age. In the Wren Library in Trinity College, Cambridge, Isaac Newton’s personal library consists of a few hundred books occupying a single bookcase. This was quite enough to allow him to found modern physics and mathematical science. A short walk away, in the main University Library, Charles Darwin’s personal library is also preserved. His entire collection of books occupies a ten-metre stretch of shelving. Again, for one of the most profound and original thinkers in the history of science, it is a minuscule collection.
Today, on your smartphone, you can access information resources vastly greater than any library. And according to Moore’s law, in a couple of decades your laptop will comfortably hold every single book that has ever been written. A laptop quantum computer will seem more like Jorge Luis Borges’s Library of Babel — a fantastical collection holding every possible ordering of letters and words in a book, and therefore every book that could ever be written. With a quantum library, one might instead be able to search for all possible interesting passages of text without anyone having had to compose them.
Some of the uses of quantum computers and quantum communication are easy to anticipate. Ensuring the security of information is one of them. The codes currently used to protect access to bank accounts, computer passwords, and credit card information rely on the fact that it is hard to find the prime factors of large numbers using a classical computer. However, as Peter Shor showed, quantum computers will be able to quickly find these factors, rendering current security protocols obsolete. Also, quantum information is inherently safer from attack than classical information, because it is protected by the fundamental laws of physics. Whereas reading out classical information does nothing to change it, according to quantum physics, the mere fact of observing a quantum system almost always changes its quantum state. Through this effect, eavesdropping or hacking into quantum information can be detected. Hence quantum information can be made invulnerable to spying in ways that would be classically impossible.
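The eavesdropping point can be made concrete with a toy simulation, loosely in the spirit of quantum key distribution; the setup below is invented for illustration and is not a real protocol implementation:

```python
import random

def error_rate(n_bits, eavesdrop):
    """Toy model: the sender encodes each bit in a randomly chosen basis
    and the receiver measures in the sender's basis. An eavesdropper who
    measures in the wrong basis disturbs the state, randomizing what the
    receiver then sees."""
    errors = 0
    for _ in range(n_bits):
        bit   = random.randint(0, 1)
        basis = random.randint(0, 1)
        received = bit
        if eavesdrop and random.randint(0, 1) != basis:
            # Wrong-basis measurement collapses the state, so the
            # receiver's outcome becomes a coin flip.
            received = random.randint(0, 1)
        errors += (received != bit)
    return errors / n_bits

print(f"quiet channel: {error_rate(100_000, False):.3f}")  # 0.000
print(f"eavesdropper:  {error_rate(100_000, True):.3f}")   # ~0.250
```

The roughly 25 percent error rate is the statistical fingerprint of the intruder: measurement disturbs half the intercepted qubits, and half of those then read out wrongly, so spying cannot go unnoticed.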