Quantum Theory Cannot Hurt You


by Marcus Chown


  In the microscopic domain, it turns out, identical things do not behave in identical ways in identical circumstances. Instead, they merely have an identical chance of behaving in any particular way. Each individual photon arriving at the window has exactly the same chance of being transmitted as any of its fellows—95 per cent—and exactly the same chance of being reflected—5 per cent. There is absolutely no way to know for certain what will happen to a given photon. Whether it is transmitted or reflected is entirely down to random chance.

  In the early 20th century, this unpredictability was something radically new in the world. Imagine a roulette wheel and a ball jouncing around as the wheel spins. We think of the number the ball comes to rest on when the wheel finally halts as inherently unpredictable. But it is not—not really. If it were possible to know the initial trajectory of the ball, the initial speed of the wheel, the way the air currents changed from instant to instant in the casino, and so on, the laws of physics could be used to predict with 100 per cent certainty where the ball will end up. The same is true with the tossing of a coin. If it were possible to know how much force is applied in the flipping, the exact shape of the coin, and so on, the laws of physics could predict with 100 per cent certainty whether the coin will come down heads or tails.

  Nothing in the everyday world is fundamentally unpredictable; nothing is truly random. The reason we cannot predict the outcome of a game of roulette or of the toss of a coin is that there is simply too much information for us to take into account. But in principle—and this is the key point—there is nothing to prevent us from predicting both.

  Contrast this with the microscopic world of photons. It matters not the slightest how much information we have in our possession. It is impossible to predict whether a given photon will be transmitted or reflected by a window—even in principle. A roulette ball does what it does for a reason—because of the interplay of myriad subtle forces. A photon does what it does for no reason whatsoever! The unpredictability of the microscopic world is fundamental. It is truly something new under the Sun.

  And what is true of photons turns out to be true of all the denizens of the microscopic realm. A bomb detonates because its timer tells it to or because a vibration disturbs it or because its chemicals have suddenly become degraded. An unstable, or “radioactive,” atom simply detonates. There is absolutely no discernible difference between one that detonates at this moment and an identical atom that waits quietly for 10 million years before blowing itself to pieces. The shocking truth, which stares you in the face every time you look at a window, is that the whole Universe is founded on random chance. So upset was Einstein by this idea that he stuck out his lip and declared: “God does not play dice with the Universe!”

  The trouble is He does. As British physicist Stephen Hawking has wryly pointed out: “Not only does God play dice with the Universe, he throws the dice where we cannot see them!”

  When Einstein received the Nobel Prize for Physics in 1921 it was not for his more famous theory of relativity but for his explanation of the photoelectric effect. And this was no aberration on the part of the Nobel committee. Einstein himself considered his work on the “quantum” the only thing he ever did in science that was truly revolutionary. And the Nobel committee completely agreed with him.

  Quantum theory, born out of the struggle to reconcile light and matter, was fundamentally at odds with all science that had gone before. Physics, pre-1900, was basically a recipe for predicting the future with absolute certainty. If a planet is in a particular place now, in a day’s time it will have moved to another place, which can be predicted with 100 per cent confidence by using Newton’s laws of motion and the law of gravity. Contrast this with an atom flying through space. Nothing is knowable with certainty. All we can ever predict is its probable path, its probable final position.

  Whereas quantum theory is based on uncertainty, the rest of physics is based on certainty. To say this is a problem for physicists is a bit of an understatement! “Physics has given up on the problem of trying to predict what would happen in a given circumstance,” said Richard Feynman. “We can only predict the odds.”

  All is not lost, however. If the microworld were totally unpredictable, it would be a realm of total chaos. But things are not this bad. Although what atoms and their like get up to is intrinsically unpredictable, it turns out that the unpredictability is at least predictable!

  PREDICTING THE UNPREDICTABILITY

  Think of the window again. Each photon has a 95 per cent chance of being transmitted and a 5 per cent chance of being reflected. But what determines these probabilities?

  Well, the two different pictures of light—as a particle and as a wave—must produce the same outcome. If half the wave goes through and half is reflected, the only way to reconcile the wave and particle pictures is if each individual particle of light has a 50 per cent probability of being transmitted and a 50 per cent probability of being reflected. Similarly, if 95 per cent of the wave is transmitted and 5 per cent is reflected, the corresponding probabilities for the transmission and reflection of individual photons must be 95 per cent and 5 per cent.
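  This reconciliation is easy to see in a simulation (a hypothetical sketch in Python; the 95 per cent figure is the book’s, everything else is my own): each photon’s fate is decided purely at random, yet over many photons the aggregate reproduces the wave’s split.

```python
import random

def send_photons(n, p_transmit=0.95, seed=42):
    """Simulate n photons hitting a pane that transmits 95% of the wave.

    Each photon is transmitted or reflected at random; only the
    probability is fixed. Returns the number transmitted.
    """
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.random() < p_transmit)

transmitted = send_photons(100_000)
# No single photon's fate can be predicted, but the transmitted
# fraction converges on the wave prediction of 95 per cent:
print(transmitted / 100_000)  # close to 0.95
```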

  To get agreement between the two pictures of light, the particlelike aspect of light must somehow be “informed” about how to behave by its wavelike aspect. In other words, in the microscopic domain, waves do not simply behave like particles; those particles behave like waves as well! There is perfect symmetry. In fact, in a sense this statement is all you need to know about quantum theory (apart from a few details). Everything else follows unavoidably. All the weirdness, all the amazing richness of the microscopic world, is a direct consequence of this wave-particle “duality” of the basic building blocks of reality.

  But how exactly does light’s wavelike aspect inform its particle-like aspect about how to behave? This is not an easy question to answer.

  Light reveals itself either as a stream of particles or as a wave. We never see both sides of the coin at the same time. So when we observe light as a stream of particles, there is no wave in existence to inform those particles about how to behave. Physicists therefore have a problem in explaining the fact that photons do things—for instance, fly through windows—as if directed by a wave.

  They solve the problem in a peculiar way. In the absence of a real wave, they imagine an abstract wave—a mathematical wave. If this sounds ludicrous, it was pretty much the reaction of physicists when the idea was first proposed by the Austrian physicist Erwin Schrödinger in the 1920s. Schrödinger imagined an abstract mathematical wave that spread through space, encountering obstacles and being reflected and transmitted, just like a water wave spreading on a pond. In places where the height of the wave was large, the probability of finding a particle was highest, and in places where it was small, the probability was lowest. In this way Schrödinger’s wave of probability, christened the wave function, informed a particle what to do—and not just a photon but any microscopic particle, from an atom to a constituent of an atom such as an electron.

  There is a subtlety here. Physicists could make Schrödinger’s picture accord with reality only if the probability of finding a particle at any point was related to the square of the height of the probability wave at that point. In other words, if the probability wave at some point in space is twice as high as it is at another point in space, the particle is four times as likely to be found there as at the other place.
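  A quick numerical illustration of the square rule (a sketch of my own, not from the text): converting wave heights into probabilities means squaring them and normalising so the chances add up to one.

```python
def born_probabilities(amplitudes):
    """Convert wave heights (amplitudes) into probabilities via the
    square rule, normalised so the probabilities sum to 1."""
    squares = [a * a for a in amplitudes]
    total = sum(squares)
    return [s / total for s in squares]

# A wave twice as high at point A as at point B:
probs = born_probabilities([2.0, 1.0])
print(probs)                 # [0.8, 0.2]
print(probs[0] / probs[1])   # 4.0 — four times as likely at A as at B
```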

  The fact that it is the square of the probability wave, and not the wave itself, that has real physical meaning causes debate to this day about whether the wave is a real thing lurking beneath the skin of the world or just a convenient mathematical device for calculating things. Most, but not all, physicists favour the latter.

  The probability wave is crucially important because it makes a connection between the wavelike aspect of matter and familiar waves of all kinds, from water waves to sound waves to earthquake waves. All obey a so-called wave equation. This describes how they ripple through space and allows physicists to predict the wave height at any location at any time. Schrödinger’s great triumph was to find the wave equation that described the behaviour of the probability wave of atoms and their like.

  By using the Schrödinger equation, it is possible to determine the probability of finding a particle at any location in space at any time. For instance, it can be used to describe photons impinging on the obstacle of a windowpane and to predict the 95 per cent probability of finding one on the far side of the pane. In fact, the Schrödinger equation can be used to predict the probability of any particle, be it a photon or an atom, doing just about anything. It provides the crucial bridge to the microscopic world, allowing physicists to predict everything that happens there—if not with 100 per cent certainty, at least with predictable uncertainty!
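  For the record, the equation itself, in the standard textbook form for a single particle of mass m moving in a potential V (the book does not write it out), is:

```latex
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi
```

Here ψ is the probability wave, the wave function, and the square of its height, |ψ|², gives the probability of finding the particle at each location.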

  Where is all this talk of probability waves leading? Well, the fact that waves behave like particles in the microscopic world leads unavoidably to the realisation that the microscopic world dances to an entirely different tune than that of the everyday world. It is governed by random unpredictability. This in itself was a shocking, confidence-draining blow to physicists and their belief in a predictable, clockwork universe. But this, it turns out, is only the beginning. Nature had many more shocks in store. The fact that waves not only behave as particles but also that those particles behave as waves leads to the realisation that all the things that familiar waves, like water waves and sound waves, can do, so too can the probability waves that inform the behaviour of atoms, photons, and their kin.

  So what? Well, waves can do an awful lot of different things. And each of these things turns out to have a semi-miraculous consequence in the microscopic world. The most straightforward thing waves can do is exist as superpositions. Remarkably, this enables an atom to be in two places at once, the equivalent of you being in London and New York at the same time.

  1 Another interesting characteristic of the photoelectric effect is that no electrons at all are emitted by the metal if it is illuminated by light with a wavelength—a measure of the distance between successive wave crests—above a certain threshold. This, as Einstein realised, is because photons of light have an energy that goes down with increasing wavelength. Above the threshold wavelength, the photons simply have insufficient energy to kick an electron out of the metal.

  3

  THE SCHIZOPHRENIC ATOM

  HOW AN ATOM CAN BE IN MANY PLACES AT ONCE AND DO MANY THINGS AT ONCE

  If you imagine the difference between an abacus and the world’s fastest supercomputer, you would still not have the barest inkling of how much more powerful a quantum computer could be compared with the computers we have today.

  Julian Brown

  It’s 2041. A boy sits at a computer in his bedroom. It’s not an ordinary computer. It’s a quantum computer. The boy gives the computer a task… and instantly it splits into thousands upon thousands of versions of itself, each of which works on a separate strand of the problem. Finally, after just a few seconds, the strands come back together and a single answer flashes on the computer display. It’s an answer that all the normal computers in the world put together would have taken a trillion trillion years to find. Satisfied, the boy shuts the computer down and goes out to play, his night’s homework done.

  Surely, no computer could possibly do what the boy’s computer has just done? In fact, not only could a computer do such a thing; crude versions are already in existence today. The only thing in serious dispute is whether such a quantum computer merely behaves like a vast multiplicity of computers or whether, as some believe, it literally exploits the computing power of multiple copies of itself existing in parallel realities, or universes.

  The key property of a quantum computer—the ability to do many calculations at once—follows directly from two things that waves—and therefore microscopic particles such as atoms and photons, which behave like waves—can do. The first of those things can be seen in the case of ocean waves.

  On the ocean there are both big waves and small ripples. But as anyone who has watched a heavy sea on a breezy day knows, you can also get big, rolling waves with tiny ripples superimposed on them. This is a general property of all waves. If two different waves can exist, so too can a combination, or superposition, of the waves. The fact that superpositions can exist is pretty innocuous in the everyday world. However, in the world of atoms and their constituents, its implications are nothing short of earth-shattering.
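  This point-by-point adding of waves is easy to make concrete (a sketch using simple sinusoids of my own choosing): the sum of any two waves is itself a perfectly good wave.

```python
import math

def wave(amplitude, wavelength, x):
    """Height of a simple sinusoidal wave at position x."""
    return amplitude * math.sin(2 * math.pi * x / wavelength)

# A big rolling wave and a small ripple, sampled along the surface:
xs = [i * 0.1 for i in range(100)]
roller = [wave(2.0, 10.0, x) for x in xs]
ripple = [wave(0.2, 0.5, x) for x in xs]

# The superposition is just the point-by-point sum — and it is as
# legitimate a wave as either of its ingredients.
combined = [r + s for r, s in zip(roller, ripple)]
```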

  Think again of a photon impinging on a windowpane. The photon is informed about what to do by a probability wave, described by the Schrödinger equation. Since the photon can either be transmitted or reflected, the Schrödinger equation must permit the existence of two waves—one corresponding to the photon going through the window and another corresponding to the photon bouncing back. Nothing surprising here. However, remember that, if two waves are permitted to exist, a superposition of them is also permitted to exist. For waves at sea such a combination is nothing out of the ordinary. But here the combination corresponds to something quite extraordinary—the photon being both transmitted and reflected. In other words, the photon can be on both sides of the windowpane simultaneously!

  And this unbelievable property follows unavoidably from just two facts: that photons are described by waves and that superpositions of waves are possible.

  This is no theoretical fantasy. In experiments it is actually possible to observe a photon or an atom being in two places at once—the everyday equivalent of you being in San Francisco and Sydney at the same time. (More accurately, it is possible to observe the consequences of a photon or an atom being in two places at once.) And since there is no limit to the number of waves that can be superposed, a photon or an atom can be in three places, 10 places, a million places at once.

  But the probability wave associated with a microscopic particle does more than inform it where it could be located. It informs it how to behave in all circumstances—telling a photon, for instance, whether or not to be transmitted or reflected by a pane of glass. Consequently, atoms and their like can not only be in many places at once, they can do many things at once, the equivalent of you cleaning the house, walking the dog, and doing the weekly supermarket shopping all at the same time. This is the secret behind the prodigious power of a quantum computer. It exploits the ability of atoms to do many things at once, to do many calculations at once.

  DOING MANY THINGS AT ONCE

  The basic elements of a conventional computer are transistors. These have two distinct voltage states, one of which is used to represent the binary digit, or bit, “0”, the other to represent a “1.” A row of such zeros and ones can represent a large number, which in the computer can be added, subtracted, multiplied, and divided by another large number.1 But in a quantum computer the basic elements—which may be single atoms—can be in a superposition of states. In other words, they can represent a zero and a one simultaneously. To distinguish them from normal bits, physicists call such schizophrenic entities quantum bits, or qubits.

  One qubit can be in two states (0 or 1), two qubits in four (00 or 01 or 10 or 11), three qubits in eight, and so on. Consequently, when you calculate with a single qubit, you can do two calculations simultaneously, with two qubits four calculations, with three eight, and so on. If this doesn’t impress you, with 10 qubits you could do 1,024 calculations all at once, and with 100 qubits more than a thousand billion billion billion! Not surprisingly, physicists positively salivate at the prospect of quantum computers. For some calculations, they could massively outperform conventional computers, making today’s personal computers seem hopelessly sluggish.
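  The counting here is simply powers of two; a one-line check (my own, not the book’s):

```python
def n_states(qubits):
    """Number of classical bit-strings an n-qubit register can hold
    in superposition at once: 2 ** n."""
    return 2 ** qubits

print(n_states(1))    # 2
print(n_states(3))    # 8
print(n_states(10))   # 1024
print(n_states(100))  # about 1.27 x 10**30
```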

  But for a quantum computer to work, wave superpositions are not sufficient on their own. They need another essential wave ingredient: interference.

  The interference of light observed by Thomas Young at the start of the 19th century was the key observation that convinced everyone that light was a wave. When, at the beginning of the 20th century, light was also shown to behave like a stream of particles, Young’s double slit experiment assumed a new and unexpected importance—as a means of exposing the central peculiarity of the microscopic world.

  INTERFERENCE IS THE KEY

  In the modern incarnation of Young’s experiment, a double slit in an opaque screen is illuminated with light that is undeniably a stream of particles: in practice, a source so feeble that it spits out photons one at a time. Sensitive detectors at different positions on a second screen beyond the slits count the arrival of photons. After the experiment has been running for a while, the detectors show something remarkable. Some places on the screen get peppered with photons while other places are completely avoided. What is more, the places that are peppered by photons and the places that are avoided alternate, forming vertical stripes—exactly as in Young’s original experiment.

  But wait a minute! In Young’s experiment the dark and light bands are caused by interference. And a fundamental feature of interference is that it involves the mingling of two sets of waves from the same source—the light from one slit with the light from the other slit. But in this case the photons are arriving at the double slit one at a time. Each photon is completely alone, with no other photon to mingle with. How, then, can there be any interference? How can a lone photon know where its fellows will land?

  There would appear to be only one way—if each photon somehow goes through both slits simultaneously. Then it can interfere with itself. In other words, each photon must be in a superposition of two states—one a wave corresponding to a photon going through the left-hand slit and the other a wave corresponding to a photon going through the right-hand slit.
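  This two-path picture can be sketched numerically (a simplified model of my own, with unit amplitudes and an arbitrary wavelength): add the waves from the two slits and square the result, and the alternating bright and dark stripes fall out.

```python
import math

def intensity(path_difference, wavelength=1.0):
    """Relative probability of a photon landing at a screen position
    where the two slit-paths differ in length by path_difference.

    Each path contributes a unit-amplitude wave; the probability is
    the square of their superposed amplitude, expressed here via the
    phase difference between the paths."""
    phase = 2 * math.pi * path_difference / wavelength
    return (2 * math.cos(phase / 2)) ** 2

print(intensity(0.0))  # 4.0 — paths in step: a bright stripe
print(intensity(0.5))  # essentially zero — half a wavelength out of step: a dark stripe
print(intensity(1.0))  # 4.0 — a whole wavelength out of step: bright again
```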

  The double slit experiment can be done with photons or atoms or any other microscopic particles. It shows graphically how the behaviour of such particles—where they can and cannot strike the second screen—is orchestrated by their wavelike alter ego. But this is not all the double slit experiment demonstrates. Crucially, it shows that the individual waves that make up a superposition are not passive but can actively interfere with each other. It is this ability of the individual states of a superposition to interfere with each other that is the absolute key to the microscopic world, spawning all manner of weird quantum phenomena.

 
