
A Brief Guide to the Great Equations


by Robert Crease


  Other clever strategies to convey complicated science to outsiders include Edwin A. Abbott’s Flatland: A Romance of Many Dimensions, a famous novel involving an extended conversation between a square and a sphere that illustrates the problem of conceiving multiple dimensions. And Michael Frayn’s brilliant play Copenhagen dramatizes an encounter between Niels Bohr and Werner Heisenberg that ends up illustrating many points about quantum physics.

  What most needs bolstering in the contemporary discourse about science, however, is what might be called science critics. At least two individuals – the political scientist Langdon Winner and the philosopher Don Ihde – have called for such critics. In the arts, Winner points out, critics are instinctively understood as playing ‘a valuable, well established role, serving as a bridge between artists and audiences.’ A critic of literature, for instance, ‘examines a text, analyzing its features, evaluating its qualities, seeking a deeper appreciation that might be useful to other readers of the same text.’ Unfortunately, Winner laments, the same kind of function is not performed in the sciences. One obstacle is that scientists tend to regard as suspect anyone who plays the role of critic, as if science critics were by definition objecting to science or insisting on its limitations.

  Don Ihde, meanwhile, actively calls for science critics and even outlines what he thinks they should be like. ‘The science critic would have to be a well informed – indeed [a] much better than simply well informed – amateur, in [the sense of] a ‘lover’ of the subject matter, and yet not the total insider.’ The science critic must not be a total insider – just as an arts critic is not a practicing artist or literary author – because, as Ihde puts it, ‘we are probably worst at our own self-criticism.’

  Science critics, according to Winner and Ihde, would have an essential function. They would be there to assess the impact of science and technology on our political world (Winner) and on the human experience (Ihde). So, for example, Winner writes about the ‘politics’ of technological artifacts, while Ihde writes about the transformation of experience by instruments. The kind of criticism advocated and practiced by Winner and Ihde, in short, judges the presence of science and technology in society, and has clear moral and political dimensions.

  But there is another, complementary model for science criticism, one that involves another kind of interpretation: outlining the impact of scientific discoveries on our understanding of ourselves, the world, and our place in it. This model would require not a one-step translation process, but the kind of multiple roles that art criticism performs. It would involve a kind of ‘science criticism’ just as elaborate and extensive as art criticism, whose presence is required for a thriving art culture. It would require a complex field of several different niches of writing – books, articles, and columns, but also novels and plays, comments on and reviews of these novels and plays, and so forth. This would allow the knowledge generated by science to have a cultural, and not merely an instrumental, presence, taking advantage of the processes by which culture enacts itself.

  This model might be called impedance matching. In acoustics and electrical engineering, impedance matching involves taking a signal – produced inside a speaker, say – and coupling it to an environment with a different ‘load’ in a way that allows the signal to be heard. This is not a one- or two-step process, but requires a smooth and continuous matching, or stepping down, of the load. Scientific discourse, that is, bears one load – a heavy one – and public language a much different one. Connecting the two effectively cannot be a matter of basic education plus popularization; it requires many different overlapping steps. And each of these steps demands more than rhetorical expertise: it means connecting the signal with public issues and hopes.
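  To give the metaphor its literal footing (a standard engineering statement, added here for illustration, not part of the original argument): when a source of impedance Z_S drives a load of impedance Z_L, the fraction of the signal reflected back rather than transmitted is governed by

\[ \Gamma = \frac{Z_L - Z_S}{Z_L + Z_S}, \]

so that transfer is most efficient when the impedances are matched (Z_L = Z_S, giving Γ = 0), while a large mismatch sends most of the signal bouncing back toward its source – which is why matching networks step the load up or down gradually rather than all at once.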

  Why should we bother? Our system seems to work relatively well as it is. Why make an effort to do more than paraphrase, to trace out the moral and spiritual impact of scientific work such as general relativity on the world? Part of the answer is to avoid infantilizing and patronizing the public. To the extent that Einstein’s general theory of relativity represents humanity’s best effort at understanding the basic structure of the world, it is desirable for citizens – and not just professional scientists – to be able to acquire some sense of that theory, some feel for what it means to our understanding of the universe; and we have a duty to make this possible. To put it more strongly, making this possible belongs to the human quest to acquire an understanding of ourselves and our place in the world. What is at stake is our own humanity.

  9

  ‘The Basic Equation of Quantum Theory’:

  SCHRÖDINGER’S EQUATION

  DESCRIPTION: How the quantum state of a system – interpreted, for instance, as the probability of a particle being detected at a certain location – evolves over time.

  DISCOVERER: Erwin Schrödinger

  DATE: 1926
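  In its standard time-dependent form – for a single particle of mass m moving in a potential V, a form added here for reference – the equation reads

\[ i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi, \]

where ψ is the wave function, ħ is Planck’s constant divided by 2π, and |ψ|² gives the probability density for detecting the particle at a given location.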

  The Schrödinger equation is the basic equation of quantum theory. The study of this equation plays an exceptionally important role in modern physics. From a mathematician’s point of view the Schrödinger equation is as inexhaustible as mathematics itself.

  – F. A. Berezin and M. A. Shubin, The Schrödinger Equation

  The journey taken by the scientific community from Planck’s introduction of the quantum to Schrödinger’s assertion of its universal presence took barely a quarter-century.

  When Planck introduced the idea in 1900, it was a tiny speck on the horizon. He used it to make classical theory work for black body radiation. The theory worked if one assumed that whatever absorbs and emits light (he treated these as ‘resonators’) does so selectively – only in integer multiples of a certain amount of energy. Many scientists saw this as a fudge, as problem avoidance rather than real science, and assumed that eventually they could discard the idea and it would drop back off the horizon.
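  In symbols (a modern shorthand, not spelled out here): a resonator of frequency ν could take up or give off energy only in whole-number steps,

\[ E = nh\nu, \qquad n = 1, 2, 3, \ldots, \]

where h is the new constant Planck introduced.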

  Growing Extension of the Quantum

  But in 1905, in a paper on the photoelectric effect, Einstein extended the idea. The quantum is not due to the selectivity of the resonators, he proposed, but to the fact that light itself is ‘grainy.’ By decade’s end, the quantum had shown up in several different branches of physics. Many who had dismissed it now took notice.

  In 1911, a landmark step was taken by Walther Nernst, a Prussian physical chemist who initially (like others) had dismissed quantum theory as the offspring of a ‘grotesque’ formula, but who had used the theory to address what William Thomson (Lord Kelvin) had called ‘Cloud No. 2’: the application of the classical molecular theory of heat to experimental results involving low-temperature solids, gases, and metals. Nernst declared that, in the hands of Planck and Einstein (and, he should have mentioned, his own), the theory had proven ‘so fruitful’ that ‘it is the duty of science to take it seriously and to subject it to careful investigations.’1 He organized a conference of leading scientists to do so, holding it in Brussels with the support of a wealthy Belgian industrialist named Ernest Solvay.

  The conference, a milestone event, signaled that the quantum – the idea of a fundamental graininess to light and all other forms of energy – was in science to stay.

  It was one of those events whose significance was immediately clear. Participants communicated the excitement to others who had not attended. Nobel laureate Ernest Rutherford, returning to Manchester, England, described the discussions in ‘vivid’ terms to a spellbound 27-year-old Danish newcomer to his lab named Niels Bohr. In Paris, Henri Poincaré wrote that the quantum hypothesis appeared to involve ‘the greatest and most radical revolution in natural philosophy since the time of Newton.’2 Many scientists who were not present at the meeting caught its spirit from the proceedings. One was a 19-year-old Sorbonne student named Louis de Broglie, a recent convert to physics from an intended civil service career. De Broglie later wrote that the proceedings convinced him to devote ‘all my energies’ to quantum theory.

  But the quantum fit uneasily on the Newtonian horizon, even when it solved key problems. It was like a guest whom you could not get around inviting to an event, but who you also knew would be awkward and whose presence you would have to manage carefully. Consider what happened when Niels Bohr used it to explain Rutherford’s until-then obscure idea about atomic structure. In 1911, Rutherford had proposed that atoms were like miniature solar systems, with a central core or ‘nucleus’ surrounded by electrons. This contradicted a basic principle of classical physics: why didn’t the orbiting electrons radiate away their energy, as they should according to Maxwell’s theory, and fall into the nucleus? Because, Bohr proposed, using the quantum idea, electrons could only absorb and emit radiation in specific amounts, and thus could only occupy a small number of stationary orbits or states inside the atom, able to absorb and emit only the energy required to jump between such states. It was an odd assumption indeed. It implied that atomic electrons – to employ an image that the American philosopher William James used to describe the stream of consciousness, and which may have influenced Bohr – made ‘flights and perchings’ amongst these states, without taking clear paths between them.3 The states were what mattered, not the trajectories – whence the phrase ‘quantum leap.’ Bohr applied this idea to the classic atomic test case – the hydrogen atom, a single electron orbiting a single proton. He showed how his assumption predicted the Balmer formula, an empirical formula for the spectral lines of hydrogen devised by a schoolteacher and numerologist.4
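  In modern notation (a sketch added for illustration, not Bohr’s own presentation): the allowed states of hydrogen have energies

\[ E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots, \]

and a jump from state n down to state 2 emits light of frequency ν = (E_n − E_2)/h, which reproduces the Balmer formula

\[ \frac{1}{\lambda} = R\left(\frac{1}{2^2} - \frac{1}{n^2}\right), \qquad n = 3, 4, 5, \ldots, \]

with R ≈ 1.097 × 10⁷ m⁻¹, the Rydberg constant.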

  The flitting and perching things – now applied only to light, but soon to matter as well – would soon create a classification problem. On the classical horizon, the tiniest things came in two basic types: particles and waves. Particles were discrete things: each had its own definite position and momentum, and always followed a specific path in space and time. Waves were continuous things: they spread out spherically from their source without specific position or direction, smoothly broadening and thinning in space and time. Scientists used different theories to describe particles and waves. Particles were addressed by Newtonian theories, which assumed masses located at specific points, pushed by forces, with a definite momentum and position at every moment. Waves were addressed by Maxwellian theories, which used continuous functions to describe how processes smoothly evolve in space and time. Both kinds of theory were well developed and deterministic: you fed in information about the initial state, turned the crank, and out popped a prediction of future behaviour.

  In which bin should the flitting and perching things be placed? They seemed to have aspects of each. How was that possible?

  Einstein provided some of the answer in his 1905 photoelectric effect paper. Traditional optics, he said, treats light as waves because it involves light in large amounts and averaged over time. But when light interacts with matter, as when it is emitted and absorbed, it does so on very short timescales, when it may well be grainy, localized in space, and with energies in integer multiples of hν (‘quanta’ of light, later called ‘photons’). This idea, he proudly wrote to a friend, was ‘very revolutionary.’5
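  In symbols (a standard modern summary, assumed here rather than quoted from Einstein’s paper): a single quantum of light of frequency ν carries energy

\[ E = h\nu, \]

and in the photoelectric effect an electron ejected from a metal carries away at most

\[ K_{\max} = h\nu - W, \]

where W is the energy needed to free the electron from the metal’s surface.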

  For the next 20 years, physicists tended to be partisans of either particle theory or wave theory, trying to extend one or the other to cover quantum phenomena.

  Einstein carried the theoretical banner for the particle side, though not without some reluctance. In an important paper of 1916, he extended his idea that light is absorbed and emitted in the form of physically real quanta, each having a particular direction and momentum (a multiple of hν/c), and – making a general if somewhat overstated point – proclaimed that ‘radiation in the form of spherical waves does not exist.’6 This process conserved energy, he now showed, for the amount emitted at one end equaled the amount absorbed at the other. But Einstein also found that he had to incorporate statistics in his theory to make it work, in the form of ‘probability coefficients’ that described the emission and absorption of quanta.7 He found this a painful sacrifice, but hoped it would be temporary, expecting that his work would soon be replaced by some deeper understanding. Einstein’s experimental allies included Arthur H. Compton, who in 1923 demonstrated the ‘Compton effect’: when photons bounce off electrons, both arrive from, and rebound in, definite directions.8
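  The quantitative signature of the Compton effect, in modern form (not given in the text above): a photon scattering off an electron through an angle θ has its wavelength lengthened by

\[ \lambda' - \lambda = \frac{h}{m_e c}\,\bigl(1 - \cos\theta\bigr), \]

where h/m_e c ≈ 2.43 × 10⁻¹² m is the Compton wavelength of the electron – exactly the particle-like, billiard-ball behaviour Einstein’s quanta required.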

  One champion of waves was physicist Charles G. Darwin, grandson of his more famous naturalist namesake – though he, too, was somewhat dissatisfied with his role. Darwin believed light was emitted as waves, but realized that accommodating quantum phenomena such as the photoelectric effect would severely tax wave theory. In 1919, he wrote a ‘Critique of the Foundations of Physics’, in which he foresaw fundamental changes ahead. Quantum phenomena, he prophesied, might force physicists to abandon long-cherished principles. They might have to entertain wild ideas, he wrote – tongue-in-cheek – such as to ‘endow electrons with free will.’9 The least wild thing, he finally decided, would be to keep wave theory by abandoning the conservation of energy for individual events, having it conserved only on average.

  Darwin found a sympathizer in Niels Bohr. In 1924, Bohr enlisted two others – Hendrik Kramers and John Slater – in an attempt to eradicate Einstein’s radical idea and develop a more conventional approach that used wave theory to account for how light is emitted and absorbed, and for the photoelectric and Compton effects.10 The authors found they had to pay a heavy price to murder Einstein’s idea; they would indeed have to abandon the conservation of energy, conserving it only on average, along with any hope of having a visualizable picture of the mechanics of how light is emitted and absorbed.

  The word ‘visualizable’ – anschaulich, for the Germans – became something of a technical term in physics around this time. For something in a theory to be visualizable or intuitable, two conditions had to be met: the variables in the theory had to be connected with physical things like mass, position, energy, and so on; and the operations in the theory had to be connected with familiar operations, such as point-by-point movement, action at a distance, and so forth. Thus for something to be visualizable, or anschaulich, it did not necessarily have to be Newtonian: something strange and non-Newtonian could still be visualized, as long as it unfolded in space and time. If something were anschaulich it merely meant that a flip-book-like description could be created in which the pages were like slices of time, locating where everything in an event was at every moment – and that when you ruffled the pages, what was on each page blended smoothly into what was on the next.

  But the sacrifices of the Bohr-Kramers-Slater theory – abandonment of the conservation of energy and of Anschaulichkeit – were regarded as too extreme not only by most physicists but even by at least one of its authors; Slater later claimed to have been coerced into signing his name. Few were surprised when, less than a year after publication, the Bohr-Kramers-Slater proposal was decisively refuted by experiment.

  The Bohr-Kramers-Slater paper is a unique document in the history of science. It is renowned among historians for being both obviously wrong and strongly influential. It was strongly influential because it brought to a head the conflict between particle and wave theory. It said: this is the kind of sacrifice you have to make in order to keep what you have. The partisans of each side were only being cautious and conservative, trying to preserve those elements of classical theory which they thought most robust. But quantum phenomena were resisting.

  At the end of its first quarter-century, indeed, quantum theory was a mess. Historian Max Jammer called it ‘a lamentable hodgepodge of hypotheses, principles, theorems, and computational recipes rather than a logically consistent theory.’ Each problem had to be first solved as if it were a classical situation, then filtered through a ‘mysterious sieve’ in which quantum conditions were applied, weeding out forbidden states and leaving only a few permissible ones. This process involved not systematic deduction but ‘skillful guessing and intuition’ which resembled ‘special craftsmanship or even artistic technique.’11 A theory was needed that gave the right states from the start. To put it another way, quantum theory was more like a set of instructions for coming up with a way to get from point A to point B, when what you really wanted was a map.

  Then, in 1925, came two dramatic breakthroughs from two very different people: Werner Heisenberg and Erwin Schrödinger. Each, struggling to act conservatively by sacrificing as little of the classical framework as possible, ended up a revolutionary.

  Heisenberg, who at age twenty-four was young even by physics standards, tried to save classical mechanics by abandoning it at Nature’s bottom rung. Inside the atom, he declared, not only do particles and electron orbits have no meaning, but neither do even such basic classical properties as position, momentum, velocity, and space and time. And because our imaginations require a space-time container, this atomic world cannot be pictured. We have to base our theories, he said, on what he called ‘quantum-theoretical quantities’ that are unvisualizable, or unanschaulich. The next chapter outlines the steps Heisenberg took in developing his approach. At one point, Heisenberg noticed an odd feature: certain sets of quantum-theoretical quantities were noncommutative under the peculiar definition of ‘multiplication’ they obeyed – the order in which they were multiplied mattered. He initially found this feature awkward, and tried to ignore it, but soon came to embrace it as the keystone of quantum mechanics. In 1925 he wrote ‘On the Quantum-Mechanical Reinterpretation of Kinematic and Mechanical Relations’, which provided a method for calculating quantum states without reference to either particles or waves. It used what mathematicians call matrices to provide a formal apparatus into which one plugged experimental data, turned the mathematical crank, and out popped the allowed states. His supervisor, Max Born, quickly saw that Heisenberg had rediscovered matrices. But matrix mechanics, as it was called, was difficult to use, and many physicists resisted a theory that told them they could not picture Nature’s bottom rung.
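  A minimal numerical sketch of the noncommutativity Heisenberg stumbled on – using NumPy and truncated harmonic-oscillator matrices, which are illustrative assumptions here, not Heisenberg’s own calculation:

import numpy as np

hbar = 1.0   # work in units where hbar = 1
N = 6        # truncation size of the (really infinite) matrices

# Lowering operator a, defined by a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# Position- and momentum-like matrices (units with m = omega = 1)
x = np.sqrt(hbar / 2) * (a + a.T)
p = 1j * np.sqrt(hbar / 2) * (a.T - a)

print(np.allclose(x @ p, p @ x))   # False: the order of multiplication matters

# The commutator xp - px comes out i*hbar down the diagonal, except in the
# last entry, an artifact of chopping the infinite matrices off at size N
print(np.round((x @ p - p @ x).diagonal(), 10))

  The stubbornly nonzero difference xp − px is precisely the feature Heisenberg at first found awkward and later embraced.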

 
