Ignorance
The interest for us, and possibly for Leibniz, is not simply that this structure, this imaginary device, could construct all human thoughts, but that it could also identify and appraise thoughts that are unknown. It could survey not only what we know but also what we do not know. It is this attribute that holds the enticement, and indeed it may contain more of what we do not know—just as there are likely to be more sentences still unuttered than all those that have been spoken. The power of Leibniz’s thought alphabet and the thought algebra that would run it was not how well it would settle disputes but that it showed the infinity of human thought, the immensity of the unknown. Language is useful for what it allows one to say, but it is powerful because it admits, by its very structure, that there are an infinity of things that could be said, and that there will always be more unsaid than said. That the Leibniz alphabet never came to be used the way that his youthful vision had imagined is less important than the demonstration that simple things can be combined to make endless new compound things. We are now developing whole new branches of science to analyze and manage and manipulate this complexity.
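To get a feel for how quickly such combinations outrun anything that could ever be enumerated, here is a toy sketch in Python; the list of "primitive" concepts is invented purely for illustration, since Leibniz never published such an alphabet.

    from itertools import combinations

    # A toy "alphabet of thoughts": a handful of primitive concepts (invented here).
    primitives = ["being", "change", "number", "cause", "mind", "matter"]

    # Treat every unordered combination of two or more primitives as a compound "thought."
    compounds = [c for r in range(2, len(primitives) + 1)
                 for c in combinations(primitives, r)]

    print(len(primitives), "primitives yield", len(compounds), "compound thoughts")
    # 6 primitives yield 57 compounds; an alphabet of just 30 primitives would
    # yield over a billion, far more than anyone could hope to write down.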
More than two centuries later, Hilbert's formalist program was another attempt to codify knowledge, but this one was doomed by the seemingly simple ruminations of Kurt Gödel, a young Austrian mathematician who had become interested in logic. As Rebecca Goldstein recounts in her excellent, and highly detailed, book on Gödel, his shyness and reluctance to make big pronouncements, perhaps the opposite of Hilbert's style, at first obscured the explosive power of his incompleteness theorems. Eventually, however, it dawned on the mathematics community: there would be no axiomatic, consistent, and complete theory of mathematics. Any system rich enough to express the familiar whole-number arithmetic and its operations (addition, subtraction, etc.) can never be both consistent and complete. Consistency is the straightforward requirement that a system's rules never produce contradictory statements, for example, that two things both are and are not equal. Although this may seem simple, it is devilishly difficult to be sure that a few apparently simple axioms do not, in any of the myriad possible ways they may be combined (combinatorially, Leibniz would say), ever lead to an illogical conclusion. Introducing seemingly reasonable concepts like "zero" or "infinity" into simple arithmetic, for example, can result in strange incompatibilities (antinomies, mathematicians call them). The challenge is to show that for a particular system the root axioms, the basic fundamental rules, will never produce such incompatibilities; proving this establishes that the system is consistent. Completeness is a further demand: that every true statement the system can express can also be proved within it. What Gödel showed, using a strange new correspondence he invented between logical statements and ordinary numbers, was that if such a system is consistent it can never be complete, and, what is more, it can never prove its own consistency by its own rules. This means that there are statements about numbers that are true but that cannot be proved within the system. Since proofs are the foundation of mathematics, it is quite curious when obviously true statements cannot be proved. The math is complicated beyond the scope of this book, but the gist of it can be appreciated by considering any of several paradoxes that bend your brain in unpleasant ways. The most famous of these is the Cretan paradox, sometimes known as the liar's paradox. They all go something like this: a Cretan claims that all Cretans are liars. So should you believe him? Or another version—take a blank card and write on one side of it, "The statement on the other side of this card is true," and on the other side write, "The statement on the other side of this card is false." These little mind games became, for Gödel, the basis of a new form of logic: he constructed an arithmetical statement that, in effect, says of itself, "This statement cannot be proved," so that if the system is consistent the statement must be true yet unprovable. In many circumstances, it turns out, you can't tell yourself the truth.
Was this the end of the messianic program to establish the primacy of mathematics and of logical thinking? As it turns out, quite the contrary. Gödel's output, small by comparison but revolutionary, is astonishing for the technical and philosophical research opportunities it has created. Previously unconsidered ideas about recursiveness, paradox, algorithms, and even consciousness owe their foundations to Gödel's ideas about incompleteness. What at first seems like a negative—eternal incompleteness—turns out to be fruitful beyond imagining. Perhaps paradoxically, much of computer science, a field one might think depends most heavily on airtight and complete logic, could not have progressed without Gödel's seminal ideas. Indeed, unknowability and incompleteness are the best things that ever happened to science.
So some things can never be known and, get this, it doesn't matter. We cannot write out the exact decimal value of pi. That has little practical effect on doing geometry. As Princeton astrophysicist Piet Hut points out, the early Pythagoreans were stopped for a while in their tracks when they realized that the square root of 2 could not be expressed as a ratio of whole numbers, the kind of numbers that, for them, translated counting into smooth distances along a line. No matter how finely you divide up a measuring stick, you can never mark off a length of exactly √2 with those divisions; the diagonal and the side of a square share no common measure. Very disturbing that the length of the hypotenuse of the simplest right triangle, one with each of its sides equal to 1, corresponds to no fraction at all, anywhere on the number line from minus to plus infinity. Yet there is a very strong proof of this apparent paradox. A traditional, although possibly apocryphal, story has it that one of the Pythagoreans, Hippasus, upon showing his proof of this strange and, at the time, heretical finding, was drowned by his fellow Pythagoreans. This was a nasty consequence for getting the right answer; math, it seems, was much tougher in those days. But after a time mathematicians developed a work-around. It turns out there are other numbers like √2, and they are called irrational numbers—not because they are unreasonable but because they cannot be expressed as a fraction, that is, as a ratio of two whole numbers. The irrational numbers, along with the more familiar rational numbers that can be so expressed, make up what we now call the set of real numbers, and together they fill the number line completely. Now we can work with them more or less as we would with rational ("normal") numbers and no one worries about it anymore. You don't, do you? Probably never even occurred to you.
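For the curious, here is a sketch of the standard modern argument, a proof by contradiction (not necessarily the one Hippasus is said to have given, which was probably geometric), written in LaTeX notation:

    Suppose $\sqrt{2} = p/q$ for whole numbers $p$ and $q$ with no common factor.
    Squaring gives $p^2 = 2q^2$, so $p^2$ is even and therefore $p$ is even: $p = 2r$.
    Substituting, $4r^2 = 2q^2$, hence $q^2 = 2r^2$, and $q$ must be even as well.
    But then $p$ and $q$ share the factor 2, contradicting our starting assumption.
    No such fraction can exist: $\sqrt{2}$ is irrational.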
…
We now have an important insight. It is that the problem of the unknowable, even the really unknowable, may not be a serious obstacle. The unknowable may itself become a fact. It can serve as a portal to deeper understanding. Most important, it certainly has not interfered with the production of ignorance and therefore of the scientific program. Rather, the very notions of incompleteness or uncertainty should be taken as the herald of science.
This leads to a second insight regarding ignorance. If ignorance, even more than data, is what propels science, then it requires the same degree of care and thought that one accords data. Whatever it may look like from outside the science establishment, the incorrect management of ignorance has far more serious consequences than screwing up the data. There are correction procedures for mishandled data—results must be replicable and must answer to the scrutiny of peers—but mishandled ignorance can be costly, harder to perceive, and so harder to correct.
When ignorance is managed incorrectly or thoughtlessly, it can be limiting rather than liberating.
Scientists use ignorance to program their work, to identify what should be done, what the next steps are, where they should concentrate their energies. Of course, there is nothing wrong in principle with laying out what you need to know—this is what grant proposals are supposed to accomplish. But as any working scientist will tell you, what gets proposed in the grant and what gets done during the actual period of grant funding are often not very similar. I speak from experience, but it is a common one. Things happen, or don't, that redirect your thinking; work from other laboratories reveals a new result that requires you to revise your ideas; results from your own experiments are not what you expected and force new interpretations and new strategies. The goals may remain similar, but the path changes because the ignorance shifts. Thomas Huxley once bemoaned the great tragedy of science as the slaying of a beautiful hypothesis by an ugly fact—but nothing is more important to recognize. Grieve and move on.
Ignorance then is really about the future; it is a best guess about where we should be digging for data. Can we learn something about managing ignorance from how these guesses are made? How does this view of the future direct scientific thinking?
FOUR
Unpredicting
Who among us would not be happy to lift the veil behind which is hidden the future; to gaze at the coming developments of our science and at the secrets of its development in the centuries to come?
—David Hilbert, introduction to his speech at the Second International Congress of Mathematicians held in Paris, 1900
The future ain’t what it used to be.
—Yogi Berra, American philosopher, baseball player, and team manager
Predictions come in two flavors in science. One is about the direction of future science. The other one, equally if not more important to the everyday mechanics of science, is the ability of science to make testable predictions. An experiment is designed to test the most general principle possible, even though it is almost always only a particular instance of that principle. Thus, a chemist wants to test the validity of a reaction between two elements under certain conditions and designs an experiment in which these two elements are brought together and the result of their interaction can be measured—how much heat is put out, what new molecules have appeared, how much of the original material is left, and so forth. By doing this he or she hopes to come up with a general rule about this type of reaction, such that over a wide range of particulars (the amount of stuff you start with, the initial conditions, etc.) anyone can predict the outcome. If an outcome can be reliably predicted from a limited amount of starting information, then you have gained an understanding of an underlying principle, of the rules governing this bit of the universe. A particular set of genes predicts the likely color of your hair or eyes; two massive bodies at a certain distance will orbit each other with a particular period. These are all instances where knowing the underlying mechanism allows you to make reliable predictions about outcomes. In science, predicting is knowing.
As I no longer need to tell you, this book is very specifically not about knowing, so that’s why I’m going to concentrate on the other side of predicting in science. By this I mean the sort of predicting Hilbert had in mind when he opened the Congress of Mathematicians in 1900 with the statement that opened this chapter: to see where science will take us, what new mysteries it will present, to imagine the future.
Predicting the coming advances in science and technology is a common if often silly exercise, mostly the province of magazine editors who see it as a requirement for their end-of-year, end-of-decade, or end-of-millennium issues. Scientists are interviewed and asked what they see as the likely advances in their fields over the next decade or so. Being a generally optimistic lot, at least in their public face, they tend to tackle questions of this sort with gusto, invariably leading to inflated prognostications from the fantasy wish list that every scientist has tucked away in a desk drawer. Unbridled enthusiasm for scientific progress is good public relations, but it is often bad science. Things never go the way we think they will; there are always unexpected findings and unexpected consequences that may redirect or even stymie a field for years.
In fact, one of the most predictable things about predictions is how often they’re wrong. Nonetheless, they are a measure, even if somewhat imprecise, of our ignorance. They are a catalog of what we think the important ignorance is, and perhaps also a judgment of what we think is the most solvable ignorance. David Hilbert was probably the most successful at this game. In the talk that followed that opening comment in August 1900, he outlined 23 crucial problems for mathematics to solve in the next century. These problems, now known eponymously as the Hilbert problems, dominated mathematical research throughout the 20th century. Hilbert was a successful prognosticator because he cleverly turned the tables: his predictions were questions. His predictions were truly a catalog of ignorance because they simply set out what was unknown and suggested that this is where mathematicians might be wise to spend their time. The result is that slightly more than a century later 10 of the 23 problems have been solved to the satisfaction of a consensus, the others being partially solved, unsolved, or now considered unsolvable.
So Hilbert’s strategy, one that we might do well to learn from, was to predict ignorance and not answers. He put no time line on when the major problems might be solved, nor even if they would be solved, but nonetheless there are few mathematicians who would not agree that Hilbert’s little speech at the opening of the 20th century was a positive influence on mathematics that effectively set much of the field’s agenda for more than a hundred years.
When used this way, predicting scientific progress becomes more than just an exercise because it finds its way into making science policy, where it can have either positive or negative effects on determining how limited resources are spent on research. This is why it is important to be as careful with ignorance as with the facts. Granted, it is reassuring, when budgeting billions for scientific research, to believe that there is a rational program that can be mapped and followed to produce some set of desired results, or at least something that can be called progress. But this is a false assurance based on unreliable judgments about ignorance. It is hard to see what will be, and just as hard to see what will not be. We are not flying about with individual jet packs, we are not wearing disposable clothes or dining on concentrated nutrients in foil packs, and we have not eradicated malaria or cancer, all predicted years ago as likely. But we do have an Internet that connects the entire world, and we do have a pill that provides erections on demand—neither of which will be found in any set of published predictions from 50, or even 25, years ago. As Enrico Fermi noted, predictions are a risky business, especially when they are about the future.
So how should our scientific goals be set? By thinking about ignorance and how to make it grow, not shrink—in other words, by moving the horizon. Predicting or targeting some specific advance is less useful than aiming for deeper understanding. Now this may sound like just so much screwing around, but again and again this is how most of the great advances in science and technology have occurred. We dig deeper into fundamental mechanisms and only then does it become clear how to make the applications. Whether it is lasers, X-rays, magnetic resonance imaging (MRI), or antibiotics, applications are surprisingly obvious, once you understand the fundamentals. They are just shots in the dark if you don’t.
Let's take an example. In 1928 the eminent physicist Paul Dirac was trying to describe the electron in quantum mechanical terms. He derived what has become known as the Dirac equation, a rather complex mathematical formulation that neither you (unless you are a trained physicist) nor I can understand. What we can understand is that while the equation filled in some fundamental gaps in quantum theory, it also raised many serious new questions—some of which are still around. One of those new puzzles was that the equation predicted an anti-electron, a particle with all the electron's properties but of opposite charge—a positron. No one had ever seen this particle in any experiment, and Dirac himself expressed some doubts about ever observing such a particle, but according to his calculations, which explained an awful lot, it had to be there. It was this glimpse of ignorance that led to new experiments, and in 1932, using a technology known as the cloud chamber (a forerunner of the later bubble chambers), physicist Carl Anderson observed the track created in his chamber by a positron, thereby discovering what Dirac had predicted 4 years earlier. If you had asked Dirac or Anderson what the possible applications of their studies were, they would surely have said their research was aimed simply at understanding the fundamental nature of matter and energy in the universe and that applications were unlikely, and certainly outside of their interest. Nonetheless, in the late 1970s biophysicists and engineers developed the first PET scanner—that stands for positron emission tomography. Yes, that positron. Some 40 years after Dirac and Anderson, the positron came to be used in one of the most important diagnostic and research instruments in modern medicine. Of course, a great deal of additional research went into this as well, but only part of it was directed specifically at making this machine. Methods of tomography, an imaging technique; some new chemistry to prepare solutions that would produce positrons; and advances in computer technology and programming—all of these led in the most indirect and fundamentally unpredictable way to the PET scanner at your local hospital. The point is that this purpose could never have been imagined even by as clever a fellow as Paul Dirac.
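For readers who would like at least to see it, the equation can be written quite compactly in modern notation (in natural units, with ψ the electron's four-component wave function, γ^μ the Dirac matrices, and m the electron's mass):

    \[ \left( i\gamma^{\mu}\partial_{\mu} - m \right)\psi = 0 \]

Its fateful feature is that it admits negative-energy solutions, which Dirac eventually interpreted as describing a particle with the electron's mass but the opposite charge: the positron that Anderson went on to find.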