On her way into life-threatening surgery, Gertrude Stein was asked by her lifelong companion, Alice B. Toklas, “What is the answer?” Stein replied, “What is the question?” There are a few different versions of this story, but they all come to the same thing: Questions are more relevant than answers. Questions are bigger than answers. One good question can give rise to several layers of answers, can inspire decades-long searches for solutions, can generate whole new fields of inquiry, and can prompt changes in entrenched thinking. Answers, on the other hand, often end the process.
Are we too enthralled with the answers these days? Are we afraid of questions, especially those that linger too long? We seem to have come to a phase in civilization marked by a voracious appetite for knowledge, in which the growth of information is exponential and, perhaps more important, access to it easier and faster than ever. Google is the symbol, the insignia, the coat of arms of the modern world of information. More information is demanded, more facts are offered, more data are requested, and more is delivered more quickly. According to the Berkeley Institute, in the year 2002, 5 exabytes of information were added to the world’s stores. That’s a billion billion bits of data, enough to fill the Library of Congress 37,000 times over. This means 800 megabytes for every individual on the planet, equaling a stack of books 30 feet high for each of us to read. That was in 2002; according to the latest update in this series, for 2007, the figure appears to have increased a millionfold.
What can one do in the face of this kind of information growth? How can anyone hope to keep up? How come we have not ground to a halt in the deepening swamp of information? Would you be suspicious if I told you it was just a matter of perspective? Working scientists don’t get bogged down in the factual swamp because they don’t care all that much for facts. It’s not that they discount or ignore them, but rather that they don’t see them as an end in themselves. They don’t stop at the facts; they begin there, right beyond the facts, where the facts run out. Facts are selected, by a process that is a kind of controlled neglect, for the questions they create, for the ignorance they point to. What if we cultivated ignorance instead of fearing it? What if we controlled neglect instead of feeling guilty about it? What if we understood the power of not knowing in a world dominated by information? As the philosopher Socrates said, “I know one thing, that I know nothing.”
Scholars agree that Isaac Newton, who in 1687 formulated the laws of force and invented the calculus in his Principia Mathematica, probably knew all of the extant science of his time. A single human brain could know everything there was to know in science. Today this is clearly impossible. Although the modern high school student probably possesses more scientific information than Newton did at the end of the 17th century, the modern professional scientist knows a far, far smaller fraction of the knowledge and information available at the beginning of the 21st century. Curiously, as our collective knowledge grows, our ignorance does not seem to shrink. Rather, we each know an ever smaller fraction of the total, and our individual ignorance, as a ratio of the knowledge base, grows. This ignorance is a kind of limit, and it’s frankly a bit annoying, at least to me, because the one thing you know is that there is so much more out there that you will never know. Unfortunately, there seems to be nothing that can be done about this.
On the grander scale there is absolute or true ignorance, the ignorance represented by what really isn’t known, by anybody, anywhere—that is, communal ignorance. And this ignorance, the still mysterious, is also increasing. In this case, however, that’s the good news, because it’s not a limit; it is an opportunity. A Google search on the word “ignorance” gives 37 million hits; one on “knowledge” returns 495 million. This reflects Google’s utility but also its prejudice. Surely there is more ignorance than knowledge. And because of that there is more left to do.
I feel better about all that ignorance than I do about all that knowledge. The vast archives of knowledge seem impregnable, a mountain of facts that I could never hope to learn, let alone remember. Libraries are both awe-inspiring and depressing. The cultural effort that they represent, to record over generations what we know and think about the world and ourselves, is unquestionably majestic; but the impossibility of reading even a small fraction of the books inside them can be personally dispiriting.
Nowhere is this dynamic more true than in science. Every 10–12 years there is an approximate doubling of the number of scientific articles. Now this is not entirely new—it’s actually been going on since Newton—and scientists have been complaining about it for almost as long. Francis Bacon, the pre-Enlightenment father of the scientific method, complained in the 1600s of how the mass of accumulated knowledge had become unmanageable and unruly. That complaint was perhaps the impetus for the Enlightenment fascination with classification and with encyclopedias, an attempt to at least alphabetize knowledge, if not actually contain it. And the process is exponential, so it gets “worser and worser,” as they say, over time. That first doubling of information amounted to a few tens of new books or papers, while the most recent doubling saw more than a million new publications. It’s not just the rate of increase; it’s the actual amount that makes the pile so daunting. How does anyone even get started being a scientist? And if it’s intimidating to trained and experienced scientists, what could it be to the average citizen? No wonder science attracts only the most devoted. Is this the reason that science seems so inaccessible?
Well, it is difficult, and there is no denying that there are a lot of facts that you have to know to be a professional scientist. But clearly you can’t know all of them, and knowing lots of them does not automatically make you a scientist, just a geek. There are a lot of facts to be known in order to be a professional anything—lawyer, doctor, engineer, accountant, teacher. But with science there is one important difference. The facts serve mainly to access the ignorance. As a scientist, you don’t do something with what you know to defend someone, treat someone, or make someone a pile of money. You use those facts to frame a new question—to speculate about a new black cat. In other words, scientists don’t concentrate on what they know, which is considerable but also minuscule, but rather on what they don’t know. The one big fact is that science traffics in ignorance, cultivates it, and is driven by it. Mucking about in the unknown is an adventure; doing it for a living is something most scientists consider a privilege. One of the crucial ideas of this book is that ignorance of this sort need not be the province of scientists alone, although it must be admitted that the good ones are the world’s experts in it. But they don’t own it, and you can be ignorant, too. Want to be on the cutting edge? Well, it’s all, or mostly, ignorance out there. Forget the answers, work on the questions.
In the early days of television, the pioneering performer Steve Allen introduced on his variety show a regular routine known as The Question Man. The world, it seemed, had an overabundance of answers but too few questions. In the postwar 1950s, with its emphasis on science and technology, it could easily have felt this way to many people. The Question Man would be given an answer, and it was his task to come up with the question. We need The Question Man again. We still have too many answers, or at least we put too much stock in answers. Too much emphasis on the answers and too little attention to the questions have produced a warped view of science. And this is a pity, because it is the questions that make science such a fun game.
But surely all those facts must be good for something. We pay a very high price for them, in both money and time, and one hopes they are worth it. Of course, science creates and uses facts; it would be foolish to pretend otherwise. And certainly to be a scientist you have to know these facts or some subset of them. But how does a scientist use facts beyond simply accumulating them? As raw material, not as finished product. In those facts is the next round of questions, improved questions with new unknowns. Mistaking the raw material for the product is a subtle error but one that can have surprisingly far-reaching consequences. Understanding this error and its ramifications, and setting it straight, is crucial to understanding science.
The poet John Keats hit upon an ideal state of mind for the literary psyche that he called Negative Capability—“that is when a man is capable of being in uncertainties, Mysteries, doubts without any irritable reaching after fact & reason.” He considered Shakespeare to be the exemplar of this state of mind, allowing him to inhabit the thoughts and feelings of his characters because his imagination was not hindered by certainty, fact, and mundane reality (think Hamlet). This notion can be adapted to the scientist, who really should always find himself or herself in this state of “uncertainty without irritability.” Scientists do reach after fact and reason, but it is when they are most uncertain that the reaching is often most imaginative. Erwin Schrödinger, one of the great philosopher-scientists, said, “In an honest search for knowledge you quite often have to abide by ignorance for an indefinite period.” (Schrödinger knew something about uncertainty; he posed the now famous Schrödinger’s cat thought experiment, in which a cat placed in a box with a vial of poison that might or might not be activated by some quantum event was, until observed, both dead and alive, or neither.) Being a scientist requires having faith in uncertainty, finding pleasure in mystery, and learning to cultivate doubt. There is no surer way to screw up an experiment than to be certain of its outcome.
To summarize, my purpose in this intentionally short book is to describe how science progresses by the growth of ignorance, to disabuse you of the popular idea that science is entirely an accumulation of facts, and to show how you can be part of the greatest adventure in the history of human civilization without slogging through dense texts and long lectures. You won’t be a scientist at the end of it (unless you’re already one), but you won’t have to feel as if you’re excluded from participating in the remarkable worldview that science offers, if you want to. I’m not proselytizing for science as the only legitimate way to understand the world; it’s clearly not that. Many cultures have lived, and continue to live, quite happily without it. But in a scientifically sophisticated culture, such as ours, it is as potentially dangerous for the citizenry to be oblivious to science as it is for them to be ignorant of finance or law. And aside from being a good citizen, it’s simply too interesting and too much fun to ignore.
We might start by looking at how science gets its facts and at how that process is really one of ignorance generation. From there we can examine how scientists do their work—making decisions about their careers and the questions they will devote themselves to; how we teach, or fail to teach, science; and finally how nonspecialists can have access to science through the unlikely portal of ignorance.
TWO
Finding Out
Science, it is generally believed, proceeds by accumulating data through observations and manipulations and other similar activities that fall under the category we commonly call experimental research. The scientific method is one of observation, hypothesis, manipulation, further observation, and new hypothesis, performed in an endless loop of discovery. This is correct, but not entirely true, because it gives the sense that this is an orderly process, which it almost never is. “Let’s get the data, and then we can figure out the hypothesis,” I have said to many a student worrying too much about how to plan an experiment.
The purpose of experiments is of course to learn something. The words we use to describe this process are interesting. We say that some feature is revealed, we find something out, we discover something. In fact the word discover itself has an evocative literal meaning—to dis-cover, that is, to uncover, to remove a veil that was hiding something already there, to reveal a fact. Some artists also talk of revealing or discovering as the basis of the creative act—Rodin claimed that his sculpting process was to remove the stone that was not part of the sculpture; Louis Armstrong said that the important notes were the ones he didn’t play.
The direct result of this discovery process in science is data. Observations, measurements, findings, and results accumulate and at some point may gel into a fact. The literary critic and historian Mary Poovey recently wrote a noteworthy book titled A History of the Modern Fact in which she traces the development of the fact as a respected and preferred unit of knowledge. In its growth to this exalted position it has supposedly shed any debt to authority, opinion, bias, or perspective. That is, it can be trusted because it supposedly arose from unbiased observations and measurements without being affected by subjective interpretation. Obviously this is ridiculous, as she so exhaustively shows. No matter how objective the measurement, someone still had to decide to make that measurement, providing ample opportunity for bias to enter the scheme right there. And of course data and facts are always interpreted because they often fail to produce an uncontestable result. Nonetheless, this idealized view of the fact still commands a central place, especially in science education (although not so clearly in science practice), where facts occupy a position at least as exalted as truth, and where they provide credibility by being separated from opinion. Scientific facts are “disinterested,” which certainly doesn’t sound like much fun and may be why they have become so uninteresting.
I don’t mean by all of this to demean facts, but rather to place them in a more accurate perspective, or at least in the perspective of the working scientist. Facts are what we work for in science, but they are not actually the currency of the community of scientists. It may seem surprising to the nonscientist, but all scientists know that it is facts that are unreliable. No datum is safe from the next generation of scientists with the next generation of tools. The known is never safe; it is never quite sufficient. And perhaps counterintuitively, the more exact the fact, the less reliable it is likely to be; a precise measurement can always be revised and made a decimal point more precise, and a definitive prediction is more likely to be wrong than a vague one that allows several possible outcomes.
One of the more gratifying, if slightly indulgent, pleasures of actually doing science is proving someone wrong—even yourself at an earlier time. How do scientists even know for sure when they know something? When is something known to their satisfaction? When is the fact final? In reality, only false science reveres “facts,” thinks of them as permanent, and claims to be able to know everything and predict with unerring accuracy—one might think here of astrology, for example. Indeed, when new evidence forces scientists to modify their theories, it is considered a triumph, not a defeat. Max Planck, the brilliant physicist who led the revolution in physics now known as quantum mechanics, was asked how often science changed. He replied, “With every funeral,” a nod to the way science often changes on a generational time scale. As each new generation of scientists comes to maturity, unencumbered by the ideas and “facts” of the previous generation, conception and comprehension are free to change in ways both revolutionary and incremental. Real science is a revision in progress, always. It proceeds in fits and starts of ignorance.
THE DARK SIDE OF KNOWLEDGE
There are cases where knowledge, or apparent knowledge, stands in the way of ignorance. The luminiferous ether of late 19th-century physics is an example. This was the medium that was believed to permeate the universe, providing the substrate through which light waves could propagate. Albert Michelson was awarded a Nobel Prize in 1907 for failing to observe this ether in his experiments to measure the speed of light—possibly the only Nobel Prize awarded for an experiment that didn’t work. He was also, as it happens, the first American to win a Nobel Prize in the sciences. The ether was a black cat that had physicists measuring and testing and theorizing in a dark room for decades—until Michelson’s experiments raised the specter that this particular feline didn’t even exist, thereby allowing Albert Einstein to postulate a view of the universe in a new and previously unimaginable way with his theories of relativity.
Phrenology, the investigation of brain function through an analysis of cranial bumps, functioned as a legitimate science for nearly 50 years. Although it contained a germ of truth (certain mental faculties are indeed localized to regions of the brain), and many attested to its accuracy in describing personality traits, it is now clear that a large bump on the right side of your head just behind your ear has nothing to do with your being an especially combative person. Nonetheless, hundreds of scientific papers appeared in the literature, and several highly respected scientific names of the 19th century were attached to it. Charles Darwin, not himself a subscriber, was reckoned by an examination of a picture of his head to have “enough spirituality for ten priests”! In these and many other cases (the magical phlogiston invoked to explain combustion and rust, or the heat fluid caloric), apparent knowledge hid our ignorance and retarded progress. We may look at these quaint ideas smugly now, but is there any reason, really, to think that our modern science may not suffer from similar blunders? In fact, the more successful the fact, the more worrisome it may be. Really successful facts have a tendency to become impregnable to revision.
Here are two current examples:
• Almost everyone believes that the tongue has regional sensitivities—sweet is sensed on the tip, bitter on the back, salt and sour on the sides. Pictures of “tongue maps” continue to appear not only in popular books on taste and cooking but in medical textbooks as well. The only problem is that it’s not true. The whole thing arose from a mistranslation of a German physiology text written by a Professor D. P. Hanig, who claimed that his very anecdotal experiments showed that parts of the tongue were slightly more or slightly less sensitive to the four basic tastes. Very slightly, as it turns out, when the experiments are done more carefully (you can try this on your own tongue with some salt and sugar, for example). The Hanig work was published in 1901, and the translation, which considerably overstated the findings and enshrined the myth, was made in 1942 by the famed Harvard psychologist Edwin G. Boring (a joke to thousands of undergraduate psychology majors forced to read his textbooks). Boring, by the way, was a pioneer in sensory perception who gave us the well-known ambiguous figure that can be seen, depending on how you look at it, as a beautiful young woman or an old hag. Perhaps at least in part because of Boring’s stature, the mythical tongue map was canonized into a fact and maintained by repetition rather than experimentation, having now endured for more than a century.