The problem with the dichotomy between basic and applied research is that it is fundamentally false—that’s why it never seems to get solved and we endlessly oscillate back and forth in favor of one, then the other—as if they were two things and not just one research effort. Following the ignorance often leads to marvelous inventions. But trying to take shortcuts, to short-circuit the process by going directly to the application, rarely produces anything of value. Consider, for example, the vast amount of work that has been expended on trying to make computers converse, as if this were just a programming problem and not a deep issue of cognitive neuroscience. Where, finally, we must ask ourselves, is the source of the inventions—is it from Edisons or Einsteins? Given the choice, which would we want more of, Edisons or Einsteins? Edison was a great inventor, but without the understanding of electricity that came from Faraday’s basic experiments and mathematical formulations, he could have done none of it, wouldn’t have even thought of doing any of it. Granted, it often takes an Edison to make something of Faraday’s or Einstein’s pure knowledge, but carts before horses don’t go anywhere. Faraday, by the way, had no idea what electricity might be good for and responded to a question about the possible use of electromagnetic fields with the retort, “Of what use is a newborn baby?” He apparently borrowed the phrase from Benjamin Franklin, no less, who made the analogy first, in response to someone who, after witnessing the first demonstration of hot air balloons, asked him what good flight would ever be. People who want to know what use something is rarely seem to have much imagination.
One favorite device of the predictions issues of magazines is to number them—the “50 Greatest Advances for the Next 50 Years” or the “10 Greatest Enigmas in Science.” This is a subtly dangerous approach as well. I am sure that those who engineer these articles mean no harm, but enumerating ignorance in this way leads us to believe that we can see the horizon, that we can get there, that it will not infinitely recede, and that there is a finite number of scientific problems to solve, after which that will be that and we can get on with the leisurely utopian part of humanity’s saga. Numbering, in this case, places limits where there are none and at its worst drives us to direct science at false goals that are often unattainable and ultimately a waste of money and other resources. Numbering leads to prioritizing—the accounting alternative to creativity.
There is also a certain conclusive, but wrong, notion that comes from an explicit number. In a peculiar way it is an ending, not a beginning. A recipe to finish, not to continue. One might say that Hilbert’s “23 problems” suffers a bit from this, but perhaps it’s just in the nature of mathematicians to number things, including even their ignorance. For the rest of science it seems wiser not to enumerate so precisely but to take the important lesson that predicting ignorance, not accomplishments, is more fruitful—and less likely to be wrong.
Ignorance works as the engine of science because it is virtually unbounded, and that makes science much more expansive. This is not just a plea for unlimited science; it may well all come to an end one day for any of a variety of reasons, from economic to social to intellectual. Rather it is an argument for the view that as long as we are doing science it is better to see it as unbounded in all directions so that discovery can proceed everywhere. It is best not to be too judgmental about progress.
However, this doesn’t mean we should just go off in whatever direction our whims take us and hope for the best. Ignorance is not just an excuse for poor planning. We must think about how ignorance works, and we have to be explicit about how to make it work to our advantage. While for many experienced scientists this is intuitive, it is not so obvious to the layperson, and it often seems not so apparent to young scientists starting out in their careers and worrying about grant support and tenure. Let me take a stab at analyzing ignorance more deeply.
FIVE
The Quality of Ignorance
We can see from these so far straightforward arguments that ignorance is not so simple a concept. In its less pejorative uses it describes a productive state of scholarship, experimentation, and hypothesizing. It is both the beginning of the scientific process—and its result. It is the beginning, of course, because it asks the question. “It is always advisable to perceive clearly our ignorance,” said Charles Darwin early in his book The Expression of the Emotions in Man and Animals. The ignorance of a subject is the motivating force. At first, it is most of what we know. Insufficiently considered ignorance is problematic. Just saying we don’t know something is not critical or thoughtful enough. It can lead to questions that are too big, or too amorphous, or too hard to imagine solving. Thoroughly conscious ignorance is, as Maxwell had it, the prelude to discovery.
It is the product of science as well. Although not the explicit goal, the best science can really be seen as refining ignorance. Scientists, especially young ones, can get too enamored with results. Society helps them along in this mad chase. Big discoveries are covered in the press, show up on the University’s home page, garner awards, help get grants, and make the case for promotions and tenure. But it’s wrong. Great scientists, the pioneers that we admire, are not concerned with results but with the next questions. The eminent physicist Enrico Fermi told his students that an experiment that successfully proves a hypothesis is a measurement; one that doesn’t is a discovery. A discovery, an uncovering—of new ignorance.
The Nobel Prize, the pinnacle of scientific accomplishment, is awarded, not for a lifetime of scientific achievement, but for a single discovery, a result. Even the Nobel committee realizes in some way that this is not really in the scientific spirit, and their award citations commonly honor the discovery for having “opened a field up,” “transformed a field,” or “taken a field in new and unexpected directions.” All of which means that the discovery created more, and better, ignorance. David Gross, in his acceptance speech for the Nobel Prize in Physics (2004), noted that the two requirements for continuing Nobel Prizes were money, kindly supplied by Alfred Nobel’s bequest, and ignorance, currently being well supplied by scientists.
Okay, so we’re convinced that ignorance is worth taking seriously. But how do scientists actually work with ignorance; specifically, how does it show up in their day-to-day work in the lab or the way they organize their labs and plan experiments? The first thing to recognize is that ignorance, like many such big meaningful words, fails to describe the breadth of its subject—or rather it describes only the breadth, missing the many details within its depths. Ignorance comes in many flavors, and there are correspondingly many ways of working with it. There is low-quality ignorance and high-quality ignorance. Scientists argue about this all the time. Sometimes these arguments are called grant proposals; sometimes bull sessions. They are always serious. Decisions about ignorance may be the most critical ones a scientist makes.
Perhaps the first thing for a scientist to consider is how to decide, against the enormous backdrop of the unknown, what particular part of the darkness he or she will inhabit. My laboratory works on olfaction, the sense of smell. It is a small subfield within the larger field of sensory systems that include vision, hearing, touch, taste, and pain. “Sensory systems” is itself a subfield within the much larger discipline of neurobiology, the study of nervous systems. And that, in turn, is just one area of investigation within the still larger domain known as biology, itself encompassing ecology, evolution, genetics, physiology, anatomy, zoology, botany, biochemistry, and so on. The Society for Neuroscience, the professional society representing workers in all fields of neuroscience, boasts a membership of over 40,000 and holds an annual meeting attended by more than 30,000 scientists and educators. How do all these scientists sort themselves out? Why don’t they all work on the same thing or one of a few (e.g., 23) things—memory, schizophrenia, paralysis, stroke, or development? Aren’t these the big questions in neuroscience? Aren’t these the hot topics you watch presented in slick documentaries on public or cable television?
How do scientists, as opposed to TV producers, ponder these big questions about ignorance? How do they get from these and other interesting and important issues to an actual scientific research program? Well, at the most pedestrian, but nonetheless critical level, there are grant proposals. Every scientist spends a significant percentage of his or her time writing grants. Many complain about this, but I actually think it’s a good idea. These documents are, after all, a detailed statement of what the scientist hopes to know, but doesn’t, as well as a rudimentary plan for finding it out. Scientists write grant proposals that are reviewed by other scientists, serving unpaid on government committees, who recommend what they consider the best of the proposals for funding. These grant proposals, numbering many thousands per year, represent a virtual marketplace of ignorance. Imagine being awarded a prize for what you don’t know: Here’s some money for what you don’t know. Everyone else in the world is getting paid for what they know—or claim to know. But scientists get rewarded for their ignorance. If that’s the case, then it can’t just be for any old ignorance. It has to be really good ignorance. One must become an expert, a kind of connoisseur of ignorance. In its most unkind characterization this might be called grantsmanship. But this is unfair. The art of writing a grant, of writing about ignorance authoritatively, is not trivial.
How does one develop into a connoisseur of ignorance? There are numerous strategies, and I will endeavor to list and describe some of them in the discussion that follows. To be honest, though, it is often a matter of intuition and taste. As you will see, questions can be tractable or intractable, interesting or ordinary, narrow or broad, focused or diffuse—and any of the possible combinations of those attributes. There is not a single Method of Ignorance. While I would like to provide a simple-to-follow Handbook of Ignorance, I cannot in fact be prescriptive. One of the surprising things that I learned from teaching a class on ignorance is that science is remarkably idiosyncratic. Individual scientists, although bound together by a few crucial rules about what will pass muster, otherwise take quite distinctive approaches to how they do their work. So what I present to you here is a Potpourri of Ignorance, a Multiplicity of Ignorance. It will sometimes seem conflicted; one strategy will be at odds with the preceding and following ones, but that’s actually the way it is. There are many strategies of ignorance. I have come to appreciate this richness, but I understand that it may be bewildering at first. Bear with me.
THE MANY MANIFESTATIONS OF IGNORANCE
Let’s begin with the question of what makes a question interesting. Mathematicians often use this term when they say that such and such a conjecture is correct but not interesting. When I ask Columbia University mathematician Maria Chudnovsky, who works in a very specialized area called Perfect Graph Theory (which, by the way, has nothing to do with the graphs you and I are familiar with), what makes a question interesting, she says that a question is interesting if it leads somewhere and is connected to other questions. Something can be unknown, and you test it out for a bit, but then you can see, often pretty quickly, that it is not very connected to other things that are unknown and therefore it is not likely to be interesting or worthy of pursuit. If it seems as though you are working away on a project and nothing that anyone else is doing or has done becomes helpful to your work, then you begin to think that you are perhaps in some cul-de-sac of irrelevance. This happens to graduate students quite often. They start a project with a question that is mostly untouched or has received little attention. But some ways into it, the data don’t seem to lead anywhere, they keep proving the same small thing over and over again, and eventually there is nothing to do but abandon the project. So connectedness seems to be an important quality.
On the other hand, biology is full of people working on a barely known species of organism, from a virus to a mammal, that has some quirky lifestyle and that they find immensely interesting, perhaps because it is not in any obvious way connected to the mainstream of biology. Sometimes these apparent cul-de-sacs become part of the mainstream in very unexpected ways, suddenly connecting up to the main branch and bringing new comprehension to questions that no one had even thought about asking. Just as often they remain dead ends. But, like Darwin with his worms, these biologists have curiosity enough to spend a lifetime mastering the details of another creature’s life history. This kind of work takes a certain faith that it will all mean something someday. Or maybe it just takes a laissez-faire attitude that not everything has to mean something.
One example of curiosity-driven research that unpredictably produced one of the crucial tools in the biotechnology revolution is the study of thermophiles, a word that literally means “heat loving.” What a wonderful word. I can’t help thinking of it when I walk along a beach in southern Florida watching hordes of people lying in the radiation, risking melanoma and wrinkles, and loving every minute of it. But nature’s real thermophiles thrive in the inferno of near-boiling, sulfurous, deep-sea vents and in the hot sulfur springs of Yellowstone National Park. This is where they were first discovered in the 1960s by microbiologist Thomas Brock of Indiana University and an undergraduate in his laboratory named Hudson Freeze (I’m not making that up). These microorganisms, at first an oddity, became suddenly important because their enzymes had adapted to the high temperatures of their niche, temperatures that would cause similar enzymes in our bodies to disintegrate. Then, 30 years later, in the 1990s, temperature-resistant enzymes were precisely what was required for reactions like those in the polymerase chain reaction, known more commonly as PCR, the technique that is fundamental to most biotechnology experiments and ubiquitous on forensic crime shows. PCR works by cycling repeatedly through temperatures that vary from 40°C to 98°C (approximate body temperature to nearly boiling) and requires enzymes that can withstand, and function at, these high temperatures. Once again, apparently isolated research, undertaken only for the sake of curiosity, comes to play a critical but completely unpredicted role in the discovery and invention of new technologies and products.
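To make that cycling concrete, here is a minimal sketch in Python; it is an illustration, not a lab protocol, and the temperatures, step durations, and enzyme limits are placeholder values chosen only to show why a polymerase borrowed from a thermophile survives the reaction while an ordinary enzyme would not.

```python
# Illustrative sketch of PCR thermal cycling. Temperatures, durations, and
# enzyme limits are placeholders for illustration, not a validated protocol.

CYCLE = [
    ("denature", 95.0, 30),  # near-boiling: the DNA strands separate
    ("anneal",   55.0, 30),  # cooler: primers bind the single strands
    ("extend",   72.0, 60),  # the polymerase copies each strand
]

ORDINARY_ENZYME_LIMIT = 45.0  # a typical body-temperature enzyme unfolds here
THERMOPHILE_ENZYME_LIMIT = 96.0  # a heat-loving polymerase tolerates the cycle


def amplify(cycles: int, enzyme_limit: float) -> int:
    """Return the number of DNA copies after `cycles` rounds, or 0 if any
    step is hotter than the polymerase can survive."""
    copies = 1
    for _ in range(cycles):
        for _step, temp_c, _seconds in CYCLE:
            if temp_c > enzyme_limit:
                return 0  # enzyme destroyed; the reaction is dead
        copies *= 2  # each completed cycle roughly doubles the target DNA
    return copies


print(amplify(30, ORDINARY_ENZYME_LIMIT))    # 0: an ordinary enzyme fails
print(amplify(30, THERMOPHILE_ENZYME_LIMIT)) # about a billion copies
```

The point of the toy model is simply that the denaturation step is hotter than any enzyme from our own bodies can tolerate, so the whole technique hinged on an organism nobody had any practical reason to study.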
Here are some other ways that scientists think about ignorance, presented in no particular order, because no particular order presents itself. Or because, to tell you the truth, there is no particular order. They are all more or less equivalent.
One way is through the lens of what the Nobel laureate immunologist Peter Medawar called “The Art of the Soluble.” Medawar claimed that simply showing something is possible can be sufficient to motivate work and make progress. His Nobel Prize–winning work was to show that the immune system can distinguish self from other in all its tissues and how this occurs. By explaining the biological basis for the well-known phenomenon of organ rejection, Medawar is often credited with making organ transplantation possible. Medawar, however, eschewed this credit, saying that all he did was to show that in principle it was not impossible—all that was needed was to find a way to fool the immune system into accepting “other” as self. So just showing something is solvable is one strategy. What sorts of ignorance can we erase? What questions look like they could be answered? After all, there is no sense in banging your head against the wall; why not apply yourself to something tractable?
A story many of us tell our graduate students is about a scientist searching the ground under a street lamp late at night. A man walks up to him and asks him what he has lost. “My car keys,” says the scientist, and the friendly chap helps him look. After a while, and no success, the fellow asks whether he is sure this is where he dropped them. “No, I think probably over there,” he says, pointing to a dark portion of the street. “Then why are you looking here?” “Well,” says the canny scientist, “the light is so much better here.” This story is often told in a way to make the seeker seem ridiculous (in fact, in some tellings he is not a scientist but a drunk, or maybe a drunken scientist), but I think it’s just the opposite. One very decent strategy in science is to look where you have a good chance of finding something, anything. The lesson here is to recognize the value of the observable and to leave the unmeasurable stuff for later. Anyway, if you’re drunk it’s better not to find your car keys.
…
An almost opposite strategy can be summed up by the parable that began this book: it is very difficult to find a black cat in a dark room—especially when there is no cat. This is ignorance driven by deep mysteries. One enters the room and stumbles around; the black cat is reported to be in here, but no one has seen it directly, and the reports are of questionable reliability. In science there are dark rooms everywhere that have been found to be completely empty, each one representing careers that in whole or in part have been spent finding out this important but not very satisfying fact. False leads are followed, seemingly good ideas and reasonable theories are pursued, only to find out that they were pitifully mistaken, fundamentally incorrect. This is the fear of every scientist. But it is also the motivation, the excitement, the thing that gets you to the lab early and keeps you there until late at night. It is also the part of science that most nonscientists miss entirely.
It gets missed because we rely so heavily on the newspaper or the TV for information about what’s going on in science, where only the black cats that get discovered are featured. We rarely hear of the pursuits, especially the unsuccessful or not-yet-successful ones. The reports from the frontier are unfortunately “improved” by highlighting the findings and ignoring the process—ignoring the ignorance, if you will.
I say this is unfortunate because it has two unwanted effects. First, it makes science seem inaccessible: how could you ever keep up with the steady stream of new facts (remember the 5 exabytes of new information in 2002, the 1 million new scientific publications last year)? Second, it gives a false impression of science as a surefire, tough-as-nails, deliberate method for finding things out and getting things done, when in fact the process is actually more fragile than you might imagine and requires more nurturing and more patience (and more money) than we commonly think is the case. Einstein, responding to a question about why modern science seemed to flower in the West rather than in India or China (at that particular time in history this was largely the case), remarked that what was puzzling was that it was here at all, not why it wasn’t in India or China. Science is a risky business. For some scientists that is a reason to stay with the more tractable questions; for others the risk seems to be what makes it worthwhile.