Ignorance
Again, though, there is the flip side. Faced with black cats that may or may not be there, some scientists are happy instead to measure the room—its size, its temperature, its age, its material composition, its location—somehow forgetting about or ignoring the cat. Perhaps this has a ring of timidity to the reader, of a concern with the mundane rather than the extraordinary, but in fact measurement is critical to advancing science. Much that is good and valuable has come from just this sort of quotidian scientific activity. Many of the comforts of modern life, not to mention the amelioration of many miseries suffered by our ancestors, have come from the work of scientists who make these measurements. Kepler spent 6 years battling with an error of 8 minutes of arc in the planetary motion of Mars (that’s an amount of sky roughly a quarter of the width of the full moon). But the result of this attention to measurement and exactitude was that he freed astronomy from the Platonic tyranny of the perfect circle and showed that planets move around the sun in ellipses. Newton could never have understood motion and gravity if he hadn’t had this critical advance in front of him. Advances in measurement techniques almost always precede important new discoveries. Facts that seem settled at 5 decimal places become ambiguous at 6 or more. The desire to measure more accurately drives technology and innovation, resulting in new microscopes with more resolution, new colliders with more smashing power, new detectors with more capturing capability, new telescopes with more reach. And each of these advances in turn makes searching for black cats more tractable. Ignorance of the next decimal place is a scientific frontier no less grand than theorizing about the nature of consciousness or some other “big” question.
…
The ignorance in one’s own professional backyard is sometimes the most difficult to identify. The journals Nature and Science are published weekly and contain reports that are judged to be of especially high significance. Getting a paper in one of these journals is the science version of landing a leading role or winning a big account. For many it can make a career, or at least get one started on the right foot. Each week doctoral and postdoctoral students in labs around the world scour the pages of these journals for the latest findings in their field and then try to think of the next experiment so that they can get to work on their Nature paper. But of course it’s already too late; the folks who wrote that paper have already figured out the next experiments—in fact, they’ve probably just about finished them. I have a colleague who always suggests that his students look not to yesterday’s issue of Nature or Science for experimental ideas but rather to work that is at least 10 years old. This is work that is ready to be revisited, ready for revision. Questions still lurk in these data, questions that have now ripened and matured, that could not be answered then with the available techniques. More than likely they could not even have been asked because they didn’t fit any current thinking. But now they come alive, suddenly possible, potential, promising. Here is another fertile, if unintuitive, place to look for ignorance—among what’s known.
…
How big should a question be? How important should it be? How can you estimate the size or importance of a question? Does size matter? (Sorry, how could I resist?) There are no answers to these questions, but they are nonetheless good questions because they provide a way to think about … questions. Some scientists like big questions—how did the universe begin, what is consciousness, and so forth. But most prefer to take smaller bites, thinking about more modest questions in depth and detail, sometimes admittedly mind-numbing detail to anyone outside their immediate field. In fact, those who choose the larger questions almost always break them down into smaller-sized bits, and those who work on narrower questions will tell you how their pursuit could reveal fundamental processes, that is, answers to big questions. The famed astronomer and astrophysicist Carl Sagan, to use a well-known scientist as an example, published hundreds of scientific papers on very particular findings relating to the chemical makeup of the atmosphere of Venus and other planetary objects, while thinking widely and publicly on the question of life’s origin (and perhaps less scientifically, but not less critically, on where it was going). Both approaches converge on manageable questions with potentially wide implications.
This strategy, of using smaller questions to ask larger ones, is, if not particular to science, one of its foundations. In scientific parlance this is called using a “model system.” As Marvin Minsky, one of the fathers of artificial intelligence, points out, “In science one can learn the most by studying the least.” Think how much more we know about viruses and how they work than about elephants and how they work. The brain, for example, is a very complicated piece of biological machinery. Figuring out how it works is understandably one of humankind’s great quests. But, unlike a real machine, a man-made, designed machine, we have no schematic. We have to discover, uncover, the inner workings by dissection—we have to take it apart. Not just physically but also functionally. That’s a tall order since there are some 80 billion nerve cells that make up the human brain, and they make about 100 trillion connections with each other. Keeping second-to-second track of each cell and all its connections is a task well beyond even the largest and fastest of supercomputers. The solution is to break the whole big gemish up into smaller parts or to find other brains that are smaller and simpler and therefore more manageable. So instead of a human brain, neuroscientists study rat and mouse brains, fly brains because researchers can do some very fancy genetics on them, or even the nervous system of the nematode worm, which has exactly 302 neurons. Not only is the number of neurons very manageable, but the connections between every one of them are known, with the added advantage that every worm is just like every other worm, which is not true of humans—or rats or mice.
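To see just how lopsided the comparison is, here is a rough back-of-envelope sketch in Python. The human connection count is the figure quoted above; the sampling rate, the operations per sample, and the worm’s synapse count are illustrative assumptions, not measured values.

```python
# Back-of-envelope arithmetic: why bookkeeping for a whole human brain is
# out of reach while a 302-neuron worm is trivial. The human synapse count
# is the figure quoted in the text; everything else is an assumption made
# purely for illustration.

HUMAN_SYNAPSES = 1e14      # ~100 trillion connections (from the text)
WORM_SYNAPSES = 7_000      # rough nematode synapse count; assumed here

SAMPLES_PER_SEC = 1_000    # assume millisecond-scale tracking of each connection
OPS_PER_SAMPLE = 10        # assume ~10 arithmetic operations per sample

def ops_per_second(n_synapses: float) -> float:
    """Crude estimate of the operations needed to track every connection."""
    return n_synapses * SAMPLES_PER_SEC * OPS_PER_SAMPLE

human = ops_per_second(HUMAN_SYNAPSES)   # ~1e18 ops/s
worm = ops_per_second(WORM_SYNAPSES)     # ~7e7 ops/s

print(f"human brain: ~{human:.0e} operations per second")
print(f"worm:        ~{worm:.0e} operations per second")
print(f"ratio:       ~{human / worm:.0e}x")
```

Even with these generous assumptions the human case lands at roughly the throughput of the very largest machines, and actually simulating the biology (rather than merely logging it) would cost far more, while the worm fits comfortably on a laptop. That gap, more than anything else, is the appeal of the model system.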
“But,” says the non-neuroscientist and possessor of a late-model human brain, “my brain and the nematode worm nervous system are simply not the same; you can’t pretend to know anything about human brains by knowing about a worm brain, a nematode worm at that.” Perhaps not everything. But it is true that a neuron is a neuron is a neuron. At the most fundamental level, the building blocks of nervous systems are not so different. Neurons are special cells that can be electrically active, and this is crucial to brain activity. The ways in which they become electrically active turn out to be the same whether they are in a worm, fly, mouse, or human brain. So if you want to know about electrical behavior in neurons, you might just prefer using one of the 302 identified neurons in a worm versus neuron number 123,456,789 out of 80,000,000,000 in a human brain. The critical step is to choose the model system carefully and appropriately. It won’t work to ask questions about visual perception in a nematode worm (they have no eyes), but it is a fabulous organism to ask about the sense of touch (one of the great puzzles of modern neuroscience, you may be surprised to learn) because touch is critical for their survival and in the worm you can identify the parts that make up a touch sensor by using genetics to literally dissect it. The statistician George Box famously noted that “All models are wrong, but some are useful.”
As a quick sidelight this explains modern biology’s debt to Darwin. You often hear that contemporary biology could not exist without the explanatory power of Darwin’s theory of evolution by natural selection. But it is rarely made clear why this must be the case. Do physicians, for example, really have to believe in evolution to treat sick people? They do, at least implicitly, because the use of model systems to study more complicated ones relies on the relatedness of all biological organisms, us included. It is the process of evolution, the mechanisms of genetic inheritance and occasional mutation, that have conserved the genes responsible for making the proteins that confer electrical activity on neurons, as well as those that make kidneys and livers, and hearts and lungs work the way they do. If that were not the case, then we couldn’t study these things in worms, flies, rats, mice, or monkeys and believe that it would have relevance to humans. There would be no drugs, no surgical procedures, no treatments, and no diagnostic tests. All of these have been developed using model systems ranging from cells in culture dishes to rodents to primates. No evolution, no model systems, no progress.
Darwin himself used model systems to frame his questions about evolution—from his famous finches and observations of other isolated island species, to the raising of dogs and horses, and especially the breeding of pigeons, which was popular in his day. Flowers and plants were an especially useful model system because he could cultivate them in his greenhouse. It is notable that Darwin never travelled after he returned from the voyage of the Beagle. For a naturalist he was an almost pathological homebody. Many of his insights about the origins of species started with “simple” questions about the dynamic and changing nature of these model systems in his backyard—where the light was perhaps better.
There are similar examples of the use of model systems in physics and chemistry and all fields of science. Indeed classical physics, faced with impossible tasks like measuring the weight of the earth, used simplified systems made up of those innocuous balls rolling down inclined planes to measure the stuff of the universe. And post-Einstein physics is even more indebted to model systems, from colliders to computer simulations, for investigating things that happened long ago or far away.
But it is very easy, and very dangerous, to mistake a model system for a trivial pursuit. In the 1970s a US senator named William Proxmire took to presenting what he called the Golden Fleece Award to government-funded projects, scientific and otherwise, that he saw as boondoggles swindling the public out of their hard-earned tax money. The awards were quite popular in the press and as fodder for satirical comedy routines. Many were well deserved and indeed quite laughable. But in several cases, serious science projects were swept up in the witch hunt. They often had titles that sounded ridiculous when taken literally because they were using model systems. One famous example was the “Aspen Movie Map,” a project that filmed the streetscape of Aspen, Colorado, and translated it into a virtual tour of the city. Ridiculed by Proxmire, it later proved to be a forerunner of Google Street View.
At one time I had a grant from the National Institutes of Health (NIH) to study olfaction, the sense of smell, in salamanders. Aside from wondering why someone might devote his life to this quest, you could imagine many more critical places for NIH dollars to be spent. In fact, I have no abiding interest in how salamanders smell. But I can tell you that the biological nose is the best chemical detector on the face of the planet, and that the same principles by which all animals recognize odors in their environment operate in brains, human ones, to recognize and react to pharmaceutical drugs. Olfaction can tell us about molecular recognition, how we can tell the difference between molecules that are chemically very similar—the difference, for example, between a toxin and a treatment, a poison and a palliative. And if that’s not enough, the neurons in your nose and brain that are involved in this process are unique in their ability to regenerate throughout your life—the only brain cells that do this. So understanding how they work could tell us how to make replacement brain cells when they are lost to disease or injury. Why salamanders? Because they are robust creatures that are easy to keep in the laboratory and they happen to have bigger cells, which are therefore easier to work on, than many other vertebrates. Nonetheless, except for being bigger and being less sensitive to temperature (salamanders are cold-blooded), those cells are in the most critical respects just like the olfactory cells in your brain. So am I haunted by a need to know how salamanders smell? No, but they are an excellent model system for working out how brains detect molecules and how new brain cells might be generated. And, by the way, we also get to understand why food tastes good, or not, and how mosquitoes find your juicy body, and how smell plays a role in sex and reproduction.
My grant was titled “Molecular Physiology of the Salamander Olfactory System.” Definitely a contender for the Golden Fleece Award, although I think there was too little money involved for me to qualify. But since 1991, when that grant was funded, it has spawned a research program that has produced more than 100 scientific papers and, more important, trained nearly two dozen new scientists. And my case is not exceptional. It is easy to see folly in science: scientists talk funny and can dress weird, and they speak in riddles, literally, because this is what grant proposals are. When you are talking, writing, or thinking about ignorance, it is critical to be as precise as possible. I am interested in understanding olfaction, chemical recognition, and brain cell replacement—but those interests are too broad to be judged on their worth. Of course, they’re worthwhile, but how, specifically, would one go about understanding them? It’s in the details, and the details often turn out to be funny-sounding titles for grant proposals.
…
You may have noticed that I haven’t made much use of the word hypothesis in this discussion. This might strike you as curious, especially if you know a little about science, because the hypothesis is supposed to be the starting point for all experiments. The development of a hypothesis is typically considered the brainiest thing a scientist does—it is his or her idea about how something works, based on past data, perhaps some casual observations, and a lot of thinking, typically ending in an insightful and potentially new explanation. The best of these, in fact the only legitimate ones, suggest experiments that could prove them to be true or false—the false part of that equation being the most important. There are many experimental results that could be consistent with a hypothesis yet not prove it true. But it only has to be shown to be false once for it to be abandoned.
So doesn’t this sound like a pretty succinct prescription for ignorance? The hypothesis is a statement of what one doesn’t know and a strategy for how one is going to find it out. I hate hypotheses. Maybe that’s just a prejudice, but I see them as imprisoning, biasing, and discriminatory. Especially in the public sphere of science, they have a way of taking on a life of their own. Scientists get behind one hypothesis or another as if they were sports teams or nationalities—or religions. They have conferences where different laboratories or theorists present evidence supporting their hypothesis and derogating the other guy’s idea. Controversy is created and papers get published, especially in the higher-profile journals, because they are controversial—not necessarily because they are the best science. Suddenly, from nowhere it seems, there is a bubble of interest and attention, much like the speculative economic bubbles that develop in commodities, and more scientists are attracted to this “hot” field. There are dozens of examples—is the universe stable or expanding, is learning due to changes in the membrane of the neuron before the synapse or after the synapse (“pre or post,” as it’s known in the jargon), is there water on Mars (and does it matter), is consciousness real or an illusion, and on and on. Some of these get resolved, while many just fade away after some time in the spotlight, either due to fatigue or because the question gets transformed into a series of smaller, more manageable questions that are less glitzy. Newton famously declared, “Hypotheses non fingo (I frame no hypotheses) … whatever is not deduced from the phenomena is to be called a hypothesis, and hypotheses … have no place in experimental philosophy.” Just the data, please.
At the personal level, for the individual scientist, I think the hypothesis can be just as useless. No, worse than useless, it is a real danger. First, there is the obvious worry about bias. Imagine you are a scientist running a laboratory. You have a hypothesis, and naturally you become dedicated to it—it is, after all, your very clever idea about how things will turn out. Like any bet, you prefer it to be a winner. Do you now unconsciously favor the data that prove the hypothesis and overlook the data that don’t? Do you, ever so subtly, select one data point over another? There is always an excuse to leave an outlying data point out of the analysis (e.g., “Well, that was a bad day, nothing seemed to work,” “The instruments probably had to be recalibrated,” “Those observations were made by a new student in the lab”). In this way, slowly but surely, the supporting data mount while the opposing data fade away. So much for objectivity.
Worse even than this, you may often miss data that would lead to a better answer, or a better question, because it doesn’t fit your idea. Alan Hodgkin, a famous neurophysiologist responsible for describing how the voltage in neurons changes rapidly when they are stimulated (for which he won a Nobel Prize), would go around the laboratory each day visiting with each student or postdoctoral researcher working on one project or another. If you showed him data from yesterday’s experiments that were the expected result, he would nod approval and move on. The only way to get his attention was to have an anomalous result that stuck out. Then he would sit down, light his pipe, and go to work with you on what this could mean. But there are not many like Alan Hodgkin.
The alternative to hypothesis-driven research is what I referred to earlier as curiosity-driven research. Although you might have thought that curiosity was a good thing, the term is more commonly used in a derogatory manner, as if simple curiosity was too childish a thing to drive a serious research project. “Just a fishing expedition” is a criticism that is not at all uncommon in grant reviews, and it is usually enough to sink an application. I hope this sounds as ridiculous to you as it does to me. Anyone who thinks we aren’t all on a fishing expedition is just kidding himself. The trick is to have some idea about where to fish (e.g., stay out of polluted waters, go where there are lots of other fishermen catching lots of fish—or avoid them since the fish are now all gone from there) and some sense of what’s likely to be tasty and what not. I’m not sure you can hope to know much more than that.