Ignorance
• In the early 20th century, using what were then the new techniques of electrical recording from living tissue, two pioneering neuroscientists, Lord Adrian and Keffer Hartline, recorded electrical activity in the brain. The most prominent form of this activity was a train of brief pulses of voltage, typically less than a tenth of a volt and lasting only a few milliseconds. Adrian and Hartline characterized them as “spikes” because they appeared on their recording equipment as sharp vertical lines that looked like “spikes” of voltage. These spikes of voltage could appear singly or in trains that contained hundreds of spikes and could last for several seconds. Adrian recorded them in the cells that bring messages from the skin to the brain, and Hartline found them in cells in the retina. In both cases they noted that increases in the strength of the stimulus—touch or light—caused more rapid trains of spikes in these cells. These spikes have since been recorded in virtually every area of the brain and in all the sensory organs, and they have come to be regarded as the language of the brain—that is, they encode all the information passing into and around the brain. Spikes are a fundamental unit of neurobiology. For the last 75 years my neuroscience colleagues and I have been studying spikes and teaching our students about spikes and making grand theories about how the brain works based on spiking behavior. Some of it is true. But what have we missed by concentrating on spikes for the last eight decades? A lot, it turns out. There are many other sorts of electrical signals in the brain, not as prominent as spikes, but that’s a reflection of our recording technology, not of the brain itself. These other processes, as well as chemical events that are not electrical, and therefore can’t even be seen with an electrical apparatus, are now being recognized as perhaps the more salient features of brain activity. But we have been mesmerized by spikes, and the rest has been virtually invisible, even though it is right in front of our faces, happening all the time in our brains. Spike analysis was a successful industry in neuroscience that occupied us for the better part of a century and filled journals and textbooks with mountains of data and facts. But it may have been too much of a good thing. We should also have been looking at what spikes didn’t tell us about the brain.
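To make the rate-coding observation concrete, here is a minimal sketch in Python (my own illustration, with made-up rates, not Adrian's or Hartline's data): a toy model neuron fires spikes at random times, and the number of spikes per second grows with the strength of the stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_train(stimulus_strength, duration_s=1.0, max_rate_hz=100.0):
    """Spike times (in seconds) for a toy Poisson neuron whose firing
    rate scales with stimulus strength (0.0 to 1.0)."""
    rate = max_rate_hz * stimulus_strength        # spikes per second
    n_spikes = rng.poisson(rate * duration_s)     # how many spikes occur
    return np.sort(rng.uniform(0.0, duration_s, n_spikes))

# Stronger stimulus -> more rapid train of spikes, as Adrian and Hartline observed.
for strength in (0.1, 0.5, 1.0):
    print(f"stimulus {strength:.1f} -> {len(spike_train(strength))} spikes in 1 s")
```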
I can’t resist one more very recent example.
• For at least as long as I have been teaching neuroscience, I have told students that the human brain is composed of about 100 billion neurons and 10 times that number of glial cells—a kind of cell that nourishes neurons and provides some packing and structure for the organ (the word glia comes from Greek for “glue”). These numbers are also in all the textbooks. In early 2009 I received an e-mail from a neuroanatomist in Brazil named Suzana Herculano-Houzel asking me if I would help her group’s research project by taking a short survey. Among the questions on that survey was how many neurons and glial cells I thought were in the human brain, and where I got those numbers from. The first part of the question was easy—I filled in the stock answers. But actually I wasn’t sure where that number had come from. It was in the textbooks, but no references to any work were ever given for it. No one, it turned out, knew where the number came from. It sounded reasonable; it wasn’t, after all, an exact number, not like 101,954,467,298 neurons, which would have required a reference to back it up. A little over a year later I heard back from Suzana. Her group had developed a new method for counting cells that was more exact and less prone to errors and could be used on big tissues, like brains. They counted the number of neurons and the number of glial cells in several human brains. For neurons they found that the average number for humans is 86 billion—about 14% less than we thought; and more remarkably for glial cells there were about as many as there were neurons—not 10 times more! In one fell swoop, we lost 929 billion cells in our brains! How could this have happened? How did that first, wrong number become so widespread? It seemed as though the textbook writers had just picked it up from one another and kept passing it around. The number became true as the result of repetition, not experiment.
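For readers who want the bookkeeping behind that sentence, a quick check (using roughly 86 billion neurons and, for the glial cells, roughly 85 billion, close to the count actually reported) runs, in billions of cells:

$$\underbrace{(100 + 10 \times 100)}_{\text{old textbook estimate}} \;-\; \underbrace{(86 + 85)}_{\text{new counts}} \;=\; 1100 - 171 \;=\; 929.$$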
WHAT SCIENCE MAKES
George Bernard Shaw, in a toast at a dinner feting Albert Einstein, proclaimed, “Science is always wrong. It never solves a problem without creating 10 more.” Isn’t that glorious? Science (and I think this applies to all kinds of research and scholarship) produces ignorance, possibly at a faster rate than it produces knowledge.
Science, then, is not like the onion in the often used analogy of stripping away layer after layer to get at some core, central, fundamental truth. Rather it’s like the magic well: no matter how many buckets of water you remove, there’s always another one to be had. Or even better, it’s like the widening ripples on the surface of a pond, the ever larger circumference in touch with more and more of what’s outside the circle, the unknown. This growing forefront is where science occurs. Curious then that in so many settings—the classroom, the television special, the newspaper accounts—it’s the inside of the circle that seems so enticing, rather than what’s out there on the ripple. It is a mistake to bob around in the circle of facts instead of riding the wave to the great expanse lying outside the circle. But that’s still where most people who are not scientists find themselves.
Now it may seem obvious to say that science is about the unknown, but I would like to have a deeper look at this apparently simple statement, to see if we can’t mine it for something more profound. Stories abound in the history of science of well-respected scientists claiming that everything but the measurements out to another decimal place was now known and all major outstanding questions were settled. At one time or another, geography, physics, chemistry, and so on were all declared finished. Obviously these claims were premature. Seems we don’t always know what we don’t know. In the inimitable words of no less than Donald H. Rumsfeld, the former US secretary of defense, now best known for his incompetent handling of the war in Iraq, “there are known unknowns and unknown unknowns.” He was roundly ridiculed for this and other tortured locutions, and in matters of war and security it might not be the clearest sort of thinking, but he was certainly right that there are things we don’t even know we don’t know. We might even go a step further and recognize that there are unknowable unknowns—things that we cannot know due to some inherent and implacable limitation. History, as a subject, could be said to be fundamentally unknowable; the data are lost and they are not recoverable.
So it’s not so much that there are limits to our knowledge; more critically, there may be limits to our ignorance. Can we investigate these limits? Can ignorance itself become a subject for investigation? Can we construct an epistemology of ignorance like we have one for knowledge? Robert Proctor, a historian of science at Stanford University who is perhaps best known as an implacable foe of the tobacco industry’s misinformation campaigns, has coined the word agnotology for the study of ignorance. We can investigate ignorance with the same rigor as philosophers and historians have been investigating knowledge.
Starting with the idea that good ignorance springs from knowledge, we might begin by looking at some of the limits on knowledge in science and see what their effects have been on ignorance generation, that is, on progress.
THREE
Limits, Uncertainty, Impossibility, and Other Minor Problems
The notion of discovery as uncovering or revealing is in essence a Platonic view that the world already exists out there and eventually we will, or could, know all about it. The tree falling in an uninhabited forest indeed makes noise—as long as noise is defined as a simple physical process in which air molecules are forced to move in compression waves. That they are perceived by us as “sound” simply means that evolution hit upon the possibility of detecting this movement of air with some specialized sensors that eventually became our ears. Now, of course, there are things going on out there that evolution perhaps ignored—leading to our ignorance of them. For example, consider the wide stretches of the electromagnetic spectrum, including most obviously the ultraviolet and infrared but also several million additional wavelengths that we now detect only by using devices such as televisions, cell phones, and radios. All were completely unknown, indeed inconceivable, to our ancestors of just a few generations ago.
It is a fairly clear and simple point to understand that our sensory apparatus, molded by evolution to enable us to find food for ourselves and avoid becoming food for someone else long enough to have sex and produce offspring, is not capable of perceiving great parts of the universe around us. But that same evolutionary process molded our mental apparatus as well. Are there things beyond its comprehension? Just as there are forces beyond the perception of our sensory apparatus, there may be perspectives that are beyond the conception of our mental apparatus. The renowned early 20th-century biologist J. B. S. Haldane, known for his keen and precise insights, admonished that “not only is the universe queerer than we suppose, it is queerer than we can suppose.” Since then we have discovered neutrinos and quarks of various flavors, possible new dimensions, long molecules of a snotty substance called DNA that contains our genes, antibodies that recognize us from others, and we have used this and other knowledge to invent television, telecommunications, and an endless list of truly amazing things. And for all of this, Haldane’s aphorism actually seems more correct and relevant now than when he uttered it in 1927.
In a similar vein, Nicholas Rescher, a philosopher and historian of science, has coined the term Copernican cognitivism. If the original Copernican revolution showed us that there is nothing privileged about our position in space, perhaps there is also nothing privileged about our cognitive landscape either. In Edwin Abbott’s 19th-century fantasy novel, a civilization called Flatland is populated by geometric beings (squares, circles, triangles) that live in only two dimensions and cannot imagine a third dimension. It is surprisingly easy to identify with the lives of these creatures, leaving one to wonder whether we don’t all live in a place that is at least one dimension short. The inhabitants of Flatland are mystified and terrified by the appearance one day of a circle that can magically change its circumference. It appears from nowhere as a point, grows slowly to a small circle, becomes larger and larger, and then just as smoothly diminishes in size until it is a point again, and then, incomprehensibly to the Flatlanders, disappears. It is of course just the observation of a three-dimensional sphere passing through the two-dimensional plane of Flatland. But this simple solution is inconceivable to the inhabitants of Flatland, just as it is almost inconceivable to us that they could be so stupid, no matter that the 11 or so dimensions proposed by string theory are well beyond our conception (or the physical limits of our senses).
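The geometry the Flatlanders cannot see is, from one dimension up, a single formula: a sphere of radius R whose center sits a distance z from the plane intersects the plane in a circle of radius

$$r(z) = \sqrt{R^2 - z^2}, \qquad |z| \le R,$$

so the apparition begins as a point when z = R, swells to the full radius R as the center crosses the plane, shrinks back to a point, and then vanishes once |z| > R.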
Let’s take an example from the history of science. Since the Greeks started it all, there has been ongoing controversy in science as to whether the world is composed essentially of a very large number of very small particles (atomism) or is a continuum, a wave not a particle, a smooth progression of time only falsely and arbitrarily divided up into seconds or minutes, a single expanse of space not divided by degrees and coordinates. As Bertrand Russell is claimed to have remarked, is the universe a bucket of sand or a pail of molasses? We tend to see the continuum better than the discrete entities because the infinitesimal is not available to our senses. Is this what stands in the way of our breaking through the apparent paradoxes of quantum physics? Is it a shortcoming in our perceptual and cognitive apparatus?
There is a kind of discomfort that arises from this line of reasoning. As if there were things going on, right under our noses, that we didn’t know about. Worse than that, couldn’t know about. And even more discomforting is that we may never have the capability to know about them. There may be limits. If there are sensory stimuli beyond our perception, why not ideas beyond our conception? Have we run into any of those limits yet? Would we know them if we did? Comedian philosopher George Carlin wryly observed that “One can never know for sure what a deserted area looks like.”
OFFICIAL LIMITS
In science there are so far two well-known instances where knowledge is shown to have limits. The famous physicist Werner Heisenberg’s Uncertainty Principle tells us that in the subatomic universe there are theoretically imposed limits to knowledge—the position and momentum of a subatomic particle (as well as other pairs of observations) can never be known simultaneously. Similarly, in mathematics, Kurt Gödel in his Incompleteness Theorems demonstrated that every logical system that is complex enough to be interesting must remain incomplete. Are there other limits like these? For example, in biology some ask whether a brain can understand itself. Turbulence or the weather may be fundamentally unpredictable in ways we don’t yet grasp. We don’t know. Do they matter? Surprisingly they don’t really have much effect on vast parts of the scientific enterprise, at least not as much as some metaphysically minded writers would have you believe. Why not? Let’s have a closer look—for those among us acutely aware of our ignorance, it is sometimes instructive to see how not knowing may not matter.
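For the record, Heisenberg's limit can be written in one line (its standard modern form, not spelled out in the text above): the uncertainties in a particle's position and momentum are bound together by

$$\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},$$

so that narrowing one necessarily widens the other; the bound is set by the reduced Planck constant ħ, not by the quality of anyone's instruments.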
In the sphere of subatomic particles, “uncertainty” does make a difference, but this is a very rarefied place, and generally of little concern to corporeal beings such as ourselves. But it is a useful example of a limitation that rose up unexpectedly and could have set physics on its ear. In fact, it revealed new and previously unknown unknowns, it gave rise to decades of fruitful and unanticipated advances, and it created stranger and yet more interesting problems that remain active areas of inquiry today. Entanglement, one of the most peculiar results in the whole mad zoo of quantum physics, grew almost directly from the required uncertainty unveiled by Heisenberg.
Heisenberg’s result is not simply a case of lacking a good-enough measuring device. The very nature of the universe, what is called the wave-particle duality of subatomic entities, makes these measurements impossible, and their impossibility proves the validity of this deep view of the universe. Some fundamental things can never be known with certainty. And the hard fact is that if you can’t measure starting values, you can never predict the future state. If you can’t measure the position (or the momentum) of a particle at time zero, you can’t know, for sure, where the particle will be at any future time. The universe is not deterministic; it is probabilistic, and the future can’t be predicted with certainty. Now it is true that, as a practical matter, for things that have masses greater than about 10⁻²⁸ grams, the probabilities become so large that predicting how they will act is quite possible—baseball players regularly predict the path of 150-gram spheres on a trajectory covering 100 meters, and if a shoe that has been thrown by an irate journalist is coming at your head from the right, ducking to the left is certainly a good bet. Unfortunately it is just this discontinuity in scale between the quantum and the inhabited worlds that makes quantum uncertainty so difficult to appreciate. As many of the pioneers in quantum mechanics noted, these phenomena can only be understood by willingly forgoing any sensible (i.e., sensory-based) description of the world. How ironic that the weird but undeniable results of quantum mechanics rest on a rigorous mathematical scaffold, even while it is conceptually available only in metaphorical allusions like “entanglement” or Schrödinger’s cat that is at once alive and dead and neither. But regardless of whether you can grasp it, the important thing to know about quantum uncertainty is that, whatever it may look like, it actually has not been a limitation; rather it has spawned more research, more inquiry, and more new ideas. Sometimes limitations on knowledge can be very useful.
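To see how that mass threshold plays out in numbers, here is a rough back-of-the-envelope sketch in Python (the baseball and electron figures are illustrative choices of mine, not values from the text):

```python
# Heisenberg bound: delta_x * delta_p >= hbar / 2, so the smallest possible
# velocity uncertainty is delta_v = hbar / (2 * m * delta_x).
HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def min_velocity_uncertainty(mass_kg, position_uncertainty_m):
    return HBAR / (2 * mass_kg * position_uncertainty_m)

# A 150-gram baseball whose position is known to within a millimeter:
print(min_velocity_uncertainty(0.150, 1e-3))      # ~3.5e-31 m/s, utterly negligible

# An electron (~1e-27 grams) confined to roughly the size of an atom:
print(min_velocity_uncertainty(9.11e-31, 1e-10))  # ~5.8e5 m/s, enormous
```

At everyday masses the quantum fuzziness is smaller than anything a measurement could ever notice, which is why ducking the thrown shoe remains a safe bet.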
Then there is Gödel’s daring challenge to the completeness of mathematics. The diminutive and unassuming Gödel began hatching his ideas at a time when scientific and philosophical thinking was dominated by positivism, the oversized and intellectually aggressive belief that everything could be explained by empirical observation and logic because the universe, and all it contained, was essentially mechanistic. In mathematics this view was advanced especially by David Hilbert, who proposed a philosophy called formalism that sought to describe all of mathematics in a set of formal rules, axioms to a mathematician, that were logical and consistent and, well, complete.
He was not the first or the only great mathematician to have this dream. The 17th-century German philosopher and mathematician Gottfried Leibniz, one of the inventors of calculus, had a lifelong project to construct a “basic alphabet of human thoughts” that would allow one to take combinations of simple thoughts and form any complex idea, just as a limited number of words can be combined endlessly to form any sentence—including sentences never before heard or spoken. Thus, with a few primary simple thoughts and the rules of combination one could generate computationally (although in Leibniz’s day it would have been mechanically) all the possible human thoughts. It was Leibniz’s idea that this procedure would allow one to determine immediately if a thought were true or valuable or interesting in much the same way these judgments can be made about a sentence or an equation—is it properly formed, does it make sense, is it interesting? He was famously quoted as saying that any dispute could be settled by calculating—“Let us calculate!” he was apparently known to blurt out in the middle of a bar brawl. It was this obsession that led Leibniz to develop the branch of mathematics known today as combinatorics. This in turn sprang from the original insight that all truths can be deduced from a smaller number of primary or primitive statements, which could be made no simpler, and that mathematical operations (multiplication was the one Leibniz proposed, along with prime factorization) could derive all subsequent thoughts. In many ways this was the beginning of modern logic; indeed, some consider his On the Art of Combinations the major step leading from Aristotle to modern logic, although Leibniz himself never made such claims. Does it somehow seem naive for him to have proposed that we could think of everything if we just built this little calculating device and put in a few simple ideas? Leibniz himself seems to have recognized his naivety as he notes that the idea, which came to him when he was 18 years old, excited him greatly “assuredly because of youthful delight.” Nonetheless, he was obsessed with this “thought alphabet” and its implications for most of the rest of his life, and On the Art of Combinations, which was part of the project, introduced a powerful new mathematics.
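As a toy illustration of that combinatorial move (my own encoding, in the spirit of Leibniz's characteristic numbers rather than his own notation), primitive ideas can be tagged with prime numbers, compound thoughts formed by multiplication, and the primitives recovered again by factorization:

```python
# Hypothetical primitives; Leibniz's scheme would have needed far more.
PRIMITIVES = {"animal": 2, "rational": 3, "mortal": 5}

def encode(ideas):
    """A compound thought is the product of the primes of its primitive ideas."""
    n = 1
    for idea in ideas:
        n *= PRIMITIVES[idea]
    return n

def decode(n):
    """Recover the primitive ideas by trial division over the known primes."""
    ideas = []
    for idea, prime in PRIMITIVES.items():
        while n % prime == 0:
            ideas.append(idea)
            n //= prime
    return ideas

human = encode(["animal", "rational"])  # 2 * 3 = 6
print(human, decode(human))             # 6 ['animal', 'rational']
```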