Note that we can set this threshold to any arbitrary value. However, the higher we set it, the longer we have to wait for a decision. There is a speed/accuracy tradeoff: We can wait a long time and make a highly accurate but conservative decision, or we can hazard a response earlier but at the cost of making more errors. Whatever our choice, we will always make a few errors.
Suffice it to say that the decision algorithm I sketched—which simply describes what any rational creature should do in the face of noise—is now considered a fully general mechanism for human decision making. It explains our response times, their variability, and the entire shape of their distribution. It describes why we make errors, how errors relate to response time, and how we set the speed/accuracy tradeoff. It applies to all sorts of decisions, from sensory choices (Did I see movement or not?) to linguistic ones (Did I hear “dog” or “bog”?) to higher-level conundrums (Should I do this task first or second?). And in more complex cases, such as performing a multidigit calculation or a series of tasks, the model characterizes our behavior as a sequence of accumulate-and-threshold steps, which turns out to be an excellent description of our serial, effortful Turing-like computations.
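The process is easy to state in computational terms. Here is a minimal sketch of an accumulate-to-threshold (drift-diffusion) process; the drift, noise, and threshold values are illustrative assumptions, not fitted parameters from the research described above:

```python
import random

def decide(drift=0.1, noise=1.0, threshold=10.0):
    """Accumulate noisy evidence until it crosses +/- threshold.

    Each step adds a sample whose mean (the "drift") favors the
    correct choice. Returns (choice, steps): +1 is correct, -1 error.
    """
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + random.gauss(0.0, noise)
        steps += 1
    return (1 if evidence > 0 else -1), steps

# Raising the threshold trades speed for accuracy in one parameter.
for thr in (2.0, 5.0, 10.0):
    trials = [decide(threshold=thr) for _ in range(2_000)]
    error_rate = sum(1 for choice, _ in trials if choice < 0) / len(trials)
    mean_steps = sum(steps for _, steps in trials) / len(trials)
    print(f"threshold={thr}: errors={error_rate:.1%}, mean steps={mean_steps:.0f}")
```

Higher thresholds yield slower but more accurate decisions, and the error rate never quite reaches zero—exactly the tradeoff described above.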
Furthermore, this behavioral description of decision making is now leading to major progress in neuroscience. In the monkey brain, neurons can be recorded whose firing rates index an accumulation of relevant sensory signals. The theoretical distinction between evidence accumulation and threshold helps parse out the brain into specialized subsystems that make sense from a decision-theoretic viewpoint.
As with any elegant scientific law, many complexities are waiting to be discovered. There is probably not just one accumulator but many, as the brain accumulates evidence at each of several successive levels of processing. Indeed, the human brain increasingly fits the bill for a superb Bayesian machine that makes massively parallel inferences and microdecisions at every stage. Many of us think our sense of confidence, stability, and even conscious awareness may result from such higher-order cerebral “decisions” and will ultimately fall prey to the same mathematical model. Valuation is also a key ingredient, one that I skipped, although it demonstrably plays a crucial role in weighing our decisions. Finally, the system is rife with a prioris, biases, time pressures, and other top-down evaluations that draw it away from strict mathematical optimality.
Nevertheless, as a first approximation, this law stands as one of the most elegant and productive discoveries of 20th-century psychology: Humans act as near-optimal statisticians, and our decisions correspond to an accumulation of the available evidence up to some threshold.
LORD ACTON’S DICTUM
MIHALY CSIKSZENTMIHALYI
Distinguished Professor of Psychology and Management, Claremont Graduate University; founding codirector of CGU’s Quality of Life Research Center; author, Flow: The Psychology of Optimal Experience
I hope I will not be drummed out of the corps of social science if I confess that I can’t think of an explanation in our field that is both elegant and beautiful. Perhaps deep. . . . I guess we are still too young to have explanations of that sort. But there is one elegant and deep statement (which, alas, is not quite an “explanation”) that comes close to fulfilling the Edge Question criteria and that I find very useful as well as beautifully simple.
I refer to the well-known lines Lord Acton wrote in a letter from Naples in 1887 to the effect that “Power tends to corrupt, and absolute power corrupts absolutely.” At least one philosopher of science has written that on this sentence an entire science of human beings could be built.
I find that the sentence offers the basis for explaining how a failed painter like Adolf Hitler and a failed seminarian like Joseph Stalin could end up with the blood of millions on their hands; or how the Chinese emperors, the Roman popes, and the French aristocracy failed to resist the allure of power. When a religion or ideology becomes dominant, the lack of controls will result in widening spirals of license, leading to degradation and corruption.
It would be nice if Acton’s insight could be developed into a full-fledged explanation before the hegemonies of our time, based on blind faith in science and the worship of the Invisible Hand, follow older forms of power into the dustbins of history.
FACT, FICTION, AND OUR PROBABILISTIC WORLD
VICTORIA STODDEN
Computational legal scholar; assistant professor of statistics, Columbia University
How do we separate fact from fiction? We are frequently struck by seemingly unusual coincidences. Imagine seeing an inscription describing a fish in your morning reading, and then at lunch you are served fish and the conversation turns to “April fish” (or April fools). That afternoon, a work associate shows you several pictures of fish, and in the evening you are presented with an embroidery of fishlike sea monsters. The next morning, a colleague tells you she dreamed of fish. This might start to seem spooky, but it turns out that we shouldn’t find it surprising. The reason has a long history, resulting in the unintuitive insight of building randomness directly into our understanding of nature, through the probability distribution.
Chance as Ignorance
Tolstoy was skeptical of our understanding of chance. He gave an example of a flock of sheep, one of which had been chosen for slaughter. This one sheep was given extra food separately from the others, and Tolstoy imagined that the flock, with no knowledge of what was coming, must find the continually fattening sheep extraordinary—something he thought they would assign to chance due to their limited viewpoint. Tolstoy’s solution was for the flock of sheep to stop thinking that things happen only for “the attainment of their sheep aims” and realize that there are hidden aims that explain everything perfectly well, and so no need to resort to the concept of chance.
Chance as an Unseen Force
Eighty-three years later, Carl Jung published a similar idea in his well-known essay “Synchronicity: An Acausal Connecting Principle.” He postulated the existence of a hidden force responsible for the occurrence of seemingly related events that otherwise appear to have no causal connection. The initial story of the six fish encounters is Jung’s, taken from his book. He finds this string of events unusual—too unusual to be ascribable to chance. He thinks something else must be going on and labels it the acausal connecting principle.
Persi Diaconis, Mary V. Sunseri Professor of Statistics and Mathematics at Stanford and a former professor of mine, thinks critically about Jung’s example: Suppose we encounter the concept of fish once a day on average, according to what statisticians call a Poisson process (another fish reference!). The Poisson process is a standard mathematical model for counts—for example, radioactive decay seems to follow a Poisson process. The model presumes a certain fixed rate at which observations appear on average, and otherwise they are random. So we can consider a Poisson process for Jung’s example with a long-run average rate of one observation per twenty-four hours and calculate the probability that some twenty-four-hour window contains six or more observations of fish at some point over a six-month period. Diaconis finds the chance to be about 22 percent. Seen from this perspective, Jung shouldn’t have been surprised.
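The figure is easy to check numerically. Below is a minimal sketch; the six-month horizon and the Monte Carlo scan over all twenty-four-hour windows are my reading of the setup, not Diaconis’s published computation:

```python
from math import exp, factorial

import numpy as np

# Exact: chance that one FIXED 24-hour day contains six or more
# events when they arrive at an average rate of one per day.
p_one_day = 1 - sum(exp(-1) / factorial(i) for i in range(6))  # ~0.0006

rng = np.random.default_rng(0)

def scan_probability(days=183.0, rate=1.0, window=1.0, k=6, trials=20_000):
    """Monte Carlo: chance that SOME `window`-day span within `days`
    days of a rate-`rate` Poisson process holds at least `k` events."""
    hits = 0
    for _ in range(trials):
        n = rng.poisson(rate * days)  # total events in the whole period
        if n < k:
            continue
        times = np.sort(rng.uniform(0.0, days, size=n))
        # Some span of length `window` holds >= k events exactly when
        # k consecutive event times lie within `window` of each other.
        if np.any(times[k - 1:] - times[: n - k + 1] <= window):
            hits += 1
    return hits / trials

print(f"one fixed day:        {p_one_day:.4f}")
print(f"some day in 6 months: {scan_probability():.2f}")  # roughly 0.2
```

A single given day with six fish references is genuinely rare, but over months of days some such cluster becomes quite likely—the heart of Diaconis’s point.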
The Statistical Revolution: Chance in Models of Data Generation
Only about two decades after Tolstoy penned his lines about sheep, the English mathematician Karl Pearson brought about a statistical revolution in scientific thinking with a new idea of how observations arose—the same idea used by Diaconis in his probability calculation. Pearson suggested that nature presents data from an unknown distribution but with some random scatter. His insight was that this is a different concept from measurement error, which adds additional error when the observations are actually recorded.
Before Pearson, science dealt with things that were “real,” such as laws describing the movement of the planets or blood flow in horses (to use examples from David Salsburg’s book, The Lady Tasting Tea). What Pearson made possible was a probabilistic conception of the world. Planets didn’t follow laws with exact precision, even after accounting for measurement error. The exact course of blood flow differed in different horses, but the horse circulatory system wasn’t purely random. In estimating distributions rather than the phenomena themselves, we are able to abstract a more accurate picture of the world.
Chance Described by Probability Distributions
That measurements themselves have a probability distribution was a marked shift from confining randomness to the errors in the measurement. Pearson’s conceptualization is useful because it permits us to estimate whether what we see is likely or not, under the assumptions of the distribution. This reasoning is now our principal tool for judging whether or not we think an explanation is likely to be true.
We can, for example, quantify the likelihood of drug effectiveness or carry out particle detection in high-energy physics. Is the distribution of the mean-response difference between drug treatment and control groups centered at zero? If that seems likely, we can be skeptical of the drug’s effectiveness. Are candidate signals so far from the distribution for known particles that they must be from a different distribution, suggesting a new particle? Detecting the Higgs boson requires such a probabilistic understanding of the data, to differentiate Higgs signals from other events. In all these cases, the key is that we want to know the characteristics of the underlying distribution that generated the phenomenon of interest.
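For the drug example, that judgment can be made in a few lines. Here is a minimal sketch using invented numbers and a permutation test—one of several standard choices; nothing in the essay prescribes a particular test:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented responses, for illustration only.
treatment = np.array([4.1, 5.3, 3.8, 6.0, 4.9, 5.5])
control = np.array([3.9, 4.2, 3.5, 4.8, 4.0, 4.4])
observed = treatment.mean() - control.mean()

# If the difference in means is really centered at zero (no drug
# effect), the group labels are exchangeable, so reshuffling them
# shows how unusual the observed difference actually is.
pooled = np.concatenate([treatment, control])
n = len(treatment)
trials = 10_000
count = 0
for _ in range(trials):
    rng.shuffle(pooled)
    if pooled[:n].mean() - pooled[n:].mean() >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p-value: {p_value:.3f}")
# Small p-value: "centered at zero" looks unlikely, so an effect is
# plausible. Large p-value: remain skeptical of the drug.
```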
Pearson’s incorporation of randomness directly into the probability distribution enables us to think critically about likelihoods and quantify our confidence in particular explanations. We can better evaluate when what we see has special meaning and when it does not, permitting us to better reach our “human aims.”
ELEGANT = COMPLEX
GEORGE CHURCH
Professor of genetics, Harvard Medical School; director, Personal Genome Project
Many would say the opposite, elegance = simplicity. They have (classical) physics envy—for smooth, linear physics and describably delicious four-letter words, like F = ma. But modern science has moved on, embracing the complex. Occam now uses a Web-enabled fractal e-razor. Even in mathematics, stripped of the awkward realities of nonideal gases, turbulence, and nonspherical cows, simple statements about integers like Fermat’s a^n + b^n = c^n and wrangling maps with four colors take many years and pages (occasionally computers) to prove.
The question is not “What is your favorite elegant explanation?” but “What should your favorite elegant explanation be?” We’re capable of changing not only our minds but also the whole fabric of human nature. As we engineer, we recurse—successively approximating ourselves as an increasingly survivable force of nature. If so, what will we ultimately admire? Our evolutionary baggage served our ancestors well but could kill our descendants. Faced with modern foods, our frugal metabolisms feed a diabetes epidemic. Our love of “greedy algorithms” leads to exhausted resources. Our too-easy switching from rationality to blind faith or fear-based decision making can be manipulated politically to drive conspicuous consumption. (Think Easter Island, 163 square kilometers of devastation, scaled to Earth Island at 510 million square kilometers.) “Humans” someday may be born with bug-fixes for dozens of current cognitive biases, as well as intuitive understanding and motivation to manipulate quantum weirdnesses, dimensions beyond three, super rare events, global economics, etc. Agricultural and cultural monocultures are evolutionarily bankrupt. Evolution was only briefly focused on surviving in a sterile world of harsh physics, but ever since has focused on life competing with itself. Elegant explanations are those that predict the future farther and better. Our explanations will help us dodge asteroids, solar red-giant flares, and even close encounters with the Andromeda galaxy. But most of all, we will diversify to deal with our own ever-increasing complexity.
TINBERGEN’S QUESTIONS
IRENE PEPPERBERG
Research associate & lecturer, Harvard University; adjunct associate professor of psychology, Brandeis University; author, Alex & Me
Why do we—and all other creatures—behave as we do? No single answer really exists. I chose the ethologist and ornithologist Nikolaas Tinbergen’s questions for exactly that reason, because sometimes there is no one deep, elegant, and beautiful explanation. Much like a teacher of fishing rather than a giver of fish, Tinbergen did not try to provide a global explanation but instead gave us a scaffolding upon which to build our own answers to each individual behavioral pattern we observe—a scaffolding that can be used not only for the ethological paradigms for which he was famous but also for all forms of behavior in any domain. Succinctly, Tinbergen asked:
• What is the mechanism? How does it seem to work?
• What is the ontogeny? How does it develop over time?
• What is its function? What are all the possible reasons it is done?
• What is its origin? What are the many ways in which it could have arisen?
In attempting to answer each of these questions, we are forced to think, at the very least, about the interplay of genes and environment, of underlying processes (neuroanatomy, neurophysiology, hormones, and so on), of triggers and timing, of what advantages and disadvantages are balanced, and of how these may have changed over time.
Furthermore, unlike most “favorite” explanations, Tinbergen’s questions are enduring. Answers to his questions often reflect a current zeitgeist in the scientific community, but those answers mutate as additional knowledge becomes available. His questions challenge us to rethink our basic presumptions each time another chunk of data lands in our laps, whatever our field of study. Our fascination with simple elegant answers strikes me as a Douglas Adams (Hitchhiker’s Guide to the Galaxy) pursuit: We may find “42,” but unless we know how to formulate the appropriate questions, the answer isn’t always very meaningful.
THE UNIVERSAL TURING MACHINE
GLORIA ORIGGI
Philosopher, Centre National de la Recherche Scientifique, Paris; editor, Text-e: Text in the Age of the Internet
“There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy,” says Hamlet to his friend Horatio. An elegant way to point to all the unsolvable, untreatable questions that haunt our lives. One of the most wonderful demonstrations of all time ends up with the same sad conclusion: Some mathematical problems are simply unsolvable.
In 1936, the British mathematician Alan Turing conceived the simplest and most elegant computer ever, a device (as he later described it in a 1948 essay titled “Intelligent Machinery”) with
an infinite memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behaviour is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behaviour of the machine. However the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
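Turing’s device is, in modern terms, a read/write head driven by a finite lookup table. A minimal simulator makes the quoted description concrete (the encoding, blank symbol, and example program below are illustrative inventions, not Turing’s own notation):

```python
from collections import defaultdict

def run(program, tape_input, start="q0", halt="HALT", max_steps=1_000):
    """Simulate a one-tape machine: `program` maps (state, scanned
    symbol) to (symbol to write, move L/R, next state)."""
    tape = defaultdict(lambda: "_", enumerate(tape_input))  # "_" = blank
    head, state = 0, start
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = program[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1  # tape moves back and forth
    return "".join(tape[i] for i in sorted(tape))

# Example program: flip every bit, halting at the first blank square.
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "HALT"),
}
print(run(flip, "1011"))  # -> 0100_
```

A universal machine does nothing more exotic: its program reads another machine’s table from the tape and simulates it.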
An abstract machine, conceived by the mind of a genius, to solve an unsolvable problem: the decision problem. That is, for each logical formula in a theory, is it possible to decide in a finite number of steps whether the formula is valid in that theory? Well, Turing showed that it is not possible. The decision problem, or Entscheidungsproblem, was well known to mathematicians: Posed in its general form by David Hilbert and Wilhelm Ackermann in 1928, it grew out of the famous list of unsolved problems Hilbert had presented to the mathematics community in 1900, thus setting most of the 20th century’s agenda for mathematical research. It asks whether there is a mechanical process, realizable in a finite number of steps, that can decide whether a formula is valid or not, or whether a function is computable or not. Turing began by asking himself, “What does a mechanical process mean?” and his answer was that a mechanical process is a process that can be realized by a machine. Obvious, isn’t it?
He then designed a machine for each possible formula in first-order logic and for each possible recursive function of natural numbers—exploiting the correspondence, established by Gödel’s numbering in his incompleteness theorem, between first-order-logic formulas and the natural numbers. And, indeed, using Turing’s simple definition, we can write down a string of 0s and 1s on a tape to describe a function, then give the machine a list of simple instructions (move left, move right, stop) so that it writes down the demonstration of the function and then stops.
This is his Universal Turing Machine—universal because it can take as input any possible string of symbols describing a function and give as output its demonstration. But if you feed the Universal Turing Machine a description of itself, it doesn’t stop; it goes on infinitely generating 0s and 1s. That’s it. The Mother of all computers, the soul of the digital age, was designed to show that not everything can be reduced to a Turing machine. There are more things in heaven and earth than are dreamt of in our philosophy.
A MATTER OF POETICS
RICHARD FOREMAN
Playwright and director; founder, Ontological-Hysteric Theater
Since every explanation is contingent, limited by its circumstances, and certain to be superseded by a better or momentarily more ravishing one, the favorite explanation is really a matter of poetics rather than science or philosophy. That being said, I, like everyone else, fall in “love”—a romantic infatuation that either passes or transforms into something else. But it is the repeated momentary ravishment that slowly shapes one, because, in a sense, one is usually falling in love with the same type again and again, and this repetition defines and shapes one’s mental character. When young, I was so shaped and oriented by what I shall now call my two favorite explanations.