Chances Are

by Michael Kaplan


  How do they all know this? How do our many internal agents come to their conclusions—and how do they do it on so little evidence? Our senses are not wonderfully sharp; what’s remarkable is our ability to draw conclusions from them. Such a seemingly straightforward task as using the two-dimensional evidence from our eyes to master a three-dimensional world is a work of inference that still baffles the most powerful computers.

  Vision is less a representation than a hypothesis—a theory about the world. Its counterexamples, optical illusions, show us something about the structure and richness of that theory. For we come up against optical illusions not just in the traditional flexing cubes or converging parallel lines, but in every perspective drawing or photograph. In looking, we are making complex assumptions for which there are almost no data; so we can be wrong. The anthropologist Colin Turnbull brought a Pygmy friend out of the rain forest for the first time; when the man saw a group of cows across a field, he laughed at such funny-shaped ants. He had never had the experience of seeing something far off, so if the cows took up such a small part of his visual field, they must be tiny. The observer is the true creator.

  Seeing may require a complex theory, but it’s a theory that four-month-old infants can hold and act upon, focusing their attention on where they expect things to be. Slightly older children work with even more powerful theories: that things are still there when you don’t see them, that things come in categories, that things and categories can both have names, that things make other things happen, that we make things happen—and that all this is true of the world, not just of me and my childish experience.

  In a recent experiment, four-year-olds were shown to make sophisticated and extended causal judgments based on the behavior of a “blicket detector”—a machine that did or did not light up depending on whether particular members of a group of otherwise identical blocks were put on top of it. It took only two or three examples for the children to figure out which blocks were blickets—and that typifies human cognition’s challenge to the rules of probability. If we were drawing our conclusions based solely on the frequency of events, on association or similarity, we would need a lot of examples, both positive and negative, before we could put forward a hypothesis. Perhaps we would not need von Mises’ indefinitely expanding collectives, but we would certainly need more than two or three trials. Even “Student” would throw up his hands at such a tiny sample. And yet, as if by nature, we see, sort, name, and seek for cause.

  Joshua Tenenbaum heads the Computational Cognitive Science Group at MIT. His interest in cognition bridges the divide between human and machine. One of the frustrations of recent technology, otherwise so impressive, has been the undelivered promise of artificial intelligence. Despite the hopes of the 1980s, machines not only do not clean our houses, drive for us, or bring us a drink at the end of a long day; they cannot even parse reality. They have trouble pulling pattern out of a background of randomness: “The thing about human cognition, from 2-D visual cognition on up, is that it cannot be deductive. You aren’t making a simple, logical connection with reality, because there simply isn’t enough data. All sorts of possible worlds could, for example, produce the same image on the retina. Intuitively, you would say—not that we know the axioms, the absolute rules of the visual world—but that we have a sense of what is likely: a hypothesis.

  “In scientific procedure, you are supposed to assume the null hypothesis and test for significance. But the data requirements are large. People don’t behave like that: you can see them inferring that one thing causes another when there isn’t even enough data to show formally that they are even correlated. The model that can explain induction from few examples requires that we already have a hypothesis—or more than one—through which we test experience.” The model that Tenenbaum and his colleagues favor is a hierarchy of Bayesian probability judgments.

  We first considered Bayes’ theorem in the context of law and forensic science, where a theory about what happened needed to be considered in the light of each new piece of evidence. The theorem lets you calculate how your belief in a theory would change depending on how likely the evidence appears, given this theory—or given another theory. Bayesian reasoning remains unpopular in some disciplines, both because it requires a prior opinion and because its conclusions remain provisional—each new piece of evidence forces a reexamination of the hypothesis. But that’s exactly what learning feels like, from discovering that the moo-cow in the field is the same as the moo-cow in the picture book to discovering in college that all the chemistry you learned at school was untrue. The benefit of the Bayesian approach is that it allows one to make judgments in conditions of relative ignorance, and yet sets up the repeated sequence by which experience can bolster or undermine our suppositions. It fits well with our need, in our short lives, to draw conclusions from slight premises.
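  To make that updating cycle concrete, here is a minimal sketch in Python. The function name and the numbers are my own illustration, not drawn from the book or from any experiment: belief in a theory rises or falls according to how probable each new piece of evidence is under that theory compared with its rival.

```python
# A minimal, illustrative sketch of Bayesian updating (numbers are invented).

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after one piece of evidence."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Start unsure of the theory, then fold in three pieces of evidence,
# each of which is far more probable if the theory is true.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, likelihood_if_true=0.9, likelihood_if_false=0.3)
    print(round(belief, 3))  # belief climbs: 0.75, 0.9, 0.964
```

  Each pass through the loop is one of the “repeated sequences” described above: the conclusion stays provisional, but a few consistent observations are enough to move a lukewarm prior most of the way toward conviction.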

  One reason for Tenenbaum and his group to talk about hierarchical Bayesian induction is that we are able to make separate judgments about several aspects of reality at once, not just the aspect the conscious mind is concentrating on. Take, for instance, the blicket detector. “It is an interesting experiment,” says Tenenbaum, “because you’re clearly seeing children make a causal picture of the world—‘how it works,’ not just ‘how I see it.’ But there’s more going on there—the children are also showing they have a theory about how detectors work: these machines are deterministic, they’re not random, they respond to blickets even when non-blickets are also present. Behind that, the children have some idea of how causality should behave. They don’t just see correlation and infer cause—they have some prior theory of how causes work in general.” And, one assumes, they have theories about how researchers work: asking rational questions rather than trying to trip you up—now, if it was your older sister . . .
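  As a rough illustration of the kind of inference the blicket studies suggest—my own sketch, not the experimenters’ code—one can enumerate every hypothesis about which blocks are blickets, assume (as the children apparently do) that the detector is deterministic, and watch two or three trials eliminate nearly all the candidates. The blocks A, B, C and the particular trials below are invented for the example.

```python
from itertools import combinations

blocks = ("A", "B", "C")

# Every possible hypothesis about which blocks are blickets (all subsets).
hypotheses = [set(combo) for r in range(len(blocks) + 1)
              for combo in combinations(blocks, r)]

# Invented trials: (blocks placed on the detector, did it light up?)
trials = [({"A", "B"}, True),
          ({"B"}, False),
          ({"A"}, True)]

def consistent(hypothesis, trial):
    placed, lit = trial
    # Deterministic-detector assumption: it lights up exactly when a blicket is present.
    return lit == bool(hypothesis & placed)

surviving = [h for h in hypotheses if all(consistent(h, t) for t in trials)]
print(surviving)  # [{'A'}, {'A', 'C'}] — A must be a blicket; C was never tested
```

  The prior theory about how detectors behave does most of the work: with it, three observations shrink eight candidate hypotheses to two; without it, no amount of counting correlations over so few trials would justify any conclusion at all.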

  This is what is meant by a Bayesian hierarchy: not only are we testing experience in terms of one or more hypotheses, we are applying many different layers of hypothesis. Begin with the theory that this experience is not random; pass up through theories of sense experience, emotional value, future consequences, and the opinions of others; and you find you’ve reached this individual choice: peach ice cream or chocolate fudge cake? Say you decide on peach ice cream and find, as people often claim, that it doesn’t taste as good as you’d expected. You’ve run into a counterexample—but countering what? How does this hierarchy of hypothesis deal with the exception? How far back is theory disproved?

  “In the scientific method, you’re supposed to set up your experiment to disprove your hypothesis,” says Tenenbaum, “but that’s not how real scientists behave. When you run into a counterexample, your first questions are: ‘Was the equipment hooked up incorrectly? Is there a calibration problem? Is there a flaw in the experimental design?’ You rank your hypotheses and look at the contingent ones first, rather than the main one. So if that’s what happens when we are explicitly testing an assumption, you can see that a counterexample is unlikely to shake a personal theory that has gone through many Bayesian cycles.”

  Even the most open-minded of us don’t keep every assumption in play, ready for falsification; as experience confirms assumptions, we pack our early hypotheses down into deep storage. We discard the incidental and encode the important in its minimum essential information. The conscious becomes the reflex; the hypothetical approaches certainty. Children ask “Whassat?” for about a year and then stop; naming is done—they can pick up future nouns automatically, in passing. They ask “Why?” compulsively for longer—but soon the question becomes rhetorical: “Why won’t you let me have a motorcycle? It’s because you want to ruin my life, that’s why.”

  This plasticity, this permanent shaping of cognition by experience, leaves physical traces that show up in brain scans. London taxi drivers have a bigger hippocampus—the center for remembered navigation—than the rest of us; violinists have bigger motor centers associated with the fingers of the left hand. The corporation headquartered in our skulls behaves like any company, allocating resources where they are most needed, concentrating on core business, and streamlining repetitive processes. As on the assembly line, the goal seems to be to drain common actions of the need for conscious thought—to make them appear automatic. In one delightfully subtle experiment, people were asked to memorize the position of a number of chess pieces on a board. Expert chess players could do this much more quickly and accurately than the others—but only if the arrangement of pieces represented a possible game situation. If not, memorizing became a conscious act, and the experts took just as long as duffers to complete it.

  This combination of plasticity and a hierarchical model of probabilities may begin to explain our intractable national, religious, and political differences. Parents who have adopted infants from overseas see them grow with remarkable ease into their new culture—yet someone like Henry Kissinger, an immigrant to America at the age of 15, still retains a German accent acquired in less time than he spent at Harvard and the White House. A local accent, a fluent second language, a good musical ear, deep and abiding prejudice—we develop them young or we do not develop them at all; and once we have them they do not easily disappear. After a few cycles of inference, new evidence has little effect.

  As Tenenbaum explains, Bayesian induction offers us speed and adaptability at the cost of potential error: “If you don’t get the right data or you start with the wrong range of hypotheses, you can get causal illusions just as you get optical ones: conspiracy theories, superstitions. But you can still test them: if you think you’ve been passing all these exams because of your lucky shirt—and then you start failing—you might say, ‘Aha; maybe it’s the socks.’ In any case, you’re still assuming that something causes it.” It’s easy, though, to imagine a life—especially, crucially, a childhood—composed of all the wrong data, so that the mind’s assumptions grow increasingly skew to life’s averages and, through a gradual hardening of expectation, remain out of kilter forever.

  It is a deep tautology that the mad lack common sense—since common sense is very much more than logic. The mentally ill often reason too consistently, but from flawed premises: After all, if the CIA were indeed trying to control your brain with radio waves, then a hat made of tinfoil might well offer protection. What is missing, to different degrees in different ailments, is precisely a sense of probability: Depression discounts the chance of all future pleasures to zero; mania makes links the sense data do not justify. Some forms of brain damage separate emotional from rational intelligence, reducing the perceived importance of future reward or pain, leading to reckless risk-taking. Disorders on the autistic spectrum prevent our gauging the likely thoughts of others; the world seems full of irrational, grimacing beings who yet, through some telepathic power, comprehend one another’s behavior.

  One of the subtlest and most destructive failures of the probability mechanism produces the personality first identified in the 1940s by Hervey Cleckley: the psychopath. The psychopath suffers no failure of rational intelligence; he (it is usually he) is logical, often clever, charming. He knows what you want to hear. In situations where there are formal rules (school, the law, medicine) he knows how to work them to his advantage. His impulsiveness gets him into trouble, but his intelligence gets him out; he is often arrested, rarely convicted. He could tell you in the abstract what would be the likely consequences of his behavior—say, stealing money from neighbors, falsifying employment records, groping dance partners, or running naked through town carrying a jug of corn liquor; he can even criticize his having done so in the past. Yet he is bound to repeat his mistakes, to “launch himself” (as the elderly uncle of one of Cleckley’s subjects put it) “into another pot-valiant and fatuous rigadoon.” The psychopath’s defect is a specific loss of insight: an inability to connect theoretical probability with actual probability and thus give actions and consequences a value. His version of cause and effect is like a syllogism with false premises: It works as a system; it just doesn’t mean anything.

  We have pursued truth through a labyrinth and come up against a mirror. It turns out that things seem uncertain to us because certainty is a quality not of things but of ideas. Things seem to have particular ways of being or happening because that is how we see and sort experience: we are random-blind; we seek the pattern in the weft, the voice on the wind, the hand in the dark. The formal calculation of probabilities will always feel artificial to us because it slows and makes conscious our leap from perception to conclusion. It forces us to acknowledge the gulf of uncertainty and randomness that gapes below—and leaps are never easy if you look down.

  Such a long story should have a moral. Another bishop (this time, in fact, the Archbishop of York), musing aloud on the radio, once asked: “Has it occurred to you that the lust for certainty may be a sin?” His point was that, by asserting as true what we know is only probable, we repudiate our humanity. When we disguise our reasoning about the world as deductive, logical fact—or, worse, hire the bully Authority to enforce our conclusions for us—we claim powers reserved, by definition, to the superhuman. The lesson of Eve’s apple is the world’s fundamental uncertainty: nothing outside Eden is more than probable.

  Is this bad news? Hardly. Just as probability shows there are infinite degrees of belief between the impossible and the certain, there are degrees of fulfillment in this task of being human. If you want a trustworthy distinction between body and soul, it might be this: our bodies, like all life forms, are essentially entropy machines. We exist by flattening out energy gradients, absorbing concentrations of value, and dissipating them in motion, heat, noise, and waste. Our souls, though, swim upstream, struggling against entropy’s current. Every neuron, every cell, contains an equivalent of Maxwell’s demon—the ion channels—which sort and separate, increasing local useful structure. We use that structure for more than simply assessing and acting, like mindless automata. We remember and anticipate, speculate and explain. We tell stories and jokes—the best of which could be described as tickling our sense of probabilities.

  This is our fate and our duty: to search for, devise, and create the less probable, the lower-entropy state—to connect, build, describe, preserve, extend . . . to strive and not to yield. We reason, and examine our reasoning, not because we will ever achieve certainty, but because some forms of uncertainty are better than others. Better explanations have more meaning, wider use, less entropy.

  And in doing all this, we must be brave—because, in a world of probability, there are no universal rules to hide behind. Because fortune favors the brave: the prepared mind robs fate of half its terrors. And because each judgment, each decision we make, if made well, is part of the broader, essential human quest: the endless struggle against randomness.

  Index

  A1 standard

  Abraham

  Adams, Henry

  aerodynamics

  aerospace industry

  African Americans

  Agathon

  agronomy

  AIDS

  Albuera, battle of

  Alembert, Jean Le Rond d’

  Alfonso X, King of Castile and Leon

  algebra

  algorithms

  Allen, Woody

  Alpha Arietis

  al-Qaeda

  altruism

  amygdala

  annuities

  antidepressants

  appellation contrôlée

  Aratus

  Aristotle

  Ars Conjectandi (Bernoulli)

  artificial intelligence

  artillery

  Art of War, The (Sun)

  Asad, Muhammad

  ascending sequences

  Asclepius

  Astor, John Jacob

  astrology

  astronomy

  Astruc, Jean

  atmospheric pressure

  atomic bomb

  Auden, W. H.

  Augustus, Emperor of Rome

  authorship

  autism

  average
    individual

  Aviatik DD 1 bomber

  Avicenna

  Aviva

  Axelrod, Kenneth

  axioms

  Azo of Bologna

  Bach, Johann Sebastian

  Bacon, Francis

  Ball, Patrick

  balloon flights

  Balzac, Honoré de

  Banque Royale

  Barbon, Nicholas

  barometers

  barratry

  base-rate effect

  Bass, John

  Bayes, Thomas

  Bayes’ theorem

  Beagle

  Belgium

  belief systems

  bell curve (normal curve)

  Bentham, Jeremy

  Berchtold, Count

  Bernard, Claude

  Bernoulli, Daniel

  Bernoulli, Jakob

  Bernoulli, Johann

  Bernoulli, Nicholas

  Bertrand, Joseph

  Bethmann Hollweg, Theobald von

  Bible

  billiard balls

  Bills of Mortality

  bingo

  Biometrika

  Bird, Alex

  Bird, John

  birth defects

  birth rate

  bivariate distribution

  Bjerknes, Vilhelm

  Black, Joseph

  blackjack

  blicket detectors

  bloodletting

  Body Mass Index

  Boethius

  Boltzmann, Ludwig

  bookies

  Booth, Charles

  Borel, Émile

  Born, Max

  Bortkiewicz, Ladislaus

  bottomry

  Boyle, Robert

  Brahe, Tycho

  brain function

  “breaking the line” maneuver

  breast cancer

  Breslau

  bridge

  British Medical Journal

  Brooke, Rupert

  Brooks, Juanita

  Brothers Karamazov, The (Dostoyevsky)
