A Beautiful Math

by Tom Siegfried


  In one such test of a public goods game,18 most players began by giving up an average of half their points. After several rounds, though, contributions dropped off. In one test, nearly three-fourths of the players donated nothing by round 10. It appeared to the researchers that people became angry at others who donated too little at the beginning, and retaliated by lowering their own donations—punishing everybody. That is to say, more of the players became reciprocators.

  But in another version of the game, a researcher announced each player's contribution after every round and solicited comments from the rest of the group. When low-amount donors were ridiculed, the cheapskates coughed up more generous contributions in later rounds. When nobody criticized the low donors, later contributions dropped. Shame, apparently, induced improved behavior.

  Other experiments consistently show that noncooperators risk punishment. So it may have been in the evolutionary past that groups containing punishers—and thus more incentive for cooperation—outsurvived groups that did not practice punishment. The tendency to punish may therefore have become ingrained in surviving human populations, even though the punishers do so at a cost to themselves. ("Ingrained" might not be just in the genes, though—many experts believe that culture transmits the punishment attitude down through the generations.)

  Of course, it's not so obvious what form that punishment might have taken back in the human evolutionary past. Bowles and Gintis have suggested that the punishment might have consisted of ostracism, making the cost to the punisher relatively low but still inflicting a significant cost on the noncooperator. They show how game theory interactions would naturally lead societies to develop with some proportion of all three types—noncooperators (free riders), cooperators, and punishers (reciprocators)—just as other computer simulations have shown. The human race plays a mixed strategy.
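
  To make that population dynamic concrete, here is a minimal Python sketch of a public goods game with the three player types just described. The group size, endowment, multiplier, and punishment costs are illustrative assumptions, not figures from the experiments or simulations cited; the point is only to show how costly punishment can tip the payoffs against free riding.

```python
# Illustrative parameters -- assumptions for this sketch, not values from the studies cited.
ENDOWMENT = 10    # points each player receives per round
MULTIPLIER = 1.6  # pooled contributions are multiplied by this factor and shared equally
PUNISH_COST = 1   # cost a punisher pays to fine one free rider
PUNISH_FINE = 3   # fine a free rider pays for each punisher in the group

def play_round(types):
    """One round of a public goods game with punishment.

    types: list of 'cooperator', 'punisher', or 'free_rider'.
    Cooperators and punishers contribute half their endowment; free riders contribute nothing.
    Returns the payoff earned by each player.
    """
    contributions = [0 if t == 'free_rider' else ENDOWMENT // 2 for t in types]
    share = sum(contributions) * MULTIPLIER / len(types)
    n_punishers = types.count('punisher')
    n_free_riders = types.count('free_rider')

    payoffs = []
    for contribution, kind in zip(contributions, types):
        payoff = ENDOWMENT - contribution + share
        if kind == 'punisher':        # punishing is costly to the punisher...
            payoff -= PUNISH_COST * n_free_riders
        if kind == 'free_rider':      # ...but costlier still to the free rider
            payoff -= PUNISH_FINE * n_punishers
        payoffs.append(payoff)
    return payoffs

if __name__ == "__main__":
    group = ['cooperator'] * 4 + ['punisher'] * 2 + ['free_rider'] * 2
    for kind, payoff in zip(group, play_round(group)):
        print(f"{kind:12s} payoff: {payoff:.1f}")
```

  With these made-up numbers, a free rider in a group containing two punishers ends up with a lower payoff than a cooperator, which is the kind of incentive shift that lets a mix of cooperators, punishers, and free riders persist in the simulations.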

  Still, experts argue about these issues. I came across one paper showing that, in fact, altruism could evolve solely through benefits to the altruistic individual, not necessarily to the group, based on simulations of yet another popular game. Known as the ultimatum game, it is widely used today in another realm of game theory research, the "behavioral game theory" explored by scientists like Colin Camerer. Behavioral game theorists believe that getting to the roots of human social behavior—understanding the Code of Nature—ultimately requires knowing what makes individuals tick. In other words, you need to get inside people's heads. And the popular way of doing that has spawned a controversial new hybrid discipline, uniting game theory, economics, psychology, and neuroscience, called neuroeconomics.

  * * *

  5

  Freud's Dream

  Games and the brain

  The intention is to furnish a psychology that shall be a natural science: that is, to represent psychical processes as quantitatively determinate states of specifiable material particles, thus making those processes perspicuous and free from contradiction.

  —Sigmund Freud, Project for a Scientific Psychology, 1895

  Sigmund Freud really wanted to understand the brain.

  He studied medicine and specialized in neurology. He planned to decipher the code linking the brain's physical processes to the mysteries of the mind. In 1895, he outlined a project for "a scientific psychology," in which mental states and human behavior could be explained materialistically, in terms of the physical interaction of nerve cells in the brain. But Freud found the brain science of the late 19th century too immature to link cranial chemistry to thought and behavior. So he skipped the brain and went straight to the mind, analyzing dreams for clues to the unconscious memories that manipulate mental life.

  Others never even dreamed of achieving the "brain physics" that Freud envisioned. Many simply regarded the brain as off limits, declaring it to be a "black box" inaccessible to scientific scrutiny. These "behaviorists" decreed that psychology should stick to observing behavior, studying stimulus and response.

  As the 20th century progressed, both Freudianism and behaviorism faded. The black box concealing the brain turned translucent as molecular medicine revealed some of its inner workings. Nowadays the brain is almost transparent, thanks to a variety of scanning technologies that produce images of the brain in action. And so the infant neuroscience that Freud abandoned over a century ago has now matured, nearly to the point of fulfilling his original intention.

  Freud could not have dreamed about merging neuroscience with economics, though, for he died before the rise of game theory. And even though they regarded game theory as a window into human behavior, game theory's originators did not imagine that their math would someday advance the cause of brain science, much less that a partnership with neuroscience would facilitate game theory's quest to conquer economics.1 But in the late 1990s, game theory turned out to be just the right math for bringing neuroscience and economics together, in a new hybrid field known as neuroeconomics.

  BRAINS AND ECONOMICS

  One of the appealing features of game theory is the way it reflects so many aspects of real life. To win a game, or survive in the jungle, or succeed in business, you need to know how to play your cards. You have to be clever about choosing whether to draw or stand pat, bet or pass, or possibly bid nillo. You have to know when to hold 'em and know when to fold 'em. And usually you have to think fast. Winners excel at making smart snap judgments. In the jungle, you don't have time to calculate, using game theory or otherwise, the relative merits of fighting or fleeing, hiding or seeking.

  Animals know this. They constantly face many competing choices from a long list of possible behaviors, as neuroscientists Gregory Berns and Read Montague have observed (in language rather more colloquial than what you usually find in a neuroscience journal). "Do I chase this new prey or do I continue nibbling on my last kill?" Berns and Montague wrote in Neuron. "Do I run from the possible predator that I see in the bushes or the one that I hear? Do I chase that potential mate or do I wait around for something better?"2

  Presumably, animals don't deliberate such decisions consciously, at least not for very long. Hesitation is bad for their health. And even if animals could think complexly and had time to do so, there's no obvious way for them to compare all their needs for food, safety, and sex. Yet somehow animal brains add up all the factors and compute a course of action that enhances the odds of survival. And humans differ little from other animals in that regard. Brains have evolved a way to compare and choose among behaviors, apparently using some "common currency" for valuing one choice over others. In other words, not only do people have money on the brain, they have the neural equivalent of money operating within the brain. Just as money replaced the barter system—providing a common currency for comparing various goods and services—nerve cell circuitry evolved to translate diverse behavioral choices into the common currency of brain chemistry.

  When you think about it, it makes a lot of sense. But neuroscientists began to figure it all out only when they joined forces with economists inspired by game theory. Game theory, after all, was the key to quantifying the fuzzy notion of economic utility. Von Neumann and Morgenstern showed how utility could be rigorously defined and derived logically from simple axioms, but still thought of utility in terms of money. Economists continued to consider people to be "rational" actors who would make behavioral choices that maximized their money or the monetary value of their purchases.
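
  As a toy illustration of that point (with made-up numbers, not anything from the book's sources): the von Neumann-Morgenstern framework says a consistent decision maker acts as if maximizing the probability-weighted average of a utility function, and that utility need not be the same thing as dollars. The sketch below compares a sure payoff with a gamble under both yardsticks.

```python
import math

def expected_value(lottery):
    """Expected monetary value of a lottery given as (probability, dollar payoff) pairs."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, utility=math.log):
    """Expected utility under a concave (risk-averse) utility function -- here log(money)."""
    return sum(p * utility(x) for p, x in lottery)

sure_thing = [(1.0, 40)]            # $40 for certain
gamble = [(0.5, 100), (0.5, 1)]     # coin flip between $100 and $1

print(expected_value(sure_thing), expected_value(gamble))      # 40.0 vs 50.5
print(expected_utility(sure_thing), expected_utility(gamble))  # about 3.69 vs about 2.30
```

  A pure money-maximizer takes the gamble, since it is worth more in expected dollars; a risk-averse utility-maximizer prefers the sure thing. Both are "rational" in the formal sense, which is why equating utility with money is an extra assumption rather than part of the theory.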

  Putting game theory into experimental action, though, showed that people don't always do that. Money—gasp—turned out not to be everything, after all. And people turned out not to be utterly rational, but pretty darn emotional. Imagine that.

  GAMES AND EMOTIONS

  You might think (and some people do) that game theory therefore becomes irrelevant to the real world of human social interaction, because people are not rational seekers of maximum utility, as game theory allegedly assumes. But while game theory is often described in that way, it's not quite the right picture. Game theory actually only tells you what people would do if they were "rationally" maximizing their utility. That makes game theory the ideal instrument for identifying deviations from that notion of rationality, and many game theorists are happy with that.

  There is, however, another interpretation of what's going on. Perhaps people really do maximize their utility—but utility is not really based on dollars and cents, at least not exclusively. And maybe "emotional" and "rational" are not mutually exclusive descriptions of human behavior. Is it really so irrational to behave in a way that makes you feel good, even if it costs you money? After all, the root notion of utility was really based on happiness, which is surely an emotional notion.
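
  One way to make that interpretation concrete is to fold the emotion into the utility function itself. The sketch below, in the spirit of inequity-aversion models of the ultimatum game mentioned earlier, is purely illustrative: the fairness penalty and its weight are invented for the example, not taken from any study discussed here.

```python
def responder_utility(offer, pot=10, envy=1.5):
    """Utility of accepting an offer in an ultimatum game over `pot` dollars.

    Money received, minus a made-up 'unfairness' penalty proportional to how far
    the responder falls behind the proposer. Rejecting yields utility 0 for both.
    """
    disadvantage = max((pot - offer) - offer, 0)
    return offer - envy * disadvantage

def responder_choice(offer, pot=10, envy=1.5):
    """Accept only if accepting beats the utility of rejecting (which is 0)."""
    return "accept" if responder_utility(offer, pot, envy) > 0 else "reject"

for offer in range(0, 6):
    print(f"offer ${offer}: {responder_choice(offer)}")
```

  With this invented penalty, the responder turns down offers below $4 even though that leaves money on the table: the rejection maximizes utility once the sting of being treated unfairly is counted, which is exactly the sense in which emotional behavior can still be utility-maximizing.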

  Actually, most economists have long recognized that people are emotional. But when your goal is describing economics scientifically—and mathematically—acknowledging emotions poses a real problem, as Colin Camerer explained to me. "One of the things mainstream economists have said is, well, rationality is mathematically precise," he said. "There's one way to be rational. But there are a lot of ways to not be rational. So they've often used that as an excuse—anything can happen if people aren't perfectly rational." And if anything can happen, there's no hope of finding a mathematical handle on the situation. "Economists have been a little defeatist about this—if you give up rationality, we'll never be able to have anything precise."

  This argument seems very much like the strategy of looking for lost keys only under the lamp post, because you couldn't see them if they were anywhere else. If there's only one sort of behavior (rational) that you can describe with your math, then that's the behavior you will assume is correct. But Camerer and other behavioral economists would rather first figure out what behavior is actually like. "Our view is to say, let's find scientists who have been thinking about how brains actually work … and ask them for some help," Camerer said. "It might be that even though, mathematically, there are lots of possible alternative models, the psychologists say, 'oh, it's this one.'"3

  Of course, there was a time—as in Freud's day—when psychologists couldn't have provided very reliable answers to the questions about brain processes underlying human behavior. But with the rise of modern neuroscience, that situation has changed. Human emotions, for instance, are no longer as much of a mystery as they used to be. Scientists can now peer inside the brain to observe what's going on when people experience contempt and disgust, fear or anger, empathy and love. Not to mention getting high on drugs. The driving forces of human decision making can now be traced to signals traveling between specific brain regions. Consequently human behavior, economic and otherwise, can now be analyzed in terms other than the economist's "rational" and monetary notion of utility. In fact, it now seems likely that the brain measures utility not with dollars, but with dopamine. And that's just one of the insights that the new discipline of neuroeconomics is providing into human economic behavior.

  ECONOMICS AND THE BRAIN

  I had encountered a few papers on neuroeconomics, but really didn't get the big picture until 2003, when I visited Read Montague's laboratory, at the Baylor College of Medicine in Houston. His "Human Neuro-imaging Laboratory" is a cutting-edge model of advanced technology in the service of science, with 100 or so computers, walls lined with plasma screen monitors, and state-of-the-art brain scanning machines. Montague explains it all with the speed of a Pentium processor, emphasizing the power of this new science to grasp human behavior in a precise way.

  "We're quantifying the mind and human experience," he said. "We're turning feelings into numbers."4

  Montague began his scientific life in mathematics and biophysics, but foresight warned him that physics was not the wave of the future. While dabbling in a quantum chemistry project, his thoughts turned to the brain. Why not put math to use in comprehending cognition as well as the cosmos? He began to work on computational modeling of brain processes, and then proceeded to peer deep into real brains, exploiting a technology provided by physics to revolutionize psychology.

  Brain scanners are so familiar today that it's hard to remember that a generation or so ago many scientists still considered the brain to be forever inscrutable. The behaviorist psychology of the early 20th century, proselytized by B. F. Skinner, had left its imprint on general beliefs about brain and behavior. Brains could not be observed in action, so only the behaviors that the brain produced mattered to science, the behaviorists contended. It turned out to be a misguided notion of both science and the brain.

  By the 1970s, imaginative new technologies had begun to make the brain transparent to clever neurovoyeurs. Radioactive atoms could be attached to critical molecules, allowing their activity to be observed in living brains, providing clues to what brains were doing while animals were behaving. Later methods dispensed with the radioactivity, using magnetic fields to jostle molecules in the blood that flowed through brain tissue. Ultimately this method, known as magnetic resonance imaging, or MRI, became widely used in medicine to "see" beneath the skin. And a variant of MRI technology was adopted by researchers in neuroscience to watch brains in action.5

  "It can make a movie of the dynamic blood flow changes in every region of your brain," Montague said. And blood flow has been shown to be tightly linked to neural activity—active neurons need nourishment, so that's where the blood goes. You can watch how patterns of activity change in different parts of the brain as its owner performs various behaviors.

  Consequently, the old limits on which aspects of the brain could be studied and understood had dissolved, Montague explained, as a new wave of neuroscientists embraced the imaging tools. "There's a kind of sea change of belief in what you can and can't explain," he said. "People put people into scanners like this and do every manner of cognitive task, literally from having sex to thinking about the word sailboat. The experiments are working beautifully. I think the sky's the limit."6

  A new scientific discipline to exploit these technological abilities seems to have emerged almost out of nowhere. The term neuroeconomics itself apparently first appeared in 2002.7 Before that, people like Montague had been referring to their studies as "neural economics." In any event, the first attention-getting published paper in the new genre appeared in 1999, reporting a study by Paul Glimcher and Michael Platt of the Center for Neural Science at New York University. Glimcher and Platt had measured nerve-cell activity in the brains of monkeys performing a decision-making task. The results supported the notion that nervous activity reflects choice-making factors—that is, something like utility— that economists had already identified.

  Monkeys, of course, are not obsessed with money, but they do really enjoy getting squirts of fruit juice and can be fairly easily trained to perform all sorts of tasks for a juice-squirt reward. In the Platt-Glimcher experiment, all a monkey was required to do was switch its gaze from a cross on a screen to one of two lights. Looking at a light earned a squirt of juice.

  Looking at one of the lights, though, earned a bigger squirt than looking at the other. It didn't take the monkey long to figure that out. (If I'm going to maximize my utility, the monkey obviously thought, I should look at the light on the right.) If the experimenters changed the high-reward squirt to the other light, the monkey caught on right away and preferred the new high-reward light.

  None of that was very surprising—similar experiments had been done before. But in this case, Platt and Glimcher also recorded the activity of a nerve cell in a region of the monkey brain that processes visual input and is involved in directing eye movement. (If you must know, the cell was in the lateral intraparietal cortex, or LIP.)

  Now here's the tricky part of the experiment. The lights on the screen were positioned so that only one of them was in the field of view accessible to the nerve cell being monitored. When the accessible light appeared, that nerve cell fired electrical impulses, as nerve cells do when stimulated. That nerve cell also boosted its activity as the monkey's eyes moved to gaze at that light. No surprises there. But if that light happened to be the "high reward" light, the nerve cell fired its signals much more vigorously than when viewing the "low reward" light. To an old-school neurophysiologist, that would be surprising. For the actual visual stimulus was precisely the same in either case—a light comes on, and the eyes move to look at it. Somehow the neuron linked to that visual stimulus "knew" which light was the Big Gulp of juice dispensers. The monkey's choice of looking toward the high-reward light (that is, the utility-maximizing choice) reflected a specific change in activity by a nerve cell in a specific region of the brain.8

  Of course, that experiment was just a start, but it opened a lot of scientists' eyes to the possibility of understanding economic decision making by looking inside the brain. The next year, neuroeconomics pioneers met in Princeton for the first major conference on the topic. Montague recalls the skepticism expressed by one of the economists attending, who saw no reason to believe that brain chemicals had anything to do with economics. "I said that is just complete poppycock," Montague recalled. "If your brain doesn't generate economic behavior, what kind of ghost horses do you believe in?" Even worse, the economist didn't even think his remarks were particularly provocative. "I was stunned by that," said Montague. "I might still be stunned by that."9

  Gradually, though, the idea of merging neuroscience and economics caught on, though perhaps more rapidly in neuroscience than economics. A special issue of Neuron, published in October 2002, included a passel of papers on human decision making, many of them exploring the new insights offered by neural economic studies.

 
