
This Explains Everything


edited by John Brockman


  COMMITMENT

  RICHARD H. THALER

  Theorist, behavioral economics; director, Center for Decision Research, Graduate School of Business, University of Chicago; coauthor (with Cass R. Sunstein), Nudge: Improving Decisions About Health, Wealth, and Happiness

  It is a fundamental principle of economics that a person is always better off if they have more alternatives to choose from. But this principle is wrong. There are cases when I can make myself better off by restricting my future choices and committing myself to a specific course of action.

  The idea of commitment as a strategy is an ancient one. Odysseus famously had his crew tie him to the mast so he could listen to the sirens’ songs without steering the ship into the rocks. Another classic is Cortez’s decision to burn his ships upon arriving in Mexico, thus removing retreat as one of the options his crew could consider. But although the idea is an old one, we did not understand its nuances until Nobel laureate Thomas Schelling wrote his 1956 masterpiece, “An Essay on Bargaining.”

  It is well known that games such as the Prisoner’s Dilemma turn out well when both players can credibly commit themselves to cooperating, but how can I convince you that I will cooperate when it is a dominant strategy for me to defect? (And if you and I are game theorists, you know that I know that you know that I know that defecting is a dominant strategy.) Schelling gives many examples of how this can be done, but here’s my favorite: A Denver rehabilitation clinic whose clientele consisted of wealthy cocaine addicts offered a “self-blackmail” strategy. Each patient could write a self-incriminating letter, which would be delivered if and only if the patient, who would be tested on a random schedule, was found to have used cocaine. Most patients would now have a very strong incentive to stay off drugs; they were committed.

  Many of society’s thorniest problems, from climate change to Middle East conflict, could be solved if the relevant parties could only find a way to commit themselves to some future course of action. They would be well advised to study Tom Schelling in order to figure out how to make that commitment.

  TIT FOR TAT

  JENNIFER JACQUET

  Clinical assistant professor of environmental studies, NYU

  Selfishness can sometimes seem like the best strategy. It is the rational response to the Prisoner’s Dilemma, for instance, where each individual in a pair can either cooperate or defect, leading to four potential outcomes. No matter what the other person does, selfish behavior (defecting) always yields the greater return. But if both players defect, both do worse than if they had cooperated. Yet when political scientist Robert Axelrod and his colleagues ran hundreds of rounds of the Prisoner’s Dilemma on a computer, repeating the game led to a different result.

  Experts from a wide range of disciplines submitted 76 different game strategies for Axelrod to try out against each other—some of them very elaborate. Each strategy would play against all the others for 200 rounds. In the end, the strategy that received the highest score was also the simplest. Tit for Tat, an if-then strategy where the player cooperates on the first move and thereafter does what its partner does, was the winner. The importance of reciprocity to the evolution of cooperation was detected by humans but simulated and verified with machines.
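The tournament logic is simple enough to sketch in a few lines. The following is an illustrative reconstruction, not Axelrod's original code: the payoff values (3, 5, 1, 0) are the standard ones he used, the 200-round match length follows the essay, and the rival strategies shown are hypothetical stand-ins for the 76 submissions.

```python
# Sketch of an Axelrod-style iterated Prisoner's Dilemma match.
# "C" = cooperate, "D" = defect.

PAYOFF = {  # (my move, opponent's move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the partner's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    """Run one 200-round match and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Against an unconditional defector, Tit for Tat loses the match slightly (199 points to 204 over 200 rounds), and it can never outscore its own partner. Yet across a round-robin, its willingness to cooperate with cooperators yields the highest cumulative score, which is how it won Axelrod's tournament without beating any single opponent head-to-head.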

  This elegant explanation was then documented in living egoists with an elegant experiment. Evolutionary biologist Manfred Milinski noticed Tit for Tat behavior in his subjects, three-spined sticklebacks. When he watched a pair of these fish approach a predator, he observed four options: they could swim side by side, one could take the lead while the other followed closely behind (or vice versa), or they could both retreat. These four scenarios satisfied the four inequalities that define the Prisoner’s Dilemma.
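The four inequalities mentioned here are the standard payoff conditions for a Prisoner's Dilemma. Writing T for the temptation to defect, R for the reward for mutual cooperation, P for the punishment for mutual defection, and S for the sucker's payoff, they are:

```latex
% Payoff conditions defining an iterated Prisoner's Dilemma
% (the fourth makes taking turns exploiting each other unprofitable)
T > R, \qquad R > P, \qquad P > S, \qquad 2R > T + S
```

In Milinski's setup, leading the approach while the partner lags behind corresponds to the sucker's payoff S, and hanging back while the partner leads corresponds to the temptation T.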

  For the experiment, Milinski wanted to use pairs of sticklebacks, but they’re impossible to train. So he placed in the tank a single stickleback and a set of mirrors that would act like two different types of companions. In the first treatment, a parallel mirror was used to simulate a cooperative companion that swam alongside the subject stickleback. In the second treatment, an oblique mirror system set at a 32-degree angle simulated a defecting partner—that is, as the stickleback approached the predator, the companion appeared to fall increasingly and uncooperatively behind. Depending on the mirror, the stickleback felt he was sharing the risk equally or increasingly going it alone.

  When the sticklebacks were partnered with a defector, they preferred the safer half of the tank, farthest away from the predator. But in the trials with the cooperating mirror, the sticklebacks were twice as likely to venture into the half of the tank closest to the predator. The sticklebacks were more adventurous if they had a sidekick. In nature, cooperative behavior translates to more food and more space, and therefore greater individual reproductive success. Contrary to predictions that selfish behavior or retreat was optimal, Milinski’s observation that sticklebacks most often approached the predator together was in line with Axelrod’s conclusion that Tit for Tat was the optimal evolutionary strategy.

  Milinski’s evidence, published in 1987 in the journal Nature, was the first to demonstrate that cooperation based on reciprocity had evolved among egoists, albeit small ones. A large body of research now shows that many biological systems, especially human societies, are organized around various cooperative strategies; the scientific methods continue to grow more and more sophisticated, but the original experiments and the Tit for Tat strategy are beautifully simple.

  TRUE OR FALSE: BEAUTY IS TRUTH

  JUDITH RICH HARRIS

  Independent investigator and theoretician; author, The Nurture Assumption: Why Children Turn Out the Way They Do

  “Beauty is truth, truth beauty,” said John Keats. But what did he know? Keats was a poet, not a scientist. In the world that scientists inhabit, truth is not always beautiful or elegant, though it may be deep. In fact, it’s my impression that the deeper an explanation goes, the less likely it is to be beautiful or elegant.

  In 1938, the psychologist B. F. Skinner proposed an elegant explanation of “the behavior of organisms” (the title of his first book), based on the idea that rewarding a response—he called it reinforcement—increases the probability that the same response will occur in the future. The theory failed, not because it was false (reinforcement generally does increase the probability of a response) but because it was too simple. It ignored innate components of behavior. It couldn’t even handle all learned behavior. Much behavior is acquired or shaped through experience, but not necessarily by means of reinforcement. Organisms learn various things in various ways.

  The theory of the modular mind is another way of explaining behavior—in particular, human behavior. The idea is that the human mind is made up of a number of specialized components called modules that work more or less independently. These modules collect various kinds of information from the environment and process it in different ways. They issue different commands—occasionally, conflicting commands. It’s not an elegant theory; on the contrary, it’s the sort of thing that would make Occam whip out his razor. But we shouldn’t judge theories by asking them to compete in a beauty pageant. We should ask whether they can explain more, or explain better, than previous theories could. The modular theory can explain, for example, the curious effects of brain injuries. Some abilities may be lost while others are spared, with the pattern differing from one patient to another.

  More to the point, the modular theory can explain some of the puzzles of everyday life. Consider intergroup conflict. The Montagues and the Capulets hated each other; yet Romeo (a Montague) fell in love with Juliet (a Capulet). How can you love a member of a group yet go on hating that group? The answer is that two separate mental modules are involved. One deals with groupness (identification with one’s group and hostility toward other groups); the other specializes in personal relationships. Both modules collect information about people, but they do different things with the data. The groupness module draws category lines and computes averages within categories; the result is called a stereotype. The relationship module collects and stores detailed information about specific individuals. It takes pleasure in collecting this information, which is why we love to gossip, read novels and biographies, and watch political candidates unravel on our TV screens. No one has to give us food or money to get us to do these things, or even administer a pat on the back, because collecting the data is its own reward.

  The theory of the modular mind is not beautiful or elegant. But not being a poet, I prize truth above beauty.

  ERATOSTHENES AND THE MODULAR MIND

  DAN SPERBER

  Social and cognitive scientist; director, International Cognition and Culture Institute; coauthor (with Deirdre Wilson), Meaning and Relevance

  Eratosthenes (276–195 B.C.), the head of the famous Library of Alexandria in Ptolemaic Egypt, made groundbreaking contributions to mathematics, astronomy, geography, and history. He also argued against dividing humankind into Greeks and “Barbarians.” What he is remembered for, however, is having provided the first correct measurement of the circumference of the Earth (a story well told in Nicholas Nicastro’s recent book, Circumference). How did he do it?

  Eratosthenes had heard that every year, on a single day at noon, the sun shone directly to the bottom of an open well in the town of Syene (now Aswan). This meant that the sun was then at the zenith. For that, Syene had to be on the Tropic of Cancer and the day had to be the summer solstice (our June 21). He knew how long it took caravans to travel from Alexandria to Syene and, on that basis, estimated the distance between the two cities to be 5,014 stades. He assumed that Syene was due south on the same meridian as Alexandria. In this he was slightly mistaken—Syene is somewhat to the east of Alexandria—as he was in assuming that Syene lay right on the Tropic. But, serendipitously, the effects of these two mistakes canceled each other out. He understood that the sun was far enough away for its rays reaching the Earth to be treated as parallel. When the sun was at the zenith in Syene, it had to be south of the zenith in the more northern Alexandria. By how much? He measured the length of the shadow cast by an obelisk located in front of the Library (says the story—or cast by some other, more convenient vertical object), and—even without trigonometry, which had yet to be developed—he could determine that the sun stood at an angle of 7.2° south of the zenith. That very angle, he understood, measured the curvature of the Earth between Alexandria and Syene (see the figure). Since 7.2° is a fiftieth of 360°, Eratosthenes could then, by multiplying the distance between Alexandria and Syene by 50, calculate the circumference of the Earth. The result, 252,000 stades, is 1 percent shy of the modern measurement of 40,008 km.
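The arithmetic is compact enough to check directly. This is a sketch using the essay's own figures (the variable names are mine); note that 5,014 stades times 50 comes to about 250,700, close to the canonical 252,000 stades, a figure the ancient sources report after rounding.

```python
# Eratosthenes' method, with the essay's numbers.
shadow_angle_deg = 7.2      # sun's angle south of the zenith at Alexandria
distance_stades = 5_014     # caravan-based estimate, Alexandria to Syene

# The shadow angle equals the arc of the Earth's circumference between
# the two cities, so the full circle contains 360 / 7.2 = 50 such arcs.
arcs_in_circle = 360 / shadow_angle_deg          # ~50
circumference_stades = arcs_in_circle * distance_stades  # ~250,700
```

The elegance of the method is that neither quantity is exotic: a shadow angle and a caravan itinerary, multiplied, yield the size of the planet.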

  Eratosthenes brought together apparently unrelated pieces of evidence (the pace of caravans, the sun shining to the bottom of a well, the length of the shadow of an obelisk), assumptions (the sphericity of the Earth, its distance from the sun), and mathematical tools to measure a circumference he could neither see nor survey but only imagine. His result is simple and compelling. The way he reached it epitomizes human intelligence at its best.

  Jerry Fodor (whose contribution to modern philosophy of mind is second to none) might well use this intellectual prowess as a perfect illustration of the way the central systems of our mind operate. They are, he claims, “isotropic,” in the sense that any belief or evidence is relevant to the evaluation of any new hypothesis, and “Quinean” (after the philosopher Willard Van Orman Quine), in the sense that all our beliefs are part of a single integrated system. This contrasts with the view (which I have contributed to developing) that the mind is wholly made up of specialized “modules,” each attending to a specific cognitive domain or task, and that our mental activity results from the complex interactions (complementarities, competitions . . . ) among these modules. Doesn’t, however, the story of Eratosthenes prove that Fodor’s view is right? How could a massively modular mind ever have achieved such a feat?

  Here’s an answer. Some of our modules are metarepresentational. They specialize in processing different kinds of representations: mental representations for mind-reading modules; linguistic representations for communication modules; abstract representations for reasoning modules. These metarepresentational modules are highly specialized. After all, representations are very special objects, found only in information-processing devices such as people and in their output. Representations have original properties—truth-or-falsity, consistency, and so on—which are not found in any other objects. Given, however, that the representations these metarepresentational modules process may themselves be about just anything, they provide a kind of virtual domain-generality. Hence the illusion that metarepresentational thinking is truly general and nonspecialized.

  Eratosthenes, I am suggesting, was not thinking concretely about the circumference of the Earth (in the way he might have been thinking concretely about the distance from the Library to the Palace in Alexandria). Rather, he was thinking about a challenge posed by the quite different estimates of the circumference of the Earth that had been offered by other scholars at the time. He was thinking about various mathematical principles and tools that could be brought to bear on the issue. He was thinking of the evidential use that could be made of sundry observations and reports. He was aiming at finding a clear and compelling solution, a convincing argument. In other terms, he was thinking about objects of a single kind—representations—and looking for a new way to put them together. In doing so, he was inspired by others and aiming at others. His intellectual feat makes sense only as a particularly remarkable link in a social-cultural chain of mental and communicational events. To me, it is a stunning illustration not of the solitary functioning of the individual mind but of the powers of socially and culturally extended modular minds.

  DAN SPERBER’S EXPLANATION OF CULTURE

  CLAY SHIRKY

  Social media researcher; arts professor, NYU Tisch School of the Arts’ Interactive Telecommunications Program (ITP); author, Cognitive Surplus: How Technology Makes Consumers into Collaborators

  Why do members of a group of people behave the same way? Why do they behave differently from other groups living nearby? Why are those behaviors so stable over time? Alas, the obvious answer—cultures are adaptations to their environments—doesn’t hold up. Multiple adjacent cultures along the Indus, the Euphrates, the Upper Rhine have differed in language, dress, and custom, despite existing side-by-side in almost identical environments.

  Something happens to keep one group of people behaving in a certain set of ways. In the early 1970s, both E. O. Wilson and Richard Dawkins noticed that the flow of ideas in a culture exhibited similar patterns to the flow of genes in a species—high flow within the group but sharply reduced flow between groups. Dawkins’ response was to assume a hypothetical unit of culture called the meme, though he also made its problems clear: With genetic material, perfect replication is the norm and mutations rare. With culture, it is the opposite; events are misremembered and then misdescribed, quotes are mangled, even jokes (pure meme) vary from telling to telling. The gene/meme comparison remained, for a generation, an evocative idea of not much analytic utility.

  Dan Sperber has, to my eye, cracked this problem. In a slim 1996 volume modestly titled Explaining Culture, he outlined a theory of culture as the residue of the epidemic spread of ideas. In this model, there is no meme, no unit of culture separate from the blooming, buzzing confusion of transactions. Instead, all cultural transmission can be reduced to one of two types: making a mental representation public, or internalizing a mental version of a public presentation. As Sperber puts it, “Culture is the precipitate of cognition and communication in a human population.”

  Sperber’s two primitives—externalization of ideas, internalization of expressions—give us a way to think of culture not as a big container that people inhabit but as a network whose traces, drawn carefully, let us ask how the behaviors of individuals create larger, longer-lived patterns. Some public representations are consistently learned and then reexpressed and relearned—Mother Goose rhymes, tartan patterns, and peer review have all survived for centuries. Others move from ubiquitous to marginal in a matter of years—pet rocks, “The Piña Colada Song.” Still others thrive only within a subcultural domain—cosplay, Civil War re-enactments. (Indeed, a subculture is simply a network of people who traffic in particular representations—representations that are mostly inert in the larger culture.)

  With Sperber’s network-tracing model, culture is best analyzed as an overlapping set of transactions rather than as a container or a thing or a force. Given this, we can ask detailed questions about which private ideas are made public where, and we can ask when and how often those public ideas take hold in individual minds.

  Rather than arguing about whether the sonnet is still a vital part of Western culture, for example, Sperber makes it possible to ask instead, “Which people have mental representations of individual sonnets, or of the sonnet as an overall form? How often do they express those representations? How often do others remember those expressions?” Understanding sonnet-knowing becomes a network-analysis project, driven by empirical questions about how widespread, detailed, and coherent the mental representations of sonnets are. Cultural commitment to sonnets and Angry Birds and American exceptionalism and the theory of relativity can all be placed under the same lens.

  This is what is so powerful about Sperber’s idea: Culture is a giant, asynchronous network of replication, ideas turning into expressions, which turn into other, related ideas. Sperber also allows us to understand why persistence of public expression can be so powerful. When I sing “Camptown Races” to my son, he internalizes his own (slightly different) version. As he learns to read sheet music, however, he gains access to a much larger universe of such representations; Beethoven is not around to hum “Für Elise” to him, but through a set of agreed-on symbols (themselves internalized as mental representations) Beethoven’s public representations can be internalized centuries later.

 
