Farsighted


by Steven Johnson

Let him start for the Continent, then, without our pronouncing on his future. Among all forms of mistake, prophecy is the most gratuitous.

  • GEORGE ELIOT, MIDDLEMARCH

  For most of its history, the science of brain function had a grim reliance on catastrophic injuries. Scientists had been dissecting brains for centuries, but until modern neuro-imaging tools like PET and fMRI allowed us to track blood flow to different parts of the brain in real time, it was extremely difficult to tell which parts of the brain were responsible for different mental states. Most of our understanding of brain specialization was based on case studies like that of Phineas Gage, the nineteenth-century railroad worker who somehow survived an iron rod piercing his left frontal lobe and went on to display a striking set of changes in his personality. Before neuro-imaging, if you wanted to ascertain the function of a specific part of the brain, you found someone who had lost that part in some terrible accident, and you figured out how the injury had impaired them. If they were blind, the injury must have impacted their visual system; if they had amnesia, the damaged area must have something to do with memory.

  This was an extremely inefficient way to study the human brain, so when PET arrived in the 1970s and fMRI in the early 1990s, bringing the promise of studying healthy brains at work, neuroscientists were understandably thrilled at the prospect. But scientists quickly realized that the new technologies required a baseline state for their scans to be meaningful. Blood, after all, is circulating through the entire brain all the time, so what you are looking for in a PET or fMRI scan are changes in that blood flow: a surge of activity in one area, a decrease in another. When you see a surge in the auditory cortex as a Bach sonata plays in the fMRI room, the scan makes it clear that that specific part of the temporal lobe plays a role in listening to music. But to see that surge, you have to be able to contrast it with a resting state. It’s only by tracking the differences between different states—and their different patterns of blood flow throughout the brain—that the scans become useful.
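  To make the idea of a contrast concrete, here is a minimal sketch in Python of the subtraction at the heart of these scans. The tiny grid of numbers, the highlighted "auditory" voxel, and the threshold are all invented for illustration, not real neuroimaging output.

```python
import numpy as np

# Toy "scans": average blood-flow signal per voxel on a tiny 4x4 grid.
# Real PET and fMRI data are 3-D volumes collected over time; these numbers
# are invented purely to illustrate the rest-versus-task subtraction.
rest_scan = np.array([
    [1.00, 1.02, 0.99, 1.01],
    [1.01, 1.00, 1.03, 0.98],
    [0.99, 1.01, 1.00, 1.02],
    [1.02, 0.98, 1.01, 1.00],
])
task_scan = rest_scan.copy()
task_scan[1, 2] += 0.25  # pretend one "auditory" voxel surges while music plays

# The contrast map: what changed relative to the resting baseline.
contrast = task_scan - rest_scan

# Flag voxels whose signal rose by more than an arbitrary threshold.
threshold = 0.1
active_voxels = np.argwhere(contrast > threshold)
print("Voxels more active during the task than at rest:", active_voxels.tolist())
```

  Real analyses work on full three-dimensional volumes and use statistical tests rather than a raw threshold, but the underlying logic is the same: activity only becomes visible relative to a baseline.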

  For years, scientists assumed this wasn’t all that tricky. You put your research subjects into the scanner, asked them to rest and do nothing, and then asked them to do whatever task you were studying: listening to music, speaking, playing chess. You scanned their brains while they were resting and then scanned them again when they were active, and the computer analyzed the differences and conjured up an image that foregrounded the changes in blood flow, not unlike a modern weather map that shows you the different intensities of a storm system approaching a metro area. In the mid-nineties, a brain researcher at the University of Iowa named Nancy Andreasen was conducting an experiment on memory using PET machines when she noticed something unusual in the results. The “rest” state scans didn’t seem to show a decrease in activity. On the contrary—telling her subjects to sit still and not try to do anything in particular seemed to trigger a very specific pattern of active stimulation in their brains. In a paper published in 1995, Andreasen noted one additional detail about that pattern: the systems of the brain that lit up in the rest state were systems that are far less developed in the brains of non-human primates. “Apparently, when the brain/mind thinks in a free and unencumbered fashion,” Andreasen speculated, “it uses its most human and complex parts.”

  Soon a number of other researchers began exploring this strange behavior. In many studies, the brain turned out to be more active at rest than it was when it was supposedly being active. Before long, scientists began calling this recurring pattern of activity the “default network.” In 1999, a team of researchers at the Medical College of Wisconsin led by J. R. Binder published an influential paper that suggested the default network involved “retrieval of information from long-term memory, information representation in conscious awareness in the form of mental images and thoughts, and manipulation of this information for problem-solving and planning.” In other words, when we are left to our own mental devices, the mind drifts into a state where it swirls together memories and projections, mulls problems, and concocts strategies for the future. Binder went on to speculate on the adaptive value of this kind of mental activity. “By storing, retrieving, and manipulating internal information, we organize what could not be organized during stimulus presentation, solve problems that require computation over long periods of time, and create effective plans governing behavior in the future. These capabilities have surely made no small contribution to human survival and the invention of technology.”

  There is a simpler—and less revelatory—way of describing these discoveries: human beings daydream. We didn’t need an fMRI scanner to find this out about ourselves. What the technology did reveal was just how much energy daydreaming required. What feels like the mind drifting off into reverie is actually, on the level of neural activity, a full workout. And the brain regions involved in that workout happen to be ones that are uniquely human. Why would our brains devote so many resources to something as innocuous and seemingly unproductive as daydreaming? This mystery has compelled another group of researchers to investigate what exactly we are thinking about when we daydream. An elaborate recent study by the social psychologist Roy Baumeister pinged five hundred people in Chicago at random points during the day and asked them what they were thinking about at that exact moment. If they weren’t actively involved in a specific task, Baumeister found they were surprisingly likely to be thinking about the future, imagining events and emotions that, technically speaking, hadn’t happened yet. They were three times more likely to be thinking about future events than about past events. (And even the past events they were ruminating on usually had some relevance for their future prospects.) If you take a step back and think about this finding, there is something puzzling about it. Human beings seem to spend a remarkable amount of time thinking about events that are by definition not real, that are figments of our imagination—because they haven’t happened yet. This future orientation turns out to be a defining characteristic of the brain’s default network. When we let our mind wander, it naturally starts to run imagined scenarios about what lies ahead. We are not, as F. Scott Fitzgerald put it at the end of The Great Gatsby, boats against the current, borne back ceaselessly into the past. In fact, our minds tend to race ahead of the current, contemplating the future whenever they get the opportunity.

  The psychologist Martin Seligman has argued recently that this capacity to build working hypotheses about future events—our ability to make long-term predictions that shape the decisions we make in life—may be the defining attribute of human intelligence. “What best distinguishes our species,” he writes, “is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. . . . A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain.”

  It is unclear whether non-human animals have any real concept of the future at all. Some organisms display behavior that suggests long-term forethought—like a squirrel burying a nut for winter—but those behaviors are all instinctive, shaped by genes, not cognition. The most advanced study of animal time schemes concluded that most animals can only plan ahead deliberately on the scale of minutes. Making decisions based on future prospects on the scale of months or years—even something as simple as planning a summer vacation in December—would be unimaginable to our closest primate relatives. The truth is that we are constantly making predictions about events on the horizon, and those predictions steer the choices that we make in life. Without that talent for prospection, we would be a fundamentally different species.

  THE SUPERFORECASTERS

  The fact that our brains evolved a default network that likes to ruminate on what might lie ahead of us does not necessarily mean that we are flawless at predicting future events, particularly when those events are full-spectrum affairs with long time horizons. A few decades ago, the political science professor Philip Tetlock famously conducted a series of forecasting tournaments, where pundits and public intellectuals were asked to make predictions about future events. Tetlock assembled a group of 284 “experts” from a broad range of institutions and political perspectives. Some were government officials, others worked for institutions like the World Bank, and some were public intellectuals who published frequently on the op-ed pages of major newspapers. Part of the brilliance of Tetlock’s experiment is that he was trying to measure what the author Stewart Brand called “the long now”—not the daily churn of the news cycle, but the slower, more momentous changes in society. Some forecasts involved events happening over the next year, but others asked participants to look forward over the coming decade. Most of the questions were geopolitical or economic in nature: Will a member of the European Union withdraw over the coming ten years? Will there be a recession in the United States in the next five years?

  Tetlock collected 28,000 predictions over the course of his research and then took the momentous step that almost never accompanies the pronouncements of op-ed writers and cable news pundits: he actually compared the predictions to real-world outcomes and graded the forecasters for their comparative accuracy. As a kind of control, Tetlock compared the human forecasts to simple algorithmic versions, like “always predict no change” or “assume the current rate of change continues uninterrupted.” If the forecast asked for the size of the US deficit in ten years, one algorithm would just answer, “The same as it is now.” The other would calculate the rate at which the deficit was growing or shrinking and project the ten-year forecast accordingly.
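  Those two controls are simple enough to write out. The Python sketch below is one plausible reading of the “no change” and “present trends continue” baselines described above; the function names are mine, the deficit figures are invented for illustration, and the trend rule is treated here as a simple linear projection rather than anything Tetlock specifies.

```python
def no_change_forecast(current_value: float) -> float:
    """Baseline 1: always predict that nothing changes."""
    return current_value


def trend_forecast(current_value: float, recent_annual_change: float, years_ahead: int) -> float:
    """Baseline 2: assume the recent rate of change continues uninterrupted."""
    return current_value + recent_annual_change * years_ahead


# Invented numbers, purely for illustration: a deficit of $400 billion
# that has lately been growing by $30 billion a year, forecast ten years out.
deficit_now = 400.0
growth_per_year = 30.0

print(no_change_forecast(deficit_now))                   # -> 400.0
print(trend_forecast(deficit_now, growth_per_year, 10))  # -> 700.0
```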

  The results, once Tetlock finished evaluating all the forecasts, were appallingly bad. Most so-called experts were no better than the figurative dart-throwing chimp. When asked to make predictions that looked at the longer-range trends, the experts actually performed worse than random guesses would have. The simplistic algorithmic forecasts (“present trends will continue”) actually outperformed many of the experts, and Tetlock generally found that there was an inverse correlation between how well-known the expert was and the efficacy of their forecasts. The more media exposure you had, the less valuable your predictions were likely to be.

  When Tetlock finally published these results in his 2005 book, Expert Political Judgment, they were widely reported in the news media, which was somewhat ironic, given that the lesson of Tetlock’s study seemed to undermine the authority of media opinions. Yet Tetlock did uncover a statistically meaningful group of experts who were, in fact, better than the chimps, even at long-term forecasts. Their accuracy rates weren’t anywhere near total clairvoyance, but there was something about them that was helping them see the long view more clearly than their peers. And so Tetlock turned to an even more interesting mystery: What separated the successful forecasters from the charlatans? The usual suspects didn’t pan out: it didn’t make a difference if they had a PhD, or a higher IQ, or a post at a prestigious institution, or a higher level of security clearance. And it didn’t really matter what their political beliefs were. “The critical factor,” Tetlock writes, “was how they thought”:

  One group tended to organize their thinking around Big Ideas, although they didn’t agree on which Big Ideas were true or false. Some were environmental doomsters (“We’re running out of everything”); others were cornucopian boomsters (“We can find cost-effective substitutes for everything”). Some were socialists (who favored state control of the commanding heights of the economy); others were free-market fundamentalists (who wanted to minimize regulation). As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. . . . As a result, they were unusually confident and likelier to declare things “impossible” or “certain.” . . . The other group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could. . . . They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds.

  Tetlock borrowed a metaphor from Isaiah Berlin’s legendary line, which was itself cribbed from the ancient Greek poet Archilochus—“The fox knows many things but the hedgehog knows one big thing”—and dubbed these two kinds of forecasters hedgehogs and foxes. In Tetlock’s analysis, the foxes—attuned to a wide range of potential sources, willing to admit uncertainty, not devoted to an overarching theory—turned out to be significantly better at predicting future events than the more single-minded experts. The foxes were full spectrum; the hedgehogs were narrowband. When trying to make sense of a complex, shifting situation—a national economy, or a technological development like the invention of the computer—the unified perspective of a single field of expertise or worldview actually appears to make you less able to project future changes. For the long view, you need to draw on multiple sources for clues; dabblers and hobbyists outperform the unified thinkers.

  Tetlock also noticed one other interesting trait about the successful forecasters—one drawn from the study of personality types instead of research methodology. Psychologists often refer to the “big five” traits that define the major axes of human personality: conscientiousness, extraversion, agreeableness, neuroticism, and openness to experience, which is also sometimes described as curiosity. When he evaluated his forecasters in terms of these basic categories, one jumped out: the successful forecasters as a group were much more likely to be open to experience. “Most people who are not from Ghana would find a question like ‘Who will win the presidential election in Ghana?’ pointless,” Tetlock writes. “They wouldn’t know where to start, or why to bother. But when I put that hypothetical question to [one of the successful forecasters] and asked for his reaction, he simply said, ‘Well, here’s an opportunity to learn something about Ghana.’”

  But Tetlock’s superforecasters were hardly prophets. As a group they were roughly 20 percent better at predicting the future than the average hedgehog, which meant they only slightly outperformed chance. You could fill an entire library wing with histories of people who failed to see momentous developments coming, developments that with hindsight now seem obvious to us. Almost no one predicted the network-connected personal computer, for instance. Numerous science-fiction narratives—starting with H. G. Wells’s vision of a “world brain”—imagined some kind of centralized, mechanical superintelligence that could be consulted for advice on humanity’s biggest problems. But the idea of computers becoming effectively home appliances—cheap, portable, and employed for everyday tasks like reading advice columns or looking up sports scores—seems to have been almost completely inconceivable, even to the very people whose job it was to predict future developments in society! (The one exception was an obscure 1946 short story called “A Logic Named Joe” that features a device that not only closely resembles a modern PC but also includes functionality that anticipates Google queries as well.)

  The sci-fi failure to anticipate network-connected PCs was matched by an equivalently flawed estimation of our future advances in transportation. Most science-fiction writers in the middle of the twentieth century assumed that space travel would become a commonplace civilian activity by the end of the century, while seriously underestimating the impact of the microprocessor, leading to what the sci-fi scholar Gary Westfahl calls “absurd scenes of spaceship pilots frantically manipulating slide rules in order to recalculate their courses.” Somehow it was much easier to imagine that human beings would colonize Mars than it was to imagine that they would check the weather and chat with their friends on a PC.

  Why were network-connected personal computers so hard to predict? The question is an important one, because the forces that kept our most visionary writers from imagining the digital revolution—and compelled them to wildly overestimate the future of space travel—can tell us a great deal about how predictions fail when we are attempting to forecast the behavior of complex systems. The simplest explanation is what Westfahl calls the “fallacy of extrapolation”:

  This is the assumption that an identified trend will always continue in the same manner, indefinitely into the future. Thus, George Orwell in the 1940s observed steady growth in totalitarian governments and predicted the trend would continue until it engulfed the entire world by the year Nineteen Eighty-Four. . . . Robert A. Heinlein in “Where To?” (1952) was one of many commentators who, noticing that the extent of clothing that society requires people to wear had steadily declined during the last century, confidently predicted the future acceptance of complete public nudity.

  Space travel is the ultimate example of the fallacy of extrapolation at work. From about 1820 to around 1976—a period bookended by the invention of the railroads on one end and the first commercial flights of the supersonic Concorde on the other—the top speed of the human species accelerated dramatically. We went from the blistering 40 mph of the first locomotives—the fastest any human being had ever traveled—to the supersonic speeds of jet airplanes and rockets in roughly a century and a half, more than a twentyfold increase in velocity. It seemed only logical that the trend would continue and humans would soon be traveling at 20,000 mph, thus making travel to Mars not much more consequential than a transatlantic flight on a jet airplane. But, of course, that upward curve in top speed hit a series of unanticipated roadblocks, some of which involved the laws of physics, some of which involved declining funding for space programs around the world. In fact, since the Concorde was retired in 2003, the top speed of civilian travel has actually declined. The prediction of colonies on Mars failed because current trends did not continue to hold in a steady state.
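  To see how quickly that kind of naive extrapolation runs away, here is a small Python sketch that extends the top-speed trend described above as if it could never bend. The two anchor points are the chapter’s round numbers, and the exponential fit is simply one way a mid-century forecaster might have projected the curve, not a model anyone actually used.

```python
# Anchor points taken from the chapter's round numbers: early locomotives
# (~40 mph, circa 1820) and supersonic passenger flight (~1,350 mph, circa 1976).
year0, speed0 = 1820, 40.0
year1, speed1 = 1976, 1350.0

# The fallacy of extrapolation in one line: assume the growth rate observed
# between those two points holds forever.
annual_growth = (speed1 / speed0) ** (1 / (year1 - year0))

def extrapolated_top_speed(year: int) -> float:
    """Project the top travel speed assuming the 1820-1976 trend never flattens."""
    return speed0 * annual_growth ** (year - year0)

for year in (2000, 2050, 2100):
    print(year, round(extrapolated_top_speed(year)), "mph")
# The projection keeps climbing without limit -- exactly the assumption that
# made routine Mars travel look inevitable, and exactly the curve that in
# reality flattened and then dipped once the Concorde was retired.
```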

 
