This Will Make You Smarter

by John Brockman


  Mischel refers to this skill as the “strategic allocation of attention,” and he argues that it’s the skill underlying self-control. Too often, we assume that willpower is about having strong moral fiber. But that’s wrong. Willpower is really about properly directing the spotlight of attention, learning how to control that short list of thoughts in working memory. It’s about realizing that if we’re thinking about the marshmallow, we’re going to eat it, which is why we need to look away.

  What’s interesting is that this cognitive skill isn’t just useful for dieters. It seems to be a core part of success in the real world. For instance, when Mischel followed up with the initial subjects thirteen years later—they were now high school seniors—he realized that their performance on the marshmallow task had been highly predictive across a vast range of metrics. Those kids who had struggled to wait at the age of four were also more likely to have behavioral problems, both in school and at home. They struggled in stressful situations, often had trouble paying attention, and found it difficult to maintain friendships. Most impressive, perhaps, were the academic numbers: The kids who could wait fifteen minutes for a marshmallow had an SAT score that was, on average, 210 points higher than that of the kids who could wait only thirty seconds.

  These correlations demonstrate the importance of learning to strategically allocate our attention. When we properly control the spotlight, we can resist negative thoughts and dangerous temptations. We can walk away from fights and improve our odds against addiction. Our decisions are driven by the facts and feelings bouncing around the brain—the allocation of attention allows us to direct this haphazard process, as we consciously select the thoughts we want to think about.

  Furthermore, this mental skill is getting more valuable. We live, after all, in the age of information, which makes the ability to focus on the important information incredibly important. (Herbert Simon said it best: “A wealth of information creates a poverty of attention.”) The brain is a bounded machine, and the world is a confusing place, full of data and distractions. Intelligence is the ability to parse the data so that it makes just a little bit more sense. Like willpower, this ability requires the strategic allocation of attention.

  One final thought: In recent decades, psychology and neuroscience have severely eroded classical notions of free will. The unconscious mind, it turns out, is most of the mind. And yet, we can still control the spotlight of attention, focusing on those ideas that will help us succeed. In the end, this may be the only thing we can control. We don’t have to look at the marshmallow.

  The Focusing Illusion

  Daniel Kahneman

  Professor emeritus of psychology and public affairs, Woodrow Wilson School, Princeton University; recipient, 2002 Nobel Memorial Prize in Economic Sciences

  Education is an important determinant of income—one of the most important—but it is less important than most people think. If everyone had the same education, the inequality of income would be reduced by less than 10 percent. When you focus on education, you neglect the myriad other factors that determine income. The differences of income among people who have the same education are huge.
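
  A toy simulation makes it easier to see how both halves of that claim can be true at once. The numbers below are invented for illustration (they are not Kahneman’s data): education genuinely moves income, yet equalizing it barely compresses the income distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented coefficients: education shifts log-income, but most of the
# variation comes from everything else (luck, health, location, ...).
education = rng.normal(0.0, 1.0, n)        # standardized schooling
everything_else = rng.normal(0.0, 1.0, n)  # all other determinants
log_income = 0.3 * education + everything_else

# Counterfactual: give everyone identical education.
sd_actual = log_income.std()
sd_equalized = everything_else.std()
print(f"income dispersion falls by {1 - sd_equalized / sd_actual:.1%}")
# -> roughly 4%: a real determinant, yet equalizing it changes little.
```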

  Income is an important determinant of people’s satisfaction with their lives, but it is far less important than most people think. If everyone had the same income, the differences among people in life satisfaction would be reduced by less than 5 percent.

  Income is even less important as a determinant of emotional happiness. Winning the lottery is a happy event, but the elation does not last. On average, individuals with high income are in a better mood than people with lower income, but the difference is about a third as large as most people expect. When you think of rich and poor people, your thoughts are inevitably focused on circumstances in which income is important. But happiness depends on other factors more than it depends on income.

  Paraplegics are often unhappy, but they are not unhappy all the time, because they spend most of the time experiencing and thinking about things other than their disability. When we think of what it is like to be a paraplegic, or blind, or a lottery winner, or a resident of California, we focus on the distinctive aspects of each of these conditions. The mismatch in the allocation of attention between thinking about a life condition and actually living it is the cause of the focusing illusion.

  Marketers exploit the focusing illusion. When people are induced to believe that they “must have” a good, they greatly exaggerate the difference that the good will make to the quality of their life. The focusing illusion is greater for some goods than for others, depending on the extent to which the goods attract continued attention over time. The focusing illusion is likely to be more significant for leather car seats than for books on tape.

  Politicians are almost as good as marketers in causing people to exaggerate the importance of issues on which their attention is focused. People can be made to believe that school uniforms will significantly improve educational outcomes, or that health care reform will hugely change the quality of life in the United States—either for the better or for the worse. Health care reform will make a difference, but the difference will be smaller than it appears when you focus on it.

  The Uselessness of Certainty

  Carlo Rovelli

  Physicist, Centre de Physique Théorique, Marseille, France; author, The First Scientist: Anaximander and His Legacy

  There is a widely held notion that does plenty of damage: the notion of “scientifically proved.” Nearly an oxymoron. The very foundation of science is to keep the door open to doubt. Precisely because we keep questioning everything, especially our own premises, we are always ready to improve our knowledge. Therefore a good scientist is never “certain.” Lack of certainty is precisely what makes conclusions more reliable than the conclusions of those who are certain, because the good scientist will be ready to shift to a different point of view if better evidence or novel arguments emerge. Certainty, then, is not merely useless; if we value reliability, it is actually damaging.

  Failure to appreciate the value of uncertainty is at the origin of much silliness in our society. Are we sure that the Earth is going to keep heating up if we don’t do anything? Are we sure of the details of the current theory of evolution? Are we sure that modern medicine is always a better strategy than traditional ones? No, we are not, in any of these cases. But if, from this lack of certainty, we jump to the conviction that we had better not care about global heating, that there is no evolution and the world was created six thousand years ago, or that traditional medicine must be more effective than modern medicine—well, we are simply stupid. Still, many people do make these inferences, because the lack of certainty is perceived as a sign of weakness instead of being what it is—the first source of our knowledge.

  All knowledge, even the most solid, carries a margin of uncertainty. (I am very sure what my own name is . . . but what if I just hit my head and got momentarily confused?) Knowledge itself is probabilistic in nature, a notion emphasized by some currents of philosophical pragmatism. A better understanding of the meaning of “probability”—and especially realizing that we don’t need (and never possess) “scientifically proved” facts but only a sufficiently high degree of probability in order to make decisions—would improve everybody’s conceptual toolkit.
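
  To make that last point concrete, here is the standard expected-loss rule of decision theory; the notation and the sample probability are illustrative additions, not Rovelli’s.

```latex
% Act whenever the probable loss of inaction outweighs the cost of acting:
\[
  \text{act} \iff p \cdot L > C
\]
% p : probability that the harm occurs (never exactly 1)
% L : loss suffered if the harm does occur
% C : cost of acting now
% Even p = 0.9 -- strong evidence, not "proof" -- can make action
% rational: a sufficiently high probability, not certainty, is what
% decisions actually require.
```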

  Uncertainty

  Lawrence Krauss

  Physicist, Foundation Professor, and director, Origins Project, Arizona State University; author, Quantum Man: Richard Feynman’s Life in Science

  The notion of uncertainty is perhaps the least well understood concept in science. In the public parlance, uncertainty is a bad thing, implying a lack of rigor and predictability. The fact that global warming estimates are uncertain, for example, has been used by many to argue against any action at the present time.

  In fact, however, uncertainty is a central component of what makes science successful. Being able to quantify uncertainty and incorporate it into models is what makes science quantitative rather than qualitative. Indeed, no number, no measurement, no observable in science is exact. Quoting numbers without attaching an uncertainty to them implies that they have, in essence, no meaning.

  One of the things that makes uncertainty difficult for members of the public to appreciate is that the significance of uncertainty is relative. Take, for example, the distance between Earth and the sun: 1.49597 × 10⁸ km, as measured at one point during the year. This seems relatively precise; after all, using six significant digits means I know the distance to an accuracy of one part in a million or so. However, if the next digit is uncertain, that means the uncertainty in knowing the precise Earth-sun distance is larger than the distance between New York and Chicago!
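
  A quick back-of-the-envelope check of those scales (the New York–Chicago figure of roughly 1,150 km is an assumed value, not taken from the essay):

```python
earth_sun_km = 1.49597e8  # quoted to six significant digits
last_digit_km = 1e3       # the trailing "7" counts thousands of km

# Rounding to six digits hides everything below one step of the last digit.
print(f"relative precision: {last_digit_km / earth_sun_km:.1e}")  # ~6.7e-06
print(f"hidden window: ~{last_digit_km:,.0f} km")
print("New York to Chicago: ~1,150 km -- the same scale")
```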

  Whether or not the quoted number is “precise” therefore depends on what I’m intending to do with it. If I care only about what minute the sun will rise tomorrow, then the number quoted here is fine. If I want to send a satellite to orbit just above the sun, however, then I would need to know distances more accurately.

  This is why uncertainty is so important. Until we can quantify the uncertainty in our statements and our predictions, we have little idea of their power or significance. So, too, in the public sphere. Public policy made in the absence of quantified uncertainties, or without an appreciation of how hard reliable estimates of uncertainty are to obtain, usually means bad public policy.

  A Sense of Proportion About Fear of the Unknown

  Aubrey de Grey

  Gerontologist; chief science officer, SENS Foundation; coauthor (with Michael Rae), Ending Aging

  Einstein ranks extremely high not only among the all-time practitioners of science but also among the producers of aphorisms that place science in its real-world context. One of my favorites is “If we knew what we were doing, it wouldn’t be called research.” This disarming comment, like so many of the best quotes by experts in any field, embodies a subtle mix of sympathy and disdain for the difficulties the great unwashed have in appreciating what those experts do.

  One of the foremost challenges facing scientists today is to communicate the management of uncertainty. The public knows that experts are, well, expert—that they know more than anyone else about the issue at hand. What is evidently far harder for most people to grasp is that “more than anyone else” does not mean “everything”—and especially that, given only partial knowledge, experts must also be expert at judging the best course of action, whether in the lab, the newsroom, or the policy maker’s office.

  Of course it’s true that many experts are decidedly inexpert at communicating their work in lay terms. This remains a major issue largely because experts are only rarely called upon to engage in general-audience communication and hence do not see gaining such skills as a priority. Training and advice are available, often from university press offices, but even when experts take advantage of such opportunities, they generally do so too little and too late.

  However, in my view that’s a secondary issue. As a scientist with the luxury of communicating frequently with the general public, I can report with confidence that experience helps only up to a point. A fundamental obstacle remains: Nonscientists harbor deep-seated instincts concerning the management of uncertainty in their everyday lives—instincts that exist because they generally work but that profoundly differ from the optimal strategy in science and technology. And of course it is technology that matters here, because technology is where the rubber hits the road—where science and the real world meet and must communicate effectively.

  Examples of failure in this regard abound—so much so that they are hardly worthy of enumeration. Whether it be swine flu, bird flu, GM crops, or stem cells, the public debate departs so starkly from the scientist’s comfort zone that it is hard not to sympathize with the errors scientists make, such as letting nuclear transfer be called “cloning,” which end up holding critical research fields back for years.

  One particular aspect of this problem stands out in its potential for public self-harm, however: risk aversion. When uncertainty revolves around such areas as ethics (as with nuclear transfer) or economic policy (as with flu vaccination), the issues are potentially avoidable by appropriate planning. This is not the case when it comes to the public attitude to risk. The immense decrease in vaccinations for major childhood diseases, following a single, controversial study linking them to autism, is a prime example. Another is the suspension of essentially all clinical trials of gene therapy for at least a year in response to the death of one person in a trial—a decision taken by regulatory bodies, yes, but one that was in line with public opinion.

  These responses to the risk/benefit ratio of cutting-edge technologies are examples of fear of the unknown—of an irrationally conservative prioritization of the risks of change over the benefits, with unequivocally deleterious consequences in terms of quality and quantity of life in the future. Fear of the unknown is not remotely irrational in principle, when “fear of” is understood as a synonym for “caution about,” but it can be and generally is overdone. If the public could be brought to a greater understanding of how to evaluate the risks inherent in exploring future technology, and the merits of accepting some short-term risk in the interests of overwhelmingly greater expected long-term benefit, progress in all areas of technology—especially biomedical technology—would be greatly accelerated.

  Because

  Nigel Goldenfeld

  Professor of physics, University of Illinois–Urbana-Champaign

  When you’re facing in the wrong direction, progress means walking backward. History suggests that our worldview undergoes disruptive change not so much when science adds new concepts to our cognitive toolkit as when it takes away old ones. The sets of intuitions that have been with us since birth define our scientific prejudices, and they not only are poorly suited to the realms of the very large and very small but also fail to describe everyday phenomena. If we are to identify where the next transformation of our worldview will come from, we need to take a fresh look at our deep intuitions. In the two minutes it takes you to read this essay, I am going to try to rewire your basic thinking about causality.

  Causality is usually understood as meaning that there is a single, preceding cause for an event. For example, in classical physics, a ball may be flying through the air because it was hit by a tennis racket. My sixteen-year-old car always revs much too fast because the temperature sensor wrongly indicates that the engine temperature is cold, as if the car were in start-up mode. We are so familiar with causality as an underlying feature of reality that we hardwire it into the laws of physics. It might seem that this would be unnecessary, but it turns out that the laws of physics do not distinguish between time going backward and time going forward. And so we make a choice about which sort of physical law we would like to have.

  However, complex systems, such as financial markets or the Earth’s biosphere, do not seem to obey causality. For every event that occurs, there are a multitude of possible causes, and the extent to which each contributes to the event is not clear, not even after the fact! One might say that there is a web of causation. For example, on a typical day, the stock market might go up or down by some fraction of a percentage point. The Wall Street Journal might blithely report that the stock market move was due to “traders taking profits” or perhaps “bargain hunting by investors.” The following day, the move might be in the opposite direction, and a different, perhaps contradictory, cause will be invoked. However, for each transaction there is both a buyer and a seller, and their worldviews must be opposite for the transaction to occur. Markets work only because there is a plurality of views. To assign a single or dominant cause to most market moves is to ignore the multitude of market outlooks and fail to recognize the nature and dynamics of the temporary imbalances between the numbers of traders who hold these differing views.
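
  A toy simulation (all numbers invented) illustrates the point: let a price respond only to each day’s imbalance between two opposite views, and “moves” appear that have no single cause behind them.

```python
import random

random.seed(1)
price = 100.0
for day in range(1, 6):
    # 1,000 traders, each holding one of two opposite views of the market.
    views = [random.choice((+1, -1)) for _ in range(1000)]
    imbalance = sum(views)             # temporary excess of bulls over bears
    price *= 1 + 0.0001 * imbalance    # the move scales with the imbalance
    print(f"day {day}: imbalance {imbalance:+5d}, price {price:8.2f}")
```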

  Similar misconceptions abound elsewhere in public debate and the sciences. For example, are there single causes for diseases? In some cases, such as Huntington’s disease, the cause can be traced to a unique factor—in this case, extra repetitions of a particular nucleotide sequence at a particular location in an individual’s DNA, coding for the amino acid glutamine. However, even in this case, the age of onset and the severity of the condition are also known to be controlled by environmental factors and interactions with other genes. The web of causation has been for many decades a well-worked metaphor in epidemiology, but there is still little quantitative understanding of how the web functions or forms. As Nancy Krieger of the Harvard School of Public Health poignantly asked in a celebrated 1994 essay, “Has anyone seen the spider?”

  The search for causal structure is nowhere more futile than in the debate over the origin of organismal complexity: intelligent design vs. evolution. Fueling the debate is a fundamental notion of causality—that there is a beginning to life and that such a beginning must have had a single cause. On the other hand, if there is instead a web of causation driving the origin and evolution of life, a skeptic might ask: Has anyone seen the spider?

  It turns out that there is no spider. Webs of causation can form spontaneously through the concatenation of associations between the agents or active elements in the system. For example, consider the Internet. Although a unified protocol for communication (TCP/IP, etc.) exists, the topology and structure of the Internet emerged during a frenzied build-out, as Internet service providers staked out territory in a gold rush of unprecedented scale. Remarkably, once the dust began to settle, it became apparent that the statistical properties of the resulting Internet were quite special: The time delays for packet transmission, the network topology, and even the information transmitted exhibit fractal properties.
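
  A minimal sketch of a web forming without a spider, assuming the networkx library: preferential attachment alone, with no central designer, reproduces the heavy-tailed, hub-dominated structure the essay describes.

```python
import networkx as nx
from collections import Counter

# Each new node attaches to 2 existing nodes, favoring well-connected ones.
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

counts = Counter(d for _, d in G.degree())
for k in (2, 4, 8, 16, 32, 64):
    print(f"degree {k:>2}: {counts.get(k, 0):>5} nodes")
# Counts fall off roughly as a power law: many leaves, a few big hubs,
# all from local attachment rules -- no spider required.
```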

 
