
  Much of what counts as “news” today involves such narratives. The combination of an ever-shortening news cycle, near-instantaneous communications, fragmented markets, heightened competition for viewership, and our cognitive and emotional biases conspires to make it all but inevitable that these narratives dominate—and unlikely that we will grasp the progressive themes that large-scale data analyses reveal.

  The result is today’s dominant alarmist and declinist news cycle, which is essentially a random walk from moral panic to moral panic. To appreciate the real news—that by many fundamental measures the state of the world is improving—requires an exercise in cognitive control, inhibiting our first emotional impulses and allowing a rational appraisal of scientifically informed data. This is by no means some Pollyanna-ish exercise in denial. The most important scientific news to me is that the broad historical trajectory of human societies provides a powerful counternarrative to today’s dominant declinist worldview.

  The Healthy Diet U-Turn

  Ed Regis

  Science writer; author, Monsters: The Hindenburg Disaster and the Birth of Pathological Technology

  To me, the most interesting bit of news in the last couple of years was the sea change in attitude among nutritional scientists—from promotion of an anti-fat, pro-carbohydrate set of dietary recommendations to a lower-carbohydrate, selectively pro-fat dietary regime. The issue is important, because human health and, indeed, human lives are at stake.

  For years Americans had been told by the experts to avoid fats at all costs, as if fats were the antichrists of nutrition. A diet low in fats and rich in carbohydrates was the way to go in order to achieve a sleek, gazelle-like body and physiological enlightenment. In consequence, no-fat or low-fat foods became all the rage; for a long time, the only kind of yogurt you could find on grocery shelves was the jelly-like zero-fat variety and the only available canned tuna was packed not in olive oil but in water, as if the poor creature were still swimming.

  Unappetizing as much of it was, many Americans duly followed this stringent set of dietary dos and don’ts. But we did not thereby become a nation of fit, trim, and healthy physical specimens—far from it. Instead, we suffered an obesity epidemic across all age groups, a tidal wave of heart disease, and sharply increased rates of Type 2 diabetes. Once digested, all those carbohydrate-rich foods were converted into glucose, which raised insulin levels and in turn caused storage of excess bodily fat.

  Nutritional scientists learned the dual lesson that a diet high in carbohydrates can in fact be hazardous to your health, and that their fat phobia was unjustified by the evidence. In reality, there are good fats (like olive oil) and bad fats, healthy carbs and unhealthy carbs (like refined sugars). Many nutritionists now favor a diametrically opposite approach, allowing certain fats as wholesome and healthy, while calling for a reduction in carbohydrates, especially refined sugars and starches.

  A corollary of this about-face in dietary wisdom was the realization that much of so-called nutritional “science” was bad science to begin with. Many of the canonical studies of diet and nutrition were flawed by selective use of evidence, unrepresentative sampling, absence of adequate controls, and shifting clinical-trial populations. Furthermore, some of the principal investigators were prone to selection bias and reluctant to test their preconceived views against contrary evidence. (These and other failings of the discipline are exhaustively documented in journalist Nina Teicholz’s book The Big Fat Surprise [2014].)

  Unfortunately, nutritional science remains something of a backwater. NASA’s Curiosity rover explores the plains, craters, and dunes of Mars, and the New Horizons spacecraft takes exquisite pictures of Pluto. Molecular biologists wield superb gene-editing tools and are in the process of resurrecting extinct species. Yet when it comes to the prosaic task of telling us what to eat to achieve good health and avoid heart disease, obesity, and other ailments, dietary science still has a long way to go.

  Fatty Foods Are Good for Your Health

  Peter Turchin

  Professor, Department of Ecology and Evolutionary Biology, University of Connecticut; author, Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth

  Amid all the confusing fluctuations in dietary fashion to which Americans have been exposed since the 1960s, one recommendation long remained unchallenged. From the 1960s until 2015, Americans received consistent dietary advice: Fat, especially saturated fat, is bad for your health. By the 1980s, the belief equating a low-fat diet with better health had become enshrined in the national dietary advice from the U.S. Department of Agriculture and was endorsed by the surgeon general. Meanwhile, as Americans ate less fat, they steadily became more obese.

  The obesity epidemic probably has many causes, not all of them well understood. But the misguided dietary advice that has bombarded us over the past five decades is clearly an important contributing factor.

  There has in fact never been any scientific evidence that cutting total fat consumption has a positive effect on health—specifically, that it reduces the risks of heart disease and diabetes. For years, those who pointed this out were marginalized, but recently the evidence debunking the supposed benefits of low-fat diets has reached a critical mass, so that a mainstream magazine such as Time could write in 2014: “Scientists labeled fat the enemy. Why they were wrong.” And now the official Scientific Report of the 2015 Dietary Guidelines Advisory Committee admits as much.

  There are several reasons why eating a low-fat diet is bad for your health. One is that if you lower the proportion of fat in your diet, you must replace it with something else. Eating more carbohydrates (whether refined or “complex”) increases your chances of becoming diabetic. Eating more protein increases your chances of getting gout.

  But perhaps a more important reason is that many Americans stopped eating food and switched to highly processed food substitutes: margarine, processed meats (such as the original Spam—not to be confused with email spam), low-fat cookies, and so on. In each case, we now have abundant evidence that these are “anti-health foods,” because they contain artificial trans fats, preservatives, or highly processed carbohydrates.

  Controlled diet studies are important and necessary for making informed decisions about what we eat, but an exciting recent scientific breakthrough has come from the infusion of evolutionary science into nutrition science. After all, you first need to figure out which hypotheses to test with controlled trials, and evolution has turned out to be a fertile generator of theoretical ideas for such tests.

  One source of ideas to test clinically is our growing knowledge of the characteristic diets of early human beings. Consider this simple idea (although it clearly was too much for traditional nutritionists): We will be better adapted to something our ancestors ate over millions of years than to, say, margarine, which we first encountered only 100 years ago. Or take a food like wheat, to which some populations (those in the Fertile Crescent) have been exposed for 10,000 years and others (Pacific Islanders) for only 200 years. Is it surprising that Pacific Islanders have the highest prevalence of obesity in the world, higher even than in the United States? And should we really tell them to switch to a Mediterranean diet, heavy on grains, pulses, and dairy, to which they’ve had no evolutionary exposure whatsoever?

  Our knowledge of ancestral diets is growing rapidly. We’re adapted to eating a variety of fatty foods, including grass-fed ruminants (beef and lamb) and seafood (oily fish), both good sources of omega-3 fatty acids. Of particular importance could be bone marrow—it’s likely that the first members of the genus Homo (e.g., habilis) were not hunters but scavengers, who competed with hyenas for large marrow bones. It’s also likely that nutrients from the bone marrow (and brains!) of scavenged savannah ungulates were the key resource for the evolution of our oversized brains.

  The new knowledge explains why Americans are getting fatter by eating low-fat diets. When you eliminate fatty foods that your body (especially your brain) needs, your body starts sending you persistent signals that you’re malnourished. So you’ll overeat foods other than fatty ones. The extra, unnecessary calories you consume (probably from carbohydrates) will be stored as fat. As a result, you’ll be unhappy, unhealthy, and overweight. You can avoid those extra pounds, of course, if you have a steely will (which few people have). Then you won’t be overweight—just unhappy and unhealthy.

  So, to lose fat you need to eat, not fat, but fatty foods. Paradoxically, eating enough fatty food of the right sorts will help make you lean, as well as happy and—Edge readers, take note—smart!

  Partisan Hostility

  Jonathan Haidt

  Social psychologist; professor, NYU-Stern School of Business; author, The Righteous Mind

  If you were on a committee tasked with choosing someone to hire (or admit to your university, or receive a prize in your field) and it came down to two candidates who were equally qualified on objective measures, which would you be most likely to choose?

  (a) The one who shared your race?

  (b) The one who shared your gender?

  (c) The one who shared your religion?

  (d) The one who shared your political party or ideology?

  The correct answer, for most Americans, is now (d). It is surely good news that prejudice based on race, gender, and religion is down in recent decades. But it is bad news—for America, for the world, and for science—that partisan hostility is way up.

  A 2014 paper by political scientists Shanto Iyengar and Sean Westwood, titled “Fear and Loathing Across Party Lines: New Evidence on Group Polarization,” reports four studies (all using nationally representative samples) in which respondents were given various ways to reveal both cross-partisan and cross-racial prejudice; in all cases, cross-partisan prejudice was larger.

  Iyengar and Westwood used a measure of implicit attitudes (the Implicit Association Test), which gauges how quickly and easily people pair words carrying “good” and “bad” connotations with words and images associated with “African Americans” vis-à-vis “European Americans.” They also ran a new version of the test using words and images related to Republicans and Democrats. The effect sizes for cross-partisan implicit attitudes were much larger than those for race. For white participants who identified with a party, the cross-partisan effect was about 50 percent larger than the cross-race effect. That is, when Americans look at or listen to one another, their automatic associations are more negative toward people from the “other side” than toward people of a different race.

  In another study, they had participants read pairs of fabricated résumés of graduating high school seniors and select one to receive a scholarship. Race made a difference—black and white participants generally preferred to award the scholarship to the student with the stereotypically black name. But party affiliation made an even bigger difference, and always in a tribal way: About 80 percent of the time, participants selected the candidate belonging to their party, and it made little difference whether their co-partisan had a higher or lower GPA than the other candidate.

  In two further studies, Iyengar and Westwood had participants play behavioral economics games (the “trust game” and the “dictator game”). Each person played with what they believed to be a particular other person, about whom they read a brief profile giving the person’s age, gender, income, race, and political ideology. Race and ideology were manipulated systematically. Race made no difference, but partisanship mattered a lot: People were more trusting and generous when they thought they were playing with a co-partisan than with an opposing partisan.

  This is bad news for America because it is hard to have an effective democracy without compromise. But rising partisan hostility means that, according to Pew Research, Americans increasingly see the other side not just as wrong but as evil, a threat to the very existence of the nation. Americans can expect rising polarization, nastiness, paralysis, and governmental dysfunction for a long time to come.

  This is a warning for the rest of the world, because some of the trends that have driven America to this point are occurring in many other countries, including rising education and individualism (which make people more ideological), rising immigration and ethnic diversity (which reduce social capital and trust), and stagnant economic growth (which puts people into a zero-sum mindset).

  This is bad news for science and universities because universities are usually associated with the left. In the United States, universities have moved rapidly left since 1990, when the left/right ratio of professors across all departments was less than 2:1. By 2004, the left/right ratio was roughly 5:1, and it is still climbing. In the social sciences and humanities, it’s far higher. Because this political purification is happening at a time of rising cross-partisan hostility, we can expect increasing hostility from Republican legislators toward universities and the things they desire, including research funding and freedom from federal and state control.

  Tribal conflicts and tribal politics took center stage in 2015. Iyengar and Westwood help us understand that tribal conflicts are no longer primarily about race, religion, or nationality. Cross-partisan prejudice should become a focus of concern and further research. In the United States, it may even be a more urgent problem than racial prejudice.

  Cognitive Science Transforms Moral Philosophy

  Stephen P. Stich

  Board of Governors Professor, Department of Philosophy, Rutgers University

  For 2,500 years, moral philosophy was the province of philosophers and theologians. But in recent years, moral philosophers who are also cognitive scientists, and cognitive scientists with a sophisticated mastery of moral philosophy, have transformed the field. Findings and theories from many branches of cognitive science have been used to reformulate traditional questions and to defend substantive views on some of the most important moral issues facing contemporary societies. In this new synthesis, the cognitive sciences are not replacing moral philosophy; rather, they are providing new insights into the psychological and neurological mechanisms underlying moral reasoning and moral judgment, and these insights are being used to construct empirically informed moral theories that are reshaping the discipline.

  Here’s the backstory: From Plato onward, philosophers concerned with morality have made claims about the way the mind works when we consider moral issues. But these claims were always speculative and often set out in metaphors or allegories. With the emergence of scientific psychology in the 20th century, psychologists became increasingly interested in moral judgment and moral development. But much of this work was done by researchers who had little or no acquaintance with the rich philosophical tradition that had drawn important distinctions and defended sophisticated positions about a wide range of moral issues. So philosophers who dipped into this work typically found it naïve and unhelpful.

  At the beginning of the current century, that began to change. Prompted by the interdisciplinary zeitgeist, young philosophers (and a few who weren’t so young) resolved to master the methods of contemporary psychology and neuroscience and use them to explore questions about the mind which philosophers had been debating for centuries. On the other side of the disciplinary divide, psychologists, neuroscientists, and researchers interested in the evolution of the mind began to engage with the philosophical tradition more seriously. What started as a trickle of papers that were both scientifically and philosophically sophisticated has turned into a flood. Hundreds are published every year, and moral psychology has become a hot topic. There are many examples of this extraordinary work. I’ll mention just three.

  Joshua Greene is in many ways the poster child for the new synthesis of cognitive science and moral philosophy. While working on his PhD in philosophy, Greene had the altogether novel idea of asking people to make judgments about moral dilemmas while in a brain scanner. Philosophers had already constructed a number of hypothetical moral dilemmas in which a protagonist was required to make a choice between two courses of action. One choice would result in the death of five innocent people; the other would result in the death of one innocent person. But philosophers were puzzled by the fact that in similar cases people sometimes chose to save the five and sometimes chose to let the five die. What Greene found was that different brain regions were involved in these choices. When the five were saved, the brain regions involved were thought to be associated with rational deliberation; when the five were not saved, the brain regions involved were thought to be associated with emotion.

  This early result prompted Greene to retrain as a cognitive neuroscientist and triggered a tsunami of studies exploring what goes on in the brain when people make moral judgments. But although Greene became a cognitive scientist, he was still a philosopher, and he draws on a decade of work in moral psychology to defend his account of how moral decisions that divide groups should be made.

  If Greene is the poster child for the new synthesis, John Mikhail is its Renaissance man. While completing a philosophy PhD, he spent several years studying cognitive science and then earned a law degree. He’s now a law professor whose areas of expertise include human-rights law and international law. Drawing on the same family of moral dilemmas that was center stage in Greene’s early work, Mikhail has conducted an extensive series of experiments that, he argues, support the view that all normal humans share an important set of innate moral principles. Mikhail argues that this empirical work provides the much-needed intellectual underpinning for the doctrine of universal human rights.

 
