This Will Make You Smarter

by John Brockman


  Anthropocene thinking tells us the problem is not necessarily inherent in the systems like commerce and energy that degrade nature; one hopes that these can be modified to become self-sustaining, with innovative advances and entrepreneurial energy. The real root of the Anthropocene dilemma lies in our neural architecture.

  We approach the Anthropocene threat with brains shaped by evolution to survive the previous geological epoch, the Holocene, when dangers were signaled by growls and rustles in the bushes, and it served one well to reflexively abhor spiders and snakes. Our neural alarm systems still attune to this largely antiquated range of danger.

  Add to that misattunement to threats our built-in perceptual blind spot: We have no direct neural register for the dangers of the Anthropocene epoch, which are too macro or micro for our sensory apparatus. We are oblivious to, say, our body burden, the lifetime build-up of damaging industrial chemicals in our tissues.

  To be sure, we have methods for assessing CO2 buildups or blood levels of BHA. But for the vast majority of people, those numbers have little to no emotional impact. Our amygdala shrugs.

  Finding ways to counter the forces that feed the Anthropocene effect should count high in prioritizing scientific efforts. The earth sciences, of course, embrace the issue, but they do not deal with the root of the problem—human behavior. The sciences that have most to offer have done the least Anthropocene thinking.

  The fields that hold keys to solutions include economics, neuroscience, social psychology, and cognitive science, and their various hybrids. With a focus on Anthropocene theory and practice, they might well contribute species-saving insights. But first they have to engage this challenge, which for the most part has remained off their agenda.

  When, for example, will neuroeconomics tackle the brain’s perplexing indifference to the news about planetary meltdown, let alone how that neural blind spot might be patched? Might cognitive neuroscience one day offer some insight that would change our collective decision making away from a lemming’s march to oblivion? Could any of the computer, behavioral, or brain sciences come up with an information prosthetic that might reverse our course?

  Paul Crutzen, the Dutch atmospheric chemist who received a Nobel for his work on ozone depletion, coined the term “Anthropocene” ten years ago. As a meme, “Anthropocene” has as yet little traction in scientific circles beyond geology and environmental science, let alone the wider culture: A Google check on “anthropocene,” as of this writing, shows 78,700 references (mainly in geoscience), while, by contrast, “placebo,” a once esoteric medical term now well established as a meme, has more than 18 million (and even the freshly coined “vuvuzela” has 3,650,000).

  Homo dilatus

  Alun Anderson

  Senior consultant, former editor-in-chief and publishing director, New Scientist; author, After the Ice: Life, Death, and Geopolitics in the New Arctic

  Our species might well be renamed Homo dilatus, the procrastinating ape. Somewhere in our evolution, we acquired the brain circuitry to deal with sudden crises and respond with urgent action. Steady declines and slowly developing threats are quite different. “Why act now, when the future is far off?” is the maxim for a species designed to deal with near-term problems and not long-term uncertainties. It’s a handy view of humankind that all those who use science to change policy should keep in their mental toolkit, and a tendency greatly reinforced by the endless procrastination in tackling climate change. Cancún follows Copenhagen follows Kyoto, but the more we dither and no extraordinary disaster follows, the more dithering seems just fine.

  Such behavior is not unique to climate change. It took the sinking of the Titanic to put sufficient lifeboats on passenger ships, the huge spill from the Amoco Cadiz to set international marine pollution rules, and the Exxon Valdez disaster to drive the switch to double-hulled tankers. The same pattern is seen in the oil industry, with the 2010 Gulf spill the latest chapter in the “Disaster first; regulations later” mind-set of Homo dilatus.

  There are a million similar stories from human history. So many great powers and once-dominant corporations slipped away as their fortunes declined without the necessary crisis to force change. Slow and steady change simply leads to habituation, not action. You could walk in the British countryside now and hear only a fraction of the birdsong that would have delighted a Victorian poet, but we simply cannot feel insidious loss. Only a present crisis wakes us.

  So puzzling is our behavior that the “psychology of climate change” has become a significant area of research, with efforts to find those vital messages that will turn our thinking toward the longer term and away from the concrete Now. Sadly, the skull of Homo dilatus seems too thick for the tricks that are currently on offer. In the case of climate change, we might better focus on adaptation until a big crisis comes along to rivet our minds. The complete loss of the summer Arctic ice might be the first. A huge dome of shining ice, about half the size of the United States, covers the top of the world in summer now. In a couple of decades, it will likely be gone. Will millions of square kilometers of white ice turning to dark water feel like a crisis? If that doesn’t do it, then following soon after will likely be painful and persistent droughts across the United States, much of Africa, Southeast Asia, and Australia.

  Then the good side of Homo dilatus may finally surface. A crisis might bring out the Bruce Willis in all of us, and with luck we’ll find an unexpected way to right the world before the end of the reel. Then we’ll no doubt put our feet up again.

  We Are Lost in Thought

  Sam Harris

  Neuroscientist; chairman, Project Reason; author, The Moral Landscape and The End of Faith

  I invite you to pay attention to anything—the sight of this text, the sensation of breathing, the feeling of your body resting against your chair—for a mere sixty seconds without getting distracted by discursive thought. It sounds simple enough: Just pay attention. The truth, however, is that you will find the task impossible. If the lives of your children depended on it, you could not focus on anything—even the feeling of a knife at your throat—for more than a few seconds, before your awareness would be submerged again by the flow of thought. This forced plunge into unreality is a problem. In fact, it is the problem from which every other problem in human life appears to be made.

  I am by no means denying the importance of thinking. Linguistic thought is indispensable to us. It is the basis for planning, explicit learning, moral reasoning, and many other capacities that make us human. Thinking is the substance of every social relationship and cultural institution we have. It is also the foundation of science. But our habitual identification with the flow of thought—that is, our failure to recognize thoughts as thoughts, as transient appearances in consciousness—is a primary source of human suffering and confusion.

  Our relationship to our own thinking is strange to the point of paradox, in fact. When we see a person walking down the street talking to himself, we generally assume that he is mentally ill. But we all talk to ourselves continuously—we just have the good sense to keep our mouths shut. Our lives in the present can scarcely be glimpsed through the veil of our discursivity: We tell ourselves what just happened, what almost happened, what should have happened, and what might yet happen. We ceaselessly reiterate our hopes and fears about the future. Rather than simply existing as ourselves, we seem to presume a relationship with ourselves. It’s as though we were having a conversation with an imaginary friend possessed of infinite patience. Who are we talking to?

  While most of us go through life feeling that we are the thinker of our thoughts and the experiencer of our experience, from the perspective of science we know that this is a distorted view. There is no discrete self or ego lurking like a Minotaur in the labyrinth of the brain. There is no region of cortex or pathway of neural processing that occupies a privileged position with respect to our personhood. There is no unchanging “center of narrative gravity” (to use Daniel Dennett’s phrase). In subjective terms, however, there seems to be one—to most of us, most of the time.

  Our contemplative traditions (Hindu, Buddhist, Christian, Muslim, Jewish, etc.) also suggest, to varying degrees and with greater or lesser precision, that we live in the grip of a cognitive illusion. But the alternative to our captivity is almost always viewed through the lens of religious dogma. A Christian will recite the Lord’s Prayer continuously over a weekend, experience a profound sense of clarity and peace, and judge this mental state to be fully corroborative of the doctrine of Christianity; a Hindu will spend an evening singing devotional songs to Krishna, feel suddenly free of his conventional sense of self, and conclude that his chosen deity has showered him with grace; a Sufi will spend hours whirling in circles, pierce the veil of thought for a time, and believe that he has established a direct connection to Allah.

  The universality of these phenomena refutes the sectarian claims of any one religion. And, given that contemplatives generally present their experiences of self-transcendence as inseparable from their associated theology, mythology, and metaphysics, it is no surprise that scientists and nonbelievers tend to view their reports as the product of disordered minds, or as exaggerated accounts of far more common mental states—like scientific awe, aesthetic enjoyment, artistic inspiration, and so on.

  Our religions are clearly false, even if certain classically religious experiences are worth having. If we want to actually understand the mind, and overcome some of the most dangerous and enduring sources of conflict in our world, we must begin thinking about the full spectrum of human experience in the context of science.

  But we must first realize that we are lost in thought.

  The Phenomenally Transparent Self-Model

  Thomas Metzinger

  Philosopher, Johannes Gutenberg-Universität, Mainz, and Frankfurt Institute for Advanced Studies; author, The Ego Tunnel

  A self-model is the inner representation that some information-processing systems have of themselves as a whole. A representation is phenomenally transparent if it (a) is conscious and (b) cannot be experienced as a representation. Therefore, transparent representations create the phenomenology of naïve realism—the robust and irrevocable sense that you are directly and immediately perceiving something that must be real. Now apply the second concept to the first: A transparent self-model necessarily creates the realistic conscious experience of selfhood—of being directly and immediately in touch with oneself as a whole.

  This concept is important, because it shows how, in a certain class of information-processing systems, the robust phenomenology of being a self would inevitably appear—although these systems never were, or had, anything like a self. It is empirically plausible that we might just be such systems.

  Correlation Is Not a Cause

  Sue Blackmore

  Psychologist; author, Consciousness: An Introduction

  The sentence “Correlation is not a cause” (CINAC) may be familiar to scientists but has not found its way into everyday language, even though critical thinking and scientific understanding would improve if more people had this simple reminder in their mental toolkit.

  One reason for this lack is that CINAC can be surprisingly difficult to grasp. I learned just how difficult when I was teaching experimental design to nurses, physiotherapists, and other assorted groups. They usually understood my favorite example: Imagine you’re watching at a railway station. More and more people arrive, until the platform is crowded, and then—hey, presto!—along comes a train. Did the people cause the train to arrive (A causes B)? Did the train cause the people to arrive (B causes A)? No, they both depended on a railway timetable (C caused both A and B).
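The railway-timetable example can be made concrete with a small simulation (an illustrative sketch, not from the text): a hidden variable C drives both A and B, which then correlate strongly even though neither causes the other.

```python
import random

random.seed(0)

# Hypothetical illustration of "C caused both A and B":
# the timetable (C) drives both platform crowding (A) and
# train arrival (B). A and B never influence each other.
n = 1000
C = [random.gauss(0, 1) for _ in range(n)]        # the timetable
A = [c + random.gauss(0, 0.3) for c in C]         # crowd size
B = [c + random.gauss(0, 0.3) for c in C]         # train arrival

def pearson(x, y):
    # Pearson correlation coefficient, computed by hand
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(A, B), 2))  # strong correlation, zero causation
```

The correlation comes out close to 1, yet removing all the passengers from the platform would not delay a single train.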

  I soon discovered that this understanding tended to slip away again and again, until I began a new regimen and started every lecture with an invented example to get them thinking. “Right,” I might say. “Suppose it’s been discovered—I don’t mean it’s true—that children who eat more tomato ketchup do worse in their exams. Why could this be?”

  They would argue that it wasn’t true. (I’d explain the point of thought experiments again.)

  “But there’d be health warnings on ketchup if it were poisonous.”

  (Just pretend it’s true for now, please.)

  And then they’d start using their imaginations: “There’s something in the ketchup that slows down nerves.” “Eating ketchup makes you watch more telly instead of doing your homework.” “Eating more ketchup means eating more chips, and that makes you fat and lazy.”

  Yes, yes, probably wrong but great examples of A causes B. Go on.

  And so to “Stupid people have different taste buds and don’t like ketchup.” “Maybe if you don’t pass your exams, your mum gives you ketchup.” And finally “Poorer people eat more junk food and do less well at school.”

  Next week: “Suppose we find that the more often people consult astrologers or psychics, the longer they live.”

  “But it can’t be true—astrology’s bunkum.”

  (Sigh . . . just pretend it’s true for now, please.)

  OK. “Astrologers have a special psychic energy that they radiate to their clients.” “Knowing the future means you can avoid dying.” “Understanding your horoscope makes you happier and healthier.”

  Yes, yes, excellent ideas, go on.

  “The older people get, the more often they go to psychics.” “Being healthy makes you more spiritual, and so you seek out spiritual guidance.”

  Yes, yes, keep going, all testable ideas.

  And finally “Women go to psychics more often and also live longer than men.”

  The point is that once you greet any new correlation with CINAC, your imagination is let loose. Once you listen to every new science story CINACally (which conveniently sounds like “cynically”), you find yourself thinking “OK, if A doesn’t cause B, could B cause A? Could something else cause them both, or could they both be the same thing even though they don’t appear to be? What’s going on? Can I imagine other possibilities? Could I test them? Could I find out which is true?” Then you can be critical of the science stories you hear. Then you are thinking like a scientist.

  Stories of health scares and psychic claims may get people’s attention, but understanding that a correlation is not a cause could raise levels of debate over some of today’s most pressing scientific issues. For example, we know that global temperature rise correlates with increasing levels of atmospheric carbon dioxide, but why? Thinking CINACally means asking which variable causes which, or whether something else causes both, with important consequences for social action and the future of life on Earth.

  Some say that the greatest mystery facing science is the nature of consciousness. We seem to be independent selves having consciousness and free will, and yet the more we understand how the brain works, the less room there seems to be for consciousness to do anything. A popular way of trying to solve the mystery is the hunt for the “neural correlates of consciousness.” For example, we know that brain activity in parts of the motor cortex and frontal lobes correlates with conscious decisions to act. But do our conscious decisions cause the brain activity, does the brain activity cause our decisions, or are both caused by something else?

  The fourth possibility is that brain activity and conscious experiences are really the same thing, just as light turned out not to be caused by electromagnetic radiation but to be electromagnetic radiation, and heat turned out to be the movement of molecules in a fluid. At the moment, we have no inkling of how consciousness could be brain activity, but my guess is that it will turn out that way. Once we clear away some of our delusions about the nature of our own minds, we may finally see why there is no deep mystery and our conscious experiences simply are what is going on inside our brains. If this is right, then there are no neural correlates of consciousness. But whether it’s right or not, remembering CINAC and working slowly from correlations to causes is likely to be how this mystery is finally solved.

  Information Flow

  David Dalrymple

  Researcher, MIT Media Lab

  The concept of cause-and-effect is better understood as the flow of information between two connected events, from the earlier event to the later one. Saying “A causes B” sounds precise but is actually very vague. I would specify much more by saying, “With the information that A has happened, I can compute with almost total confidence* that B will happen.” This rules out the possibility that other factors could prevent B even if A does happen, but allows the possibility that other factors could cause B even if A doesn’t happen.

  As shorthand, we can say that one set of information “specifies” another, if the latter can be deduced or computed from the former. Note that this doesn’t apply only to one-bit sets of information, like the occurrence of a specific event. It can also apply to symbolic variables (given the state of the Web, the results you get from a search engine are specified by your query), numeric variables (the number read off a precise thermometer is specified by the temperature of the sensor), or even behavioral variables (the behavior of a computer is specified by the bits loaded in its memory).
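Dalrymple’s reading of “A causes B” can be sketched in a few lines of illustrative code (the event records here are hypothetical, invented for the example): knowing that A happened lets us compute that B happened, while B remains free to occur without A.

```python
# Hypothetical event log: "A causes B" read as "given A, B is
# (almost surely) computable." Other causes of B are allowed;
# A's absence constrains B very little.
events = [
    {"A": True,  "B": True},    # A happened, B followed
    {"A": True,  "B": True},    # again
    {"A": False, "B": True},    # B happened anyway (another cause)
    {"A": False, "B": False},   # and sometimes B just doesn't happen
]

# When A occurs, B always follows: A specifies B.
b_given_a = [e["B"] for e in events if e["A"]]
print(all(b_given_a))

# When A does not occur, B is unconstrained: sometimes yes, sometimes no.
b_given_not_a = [e["B"] for e in events if not e["A"]]
print(any(b_given_not_a) and not all(b_given_not_a))
```

Both lines print True: the information that A occurred fully determines B in this toy log, but B’s occurrence tells us nothing certain about A.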

 
