This Explains Everything


Edited by John Brockman


  I visited him at his home in La Jolla in July of 2004. He saw me to the door as I was leaving and, as we parted, gave me a sly, conspiratorial wink: “I think it’s the claustrum, Rama. That’s where the secret is.” A week later, he passed away.

  OVERLAPPING SOLUTIONS

  DAVID M. EAGLEMAN

  Neuroscientist, Baylor College of Medicine; author, Incognito: The Secret Lives of the Brain

  The elegance of the brain lies in its inelegance. For centuries, neuroscience attempted to neatly assign labels to the various parts of the brain: This is the area for language, this for morality, this for tool use, color detection, face recognition, and so on. The search for an orderly brain map started off as a viable endeavor but turned out to be misguided.

  The deep and beautiful trick of the brain is more interesting: It possesses multiple, overlapping ways of dealing with the world. It is a machine built of conflicting parts. It is a representative democracy that functions by competition among parties who all believe they know the right way to solve the problem.

  As a result, we can get mad at ourselves, argue with ourselves, curse at ourselves, and contract with ourselves. We can feel conflicted. These sorts of neural battles lie behind marital infidelity, relapses into addiction, cheating on diets, breaking of New Year’s resolutions—all situations in which some parts of a person want one thing and other parts another.

  These are things that modern machines simply do not do. Your car cannot be conflicted about which way to turn: It has one steering wheel commanded by one driver, and it follows directions without complaint. Brains, on the other hand, can be of two minds, and often many more. We don’t know whether to turn toward the cake or away from it, because there are several sets of hands on the steering wheel of behavior.

  Take memory. Under normal circumstances, memories of daily events are consolidated by an area of the brain called the hippocampus. But in frightening situations—such as a car accident or a robbery—another area, the amygdala, also lays down memories along an independent, secondary memory track. Amygdala memories have a different quality to them: They are difficult to erase and they can return in “flash-bulb” fashion—a common description of rape victims and war veterans. In other words, there is more than one way to lay down memory. We’re talking not about memories of different events but about different memories of the same event. According to the unfolding picture, there may be even more than two factions involved, all writing down information and later competing to tell the story. The unity of memory is an illusion.

  Consider the different systems involved in decision making: Some are fast, automatic, and below the surface of conscious awareness; others are slow, cognitive, and conscious. And there’s no reason to assume that there are only two systems; there may well be a spectrum. Some networks in the brain are implicated in long-term decisions, others in short-term impulses—and there may be a fleet of medium-term biases as well.

  Attention, also, has recently come to be understood as the end result of multiple, competing networks, some for focused, dedicated attention to a specific task and others for monitoring broadly (vigilance). They are always locked in competition to steer the actions of the organism. Even basic sensory functions like the detection of motion appear now to have been reinvented multiple times by evolution. This provides the perfect substrate for a neural democracy.

  On a larger anatomical scale, the two hemispheres of the brain, left and right, can be understood as overlapping systems that compete. We know this from patients whose hemispheres are disconnected: They essentially function with two independent brains. For example, put a pencil in each hand and they can simultaneously draw incompatible figures, such as a circle and a triangle. The two hemispheres function differently in the domains of language, abstract thinking, story construction, inference, memory, gambling strategies, and so on. They constitute a team of rivals: agents with the same goals but slightly different ways of going about them.

  To my mind, this elegant solution to the mysteries of the brain should change the goal for aspiring neuroscientists. Instead of spending years advocating for your favorite solution, the mission should evolve into elucidating the various overlapping solutions: how they compete, how the union is held together, and what happens when things fall apart.

  Part of the importance of discovering elegant solutions is capitalizing on them. The neural-democracy model may be just the thing to dislodge artificial intelligence. We human programmers still approach a problem by assuming there’s a best way to solve it or there’s a way it should be solved. But evolution does not solve a problem and then check it off the list. Instead, it ceaselessly reinvents programs, each with overlapping and competing approaches. The lesson is to abandon the question “What’s the cleverest way to solve this problem?” in favor of “Are there multiple, overlapping ways to solve this problem?” That will be the starting point in ushering in a fruitful new age of elegantly inelegant computational devices.

  OUR BOUNDED RATIONALITY

  MAHZARIN BANAJI

  Richard Clarke Cabot Professor of Social Ethics, Department of Psychology, Harvard University

  Explanations that are extraordinary both analytically and aesthetically share, among other properties, these: (a) They are often simpler than the received wisdom, (b) they point to the truer cause as something quite removed from the phenomenon, and (c) they make you wish you’d come upon the explanation yourself.

  Those of us who attempt to understand the mind have a unique limitation to confront: The mind is the thing doing the explaining; the mind is also the thing to be explained. Distance from one’s own mind, distance from attachments to the specialness of one’s species or tribe, getting away from introspection and intuition (not as hypothesis generators but as answers and explanations) are all especially hard to achieve when what we seek to do is explain our own minds and those of others of our kind.

  For this reason, my candidate for the most deeply satisfying explanation of recent decades is the idea of bounded rationality. The idea that human beings are smart compared with other species yet not smart enough by their own standards, including behaving in line with basic axioms of rationality, is now a well-honed observation with a deep empirical foundation.

  The cognitive scientist and Nobel laureate in economics Herbert Simon put one stake in the ground through the study of information processing and artificial intelligence, showing that people and organizations alike adopt principles of behavior such as “satisficing” that constrain them to decent but not optimal decisions. The second stake was placed by Daniel Kahneman and Amos Tversky, who showed the stunning ways in which even experts are error-prone, with consequences not only for their own welfare but also for that of their societies.
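Simon’s satisficing rule is simple enough to caricature in a few lines of code. This is a toy sketch, not anyone’s actual model (the apartment data and the aspiration threshold are invented for illustration): a satisficer stops at the first option that clears an aspiration level, while the classical maximizer inspects everything before choosing.

```python
def satisfice(options, utility, aspiration):
    # Simon's "satisficing": take the first option that is good enough,
    # then stop searching. Fall back to the best seen if nothing qualifies.
    for option in options:
        if utility(option) >= aspiration:
            return option
    return max(options, key=utility)

def maximize(options, utility):
    # The classical "rational" chooser: examine every option, pick the best.
    return max(options, key=utility)

apartments = [("cheap walk-up", 6), ("decent flat", 8), ("dream loft", 10)]
score = lambda apt: apt[1]

print(satisfice(apartments, score, aspiration=7))  # ('decent flat', 8)
print(maximize(apartments, score))                 # ('dream loft', 10)
```

The satisficer never sees the dream loft: a decent but not optimal decision, bought at a fraction of the search cost.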

  The view of human nature that has evolved over the past four decades has systematically changed the explanation for who we are and why we do what we do. We are error-prone in the particular ways we are, the explanation goes, not because we have malign intent but because of the evolutionary basis of our mental architecture—the way in which we learn and remember information, the way in which we are affected by those around us, and so on. We are boundedly rational because the information space in which we must do our work is large compared with our capacities, which include severe limits on conscious awareness and on our ability to control our behavior and act in line with our own intentions.

  We can also look at the compromise of ethical standards: Again, the story is the same; that is, it’s not the intention to harm that’s the problem. Rather, the explanation lies in such sources as the manner in which some information plays a disproportionate role in our decision making, the ability to generalize or overgeneralize, and the commonness of wrongdoing that typifies daily life. These are the more potent causes of the ethical failures of individuals and institutions.

  The idea that bad outcomes result from limited minds that cannot store, compute, or adapt to the demands of their environment is a radically different explanation of our capacities and therefore our nature. Its elegance and beauty come from its emphasis on the ordinary and the invisible rather than on specialness and malign motives. This is not so dissimilar from another shift in explanation—from God to natural selection—and is likely to be equally resisted.

  SWARM INTELLIGENCE

  ROBERT SAPOLSKY

  Professor of neurology and neurological sciences, Stanford University; research associate, National Museums of Kenya; author, Monkeyluv: And Other Essays on Our Lives as Animals

  The obvious answer should be the double helix. With the incomparably laconic “It has not escaped our notice . . . ,” it explained the very mechanism of inheritance. But the double helix doesn’t do it for me. By the time I got around to high school biology, the double helix was ancient history, like peppered moths evolving or mitochondria as the powerhouses of the cell. Watson and Crick—as comforting, but as taken for granted, as Baskin and Robbins.

  Then there’s the work of Hubel and Wiesel, which showed that the cortex processes sensations with a hierarchy of feature extraction. In the visual cortex, for example, neurons in the initial layer each receive inputs from a single photoreceptor in the retina. Thus, when one photoreceptor is stimulated, so is “its” neuron in the primary visual cortex. Stimulate the adjacent photoreceptor, and the adjacent neuron activates. Basically, each of these neurons “knows” one thing—namely, how to recognize a particular dot of light. Groups of I-know-a-dot neurons then project onto single neurons in the second cortical layer. Stimulate a particular array of adjacent neurons in that first cortical layer, and a single second-layer neuron activates. Thus, a second-layer neuron knows one thing, which is how to recognize, say, a 45-degree-angle line of light. Then groups of I-know-a-line neurons send projections on to the next layer.

  Beautiful, explains everything—just keep going, cortical layer upon layer of feature extraction, dot to line to curve to collection of curves, until the top layer, where a neuron would know one complex, specialized thing only, like how to recognize your grandmother. And it would be the same in the auditory cortex: first-layer neurons knowing particular single notes, second layer knowing pairs of notes, some neuron at the top that would recognize the sound of your grandmother singing along with Lawrence Welk.
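The layered scheme can be caricatured in a few lines of code. This is a toy sketch, nothing like real cortex (the 3×3 "images" and function names are invented for illustration): each layer-1 unit "knows" one pixel, and a layer-2 unit fires only when its diagonal group of layer-1 units fires together.

```python
def layer1(image):
    # One "neuron" per photoreceptor: active iff its pixel is lit.
    return [[pixel > 0 for pixel in row] for row in image]

def diagonal_line_neuron(l1_activity):
    # A layer-2 neuron that "knows" one thing: a 45-degree line of light,
    # i.e. every layer-1 neuron along the main diagonal is active.
    return all(l1_activity[i][i] for i in range(len(l1_activity)))

diagonal = [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]
horizontal = [[1, 1, 1],
              [0, 0, 0],
              [0, 0, 0]]

print(diagonal_line_neuron(layer1(diagonal)))    # True
print(diagonal_line_neuron(layer1(horizontal)))  # False
```

Stack enough such layers, the story went, and the top unit recognizes your grandmother.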

  It turns out, though, that things didn’t quite work that way. There are few “grandmother neurons” in the cortex (although a 2005 Nature paper reported someone with a Jennifer Aniston neuron). The cortex can’t rely too much on grandmother neurons, because that requires a gazillion more neurons to accommodate such inefficiency and overspecialization. Moreover, a world of nothing but grandmother neurons on top precludes making multimodal associations (for example, when seeing a particular Monet reminds you of croissants and Debussy’s music and the disastrous date you had at an Impressionism show at the Met). Instead, we’ve entered the world of neural networks.

  Which brings me to my selection, which is emergence and complexity, as represented by “swarm intelligence.” Observe a single ant and it doesn’t make much sense—walking in one direction, suddenly careening in another for no obvious reason, doubling back on itself. Thoroughly unpredictable. The same happens with two ants, with a handful of ants. But a colony of ants makes fantastic sense. Specialized jobs, efficient means of exploiting new food sources, complex underground nests with temperature regulated within a few degrees. And critically, there’s no blueprint or central source of command—each individual ant has algorithms for its behaviors. But this is not the wisdom of the crowd, where a bunch of reasonably informed individuals outperform a single expert. The ants aren’t reasonably informed about the big picture. Instead, the behavior algorithms of each ant consist of a few simple rules for interacting with the local environment and local ants. And out of this emerges a highly efficient colony.

  Ant colonies excel at generating trails that connect locations in the shortest possible way, accomplished with simple rules about when to lay down a pheromone trail and what to do when encountering someone else’s trail—approximations of optimal solutions to the Traveling Salesman problem. In “ant-based routing,” simulations using virtual ants with similar rules can generate optimal ways of connecting the nodes in a network, something of great interest to telecommunications companies. It also applies to the developing brain, which must wire up vast numbers of neurons with vaster numbers of connections without constructing millions of miles of connecting axons. And migrating fetal neurons generate an efficient solution with a different version of ant-based routing.
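The trail-reinforcement loop is simple enough to sketch. This is an illustrative mean-field toy, not any real routing code: it averages over many ants each round rather than simulating unpredictable individuals, and the trail lengths and evaporation rate are invented for illustration. Shorter trails earn pheromone faster (more round trips per unit time), evaporation erodes whatever is not renewed, and the colony converges on the short path with no ant knowing the big picture.

```python
# Two trails from nest to food; ants deposit pheromone per completed trip,
# so the shorter trail (more trips per unit time) is reinforced faster.
paths = {"short": 1.0, "long": 2.0}        # trail lengths
pheromone = {"short": 1.0, "long": 1.0}    # start undifferentiated
EVAPORATION = 0.05

for _ in range(500):
    total = sum(pheromone.values())
    for trail, length in paths.items():
        share = pheromone[trail] / total     # fraction of ants taking this trail
        pheromone[trail] += share / length   # deposits per round, inverse to length
        pheromone[trail] *= 1 - EVAPORATION  # old trails fade unless renewed

# Positive feedback plus evaporation: the colony "decides" on the short trail.
print(pheromone["short"] > 100 * pheromone["long"])  # True
```

Each update uses only local quantities—a trail’s own pheromone and the traffic it attracts—yet the global outcome is the shortest route.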

  A wonderful example is how local rules about attraction and repulsion (that is, positive and negative charges) allow simple molecules in an organic soup to occasionally form more complex ones. Life may have originated this way, without the requirement of bolts of lightning to catalyze the formation of complex molecules.

  And why is self-organization so beautiful to my atheistic self? Because if complex adaptive systems don’t require a blueprint, they don’t require a Blueprint Maker. If they don’t require lightning bolts, they don’t require Someone hurtling lightning bolts.

  LANGUAGE AND NATURAL SELECTION

  KEITH DEVLIN

  Executive director, H-STAR Institute, Stanford University; author, The Man of Numbers: Fibonacci’s Arithmetic Revolution

  Not only does evolution by natural selection explain how we all got here and how we are and behave as we do, it can even explain (at least to my fairly critical satisfaction) why many people refuse to accept it and why even more people believe in an all-powerful Deity. But since other Edge respondents are likely to have natural selection as their favorite deep, elegant, and beautiful explanation (it has all three attributes, in addition to wide-ranging explanatory power), I’ll home in on one particular instance: the explanation of how humans acquired language, by which I mean grammatical structure.

  There is evidence to suggest that our ancestors developed effective means to communicate using verbal utterances starting at least 3 million years ago. But grammar is much more recent, perhaps as recent as 75,000 years ago. How did grammar arise?

  Anyone who has traveled abroad knows that to communicate basic needs, desires, and intentions to people in your vicinity concerning objects within sight, a few referring words together with gestures suffice. The only grammar required is to occasionally juxtapose two words (“Me Tarzan, you Jane” being the information-[and innuendo-] rich classic example from Hollywood). Anthropologists refer to such a simple word-pairing communication system as protolanguage.

  But to communicate about things not in the here-and-now, you need more. Effectively planning future joint activities needs pretty well all of grammatical structure, particularly if the planning involves more than two people—with even more demands made on the grammar if the plan requires coordination among groups not all present at the same place or time.

  Given the degree to which human survival depends on our ability to plan and coordinate our actions and to collectively debrief after things go wrong so we avoid repeating our mistakes, it’s clear that grammatical structure is hugely important to Homo sapiens. Indeed, many argue that it’s our defining characteristic. But communication, while arguably the killer app for grammar, clearly cannot be what put it into the gene pool in the first place, and for a very simple reason. Since grammar is required in order for verbal utterances to convey ideas more complex than is possible with protolanguage, it comes into play only when the brain can form such ideas. These considerations lead to what is accepted (although not without opposition) as the Standard Explanation of language acquisition. In highly simplified terms, the Standard Explanation runs like this.

  1. Brains (or the organs that became brains) first evolved to associate motor responses with sensory input stimuli.

  2. In some creatures, brains became more complex, performing a mediating role between input stimuli and motor responses.

  3. In some of those creatures, the brain became able to override automatic stimulus-response sequences.

  4. In Homo sapiens, and to a lesser extent in other species, the brain acquired the ability to function off-line, effectively running simulations of actions without the need for sensory input stimuli and without generating output responses.

  Stage 4 is when the brain acquires grammar. What we call grammatical structure is in fact a descriptive/communicative manifestation of a mental structure for modeling the world.

  As a mathematician, what I like about this explanation is that it also tells us where the brain got its capacity for mathematical thinking. Mathematical thinking is essentially another manifestation of the brain’s simulation capacity, but in quantitative/relational/logical terms rather than descriptive/communicative.

  As is usually the case with natural-selection arguments, it takes considerable work to flesh out the details of these simplistic explanations (and some days I’m less convinced than others about some aspects), but overall they strike me as about right. In particular, the mathematical story explains why doing mathematics carries with it an overpowering Platonic sense of reasoning not about abstractions but about real objects—at least “real” in a Platonic realm. At which point, the lifelong mathematics educator in me says I should leave the proof of that corollary as an exercise for the reader—so I shall.

 
