The Enigma of Reason: A New Theory of Human Understanding

by Hugo Mercier and Dan Sperber


  The overwhelming mass of reasons and evidence gathered by the abolitionists ended up convincing most members of Parliament—directly or through the popular support the arguments had won. In 1792, three-quarters of the House of Commons voted for a gradual abolition of the slave trade. The House of Lords, closer to the slavers’ interests, asked for more time to ponder the case. Awkward timing: for years, the threats posed by the French Revolution, and then by Napoleon, would quash all radical movements—which, at the time, included abolition. But as soon as an opportunity arose, the abolitionists whetted their arguments, popular clamor rekindled, Parliament was flooded with new petitions, and Wilberforce again handily carried the debate in the Commons. In 1807, both houses voted to abolish the slave trade.

  The British abolitionists didn’t invent most of the arguments against slavery. But they refined them, backed them with masses of evidence, increased their credibility by relying on trustworthy witnesses, and made them more accessible by allowing people to see life through a slave’s eyes. Debates, public meetings, and newspapers brought these strengthened arguments to a booming urban population. And it worked. People were convinced not only of the evils of slavery but also of the necessity of doing something about it. They petitioned, gave money, and—with the help of other factors, from economics to international politics—had first the slave trade and then slavery itself banned.

  The Best and Worst of Reason

  The interactionist approach is in a unique position to account for the range of effects reason has on moral judgments and decisions. Many experiments and, before them, countless personal and historical observations have rendered the intellectualist view of moral reason implausible. Moral judgments and decisions are quite commonly dominated by intuitions and emotions, with reason providing, at best, inert rationalizations and, at worst, excuses that allow the reasoner to engage in morally dubious behavior—from sitting away from someone with a disability to keeping one’s slaves. Reason does what it is expected to do as a biased and lazy producer of justifications.

  Yet we do not quite share the pessimism regarding the ability of reason to change people’s minds. People do not just provide their own justifications and arguments; they also evaluate those of others. As evaluators, people should be able to recognize strong arguments and be swayed by them in all domains, including the moral realm. Clearly, arguments that challenge the moral values of one’s community can be met with disbelief, distrust of motives, even downright hostility. Still, on many moral issues, people have been influenced by good arguments, from local politics—for example, how to organize the local school curriculum—to major societal issues—such as the abolition of the slave trade.

  18. Solitary Geniuses?

  In the 1920s, the small community of particle physicists faced a dilemma of epic consequences. The mathematics they used to understand elementary particles conflicted with their standard representations of space. While mathematics enabled very accurate predictions of the particles’ behavior, the equations could not ascribe to these particles a trajectory in space. There seemed to be a deep flaw in quantum theory. Would the physicists have to reject this extraordinarily successful framework? Or would they be forced to accept its limitations? Werner Heisenberg was already a well-recognized figure in this community. By developing the mathematical framework used to understand elementary particles, he had contributed to creating this dilemma. So he set out to resolve it.

  At the time, most of the action in particle physics was happening in Niels Bohr’s laboratory in Copenhagen. Bohr was not only a great scientist; he also had a flair for obtaining funding, which he used wisely, promoting the best young researchers—such as Heisenberg. However, Bohr’s larger-than-life persona clashed with Heisenberg’s ambition and independence. In search of a more peaceful atmosphere, Heisenberg retreated to the small island of Heligoland, a three-hour boat ride off the coast of Germany. After months of isolation, he finally found a solution to the dilemma that was haunting quantum physics: a mathematical formulation of what would become the uncertainty principle. Heisenberg summarized the principle in layman’s terms: “One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity.”1 This was a third route: quantum physics did not have to be replaced by a new framework, and physicists did not have to resign themselves to an imperfect understanding of the world. If elementary particles cannot be ascribed a precise trajectory, it is not because quantum physics is flawed; it is because position and velocity do not simultaneously exist before being measured and cannot be measured simultaneously.
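
  In quantitative terms, the tradeoff Heisenberg described in words can be written as an inequality (this is the standard modern textbook form, with the bound later sharpened by Earle Kennard, not Heisenberg’s own original wording):

  \[ \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} \]

  where \(\Delta x\) and \(\Delta p\) are the uncertainties (standard deviations) in a particle’s position and momentum (mass times velocity), and \(\hbar\) is the reduced Planck constant: the more precisely the position is pinned down, the larger the minimum uncertainty in the momentum, and vice versa.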

  A stunningly brilliant mind, isolated on an island, reaches a deeper understanding of the nature of the world—a perfect illustration of the solitary genius view of science. In this popular view, scientific breakthroughs often come from great minds working in isolation. The image of the lonely genius is a figment of the romantic imagination of the eighteenth century, as can be found in the verses of the poet William Wordsworth. To him, a statue of Newton was

  The marble index of a mind for ever

  Voyaging through strange seas of thought, alone2

  In the solitary genius view of science, geniuses are fed data by underlings (students, assistants, lesser scientists), think very hard about a problem, and come up with a beautiful theory. This is very close to the view of science advocated by Francis Bacon.

  Bacon was a visionary statesman and philosopher living in England at the turn of the seventeenth century. He urged his colleagues to abandon their specious arguments and adopt a more empirical approach, to stop adding new commentaries on Aristotle and conduct experiments instead—ideas that would inspire the scientific revolution. While Bacon emphasized the collaborative character of science, he held a very hierarchical notion of collaboration. In his description of a utopian New Atlantis, Bacon makes a long list of people in charge of “collect[ing] the experiments which are in all books … [and] of all mechanical arts,” and “try[ing] new experiments.” But the theorizing is reserved for a very select group: “Lastly, we have three that raise the former discoveries by experiments into greater observations, axioms, and aphorisms. These we call Interpreters of Nature.”

  As one of the two foremost lawyers of his day, Bacon was intimately familiar with argumentation, yet there is little place for it in his grand scheme. In such a hierarchical view of science, if argumentation is needed at all, it is to enable recognized geniuses to convince lesser minds of their discoveries. And even that may not be working so well.

  Dispirited by what he perceived to be the slow acceptance of his ideas, Max Planck, one of the founders of quantum physics, quipped, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”3 Quoted on millions of web pages, in thousands of books, and in hundreds of scientific articles, Planck’s aphorism encapsulates a deeply held belief about scientific change (or lack of it).

  If scientific progress is achieved by lone geniuses building revolutionary theories, and if even these geniuses fail to convince their colleagues to share in their vision, then science, the most successful of our epistemic endeavors, violates all the predictions of our interactionist approach to reason. That would be a problem.

  Scientists Are Biased, Too

  Scientists rely on diverse cognitive skills. As many researchers, from Poincaré to Einstein, have pointed out, intuitions play a crucial role in the emergence of new insights. No one denies, of course, that reason plays an important role in science. But can individual scientists successfully answer very complex questions on their own? If so, they must be exceptionally good reasoners, capable of overcoming the two problems that plague laypeople in their solitary reasoning: myside bias and low evaluation criteria for one’s own arguments—that is, laziness. Perhaps scientists are a special breed. Perhaps they have been drilled about falsification to such an extent that it has become second nature, allowing them to reason impartially about their own ideas. The example of Linus Pauling, discussed in Chapter 11, gave us a glimpse of the answer, but less anecdotal evidence also speaks to this question.

  In the 1970s, Michael Mahoney and his colleagues conducted a series of experiments to discover whether scientists are also prey to the myside bias. One of their studies compared the answers of scientists from two different fields, psychology and physics, with the answers of Protestant ministers on a task designed to assess the myside bias. All three groups showed a bias, the scientists being as biased as the ministers or as the participants in previous experiments.4 When Mahoney conducted an experiment in which scientists had to reason about their own domain of expertise, he again observed a strong myside bias.5

  The same pattern emerges from observations of scientists in their natural environment. Kevin Dunbar conducted an extraordinary study of how scientists think in real life and not just in experimental settings.6 He interviewed scores of researchers in an effort to understand how they made sense of their data, developed new hypotheses, and solved new problems. His observations showed that scientists reason to write off inconvenient results. When an experiment has a disappointing outcome, researchers do not reason impartially, questioning their initial hypothesis and trying to come up with a new one. Instead, they are satisfied with weak arguments rescuing the initial hypothesis: a technical problem occurred; the experiment was flawed; someone made a mistake.

  Yet Scientists Are Sensitive to Good Arguments

  If scientists’ reasoning shares the biases and limitations of laypeople’s reasoning, it should also share its strengths: being much more objective in the evaluation than in the production of reasons. Scientists should pay attention to each other’s arguments and change their minds when given good reasons to do so. But, following Planck and the common wisdom, it seems they fail to do even that, resisting new theories to their last breath.

  To be fair to scientists, they’ve been asked to swallow some crazy ideas: that we are the descendants of unicellular organisms, the product of billions of years of random mutations, barreling around the sun at over 100,000 kilometers per hour, glued to the ground by the same force that keeps the earth in its orbit, seeing this page thanks to light particles for which time does not exist, our consciousness the product of a three-pound slab of gray matter. When Planck was complaining about the sluggish diffusion of his ideas, he was trying to convince physicists that energy is not a continuous variable but that it comes instead in quanta of a specified size. In giving birth to quantum physics, Planck was violating one of classical physics’ most fundamental assumptions. That he was met with skepticism should not come as a surprise. The counterintuitiveness of scientific ideas is not the only factor slowing their spread. Thomas Kuhn, the author of the revolutionary book The Structure of Scientific Revolutions, argued that much of the resistance to new theories comes from scientists who have staked their entire careers on one paradigm. One could hardly blame them for being somewhat skeptical of the upstarts’ apparently silly yet threatening ideas.

  Although he was more conscious than anyone before him of the challenge involved in getting revolutionary ideas accepted, Kuhn did not share Planck’s bleak assessment: “Though some scientists, particularly the older and more experienced ones, may resist indefinitely, most of them can be reached in one way or another.”7 Ironically, I. Bernard Cohen, another scholar of scientific revolutions, noted how “Planck himself actually witnessed the acceptance, modification, and application of his fundamental concept by his fellow scientists.”8 Planck and Kuhn also seem to have been wrong about the deleterious effects of age on the capacity to take new ideas in stride. Quantitative historical studies suggest that older scientists are only barely less likely to accept novel theories than their younger peers.9

  In his review of scientific revolutions, Cohen went further. Not only does change of mind happen, but “whoever reads the literature of scientific revolution cannot help being struck by the ubiquity of references to conversion.”10 Being confronted with good arguments sometimes leads to epiphanies, such as the one recounted by a nineteenth-century chemist who had been given a pamphlet at a conference: “I … read it over and over at home and was astonished at the light which the little work shed upon the most important points of controversy. Scales as it were fell from my eyes, doubts evaporated, and a feeling of the most tranquil certainty took their place.”11

  Revolutions in science proceed in a gentle manner. There is no need to chop off senior scientists’ heads; they can change their minds through “unforced agreement.”12 Ten years after Darwin had published The Origin of Species, three-quarters of British scientists had been at least partly convinced.13 Barely more than ten years after a convincing theory of plate tectonics was worked out, it had been integrated into textbooks.14

  Science Makes the Best of Argumentation

  Scientists’ reasoning is not different in kind from that of laypeople. Science doesn’t work by recruiting a special breed of super-reasoners but by making the best of reasoning’s strengths: fostering discussions, providing people with tools to argue, giving them the latitude to change their minds.

  The Royal Society, founded in England in 1660, was one of the first scientific societies in the world. Several of its founders would become important figures in the scientific revolution. Yet, however brilliant these founders might have been, they still got a few things wrong. Christopher Wren believed that “a true astrology [could] be found by the inquiring philosopher.” The hand of a freshly hanged man was, according to Robert Boyle, a cure for goiter. John Wilkins championed the miraculous powers of Valentine Greatrakes, “the most famous occult healer of the seventeenth century,”15 who could supposedly cure tuberculosis and other ailments. But what they got right was much more important than any particular belief they got wrong. It was a way of acquiring new, better beliefs. These pioneers thought that real experiments should replace thought experiments,16 that scholars ought to engage in open-ended dialogue and not in the sterile mind games of medieval disputationes, that more knowledge could be gained from tradesmen and travelers than from centuries-old books.

  Wren, Boyle, Wilkins, and their colleagues aimed high. To illustrate the power of discussion, Boyle wrote The Sceptical Chymist, the report of an imaginary symposium of chemists. The dialogue style was hardly new; Galileo had used it with great power to expound his revolutionary theories. But in the Dialogue Concerning the Two Chief World Systems, it is all too clear that Salviati—Galileo’s mouthpiece—has it all figured out from the beginning.

  Boyle’s use of dialogue was different. In The Sceptical Chymist, none of the characters knows the answer from the start. It is a “piece of theatre that exhibit[s] how persuasion, dissensus and, ultimately, conversion to truth ought to be conducted.”17 Carneades, the protagonist whose views most closely mirror Boyle’s, does not inculcate the truth in his interlocutors; “rather [truth] is dramatized as emerging through the conversation.”18

  Boyle’s grand vision might not have materialized, but he wouldn’t be entirely disappointed by the everyday workings of contemporary science. Experiments have become common currency. And Boyle’s constructive dialogues can be observed in thousands of lab meetings every day. What happened to the scientists whom Dunbar had found so quick to rationalize away disappointing results? When they brought their excuses to the lab meetings, they were rebuffed. The rationalizations were only good enough to convince those who offered them, not the other lab members. The researchers were forced to come up with better explanations of their results, assisted by other group members who provided alternative hypotheses and explanations.19

  The lab meetings Dunbar observed offer a perfect demonstration of how discussion can rein in the biases that mislead individual reasoners. And this is only one of the many forms that the exchange of arguments takes in science, from informal chats to peer review and international symposia.

  The importance of discussions and arguments for science has not escaped contemporary scientists—as opposed to the poets and other external observers who created the myth of the lone genius. When interviewed by Mihaly Csikszentmihalyi and Keith Sawyer about creativity, a mathematical physicist described science as

  a very gregarious business, it’s essentially the difference between having this door open and having it shut. If I’m doing science, I have the door open. That’s kind of symbolic, but it’s true. You want to be all the time talking with people …. It’s only by interaction with other people in the building that you get anything interesting done; it’s essentially a communal enterprise.20

  Even Daniel Kahneman, who, with Amos Tversky, made some of the most important contributions to the development of an individualist view of reasoning, is well aware of the power of discussion: “I did the best thinking of my life on leisurely walks with Amos.”21

  How Solitary Are Solitary Geniuses?

  Sociologists and historians of science concur: science is an intrinsically collective enterprise. Not in Bacon’s sense of a division of labor between the lowly data gatherer and the high-minded theoretician, but as a more integrated collaborative endeavor, shot through with discussions and arguments. Yet it seems that there are exceptions. What of those scientists who break new ground on their own? What of Heisenberg on his island?

  The historian of science Mara Beller has looked in detail at the process that led Heisenberg to formulate the uncertainty principle, and she doesn’t buy the solitary genius view. Drawing on a range of sources—scientific papers, letters, interviews, and autobiographies—she reconstructed the dialogical nature of Heisenberg’s achievement. “Not the magisterial unfolding of a single argument, but the creative coalescence of different arguments, each reinforcing and illuminating the others, resulted in Heisenberg’s monumental contribution to physics.”22 Heisenberg’s insights were a reaction to Schrödinger’s position, built by engaging with the thought of Bohr and Dirac, and by recycling ideas of less famous physicists such as Norman Campbell and H. A. Senftleben.23

 
