This Will Make You Smarter


by John Brockman


  However, and this is ironic, the antiscientific coalition is also more scientifically organized! If a company wants to change public opinion to increase its profits, it deploys scientific and highly effective marketing tools. What do people believe today? What do we want them to believe tomorrow? Which of their fears, insecurities, hopes, and other emotions can we take advantage of? What’s the most cost-effective way of changing their minds? Plan a campaign. Launch. Done.

  Is the message oversimplified or misleading? Does it unfairly discredit the competition? That’s par for the course when marketing the latest smartphone or cigarette, so it would be naïve to think that the code of conduct should be any different when this coalition fights science.

  Yet we scientists are often painfully naïve, deluding ourselves that just because we think we have the moral high ground, we can somehow defeat this corporate-fundamentalist coalition by using obsolete unscientific strategies. Based on what scientific argument will it make a hoot of difference if we grumble, “We won’t stoop that low” and “People need to change” in faculty lunchrooms and recite statistics to journalists? We scientists have basically been saying “Tanks are unethical, so let’s fight tanks with swords.”

  To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically: We need new science advocacy organizations, which use all the same scientific marketing and fund-raising tools as the antiscientific coalition. We’ll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.

  We won’t need to stoop all the way down to intellectual dishonesty, however. Because in this battle, we have the most powerful weapon of all on our side: the facts.

  Experimentation

  Roger Schank

  Psychologist and computer scientist, Engines for Education, Inc.; author, Making Minds Less Well Educated Than Our Own

  Some scientific concepts have been so ruined by our education system that it is necessary to explain the ones that everyone thinks they know about and really don’t.

  We learn about experimentation in school. What we learn is that scientists conduct experiments, and in our high school labs if we copy exactly what they did, we will get the results they got. We learn about the experiments scientists do—usually about the physical and chemical properties of things—and we learn that they report their results in scientific journals. So, in effect, we learn that experimentation is boring, is something done by scientists, and has nothing to do with our daily lives.

  And this is a problem. Experimentation is something done by everyone all the time. Babies experiment with what might be good to put in their mouths. Toddlers experiment with various behaviors to see what they can get away with. Teenagers experiment with sex, drugs, and rock and roll. But because people don’t really see these things as experiments or as ways of collecting evidence in support or refutation of hypotheses, they don’t learn to think about experimentation as something they do constantly and thus need to learn to do better.

  Every time we take a prescription drug, we are conducting an experiment. But we don’t carefully record the results after each dose, and we don’t run controls, and we mix up the variables by not changing only one behavior at a time, so that when we suffer from side effects we can’t figure out what might have been their true cause. We do the same with personal relationships: When they go wrong, we can’t figure out why, because the conditions are different in each one.

  Now, while it is difficult if not impossible to conduct controlled experiments in most aspects of our lives, it is possible to come to understand that we are indeed conducting an experiment when we take a new job, or try a new tactic in a game, or pick a school to attend—or when we try and figure out how someone is feeling or wonder why we ourselves feel as we do.

  Every aspect of life is an experiment that can be better understood if it is perceived in that way. But because we don’t recognize this, we fail to understand that we need to reason logically from evidence we gather, carefully consider the conditions under which our experiment has been conducted, and decide when and how we might run the experiment again with better results. The scientific activity that surrounds experimentation is about thinking clearly in the face of evidence obtained from the experiment. But people who don’t see their actions as experiments and don’t know how to reason carefully from data will continue to learn less well from their experiences than those who do.

  Most of us, having learned the word “experiment” in the context of a boring ninth-grade science class, have long since learned to discount science and experimentation as irrelevant to our lives. If schools taught basic cognitive concepts, such as experimentation in the context of everyday experience, instead of concentrating on algebra as a way of teaching people how to reason, then people would be much more effective at thinking about politics, child raising, personal relationships, business, and every other aspect of their daily lives.

  The Controlled Experiment

  Timo Hannay

  Managing director, Digital Science, Macmillan Publishers Ltd.

  The scientific concept that most people would do well to understand and exploit is the one that almost defines science itself: the controlled experiment.

  When they are required to make a decision, the instinctive response of most nonscientists is to introspect, or perhaps call a meeting. The scientific method dictates that wherever possible we should instead conduct a suitable controlled experiment. The superiority of the latter approach is demonstrated not only by the fact that science has uncovered so much about the world but also, and even more powerfully, by the fact that such a lot of it—the Copernican Principle, evolution by natural selection, general relativity, quantum mechanics—is so mind-bendingly counterintuitive. Our embrace of truth as defined by experiment (rather than by common sense, or consensus, or seniority, or revelation, or any other means) has, in effect, released us from the constraints of our innate preconceptions, prejudices, and lack of imagination. It has freed us to appreciate the universe in terms well beyond our abilities to derive by intuition alone.

  What a shame, then, that experiments are by and large performed only by scientists. What if businesspeople and policy makers were to spend less time relying on instinct or partially informed debate and more time devising objective ways to identify the best answers? I think that would often lead to better decisions.

  In some domains, this is already starting to happen. Online companies, such as Amazon and Google, don’t anguish over how to design their Web sites. Instead, they conduct controlled experiments by showing different versions to different groups of users until they have iterated to an optimal solution. (And with the amount of traffic those sites receive, individual tests can be completed in seconds.) They are helped, of course, by the fact that the Web is particularly conducive to rapid data acquisition and product iteration. But they are helped even more by the fact that their leaders often have backgrounds in engineering or science and therefore adopt a scientific—which is to say, experimental—mind-set.
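The kind of controlled experiment these companies run can be sketched in a few lines. The following simulation is purely illustrative (the conversion rates and sample size are invented, not drawn from any real site): visitors are split between two versions of a page, and a two-proportion z-test asks whether the observed difference is larger than chance would explain.

```python
import math
import random

def ab_test(conv_a, conv_b, n=10_000, seed=0):
    """Simulate a simple A/B test: show version A to n visitors and
    version B to another n, then compare conversion rates with a
    two-proportion z-test."""
    rng = random.Random(seed)
    successes_a = sum(rng.random() < conv_a for _ in range(n))
    successes_b = sum(rng.random() < conv_b for _ in range(n))
    p_a, p_b = successes_a / n, successes_b / n
    # Pooled standard error under the null hypothesis of no difference.
    p_pool = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical rates: version B converts slightly better than A.
p_a, p_b, z = ab_test(0.10, 0.12)
print(f"A: {p_a:.3f}  B: {p_b:.3f}  z = {z:.2f}")
```

A |z| above roughly 1.96 is the conventional threshold for significance at the 5 percent level; with heavy traffic, even small real differences cross it quickly, which is why such tests can finish in seconds.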

  Government policies—from teaching methods in schools to prison sentencing to taxation—would also benefit from more use of controlled experiments. This is where many people start to get squeamish. To become the subject of an experiment in something as critical or controversial as our children’s education or the incarceration of criminals feels like an affront to our sense of fairness and our strongly held belief in the right to be treated exactly the same as everybody else. After all, if there are separate experimental and control groups, then surely one of them must be losing out. Well, no, because we do not know in advance which group will be better off, which is precisely why we are conducting the experiment. Only when a potentially informative experiment is not conducted do true losers arise: all those future generations who stood to benefit from the results. The real reason people are uncomfortable is simply that we’re not used to seeing experiments conducted in these domains. After all, we willingly accept them in the much more serious context of clinical trials, which are literally matters of life and death.

  Of course, experiments are not a panacea. They will not tell us, for example, whether an accused person is innocent or guilty. Moreover, experimental results are often inconclusive. In such circumstances, a scientist can shrug his shoulders and say that he is still unsure, but a businessperson or lawmaker will often have no such luxury and may be forced to make a decision anyway. Yet none of this takes away from the fact that the controlled experiment is the best method yet devised to reveal truths about the world, and we should use it wherever it can be sensibly applied.

  Gedankenexperiment

  Gino Segre

  Professor of physics at the University of Pennsylvania; author, Ordinary Geniuses: Max Delbrück, George Gamow, and the Origins of Genomics and Big Bang Cosmology

  The notion of a gedankenexperiment, or thought experiment, has been integral to the theoretical physics toolkit ever since that discipline came into existence. It involves setting up an imagined piece of apparatus and running a simple experiment with it in your mind, for the purpose of proving or disproving a hypothesis. In many cases, a gedankenexperiment is the only approach. An actual experiment to examine retrieval of information falling into a black hole cannot be carried out.

  The notion was particularly important during the development of quantum mechanics, when legendary gedankenexperiments were conducted by the likes of Niels Bohr and Albert Einstein to test such novel ideas as the uncertainty principle and wave-particle duality. Examples, like that of “Schrödinger’s cat,” have even come into the popular lexicon. Is the cat simultaneously dead and alive? Others, particularly the classic double slit through which a particle/wave passes, were part of the first attempt to understand quantum mechanics and have remained as tools for understanding its meaning.

  However, the subject need not be an esoteric one for a gedankenexperiment to be fruitful. My own favorite is Galileo’s proof that, contrary to Aristotle’s view, objects of different mass fall in a vacuum with the same acceleration. One might think that a real experiment needs to be conducted to test that hypothesis, but Galileo simply asked us to consider a large and a small stone tied together by a very light string. If Aristotle were right and the stones fell at different rates, the smaller one should retard the larger one, so the tied pair would fall more slowly than the large stone alone. However, if you let the string length approach zero, you have a single object with a mass equal to the sum of their masses, which by the same reasoning should fall faster than either stone by itself. This is nonsensical. The conclusion is that all objects fall in a vacuum at the same rate.
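Galileo’s conclusion can also be restated in modern terms (Newton’s second law, which Galileo did not have): in a vacuum the only force on a stone is its weight, F = mg, so its acceleration a = F/m = g, and the mass cancels out. A minimal numerical sketch, using standard gravity as an assumed constant:

```python
g = 9.81  # m/s^2, standard gravity (an assumed constant for illustration)

def fall_time(height_m, mass_kg):
    """Time to fall height_m from rest in a vacuum; the mass cancels."""
    force = mass_kg * g       # weight, the only force in a vacuum
    accel = force / mass_kg   # Newton's second law: a = F/m = g
    return (2 * height_m / accel) ** 0.5

print(fall_time(10, 0.1))    # small stone
print(fall_time(10, 100.0))  # large stone: identical fall time
```

The two calls return the same value for any masses, which is exactly what the gedankenexperiment predicts without any computation at all.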

  Consciously or unconsciously, we carry out gedankenexperiments of one sort or another in our everyday life and are even trained to perform them in a variety of disciplines, but it would be useful to have a greater awareness of how they are conducted and how they can be positively applied. We could ask, when confronted with a puzzling situation, “How can I set up a gedankenexperiment to resolve the issue?” Perhaps our financial, political, and military experts would benefit from following such a tactic—and arrive at happier outcomes.

  The Pessimistic Meta-Induction from the History of Science

  Kathryn Schulz

  Journalist; author, Being Wrong: Adventures in the Margin of Error

  OK, OK: It’s a terrible phrase. In my defense, I didn’t coin it; philosophers of science have been kicking it around for a while. But if “the pessimistic meta-induction from the history of science” is cumbersome to say and difficult to remember, it is also a great idea. In fact, as the “meta” part suggests, it’s the kind of idea that puts all other ideas into perspective.

  Here’s the gist: Because so many scientific theories from bygone eras have turned out to be wrong, we must assume that most of today’s theories will eventually prove incorrect as well. And what goes for science goes in general. Politics, economics, technology, law, religion, medicine, child rearing, education: No matter the domain of life, one generation’s verities so often become the next generation’s falsehoods that we might as well have a pessimistic meta-induction from the history of everything.

  Good scientists understand this. They recognize that they are part of a long process of approximation. They know they are constructing models rather than revealing reality. They are comfortable working under conditions of uncertainty—not just the local uncertainty of “Will this data bear out my hypothesis?” but the sweeping uncertainty of simultaneously pursuing and being cut off from absolute truth.

  The rest of us, by contrast, often engage in a kind of tacit chronological exceptionalism. Unlike all those suckers who fell for the flat Earth or the geocentric universe or cold fusion, we ourselves have the great good luck to be alive during the very apex of accurate human thought. The literary critic Harry Levin put this nicely: “The habit of equating one’s age with the apogee of civilization, one’s town with the hub of the universe, one’s horizons with the limits of human awareness, is paradoxically widespread.” At best, we nurture the fantasy that knowledge is always cumulative and therefore concede that future eras will know more than we do. But we ignore or resist the fact that knowledge collapses as often as it accretes, that our own most cherished beliefs might appear patently false to posterity.

  That fact is the essence of the meta-induction—and yet, despite its name, this idea is not pessimistic. Or rather, it is pessimistic only if you hate being wrong. If, by contrast, you think that uncovering your mistakes is one of the best ways to revise and improve your understanding of the world, then this is actually a highly optimistic insight.

  The idea behind the meta-induction is that all of our theories are fundamentally provisional and quite possibly wrong. If we can add that idea to our cognitive toolkit, we will be better able to listen with curiosity and empathy to those whose theories contradict our own. We will be better able to pay attention to counterevidence—those anomalous bits of data that make our picture of the world a little weirder, more mysterious, less clean, less done. And we will be able to hold our own beliefs a bit more humbly, in the happy knowledge that better ideas are almost certainly on the way.

  Each of Us Is Ordinary, Yet One of a Kind

  Samuel Barondes

  Director of the Center for Neurobiology & Psychiatry at the University of California–San Francisco; author, Making Sense of People: Decoding the Mysteries of Personality

  Each of us is ordinary, yet one of a kind.

  Each of us is standard issue, conceived by the union of two germ cells, nurtured in a womb, and equipped with a developmental program that guides our further maturation and eventual decline.

  Each of us is also unique, the possessor of a particular selection of gene variants from the collective human genome and immersed in a particular family, culture, era, and peer group. With inborn tools for adaptation to the circumstances of our personal world, we keep building our own ways of being and the sense of who we are.

  This dual view of each of us, as both run-of-the-mill and special, has been so well established by biologists and behavioral scientists that it may now seem self-evident. But it still deserves conscious attention as a specific cognitive chunk, because it has such important implications. Recognizing how much we share with others promotes compassion, humility, respect, and brotherhood. Recognizing that we are each unique promotes pride, self-development, creativity, and achievement.

  Embracing these two aspects of our personal reality can enrich our daily experience. It allows us to simultaneously enjoy the comfort of being ordinary and the excitement of being one of a kind.

  Nexus Causality, Moral Warfare, and Misattribution Arbitrage

  John Tooby

  Founder of the field of evolutionary psychology; codirector, University of California–Santa Barbara’s Center for Evolutionary Psychology

  We could become far more intelligent than we are by adding to our stock of concepts and forcing ourselves to use them even when we don’t like what they are telling us. This will be nearly always, because they generally tell us that our self-evidently superior selves and in-groups are error-besotted. We all start from radical ignorance in a world that is endlessly strange, vast, complex, intricate, and surprising. Deliverance from ignorance lies in good concepts—inference fountains that geyser out insights that organize and increase the scope of our understanding. We are drawn to them by the fascination of the discoveries they afford, but we resist using them well and freely, because they would reveal too many of our apparent achievements to be embarrassing or tragic failures. Those of us who are nonmythical lack the spine that Oedipus had—the obsidian resolve that drove him to piece together shattering realizations despite portents warning him off. Because of our weakness, “to see what is in front of one’s nose needs a constant struggle,” as Orwell says. So why struggle? Better instead to have one’s nose and what lies beyond shift out of focus—to make oneself hysterically blind as convenience dictates, rather than risk ending up like Oedipus, literally blinding oneself in horror at the harvest of an exhausting, successful struggle to discover what is true.

  Alternatively, even modest individual-level improvements in our conceptual toolkit can have transformative effects on our collective intelligence, promoting incandescent intellectual chain reactions among multitudes of interacting individuals. If this promise of intelligence-amplification through conceptual tools seems like hyperbole, consider that the least inspired modern engineer, equipped with the conceptual tools of calculus, can understand, plan, and build things far beyond what Leonardo or the mathematics-revering Plato could have achieved without it. We owe a lot to the infinitesimal, Newton’s counterintuitive conceptual hack—something greater than zero but less than any finite magnitude. Much simpler conceptual innovations than calculus have had even more far-reaching effects—the experiment (a danger to authority), zero, entropy, Boyle’s atom, mathematical proof, natural selection, randomness, particulate inheritance, Dalton’s element, distribution, formal logic, culture, Shannon’s definition of information, the quantum . . .

 
