This Will Make You Smarter

by John Brockman


  So maybe when I argue for anecdotalism going into everyone’s cognitive toolkit, I am really arguing for two things to be incorporated: (a) an appreciation of how distortive it can be; and (b) recognition, in a salute to the work of people like Amos Tversky and Daniel Kahneman, of its magnetic pull, its cognitive satisfaction. As social primates complete with a region of the cortex specialized for face recognition, we find that the individual face—whether literal or metaphorical—has a special power. But unappealing, unintuitive patterns of statistics and variation generally teach us much more.

  You Can Show That Something Is Definitely Dangerous but Not That It’s Definitely Safe

  Tom Standage

  Digital editor, The Economist; author, An Edible History of Humanity

  A wider understanding of the fact that you can’t prove a negative would, in my view, do a great deal to upgrade the public debate around science and technology.

  As a journalist, I have lost count of the number of times people have demanded that a particular technology be “proved to do no harm.” This is, of course, impossible, in just the same way that proving that there are no black swans is impossible. You can look for a black swan (harm) in various ways, but if you fail to find one, that doesn’t mean none exist: Absence of evidence is not evidence of absence.

  All you can do is look again for harm in a different way. If you still fail to find it after looking in all the ways you can possibly think of, the question is still open: “lack of evidence of harm” means both “safe as far as we can tell” and “we still can’t be sure if it’s safe or not.”

  Scientists are often accused of logic-chopping when they point this out. But it would be immensely helpful to public discourse if there were a wider understanding that you can show that something is definitely dangerous, but you cannot show that it is definitely safe.

  Absence and Evidence

  Christine Finn

  Archaeologist; journalist; author, Artifacts: An Archaeologist’s Year in Silicon Valley

  I first heard the words “absence of evidence is not evidence of absence” as a first-year archaeology undergraduate. I now know it was part of Carl Sagan’s retort against arguments from ignorance, but at the time the unattributed quote was part of the intellectual toolkit offered by my professor to help us make sense of the process of excavation.

  Philosophically this is a challenging concept, but at an archaeological site all became clear in the painstaking tasks of digging, brushing, and troweling. The concept was useful to remind us, as we scrutinized what was there, to take note of the possibility of what was not there. What we were finding, observing, and lifting were the material remains, the artifacts that had survived, usually as a result of their material or the good fortune of their deposition. There were barely recordable traces of what was there—the charcoal layer of a prehistoric hearth, for example—and others recovered in the washing or the lab, but this was still tangible evidence. What the concept brought home to us were the invisible traces, the material that had gone from our reference point in time but that still had a bearing on the context.

  It was powerful stuff that stirred my imagination. I looked for more examples outside philosophy. I learned about the great Near Eastern archaeologist Leonard Woolley, who, when excavating the third millennium B.C. Mesopotamian palace at Ur, in modern-day Iraq, conjured up musical instruments from their absence. The evidence was the holes left in the excavation layers, the ghosts of wooden objects that had long since disappeared into time. He used this absence to advantage by making casts of the holes and realizing the instruments as reproductions. It struck me at the time that he was creating works of art. The absent lyres were installations he rendered as interventions and transformed into artifacts. More recently, the British artist Rachel Whiteread has made her name through an understanding of the absent form, from the cast of a house to the undersides and spaces of domestic interiors.

  Recognizing the evidence of absence is not about forcing a shape on the intangible but about acknowledging a potency in the not-thereness. Taking the absence concept to be a positive idea, I suggest interesting things happen. For years, Middle Eastern archaeologists puzzled over the numerous isolated bathhouses and other structures in the deserts of North Africa. Where was the evidence of habitation? The clue was in the absence: The buildings were used by nomads, who left only camel tracks in the sand. Their habitations were ephemeral—tents that, if not taken away with them, were of such material that they would disappear into the sand. Observed again in this light, the aerial photos of desert ruins are hauntingly repopulated.

  The absent evidence of ourselves is all around us, beyond the range of digital traces.

  When my parents died and I inherited their house, the task of clearing their rooms was both emotional and archaeological. The last mantelpiece in the sitting room had accreted, over thirty-five years of married life, a midden of photos, ephemera, beachcombing trove, and containers of odd buttons and old coins. I wondered what a stranger—maybe a forensic scientist or traditional archaeologist—would make of this array if the narrative had been woven simply from the tangible evidence. But as I took the assemblage apart in a charged moment, I felt there was a whole lot of no-thing that was coming away with it. Something invisible and unquantifiable, which had been holding these objects in that context.

  I recognized the feeling and cast my memory back to my first archaeological excavation. It was of a long-limbed hound, one of those “fine hunting dogs” that the classical writer Strabo described as being traded from ancient Britain into the Roman world. As I knelt in the two-thousand-year-old grave, carefully removing each tiny bone, as if engaged in a sculptural process, I felt the presence of something absent. I could not quantify it, but it was that unseen “evidence” that, it seemed, had given the dog its dogness.

  Path Dependence

  John McWhorter

  Linguist; cultural commentator; senior fellow, Manhattan Institute; lecturer, Department of English & Comparative Literature, Columbia University; author, What Language Is (And What It Isn’t and What It Could Be)

  In an ideal world, all people would spontaneously understand that what political scientists call path dependence explains much more of how the world works than is apparent. “Path dependence” refers to the fact that often something that seems normal or inevitable today began with a choice that made sense at a particular time in the past but survived despite the eclipse of its justification, because, once it had been established, external factors discouraged going into reverse to try other alternatives.

  The paradigm example is the seemingly illogical arrangement of letters on typewriter keyboards. Why not just have the letters in alphabetical order, or arrange them so that the most frequently occurring ones are under the strongest fingers? In fact, the first typewriter tended to jam when typed on too quickly, so its inventor deliberately concocted an arrangement that put “A” under the ungainly little finger. In addition, the first row was provided with all of the letters in the word “typewriter,” so that salesmen, new to typing, could type the word using just one row.

  Quickly, however, mechanical improvements made faster typing possible, and new keyboards placing letters according to frequency were presented. But it was too late: There was no going back. By the 1890s, typists across America were used to QWERTY keyboards, having learned to zip away on new versions of them that did not stick so easily. Retraining them would have been expensive and, ultimately, unnecessary, so QWERTY was passed down the generations, and even today we use the queer QWERTY configuration on computer keyboards, where jamming is a mechanical impossibility.

  The basic concept is simple, but in general estimation it tends to be treated as the province of “cute” stories like the QWERTY one, rather than as something that explains a massive weight of scientific and historical processes. Instead, the natural tendency is to seek explanations for modern phenomena in modern conditions.

  One may assume that cats cover their waste out of fastidiousness, when the same creature will happily consume its own vomit and then jump on your lap. Cats do the burying as an instinct from their wild days, when the burial helped avoid attracting predators, and there is no reason for them to evolve out of the trait now (to pet owners’ relief). I have often wished there were a spontaneous impulse among more people to assume that path-dependence–style explanations are as likely as jerry-rigged present-oriented ones. For one thing, that the present is based on a dynamic mixture of extant and ancient conditions is simply more interesting than assuming that the present is (mostly) all there is, with history as merely “the past,” interesting only for seeing whether something that happened then could now happen again—which is different from path dependence.

  For example, path dependence explains a great deal about language which is otherwise attributed to assorted just-so explanations. Much of the public embrace of the idea that one’s language channels how one thinks is based on this kind of thing. Robert McCrum celebrates English as “efficient” in its paucity of suffixes of the kind that complexify most European languages. The idea is that this is rooted in something in its speakers’ spirit that would have propelled them to lead the world via exploration and the Industrial Revolution.

  But English lost its suffixes starting in the eighth century A.D., when Vikings invaded Britain and so many of them learned the language incompletely that children started speaking it that way. After that, you can’t create gender and conjugation out of thin air—there’s no going back until gradual morphing re-creates such things over eons of time. That is, English’s current streamlined syntax has nothing to do with any present-day condition of the spirit, nor with any condition even four centuries ago. The culprit is path dependence, as are most things about how a language is structured.

  Or we hear much lately about a crisis in general writing skills, supposedly due to e-mail and texting. But there is a circularity here: Why, precisely, could people not write e-mails and texts with the same “writerly” style that people once couched letters in? Or we hear of a vaguely defined effect of television, although kids were curled up endlessly in front of the tube starting in the fifties, long before the eighties when outcries of this kind first took on their current level of alarm, in the National Commission on Excellence in Education’s report A Nation at Risk.

  Once again, the presentist explanation does not cohere, whereas one based on an earlier historical development that there is no turning back from does. Public American English began a rapid shift from cosseted to less formal “spoken” style in the sixties, in the wake of cultural changes amid the counterculture. This sentiment directly affected how language arts textbooks were composed, the extent to which any young person was exposed to an old-fashioned formal “speech,” and attitudes toward the English language heritage in general. The result: a linguistic culture stressing the terse, demotic, and spontaneous. After just one generation minted in this context, there was no way to go back. Anyone who decided to communicate in the grandiloquent phraseology of yore would sound absurd and be denied influence or exposure. Path dependence, then, identifies this cultural shift as the cause of what dismays, delights, or just interests us in how English is currently used—and reveals television, e-mail and other technologies as merely epiphenomenal.

  Most of life looks path-dependent to me. If I could create a national educational curriculum from scratch, I would include the concept as one taught to young people as early as possible.

  Interbeing

  Scott D. Sampson

  Dinosaur paleontologist, evolutionary biologist, science communicator; author, Dinosaur Odyssey: Fossil Threads in the Web of Life

  Humanity’s cognitive toolkit would greatly benefit from adoption of “interbeing,” a concept that comes from the Vietnamese Buddhist monk Thich Nhat Hanh. In his words:

  If you are a poet, you will see clearly that there is a cloud floating in [a] sheet of paper. Without a cloud, there will be no rain; without rain, the trees cannot grow; and without trees, we cannot make paper. The cloud is essential for the paper to exist. If the cloud is not here, the sheet of paper cannot be here either. . . . “Interbeing” is a word that is not in the dictionary yet, but if we combine the prefix “inter-” with the verb “to be,” we have a new verb, inter-be. Without a cloud, we cannot have a paper, so we can say that the cloud and the sheet of paper inter-are. . . . To be is to inter-be. You cannot just be by yourself alone. You have to inter-be with every other thing. This sheet of paper is, because everything else is.

  Depending on your perspective, the above passage may sound like profound wisdom or New Age mumbo-jumbo. I would like to propose that interbeing is a robust scientific fact—at least, insofar as such things exist—and, further, that this concept is exceptionally critical and timely.

  Arguably the most cherished and deeply ingrained notion in the Western mind-set is the separateness of our skin-encapsulated selves—the belief that we can be likened to isolated, static machines. Having externalized the world beyond our bodies, we are consumed by thoughts of furthering our own ends and protecting ourselves. Yet this deeply rooted notion of isolation is illusory, as evidenced by our constant exchange of matter and energy with the “outside” world. At what point did your last breath of air, sip of water, or bite of food cease to be part of the outside world and become you? Precisely when did your exhalations and wastes cease being you? Our skin is as much permeable membrane as barrier—so much so that, like a whirlpool, it is difficult to discern where “you” end and the remainder of the world begins. Energized by sunlight, life converts inanimate rock into nutrients, which then pass through plants, herbivores, and carnivores before being decomposed and returned to the inanimate Earth, beginning the cycle anew. Our internal metabolisms are intimately interwoven with this Earthly metabolism; one result is the replacement of every atom in our bodies every seven years or so.

  You might counter with something like, “OK, sure, everything changes over time. So what? At any given moment, you can still readily separate self from other.”

  Not quite. It turns out that “you” are not one life-form—that is, one self—but many. Your mouth alone contains more than seven hundred distinct kinds of bacteria. Your skin and eyelashes are equally laden with microbes, and your gut houses a similar bevy of bacterial sidekicks. Although this still leaves several bacteria-free regions in a healthy body—for example, brain, spinal cord, and bloodstream—current estimates indicate that your physical self possesses about 10 trillion human cells and about 100 trillion bacterial cells. In other words, at any given moment, your body is about 90 percent nonhuman by cell count, home to many more life-forms than the number of people presently living on Earth; more even than the number of stars in the Milky Way galaxy! To make things more interesting still, microbiological research demonstrates that we are utterly dependent on this ever-changing bacterial parade for all kinds of “services,” from keeping intruders at bay to converting food into usable nutrients.

  So, if we continually exchange matter with the outside world, if our bodies are completely renewed every few years, and if each of us is a walking colony of trillions of largely symbiotic life-forms, exactly what is this self that we view as separate? You are not an isolated being. Metaphorically, to follow current bias and think of your body as a machine is not only inaccurate but destructive. Each of us is far more akin to a whirlpool, a brief, ever-shifting concentration of energy in a vast river that has been flowing for billions of years. The dividing line between self and other is, in many respects, arbitrary; the “cut” can be made at many places, depending on the metaphor of self that one adopts. We must learn to see ourselves not as isolated but as permeable and interwoven—selves within larger selves, including the species self (humanity) and the biospheric self (life). The interbeing perspective encourages us to view other life-forms not as objects but subjects, fellow travelers in the current of this ancient river. On a still more profound level, it enables us to envision ourselves and other organisms not as static “things” at all but as processes deeply and inextricably embedded in the background flow.

  One of the greatest obstacles confronting science education is the fact that the bulk of the universe exists either at extremely large scales (e.g., planets, stars, and galaxies) or extremely small scales (e.g., atoms, genes, cells) well beyond the comprehension of our (unaided) senses. We evolved to sense only the middle ground, or “mesoworld,” of animals, plants, and landscapes. Yet, just as we have learned to accept the nonintuitive, scientific insight that the Earth is not the center of the universe, so too must we now embrace the fact that we are not outside or above nature but fully enmeshed within it. Interbeing, an expression of ancient wisdom backed by science, can help us comprehend this radical ecology, fostering a much-needed transformation in mind-set.

  The Other

  Dimitar Sasselov

  Professor of astronomy; director, Harvard Origins of Life Initiative

  The concept of “otherness” or “the Other” is part of how a human being perceives his or her own identity: “How do I relate to others?” is a part of what defines the self and is a constituent of self-consciousness. It is a philosophical concept widely used in psychology and social science. Recent advances in the life and physical sciences have made possible new and even unexpected expansions of this concept. The map of the human genome and of the diploid genomes of individuals; the map of our geographic spread; the map of the Neanderthal genome—these are new tools to address the age-old issues of human unity and diversity. Reading the life code of DNA does not stop there; it places humans in the vast and colorful mosaic of earthly life. “Otherness” is seen in a new light. Our microbiomes, the trillions of microbes on and in each of us that are essential to our physiology, become part of our selves.

 
