
This Will Make You Smarter


by John Brockman


  A better understanding of the pitfalls associated with this term would go a long way toward improving our cognitive toolkits. If common knowledge of what this term means included the ideas that (a) the findings may not be important, and (b) conclusions based on finding, or failing to find, statistically significant differences may be wrong, then we would have substantially advanced our general knowledge. When people read or use the term “statistically significant difference,” it is an affirmation of the scientific process, which, for all its limitations and misunderstandings, is a substantial advance over alternative ways of knowing about the world. If we could add just two more key concepts to the meaning of that phrase, we could improve how the general public thinks about science.
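  To make those two caveats concrete, here is a minimal simulation sketch (not from the book; it assumes Python with the NumPy and SciPy libraries) showing (a) a statistically significant yet trivially small difference, and (b) “significant” differences found where no real difference exists:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# (a) A tiny true difference (0.02 standard deviations) becomes
# "statistically significant" once the sample is large enough,
# even though it is almost certainly unimportant.
a = rng.normal(0.00, 1.0, 500_000)
b = rng.normal(0.02, 1.0, 500_000)
result = stats.ttest_ind(a, b)
print(f"(a) p = {result.pvalue:.2e}, mean difference = {b.mean() - a.mean():.3f}")

# (b) With no true difference at all, a 0.05 threshold still declares
# a "significant" difference about 5 percent of the time.
false_positives = 0
trials = 1_000
for _ in range(trials):
    x = rng.normal(0.0, 1.0, 50)
    y = rng.normal(0.0, 1.0, 50)
    if stats.ttest_ind(x, y).pvalue < 0.05:
        false_positives += 1
print(f"(b) false-positive rate with no real effect: {false_positives / trials:.1%}")
```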

  The Dece(i)bo Effect

  Beatrice Golomb

  Associate professor of medicine, University of California–San Diego

  The Dece(i)bo effect (think a portmanteau of “deceive” and “placebo”) refers to the facile application of constructs—without unpacking the concept and the assumptions on which it relies—in a fashion that, rather than benefiting thinking, leads reasoning astray.

  Words and phrases that capture a concept enter common parlance: Ockham’s razor, placebo, Hawthorne effect. Such phrases and code words, in principle, facilitate discourse—and can indeed do so. Deploying the word or catchphrase adds efficiency to the interchange by obviating the need for a pesky review of the principles and assumptions encapsulated in the word. Unfortunately, bypassing the need to articulate the conditions and assumptions on which validity of the construct rests may lead to bypassing consideration of whether these conditions and assumptions legitimately apply. Use of the term can then, far from fostering sound discourse, serve to undermine it.

  Take, for example, “placebo” and the “placebo effect.” Unpacking the terms: a placebo is defined as something physiologically inert but believed by the recipient to be active, or possibly so. The term “placebo effect” refers to improvement of a condition when someone has received a placebo—improvement due to the effects of expectation/suggestion.

  With these terms ensconced in the vernacular, dece(i)bo effects associated with them are much in evidence. Key presumptions regarding placebos and placebo effects are more typically wrong than not.

  1. When hearing the word “placebo,” scientists often presume “inert” without stopping to ask, What is that allegedly physiologically inert substance? Indeed, in principle, what substance could it be? There isn’t anything known to be physiologically inert.

  There are no regulations about what constitutes a placebo, and placebo composition—commonly determined by the manufacturer of the drug under study—is typically undisclosed. Among the uncommon cases where placebo composition has been noted, there are documented instances in which it apparently produced spurious effects. Two studies used corn-oil and olive-oil placebos for cholesterol-lowering drugs; one noted that the “unexpectedly” low rate of heart attacks in the control group may have contributed to the failure to see a benefit from the cholesterol drug. Another study noted the “unexpected” benefit of a drug to gastrointestinal symptoms in cancer patients. But cancer patients bear an increased likelihood of lactose intolerance—and the placebo was lactose, a “sugar pill”—so the drug may have looked helpful only because the placebo was aggravating symptoms in the control group. When the term “placebo” substitutes for actual ingredients, any thinking about how the composition of the control agent may have influenced the study is circumvented.

  2. Because there are many settings in which people with a problem, given a placebo, report sizable improvement on average when they are queried (see #3), many scientists have accepted that “placebo effects”—of suggestion—are both substantial and widespread in the scope of what they benefit.

  The Danish researchers Asbjørn Hróbjartsson and Peter C. Gøtzsche conducted a systematic review of studies that compared a placebo to no treatment. They found that the placebo generally does . . . nothing. In most instances, there is no placebo effect. Mild “placebo effects” are seen, in the short term, for pain and anxiety. Placebo effects for pain are reported to be blocked by naloxone, an opiate antagonist—specifically implicating endogenous opiates in pain placebo effects, which would not be expected to benefit every possible outcome that might be measured.

  3. When hearing that people given a placebo report improvement, scientists commonly presume this must be due to the “placebo effect,” the effect of expectation/suggestion. However, the effects are usually something else entirely—for instance, the natural history of the disease, or regression to the mean. Consider a distribution such as a bell curve. Whether the outcome of interest is the reduction of pain, blood pressure, cholesterol, or something else, people are classically selected for treatment if they are at one end of the distribution—say, the high end. But these outcomes are quantities that vary (for instance because of physiological variation, natural history, measurement error, etc.), and on average the high values will vary back down—a phenomenon termed “regression to the mean” that operates, placebo or no. (Hence, the Danish researchers’ findings.)
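  Regression to the mean is easy to demonstrate. Here is a minimal sketch (not from the essay; it assumes Python with NumPy) in which people are selected from the high end of a noisy measurement and then simply remeasured, with no treatment of any kind:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each person has a stable underlying level plus visit-to-visit noise
# (physiological variation, measurement error, and so on).
true_level = rng.normal(140, 10, n)              # e.g., underlying blood pressure
first_visit = true_level + rng.normal(0, 8, n)   # measurement 1
second_visit = true_level + rng.normal(0, 8, n)  # measurement 2, same noise level

# Select for "treatment" only those at the high end of the distribution.
selected = first_visit > 155
print(f"first visit, selected group:  {first_visit[selected].mean():.1f}")
print(f"second visit, same group:     {second_visit[selected].mean():.1f}")
```

  The selected group’s average falls at the second visit because the noise that inflated some first measurements does not repeat, placebo or no placebo.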

  A different dece(i)bo problem beset Ted Kaptchuk’s recent Harvard study in which researchers gave a placebo, or nothing, to people afflicted with irritable bowel syndrome. They administered the placebo in a bottle boldly labeled “Placebo” and advised patients that they were receiving placebos, which were known to be potent. The thesis was that one might harness the effects of expectation honestly, without deception, by telling subjects how powerful placebos in fact were—and by developing a close relationship with subjects. Researchers met repeatedly with subjects, gained subjects’ appreciation for their concern, and repeatedly told subjects that placebos are powerful. Those placed on the placebo obliged the researchers by telling them that they got better, more so than those on nothing. The scientists attributed this to a placebo effect.

  But what’s to say that the subjects weren’t simply telling the scientists what they thought the scientists wished to hear? Denise Grady, writing for the New York Times, has noted: “Growing up, I got weekly hay fever shots that I don’t think helped me at all. But I kept hoping they would, and the doctor was very kind, so whenever he asked if I was feeling better, I said yes . . .” Such a desire to please (a form, perhaps, of “social approval” reporting bias) provided fertile ground for generating what was interpreted as a placebo effect, an interpretation that implies actual subjective benefit to symptoms. One wonders whether so great an error of presumption would operate were there not an existing term (“placebo effect”) to signify the interpretation the Harvard group chose from among a suite of other compelling possibilities.

  Another explanation consistent with these results is specific physiological benefit. The Kaptchuk study used a nonabsorbed fiber—microcrystalline cellulose—as the placebo that subjects were told would be effective. The authors are to be applauded for disclosing its composition. But other nonabsorbed fibers benefit both constipation and diarrhea—symptoms of irritable bowel—and are prescribed for that purpose; psyllium is an example. Thus, a specific physiological benefit of the “placebo” to the symptoms cannot be excluded.

  Together these points illustrate that the term “placebo” cannot be presumed to imply “inert” (and generally does not), and that when studies see large benefit to symptoms in patients treated with a placebo (a result expected from distribution considerations alone), one cannot infer that the improvement arose from suggestion.

  Thus, rather than facilitating sound reasoning, evidence suggests that in many cases, including high-stakes settings in which inferences may propagate to medical practice, substituting terms such as “placebo” and “placebo effect” for the concepts they are intended to convey may actually thwart or bypass critical thinking about key issues, with implications for concerns fundamental to us all.

  Anthropophilia

  Andrew Revkin

  Journalist; environmentalist; writer, New York Times’s Dot Earth blog; author, The North Pole Was Here

  To sustain progress on a finite planet that is increasingly under human sway but also full of surprises, what is needed is a strong dose of anthropophilia. I propose this word as shorthand for a rigorous and dispassionate kind of self-regard, even self-appreciation, to be employed when individuals or communities face consequential decisions attended by substantial uncertainty and polarizing disagreement.

  The term is an intentional echo of E. O. Wilson’s valuable effort to nurture biophilia, the part of humanness that values and cares for the facets of the nonhuman world we call nature. What’s been missing for too long is an effort to fully consider, even embrace, the human role within nature and—perhaps more important still—to consider our own inner nature as well.

  Historically, many efforts to propel a durable human approach to advancement were shaped around two organizing ideas: “Woe is me” and “Shame on us,” with a good dose of “Shame on you” thrown in.

  The problem?

  Woe is paralytic, while blame is both divisive and often misses the real target. (Who’s the bad guy, BP or those of us who drive and heat with oil?) Discourse framed around those concepts too often produces policy debates that someone once described to me, in the context of climate, as “blah, blah, blah, bang.” The same phenomenon can as easily be seen in the unheeded warnings leading to the September 11 attacks and to the most recent financial implosion.

  More fully considering our nature—both the “divine and felonious” sides, as Bill Bryson has summed us up—could help identify certain kinds of challenges that we know we’ll tend to get wrong. The simple act of recognizing such tendencies could help refine how our choices are made—at least giving slightly better odds of getting things a little less wrong the next time. At the personal level, I know that when I cruise into the kitchen tonight I’ll tend to prefer reaching for a cookie instead of an apple. By preconsidering that trait, I may have a slightly better chance of avoiding a couple hundred unnecessary calories.

  Here are a few instances where this concept is relevant on larger scales.

  There’s a persistent human pattern of not taking broad lessons from localized disasters. When China’s Sichuan province was rocked by a severe earthquake, tens of thousands of students (and their teachers) died in collapsed schools. Yet the American state of Oregon, where more than a thousand schools are already known to be similarly vulnerable when the great Cascadia fault off the Northwest Coast next heaves, still lags terribly in investing in retrofits. Sociologists understand, with quite a bit of empirical backing, why this disconnect exists even though the example was horrifying and the risk in Oregon is about as clear as any scientific assessment can be. But does that knowledge of human biases toward the “near and now” get taken seriously in the realms where policies are shaped and the money to carry them out is authorized? Rarely, it seems.

  Social scientists also know, with decent rigor, that the fight over human-driven global warming—over both the science and the policy choices—is largely cultural. As in many other disputes (consider health care), the battle is between two fundamental subsets of human communities—communitarians (aka liberals) and individualists (aka libertarians). In such situations, a compelling body of research has emerged showing that information alone is fairly meaningless. Each group selects information to reinforce its position, and there are scant instances in which information ends up shifting a position. That’s why no one should expect the next review of climate science from the Intergovernmental Panel on Climate Change to suddenly create a harmonious path forward.

  The more such realities are recognized, the more likely it is that innovative approaches to negotiation can build from the middle, instead of arguing endlessly from the edge. The same body of research on climate attitudes, for example, shows far less disagreement on the need for advancing the world’s limited menu of affordable energy choices.

  The physicist Murray Gell-Mann has spoken often of the need, when faced with multidimensional problems, to take a “crude look at the whole”—a process he has even given an acronym, CLAW. It’s imperative, where possible, for that look to include an honest analysis of the species doing the looking as well.

  There will never be a way to invent a replacement for, say, the United Nations or the House of Representatives. But there is a ripe opportunity to try new approaches to constructive discourse and problem solving, with the first step being an acceptance of our humanness, for better and worse.

  That’s anthropophilia.

  A Solution for Collapsed Thinking: Signal Detection Theory

  Mahzarin R. Banaji

  Richard Clarke Cabot Professor of Social Ethics, Department of Psychology, Harvard University

  We perceive the world through our senses. The brain-mediated data we receive in this way form the basis of our understanding of the world. From this basis arise the ordinary and exceptional mental activities of attending, perceiving, remembering, feeling, and reasoning. Via these mental processes, we understand and act on the material and social world.

  In the town of Pondicherry in South India, where I sit as I write this, many do not share this assessment. There are those, including some close to me, who believe there are extrasensory paths to knowing the world that transcend the five senses, and that untested “natural” foods and methods of acquiring information are superior to those based in evidence. On this trip, for example, I learned that they believe a man has been able to stay alive without any caloric intake for months (his weight does fall, but only when he is under scientific observation).

  Pondicherry is an Indian Union Territory that was controlled by the French for three hundred years (staving off the British in many a battle right outside my window) and that the French held on to until a few years after Indian independence. It has, in addition to numerous other points of attraction, become a center for those who yearn for spiritual experience, attracting many (both whites and natives) to give up their worldly lives to pursue the advancement of the spirit, undertake bodily healing, and invest in good works on behalf of a larger community.

  Yesterday I met a brilliant young man who had worked as a lawyer for eight years and now lives in an ashram and works in its book-sales division. “Sure,” you retort, “the legal profession would turn any good person toward spirituality,” but I assure you that the folks here have given up wealth and a wide variety of professions to pursue this manner of life. The point is that seemingly intelligent people crave nonrational modes of thinking.

  I do not mean to pick on any one city, and certainly not this unusual one in which so much good effort is spent on the arts and culture and social upliftment of the sort we would admire. But this is also a city that attracts a particular type of European, American, and Indian—those whose minds seem more naturally prepared to believe that herbs do cure cancer and standard medical care is to be avoided (until one desperately needs chemotherapy), that Tuesdays are inauspicious for starting new projects, that particular points in the big toe control the digestive system, that the position of the stars at the time of their birth led them to Pondicherry through an inexplicable process emanating from a higher authority and through a vision from “the Mother,” a deceased Frenchwoman who dominates the ashram and surrounding area in death more thoroughly than many skilled politicians do during their terms in office.

  These types of beliefs may seem extreme, but they are not considered so in most of the world. Change the content, and the underlying false manner of thinking is readily observed just about anywhere. The twenty-two inches of new snow that fell recently where I live in the United States will no doubt bring forth beliefs of a god angered by crazy scientists touting global warming.

  As I contemplate the single most powerful tool that could be put into our toolkits, it is the simple and powerful concept of “signal detection.” In fact, the Edge Question this year happens to be one I’ve contemplated for a while. I use David Green and John Swets’s Signal Detection Theory and Psychophysics as the prototype, although the idea has its origins in earlier work among scientists concerned with the influence of photon fluctuations on visual detection and of sound waves on audition.

  The idea underlying the power of signal-detection theory is simple: The world provides us with noisy, not pure, data. Auditory data, for instance, are degraded for a variety of reasons having to do with the physical properties of the communication of sound. The observing organism has properties that further affect how those data will be experienced and interpreted, such as auditory acuity, the circumstances under which the information is being processed (e.g., during a thunderstorm), and motivation (e.g., disinterest). Signal-detection theory allows us to consider these two aspects—the properties of the stimulus and those of the respondent—together, to understand the quality of the decision that will result, given the uncertain conditions under which data are transmitted both physically and psychologically.

  To understand the crux of signal-detection theory, each event of any data impinging on the receiver (human or other) is coded into one of four categories, providing a language to describe the decision. One dimension concerns whether an event occurred or not (was a light flashed or not?); the other dimension concerns whether the human receiver detected it or not (was the light seen or not?). This gives us a 2 x 2 table of the sort laid out below, but it can be used to configure many different types of decisions. For example, were homeopathic pills taken or not? Did the disease get cured or not?

                              Detected (“yes”)    Not detected (“no”)
  Event occurred              Hit                 Miss
  Event did not occur         False alarm         Correct rejection
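  Those four cells also yield the standard sensitivity measure of signal-detection theory, d′: the z-transformed hit rate minus the z-transformed false-alarm rate, which captures how cleanly the receiver separates events from non-events regardless of its bias toward saying “yes.” Here is a minimal sketch (not from the essay; it assumes Python with SciPy, and the d_prime helper is illustrative):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d': z(hit rate) - z(false-alarm rate)."""
    hit_rate = hits / (hits + misses)                                      # P(yes | event occurred)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)  # P(yes | no event)
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# A keen observer: detects most real flashes, rarely "sees" absent ones.
print(d_prime(hits=80, misses=20, false_alarms=10, correct_rejections=90))  # ~2.12
# A pure guesser: says "yes" equally often whether or not the light flashed.
print(d_prime(hits=50, misses=50, false_alarms=50, correct_rejections=50))  # 0.0
```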

 
