This Will Make You Smarter

by John Brockman


  In other words, cultures and people (and some other primates) make each other up. This process involves four nested levels: individual selves (one’s thoughts, feelings, and actions); the everyday practices and artifacts that reflect and shape those selves; the institutions (education, law, media) that afford or discourage those everyday practices and artifacts; and pervasive ideas about what is good, right, and human that both influence and are influenced by the other three levels.

  The culture cycle rolls for all types of social distinctions, from the macro (nation, race, ethnicity, region, religion, gender, social class, generation, etc.) to the micro (occupation, organization, neighborhood, hobby, genre preference, family, etc.).

  One consequence of the culture cycle is that no action is caused by individual psychological features or external influences alone. Both are always at work. Just as there is no such thing as a culture without agents, there are no agents without culture. Humans are culturally shaped shapers. And so, for example, in the case of a school shooting, it is overly simplistic to ask whether the perpetrator acted because of mental illness, or because of a hostile and bullying school climate, or because he had easy access to a particularly deadly cultural artifact (i.e., a gun), or because institutions encourage that climate and allow access to that artifact, or because pervasive ideas and images glorify resistance and violence. The better question, and the one the culture cycle requires, is: How do these four levels of forces interact? Indeed, researchers at the vanguard of public health contend that neither social stressors nor individual vulnerabilities are enough to produce most mental illnesses. Instead, the interplay of biology and culture, of genes and environments, of nature and nurture is responsible for most psychiatric disorders.

  Social scientists succumb to another form of this oppositional thinking. For example, in the face of Hurricane Katrina, thousands of poor African-American residents “chose” not to evacuate the Gulf Coast, to quote most news accounts. More charitable social scientists had their explanations ready and struggled to get their variables into the limelight. “Of course they didn’t leave,” said the psychologists, “because poor people have an external locus of control.” Or “low intrinsic motivation.” Or “low self-efficacy.” “Of course they didn’t leave,” said the sociologists and political scientists—because their cumulative lack of access to adequate income, banking, education, transportation, health care, police protection, and basic civil rights made staying put their only option. “Of course they didn’t leave,” said the anthropologists—because their kin networks, religious faith, or historical ties held them there. “Of course they didn’t leave,” said the economists—because they didn’t have the material resources, knowledge, or financial incentives to get out.

  The irony in the interdisciplinary bickering is that everyone is mostly right. But they are right in the same way that the blind men touching the elephant in the Indian fable are right: the failure to integrate each field’s contributions makes everyone wrong and, worse, not very useful.

  The culture cycle illustrates the relationships of these different levels of analysis to one another. Granted, our four-level-process explanation is not as zippy as the single-variable accounts that currently dominate most public discourse. But it’s far simpler and more accurate than the standard “It’s complicated” and “It depends” that more thoughtful experts supply.

  Moreover, built into the culture cycle are the instructions for how to reverse-engineer it: A sustainable change at one level usually requires change at all four levels. There are no silver bullets. The ongoing U.S. civil rights movement, for example, requires the opening of individual hearts and minds; the mixing of people as equals in daily life, along with media representations thereof; the reform of laws and policies; and a fundamental revision of our nation’s idea of what a good human being is.

  Just because people can change their cultures, however, does not mean that they can do so easily. A major obstacle is that most people don’t even realize they have cultures. Instead, they think of themselves as standard-issue humans—they’re normal; it’s all those other people who are deviating from the natural, obvious, and right way to be.

  Yet we are all part of multiple culture cycles. And we should be proud of that fact, for the culture cycle is our smart human trick. Because of it, we don’t have to wait for mutation or natural selection to allow us to range farther over the face of the Earth, to extract nutrition from a new food source, to cope with a change in climate. As modern life becomes more complex and social and environmental problems become more widespread and entrenched, people will need to understand the culture cycle and use it skillfully.

  Phase Transitions and Scale Transitions

  Victoria Stodden

  Computational legal scholar; assistant professor of statistics, Columbia University

  Physicists created the term “phase transition” to describe a change of state in a physical system, such as liquid to gas. The concept has since been applied in a variety of academic circles to describe other types of transformation, from social (think hunter-gatherer to farmer) to statistical (think abrupt changes in algorithm performance as parameters change), but it has not yet emerged as part of the common lexicon.

  One interesting aspect of the phase transition is that it describes a shift to a state seemingly unrelated to the previous one and hence provides a model for phenomena that challenge our intuition. With knowledge of water only as a liquid, who would have imagined a conversion to gas with the application of heat? The mathematical definition of a phase transition in the physical context is precise, but even without this precision, the idea can be usefully extrapolated to describe a much broader class of phenomena, particularly those that change abruptly and unexpectedly with an increase in scale.

  Imagine points in two dimensions—a spray of dots on a sheet of paper; only some of them sit on the convex hull, the outer boundary of the cloud. Now imagine a point cloud in three dimensions—say, dots hovering in the interior of a cube. Even if we could imagine points in four dimensions, would we have guessed that nearly all these points lie on the convex hull of this point cloud? As the dimension climbs past three, randomly scattered points almost always do. There hasn’t been a phase transition in the mathematical sense, but as dimension is scaled up, the system shifts in a way we don’t intuitively expect.
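  A quick numerical sketch makes the shift visible. Assuming NumPy and SciPy are available (ConvexHull wraps the Qhull library), the snippet below samples fifty random points in the unit d-cube and counts how many are vertices of their own convex hull as d grows:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
n = 50  # points per random cloud

for d in range(2, 7):
    points = rng.random((n, d))   # n points uniform in the unit d-cube
    hull = ConvexHull(points)     # Qhull finds which points are hull vertices
    print(f"d = {d}: {len(hull.vertices)} of {n} points are on the convex hull")
```

  In two dimensions only a fraction of the fifty points typically land on the hull; by five or six dimensions, virtually every point does.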

  I call these types of changes “scale transitions,” unexpected outcomes resulting from increases in scale. For example, increases in the number of people interacting in a system can produce unforeseen outcomes: The operation of markets at large scales is often counterintuitive. Think of the restrictive effect that rent-control laws can have on the supply of affordable rental housing, or how minimum-wage laws can reduce the availability of low-wage jobs. (James Flynn gives “markets” as an example of a “shorthand abstraction”; here I am interested in the often counterintuitive operation of a market system at large scale.) Think of the serendipitous effects of enhanced communication—for example, collaboration and interpersonal connection generating unexpected new ideas and innovation; or the counterintuitive effect of massive computation in science reducing experimental reproducibility as data and code have proved harder to share than their descriptions. The concept of the scale transition is purposefully loose, designed as a framework for understanding when our natural intuition leads us astray in large-scale situations.

  This contrasts with the sociologist Robert K. Merton’s concept of “unanticipated consequences,” in that a scale transition both refers to a system rather than individual purposeful behavior and is directly tied to the notion of changes due to scale increases. Our intuition regularly seems to break down with scale, and we need a way of conceptualizing the resulting counterintuitive shifts in the world around us. Perhaps the most salient feature of the digital age is its facilitation of huge increases in scale—in data storage, processing power, and connectivity—thus permitting us to address an unparalleled number of problems on an unparalleled scale. As technology becomes increasingly pervasive, I believe scale transitions will become commonplace.

  Replicability

  Brian Knutson

  Associate professor of psychology and neuroscience, Stanford University

  Since different visiting teachers had promoted contradictory philosophies, the villagers asked the Buddha whom they should believe. The Buddha advised: “When you know for yourselves . . . these things, when performed and undertaken, conduce to well-being and happiness—then live and act accordingly.” Such empirical advice might sound surprising coming from a religious leader, but not from a scientist.

  “See for yourself” is an unspoken credo of science. It is not enough to run an experiment and report the findings. Others who repeat that experiment must find the same thing. Repeatable experiments are called “replicable.” Although scientists implicitly respect replicability, they do not typically reward it explicitly.

  To some extent, ignoring replicability comes naturally. Human nervous systems are designed to respond to rapid changes, ranging from subtle visual flickers to pounding rushes of ecstasy. Fixating on fast change makes adaptive sense—why spend limited energy on opportunities or threats that have already passed? But in the face of slowly growing problems, fixation on change can prove disastrous (think of lobsters in the cooking pot or people under greenhouse gases).

  Cultures can also promote fixation on change. In science, some high-profile journals, and even entire fields, emphasize novelty, consigning replications to the dustbin of the unremarkable and unpublishable. More formally, scientists are often judged based on their work’s novelty rather than its replicability. The increasingly popular “h-index” quantifies impact by assigning a number (h) indicating that an investigator has published h papers that have been cited h or more times (so, Joe Blow has an h-index of 5 if he has published five papers, each of which others have cited five or more times). While such citation-based measures correlate with eminence in some fields (e.g., physics), problems can arise. For instance, Dr. Blow might boost his h-index by publishing controversial (and thus cited) but unreplicable findings.

  Why not construct a replicability (or “r”) index to complement impact factors? As with h, r could indicate that a scientist has originally documented r separate effects that independently replicate r or more times (so, Susie Sharp has an r-index of 5 if she has published five independent effects, each of which others have replicated five or more times). Replication indices would necessarily be lower than citation indices, since effects have to first be published before they can be replicated, but they might provide distinct information about research quality. As with citation indices, replication indices might even apply to journals and fields, providing a measure that can combat biases against publishing and publicizing replications.
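  The arithmetic behind both indices is identical; only the input differs. A minimal sketch in Python, with the r-index read off the essay’s proposal (the function name and example counts are invented purely for illustration):

```python
def h_index(counts):
    """Largest h such that h items each have a count of at least h.

    For the h-index, counts are citations per paper; for the proposed
    r-index, they would be independent replications per published effect.
    """
    ranked = sorted(counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Joe Blow: five papers, each cited five or more times -> h = 5
print(h_index([9, 7, 6, 5, 5]))
# Susie Sharp: five effects, each independently replicated 5+ times -> r = 5
print(h_index([12, 8, 6, 5, 5]))
```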

  A replicability index might prove even more useful to nonscientists. Most investigators who have spent significant time in the salt mines of the laboratory already intuit that most ideas don’t pan out, and those that do sometimes result from chance or charitable interpretations. Conversely, they also recognize that replicability means they’re really onto something. Not so for the general public, who instead encounter scientific advances one cataclysmic media-filtered study at a time. As a result, laypeople and journalists are repeatedly surprised to find the latest counterintuitive finding overturned by new results. Measures of replicability could help channel attention toward cumulative contributions. Along those lines, it is interesting to consider applying replicability criteria to public-policy interventions designed to improve health, enhance education, or curb violence. Individuals might even benefit from using replicability criteria to optimize their personal habits (e.g., more effectively dieting, exercising, working, etc.).

  Replication should be celebrated rather than denigrated. Often taken for granted, replicability may be the exception rather than the rule. As running water resolves rock from mud, so can replicability highlight the most reliable findings, investigators, journals, and even fields. More broadly, replicability may provide an indispensable tool for evaluating both personal and public policies. As suggested in the Kalama Sutta, replicability might even help us decide whom to believe.

  Ambient Memory and the Myth of Neutral Observation

  Xeni Jardin

  Tech culture journalist; partner, contributor, coeditor, Boing Boing; executive producer, host, Boing Boing Video

  Like that of others whose early life experiences were punctuated with trauma, my memory has holes. Some of those holes are as wide as years. Others are just big enough to swallow painful incidents that lasted moments but reverberated for decades.

  The brain-record of those experiences sometimes submerges, then resurfaces, then submerges again over time. As I grow older, stronger, and more capable of contending with memory, I become more aware of how different my own internal record may be from that of others who lived the identical moment.

  Each of us commits our experiences to memory and permanence differently. Time and human experience are not linear, nor is there one and only one neutral record of each lived moment. Human beings are impossibly complex tarballs of muscle, blood, bone, breath, and electrical pulses that travel through nerves and neurons; we are bundles of electrical pulses carrying payloads, pings hitting servers. And our identities are inextricably connected to our environments: No story can be told without a setting.

  My generation is the last generation of human beings who were born into a pre-Internet world but who matured in tandem with that great networked hive-mind. In the course of my work online, committing new memories to network mind each day, I have come to understand that our shared memory of events, truths, biography, and fact—all of this shifts and ebbs and flows, just as our most personal memories do.

  Ever-edited Wikipedia replaces paper encyclopedias. The chatter of Twitter eclipses fixed-form and hierarchical communication. The news flow we remember from our childhoods, a single voice of authority on one of three channels, is replaced by something hyperevolving, chaotic, and less easily defined. Even the formal histories of a nation may be rewritten by the likes of Wikileaks and its yet unlaunched children.

  Facts are more fluid than in the days of our grandfathers. In our networked mind, the very act of observation—reporting or tweeting or amplifying some piece of experience—changes the story. The trajectory of information, the velocity of this knowledge on the network, changes the very nature of what is remembered, who remembers it, and for how long it remains part of our shared archive. There are no fixed states.

  So must our notion of memory and record evolve.

  The history we are creating now is alive. Let us find new ways of recording memory, new ways of telling the story, that reflect life. Let us embrace this infinite complexity as we commit new history to record.

  Let us redefine what it means to remember.

  A Statistically Significant Difference in Understanding the Scientific Process

  Diane F. Halpern

  Trustee Professor of Psychology and Roberts Fellow, Claremont McKenna College

  Statistically significant difference—it’s a simple phrase that is essential to science and has become common parlance among educated adults. These three words convey a basic understanding of the scientific process, random events, and the laws of probability. The term appears almost everywhere that research is discussed—in newspaper articles, advertisements for “miracle” diets, research publications, and student laboratory reports, to name just a few of the many diverse contexts. It is a shorthand abstraction for a sequence of events that includes an experiment (or other research design), the specification of a null and alternative hypothesis, (numerical) data collection, statistical analysis, and the probability of an unlikely outcome. That’s a lot of science conveyed in a few words.

  It would be difficult to understand the outcome of any research without at least a rudimentary understanding of what is meant by the conclusion that the researchers found or did not find evidence of a “statistically significant difference.” Unfortunately, the old saying that “a little knowledge is a dangerous thing” applies to the partial understanding of this term. One problem is that “significant” has a different meaning when used in everyday speech than when used to report research findings.

  Most of the time, the word means that something important happened. For example, if a physician told you that you would feel significantly better following surgery, you would correctly infer that your pain would be reduced by a meaningful amount—you would feel less pain. But, when used in “statistically significant difference,” “significant” means that the results are unlikely to be due to chance (if the null hypothesis were true); the results themselves may or may not be important. Moreover, sometimes the conclusion will be wrong, because the researchers can assert their conclusion only at some level of probability. “Statistically significant difference” is a core concept in research and statistics, but, as anyone who was taught undergraduate statistics or research methods can tell you, it is not an intuitive idea.

  Although “statistically significant difference” communicates a cluster of ideas essential to the scientific process, many pundits would like to see it removed from our vocabulary, because it is frequently misunderstood. Its use underscores the marriage of science and probability theory, and despite its popularity, or perhaps because of it, some experts have called for a divorce, because the term implies something that it should not, and the public is often misled. In fact, experts are often misled as well. Consider this hypothetical example: In a well-done study that compares the effectiveness of two drugs relative to a placebo, it is possible that Drug X is statistically significantly different from a placebo and Drug Y is not, yet Drugs X and Y might not be statistically significantly different from each other. This could result when Drug X differs from placebo at a probability level of p = .04 but Drug Y differs from placebo only at p = .06, which is higher than most a priori levels (usually .05) used to test for statistical significance. If reading about this makes your head hurt, you are among the masses who believe they understand this critical shorthand phrase, which is at the heart of the scientific method, but who actually may have only a shallow level of understanding.
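  A minimal sketch of that hypothetical three-arm comparison, using invented summary statistics (the means, spread, and sample sizes are illustrative only) and SciPy’s two-sample t-test from summary data:

```python
from scipy.stats import ttest_ind_from_stats

n, sd = 100, 1.0  # hypothetical trial: 100 patients per arm, common spread
means = {"placebo": 0.00, "drug X": 0.29, "drug Y": 0.27}  # invented outcomes

def compare(a, b):
    # Two-sided two-sample t-test computed from summary statistics alone.
    t, p = ttest_ind_from_stats(means[a], sd, n, means[b], sd, n)
    verdict = "significant" if p < .05 else "not significant"
    print(f"{a} vs {b}: p = {p:.3f} ({verdict} at the .05 level)")

compare("drug X", "placebo")  # p ≈ .042 -> significant
compare("drug Y", "placebo")  # p ≈ .058 -> not significant
compare("drug X", "drug Y")   # p ≈ .888 -> not significant
```

  Drug X clears the conventional .05 threshold against placebo and Drug Y narrowly misses it, yet the two drugs are statistically indistinguishable from each other, which is exactly the trap the paragraph describes.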

 
