The Best American Science and Nature Writing 2014

by Deborah Blum


  Psychologists have found that when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift. We become disengaged from our work, and our awareness of what’s going on around us fades. Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears. When a computer provides incorrect or insufficient data, we remain oblivious to the error.

  Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer. Many radiologists today use analytical software to highlight suspicious areas on mammograms. Usually the highlights aid in the discovery of disease. But they can also have the opposite effect. Biased by the software’s suggestions, radiologists may give cursory attention to the areas of an image that haven’t been highlighted, sometimes overlooking an early-stage tumor. Most of us have experienced complacency when at a computer. In using e-mail or word-processing software, we become less proficient proofreaders when we know that a spell checker is at work.

  The way computers can weaken awareness and attentiveness points to a deeper problem. Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise. Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them. The effect, it has since become clear, influences learning in many different circumstances. When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity. It assembles a rich store of information and organizes that knowledge in a way that allows you to tap into it instantaneously. Whether it’s Serena Williams on a tennis court or Magnus Carlsen at a chessboard, an expert can spot patterns, evaluate signals, and react to changing circumstances with speed and precision that can seem uncanny. What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.

  In 2005, Christof van Nimwegen, a cognitive psychologist in the Netherlands, began an investigation into software’s effects on the development of know-how. He recruited two sets of people to play a computer game based on a classic logic puzzle called Missionaries and Cannibals. To complete the puzzle, a player has to transport five missionaries and five cannibals (or, in van Nimwegen’s version, five yellow balls and five blue ones) across a river, using a boat that can accommodate no more than three passengers at a time. The tricky part is that cannibals must never outnumber missionaries, either in the boat or on the riverbanks. One of van Nimwegen’s groups worked on the puzzle using software that provided step-by-step guidance, highlighting which moves were permissible and which weren’t. The other group used a rudimentary program that offered no assistance.
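  (The puzzle's rules are compact enough to state as a simple search problem. The sketch below is not van Nimwegen's software, just an illustrative Python program, written for this edition, that encodes the same constraints—a boat holding one to three passengers, and cannibals never outnumbering missionaries in the boat or on either bank—and finds the shortest sequence of crossings by breadth-first search.)

```python
from collections import deque
from itertools import product

TOTAL = 5        # five missionaries and five cannibals, as in van Nimwegen's version
BOAT_CAP = 3     # the boat holds at most three passengers

def safe(m, c):
    """Cannibals may not outnumber missionaries anywhere missionaries are present."""
    return m == 0 or m >= c

def solve():
    # State: (missionaries on left bank, cannibals on left bank, boat on left?)
    start, goal = (TOTAL, TOTAL, True), (0, 0, False)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        (m, c, boat_left), path = frontier.popleft()
        if (m, c, boat_left) == goal:
            return path
        sign = -1 if boat_left else 1    # the boat carries people away from its own bank
        for dm, dc in product(range(BOAT_CAP + 1), repeat=2):
            if not (1 <= dm + dc <= BOAT_CAP):
                continue                 # the boat needs one to three passengers
            if not safe(dm, dc):
                continue                 # the boat itself must be safe
            nm, nc = m + sign * dm, c + sign * dc
            if not (0 <= nm <= TOTAL and 0 <= nc <= TOTAL):
                continue                 # can't move more people than are on that bank
            if not (safe(nm, nc) and safe(TOTAL - nm, TOTAL - nc)):
                continue                 # both riverbanks must stay safe
            state = (nm, nc, not boat_left)
            if state not in seen:
                seen.add(state)
                frontier.append((state, path + [(dm, dc)]))
    return None

print(solve())   # shortest sequence of (missionaries, cannibals) crossings
```

  The guided version of the game, in effect, ran these same constraint checks for the players and highlighted only the legal moves; the rudimentary version left that filtering to the players themselves.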

  As you might expect, the people using the helpful software made quicker progress at the outset. They could simply follow the prompts rather than having to pause before each move to remember the rules and figure out how they applied to the new situation. But as the test proceeded, those using the rudimentary software gained the upper hand. They developed a clearer conceptual understanding of the task, plotted better strategies, and made fewer mistakes. Eight months later, van Nimwegen had the same people work through the puzzle again. Those who had earlier used the rudimentary software finished the game almost twice as quickly as their counterparts. Enjoying the benefits of the generation effect, they displayed better “imprinting of knowledge.”

  What van Nimwegen observed in his laboratory—that when we automate an activity, we hamper our ability to translate information into knowledge—is also being documented in the real world. In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so. One recent study, conducted by Australian researchers, examined the effects of systems used by three international accounting firms. Two of the firms employed highly advanced software that, based on an accountant’s answers to basic questions about a client, recommended a set of relevant business risks to be included in the client’s audit file. The third firm used simpler software that required an accountant to assess a list of possible risks and manually select the pertinent ones. The researchers gave accountants from each firm a test measuring their expertise. Those from the firm with the less helpful software displayed a significantly stronger understanding of different forms of risk than did those from the other two firms.

  What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages. Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed to the example of driving a car, which requires not only the instantaneous interpretation of a welter of visual signals but also the ability to adapt seamlessly to unanticipated situations. “Executing a left turn across oncoming traffic,” two prominent economists wrote in 2004, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” Just six years later, in October 2010, Google announced that it had built a fleet of seven “self-driving cars,” which had already logged more than 140,000 miles on roads in California and Nevada.

  Driverless cars provide a preview of how robots will be able to navigate and perform work in the physical world, taking over activities requiring environmental awareness, coordinated motion, and fluid decision making. Equally rapid progress is being made in automating cerebral tasks. Just a few years ago, the idea of a computer competing on a game show like Jeopardy would have seemed laughable, but in a celebrated match in 2011, the IBM supercomputer Watson trounced Jeopardy’s all-time champion, Ken Jennings. Watson doesn’t think the way people think; it has no understanding of what it’s doing or saying. Its advantage lies in the extraordinary speed of modern computer processors.

  In Race Against the Machine, a 2011 e-book on the economic implications of computerization, the MIT researchers Erik Brynjolfsson and Andrew McAfee argue that Google’s driverless car and IBM’s Watson are examples of a new wave of automation that, drawing on the “exponential growth” in computer power, will change the nature of work in virtually every job and profession. Today, they write, “computers improve so quickly that their capabilities pass from the realm of science fiction into the everyday world not over the course of a human lifetime, or even within the span of a professional’s career, but instead in just a few years.”

  Who needs humans anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation? The technology theorist Kevin Kelly, commenting on the link between automation and pilot error, argued that the obvious solution is to develop an entirely autonomous autopilot: “Human pilots should not be flying planes in the long run.” The Silicon Valley venture capitalist Vinod Khosla recently suggested that health care will be much improved when medical software—which he has dubbed “Doctor Algorithm”—evolves from assisting primary-care physicians in making diagnoses to replacing the doctors entirely. The cure for imperfect automation is total automation.

  That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect. All of the parts may work flawlessly, but a small error in system design can still cause a major accident. And even if a perfect system could be designed, it would still have to operate in an imperfect world.

  In a classic 1983 article in the journal Automatica, Lisanne Bainbridge, an engineering psychologist at University College London, described a conundrum of computer automation. Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible. People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at. Research on vigilance, dating back to studies of radar operators during World War II, shows that people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.” And because a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching. The lack of awareness and the degradation of know-how raise the odds that when something goes wrong, the operator will react ineptly. The assumption that the human will be the weakest link in the system becomes self-fulfilling.

  Psychologists have discovered some simple ways to temper automation’s ill effects. You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning. You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing. Giving people more to do helps sustain the generation effect. You can incorporate educational routines into software, requiring users to repeat difficult manual and mental tasks that encourage memory formation and skill building.

  Some software writers take such suggestions to heart. In schools, the best instructional programs help students master a subject by encouraging attentiveness, demanding hard work, and reinforcing learned skills through repetition. Their design reflects the latest discoveries about how our brains store memories and weave them into conceptual knowledge and practical know-how. But most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience. We pick the program that lightens our load, not the one that makes us work harder and longer. Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.

  The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter. The average temperature hovers at about 20 degrees below zero, thick sheets of sea ice cover the surrounding waters, and the sun is rarely seen. Despite the brutal conditions, Inuit hunters have for some four thousand years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.

  Inuit culture is changing now. The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why. The ease and convenience of automated navigation make the traditional Inuit techniques seem archaic and cumbersome.

  But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails. The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid. The anthropologist Claudio Aporta, of Carleton University in Ottawa, has been studying Inuit hunters for more than fifteen years. He notes that while satellite navigation offers practical advantages, its adoption has already brought a deterioration in way-finding abilities and, more generally, a weakened feel for the land. An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it. A unique talent that has distinguished a people for centuries may evaporate in a generation.

  Whether it’s a pilot on a flight deck, a doctor in an examination room, or an Inuit hunter on an ice floe, knowing demands doing. One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are. Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want? If we don’t grapple with that question ourselves, our gadgets will be happy to answer it for us.

  DAVID DOBBS

  The Social Life of Genes

  FROM Pacific Standard

  A FEW YEARS AGO, Gene Robinson, of Urbana, Illinois, asked some associates in southern Mexico to help him kidnap some one thousand newborns. For their victims they chose bees. Half were European honeybees, Apis mellifera ligustica, the sweet-tempered kind most beekeepers raise. The other half were ligustica’s genetically close cousins, Apis mellifera scutellata, the African strain better known as killer bees. Though the two subspecies are nearly indistinguishable, the latter defend territory far more aggressively. Kick a European honeybee hive, and perhaps a hundred bees will attack you. Kick a killer bee hive, and you may suffer a thousand stings or more. Two thousand will kill you.

  Working carefully, Robinson’s conspirators—researchers at Mexico’s National Center for Research in Animal Physiology, in the high resort town of Ixtapan de la Sal—jiggled loose the lids from two African hives and two European hives, pulled free a few honeycomb racks, plucked off about 250 of the youngest bees from each hive, and painted marks on the bees’ tiny backs. Then they switched each set of newborns into the hive of the other subspecies.

  Robinson, back in his office at the University of Illinois at Urbana-Champaign’s Department of Entomology, did not fret about the bees’ safety. He knew that if you move bees to a new colony in their first day, the colony accepts them as its own. Nevertheless, Robinson did expect that the bees would be changed by their adoptive homes: he expected the killer bees to take on the European bees’ moderate ways and the European bees to assume the killer bees’ more violent temperament. Robinson had discovered this in prior experiments. But he hadn’t yet figured out how it happened.

  He suspected the answer lay in the bees’ genes. He didn’t expect the bees’ actual DNA to change: random mutations aside, genes generally don’t change during an organism’s lifetime. Rather, he suspected that the bees’ genes would behave differently in their new homes—wildly differently.

  This notion was both reasonable and radical. Scientists have known for decades that genes can vary their level of activity, as if controlled by dimmer switches. Most cells in your body contain every one of your 22,000 or so genes. But in any given cell at any given time, only a tiny percentage of those genes are active, sending out chemical messages that affect the activity of the cell. This variable gene activity, called gene expression, is how your body does most of its work.

  Sometimes these turns of the dimmer switch correspond to basic biological events, as when you develop tissues in the womb, enter puberty, or stop growing. At other times gene activity cranks up or spins down in response to changes in your environment. Thus certain genes switch on to fight infection or heal your wounds—or, running amok, give you cancer or burn your brain with fever. Changes in gene expression can make you thin, fat, or strikingly different from your supposedly identical twin. When it comes down to it, really, genes don’t make you who you are. Gene expression does. And gene expression varies depending on the life you live.

 
