The Glass Cage: Automation and Us


by Nicholas Carr


  Since the late 1970s, cognitive psychologists have been documenting a phenomenon called the generation effect. It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they read them from a page. In one early and famous experiment, conducted by University of Toronto psychologist Norman Slamecka, people used flash cards to memorize pairs of antonyms, like hot and cold. Some of the test subjects were given cards that had both words printed in full, like this:

  HOT : COLD

  Others used cards that showed only the first letter of the second word, like this:

  HOT : C

  The people who used the cards with the missing letters performed much better in a subsequent test measuring how well they remembered the word pairs. Simply forcing their minds to fill in a blank, to act rather than observe, led to stronger retention of information.12

  The generation effect, it has since become clear, influences memory and learning in many different circumstances. Experiments have revealed evidence of the effect in tasks that involve not only remembering letters and words but also remembering numbers, pictures, and sounds, completing math problems, answering trivia questions, and reading for comprehension. Recent studies have also demonstrated the benefits of the generation effect for higher forms of teaching and learning. A 2011 paper in Science showed that students who read a complex science assignment during a study period and then spent a second period recalling as much of it as possible, unaided, learned the material more fully than students who read the assignment repeatedly over the course of four study periods.13 The mental act of generation improves people’s ability to carry out activities that, as education researcher Britte Haugan Cheng has written, “require conceptual reasoning and requisite deeper cognitive processing.” Indeed, Cheng says, the generation effect appears to strengthen as the material generated by the mind becomes more complex.14

  Psychologists and neuroscientists are still trying to figure out what goes on in our minds to give rise to the generation effect. But it’s clear that deep cognitive and memory processes are involved. When we work hard at something, when we make it the focus of attention and effort, our mind rewards us with greater understanding. We remember more and we learn more. In time, we gain know-how, a particular talent for acting fluidly, expertly, and purposefully in the world. That’s hardly a surprise. Most of us know that the only way to get good at something is by actually doing it. It’s easy to gather information quickly from a computer screen—or from a book, for that matter. But true knowledge, particularly the kind that lodges deep in memory and manifests itself in skill, is harder to come by. It requires a vigorous, prolonged struggle with a demanding task.

  The Australian psychologists Simon Farrell and Stephan Lewandowsky made the connection between automation and the generation effect in a paper published in 2000. In Slamecka’s experiment, they pointed out, supplying the second word of an antonym pair, rather than forcing a person to call the word to mind, “can be considered an instance of automation because a human activity—generation of the word ‘COLD’ by participants—has been obviated by a printed stimulus.” By extension, “the reduction in performance that is observed when generation is replaced by reading can be considered a manifestation of complacency.”15 That helps illuminate the cognitive cost of automation. When we carry out a task or a job on our own, we seem to use different mental processes than when we rely on the aid of a computer. When software reduces our engagement with our work, and in particular when it pushes us into a more passive role as observer or monitor, we circumvent the deep cognitive processing that underpins the generation effect. As a result, we hamper our ability to gain the kind of rich, real-world knowledge that leads to know-how. The generation effect requires precisely the kind of struggle that automation seeks to alleviate.

  In 2004, Christof van Nimwegen, a cognitive psychologist at Utrecht University in the Netherlands, began a series of simple but ingenious experiments to investigate software’s effects on memory formation and the development of expertise.16 He recruited two groups of people and had them play a computer game based on a classic logic puzzle called Missionaries and Cannibals. To complete the puzzle, a player has to transport across a hypothetical river five missionaries and five cannibals (or, in van Nimwegen’s version, five yellow balls and five blue ones), using a boat that can accommodate no more than three passengers at a time. The tricky part is that there can never be more cannibals than missionaries in one place, either in the boat or on the riverbanks. (If outnumbered, the missionaries become the cannibals’ dinner, one assumes.) Figuring out the series of boat trips that can best accomplish the task requires rigorous analysis and careful planning.
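
  For a concrete sense of the analysis the unaided players had to perform in their heads, here is a minimal sketch of a solver for the five-and-five, three-passenger variant. It is not van Nimwegen’s software; it is an illustrative breadth-first search in Python, written for this passage, and it assumes the conventional reading of the rule, namely that a bank holding only cannibals is safe.

```python
from collections import deque
from itertools import product

TOTAL = 5        # five missionaries and five cannibals, as in van Nimwegen's version
BOAT_CAP = 3     # the boat holds at most three passengers

def safe(m, c):
    """A bank (or the boat) is safe if missionaries are absent or not outnumbered."""
    return m == 0 or m >= c

def solve():
    # State: (missionaries on start bank, cannibals on start bank, boat on start bank?)
    start = (TOTAL, TOTAL, True)
    goal = (0, 0, False)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            # Reconstruct the sequence of states from start to goal.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, boat_here = state
        # Enumerate every legal boatload: one to three passengers, safe mix on board.
        for dm, dc in product(range(BOAT_CAP + 1), repeat=2):
            if not (1 <= dm + dc <= BOAT_CAP) or not safe(dm, dc):
                continue
            if boat_here:
                nm, nc = m - dm, c - dc     # passengers leave the start bank
            else:
                nm, nc = m + dm, c + dc     # passengers return to the start bank
            if not (0 <= nm <= TOTAL and 0 <= nc <= TOTAL):
                continue
            # Both banks must remain safe after the crossing.
            if safe(nm, nc) and safe(TOTAL - nm, TOTAL - nc):
                nxt = (nm, nc, not boat_here)
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
    return None

if __name__ == "__main__":
    for step, (m, c, boat) in enumerate(solve()):
        print(f"{step:2d}: start bank has {m} missionaries, {c} cannibals; "
              f"boat is on the {'start' if boat else 'far'} bank")
```

  Run as a script, the sketch prints a shortest sequence of legal crossings. The point is only that the “rigorous analysis and careful planning” amount to a systematic search of permissible states, work the assisted players were spared and the unassisted players had to internalize.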

  One of van Nimwegen’s groups worked on the puzzle using software that provided step-by-step guidance, offering, for instance, on-screen prompts to highlight which moves were permissible and which weren’t. The other group used a rudimentary program that offered no assistance. As you’d expect, the people using the helpful software made faster progress at the outset. They could follow the prompts rather than having to pause before each move to recall the rules and figure out how they applied to the new situation. But as the game advanced, the players using the rudimentary software began to excel. In the end, they were able to work out the puzzle more efficiently, with significantly fewer wrong moves, than their counterparts who were receiving assistance. In his report on the experiment, van Nimwegen concluded that the subjects using the rudimentary program developed a clearer conceptual understanding of the task. They were better able to think ahead and plot a successful strategy. Those relying on guidance from the software, by contrast, often became confused and would “aimlessly click around.”

  The cognitive penalty imposed by the software aids became even clearer eight months later, when van Nimwegen had the same people work through the puzzle again. Those who had earlier used the rudimentary software finished the game almost twice as quickly as their counterparts. The subjects using the basic program, he wrote, displayed “more focus” during the task and “better imprinting of knowledge” afterward. They enjoyed the benefits of the generation effect. Van Nimwegen and some of his Utrecht colleagues went on to conduct experiments involving more realistic tasks, such as using calendar software to schedule meetings and event-planning software to assign conference speakers to rooms. The results were the same. People who relied on the help of software prompts displayed less strategic thinking, made more superfluous moves, and ended up with a weaker conceptual understanding of the assignment. Those using unhelpful programs planned better, worked smarter, and learned more.17

  What van Nimwegen observed in his laboratory—that when we automate cognitive tasks like problem solving, we hamper the mind’s ability to translate information into knowledge and knowledge into know-how—is also being documented in the real world. In many businesses, managers and other professionals depend on so-called expert systems to sort and analyze information and suggest courses of action. Accountants, for example, use decision-support software in corporate audits. The applications speed the work, but there are signs that as the software becomes more capable, the accountants become less so. One study, conducted by a group of Australian professors, examined the effects of the expert systems used by three international accounting firms. Two of the companies employed advanced software that, based on an accountant’s answers to basic questions about a client, recommended a set of relevant business risks to include in the client’s audit file. The third firm used simpler software that provided a list of potential risks but required the accountant to review them and manually select the pertinent ones for the file. The researchers gave accountants from each firm a test measuring their knowledge of risks in industries in which they had performed audits. Those from the firm with the less helpful software displayed a significantly stronger understanding of different forms of risk than did those from the other two firms. The decline in learning associated with advanced software affected even veteran auditors—those with more than five years of experience at their current firm.18

  Other studies of expert systems reveal similar effects. The research indicates that while decision-support software can help novice analysts make better judgments in the short run, it can also make them mentally lazy. By diminishing the intensity of their thinking, the software retards their ability to encode information in memory, which makes them less likely to develop the rich tacit knowledge essential to true expertise.19 The drawbacks to automated decision aids can be subtle, but they have real consequences, particularly in fields where analytical errors have far-reaching repercussions. Miscalculations of risk, exacerbated by high-speed computerized trading programs, played a major role in the near meltdown of the world’s financial system in 2008. As Tufts University management professor Amar Bhidé has suggested, “robotic methods” of decision making led to a widespread “judgment deficit” among bankers and other Wall Street professionals.20 While it may be impossible to pin down the precise degree to which automation figured in the disaster, or in subsequent fiascos like the 2010 “flash crash” on U.S. exchanges, it seems prudent to take seriously any indication that a widely used technology may be diminishing the knowledge or clouding the judgment of people in sensitive jobs. In a 2013 paper, computer scientists Gordon Baxter and John Cartlidge warned that a reliance on automation is eroding the skills and knowledge of financial professionals even as computer-trading systems make financial markets more risky.21

  Some software writers worry that their profession’s push to ease the strain of thinking is taking a toll on their own skills. Programmers today often use applications called integrated development environments, or IDEs, to aid them in composing code. The applications automate many tricky and time-consuming chores. They typically incorporate auto-complete, error-correction, and debugging routines, and the more sophisticated of them can evaluate and revise the structure of a program through a process known as refactoring. But as the applications take over the work of coding, programmers lose opportunities to practice their craft and sharpen their talent. “Modern IDEs are getting ‘helpful’ enough that at times I feel like an IDE operator rather than a programmer,” writes Vivek Haldar, a veteran software developer with Google. “The behavior all these tools encourage is not ‘think deeply about your code and write it carefully,’ but ‘just write a crappy first draft of your code, and then the tools will tell you not just what’s wrong with it, but also how to make it better.’ ” His verdict: “Sharp tools, dull minds.” 22
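
  To make “refactoring” concrete for readers who don’t write software: the snippet below is an invented, deliberately simple example of the kind of cleanup a modern IDE can suggest or apply automatically, collapsing long-winded logic into a tidier form without changing what the program does. The names and figures are mine, chosen only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Item:
    price: float
    tax_rate: float

# Before: the loop-and-accumulate pattern, written out longhand.
def total_price_longhand(items):
    total = 0.0
    for item in items:
        total += item.price * (1 + item.tax_rate)
    return total

# After an automated cleanup of the sort an IDE can perform:
# the same behavior, expressed as a single generator expression.
def total_price_refactored(items):
    return sum(item.price * (1 + item.tax_rate) for item in items)

if __name__ == "__main__":
    cart = [Item(10.0, 0.08), Item(4.5, 0.0)]
    assert total_price_longhand(cart) == total_price_refactored(cart)
```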

  Google acknowledges that it has even seen a dumbing-down effect among the general public as it has made its search engine more responsive and solicitous, better able to predict what people are looking for. Google does more than correct our typos; it suggests search terms as we type, untangles semantic ambiguities in our requests, and anticipates our needs based on where we are and how we’ve behaved in the past. We might assume that as Google gets better at helping us refine our searching, we would learn from its example. We would become more sophisticated in formulating keywords and otherwise honing our online explorations. But according to the company’s top search engineer, Amit Singhal, the opposite is the case. In 2013, a reporter from the Observer newspaper in London interviewed Singhal about the many improvements that have been made to Google’s search engine over the years. “Presumably,” the journalist remarked, “we have got more precise in our search terms the more we have used Google.” Singhal sighed and, “somewhat wearily,” corrected the reporter: “ ‘Actually, it works the other way. The more accurate the machine gets, the lazier the questions become.’ ”23

  More than our ability to compose sophisticated queries may be compromised by the ease of search engines. A series of experiments reported in Science in 2011 indicates that the ready availability of information online weakens our memory for facts. In one of the experiments, test subjects read a few-dozen simple, true statements—“an ostrich’s eye is bigger than its brain,” for instance—and then typed them into a computer. Half the subjects were told the computer would save what they typed; the other half were told that the statements would be erased. Afterward, the participants were asked to write down all the statements they could recall. People who believed the information had been stored in the computer remembered significantly fewer of the facts than did those who assumed the statements had not been saved. Just knowing that information will be available in a database appears to reduce the likelihood that our brains will make the effort required to form memories. “Since search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally,” the researchers concluded. “When we need it, we will look it up.”24

  For millennia, people have supplemented their biological memory with storage technologies, from scrolls and books to microfiche and magnetic tape. Tools for recording and distributing information underpin civilization. But external storage and biological memory are not the same thing. Knowledge involves more than looking stuff up; it requires the encoding of facts and experiences in personal memory. To truly know something, you have to weave it into your neural circuitry, and then you have to repeatedly retrieve it from memory and put it to fresh use. With search engines and other online resources, we’ve automated information storage and retrieval to a degree far beyond anything seen before. The brain’s seemingly innate tendency to offload, or externalize, the work of remembering makes us more efficient thinkers in some ways. We can quickly call up facts that have slipped our mind. But that same tendency can become pathological when the automation of mental labor makes it too easy to avoid the work of remembering and understanding.

  Google and other software companies are, of course, in the business of making our lives easier. That’s what we ask them to do, and it’s why we’re devoted to them. But as their programs become adept at doing our thinking for us, we naturally come to rely more on the software and less on our own smarts. We’re less likely to push our minds to do the work of generation. When that happens, we end up learning less and knowing less. We also become less capable. As the University of Texas computer scientist Mihai Nadin has observed, in regard to modern software, “The more the interface replaces human effort, the lower the adaptivity of the user to new situations.”25 In place of the generation effect, computer automation gives us the reverse: a degeneration effect.

  BEAR WITH me while I draw your attention back to that ill-fated, slicker-yellow Subaru with the manual transmission. As you’ll recall, I went from hapless gear-grinder to reasonably accomplished stick-handler with just a few weeks’ practice. The arm and leg movements my dad had taught me, cursorily, now seemed instinctive. I was hardly an expert, but shifting was no longer a struggle. I could do it without thinking. It had become, well, automatic.

  My experience provides a model for the way humans gain complicated skills. We often start off with some basic instruction, received directly from a teacher or mentor or indirectly from a book or manual or YouTube video, which transfers to our conscious mind explicit knowledge about how a task is performed: do this, then this, then this. That’s what my father did when he showed me the location of the gears and explained when to step on the clutch. As I quickly discovered, explicit knowledge goes only so far, particularly when the task has a psychomotor component as well as a cognitive one. To achieve mastery, you need to develop tacit knowledge, and that comes only through real experience—by rehearsing a skill, over and over again. The more you practice, the less you have to think about what you’re doing. Responsibility for the work shifts from your conscious mind, which tends to be slow and halting, to your unconscious mind, which is quick and fluid. As that happens, you free your conscious mind to focus on the more subtle aspects of the skill, and when those, too, become automatic, you proceed up to the next level. Keep going, keep pushing yourself, and ultimately, assuming you have some native aptitude for the task, you’re rewarded with expertise.

  This skill-building process, through which talent comes to be exercised without conscious thought, goes by the ungainly name automatization, or the even more ungainly name proceduralization. Automatization involves deep and widespread adaptations in the brain. Certain brain cells, or neurons, become fine-tuned for the task at hand, and they work in concert through the electrochemical connections provided by synapses. The New York University cognitive psychologist Gary Marcus offers a more detailed explanation: “At the neural level, proceduralization consists of a wide array of carefully coordinated processes, including changes to both gray matter (neural cell bodies) and white matter (axons and dendrites that connect between neurons). Existing neural connections (synapses) must be made more efficient, new dendritic spines may be formed, and proteins must be synthesized.”26 Through the neural modifications of automatization, the brain develops automaticity, a capacity for rapid, unconscious perception, interpretation, and action that allows mind and body to recognize patterns and respond to changing circumstances instantaneously.

  All of us experienced automatization and achieved automaticity when we learned to read. Watch a young child in the early stages of reading instruction, and you’ll witness a taxing mental struggle. The child has to identify each letter by studying its shape. She has to sound out how a set of letters combine to form a syllable and how a series of syllables combine to form a word. If she’s not already familiar with the word, she has to figure out or be told its meaning. And then, word by word, she has to interpret the meaning of a sentence, often resolving the ambiguities inherent to language. It’s a slow, painstaking process, and it requires the full attention of the conscious mind. Eventually, though, letters and then words get encoded in the neurons of the visual cortex—the part of the brain that processes sight—and the young reader begins to recognize them without conscious thought. Through a symphony of brain changes, reading becomes effortless. The greater the automaticity the child achieves, the more fluent and accomplished a reader she becomes.27

 
