Reactive thinking is at the core of how we allocate our attention, and in many settings, it’s a tremendous asset. Athletes, for example, practice certain moves again and again so that, during a game, they can think reactively and execute plays faster than their opponents can respond. Reactive thinking is how we build habits, and it’s why to-do lists and calendar alerts are so helpful: Rather than needing to decide what to do next, we can take advantage of our reactive instincts and automatically proceed. Reactive thinking, in a sense, outsources the choices and control that, in other settings, create motivation.
But the downside of reactive thinking is that habits and reactions can become so automatic they overpower our judgment. Once our motivation is outsourced, we simply react. One study, conducted in 2009 by Strayer, the psychologist, looked at how drivers’ behaviors changed when cars were equipped with features such as cruise control and automatic braking systems, which allowed people to pay less attention to road conditions.
“These technologies are supposed to make driving safer, and many times, they do,” said Strayer. “But it also makes reactive thinking easier, and so when the unexpected startles you, when the car skids or you have to brake suddenly, you’ll react with practiced, habitual responses, like stomping on the pedal or twisting the wheel too far. Instead of thinking, you react, and if it’s not the correct response, bad things happen.”
Inside the cockpit, as the alarms sounded and the cricket chirped, the pilots were silent. Robert, the copilot, perhaps lost in his own thoughts, didn’t reply to Bonin’s question—“I’m in TO/GA, right?”—but instead tried once again to summon the captain, who was still resting in the hold. If Bonin had paused to consider the basic facts—he was in thin air, a stall alarm was sounding, the plane couldn’t safely go higher—he would have immediately realized he needed to lower the airplane’s nose. Instead, he relied on behaviors he had practiced hundreds of times and pulled back on the stick. The plane’s nose rose to a terrifying eighteen-degree pitch as Bonin pushed the throttle open. The plane climbed, touched the top of an arc, and then started dropping, its nose still pointed upward and the engines at full thrust. The cockpit began shaking as the buffeting grew more pronounced. The plane was falling fast.
“What the hell is happening?” the copilot asked. “Do you understand what’s happening, or not?”
“I don’t have control of the plane anymore!” Bonin shouted. “I don’t have control of the plane at all!”
In the cabin, passengers probably had little idea anything was wrong. There were no alarms they could hear. The buffeting likely felt like normal turbulence. Neither pilot ever made an announcement of any kind.
The captain finally entered the cockpit.
“What the hell are you doing?” he asked.
“I don’t know what’s happening,” Robert said.
“We’re losing control of the airplane!” Bonin shouted.
“We lost control of the airplane and we don’t understand at all,” Robert said. “We’ve tried everything.”
Flight 447 was now sinking at a rate of ten thousand feet per minute. The captain, standing behind the pilots and perhaps overwhelmed by what he saw, uttered a curse word and then remained silent for forty-one seconds.
“I have a problem,” Bonin said, the panic audible in his voice. “I have no more displays.” This was not correct. The displays—the screens on his instrument panel—were providing accurate information and were clearly visible. But Bonin was too overwhelmed to focus.
“I have the impression we’re going crazily fast,” Bonin said. The plane, in fact, at this point was moving far too slowly. “What do you think?” Bonin asked as he reached for the lever that would raise the speed-brakes on the wing, slowing the plane even more.
“No!” shouted the copilot. “Above all, don’t extend the brakes!”
“Okay,” Bonin said.
“What should we do?” the copilot asked the captain. “What do you see?”
“I don’t know,” the captain said. “It’s descending.”
Over the next thirty-five seconds, as the pilots shouted questions, the plane dropped another nine thousand feet.
“Am I going down now?” Bonin asked. The instruments in front of him could have easily answered that question.
“You’re going down down down,” the copilot said.
“I’ve been at full back stick for a while,” Bonin said.
“No, no!” the captain shouted. The plane was now less than ten thousand feet above the Atlantic Ocean. “Don’t climb!”
“Give me the controls!” the copilot said. “The controls! To me!”
“Go ahead,” Bonin said, finally releasing the stick. “You have the controls. We’re still in TO/GA, right?”
As the copilot took over, the plane fell another six thousand feet closer to the sea.
“Watch out, you’re pitching up there,” the captain said.
“I’m pitching up?” the copilot replied.
“You’re pitching up,” the captain said.
“Well, we need to!” Bonin said. “We’re at four thousand feet!”
By now, the only way the craft could pick up enough speed was to lower its nose into a dive and let more air flow over the wings. But with such a small distance between the plane and the ocean’s surface, there was no room to maneuver. A ground proximity warning began blaring, “SINK RATE! PULL UP!” The cockpit was filled with constant noise.
“You’re pitching up,” the captain told the copilot.
“Let’s go!” Bonin replied. “Pull up! Pull up! Pull up!”
The men stopped speaking for a moment.
“This can’t be true,” said Bonin. The ocean was visible through the cockpit’s windows. If the pilots had craned their necks, they could have made out individual waves.
“But what’s happening?” Bonin asked.
Two seconds later, the plane plunged into the sea.
II.
In the late 1980s, a group of psychologists at a consulting firm named Klein Associates began exploring why some people seem to stay calm and focused amid chaotic environments while others become overwhelmed. Klein Associates’ business was helping companies analyze how they make decisions. A variety of clients wanted to know why some employees made such good choices amid stress and time pressures, while other workers became distracted. More important, they wanted to know if they could train people to get better at paying attention to the right things.
The Klein Associates team began by interviewing professionals who worked in extreme settings, such as firefighters, military commanders, and emergency rescue personnel. Many of those conversations, however, proved frustrating. Firefighters could look at a burning staircase and sense whether it would hold their weight; they knew which parts of a building needed constant attention and how to stay attuned to warning signs, but they struggled to explain how they did it. Soldiers could tell you which parts of a battlefield were more likely to be harboring enemies and where to look for signs of ambush. But when asked to explain their decisions, they chalked it up to intuition.
So the team moved on to other settings. One researcher, Beth Crandall, visited neonatal intensive care units, or NICUs, around Dayton, near where she lived. A NICU, like all critical care settings, is a mix of chaos and banality set against a backdrop of constantly beeping machines and chiming warnings. Many of the babies inside a NICU are on their way to full health; they might have arrived prematurely or suffered minor injuries during birth, but they are not seriously ill. Others, though, are unwell and need constant monitoring. What makes things particularly hard for NICU nurses, however, is that it is not always clear which babies are sick and which are healthy. Seemingly okay preemies can become unwell quickly; sick infants can recover unexpectedly. So nurses are constantly making choices about where to focus their attention: the squalling baby or the quiet one? The new lab results or the worried parents who say something seems wrong? What’s more, these choices occur amid a constant stream of data from machines—heart monitors and automatic thermometers, blood pressure systems and pulse oximeters—that are ready to sound alarms the moment anything changes. Such innovations have made patients safer and have remarkably improved NICUs’ productivity, because fewer nurses are now needed to oversee greater numbers of children. But they have also made NICUs more complex. Crandall wanted to understand how nurses made decisions about which babies needed their attention, and why some of them were better at focusing on what mattered most.
Crandall interviewed nurses who were calm in the face of emergencies and others who seemed on the brink of collapse. Most interesting were the handful of nurses who seemed particularly gifted at noticing when a baby was in trouble. They could predict an infant’s decline or recovery based on small warning signs that almost everyone else overlooked. Often, the clues these nurses relied upon to spot problems were so subtle that they themselves had trouble later recalling what had prompted them to act. “It was like they could see things no one else did,” Crandall told me. “They seemed to think differently.”
One of Crandall’s first interviews was with a talented nurse named Darlene, who described a shift from a few years earlier. Darlene had been walking past an incubator when she happened to glance at the baby inside. All of the machines hooked up to the child showed that her vitals were within normal ranges. There was another RN keeping watch over the baby, and she was observing the infant attentively, unconcerned by what she saw. But to Darlene, something seemed wrong. The baby’s skin was slightly mottled instead of uniformly pink. The child’s belly seemed a bit distended. Blood had recently been drawn from a pinprick in her heel and the Band-Aid showed a blot of crimson, rather than a small dot.
None of that was particularly unusual or troubling. The nurse tending to the child said she was eating and sleeping well. Her heartbeat was strong. But something about all those small things occurring together caught Darlene’s attention. She opened the incubator and examined the infant. The newborn was conscious and awake. She grimaced slightly at Darlene’s touch but didn’t cry. There was nothing specific that Darlene could point to, but this baby simply didn’t look the way she expected her to.
Darlene found the attending physician and said they needed to start the child on intravenous antibiotics. All they had to go on was Darlene’s intuition, but the doctor, deferring to her judgment, ordered the medication and a series of tests. When the labs came back, they showed that the baby was in the early stages of sepsis, a potentially fatal whole-body inflammation caused by a severe infection. The condition was moving so fast that, had they waited any longer, the newborn would have likely died. Instead, she recovered fully.
“It fascinated me that Darlene and this other nurse had seen the same warning signs, they had all the same information, but only Darlene detected the problem,” Crandall said. “To the other nurse, the mottled skin and the bloody Band-Aid were data points, nothing big enough to trigger an alarm. But Darlene put everything together. She saw a whole picture.” When Crandall asked Darlene to explain how she knew the baby was sick, Darlene said it was a hunch. As Crandall asked more questions, however, another explanation emerged. Darlene explained that she carried around a picture in her mind of what a healthy baby ought to look like—and the infant in the crib, when she glanced at her, hadn’t matched that image. So the spotlight inside Darlene’s head went to the child’s skin, the blot of blood on her heel, and the distended belly. It focused on those unexpected details and triggered Darlene’s sense of alarm. The other nurse, in contrast, didn’t have a strong picture in her head of what she expected to see, and so her spotlight focused on the most obvious details: The baby was eating. Her heartbeat was strong. She wasn’t crying. The other nurse was distracted by the information that was easiest to grasp.
People like Darlene who are particularly good at managing their attention tend to share certain characteristics. One is a propensity to create pictures in their minds of what they expect to see. These people tell themselves stories about what’s going on as it occurs. They narrate their own experiences within their heads. They are more likely to answer questions with anecdotes rather than simple responses. They say when they daydream, they’re often imagining future conversations. They visualize their days with more specificity than the rest of us do.
Psychologists have a phrase for this kind of habitual forecasting: “creating mental models.” Understanding how people build mental models has become one of the most important topics in cognitive psychology. All people rely on mental models to some degree. We all tell ourselves stories about how the world works, whether we realize we’re doing it or not.
But some of us build more robust models than others. We envision the conversations we’re going to have with more specificity, and imagine what we are going to do later that day in greater detail. As a result, we’re better at choosing where to focus and what to ignore. The secret of people like Darlene is that they are in the habit of telling themselves stories all the time. They engage in constant forecasting. They daydream about the future and then, when life clashes with their imagination, their attention gets snagged. That helps explain why Darlene noticed the sick baby. She was in the habit of imagining what the babies in her unit ought to look like. Then, when she glanced over and the bloody Band-Aid, distended belly, and mottled skin didn’t match the image in her mind, the spotlight in her head swung toward the child’s bassinet.
Cognitive tunneling and reactive thinking occur when our mental spotlights go from dim to bright in a split second. But if we are constantly telling ourselves stories and creating mental pictures, that beam never fully powers down. It’s always jumping around inside our heads. And, as a result, when it has to flare to life in the real world, we’re not blinded by its glare.
When the Air France Flight 447 investigators began parsing cockpit audio recordings, they found compelling evidence that none of the pilots had strong mental models during their flight.
“What’s this?” the copilot asked when the first stall warning sounded.
“There’s no good speed indication?…We’re in…we’re in a climb?” Bonin responded.
The pilots kept asking each other questions as the plane’s crisis deepened because they didn’t have mental models to help them process new information as it arrived. The more they learned, the more confused they became. This explains why Bonin was so prone to cognitive tunneling. He hadn’t been telling himself a story as the plane flew along, and so when the unexpected occurred, he wasn’t sure which details to pay attention to. “I have the impression we’re going crazily fast,” he said as the plane began to slow and fall. “What do you think?”
And when Bonin did finally latch on to a mental model—“I’m in TO/GA, right?”—he didn’t look for any facts that challenged that model. “I’m climbing, okay, so we’re going down,” he said two minutes before the plane crashed, seemingly oblivious to the contradiction of his words. “Okay, we’re in TO/GA,” he added. “How come we’re continuing to go right down?”
“This can’t be true,” he said seconds before the plane hit the water. Then there are his last words, which make all the sense in the world once you realize Bonin was still grasping for useful mental models as the plane hurtled toward the waves:
“But what’s happening?”
This problem isn’t unique to the aviators of Flight 447, of course. It happens all the time, within offices and on freeways, as we’re working on our smartphones and multitasking from the couch. “This mess of a situation is one hundred percent our own fault,” said Stephen Casner, a research psychologist at NASA who has studied dozens of accidents like Air France Flight 447. “We started with a creative, flexible, problem-solving human and a mostly dumb computer that’s good at rote, repetitive tasks like monitoring. So we let the dumb computer fly and the novel-writing, scientific-theorizing, jet-flying humans sit in front of the computer like potted plants watching for blinking lights. It’s always been difficult to learn how to focus. It’s even harder now.”
A decade after Beth Crandall interviewed the NICU nurses, two economists and a sociologist from MIT decided to study how, exactly, the most productive people build mental models. To do that, they convinced a midsized recruiting firm to give them access to its profit-and-loss data, its employees’ appointment calendars, and the 125,000 email messages the firm’s executives had sent over the previous ten months.
The first thing the researchers noticed, as they began crawling through all that data, was that the firm’s most productive workers, its superstars, shared a number of traits. The first was that they tended to work on only five projects at once—a healthy load, but not extraordinary. There were other employees who handled ten or twelve projects at a time, but those employees had a lower profit rate than the superstars, who were more careful about how they invested their time.
The economists figured the superstars were pickier because they were seeking out assignments that were similar to previous work they had done. Conventional wisdom holds that productivity rises when people do the same kind of tasks over and over. Repetition makes us faster and more efficient because we don’t have to learn fresh skills with each new assignment. But as the economists looked more closely, they found the opposite: The superstars weren’t choosing tasks that leveraged existing skills. Instead, they were signing up for projects that required them to seek out new colleagues and demanded new abilities. That’s why the superstars worked on only five projects at a time: Meeting new people and learning new skills takes a lot of additional hours.