An Astronaut's Guide to Life on Earth


by Chris Hadfield


  In any field, it’s a plus if you view criticism as potentially helpful advice rather than as a personal attack. But for an astronaut, depersonalizing criticism is a basic survival skill. If you bristled every time you heard something negative—or stubbornly tuned out the feedback—you’d be toast.

  At NASA, everyone’s a critic. Over the years, hundreds of people weigh in on our performance on a regular basis. Our biggest blunders are put under the microscope so even more people can be made aware of them: “Check out what Hadfield did—let’s be sure no one ever does that again.”

  Often, we’re scrutinized and evaluated in real time. Quite a few simulations involve a crowd: all the people in Mission Control who would in real life work that particular problem, plus the trainers who dreamed up the scenario in the first place and the experts who best understand the intricate components of whatever system is being tested. When we’re simulating deorbit to landing, for instance, dozens of people observe, hoping that something new—a flaw in a standard procedure, say, or a better way of doing something—will be revealed. They actually want us to stumble into a gray zone no one had recognized could be problematic in order to see whether we can figure out what to do. If not, well, it’s much better to discover that gray zone while we’re still on Earth, where we have the luxury of being able to simulate a bunch more times until we do figure it out. Whether we fail or succeed in a sim is only part of the story. The main point is to learn—and then to review the experience afterward from every possible angle.

  The debrief is a cultural staple at NASA, which makes this place a nightmare for people who aren’t fond of meetings. During a sim, the flight director or lead astronaut makes notes on major events, and afterward, kicks off the debrief by reviewing the highlights: what went well, what new things were learned, what was already known but needs to be re-emphasized. Then it’s a free-for-all. Everyone else dives right in, system by system, to dissect what went wrong or was handled poorly. All the people who are involved in the sim have a chance to comment on how things looked from their consoles, so if you blundered in some way, dozens of people may flag it and enumerate all the negative effects of your actions. It’s not a public flogging: the goal is to build up collective wisdom. So the response to an error is never, “No big deal, don’t beat yourself up about it.” It’s “Let’s pull on that”—the idea being that a mistake is like a loose thread you should tug on, hard, to see if the whole fabric unravels.

  Occasionally the criticism is personal, though, and even when it’s constructive, it can sting. Prior to my last mission, my American crewmate Tom Marshburn and I were in the pool for a six-hour EVA evaluation, practicing spacewalking in front of a group of senior trainers and senior astronauts. Tom and I had both done EVAs in space and I thought we did really well in the pool. But in the debrief, after I’d explained my rationale for tethering my body in a particular way so I’d be stable enough to perform a repair, one of our instructors announced to the room, “When Chris talks, he has a very clear and authoritative manner—but don’t let yourself be lulled into a feeling of complete confidence that he’s right. Yes, he used to be a spacewalking instructor and evaluator and he’s Mr. EVA, but he hasn’t done a walk since 2001. There have been a lot of changes since then. I don’t want the junior trainers to ignore that little voice inside and not question something just because it’s being said with authority by someone who’s been here a long time.”

  At first that struck me as a little insulting, because the message boiled down to this: “Mr. EVA” sounds like he knows what he’s doing, but really, he may not have a clue. Then I stopped to ask myself, “Why is the instructor saying that?” Pretty quickly I had to concede that the point was valid. I don’t come off as wishy-washy and I’m used to teaching others how to do things, so I can sound very sure of myself. That doesn’t mean I think I know everything there is to know; I’d always assumed that people understood that perfectly well and felt free to jump in and question my judgment. But maybe my demeanor was making that difficult. I decided to test that proposition: instead of waiting for feedback, I’d invite it and see what happened. After a sim, I began asking my trainers and crewmates, “How did I fall short, technically, and what changes could I make next time?” Not surprisingly, the answer was rarely, “Don’t change a thing, Chris—everything you do is perfect!” So the debrief did what it was supposed to: it alerted me to a subtle but important issue I was able to address in a way that ultimately improved our crew’s chances of success.

  At NASA, we’re not just expected to respond positively to criticism, but to go one step further and draw attention to our own missteps and miscalculations. It’s not easy for hyper-competitive people to talk openly about screw-ups that make them look foolish or incompetent. Management has to create a climate where owning up to mistakes is permissible and colleagues have to agree, collectively, to cut each other some slack.

  I got used to public confessionals as a fighter pilot. Every Monday morning we got together for a flight safety briefing and talked about all the things that could have killed us the previous week. Sometimes pilots confessed to really basic errors and oversights and the rest of us were expected to suspend judgment. (Deliberate acts of idiocy—flying under a bridge, say, or showing off by going supersonic over your friend’s house and busting every window in the neighborhood—were a different story. Fighter pilots could be and were fired for them.) It was easier not to pass judgment once I grasped that another pilot’s willingness to admit he’d made a boneheaded move, and then talk about what had happened next, could save my life. Literally.

  At NASA, where the organizational culture focuses so explicitly on education, not just achievement, it’s even easier to frame individual mistakes as teachable moments rather than career-ending blunders. I remember one astronaut, also a former test pilot, standing up at a meeting and walking us all through an incident where his T-38 (the plane we all train on to keep up our flying skills) slid off the end of a runway in Louisiana. For a pilot this is hugely embarrassing, a rookie error. There wasn’t much damage to the plane, so the guy could’ve either kept his mouth shut, or the moral of the story could have been, “All’s well that ends well.” But as he told it, the moral was: be careful because the asphalt at this runway is slicker than most—it contains ground-up seashells, which, it turns out, are seriously slippery when it’s raining. That was incredibly useful information for all of us to have. While no one thought more of that astronaut for sliding off the runway, we certainly didn’t think less of him for being willing to save us from doing the same thing ourselves.

  After a four-hour sim, we usually debrief for about an hour, but that’s nothing. After a space flight, we debrief all day, every day, for a month or more, one subject at a time. Communication systems, biological research, spacesuits—every aspect of each experience is picked apart in an exhaustive meeting with the people responsible for that particular area. We gather in the main conference room of the Astronaut Office at JSC, a windowless, rather cavernous place, and the senior experts in that day’s subject matter take seats around a large oval table beside the recently returned astronauts, while the not-so-senior experts sit in chairs lined up against the walls. The flavor of the meetings is grilled astronaut: the experts fire questions at us and we do our best to answer them fully, with as many details as possible. In the debrief about food, for instance, we’re asked, “How was it? What did you like? Why? Was there enough for everyone? What did you throw away? How about the packaging—any way you can think of to improve it?” (The level of detail we go into helps explain why the food on Station is, for the most part, really good.)

  When the topic of discussion is an unexpected occurrence, such as the unplanned EVA to locate an external ammonia leak on the ISS during my last mission, the debrief goes on for days. As I’ll explain later, that was a highly unusual spacewalk for a variety of reasons, and the novelty factor made the debrief especially long and involved. The room was packed with people trying to deconstruct and reconstruct events, and figure out what they could do better next time.

  And as in any debrief, everyone also wanted to review what we could have done better—and to magnify and advertise our errors, so other astronauts wouldn’t make the same ones. One of the main purposes of a debrief is to learn every lesson possible, then fold them back into what we call Flight Rules so that everyone in the organization benefits.

  Flight Rules are the hard-earned body of knowledge recorded in manuals that list, step by step, what to do if X occurs, and why. Essentially, they are extremely detailed, scenario-specific standard operating procedures. If while I was on board the ISS a cooling system had failed, Flight Rules would have provided a blow-by-blow explanation of how to fix the system as well as the rationale behind each step of the procedure.

  NASA has been capturing our missteps, disasters and solutions since the early 1960s, when Mercury-era ground teams first started gathering “lessons learned” into a compendium that now lists thousands of problematic situations, from engine failure to busted hatch handles to computer glitches, and their solutions. Our flight procedures are based on these rules, but Flight Rules are really for Mission Control, so that when we have problems on orbit they can walk us through what to do.

  Given the obsession with preparation, it’s interesting how frequently we do run into trouble in space. Despite all our practice runs on Earth, it often turns out that we have miscalculated or overlooked something obvious, and need a new flight rule to cover it. In 2003, when I was Chief of Robotics at NASA, a crew on the ISS came very close to inadvertently hitting a fragile part of a docked Shuttle with Canadarm2. In the debrief afterward, it became obvious that although the impending near-collision had been detected on the ground, there wasn’t a clear and simple way to alert the crew. The chain of communication was incredibly convoluted: video and data from orbit were transmitted to Houston, where a specialist in a backroom had to recognize the problem and alert the robotics flight controller in Mission Control, who then had to warn the flight director and the CAPCOM, who then had to understand the situation and tell the astronauts what to do, who then had to do the right thing—and all this had to happen while the robot arm continued moving closer and closer to smashing into the only vehicle that could get the crew home alive.

  In the debrief we also realized that although astronauts had been very well prepared to use the relatively simple arm on the Shuttle, which had good lighting in the payload bay and fewer things to hit, they were less well trained to manipulate a more sophisticated robotic arm on a structure as complex and poorly lit as the ISS. So in the calm aftermath, we decided that along with making some changes to training, we’d better come up with a fast and unambiguous response people could use when a problem was observed in real time. Sounds like a no-brainer, right? But none of this had occurred to anyone before. And we had to take into account possibly fuzzy and intermittent radio communications, crew members whose first language might not be English, the actual controls on the robot arm itself and the urgency of the problem that had been detected. What we came up with was the simplest possible radio call and the simplest possible crew reaction: whoever saw that Canadarm2 was getting perilously close to smashing into something would say “all stop” three times. Everyone who heard the command, whether on the ground or in space, would repeat it out loud. And the crew would halt the arm’s motion with a single switch. This was captured in a new flight rule, so crews and Mission Control now train with the All-Stop Protocol in mind, and brief it aloud before every robotic operation, both in sims and on orbit. And the robot arm has never hit a structure accidentally.

  As is probably clear by now, even making seemingly simple decisions can be extremely difficult in space. The beauty of Flight Rules is that they create certainty when we have to make tough calls. For instance, in 1997 I was CAPCOM for STS-83, which, shortly after launch, appeared to have a fuel cell issue. Fuel cells generate electricity, sort of like a battery, and one of the three on board appeared to surpass permissible voltage thresholds. At Mission Control we thought the problem was probably with the sensor, not the fuel cell itself, so we were inclined to ignore it. But Flight Rules insisted the fuel cell had to be shut down—and then, with only two fuel cells deemed fully operational, another flight rule kicked in: the mission had to be terminated.

  If it had been up to us, STS-83 probably could’ve kept on going, because the Shuttle would fly fine with just two fuel cells if no other problems cropped up. In real time, the temptation to take a chance is always higher. However, the flight rules were unequivocal: the Shuttle had to return to Earth. As CAPCOM, it was my job to tell the commander, “Listen, I know you just got up there, but you have to come on back. Starting now.” It was heartbreaking for the crew, after spending so long training for that specific mission, to return to Earth three days after launch with most of their objectives unfulfilled. I’m sure they cursed the flight rules as they deorbited—and cursed even more loudly later, when it turned out the fuel cell in question would likely have been completely fine if they’d stayed in space. (There’s a nice coda to this story: the same crew launched again just three months later—which was unprecedented—and that time, nothing went wrong.)

  One reason we’re able to keep pushing the boundaries of human capability yet keep people safe is that Flight Rules protect against the temptation to take risks, which is strongest when momentum has been building to meet a launch date. The Soyuz can launch in just about any weather but the Shuttle was a much less rugged vehicle, so there were ironclad launch criteria: how windy it could be, how cold, how much cloud cover—clearly spelled-out minimally acceptable weather conditions for a safe launch. We came up with them when there was no urgency or pressure and there was enough time to pull on every string and analyze every consequence. We had to invoke them for about one-third of all launches. Having hard and fast rules, and being unwilling to bend them, was a godsend on launch day, when there was always a temptation to say, “Sure, it’s a touch colder than we’d like, but … let’s just try anyway.”

  I had helped with so many launches at the Cape that I fully expected a weather delay when I got strapped into my seat on Atlantis in November 1995, all ready for my first trip to space. Sure enough, five minutes before we were supposed to launch, STS-74 was called off. The weather was actually beautiful in Florida that day, but it was bad at all of our overseas emergency landing sites. The chances that we’d have to abort the mission after liftoff were extremely slim, but the rules were clear: we needed to have the option. No one on board was delighted with this turn of events, but there wasn’t a lot of grousing. After so many years of training, what was one more day? That’s one good thing about habitually sweating the small stuff: you learn to be very, very patient. (And we did, in fact, launch the next day.)

  NASA’s fanaticism about details and rules may seem ridiculously finicky to outsiders. But when astronauts are killed on the job, the reason is almost always an overlooked detail that seemed unimportant at the time. Initially, for instance, astronauts didn’t wear pressure suits during launch and re-entry—the idea had been considered but dismissed. Why bother, since they were in a proven vehicle with multiple levels of redundancy? It seemed over-the-top, and besides, suits would take up room, add weight to the rocket and, because they’re unwieldy, make it more difficult for the crew to maneuver. The Russians began wearing pressure suits for launch and landing only after a ventilation valve came loose and a Soyuz depressurized during re-entry in 1971, killing all three cosmonauts on board, likely within seconds. Shuttle astronauts started wearing pressure suits only after Challenger exploded during launch in 1986. In the case of both Challenger and Columbia, seemingly tiny details—a cracked O-ring, a dislodged piece of foam—caused terrible disasters.

  This is why, individually and organizationally, we have the patience to sweat the small stuff even when—actually, especially when—pursuing major goals. We’ve learned the hardest way possible just how much little things matter.

  The night before my first spacewalk in 2001, I was calm yet very conscious of the fact that I was about to do something I’d been dreaming of most of my life. STS-100 was my second space mission but the first time I’d ever had so much responsibility for such a crucial task on orbit—I was EV1, the lead spacewalker. I felt ready. I’d spent years studying and training. Still, I wanted to feel even more ready, so I spent a few hours polishing the visor of my spacesuit so my breath wouldn’t fog it up, unpacking and checking each piece of gear I’d need for the spacewalk, pre-assembling as much of it as I could and carefully attaching it to the Shuttle wall with Velcro—then double- and triple-checking my work, all the while mentally rehearsing the procedures I’d learned in the pool in Houston.

  Scott Parazynski and I had been training for a year and a half to install Canadarm2, the robotic arm that would build the ISS, then in its infancy. In May 2001, the Station was just a fraction of its current size; the first parts of the ISS had only been sent into orbit three years earlier, and the first crew took up residence in 2000. Our crew hadn’t even been inside the Station yet. We’d docked Endeavour to it a few days before but hadn’t yet been able to open the hatch because our EVA was going to take place from the Shuttle airlock—a depressurized bridge, in essence, between the two spacecraft.

  That night I felt a little like a kid on Christmas Eve. I wanted to get to sleep right away, to make the morning come faster. The visuals, however, were more appropriate to Halloween: on the Shuttle we slept in sleeping bags tethered to the walls and ceiling, an oddly macabre den of human chrysalises, hovering and still. I woke in the night and checked the green light of my Omega Speedmaster astronaut watch. Hours to go. Everyone else was fast asleep. I fell back asleep too until, with a burst of static, the small speaker in the Shuttle middeck erupted with wake-up music from Houston, a song Helene had chosen for me: “Northwest Passage” by Stan Rogers, one of my favorite folk singers. I slipped carefully out of my sleeping bag, found the microphone, said thanks to my family and everyone at Mission Control and started to get ready to go outside.
