Traffic

by Tom Vanderbilt

  Were drivers trading a feeling of greater safety for more risk? Perhaps they were simply swapping collisions with other vehicles for potentially more dangerous “single-vehicle road-departure” crashes—studies on test tracks have shown that drivers in ABS-equipped cars more often veered off the road when trying to avoid a crash than non-ABS drivers did. Other studies revealed that many drivers didn’t know how to use ABS brakes correctly. Rather than exploiting ABS to drive more aggressively, they may have been braking the wrong way. Finally, drivers with ABS may simply have been racking up more miles. Whatever the case, a 1994 report by the National Highway Traffic Safety Administration concluded that the “overall, net effect of ABS” on crashes—fatal and otherwise—was “close to zero.” (The reason why is still rather a mystery, as the Insurance Institute for Highway Safety concluded in 2000: “The poor early experience of cars with antilocks has never been explained.”)

  There always seems to be something else to protect us on the horizon. The latest supposed silver bullet for traffic safety is electronic stability control, the rollover-busting technology that, it is said, can help save nearly ten thousand lives per year. It would be a good thing if it did, but if history is a guide, it will not.

  Why do these changes in safety never seem to have the predicted impact? Is it just overambitious forecasting? The most troublesome possible answer, one that has been haunting traffic safety for decades, suggests that, as with the roads in Chapter 7, the safer cars get, the more risks drivers choose to take.

  While this idea has been around in one form or another since the early days of the automobile—indeed, it was used to argue against railroad safety improvements—it was most famously, and controversially, raised in a 1976 article by Sam Peltzman, an economist at the University of Chicago. Describing what has since become known as the “Peltzman effect,” he argued that despite the fact that a host of new safety technologies—most notably, the seat belt—had become legally required in new cars, the roads were no safer. “Auto safety regulation,” he concluded, “has not affected the highway death rate.” Drivers, he contended, were trading a decrease in accident risk for an increase in “driving intensity.” Even if the occupants of cars themselves were safer, he maintained, the increase in car safety had been “offset” by an increase in the fatality rate of people who did not benefit from the safety features—pedestrians, bicyclists, and motorcyclists. As drivers felt safer, everyone else had reason to feel less safe.

  Because of the twisting, entwined nature of car crashes and their contributing factors, it is exceedingly difficult to come to any certain conclusions about how crashes may have been affected by changes to any one variable of driving. The median age of the driving population, the state of the economy, changes in law enforcement, insurance factors, weather conditions, vehicle and modal mix, alterations in commuting patterns, hazy crash investigations—all of these things, and others, play their subtle part. In many cases, the figures are simply estimates.

  This gap between expected and achieved safety results might be explained by another theory, one that turns the risk hypothesis rather on its head. This theory, known as “selective recruitment,” says that when a seat-belt law is passed, the pattern of drivers who switch from not wearing seat belts to wearing seat belts is decidedly not random. The people who will be first in line are likely to be those who are already the safest drivers. The drivers who do not choose to wear seat belts, who have been shown in studies to be riskier drivers, will be “captured” at a smaller rate—and even when they are, they will still be riskier.

  Looking at the crash statistics, one finds that in the United States in 2004, more people not wearing their seat belts were killed in passenger-car accidents than those who were wearing belts—even though, if federal figures can be believed, more than 80 percent of drivers wear seat belts. It is not simply that drivers are less likely to survive a severe crash when not wearing their belts; as Leonard Evans has noted, the most severe crashes happen to those not wearing their belts. So while one can make a prediction about the estimated reduction in risk due to wearing a seat belt, this cannot simply be applied to the total number of drivers for an “expected” reduction in fatalities.

  Economists have a clichéd joke: The most effective car-safety instrument would be a dagger mounted on the steering wheel and aimed at the driver. The incentive to drive safely would be quite high. Given that you are twice as likely to die in a severe crash if you’re not wearing a seat belt, it seems that not wearing a seat belt is essentially the same as installing a dangerous dagger in your car.

  And yet what if, as the economists Russell Sobel and Todd Nesbit ask, you had a car so safe you could usually walk away unharmed after hitting a concrete wall at high speed? Why, you would “race it at 200 miles per hour around tiny oval racetracks only inches away from other automobiles and frequently get into accidents.” This was what they concluded after tracking five NASCAR drivers over more than a decade’s worth of races, as cars gradually became safer. The number of crashes went up, they found, while injuries went down.

  Naturally, this does not mean that the average driver, less risk-seeking than a race-car driver, is going to do the same. For one, average drivers do not get prize money; for another, race-car drivers wear flame-retardant suits and helmets. This raises the interesting, if seemingly outlandish, question of why car drivers, virtually alone among users of wheeled transport, do not wear helmets. Yes, cars do provide a nice metal cocoon with inflatable cushions. But in Australia, for example, head injuries among car occupants, according to research by the Federal Office of Road Safety, make up half the country’s traffic-injury costs. Helmets, cheaper and more reliable than side-impact air bags, would reduce injuries and cut fatalities by some 25 percent. A crazy idea, perhaps, but so were air bags once.

  Seat belts and their effects are more complicated than allowed for by the economist’s language of incentives, which sees us all as rational actors making predictable decisions. I have always considered the act of wearing my seat belt not so much an incentive to drive more riskily as a grim reminder of my own mortality (some in the car industry fought seat belts early on for this reason). This doesn’t mean I’m immune from behavioral adaptation. Even if I cannot imagine how the seat belt makes me act more riskily, I can easily imagine how my behavior would change if, for some reason, I was driving a car without seat belts. Perhaps my ensuing alertness would cancel out the added risk.

  Moving past the question of how many lives have been saved by seat belts and the like, it seems beyond doubt that increased feelings of safety can push us to take more risks, while feeling less safe makes us more cautious. This behavior may not always occur, we may do it for different reasons, we may do it with different intensities, and we may not be aware that we are doing it (or by how much); but the fact that we do it is why these arguments are still taking place. This may also explain why, as Peltzman has pointed out, car fatalities per mile still decline at roughly the same rate every year now as they did in the first half of the twentieth century, well before cars had things like seat belts and air bags.

  In the first decade of the twentieth century, forty-seven men tried to climb Alaska’s Mount McKinley, North America’s tallest peak. They had relatively crude equipment and little chance of being rescued if something went wrong. All survived. By the end of the century, when climbers carried high-tech equipment and helicopter-assisted rescues were quite frequent, each decade saw the death of dozens of people on the mountain’s slopes. Some kind of adaptation seemed to be occurring: The knowledge that one could be rescued was either driving climbers to make riskier climbs (something the British climber Joe Simpson has suggested); or it was bringing less-skilled climbers to the mountain. The National Park Service’s policy of increased safety was not only costing more money, it perversely seemed to be costing more lives—which had the ironic effect of producing calls for more “safety.”

  In the world of skydiving, the greatest mortality risk was once the so-called low-pull or no-pull fatality. Typically, the main chute would fail to open, but the skydiver would forget to trigger the reserve chute (or would trigger it too late). In the 1990s, U.S. skydivers began using a German-designed device that automatically deploys, if necessary, the reserve chute. The number of low- or no-pull fatalities dropped dramatically, from 14 in 1991 to 0 in 1998. Meanwhile, the number of once-rare open-canopy fatalities, in which the chute deploys but the skydiver is killed upon landing, surged to become the leading cause of death. Skydivers, rather than simply aiming for a safe landing, were attempting hook turns and swoops, daring maneuvers done with the canopy open. As skydiving became safer, many skydivers, particularly younger skydivers, found new ways to make it riskier.

  The psychologist Gerald Wilde would call what was happening “risk homeostasis.” This theory implies that people have a “target level” of risk: Like a home thermostat set to a certain temperature, it may fluctuate a bit from time to time but generally keeps the same average setting. “With that reliable rip cord,” Wilde told me at his home in Kingston, Ontario, “people would want to extend their trip in the sky as often as possible. Because a skydiver wants to be up there, not down here.”

  In traffic, we routinely adjust the risks we’re willing to take as the expected benefit grows. Studies, as I mentioned earlier in the book, have shown that cars waiting to make left turns against oncoming traffic will accept smaller gaps in which to cross (i.e., more risk) the longer they have been waiting (i.e., as the desire for completing the turn increases). Thirty seconds seems to be the limit of human patience for left turns before we start to ramp up our willingness for risk.

  We may also act more safely as things get more dangerous. Consider snowstorms. We’ve all seen footage of vehicles slowly spinning and sliding their way down freeways. The news talks dramatically of the numbers of traffic deaths “blamed on the snowstorm.” But something interesting is revealed in the crash statistics: During snowstorms, the number of collisions, relative to those on clear days, goes up, but the number of fatal crashes goes down. The snow danger seems to cut both ways: It’s dangerous enough that it causes more drivers to get into collisions, and dangerous enough that it forces them to drive at speeds that are less likely to produce a fatal crash. It may also, of course, force them not to drive in the first place, which itself is a form of risk adjustment.

  In moments like turning left across traffic, the risk and the payoff seem quite clear and simple. But do we behave consistently, and do we really have a sense of the actual risk or safety we’re looking to achieve? Are we always pushing it “to the max,” and do we even know what that “max” is? Critics of risk homeostasis have said that given how little humans actually know about assessing risk and probability, and given how many misperceptions and biases we’re susceptible to while driving, it’s simply expecting too much of us to think we’re able to hold to some perfect risk “temperature.” A cyclist, for example, may feel safer riding on the sidewalk instead of the street. But several studies have found that cyclists are more likely to be involved in a crash when riding on the sidewalk. Why? Sidewalks, though separated from the road, cross not only driveways but intersections—where most car-bicycle collisions happen. The driver, having already begun her turn, is less likely to expect—and thus to see—a bicyclist emerging from the sidewalk. The cyclist, feeling safer, may also be less on the lookout for cars.

  The average person, the criticism goes, is hardly aware of what their chances actually would be of surviving a severe crash while wearing a seat belt or protected by the unseen air bag lurking inside the steering wheel. Then again, as any trip to Las Vegas will demonstrate, we seem quite capable of making confident choices based on imperfect information about risk and odds. The loud, and occasionally vicious, debate over “risk compensation” and its various offshoots seems less about whether it can happen and more about whether it always happens, or exactly why.

  Most researchers agree that behavioral adaptation seems more robust in response to direct feedback. When you can actually feel something, it’s easier to change your behavior in response to it. We cannot feel air bags and seat belts at work, and we do not regularly test their capabilities—if they make us feel safer, that sense comes from something besides the devices themselves. Driving in snow, on the other hand, we don’t have to rely on internalized risk calculations: One can feel how dangerous or safe it is through the act of driving. (Some studies have shown that drivers with studded winter tires drive faster than those without them.)

  A classic way we sense feedback as drivers is through the size of the vehicle we are driving. The feedback is felt in various ways, from our closeness to the ground to the amount of road noise. Studies have suggested that drivers of small cars take fewer risks (as judged by speed, distance to the vehicle ahead of them, and seat-belt wearing) than drivers of larger cars. Many drivers, particularly in the United States, drive sport-utility vehicles for their perceived safety benefits from increased weight and visibility. There is evidence, however, that SUV drivers trade these advantages for more aggressive driving behavior. The result, studies have argued, is that SUVs are, overall, no safer than medium or large passenger cars, and less safe than minivans.

  Studies have also shown that SUV drivers drive faster, which may be a result of feeling safer. They seem to behave differently in other ways as well. A study in New Zealand observed the position of passing drivers’ hands on their steering wheels. This positioning has been suggested as a measure of perceived risk—research has found, for instance, that more people are likely to have their hands on the top half of the steering wheel when they’re driving on roads with higher speeds and more lanes. The study found that SUV drivers, more than car drivers, tended to drive either with only one hand or with both hands on the bottom half of the steering wheel, positions that seemed to indicate lower feelings of risk. Another study looked at several locations in London. After observing more than forty thousand vehicles, researchers found that SUV drivers were more likely to be talking on a cell phone than car drivers, more likely not to be wearing a seat belt, and—no surprise—more likely not to be wearing a seat belt while talking on a cell phone.

  It could just be that the types of people who talk on cell phones and disdain seat belts while driving also like to drive SUVs. But do they like to drive an SUV because they think it’s a safer vehicle or because it gives them license to act more adventurously on the road? To return to the mythical Fred, pickup drivers are less likely than other drivers to wear their seat belts. Under risk-compensation theory, he is doing this because he feels safer in the large pickup truck. But could he not drive in an even more risky fashion yet lower the “cost” of that risky driving by buckling up? It all leads to questions of where we get our information about what is risky and safe, and how we act upon it. Since relatively few of us have firsthand experience with severe crashes in which the air bags deployed, can we really have an accurate sense of how safe we are in a car with air bags versus one without—enough to get us to change our behavior?

  Risk is never as simple as it seems. One might think the safest course of action on the road would be to drive the newest car possible, one filled with the latest safety improvements and stuffed full of technological wonders. This car must be safer than your previous model. But, as a study in Norway found, new cars crash most. It’s not simply that there are more new cars on the road—the rate is higher. After studying the records of more than two hundred thousand cars, the researchers concluded: “If you drive a newer car, the probability of both damage and injury is higher than if you drive an older car.”

  Given that a newer car would seem to offer more protection in a crash, the researchers suggested that the most likely explanation was drivers changing the way they drive in response to the new car. “When using an older car which may not feel very safe,” they argued, “a driver probably drives more slowly and is more concentrated and cautious, possibly keeping a greater distance to the car in front.” The finding that new cars crash most has shown up elsewhere, including in the United States, although another explanation has been offered: When people buy new cars, they drive them more than old cars. This in itself, however, may be a subtle form of risk compensation: I feel safer in my new car, thus I am going to drive it more often.

  Studying risk is not rocket science; it’s more complicated. Cars keep getting objectively safer, but the challenge is to design a car that can overcome the inherent risks of human nature.

  In most places in the world, there are more suicides than homicides. Globally, more people take their own lives in an average year—roughly a million—than the total murdered and killed in war. We always find these sorts of statistics surprising, even if we are simultaneously aware of one of the major reasons for our misconception: Homicides and war receive much more media coverage than suicides, so they seem more prevalent.

  A similar bias helps explain why, in countries like the United States, the annual death toll from car crashes does not elicit more attention. If the media can be taken as some version of the authentic voice of public concern, one might assume that, over the last few years, the biggest threat to life in this country has been terrorism. This is reinforced all the time. We hear constant talk about “suspicious packages” left in public buildings. We’re searched at airports and we watch other people being searched. We live under coded warnings from the Department of Homeland Security. The occasional terrorist cell is broken up, even if it often seems to be a hapless group of wannabes.

 
