Traffic


by Tom Vanderbilt


  One possible answer goes back to the spike in brain activity of the Detroit driver. He was afraid, probably before he even knew why. The size of trucks makes most of us nervous—and rightfully so. When we have a close brush with a truck or we see the horrific results of a crash between a car and a truck, it undoubtedly leaves a greater impression on our consciousness, which can skew our view of the world. “Being tailgated by a big truck is worth getting tailgated by fifty Geo Metros,” as Blower put it. “It stays with you, and you generalize with that.” (Studies have suggested that people think there are more trucks on the road than is actually the case.)

  Here’s the conundrum: If, on both an instinctual level and a more intellectual level, the drivers of cars fear trucks, why do car drivers, in so many cases, act so dangerously around them? The answer, as we are about to see, is that on the road we make imperfect guesses as to exactly what is risky and why, and we act on those biases in ways we may not even be aware of.

  Should I Stay or Should I Go? Why Risk on the Road Is So Complicated

  Psychologists have suggested that we generally think about risk in two different ways. One way, called “risk as analysis,” involves reason, logic, and careful consideration about the consequences of choices. This is what we do when we tell ourselves, on the way to the airport with a nervous stomach, “Statistically, flying is much safer than driving.”

  The second way has been called “risk as feelings.” This is why you have the nervous stomach in the first place. Perhaps it’s the act of leaving the ground: Flying just seems more dangerous than driving, even though you keep telling yourself it isn’t. Studies have suggested that we tend to lean more on “risk as feelings” when we have less time to make a decision, which seems like a survival instinct. It was smart of the Detroit driver to feel risk from the truck next to him, but the instinctual fear response doesn’t always help us. In collisions between cars and deer, for example, the greatest risk to the driver comes in trying to avoid hitting the animal. No one with a conscience wants to hit a deer, but we may also be fooled into thinking that the deer itself presents the greatest hazard. Hence the traffic signs that say DON’T VEER WHEN YOU SEE A DEER.

  One good reason why we rely on our feelings in thinking about risk is that “risk as analysis” is an incredibly complex and daunting process, more familiar to mathematicians and actuaries than the average driver. Even when we’re given actual probabilities of risk on the road, often the picture just gets muddier. Take the simple question of whether driving is safe or dangerous. Consider two sets of statistics: For every 100 million miles that are driven in vehicles in the United States, there are 1.3 deaths. One hundred million miles is a massive distance, the rough equivalent of crisscrossing the country more than thirty thousand times. Now consider another number: If you drive an average of 15,500 miles per year, as many Americans do, there is roughly a 1 in 100 chance you’ll die in a car crash over a lifetime of 50 years of driving.

  To most people, the first statistic sounds a whole lot better than the second. Each trip taken is incredibly safe. On an average drive to work or the mall, you’d have a 1 in 100 million chance of dying in a car crash. Over a lifetime of trips, however, it doesn’t sound as good: 1 in 100. How do you know if this one trip is going to be the trip? Psychologists, as you may suspect, have found that we are more sensitive to the latter sorts of statistics. When subjects in one study were given odds, similar to the aforementioned ones, of dying in a car crash on a “per trip” versus a “per lifetime” basis, more people said they were in favor of seat-belt laws when given the lifetime probability.
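  For readers who want to see how those two framings connect, here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted above (1.3 deaths per 100 million miles, 15,500 miles a year, 50 years of driving); treating every mile as an independent, identical draw is, of course, a simplification.

```python
# Back-of-the-envelope check of the two risk framings in the text.
# Figures from the text: 1.3 deaths per 100 million vehicle-miles,
# 15,500 miles driven per year, over a 50-year driving lifetime.

DEATHS_PER_MILE = 1.3 / 100_000_000   # per-mile fatality rate
MILES_PER_YEAR = 15_500
YEARS_DRIVING = 50

lifetime_miles = MILES_PER_YEAR * YEARS_DRIVING     # 775,000 miles
expected_deaths = lifetime_miles * DEATHS_PER_MILE  # ~0.0101

print(f"Lifetime miles: {lifetime_miles:,}")
print(f"Lifetime fatality risk: about 1 in {round(1 / expected_deaths)}")
# -> about 1 in 99: the tiny per-mile rate, compounded over a driving
#    lifetime, reproduces the "roughly 1 in 100" figure in the text.
```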

  This is why, it has been argued, it has long been difficult to convince people to drive in a safer manner. Each safe trip we take reinforces the image of a safe trip. It sometimes hardly seems worth the bother to wear a seat belt for a short trip to a local store, given that the odds are so low. But events that the odds say will almost certainly never happen have a strange way of happening sometimes (risk scholars call these moments “black swans”). Or, perhaps more accurately, when they do happen we are utterly unprepared for them—suddenly, there’s a train at the always empty railroad crossing.

  The risk of driving can be framed in several ways. One way is that most people get through a lifetime without a fatal car crash. Another way, as described by one study, is that “traffic fatalities are by far the most important contributor to the danger of leaving home.” If you considered only the first line of thinking, you might drive without much of a sense of risk. If you listened to only the second, you might never again get in a car. There is a built-in dilemma to how societies think about the risk of driving; driving is relatively safe, considering how much it is done, but it could be much safer. How much safer? If the number of deaths on the road were held to the acceptable-risk standards that the U.S. Occupational Safety and Health Administration maintains for service-industry fatalities, it has been estimated, there would be just under four thousand deaths a year; instead, the number is eleven times that. Does telling people it is dangerous make it safer?

  One often hears, on television or the radio, such slogans as “Every fifteen minutes, a driver is killed in an alcohol-related crash” or “Every thirteen minutes, someone dies in a fatal car crash.” This is meant, presumably, to suggest not just the magnitude of the problem but the idea that a fatal crash can happen to anyone, anywhere. And it can. Yet even when these slogans leave out the words “on average,” as they often do, we still do not take it to mean that someone is actually dying, like clockwork, every fifteen minutes.

  These kinds of averages obscure the startling extent to which risk on the road is not average. Take the late-night hours on weekends. How dangerous are they? In an average year, more people were killed in the United States on Saturday and Sunday between midnight and three a.m. than were killed in that same three-hour window during the entire rest of the week. In other words, just two nights accounted for a majority of the week’s deaths in that time period. On Sunday mornings from midnight to three a.m., there was not one driver dying every thirteen minutes but one driver dying every seven minutes. By contrast, on Wednesday mornings from three a.m. to six a.m., a driver was killed every thirty-two minutes.

  Time of day has a huge influence on what kinds of crashes occur. The average driver faces the highest risk of a crash during the morning and evening rush hours, simply because the volume of traffic is highest. But fatal crashes occur much less often during rush hours; one study found that 8 of every 1,000 crashes that happened outside the peak hours were fatal, while during the rush hour the number dropped to 3 out of every 1,000. During the weekdays, one theory goes, a kind of “commuters’ code” is in effect. The roads are filled with people going to work, driving in heavy congestion (one of the best road-safety measures, with respect to fatalities), by and large sober. The morning rush hour in the United States is twice as safe as the evening rush hour, in terms of fatal and non-fatal crashes. In the afternoon, the roads get more crowded with drivers out shopping, picking up the kids or the dry cleaning. Drivers are also more likely to have had a drink or two. The “afternoon dip,” or the circadian fatigue that typically sets in around two p.m., also raises the crash risk.

  What’s so striking about the massive numbers of fatalities on weekend mornings is the fact that so few people are on the roads, and so many—estimates are as high as 25 percent—have been drinking. Or think of the Fourth of July, one of the busiest travel days in the country and also, statistically, the most dangerous day to be on the road. It isn’t simply that more people are out driving, in which case more fatalities would be expected—and thus the day would not necessarily be more dangerous in terms of crash rate. It has more to do with what people are doing on the Fourth: Studies have shown there are more alcohol-related crashes on the Fourth of July than on the same days the week before or after—and, as it happens, many more than during any other holiday.

  What’s the actual risk imposed by a drunk driver, and what should the penalty be to offset that risk? The economists Steven D. Levitt and Jack Porter have argued that legally drunk drivers between the hours of eight p.m. and five a.m. are thirteen times more likely than sober drivers to cause a fatal crash, and those with legally acceptable amounts of alcohol are seven times more likely. Of the 11,000 drunk-driving fatalities in the period they studied, the majority—8,000—were the drivers and the passengers, while 3,000 were other drivers (the vast majority of whom were sober). Levitt and Porter argue that the appropriate fine for drunk driving in the United States, tallying up the externalities that it causes, should be about $8,000.

  Risk is not distributed randomly on the road. In traffic, the roulette wheel is loaded. Who you are, where you are, how old you are, how you are driving, when you are driving, and what you are driving all exert their forces on the spinning wheel. Some of these are as you might expect; some may surprise you.

  Imagine, if you will, Fred, the pickup-driving divorced Montana doctor out for a spin after the Super Bowl who is mentioned in this chapter’s title. Obviously, Fred is a fictional creation, and even if he did exist there’d be no way to judge the actual risk of driving with him. But each of the little things about Fred, and the way those things interact, play their own part in building a profile of Fred’s risk on the road.

  The most important risk factor, one that is subtly implicated in all the others, is speed. In a crash, the risk of dying rises with speed. This is common sense, and has been demonstrated in any number of studies. In a crash at 50 miles per hour, you’re fifteen times more likely to die than in a crash at 25 miles per hour—not twice as likely, as you might innocently expect from the doubling of the speed. The relationships are not proportional but exponential: Risk begins to accelerate much faster than speed. A crash when you’re driving 35 miles per hour causes a third more frontal damage than one where you’re doing 30 miles per hour.
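  The fifteenfold figure is in the neighborhood of what a widely cited empirical rule of thumb, Nilsson’s “power model,” would predict; that model is not named in the text, so the sketch below is an illustration of the general shape of the relationship, not a reconstruction of the studies cited.

```python
# A hedged illustration: under Nilsson's "power model" (an empirical
# rule of thumb, assumed here rather than cited by the text), fatality
# risk scales roughly with the fourth power of speed.

def relative_fatality_risk(v_high: float, v_low: float,
                           exponent: float = 4.0) -> float:
    """Relative fatality risk between two crash speeds under a power model."""
    return (v_high / v_low) ** exponent

print(relative_fatality_risk(50, 25))  # -> 16.0, close to the ~15x cited
# By contrast, kinetic energy scales only with the square of speed:
print((50 / 25) ** 2)                  # -> 4.0; risk outruns energy
```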

  Somewhat more controversial is the relationship between speed and the potential for a crash. It is known that drivers who have more speeding violations tend to get into more crashes. But studies have also looked at the speeds of vehicles that crashed on a given road, compared them to the speeds of vehicles that did not crash, and tried to figure out how speed affects the likelihood that one will crash. (One problem is that it’s extremely hard to tell how fast cars in crashes were actually going.) Some rough guidelines have been offered. An Australian study found that for a mean speed—not a speed limit—of 60 kilometers per hour (about 37 miles per hour), the risk of a crash doubled for every additional 5 kilometers per hour.
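  Taken at face value, the doubling rule implies a simple closed form for relative risk; the formula below is an extrapolation of the study’s reported pattern, not something the study itself states, and it should not be trusted far above the speeds the study observed.

```python
# Sketch of the doubling rule reported by the Australian study: on a
# road with a mean speed of 60 km/h, crash risk roughly doubles for
# every additional 5 km/h driven above that mean.

def relative_crash_risk(speed_kmh: float, mean_kmh: float = 60.0) -> float:
    """Crash risk relative to a driver traveling at the mean speed."""
    return 2.0 ** ((speed_kmh - mean_kmh) / 5.0)

for v in (60, 65, 70, 75, 80):
    print(f"{v} km/h -> {relative_crash_risk(v):.0f}x the risk at the mean")
# 60 -> 1x, 65 -> 2x, 70 -> 4x, 75 -> 8x, 80 -> 16x
```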

  In 1964, one of the first and most famous studies of crash risk based on speed was published, giving rise to the so-called Solomon curve, after its author, David Solomon, a researcher with the U.S. Federal Highway Administration. Crash rates, Solomon found after examining crash records on various sections of rural highway, seemed to follow a U-shaped curve: They were lowest for drivers traveling at the median speed and sloped upward for those going more or less than the median speed. Most strikingly, Solomon reported that “low speed drivers are more likely to be involved in accidents than relatively high speed drivers.”

  Solomon’s finding, despite being almost a half century old, has become a sort of mythic (and misunderstood) touchstone in the speed-limit debate, a hoary banner waved by those arguing in favor of higher speed limits. It’s not the actual speed itself that’s the safety problem, they insist, it’s speed variance. If those slower drivers would just get up to speed, the roads would flow in smooth harmony. It’s not speed that kills, it’s variance. (This belief, studies have indicated, is most strongly held by young males—who are, after all, experts, given that they get in the most crashes.) And what causes the most variance? Speed limits that are too low!

  Dear reader, much as I—as guilty as anyone of an occasional craving for speed—would like to believe this, the arguments against it are too compelling. For one, it assumes that the drivers who are going slow want to be driving slowly, and are not simply slowing for congested traffic, or entering a road from a turn, when they are suddenly hit by one of those drivers traveling the mean speed or higher. Solomon himself acknowledged (but downplayed) that these kinds of events might account for nearly half of the rear-end crashes at low speeds. Studies have found that a majority of rear-end crashes involve a stopped vehicle, which presumably had stopped for a good reason—and not to get in the way of the would-be speed maven behind it. Further, Gary Davis, an engineering professor at the University of Minnesota, proving yet again that statistics are one of the most dangerous things about traffic, has suggested there is a disconnect—what statisticians call an “ecological fallacy”—at work in speed-variance studies: Individual risk is conflated with “aggregate” risk, when in reality, he suggests, what holds for the whole group might not hold for individuals.

  In pure traffic-engineering theory, a world that really exists only on computer screens and in the dreams of traffic engineers and bears little resemblance to how drivers actually behave, a highway of cars all flowing at the same speed is a good thing. The fewer cars you overtake, the lower your chance of hitting someone or being hit. But this requires a world without cars slowing to change lanes to enter the highway, because they are momentarily lost, or because they’re hitting the tail end of a traffic jam. In any case, if faster cars being put at risk by slower cars were the mythical problem some have made it out to be, highway carnage would be dominated by cars trying to pass—but in fact, one study found that in 1996, a mere 5 percent of fatal crashes involved two vehicles traveling in the same direction. A much more common fatal crash is a driver moving at high speed leaving the road and hitting an object that isn’t moving at all. That is a case where speed variance really does kill.

  Let us move on to perhaps the oddest risk factor: Super Bowl Sunday. In one study, researchers compared crash data with the start and end times of all prior Super Bowl broadcasts. They divided all the Super Bowl Sundays into three intervals (before, during, and after). They then compared Super Bowl Sundays to non–Super Bowl Sundays. They found that in the before-the-game period, there was no discernible change in fatalities. During the game, when presumably more people would be off the roads, the fatal crash rate was 11 percent less than on a normal Sunday. After the game, they reported a relative increase in fatalities of 41 percent. The relative risks were higher in the places whose teams had lost.

  The primary reason for the increased postgame risk is one that I have already discussed: drinking. Nearly twenty times more beer is drunk in total on Super Bowl Sunday than on an average day. Fred’s risk would obviously be influenced by how many beers he had downed (beer, at least in the United States, is what most drivers pulled over for DUIs have been drinking) and the other factors that determine blood alcohol concentration (BAC). Increases in crash risk, as a number of studies have shown, begin to kick in at a BAC as low as .02 percent, start to crest significantly at .05 percent, and spike sharply at .08 to .1 percent.
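  Those thresholds can be summarized in a toy lookup; the band labels below are illustrative paraphrases of the text, not a validated dose-response model.

```python
# Toy summary of the BAC thresholds named in the text. The category
# labels are illustrative restatements, not clinical risk estimates.

def crash_risk_band(bac: float) -> str:
    """Map a blood alcohol concentration (%) to the text's risk bands."""
    if bac < 0.02:
        return "baseline risk"
    elif bac < 0.05:
        return "measurable increase in crash risk"
    elif bac < 0.08:
        return "risk cresting significantly"
    else:
        return "risk spiking sharply"

for level in (0.01, 0.03, 0.06, 0.09):
    print(f"BAC {level:.2f}% -> {crash_risk_band(level)}")
```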

  Determining crash risk based on a person’s BAC depends, of course, on the person. A famous study in Grand Rapids, Michigan, in the 1960s (one that would help establish the legal BAC limits in many countries), which pulled over drivers at random, found that drivers who had a .01 to .04 percent BAC level actually had fewer crashes than drivers with a BAC of zero. This so-called Grand Rapids dip led to the controversial speculation that drivers who had had “just a few” were more aware of the risks of driving, or of getting pulled over, and so drove more safely; others argued that regular drinkers were more capable of “handling” a small intake.

  The Grand Rapids dip has shown up in other studies, but it has been downplayed as another statistical fallacy—the “zero BAC” group in Michigan, for example, had more younger and older drivers, who are statistically less safe. Even critics of the study, however, noted that people who reported drinking with greater frequency had safer driving records than their teetotaler counterparts at every level of BAC, including zero. This does not mean that drinkers are better drivers per se, or that having a beer makes you a better driver. But the question of what makes a person a safe driver is more complicated than the mere absence of alcohol. As Leonard Evans notes, the effects of alcohol on driver performance are well known, but the effects of alcohol on driver behavior are not empirically predictable. Here is where the tangled paths of the cautious driver who has had a few, carefully obeying the speed limit, and the distracted sober driver, blazing over the limit and talking on the phone, intersect. Neither may be driving as well as they think they are, and one’s poorer reflexes may be mirrored by the other’s slower time to notice a hazard. Only one is demonized, but they’re both dangerous.

  The second key risk is Fred himself. Not because he is Fred, for there is no evidence that people named Fred get in more crashes than people named Max or Jerry. It is the fact that Fred is male. Across every age group in the United States, men are more likely than women to be involved in fatal crashes—in fact, in the average year, more than twice as many men as women are likely to be killed in a car, even though there are more women than men in the country. The global ratio is even higher. Men do drive more, but after that difference is taken into account, their fatal crash rates are still higher.

  According to estimates by researchers at Carnegie Mellon University, men die at the rate of 1.3 deaths per 100 million miles; for women the rate is .73. Men die at the rate of 14.51 deaths per 100 million trips, while for women it is 6.55. And crucially, men face .70 deaths per 100 million minutes, while for women the rate is .36. It may be true that men drive more, and drive for longer periods when they do drive, but this does not change the fact that for each minute they’re on the road, each mile they drive, and each trip they take, they are more likely to be killed—and to kill others—than women.
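  The comparison collapses neatly into ratios, using only the Carnegie Mellon figures just quoted; the quick calculation below shows that the male-to-female gap holds on every measure of exposure.

```python
# Ratios of male to female fatality rates, from the Carnegie Mellon
# figures quoted in the text (deaths per 100 million units of exposure).

rates = {
    "miles":   (1.30, 0.73),
    "trips":   (14.51, 6.55),
    "minutes": (0.70, 0.36),
}

for measure, (male, female) in rates.items():
    print(f"per 100M {measure}: men face {male / female:.1f}x "
          f"the fatality rate of women")
# Men's rate is roughly 1.8x to 2.2x women's on every exposure measure,
# so the gap cannot be explained by men simply driving more.
```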

 
