SuperFreakonomics


by Steven D. Levitt & Stephen J. Dubner


  Finally Allie realized what she really wanted to do: go back to college. She would build on everything she’d learned by running her own business and, if all went well, apply this newfound knowledge to some profession that would pay an insanely high wage without relying on her own physical labor.

  Her chosen field of study? Economics, of course.

  CHAPTER 2

  WHY SHOULD SUICIDE BOMBERS BUY LIFE INSURANCE?

  If you know someone in southeastern Uganda who is having a baby next year, you should hope with all your heart that the baby isn’t born in May. If it is, the child will be roughly 20 percent more likely to have visual, hearing, or learning disabilities as an adult.

  Three years from now, however, May will be a fine month in which to have a baby. But the danger will only have shifted, not disappeared; April will then be the cruelest month.

  What can possibly account for this bizarre pattern? Before you answer, consider this: the same pattern has been identified halfway across the world, in Michigan. In fact, a May birth in Michigan might carry an even greater risk than in Uganda.

  The economists Douglas Almond and Bhashkar Mazumder have a simple answer for this strange and troubling phenomenon: Ramadan.

  Some parts of Michigan have a substantial Muslim population, as does southeastern Uganda. Islam calls for a daytime fast from food and drink for the entire month of Ramadan. Most Muslim women participate even while pregnant; it’s not a round-the-clock fast, after all. Still, as Almond and Mazumder found by analyzing years’ worth of natality data, babies that were in utero during Ramadan are more likely to exhibit developmental aftereffects. The magnitude of these effects depends on which month of gestation the baby is in when Ramadan falls. The effects are strongest when fasting coincides with the first month of pregnancy, but they can occur if the mother fasts at any time up to the eighth month.

  Islam follows a lunar calendar, so the month of Ramadan begins eleven days earlier each year. In 2009, it ran from August 21 to September 19, which made May 2010 the unluckiest month in which to be born. Three years later, with Ramadan beginning on July 20, April would be the riskiest birth month. The risk is magnified when Ramadan falls during summertime because there are more daylight hours—and, therefore, longer periods without food and drink. That’s why the birth effects can be stronger in Michigan, which has fifteen hours of daylight during summer, than in Uganda, which sits at the equator and therefore has roughly equal daylight hours year-round.

  It is no exaggeration to say that a person’s entire life can be greatly influenced by the fluke of his or her birth, whether the fluke is one of time, place, or circumstance. Even animals are susceptible to this natal roulette. Kentucky, the capital of Thoroughbred horse breeding, was hit by a mysterious disease in 2001 that left 500 foals stillborn and resulted in about 3,000 early fetal losses. In 2004, as this diminished cohort of three-year-olds came of age, two of the three Triple Crown races were won by Smarty Jones, a colt whose dam was impregnated in Kentucky but returned home to Pennsylvania before she could be afflicted.

  Such birth effects aren’t as rare as you might think. Douglas Almond, examining U.S. Census data from 1960 to 1980, found one group of people whose terrible luck persisted over their whole lives. They had more physical ailments and lower lifetime income than people who’d been born just a few months earlier or a few months later. They stood out in the census record the way a layer of volcanic ash stands out in the archaeological record: a thin stripe of ominous sediment nestled between two thick bands of normalcy.

  What happened?

  These people were in utero during the “Spanish flu” pandemic of 1918. It was a grisly plague, killing more than half a million Americans in just a few months—a casualty toll, as Almond notes, greater than all U.S. combat deaths during all the wars fought in the twentieth century.

  More than 25 million Americans, meanwhile, contracted the flu but survived. This included one of every three women of childbearing age. The infected women who were pregnant during the pandemic had babies who, like the Ramadan babies, ran the risk of carrying lifelong scars from being in their mothers’ bellies at the wrong time.

  Other birth effects, while not nearly as dire, can exert a significant pull on one’s future. It is common practice, especially among economists, to co-write academic papers and list the authors alphabetically by last name. What does this mean for an economist who happened to be born Albert Zyzmor instead of, say, Albert Aab? Two (real) economists addressed this question and found that, all else being equal, Dr. Aab would be more likely to gain tenure at a top university, become a fellow in the Econometric Society (hooray!), and even win the Nobel Prize.

  “Indeed,” the two economists concluded, “one of us is currently contemplating dropping the first letter of her surname.” The offending name: Yariv.

  Or consider this: if you visit the locker room of a world-class soccer team early in the calendar year, you are more likely to interrupt a birthday celebration than if you arrive later in the year. A recent tally of the British national youth leagues, for instance, shows that fully half of the players were born between January and March, with the other half spread out over the nine remaining months. On a similar German team, 52 elite players were born between January and March, with just 4 players born between October and December.

  Why such a severe birthdate bulge?

  Most elite athletes begin playing their sports when they are quite young. Since youth sports are organized by age, the leagues naturally impose a cutoff birthdate. The youth soccer leagues in Europe, like many such leagues, use December 31 as the cutoff date.

  Imagine now that you coach in a league for seven-year-old boys and are assessing two players. The first one (his name is Jan) was born on January 1, while the second one (his name is Tomas) was born 364 days later, on December 31. So even though they are both technically seven-year-olds, Jan is a year older than Tomas—which, at this tender age, confers substantial advantages. Jan is likely to be bigger, faster, and more mature than Tomas.

  So while you may be seeing maturity rather than raw ability, it doesn’t much matter if your goal is to pick the best players for your team. It probably isn’t in a coach’s interest to play the scrawny younger kid who, if he only had another year of development, might be a star.

  And thus the cycle begins. Year after year, the bigger boys like Jan are selected, encouraged, and given feedback and playing time, while boys like Tomas eventually fall away. This “relative-age effect,” as it has come to be known, is so strong in many sports that its advantages last all the way through to the professional ranks.

  K. Anders Ericsson, an enthusiastic, bearded, and burly Swede, is the ringleader of a merry band of relative-age scholars scattered across the globe. He is now a professor of psychology at Florida State University, where he uses empirical research to learn what share of talent is “natural” and how the rest of it is acquired. His conclusion: the trait we commonly call “raw talent” is vastly overrated. “A lot of people believe there are some inherent limits they were born with,” he says. “But there is surprisingly little hard evidence that anyone could attain any kind of exceptional performance without spending a lot of time perfecting it.” Or, put another way, expert performers—whether in soccer or piano playing, surgery or computer programming—are nearly always made, not born.*

  And yes, just as your grandmother always told you, practice does make perfect. But not just willy-nilly practice. Mastery arrives through what Ericsson calls “deliberate practice.” This entails more than simply playing a C-minor scale a hundred times or hitting tennis serves until your shoulder pops out of its socket. Deliberate practice has three key components: setting specific goals; obtaining immediate feedback; and concentrating as much on technique as on outcome.

  The people who become excellent at a given thing aren’t necessarily the same ones who seemed to be “gifted” at a young age. This suggests that when it comes to choosing a life path, people should do what they love—yes, your nana told you this too—because if you don’t love what you’re doing, you are unlikely to work hard enough to get very good at it.

  Once you start to look, birthdate bulges are everywhere. Consider the case of Major League Baseball players. Most youth leagues in the United States have a July 31 cutoff date. As it turns out, a U.S.-born boy is roughly 50 percent more likely to make the majors if he is born in August instead of July. Unless you are a big, big believer in astrology, it is hard to argue that someone is 50 percent better at hitting a big-league curveball simply because he is a Leo rather than a Cancer.

  But as prevalent as birth effects are, it would be wrong to overemphasize their pull. Birth timing may push a marginal child over the edge, but other forces are far, far more powerful. If you want your child to play Major League Baseball, the most important thing you can do—infinitely more important than timing an August delivery date—is make sure the baby isn’t born with two X chromosomes. Now that you’ve got a son instead of a daughter, you should know about a single factor that makes him eight hundred times more likely to play in the majors than a random boy.

  What could possibly have such a mighty influence?

  Having a father who also played Major League Baseball. So if your son doesn’t make the majors, you have no one to blame but yourself: you should have practiced harder when you were a kid.

  Some families produce baseball players. Others produce terrorists.

  Conventional wisdom holds that the typical terrorist comes from a poor family and is himself poorly educated. This seems sensible. Children who are born into low-income, low-education families are far more likely than average to become criminals, so wouldn’t the same be true for terrorists?

  To find out, the economist Alan Krueger combed through a Hezbollah newsletter called Al-Ahd (The Oath) and compiled biographical details on 129 dead shahids (martyrs). He then compared them with men from the same age bracket in the general populace of Lebanon. The terrorists, he found, were less likely to come from a poor family (28 percent versus 33 percent) and more likely to have at least a high-school education (47 percent versus 38 percent).

  A similar analysis of Palestinian suicide bombers by Claude Berrebi found that only 16 percent came from impoverished families, versus more than 30 percent of male Palestinians overall. More than 60 percent of the bombers, meanwhile, had gone beyond high school, versus 15 percent of the populace.

  In general, Krueger found, “terrorists tend to be drawn from well-educated, middle-class or high-income families.” Despite a few exceptions—the Irish Republican Army and perhaps the Tamil Tigers of Sri Lanka (there isn’t enough evidence to say)—the trend holds true around the world, from Latin American terrorist groups to the al Qaeda members who carried out the September 11 attacks in the United States.

  How can this be explained?

  It may be that when you’re hungry, you’ve got better things to worry about than blowing yourself up. It may be that terrorist leaders place a high value on competence, since a terrorist attack requires more orchestration than a typical crime.

  Furthermore, as Krueger points out, crime is primarily driven by personal gain, whereas terrorism is fundamentally a political act. In his analysis, the kind of person most likely to become a terrorist is similar to the kind of person most likely to…vote. Think of terrorism as civic passion on steroids.

  Anyone who has read some history will recognize that Krueger’s terrorist profile sounds quite a bit like the typical revolutionary. Fidel Castro and Che Guevara, Ho Chi Minh, Mohandas Gandhi, Leon Trotsky and Vladimir Lenin, Simón Bolívar, and Maximilien Robespierre—you won’t find a single lower-class, uneducated lad among them.

  But a revolutionary and a terrorist have different goals. Revolutionaries want to overthrow and replace a government. Terrorists want to—well, it isn’t always clear. As one sociologist puts it, they might wish to remake the world in their own dystopian image; religious terrorists may want to cripple the secular institutions they despise. Krueger cites more than one hundred different scholarly definitions of terrorism. “At a conference in 2002,” he writes, “foreign ministers from over 50 Islamic states agreed to condemn terrorism but could not agree on a definition of what it was that they had condemned.”

  What makes terrorism particularly maddening is that killing isn’t even the main point. Rather, it is a means by which to scare the pants off the living and fracture their normal lives. Terrorism is therefore devilishly efficient, exerting far more leverage than an equal amount of non-terrorist violence.

  In October 2002, the Washington, D.C., metropolitan area experienced fifty murders, a fairly typical number. But ten of these murders were different. Rather than the typical domestic disputes or gang killings, these were random and inexplicable shootings. Ordinary people minding their own business were shot while pumping gas or leaving the store or mowing the lawn. After the first few killings, panic set in. As they continued, the region was virtually paralyzed. Schools were closed, outdoor events canceled, and many people wouldn’t leave their homes at all.

  What kind of sophisticated and well-funded organization had wrought such terror?

  Just two people, it turned out: a forty-one-year-old man and his teenage accomplice, firing a Bushmaster .223-caliber rifle from an old Chevy sedan, its roomy trunk converted into a sniper’s nest. So simple, so cheap, and so effective: that is the leverage of terror. Imagine that the nineteen hijackers from September 11, rather than going to the trouble of hijacking airplanes and flying them into buildings, had instead spread themselves around the country, nineteen men with nineteen rifles in nineteen cars, each of them driving to a new spot every day and shooting random people at gas stations and schools and restaurants. Had the nineteen of them synchronized their actions, they would have effectively set off a nationwide time bomb every day. They would have been hard to catch, and even if one of them was caught, the other eighteen would carry on. The entire country would have been brought to its knees.

  Terrorism is effective because it imposes costs on everyone, not just its direct victims. The most substantial of these indirect costs is fear of a future attack, even though such fear is grossly misplaced. The probability that an average American will die in a given year from a terrorist attack is roughly 1 in 5 million; he is 575 times more likely to commit suicide.

  Consider the less obvious costs, too, like the loss of time and liberty. Think about the last time you went through an airport security line and were forced to remove your shoes, shuffle through the metal detector in stocking feet, and then hobble about while gathering up your belongings.

  The beauty of terrorism—if you’re a terrorist—is that you can succeed even by failing. We perform this shoe routine thanks to a bumbling British national named Richard Reid, who, even though he couldn’t ignite his shoe bomb, exacted a huge price. Let’s say it takes an average of one minute to remove and replace your shoes in the airport security line. In the United States alone, this procedure happens roughly 560 million times per year. Five hundred and sixty million minutes equals more than 1,065 years—which, divided by 77.8 years (the average U.S. life expectancy at birth), yields a total of nearly 14 person-lives. So even though Richard Reid failed to kill a single person, he levied a tax that is the time equivalent of 14 lives per year.
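
  For readers who want to check the back-of-the-envelope arithmetic, here is a minimal sketch in Python. The inputs (one minute per screening, roughly 560 million screenings per year, a 77.8-year life expectancy) are the figures quoted above; the 365-day year used for the conversion is an assumption of this sketch, not something specified in the text.

```python
# Back-of-the-envelope check of the shoe-removal "time tax" described above.
# All inputs are the figures quoted in the passage; the 365-day year is an assumption.

MINUTES_PER_SCREENING = 1            # average time to remove and replace shoes
SCREENINGS_PER_YEAR = 560_000_000    # rough annual count of U.S. security screenings
LIFE_EXPECTANCY_YEARS = 77.8         # average U.S. life expectancy at birth

minutes_lost_per_year = MINUTES_PER_SCREENING * SCREENINGS_PER_YEAR
person_years_lost = minutes_lost_per_year / (60 * 24 * 365)   # minutes -> years
lives_lost = person_years_lost / LIFE_EXPECTANCY_YEARS        # years -> "person-lives"

print(f"{person_years_lost:,.0f} person-years, or about {lives_lost:.1f} lives per year")
# prints: 1,065 person-years, or about 13.7 lives per year
```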

  The direct costs of the September 11 attacks were massive—nearly three thousand lives and economic losses as high as $300 billion—as were the costs of the wars in Afghanistan and Iraq that the United States launched in response. But consider the collateral costs as well. In just the three months following the attacks, there were one thousand extra traffic deaths in the United States. Why?

  One contributing factor is that people stopped flying and drove instead. Per mile, driving is much more dangerous than flying. Interestingly, however, the data show that most of these extra traffic deaths occurred not on interstates but on local roads, and they were concentrated in the Northeast, close to the terrorist attacks. Furthermore, these fatalities were more likely than usual to involve drunken and reckless driving. These facts, along with myriad psychological studies of terrorism’s aftereffects, suggest that the September 11 attacks led to a spike in alcohol abuse and post-traumatic stress that translated into, among other things, extra driving deaths.

  Such trickle-down effects are nearly endless. Thousands of foreign-born university students and professors were kept out of the United States because of new visa restrictions after the September 11 attacks. At least 140 U.S. corporations exploited the ensuing stock-market decline by illegally backdating stock options. In New York City, so many police resources were shifted to terrorism that other areas—the Cold Case Squad, for one, as well as anti-Mafia units—were neglected. A similar pattern was repeated on the national level. Money and manpower that otherwise would have been spent chasing financial scoundrels were instead diverted to chasing terrorists—perhaps contributing to, or at least exacerbating, the recent financial meltdown.

  Not all of the September 11 aftereffects were harmful. Thanks to decreased airline traffic, influenza—which travels well on planes—was slower to spread and less dangerous. In Washington, D.C., crime fell whenever the federal terror-alert level went up (thanks to extra police flooding the city). And an increase in border security was a boon to some California farmers—who, as Mexican and Canadian imports declined, grew and sold so much marijuana that it became one of the state’s most valuable crops.

  When one of the four airplanes hijacked on September 11 crashed into the Pentagon, all of the seriously injured victims, most of whom suffered burns, were taken to Washington Hospital Center, the largest hospital in the city. There were only a handful of patients—corpses were more plentiful—but even so, the burn unit was nearly overwhelmed. Like most hospitals, WHC routinely operated at about 95 percent of capacity, so even a small surge of patients stressed the system. Worse yet, the hospital’s phone lines went down, as did local cell service, so anyone needing to make a call had to jump in a car and drive a few miles away.

 
