Freakonomics Revised and Expanded Edition


by Steven D. Levitt


  There are three basic flavors of incentive: economic, social, and moral. Very often a single incentive scheme will include all three varieties. Think about the anti-smoking campaign of recent years. The addition of a $3-per-pack “sin tax” is a strong economic incentive against buying cigarettes. The banning of cigarettes in restaurants and bars is a powerful social incentive. And when the U.S. government asserts that terrorists raise money by selling black-market cigarettes, that acts as a rather jarring moral incentive.

  Some of the most compelling incentives yet invented have been put in place to deter crime. Considering this fact, it might be worthwhile to take a familiar question—why is there so much crime in modern society?—and stand it on its head: why isn’t there a lot more crime?

  After all, every one of us regularly passes up opportunities to maim, steal, and defraud. The chance of going to jail—thereby losing your job, your house, and your freedom, all of which are essentially economic penalties—is certainly a strong incentive. But when it comes to crime, people also respond to moral incentives (they don’t want to do something they consider wrong) and social incentives (they don’t want to be seen by others as doing something wrong). For certain types of misbehavior, social incentives are terribly powerful. In an echo of Hester Prynne’s scarlet letter, many American cities now fight prostitution with a “shaming” offensive, posting pictures of convicted johns (and prostitutes) on websites or on local-access television. Which is a more horrifying deterrent: a $500 fine for soliciting a prostitute or the thought of your friends and family ogling you on www.HookersAndJohns.com?

  So through a complicated, haphazard, and constantly readjusted web of economic, social, and moral incentives, modern society does its best to militate against crime. Some people would argue that we don’t do a very good job. But taking the long view, that is clearly not true. Consider the historical trend in homicide (not including wars), which is both the most reliably measured crime and the best barometer of a society’s overall crime rate. These statistics, compiled by the criminologist Manuel Eisner, track the historical homicide levels in five European regions.

  [Table: HOMICIDES (per 100,000 People), five European regions]

  The steep decline of these numbers over the centuries suggests that, for one of the gravest human concerns—getting murdered—the incentives that we collectively cook up are working better and better.

  So what was wrong with the incentive at the Israeli day-care centers?

  You have probably already guessed that the $3 fine was simply too small. For that price, a parent with one child could afford to be late every day and only pay an extra $60 each month—just one-sixth of the base fee. As babysitting goes, that’s pretty cheap. What if the fine had been set at $100 instead of $3? That would have likely put an end to the late pickups, though it would have also engendered plenty of ill will. (Any incentive is inherently a trade-off; the trick is to balance the extremes.)

  But there was another problem with the day-care center fine. It substituted an economic incentive (the $3 penalty) for a moral incentive (the guilt that parents were supposed to feel when they came late). For just a few dollars each day, parents could buy off their guilt. Furthermore, the small size of the fine sent a signal to the parents that late pickups weren’t such a big problem. If the day-care center suffers only $3 worth of pain for each late pickup, why bother to cut short your tennis game? Indeed, when the economists eliminated the $3 fine in the seventeenth week of their study, the number of late-arriving parents didn’t change. Now they could arrive late, pay no fine, and feel no guilt.

  Such is the strange and powerful nature of incentives. A slight tweak can produce drastic and often unforeseen results. Thomas Jefferson noted this while reflecting on the tiny incentive that led to the Boston Tea Party and, in turn, the American Revolution: “So inscrutable is the arrangement of causes and consequences in this world that a two-penny duty on tea, unjustly imposed in a sequestered part of it, changes the condition of all its inhabitants.”

  In the 1970s, researchers conducted a study that, like the Israeli day-care study, pitted a moral incentive against an economic incentive. In this case, they wanted to learn about the motivation behind blood donations. Their discovery: when people are given a small stipend for donating blood rather than simply being praised for their altruism, they tend to donate less blood. The stipend turned a noble act of charity into a painful way to make a few dollars, and it wasn’t worth it.

  What if the blood donors had been offered an incentive of $50, or $500, or $5,000? Surely the number of donors would have changed dramatically.

  But something else would have changed dramatically as well, for every incentive has its dark side. If a pint of blood were suddenly worth $5,000, you can be sure that plenty of people would take note. They might literally steal blood at knifepoint. They might pass off pig blood as their own. They might circumvent donation limits by using fake IDs. Whatever the incentive, whatever the situation, dishonest people will try to gain an advantage by whatever means necessary.

  Or, as W. C. Fields once said: a thing worth having is a thing worth cheating for.

  Who cheats?

  Well, just about anyone, if the stakes are right. You might say to yourself, I don’t cheat, regardless of the stakes. And then you might remember the time you cheated on, say, a board game. Last week. Or the golf ball you nudged out of its bad lie. Or the time you really wanted a bagel in the office break room but couldn’t come up with the dollar you were supposed to drop in the coffee can. And then took the bagel anyway. And told yourself you’d pay double the next time. And didn’t.

  For every clever person who goes to the trouble of creating an incentive scheme, there is an army of people, clever and otherwise, who will inevitably spend even more time trying to beat it. Cheating may or may not be human nature, but it is certainly a prominent feature in just about every human endeavor. Cheating is a primordial economic act: getting more for less. So it isn’t just the boldface names—inside-trading CEOs and pill-popping ballplayers and perk-abusing politicians—who cheat. It is the waitress who pockets her tips instead of pooling them. It is the Wal-Mart payroll manager who goes into the computer and shaves his employees’ hours to make his own performance look better. It is the third grader who, worried about not making it to the fourth grade, copies test answers from the kid sitting next to him.

  Some cheating leaves barely a shadow of evidence. In other cases, the evidence is massive. Consider what happened one spring evening at midnight in 1987: seven million American children suddenly disappeared. The worst kidnapping wave in history? Hardly. It was the night of April 15, and the Internal Revenue Service had just changed a rule. Instead of merely listing the name of each dependent child, tax filers were now required to provide a Social Security number. Suddenly, seven million children—children who had existed only as phantom exemptions on the previous year’s 1040 forms—vanished, representing about one in ten of all dependent children in the United States.

  The incentive for those cheating taxpayers was quite clear. The same for the waitress, the payroll manager, and the third grader. But what about that third grader’s teacher? Might she have an incentive to cheat? And if so, how would she do it?

  Imagine now that instead of running a day-care center in Haifa, you are running the Chicago Public Schools, a system that educates 400,000 students each year.

  The most volatile current debate among American school administrators, teachers, parents, and students concerns “high-stakes” testing. The stakes are considered high because instead of simply testing students to measure their progress, schools are increasingly held accountable for the results.

  The federal government mandated high-stakes testing as part of the No Child Left Behind law, signed by President Bush in 2002. But even before that law, most states gave annual standardized tests to students in elementary and secondary school. Twenty states rewarded individual schools for good test scores or dramatic improvement; thirty-two states sanctioned the schools that didn’t do well.

  The Chicago Public School system embraced high-stakes testing in 1996. Under the new policy, a school with low reading scores would be placed on probation and face the threat of being shut down, its staff to be dismissed or reassigned. The CPS also did away with what is known as social promotion. In the past, only a dramatically inept or difficult student was held back a grade. Now, in order to be promoted, every student in third, sixth, and eighth grade had to manage a minimum score on the standardized, multiple-choice exam known as the Iowa Test of Basic Skills.

  Advocates of high-stakes testing argue that it raises the standards of learning and gives students more incentive to study. Also, if the test prevents poor students from advancing without merit, they won’t clog up the higher grades and slow down good students. Opponents, meanwhile, worry that certain students will be unfairly penalized if they don’t happen to test well, and that teachers may concentrate on the test topics to the exclusion of more important lessons.

  Schoolchildren, of course, have had incentive to cheat for as long as there have been tests. But high-stakes testing has so radically changed the incentives for teachers that they too now have added reason to cheat. With high-stakes testing, a teacher whose students test poorly can be censured or passed over for a raise or promotion. If the entire school does poorly, federal funding can be withheld; if the school is put on probation, the teacher stands to be fired. High-stakes testing also presents teachers with some positive incentives. If her students do well enough, she might find herself praised, promoted, and even richer: the state of California at one point introduced bonuses of $25,000 for teachers who produced big test-score gains.

  And if a teacher were to survey this newly incentivized landscape and consider somehow inflating her students’ scores, she just might be persuaded by one final incentive: teacher cheating is rarely looked for, hardly ever detected, and just about never punished.

  How might a teacher go about cheating? There are any number of possibilities, from brazen to subtle. A fifth-grade student in Oakland recently came home from school and gaily told her mother that her super-nice teacher had written the answers to the state exam right there on the chalkboard. Such instances are certainly rare, for placing your fate in the hands of thirty prepubescent witnesses doesn’t seem like a risk that even the worst teacher would take. (The Oakland teacher was duly fired.) There are more nuanced ways to inflate students’ scores. A teacher can simply give students extra time to complete the test. If she obtains a copy of the exam early—that is, illegitimately—she can prepare them for specific questions. More broadly, she can “teach to the test,” basing her lesson plans on questions from past years’ exams, which isn’t considered cheating but may well violate the spirit of the test. Since these tests all have multiple-choice answers, with no penalty for wrong guesses, a teacher might instruct her students to randomly fill in every blank as the clock is winding down, perhaps inserting a long string of Bs or an alternating pattern of Bs and Cs. She might even fill in the blanks for them after they’ve left the room.

  But if a teacher really wanted to cheat—and make it worth her while—she might collect her students’ answer sheets and, in the hour or so before turning them in to be read by an electronic scanner, erase the wrong answers and fill in correct ones. (And you always thought that no. 2 pencil was for the children to change their answers.) If this kind of teacher cheating is truly going on, how might it be detected?

  To catch a cheater, it helps to think like one. If you were willing to erase your students’ wrong answers and fill in correct ones, you probably wouldn’t want to change too many wrong answers. That would clearly be a tip-off. You probably wouldn’t even want to change answers on every student’s test—another tip-off. Nor, in all likelihood, would you have enough time, because the answer sheets have to be turned in soon after the test is over. So what you might do is select a string of eight or ten consecutive questions and fill in the correct answers for, say, one-half or two-thirds of your students. You could easily memorize a short pattern of correct answers, and it would be a lot faster to erase and change that pattern than to go through each student’s answer sheet individually. You might even think to focus your activity toward the end of the test, where the questions tend to be harder than the earlier questions. In that way, you’d be most likely to substitute correct answers for wrong ones.
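
  As a rough sketch of that strategy, here is how it might be simulated in Python. Nothing below comes from the study itself; the function name, the block length, and the share of students altered are all invented for illustration.

```python
import random

def simulate_block_cheating(answers, answer_key, block_len=8, share=0.5, seed=0):
    """Hypothetical sketch: for a random subset of students, overwrite a block
    of consecutive answers near the end of the test with the correct answers."""
    rng = random.Random(seed)
    start = len(answer_key) - block_len              # the later, harder questions
    n_altered = int(share * len(answers))            # say, half the class
    for i in rng.sample(range(len(answers)), n_altered):
        answers[i][start:start + block_len] = answer_key[start:start + block_len]
    return answers
```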

  If economics is a science primarily concerned with incentives, it is also—fortunately—a science with statistical tools to measure how people respond to those incentives. All you need are some data.

  In this case, the Chicago Public School system obliged. It made available a database of the test answers for every CPS student from third grade through seventh grade from 1993 to 2000. This amounts to roughly 30,000 students per grade per year, more than 700,000 sets of test answers, and nearly 100 million individual answers. The data, organized by classroom, included each student’s question-by-question answer strings for reading and math tests. (The actual paper answer sheets were not included; they were habitually shredded soon after a test.) The data also included some information about each teacher and demographic information for every student, as well as his or her past and future test scores—which would prove a key element in detecting the teacher cheating.

  Now it was time to construct an algorithm that could tease some conclusions from this mass of data. What might a cheating teacher’s classroom look like?

  The first thing to search for would be unusual answer patterns in a given classroom: blocks of identical answers, for instance, especially among the harder questions. If ten very bright students (as indicated by past and future test scores) gave correct answers to the exam’s first five questions (typically the easiest ones), such an identical block shouldn’t be considered suspicious. But if ten poor students gave correct answers to the last five questions on the exam (the hardest ones), that’s worth looking into. Another red flag would be a strange pattern within any one student’s exam—such as getting the hard questions right while missing the easy ones—especially when measured against the thousands of students in other classrooms who scored similarly on the same test. Furthermore, the algorithm would seek out a classroom full of students who performed far better than their past scores would have predicted and who then went on to score significantly lower the following year. A dramatic one-year spike in test scores might initially be attributed to a good teacher; but with a dramatic fall to follow, there’s a strong likelihood that the spike was brought about by artificial means.
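
  The book does not print the algorithm itself, but the red flags just described can be sketched in code. The Python fragment below is a loose illustration under simplifying assumptions: each student is represented by a raw answer string plus a list of right/wrong marks, the class is summarized by average scores for the prior, current, and following year, and every threshold is invented for the example rather than taken from the actual analysis.

```python
from collections import Counter

def classroom_red_flags(answer_strings, per_question_correct,
                        prior_avg, current_avg, next_avg,
                        block_len=8, share_threshold=0.5):
    """Hypothetical heuristics in the spirit of the description above."""
    flags = []

    # Red flag 1: a large share of the class gives an identical block of
    # answers on the final (hardest) questions.
    tails = Counter(s[-block_len:] for s in answer_strings)
    tail, count = tails.most_common(1)[0]
    if count / len(answer_strings) >= share_threshold:
        flags.append(f"{count} students share the final-block pattern {tail!r}")

    # Red flag 2: individual students who ace the hard (late) questions while
    # missing the easy (early) ones.
    for i, row in enumerate(per_question_correct):
        q = len(row) // 4
        if sum(row[-q:]) / q > 0.8 and sum(row[:q]) / q < 0.3:
            flags.append(f"student {i} did far better on the hardest questions than the easiest")

    # Red flag 3: a one-year spike in the class average followed by a fall the
    # next year (thresholds in arbitrary score units, chosen for illustration).
    if current_avg - prior_avg > 0.5 and current_avg - next_avg > 0.5:
        flags.append("class average spiked this year and fell back the next")

    return flags
```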

  Consider now the answer strings from the students in two sixth-grade Chicago classrooms who took the identical math test. Each horizontal row represents one student’s answers. The letter a, b, c, or d indicates a correct answer; a number indicates a wrong answer, with 1 corresponding to a, 2 corresponding to b, and so on. A zero represents an answer that was left blank. One of these classrooms almost certainly had a cheating teacher and the other did not. Try to tell the difference—although be forewarned that it’s not easy with the naked eye.
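
  (The encoding is mechanical enough to decode in a few lines of code; the snippet below, with an invented function name, is offered only as a reading aid. The two classrooms’ answer strings follow immediately after it.)

```python
def decode_answer_string(s):
    """Turn one printed answer string into a list of per-question outcomes.
    A letter means the student answered correctly, a digit 1-4 means the wrong
    choice a-d was picked, and 0 means the question was left blank."""
    outcomes = []
    for ch in s:
        if ch == "0":
            outcomes.append("blank")
        elif ch.isdigit():
            outcomes.append("wrong:" + "abcd"[int(ch) - 1])
        else:
            outcomes.append("correct:" + ch)
    return outcomes

# Example, using the first string from Classroom A below:
print(decode_answer_string("112a4a342cb214d0001acd24a3a12dadbcb4a0000000")[:5])
# -> ['wrong:a', 'wrong:a', 'wrong:b', 'correct:a', 'wrong:d']
```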

  Classroom A

  112a4a342cb214d0001acd24a3a12dadbcb4a0000000

  d4a2341cacbddad3142a2344a2ac23421c00adb4b3cb

  1b2a34d4ac42d23b141acd24a3a12dadbcb4a2134141

  dbaab3dcacb1dadbc42ac2cc31012dadbcb4adb40000

  d12443d43232d32323c213c22d2c23234c332db4b300

  db2abad1acbdda212b1acd24a3a12dadbcb400000000

  d4aab2124cbddadbcb1a42cca3412dadbcb423134bc1

  1b33b4d4a2b1dadbc3ca22c000000000000000000000

  d43a3a24acb1d32b412acd24a3a12dadbcb422143bc0

  313a3ad1ac3d2a23431223c000012dadbcb400000000

  db2a33dcacbd32d313c21142323cc300000000000000

  d43ab4d1ac3dd43421240d24a3a12dadbcb400000000

  db223a24acb11a3b24cacd12a241cdadbcb4adb4b300

  db4abadcacb1dad3141ac212a3a1c3a144ba2db41b43

  1142340c2cbddadb4b1acd24a3a12dadbcb43d133bc4

  214ab4dc4cbdd31b1b2213c4ad412dadbcb4adb00000
  1423b4d4a23d24131413234123a243a2413a21441343

  3b3ab4d14c3d2ad4cbcac1c003a12dadbcb4adb40000

  dba2ba21ac3d2ad3c4c4cd40a3a12dadbcb400000000

  d122ba2cacbd1a13211a2d02a2412d0dbcb4adb4b3c0

  144a3adc4cbddadbcbc2c2cc43a12dadbcb4211ab343

  d43aba3cacbddadbcbca42c2a3212dadbcb42344b3cb

  Classroom B

  db3a431422bd131b4413cd422a1acda332342d3ab4c4

  d1aa1a11acb2d3dbc1ca22c23242c3a142b3adb243c1

  d42a12d2a4b1d32b21ca2312a3411d00000000000000

  3b2a34344c32d21b1123cdc000000000000000000000

  34aabad12cbdd3d4c1ca112cad2ccd00000000000000

  d33a3431a2b2d2d44b2acd2cad2c2223b40000000000

  23aa32d2a1bd2431141342c13d212d233c34a3b3b000

  d32234d4a1bdd23b242a22c2a1a1cda2b1baa33a0000

  d3aab23c4cbddadb23c322c2a222223232b443b24bc3

  d13a14313c31d42b14c421c42332cd2242b3433a3343

  d13a3ad122b1da2b11242dc1a3a12100000000000000

  d12a3ad1a13d23d3cb2a21ccada24d2131b440000000

  314a133c4cbd142141ca424cad34c122413223ba4b40

  d42a3adcacbddadbc42ac2c2ada2cda341baa3b24321

  db1134dc2cb2dadb24c412c1ada2c3a341ba20000000

  d1341431acbddad3c4c213412da22d3d1132a1344b1b

  1ba41a21a1b2dadb24ca22c1ada2cd32413200000000

  dbaa33d2a2bddadbcbca11c2a2accda1b2ba20000000

  If you guessed that classroom A was the cheating classroom, congratulations. Here again are the answer strings from classroom A, now reordered by a computer that has been asked to apply the cheating algorithm and seek out suspicious patterns.

  Classroom A

  (With cheating algorithm applied)

  112a4a342cb214d0001acd24a3a12dadbcb4a0000000

 
