
The Predictioneer’s Game: Using the Logic of Brazen Self-Interest to See and Shape the Future


by Bruce Bueno De Mesquita


  Naysayers are too quick to equate what they see people do with what they assume those people’s core values to be. Because terrorist acts seem so extreme, so fanatical, so incomprehensible, many of us are quick to assume that terrorists are a breed apart. They are thought of as people who cannot and will not respond to rational arguments. And yet we already know that even al-Qaeda insurgents in Iraq can be induced to change their ways for just ten dollars a day. Put the history of Jewish-Muslim relations together with the responsiveness of former insurgents to modest economic rewards, and it’s hard to see the downside to trying a new economic approach, especially one that promises virtually no economic cost for one party and huge gains for the other. As the old anti-war song says, “Give peace a chance.”

  Even those who absolutely cannot believe that Palestinians or Israelis would value economic incentives over religious principles should want this tourism-incentive plan tried out. Why? Because it has a “hidden hand” benefit alluded to earlier that directly addresses the concern of naysayers. I think we can all agree that there are some hard-liners on the Palestinian side who don’t care about building a strong Palestinian economy, and others on the Israeli side who are certain God did not intend the land to be occupied by anyone other than Jews. These hard-liners will do whatever they can to thwart peace. They will foment violence to prevent tourists from coming. But we should also be able to agree that there are at least some pragmatists on each side as well. The revenue-sharing strategy will ensure that the pragmatists have a strong incentive to identify hard-liners and fight them. The pragmatists will have an incentive that they do not currently have to provide counterterrorism intelligence to their governments in order to ferret out the hard-liners and stop them from interfering with the massive economic improvements promised by this plan. Thus, it should become easier to find and punish the hard-liners, thereby strengthening the hand of pragmatists on both sides. That’s something that should appeal to those who fear the power of the hard-liners.

  What I want more than anything to show in this book, and what I hope I have begun to do, is that by thinking hard about the interests involved in a given problem, we have the opportunity to take the best available steps to ensure optimal outcomes. As the next example will show, when we are unaware of the interests at play, or willfully ignore them, we can invite ruin upon ourselves.

  INCENTIVIZING IGNORANCE

  Arthur Andersen was driven out of business by an aggressive Justice Department looking for a big fish to fry for Enron’s bankruptcy. Later, on appeal, the Supreme Court unanimously threw out Andersen’s conviction, but it was too late to save the business. Thousands of innocent people lost their jobs, their pensions, and the pride they had in working for a successful, philanthropic, and innovative company. Andersen’s senior management apparently was entirely innocent of real wrongdoing. Unfortunately, they nevertheless helped foster their own demise by not erecting a good monitoring system to protect their business from the misbehavior of their audit clients. In fact, that was and is a problem with every major accounting firm. In Andersen’s case, I know from painful personal experience how needless their sad end was.

  Around the year 2000, the head of Andersen’s risk management group asked me if I could develop a game-theory model that would help them anticipate the risk that some of their audit clients might commit fraud (this is where my work related to the Sarbanes-Oxley discussion from a few chapters back began). As I have related, three colleagues and I constructed a model to predict the chances that a company would falsely report its performance to shareholders and the SEC. Our game-theory approach, coupled with publicly available data, makes it possible to predict the likelihood of fraud two years in advance of its commission. We also worked out a detailed forensic-accounting method that helps assess the likely cause of fraud, if any, as a function of any publicly traded company’s governance structure.

  We grouped companies according to the degree to which our model projected that they were at risk of committing fraud. Of all the firms we examined, 98 percent were predicted to have a near-zero risk of committing fraud. Barely 1 percent of those firms were subsequently alleged to have reported their performance fraudulently. At the other end of our scale, about 1.5 percent of companies were placed in the highest risk category based on the corporate organizational and compensation factors assessed by the model. A whopping 85 percent of that small group of companies were accused by the SEC of committing fraud within the time window investigated by the model. This is a very effective system that produces few false positives—alleging that a company would commit fraud when it apparently did not—and very few false negatives—suggesting that a company would not commit fraud when it subsequently did.
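  To see how strong those numbers are, it helps to run the rounded percentages through a quick back-of-the-envelope calculation. The sketch below does exactly that; the universe of 10,000 firms is an assumption chosen for round numbers, not a figure from our study.

```python
# Back-of-the-envelope check of the hit rates quoted above. The
# universe of 10,000 firms is an assumed figure, not from the study.
total_firms = 10_000

low_risk = round(0.98 * total_firms)    # predicted near-zero risk of fraud
high_risk = round(0.015 * total_firms)  # placed in the highest risk category

false_negatives = round(0.01 * low_risk)   # low-risk firms later alleged fraudulent
true_positives = round(0.85 * high_risk)   # high-risk firms later accused by the SEC

print(f"Low-risk firms later accused of fraud:  {false_negatives} of {low_risk}")
print(f"High-risk firms later accused of fraud: {true_positives} of {high_risk}")
```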

  Enron was one of the 1.5 percent of companies that we highlighted as being in the highest risk category. You can see this in the table on page 119, which shows our predictions for a select group of companies that eventually were accused of very big frauds. The table shows our assessment of the risk of fraud for each company each year. The estimates of interest are for 1997-99. These assessments are based on what is called in statistics an out-of-sample test. Let me explain what that means and how it is constructed.

  Suppose you want to know how likely it is that a company is in either of two categories: honest or fraudulent. Using game-theory reasoning, you might identify several factors that nudge executives to resort to fraud when their company is in trouble. A few chapters back we talked about some of those factors, such as the size of the group of people whose support executives need to keep their jobs, and we talked about factors that provide early-warning signs of fraud, such as dividend and management compensation packages that are below expectations given the reported performance of the firm and its governance structure.

  We know that some conditions, including the amount paid out in dividends, indicate whether fraud is more or less likely; but how important is the magnitude of dividend payments in influencing the risk of fraud compared to, for instance, the percentage of the company owned by large institutional investors? That too is an important indicator of the incentive to hide or reveal poor corporate performance. There are statistical procedures that evaluate the information on many variables (the factors identified in the fraud game devised by my colleagues and me, for example) to work out how well those factors predict the odds that a company is honest or fraudulent (or whatever else it is that is being studied).

  There is a family of statistical methods known as maximum likelihood estimation for doing this. We won’t worry here about exactly how these methods work. (For the aficionados, we used logit analysis.) The important thing is that these methods produce unbiased estimates of the relative weight or importance of each of the factors, each of the variables, thought to influence the outcome. By multiplying each factor’s value (the number of directors, for example, or the percentage of the firm owned by institutional investors) by its weight, we can get a composite estimate of the probability that the firm will be honest or will commit fraud two years in the future. If the theory is just plain wrong, then these statistical methods will show that the factors in the equation do not significantly influence whether a firm is honest or fraudulent in the way the theory predicts.
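  As a minimal sketch of that calculation, assuming invented factor names and weights (the book does not publish the actual model or its estimates): each factor’s value is multiplied by its estimated weight, the products are summed, and the logistic function converts the sum into a probability.

```python
import math

def fraud_probability(factors, weights, intercept):
    """Logit composite: p = 1 / (1 + exp(-(intercept + sum of w_i * x_i)))."""
    score = intercept + sum(w * x for w, x in zip(weights, factors))
    return 1.0 / (1.0 + math.exp(-score))

# Illustrative only: invented factor values and weights.
factors = [9,     # number of directors on the board
           0.45,  # fraction of the firm owned by institutional investors
           0.02]  # dividend payout relative to expectations
weights = [0.10, -2.0, -1.5]

risk = fraud_probability(factors, weights, intercept=-1.0)
print(f"Estimated probability of fraud two years out: {risk:.1%}")
```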

  The weights we estimated were derived from data on hundreds of companies for the years 1989 through 1996. Since the thing we were interested in predicting—corporate honesty or fraud—was unknown for 1997 in 1995, or for 1998 in 1996, or for 1999 in 1997, and so forth, the statistically estimated weights were limited to just those years for which we knew the outcomes as well as the inputs from two years earlier. Thus, the last year for which we used the statistical method to fit data to a known outcome was 1996. We then applied the weights created by the in-sample test to estimate the likelihood of fraud for the years of data that were not included in our statistical calculation. Those years are the out-of-sample cases. The out-of-sample predictions, then, cover the years 1997 forward in the table. Of course, since this analysis was actually being done in 2000 and 2001, we were “predicting” the past.
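  The mechanics of that design can be sketched in a few lines of code, here using scikit-learn. Everything below (the synthetic data, the two stand-in factors, the column names) is an assumption for illustration; only the temporal split, fitting on outcomes known through 1996 and scoring 1997 forward with frozen weights, follows the procedure just described.

```python
# A sketch of the out-of-sample design, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 3000
df = pd.DataFrame({
    "year": rng.integers(1989, 2000, size=n),  # year the inputs are observed
    "x1": rng.normal(size=n),                  # stand-in governance factor
    "x2": rng.normal(size=n),                  # stand-in dividend factor
})
# Outcome observed two years after the inputs: 1 = fraud, 0 = honest.
df["fraud"] = (0.9 * df["x1"] - 0.6 * df["x2"] + rng.normal(size=n) > 1.5).astype(int)

# In-sample: estimate the weights only on years with known outcomes (through 1996).
train = df[df["year"] <= 1996]
model = LogisticRegression().fit(train[["x1", "x2"]], train["fraud"])

# Out-of-sample: freeze those weights and score 1997 forward.
test = df[df["year"] > 1996]
risk = model.predict_proba(test[["x1", "x2"]])[:, 1]
print(f"Mean predicted fraud risk, 1997 onward: {risk.mean():.1%}")
```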

  Now, you may well think this is an odd view of prediction. It is unlike anything I have discussed so far. What, you might wonder, does it mean to predict the past? That must be very easy when you already know what happened. But remember, in the out-of-sample test, nothing that happened after 1996 was utilized to create variable weights or to pick the variables that were important. Since the predictions about events after 1996 took no advantage of any information after that year, they are true predictions even though they were created in 2000 and 2001. This sort of out-of-sample test is useful for assessing whether our model worked effectively at distinguishing between companies facing a high and a low risk of fraud. That is not to say it is useful from a practical standpoint; its value lies in validating the model and in providing confidence about how it could be expected to perform in the future. Let me explain what I mean:

  Predicting the past can be helpful in terms of advancing science, even though it is not of much practical use when it comes to avoiding the audit of firms that have already committed fraud. But if a model accurately predicted the pattern of fraud in the past, it is likely to do the same in the future.

  Here’s another way to think about this: The fraud model uses publicly available data. If Arthur Andersen had asked my colleagues and me to develop a theory of fraud in 1996 instead of in 2000, we could have constructed exactly the same model. We could have used exactly the same data from 1989–96 to predict the risk of fraud in different companies for 1997 forward. Those predictions would have been identical to the ones we made in our out-of-sample tests in 2000. The only difference would have been that they could have been useful, because they would then have been about the future.

  Clearly, we had a good monitoring system. Our game-theory logic allowed us to predict when firms were likely to be on good behavior and when they were not. It even sorted out correctly the years in which an individual firm was at high or low risk. For instance, our approach showed “in advance” (that is, based on the out-of-sample test) when it was likely that Rite Aid was telling the truth in its annual reports and when it was not. The same could be said for Xerox, Waste Management, Enron, and many others not shown here. We could identify high-risk companies among Andersen’s audit clients, and we could identify low-risk companies that Andersen was not auditing but should have pursued aggressively for future business. That, in fact, was the idea behind the pilot study Arthur Andersen contracted for. They could use the information we uncovered to maintain up-to-date data on firms. Then the model could predict future risks, and Andersen could tailor their audits accordingly.

  Did Andersen make good use of this information? Sadly, they did not. After consulting with their attorneys and their engagement partners—the people who signed up audit clients and oversaw the audits—they concluded that it was prudent not to know how risky different companies were, and so they did not use the model. Instead, they kept on auditing problematic firms, and they got driven out of business. Were they unusual in their seeming lack of commitment to real monitoring and in their failure to cut off clients who were predicted to behave poorly in the near future? Not in my experience. The lack of commitment to effective monitoring is a major concern in game-theory designs for organizations. This is true because, as we will see, too often companies have weak incentives to know about problems. Was the lack of monitoring rational? Alas, yes, it was, even though in the end it meant the demise of Arthur Andersen, LLP. Game-theory thinking made it clear to me that Andersen would not monitor well, but I must say Andersen’s most senior management partners genuinely did not seem to understand the risks they were taking.

  At Arthur Andersen, partners had to retire by age sixty-two. Many retired at age fifty-seven. These two numbers go a long way toward explaining why there were weak incentives to pay attention to audit risks. The biggest auditing gigs were brought in by senior engagement partners who had been around for a long time. As I pointed out to one of Andersen’s senior management partners, senior engagement partners had an incentive not to look too closely at the risks associated with big clients. A retiring partner’s pension depended on how much revenue he brought in over the years. The audit of a big firm, like Enron, typically involved millions of dollars. It was clear to me why a partner might look the other way, choosing not to check too closely whether the firm had created a big risk of litigation down the road.

  Suppose the partner were in his early to mid-fifties at the time of the audit. If the fraud model predicted fraud two years later, the partner understood that meant a high risk of fraud and therefore a high risk that Andersen (or any accounting firm doing the audit) would face costly litigation. The costs of litigation came out of the annual funds otherwise available as earnings for the partners. Of course, this cost was not borne until a lawsuit was filed, lawyers hired, and the process of defense got under way. An audit client cooking its books typically was not accused of fraud until about three years after the alleged act. This would be about five years after the model predicted (two years in advance) that fraud was likely. Costly litigation would follow quickly on the allegation of fraud, but it would not be settled for probably another five to eight years, or about ten or so years after the initial prediction of risk. By then, the engagement partner who brought in the business in his early to mid-fifties was retired and enjoying the benefits of his pension. By not knowing the predicted risk ten or fifteen years earlier, the partner ensured that he did not knowingly audit unsavory firms. Therefore, when litigation got under way, the partner was not likely to be held personally accountable by plaintiffs or the courts. Andersen (or whichever accounting firm did the audit) would be held accountable (or at least be alleged to be accountable), as they had deep pockets and were natural targets for litigation, but then the money for the defense was coming out of the pockets of future partners, not the partner involved in the audit of fraudulent books a decade or so earlier. The financial incentive to know was weak indeed.
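  Laying the timeline out explicitly makes the incentive problem plain. The offsets below are the approximate figures from the paragraph above; treating the year of the risk prediction as year zero is just a convenience for illustration.

```python
# Timeline of the incentive problem, using the approximate offsets
# described in the text. Year 0 is when the model flags the risk.
prediction = 0                       # model predicts fraud two years out
fraud_committed = prediction + 2     # the books are actually cooked
fraud_alleged = fraud_committed + 3  # accusation ~3 years after the act
settled_early = fraud_alleged + 5    # litigation settled 5 to 8 years later
settled_late = fraud_alleged + 8

print(f"Fraud committed:    year {fraud_committed}")
print(f"Fraud alleged:      year {fraud_alleged}")
print(f"Litigation settled: years {settled_early} to {settled_late}")
# A partner in his mid-fifties at year 0 is retired long before year 10.
```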

  When I suggested to a senior Andersen management partner that this perverse incentive system was at work, he thought I was crazy—and told me so. He thought that clients later accused of fraud must have been audited by inexperienced junior partners, not senior partners near retirement. I asked him to look up the data. One thing accounting firms are good at is keeping track of data. That is their business. Sure enough, to his genuine shock, he found that big litigations were often tied to audits overseen by senior partners. I bet that was true at every big accounting firm, and I bet it is still true today. So now we can see, as he saw, why a partner might not want to know that he was about to audit a firm that was likely to cook its books.

  Why didn’t the senior management partners already know these facts? The data were there to be examined. If they had thought about incentives more carefully, maybe they would have saved the partnership from costly lawsuits such as those associated with Enron, Sunbeam, and many other big alleged frauds. Of course, they were not in the game-theory business, and so they didn’t think as hard as they could have about the wrong-headed incentives designed into their partnership (and most other partnerships, for that matter).

  On the plus side, management’s incentives were better than those of the engagement partners. Senior managers seemed more concerned about the long-term performance of the firm. Maybe that was the result of what we call a selection effect, as people concerned about the firm’s well-being may have been more likely candidates to become senior management. Still, they also had an incentive to help their colleagues bring in business, and that meant that they were interested in making it easy for their colleagues to sign up as many audit engagements as possible. They may have preferred to avoid problems with bad clients, but the senior managers could live with not knowing about future trouble if that helped to keep their colleagues happy and business pouring in. Thus, senior management’s incentives were not quite right either. Effective monitoring had benefits for them, but it was costly in revenue and especially in personal relations. Many senior management partners tolerated slack monitoring as the solution to this problem, and likely did a quick risk calculation that litigation—not collapse—was the worst that a fraudulent client could visit upon the firm. Let’s face it, many of us would do the same thing.

  We also should remember that but for what seems to have been an overly zealous prosecution by the Department of Justice, the likely risk calculation by senior management partners would have been right. Remember, while Andersen gave up its license to engage in accountancy in 2002, following its conviction on criminal charges, the conviction was overturned by the Supreme Court. Sadly for the approximately 85,000 people who lost their jobs, the Supreme Court decision came too late to save the business.

 
