Complexity and the Economy


by W. Brian Arthur


  future prevention. These modalities seek not just to study past failures but to construct an organized body of knowledge that might help prevent failure or

  breakdown in the future.

  It would be a large undertaking to construct a policy-system failure mode

  sub-discipline of economics, worthwhile of course, but beyond the scope of

  this paper. What we can do is think about how such a discipline would work.

  One good place to start is to look at how systems have been exploited or gamed in the past and point to general categories or motifs by which this happens.

  I will talk about four motifs and label these by their causes:

  1. Use of asymmetric information. In many social systems, different parties have access to different information, and often one party offers a service or puts forward an opportunity based upon its understanding of the available information. Another party then responds with behavior based on its

  more detailed understanding and uses the system to profit from its privi-

  leged information. The financial and the marketing industries are particu-

  larly prone to such behavior; in each of these some parties are well informed

  about the product they are promoting, while others—the potential inves-

  tors or customers—are not. In 2007 Goldman Sachs created a package of

  mortgage-linked bonds it sold to its clients. But it allowed a prominent hedge fund manager, John A. Paulson, to select bonds for this that privately he

  thought would lose value, then to bet against the package. Paulson profited,

  and so allegedly did Goldman by buying insurance against loss of value of the

  instrument; but investors lost more than $1 billion (Appleton, 2010). The

  package (a synthetic collateralized debt obligation tied to the performance

  of subprime residential mortgage-backed securities) was complicated, and its

  designers, Goldman and Paulson, were well informed on its prospects. Their

  clients were not.

  The health care industry is also prone to information asymmetries; both

  physicians and patients are better informed on ailments and their appropriate

  treatments than are the insurance companies or governmental bodies pay-

  ing for them (Arrow, 1963). In 2006 the state of Massachusetts mandated

  individual health care insurance, and the program appeared to work initially,

  but after some few months insurers discovered they were losing money. The

reason was, as Suderman (2010) reports, “[t]housands of consumers are gam-

  ing Massachusetts’ 2006 health insurance law by buying insurance when they

  need to cover pricey medical care, such as fertility treatments and knee sur-

  gery, and then swiftly dropping coverage.” This behavior is not illegal, nor is it quite immoral, but it is certainly exploitive.


  2. Tailoring behavior to conform to performance criteria. A second type of exploitation—better to call it manipulation here—occurs when agent

  behavior is judged, monitored, or measured by strict criteria of evaluation and agents optimize their behavior to conform to these narrow criteria, rather than to what was more widely intended. Agents, in other words, game the criteria.

  Before the 2008 financial crisis, financial ratings agencies such as Moody’s

  or Standard & Poor’s for years performed evaluations of the risk inherent in financial instruments proposed by investment and banking houses. A few

  years before the financial crash, in an act of transparency and implicit trust, they made their ratings models available to the Wall Street investment firms.

  Says Morgenson (2010): “The Wall Street firms learned how to massage these

  models, change one or two little inputs and then get a better rating as a result.

  They learned how to game the rating agency’s models so that they could put

  lesser quality bonds in these portfolios, still get a high rating, and then sell the junk that they might not otherwise have been able to sell.”5

  Gaming performance criteria is not confined to Wall Street. It occurs within

  all systems where judgment of performance is important: conformance to the

  law; educational testing;6 adherence to standards of human rights; adherence

  to environmental standards; adherence to criteria for receiving funding; the

  production of output within factories; financial accounting; tax reporting; the performance of bureaucrats; the performance of governments. In all these

  cases, the parties under surveillance adjust their behavior to appear virtuous under the stated performance measures, while their actual behavior may be

  anywhere from satisfactory to reprehensible. In fact, in the case of govern-

  ment performance, two particular expressions of this form of exploitation

already exist. One is Campbell’s law (1976): “the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” The other is Goodhart’s law (1975): “Any

  observed statistical regularity will tend to collapse once pressure is placed

  upon it for control purposes.”7 Both of these apply to governmental behav-

  ior. I prefer a broader truism: Any performance criterion will be optimized

  against, and will thereby lose its value.
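As a toy illustration of this truism (all quantities hypothetical), imagine agents who split a fixed effort budget between genuine quality and score-padding. Once the measured score drives rewards, effort migrates to whatever raises the score fastest, and the score parts company with the value it was meant to track:

```python
import random

# Toy illustration of "any performance criterion will be optimized against".
# Agents split a fixed effort budget between real quality and score-padding;
# padding raises the measured score but adds no real value. All numbers
# here are hypothetical.

BUDGET = 10.0
N_AGENTS = 200
rng = random.Random(0)

def measured_score(quality_effort, padding_effort):
    return quality_effort + 2.0 * padding_effort   # padding is the cheaper route

def true_value(quality_effort, padding_effort):
    return quality_effort                          # only real quality counts

def averages(quality_efforts):
    n = len(quality_efforts)
    score = sum(measured_score(q, BUDGET - q) for q in quality_efforts) / n
    value = sum(true_value(q, BUDGET - q) for q in quality_efforts) / n
    return score, value

# Before the criterion is used for rewards: effort is split haphazardly.
before = [rng.uniform(0, BUDGET) for _ in range(N_AGENTS)]

# Once the criterion drives rewards, each agent shifts all effort to whatever
# raises the score fastest: here, pure padding.
after = [0.0] * N_AGENTS

for label, efforts in [("before gaming", before), ("after gaming", after)]:
    score, value = averages(efforts)
    print(f"{label:13s}  avg score = {score:5.1f}   avg true value = {value:4.1f}")
# The measured score rises while true value collapses: the criterion has
# stopped meaning what it was meant to measure.
```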

  3. Taking partial control of a system. A third type of exploitation occurs when a small group of agents manages to take control of some significant portion of the resources of a system and use this for its own purposes. This is the economic equivalent of the way viruses operate. The group in effect takes over part of the machinery of the system and uses that to its own advantage.

  5. See White (2009).

  6. See Nichols and Berliner (2007).

  7. Chrystal and Mizen (2001).


  The financial sector has seen a great deal of this type of exploitation. Within the insurance giant AIG, some years before the 2008 crash, a small group of

  people (the Financial Products Unit) managed to take effective control of

  much of the company’s assets and risk bearing, and began to invest heavily

  in credit default swaps. The group profited greatly through their own personal compensation—they were paid a third of the profits they generated—but

  the investments collapsed, and that in turn sank AIG (Zuill, 2009). A simi-

  lar set of events unfolded in Iceland, where a small group of entrepreneurs

  took out loans, used these to buy control of the assets of the country’s banks, and invested these in international properties and derivatives (Boyes, 2009;

  Jonsson, 2009). The international investments collapsed, and so did Iceland’s

  banks, along with their customers’ deposits.

  4. Using system elements in a way not intended by policy designers.

  Still another type of exploitation happens when agents use the behavior of the system itself to manipulate the system. An example would be using a website’s

  rating possibilities to manipulate others’ ratings. Often too, players find a rule they can use as a loophole to justify behavior the designers of the system did not intend. Usually this forces a flow of money or energy through the rule,

  to the detriment of the system at large. Following the Arab Oil Embargo in

  the early 1970s, the US Congress set up fuel economy standards for motor

  vehicles. Understandably, the requirements for commercial light trucks were

  more lenient than those for passenger vehicles. But in due course and with

  a little congressional manipulation, Detroit found it could declare its sports utility vehicles to be
light trucks. These then passed through the light-truck loophole, the highways in due course filled with SUVs, and between 1988 and

  2005 average fuel economy actually fell in the United States (Pew, 2010). This was not what the energy policy’s designers intended.

  The four motifs I have described are by no means exhaustive; there are no

  doubt other ways in which systems might be gamed. But these give us a feel

  for the types of exploitation we might expect to see, and they show us that

  exploitive behavior is not rare in systems. It is rife.

  ANTICIPATING FAILURE MODES

  For some policy systems, it is obvious that their possible exploitation falls into one of the four motifs just given. For others, no particular mode of exploitive behavior might be obvious. In general we have a given policy system and a

  mental model or analytical studies of how it is expected to work, and we would like to anticipate where the system might in real life be exploited. So how do we proceed in general? How would we go about failure mode analysis in a particular economic situation? There is no prescribed answer to these questions,

  but we can usefully borrow some directives from engineering failure analysis.


  An obvious first step is to have at hand knowledge of how similar systems have failed in the past. We have at least the beginnings of such knowledge

  with the motifs I described earlier. Aircraft designers know from forensic

  studies the causes by which failures (they call these “anomalies”) typically

  occur: fatigue failure, explosive decompression, fire and explosions, burst

  engines (Bibel, 2008). By analogy, as I said, we need a failure mode analysis of how policy systems have been exploited in the past.

  Second, we can observe that in general the breakdown of a structure starts

  at a more micro level than that of its overall design. Breakdown in engineer-

  ing designs happens not because the overall structure gives way, but because

  stresses cause hairline cracks in some part of an assembly, or some component

  assembly fails, and these malfunctions propagate to higher levels, possibly to cause eventual whole-system degradation. This suggests in our case that for

  any system we are studying, exploitive behavior will typically take place at a smaller scale than the overall system. Exploitive behavior after all is created—is

  “invented”—by individual people, individual human agents, or small groups of

  these, and we will have to have detailed knowledge of the options and possibilities agents possess if we want to understand how manipulation may happen.

  Third, and again by analogy, we can look for places of high “stress” in the

  proposed system and concentrate our attentions there. In social systems these

  places tend to be the points that present strong incentives for agents to do

  something different from their prescribed behavior. Typically, in an analytical model, points of behavioral action are represented as rates (the rate, say, at which individuals buy health insurance), or as simple rules (if income exceeds $X, and age exceeds Y, buy health insurance). The modeler needs to query

  whether simple rates or rules are warranted, given the pattern of incentives

agents face. Very often they are not.
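To make this concrete, here is a stylized sketch (thresholds and dollar amounts invented for illustration) contrasting the simple rule an analytical model might assume with the incentive-driven rule an agent may actually follow:

```python
from dataclasses import dataclass

# Stylized contrast between an assumed behavioral rule and an incentive-driven
# one, using the health-insurance example. All numbers are hypothetical.

PREMIUM = 4_000.0   # annual premium
PENALTY = 900.0     # annual penalty for remaining uninsured

@dataclass
class Agent:
    income: float
    age: int
    expected_medical_cost: float   # what the agent privately expects to spend

def assumed_rule(a: Agent) -> bool:
    """The simple rule an analytical model might posit:
    buy insurance if income exceeds $X and age exceeds Y."""
    return a.income > 30_000 and a.age > 40

def strategic_rule(a: Agent) -> bool:
    """The rule incentives actually suggest: stay uninsured (paying the
    penalty) unless expected medical costs outweigh the premium saved."""
    return a.expected_medical_cost > PREMIUM - PENALTY

# A healthy, well-off 50-year-old: the model assumes she enrolls, but her
# incentives say pay the penalty and enroll only when costly treatment looms.
a = Agent(income=80_000, age=50, expected_medical_cost=500.0)
print(assumed_rule(a), strategic_rule(a))   # -> True False
```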

All this would suggest that if we have a design for a social system and an

  analytical model of it, we can “stress test” it by first identifying where actual incentives would yield strong inducements for agents to engage in behavior

  different from the assumed behavior. These might, to give some examples, be

  places where agents have power to affect other players’ well-being (they can

  issue building permits, say, to wealthy property developers), yet we assume

  they make impartial decisions; or places where agents can profit by compro-

  mising on performance or safety of some activity (say, they decide on aircraft maintenance), yet we assume they conform to given standards; or places

  where agents have inside information (say, they have knowledge of a com-

  pany’s future plans), yet we assume they do not trade on this information.

  Next we construct the agents’ possibilities from our sense of the detailed

  incentives and information the agents have at this location. That is, we

  construct detailed strategic options for the agents. The key word here is

  “detailed”: the options or opportunities possible here are driven by the imagination and experience of the analyst looking at the system, they are drawn


  from the real world, and they require careful, detailed description. This is why we will need to have knowledge of the fine-grained information and opportunities the agents will draw from to create their actions.

  Once we have identified where and how exploitation might take place, we

  can break open the overall economic model of the policy system at this loca-

  tion, and insert a module that “injects” the behavior we have in mind. We now

  have a particular type of exploitation in mind, and a working model of it that we can use to study what difference the strategic agents make in the behavior

  of the overall system. Sometimes they will make little difference; the strategic behavior may not affect much outside its sphere. Sometimes they will have a

  major effect; they may even in certain cases cause the collapse of the struc-

  ture they were inserted into. What is important here is that we are looking

  for weak points in a policy system and the consequences that might follow

from particular behaviors the system might be prone to. It is important that

  this testing not be rushed. In engineering it often takes months or years to

  painstakingly test, debug, and rework a novel design of importance, especially where public safety is at stake. There is no reason we should place less emphasis on the safety of economic and social policy outcomes.
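As a minimal sketch of what breaking open a model and injecting such a module might look like in code (the pool, the premium, and the cost distribution below are invented for illustration), an agent-based simulation can accept a pluggable behavior at the location under study and report the system-level consequence:

```python
import random
from typing import Callable

# Schematic "policy system": a subsidized insurance pool. The behavior module
# passed in is the piece we break the model open to inject; it decides, given
# an agent's expected medical cost this period, whether the agent enrolls.
# Premium and cost distribution are hypothetical.

PREMIUM = 400.0   # hypothetical monthly premium

def run_pool(behavior: Callable[[float], bool], periods: int = 12,
             n_agents: int = 1000, seed: int = 1) -> float:
    rng = random.Random(seed)
    insurer_balance = 0.0
    for _ in range(periods):
        for _ in range(n_agents):
            cost = rng.expovariate(1 / 300.0)      # cost draw, mean 300
            if behavior(cost):
                insurer_balance += PREMIUM - cost  # premium in, claims out
    return insurer_balance

# Assumed behavior: agents simply stay enrolled.
def always_enrolled(cost: float) -> bool:
    return True

# Injected exploitive behavior: enroll only when a large bill is imminent.
def enroll_when_sick(cost: float) -> bool:
    return cost > PREMIUM

print("insurer balance, assumed behavior: ", round(run_pool(always_enrolled)))
print("insurer balance, injected behavior:", round(run_pool(enroll_when_sick)))
# Under the assumed behavior the pool runs a surplus; with the injected module
# it collects premiums only from agents about to claim more than the premium,
# and the balance turns sharply negative.
```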

  This method I have just outlined presumes one system designer, or a team,

  working to discover flaws in a given set of policies or given simulated economic system. Things can be speeded up if multiple designers work in parallel and

  are invited to probe a model to find its weak points. Where we have a work-

  ing model of a proposed policy system—think of a new health care policy, or

  an altered set of financial regulations—we can solicit “strategy” modules that exploit it. Here the overall simulation model or overall policy situation would be given, and we would be inviting outside participants to submit strategies

  to exploit it. This was first carried out in the famous prisoner’s dilemma tournament several decades ago, where Robert Axelrod (1984) solicited strategies

  that would compete in a repeated prisoner’s dilemma game. To do this in the

  more general systems context, participants would need to study the system

thoroughly, identify its myriad incentives, home in on the places where it leaves open opportunities for exploitation, and model these.
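Here is a sketch of the mechanics of such a solicitation, using Axelrod's repeated prisoner's dilemma as the published "system" (for a policy model, the simulation itself would stand in its place). Submitted strategies are simply functions the organizer runs against one another in a round-robin:

```python
from itertools import combinations

# Sketch of a strategy-solicitation harness in the spirit of Axelrod (1984).
# Each "submission" is a function taking (my_history, their_history) and
# returning 'C' (cooperate) or 'D' (defect).

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(mine, theirs):
    return 'C' if not theirs else theirs[-1]

def always_defect(mine, theirs):
    return 'D'

def always_cooperate(mine, theirs):
    return 'C'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

submissions = {'tit_for_tat': tit_for_tat,
               'always_defect': always_defect,
               'always_cooperate': always_cooperate}

totals = {name: 0 for name in submissions}
for (name_a, strat_a), (name_b, strat_b) in combinations(submissions.items(), 2):
    score_a, score_b = play(strat_a, strat_b)
    totals[name_a] += score_a
    totals[name_b] += score_b

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{name:16s} {total}")
# In this tiny field always_defect comes out ahead; Axelrod's celebrated
# finding that tit_for_tat wins emerged from a much larger and more varied
# pool of submitted strategies.
```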

  Something similar to this is carried out routinely in the beta testing of

  encryption systems. When, say, the US Navy develops a novel encryption

  scheme, it invites a group of selected people to see if they can crack the

  scheme. If they cannot, the scheme can proceed. It is important that testers

  come from the outside. Says Schneier (1999): “Consider the Internet IP secu-

  rity protocol. It was designed in the open by committee and was the subject

  of considerable public scrutiny from the start. . . . Cryptographers at the Naval Research Laboratory recently discovered a minor implementation flaw. The

work continues, in public, by anyone and everyone who is interested. On the

  other hand, Microsoft developed its own Point-to-Point Tunneling Protocol

  (PPTP) to do much the same thing. They invented their own authentication


  protocol, their own hash functions, and their own key-generation algorithm.

  Every one of these items was badly flawed. . . . But since they did all this work internally, no one knew that their PPTP was weak.”

  MODELING EXPLOITATION WITHIN COMPUTER MODELS

  In the previous section I talked in general about probing policy systems

  for possible failure. Now I want to narrow this and talk more about prob-

  ing computer-based models of policy systems—usually simulation mod-

  els—for possible failure. One difficulty we immediately face is that most

  computer-based models are closed to novel behavior: they use equations or

  Markov states or other architectures that assume fixed categories of behavior

  laid down in advance or embedded within them, so they can’t easily be modi-

  fied to conjure up the unforeseen—the 51-foot ladders that might appear.

  But we can proceed. Certainly, as I said before, we can “inject” foreseen exploitive behavior into the computer model; that’s a matter of breaking open

  the model and adding more detail. More generally, though, we would like to

  be able to have our simulation model allow for the spontaneous arising or

  “discovery” of unforeseen novel behaviors, and this seems more challenging.

  Notice we are really asking how new behaviors might emerge from agents’ dis-

  covering or learning within a system, and emergence is something that model-

  ing, especially agent-based modeling, has experience with. So we might expect

  that we can indeed modify a simulation model to allow agents to “discover”

  manipulative behavior.
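Here is a sketch, with invented payoffs, of one way to allow this: give each agent a small repertoire of candidate rules and let it epsilon-greedily reinforce whichever rule has paid it best, so that a gaming rule, once stumbled on through exploration, spreads through the population of its own accord. (This is "discovery" only in the narrow sense of learning over a pre-specified repertoire; open-ended invention of genuinely new behaviors would need a richer rule-generating mechanism.)

```python
import random

# Sketch: letting exploitive behavior be "discovered" inside a simulation
# rather than hand-coded. Each agent keeps a value estimate for a small
# repertoire of candidate rules and reinforces whichever has paid it best.
# All payoffs and parameters are hypothetical.

rng = random.Random(7)
RULES = ['intended', 'exploit']        # 'exploit' is the gaming rule
N_AGENTS, PERIODS, EPSILON = 500, 300, 0.05

def payoff(rule):
    # The intended behavior pays a steady 1.0; the exploit pays more on
    # average but noisily, so it must actually be tried to be recognized.
    return 1.0 if rule == 'intended' else rng.gauss(1.5, 0.5)

# Agents start out believing the intended behavior is best.
value = [{'intended': 1.0, 'exploit': 0.0} for _ in range(N_AGENTS)]
count = [{'intended': 0, 'exploit': 0} for _ in range(N_AGENTS)]

for t in range(PERIODS):
    exploiters = 0
    for i in range(N_AGENTS):
        if rng.random() < EPSILON:
            rule = rng.choice(RULES)                      # occasional experiment
        else:
            rule = max(RULES, key=lambda r: value[i][r])  # best rule so far
        p = payoff(rule)
        count[i][rule] += 1
        value[i][rule] += (p - value[i][rule]) / count[i][rule]  # running mean
        exploiters += (rule == 'exploit')
    if t % 100 == 0 or t == PERIODS - 1:
        print(f"period {t:3d}: {exploiters} of {N_AGENTS} agents use the exploit")
# The exploit starts rare, is found through exploration, and takes over.
```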

  Let me illustrate with a real-world example. Consider the health insurance

  case I mentioned from Massachusetts. We don’t have a simulation model of

  this policy system at hand, so for our purposes we will construct one. And

  because we are interested not in social details but in issues of how we can

  simulate exploitation, we can keep this simple and stylized. We will proceed

 
