Complexity and the Economy

by W. Brian Arthur

  in steps by constructing versions that progressively capture the behavior that interests us.

  First we construct a basic model of health insurance. (I used NetLogo for

  this simulation, a convenient platform for agent-based modeling.) The model

  has N (typically from 100 to 1,000) people who individually and randomly incur health care costs, perhaps from diseases, hospital care, surgical procedures, or accidents, and initially they cover these costs themselves. In this

  model, the distribution of health costs is uniform, stationary, and identical

for all (we can assume people are all of the same age). People receive a fixed income, common to all, and their consumption c equals this less their health costs. I assume a concave utility function over consumption, U(c) = c^(1/2): people are risk averse. There is one insurance company. At first it offers no policies, but instead for a fixed period collects actuarial data: It has access to the


population’s health costs and uses these to figure the average health cost per person per period. Once it has a sufficiently accurate estimate it issues a voluntary health insurance policy. The policy’s cost is set to be “fair” (equal to its estimate of the expected cost per person per period) plus a markup of m% to cover administrative costs. When we run the model we find that when the insurance markup is sufficiently low (m < 23.3%), people’s utility is higher with the policy and they buy it. Otherwise they do not. We now

  have a simple working agent-based model of insurance.
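To make this concrete, here is a minimal sketch of the base model in Python rather than NetLogo. The parameter values and names (number of people, income, cost range, markup) are illustrative assumptions, not those of the original simulation; with the chapter's own parameters the break-even markup works out to about 23.3%, whereas these numbers give a lower threshold.

```python
import math
import random

# Illustrative parameters -- placeholders, not the values of the original NetLogo model.
N_PEOPLE = 500
INCOME = 100.0           # fixed income, common to all
MAX_COST = 90.0          # health costs drawn uniformly from [0, MAX_COST]
ACTUARIAL_PERIODS = 50   # periods the insurer observes before offering a policy
MARKUP = 0.05            # administrative markup m (the break-even markup depends on the parameters)

def draw_cost():
    """One person's health cost in one period: uniform, stationary, identical for all."""
    return random.uniform(0.0, MAX_COST)

def utility(consumption):
    """Concave utility U(c) = c^(1/2): agents are risk averse."""
    return math.sqrt(max(consumption, 0.0))

# Phase 1: the insurer collects actuarial data and estimates the expected cost per person.
observed = [draw_cost() for _ in range(ACTUARIAL_PERIODS * N_PEOPLE)]
expected_cost = sum(observed) / len(observed)

# Phase 2: it offers a voluntary policy priced "fair" plus the markup.
premium = expected_cost * (1.0 + MARKUP)

# Each (identical) agent compares expected utility with and without the policy.
u_insured = utility(INCOME - premium)                                   # consumption is certain
u_uninsured = sum(utility(INCOME - c) for c in observed) / len(observed)

print(f"estimated cost per person: {expected_cost:.2f}, premium: {premium:.2f}")
print("agents buy the policy" if u_insured > u_uninsured else "agents stay uninsured")
```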

As a second step, let us build in the Massachusetts edict and its consequences. Central to what happened was a class of people who believed or

  found out they could do without coverage; instead they could pay the fine for

  non-participation in the scheme. There are several ways we could modify our

model to allow for such a class. We could assume, for example, people who have only a small risk of incurring health costs. But the simplest way is to assume

that a proportion of the population (let us say 50%) is not risk-averse. It has a linear utility function, U(c) = c, and thus finds it profitable to pay the government fine, assuming (as is true in Massachusetts) this is less than the insurance markup. When we run this model we find, not surprisingly, that one-half

  of the population insures, the other half does not.
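Continuing the same sketch, this second step can be represented by splitting the population in two and adding a non-participation fine; the fine value below is a placeholder chosen, as in the text, to be smaller than the markup in dollar terms.

```python
# Second step (continuing the sketch above): a mandate with a non-participation fine,
# and a population split into a risk-averse half and a risk-neutral half.
FINE = 1.0   # placeholder: assumed smaller than the dollar markup (premium - expected_cost)

def buys_policy(risk_averse):
    """Decision rule for one agent under the mandate."""
    if risk_averse:
        # Concave utility, as before; staying out now also means paying the fine.
        u_in = utility(INCOME - premium)
        u_out = sum(utility(INCOME - c - FINE) for c in observed) / len(observed)
        return u_in > u_out
    else:
        # Linear utility U(c) = c: only expected money matters, so insure only if
        # the markup paid to the insurer is less than the fine avoided.
        return premium - expected_cost < FINE

population = [buys_policy(risk_averse=(i < N_PEOPLE // 2)) for i in range(N_PEOPLE)]
print(f"{sum(population)} of {N_PEOPLE} agents insure")   # one half insures, the other does not
```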

  As a third step we build in the exploitive behavior. We now allow that

  all people can see costs in advance for some types of health care (shoulder

  operations, say, or physical therapy). So we build into the model—“inject” the behavior—that these can be foreseen at the start of the period, giving people

  the option of taking out coverage for that period and possibly canceling it the next. The 50% already insured will not be affected; they are paying insurance

  regardless. But the uninsured will be affected, and we find when we run this

  model that they opt in and out of coverage according to whether this suits

  their pockets. In the sense that they are taking out insurance on an outcome

they know in advance, but the insurance company does not, they are “gaming” the system. Figure 1 shows the consequences for the insurance company’s profits when this behavior switches in. They plummet.
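One way to inject this behavior into the running sketch is to split each period's cost into a foreseeable part and an unforeseeable part (an assumption made here purely for illustration) and let the risk-neutral agents decide period by period:

```python
# Third step (continuing the sketch): part of each period's cost -- a planned shoulder
# operation, say -- is visible before the period starts; the rest is not.
HALF = MAX_COST / 2.0

def run_period():
    """Run one period; return the insurer's profit (premiums collected minus claims paid)."""
    premiums_in, claims_out = 0.0, 0.0
    for i in range(N_PEOPLE):
        risk_averse = i < N_PEOPLE // 2
        foreseen = random.uniform(0.0, HALF)      # known before the period starts
        unforeseen = random.uniform(0.0, HALF)    # revealed only during the period
        if risk_averse:
            insured = True    # the already-insured half holds the policy regardless
        else:
            # Gaming: buy for this period only if the expected cost of staying out
            # (fine + foreseen cost + average unforeseen cost) exceeds the premium,
            # and drop the policy again next period if it does not.
            insured = FINE + foreseen + HALF / 2.0 > premium
        if insured:
            premiums_in += premium
            claims_out += foreseen + unforeseen
    return premiums_in - claims_out

profits = [run_period() for _ in range(200)]
print(f"average per-period profit once the gaming is injected: {sum(profits)/len(profits):.1f}")
```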

  As a last stage in our modeling, realistically, we can assume that the system

  responds. The state may raise its non-participation fine. Once it does this sufficiently we find that everyone participates and normality resumes. Or the insurance company can react by increasing the mandatory policy-holding period.

  Once it does this to a sufficient point, we find again that normality resumes.
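In the sketch, the state's response can be represented simply by raising the fine until opting out never pays; the insurer's alternative response, a longer mandatory policy-holding period, is noted only in a comment here.

```python
# Final step (continuing the sketch): the system responds. Raising the fine so that even a
# period with (almost) no foreseeable cost makes insuring worthwhile restores normality.
# (Alternatively, the insurer could impose a minimum policy-holding period long enough
# that foreseen costs average out -- not modelled here.)
FINE = premium - HALF / 2.0   # now any strictly positive foreseen cost tips agents toward insuring

profits = [run_period() for _ in range(200)]
print(f"average per-period profit after the fine is raised: {sum(profits)/len(profits):.1f}")
```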

  I have constructed this model in stages because it is convenient to break

  out the base model, demarcate the agents that will strategize, allow them to

  do so, and build in natural responses of the other agents. When finished, the

  model runs through all these dynamics in sequence, of course.

  So far this demonstrates that we can take a given simulation model of a

  policy system (we constructed this one) and modify it by injecting foreseen

“exploitive” behavior and response into it.

[Figure 1: plot of the insurance company’s money against time.]

Figure 1: Agents are allowed to see some upcoming health expenses starting around time 300. The upper plots show the effect on the insurance company’s income from policy payments (smoother line), which rises because it acquires additional policy-holders, and its expenses (upper jagged line). The lower plot shows the company’s profits (lower jagged line), which now fall below the flat zero line.

But, as I mentioned earlier, in real life, exploitation emerges: it arises—seemingly appears—in the course of a policy system’s existence. In fact, if we look at what happens in real life more closely, we see that players notice that certain options are available to them, and they learn from this—or sometimes discover quite abruptly—that certain actions can be profitably taken. So let us see how we can build “noticing” and

“discovery” into our example. To keep things short I will only briefly indicate how to do this.[8]

[8] See Arthur et al. (1997) for a study that implements the procedure I describe.

  First, “noticing” is fairly straightforward. We can allow our agents to

“notice” certain things—what happened, say, in the recent past, what options

  are possible—simply by making these part of the information set they are

  aware of as they become available (cf. Lindgren, 1992).

  We still need to include “discovery.” To do this we allow agents to generate

  and try out a variety of potential actions or strategies based on their information. There are many ways to do this (Holland, 1975; Holland et al., 1986).

Agents can randomly generate contingent actions or rules of the type: If the system fulfills a certain condition K, then execute strategy G. Or they can construct novel actions randomly from time to time by forming combinations of ones that have worked before: If conditions K and P are true, then execute strategy F. Or they can generate families of possible actions: Buy in if this period’s pre-known health costs exceed k dollars (where k can be pegged at different levels). We further allow agents to keep these potential strategies

  in mind (there may be many) and monitor each one’s putative performance,

  thus learning over time which ones are effective in what circumstances. They

can then use or execute the strategy they deem most effective at any time, and drop strategies that prove ineffective.
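A compact way to realize this in the running sketch: give each uninsured agent a family of candidate threshold rules of the form "buy in if this period's foreseen cost exceeds k dollars," score every rule in hindsight each period, and let the agent act on whichever rule currently scores best. This reuses premium and HALF from the earlier snippets, resets the fine to the low value of the gaming step, and the scoring scheme is one simple possibility among many.

```python
# Discovery (continuing the sketch): candidate rules "buy in if foreseen cost exceeds k",
# each monitored for the payoff it would have produced, with the best one acted upon.
FINE = 1.0   # back to the low non-participation fine of the gaming step

class ThresholdRule:
    def __init__(self, k):
        self.k = k           # threshold in dollars (pegged at a different level per rule)
        self.score = 0.0     # running putative payoff, updated every period

    def says_buy(self, foreseen):
        return foreseen > self.k

def hindsight_payoff(bought, foreseen, unforeseen):
    """Money outcome a rule would have produced, judged after the costs are known."""
    return -premium if bought else -(FINE + foreseen + unforeseen)

# A family of candidate thresholds (a simple grid here; they could equally be drawn at random).
rules = [ThresholdRule(k) for k in range(0, int(HALF) + 1, 5)]

buy_count = 0
for period in range(2000):
    foreseen = random.uniform(0.0, HALF)
    unforeseen = random.uniform(0.0, HALF)
    best = max(rules, key=lambda r: r.score)       # act on the rule currently deemed best
    buy_count += best.says_buy(foreseen)
    for r in rules:                                # monitor every rule's putative performance
        r.score += hindsight_payoff(r.says_buy(foreseen), foreseen, unforeseen)

best = max(rules, key=lambda r: r.score)
print(f"'discovered' rule: buy in when the foreseen cost exceeds about {best.k} dollars")
print(f"the agent opted in for {buy_count} of 2000 periods")
```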

This sort of design will bring in the effect we seek. (For detailed illustrations of it in action, see Arthur, 1994, and Arthur et al., 1997.) If some

  randomly generated strategy is monitored and proves particularly effective,

  certain agents will quickly “discover” it. To an outsider it will look as if the strategy has suddenly been switched on—it will suddenly emerge and have an

  effect. In reality, the agents are merely inductively probing the system to find out what works, thereby at random times “discovering” effective strategies

  that take advantage of the system. Exploitation thus “appears.”

  I have described a rather simple model in our example and sketched a way

  to build the emergence of possible exploitations into it. Obviously we could

elaborate this in several directions.[9]

[9] For example, realistically we could allow information on what works to be shared among agents and spread through their population, and we could assume that if a strategy works, people would focus attention on it and construct variants on it—they would “explore” around it.

  But I want to emphasize my main point. Nothing special needs to be added

by way of “scheming” or “exploitive thinking” to agent-based simulation models when we want to model system manipulation. Agents are faced with particular information about the system and the options available to them when

  it becomes available, and from these they generate putative actions. From

  time to time they discover particularly effective ones and “exploitation”—if

we want to call it that—emerges. Modeling this calls only for standard procedures already available in agent-based modeling.

  AUTOMATIC PRE-DISCOVERY OF EXPLOITIVE BEHAVIOR

But this is still not the last word. In the previous section, when we wanted computation to “discover” exploitive behaviors, we needed to specify a given class of behaviors within which it could explore. Ideally, in the future, we would want computation to automatically “discover” a wide

  range of gaming possibilities that we hadn’t thought of, and to test these out, and thereby anticipate possible manipulation.

  What are the prospects for this? What would it take for a simulation model

  of US-Mexico border crossings to foresee—to be able to “imagine”—the use of

  51-foot ladders? Of course, it would be trivially easy to prompt the computer



to see such solutions. We could easily feed the simulation the option of ladders of varying lengths (40-foot, 64-foot, 25-foot, say) and allow it to learn that 51-foot ladders would do the job just right. But that would be cheating.

What we really want, in this case, is to have the computer proceed completely without human prompting, to “ponder” the problem of border crossing in the face of a wall, and “discover” the category of ladders or invent some other plausible way to defeat obstacles, without these being built in. To do this the computer would need to have knowledge of the world, and this would have

  to be a deep knowledge. It would have to be a general intelligence that would

know the world’s possibilities, know what is available, what is “out there” in general outside itself. It would need, in other words, something like our human

  general intelligence. There is more than a whiff of artificial intelligence here.

We are really asking for an “invention machine”: a machine that is aware of its world and can conceptually put together available components to solve general problems. Seen this way, the problem joins the category of computational

  problems that humans find doable but machines find difficult, the so-called

  “AI-complete” problems, such as reading and understanding text, interpreting

  speech, translating languages, recognizing visual objects, playing chess, judging legal cases. To this we can add: imagining solutions.

  It is good to recognize that the problem here is not so much a conceptual

  one as a practical one. We can teach computers to recognize contexts, and to

  build up a huge store of general worldly knowledge. In fact, as is well known, in 2010 IBM taught a computer to successfully answer questions in the quiz

  show Jeopardy, precisely through building a huge store of general worldly knowledge. So it is not a far cry from this to foresee computers that have

  semantic knowledge of a gigantic library of past situations and how they have

been exploited, so that they can “recognize” analogies and use them

  for the purpose at hand. In the case of the 2003 US invasion of Iraq, such computation or simulation would have run through previous invasions in history,

  and would have encountered previous insurgencies that followed from them,

  and would have warned of such a possibility in the future of Iraq, and built

  the possibility into the simulation. It would have anticipated the “emergent”

behavior. Future simulations may well be able to dip into history, find analogies there—find the overall category of ladders as responses to walls—and

  display them. But even if it is conceptually feasible, I believe full use of this type of worldly machine intelligence still lies decades in the future.

  CONCLUSION

  Over the last hundred years or more, economics has improved greatly in its

  ability to stabilize macro-economic outcomes, design international trade

  policies, regulate currency systems, implement central banking, and execute


  antitrust policy. What it hasn’t been able to do is prevent financial and economic crises, most of which are caused by exploitive behavior. This seems an

  anomaly given our times. Airline safety, building safety, seismic safety, food and drug safety, disease safety, surgical safety—all these have improved

steadily decade by decade in the last fifty years. “Economic safety,” by contrast, has not improved in the last five decades; if anything it has gotten worse.

  Many economists—myself included—would say that unwarranted faith in

  the ability of free markets to regulate themselves bears much of the blame (e.g.

Cassidy, 2009; Tabb, 2012). But so too does the absence of a systematic methodology in economics of looking for possible failure modes in advance of policy implementation. Failure-mode studies are not at the center of our discipline for the simple reason that economics’ adherence to equilibrium analysis assumes that the system quickly settles to a place where no agent has an incentive to

  diverge from its present behavior, and so exploitive behavior cannot happen. We therefore tend to design policies and construct simulations of their outcomes

  without sufficiently probing the robustness of their behavioral assumptions,

  and without identifying where they might fail because of systemic exploitation.

  I suggest that it is time to revise our thinking on this. It is no longer enough to design a policy system and analyze it and even carefully simulate its outcome. We need to see social and economic systems not as a set of behaviors

  that have no motivation to change, but as a web of incentives that always

  induce further behavior, always invite further strategies, always cause the system to change. We need to emulate what is routine in structural engineering,

  or in epidemiology, or in encryption, and anticipate where the systems we

  study might be exploited. We need to stress test our policy designs, to find

  their weak points and see if we can “break” them. Such failure-mode analysis

  in engineering, carried out over decades, has given us aircraft that fly millions of passenger-miles without mishap and high-rise buildings that do not collapse in earthquakes. Such exploitation-mode analysis, applied to the world of policy, would give us economic and social outcomes that perform as hoped for,

  something that would avert much misery in the world.

  REFERENCES

  Appleton, Michael, “SEC Sues Goldman over Housing Market Deal.” New York Times, Apr. 16, 2010.

  Arrow, Kenneth, “Uncertainty and the Welfare Economics of Medical Care,” American Economic Review, 53: 91–96, 1963.

  Arthur, W. Brian. “Bounded Rationality and Inductive Behavior (the El Farol problem),” American Economic Review Papers and Proceedings, 84, 406–411, 1994.

Arthur, W. Brian, J. H. Holland, B. LeBaron, R. Palmer, and P. Tayler, “Asset Pricing under Endogenous Expectations in an Artificial Stock Market,” in The Economy as an Evolving Complex System II, Arthur, W. B., Durlauf, S., Lane, D., eds. Addison-Wesley, Redwood City, CA, 1997.


Axelrod, Robert. The Evolution of Cooperation. Basic Books, New York, 1984.

  Bibel, George. Beyond the Black Box: The Forensics of Airplane Crashes. Johns Hopkins University Press, Baltimore, MD, 2008.

  Boyes, Roger. Meltdown Iceland: How the Global Financial Crisis Bankrupted an Entire Country. Bloomsbury Publishing, London, 2009.

  Campbell, Donald, “Assessing the Impact of Planned Social Change,” Public Affairs Center, Dartmouth, NH, Dec. 1976.

  Cassidy, J., How Markets Fail: the Logic of Economic Calamities. Farrar, Straus and Giroux, New York, 2009.

Chrystal, K. Alec, and Paul Mizen, “Goodhart’s Law: Its Origins, Meaning and Implications for Monetary Policy,” (http://cyberlibris.typepad.com/blog/files/Goodharts_Law.pdf), 2001.

Colander, David, A. Haas, K. Juselius, T. Lux, H. Föllmer, M. Goldberg, A. Kirman, B. Sloth, “The Financial Crisis and the Systemic Failure of Academic Economics,” mimeo, 98th Dahlem Workshop, 2008.

  Holland, John. Adaptation in Natural and Artificial Systems. MIT Press, Cambridge, MA, 1992. (Originally published 1975.)

Holland, John H., K. J. Holyoak, R. E. Nisbett, and P. R. Thagard, Induction. MIT Press, Cambridge, MA, 1986.

Jonsson, Asgeir. Why Iceland? How One of the World’s Smallest Countries Became the Meltdown’s Biggest Casualty. McGraw-Hill, New York, 2009.

Koppl, Roger, and W. J. Luther, “BRACE for a new Interventionist Economics,” mimeo, Fairleigh Dickinson University, 2010.

  Lindgren, Kristian. “Evolutionary Phenomena in Simple Dynamics,” in C. Langton, C. Taylor, D. Farmer, S. Rasmussen, (eds.), Artificial Life II. Addison-Wesley, Reading, MA, 1992.

  Mahar, Maggie. Money-driven Medicine. HarperCollins, New York, 2006.

March, James, “Exploration and Exploitation in Organizational Learning,” Organization Science, 2, 1, 71–87, 1991.

Morgenson, Gretchen (New York Times reporter), “Examining Goldman Sachs,” NPR interview in Fresh Air, May 4, 2010.

Nichols, S. L., and D. Berliner, Collateral Damage: How High-Stakes Testing Corrupts America’s Schools. Harvard Education Press, Cambridge, MA, 2007.

  Pew Charitable Trusts, “History of Fuel Economy: One Decade of Innovation, Two Decades of Inaction” (www.pewtrusts.org), 2010.

 
