Complexity and the Economy


by W Brian Arthur


current hypotheses and act on them. As feedback from the environment comes in, we may strengthen or weaken our beliefs in our current hypotheses, discarding some when they cease to perform, and replacing them as needed with new ones. In other words, when we cannot fully reason or lack full definition of the problem, we use simple models to fill the gaps in our understanding. Such behavior is inductive.

One can see inductive behavior at work in chess playing. Players typically study the current configuration of the board and recall their opponent’s play in past games to discern patterns (Adriaan de Groot, 1965). They use these to form hypotheses or internal models about each other’s intended strategies, maybe even holding several in their minds at one time: “He’s using a Caro-Kann defense.” “This looks a bit like the 1936 Botvinnik-Vidmar game.” “He is trying to build up his mid-board pawn formation.” They make local deductions based on these, analyzing the possible implications of moves several moves deep. And as play unfolds they hold onto hypotheses or mental models that prove plausible or toss them aside if not, generating new ones to put in their place. In other words, they use a sequence of pattern recognition, hypothesis formation, deduction using currently held hypotheses, and replacement of hypotheses as needed.

This type of behavior may not be familiar in economics; but one can recognize its advantages. It enables us to deal with complication: we construct plausible, simpler models that we can cope with. It enables us to deal with ill-definedness: where we have insufficient definition, our working models fill the gap. It is not antithetical to “reason,” or to science for that matter. In fact, it is the way science itself operates and progresses.

  Modeling Induction

If humans indeed reason in this way, how can one model this? In a typical problem that plays out over time, one might set up a collection of agents, probably heterogeneous, and assume they can form mental models, or hypotheses, or subjective beliefs. These beliefs might come in the form of simple mathematical expressions that can be used to describe or predict some variable or action; or of complicated expectational models of the type common in economics; or of statistical hypotheses; or of condition/prediction rules (“If situation Q is observed, predict outcome or action D”). These will normally be subjective, that is, they will differ among the agents. An agent may hold one in mind at a time, or several simultaneously.
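A condition/prediction rule of the kind just mentioned can be sketched minimally in code. This is an illustrative sketch only; the class name, fields, and the example rule are all assumptions, not part of the original text:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ConditionPredictionRule:
    """A subjective belief-model: 'if situation Q is observed, predict D'."""
    condition: Callable[[dict], bool]   # does situation Q hold in the observed state?
    prediction: str                     # predicted outcome or action D
    score: float = 0.0                  # running track record of the rule

# A hypothetical rule one agent might hold about a market variable
rule = ConditionPredictionRule(
    condition=lambda state: state["price_rising"],
    prediction="demand will fall",
)

state = {"price_rising": True}
if rule.condition(state):
    print(rule.prediction)   # the rule fires and issues its prediction
```

Different agents would hold different (subjective) collections of such rules, each with its own track record.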

  mundane actions like walking or driving are subconsciously directed, and for these pattern-cognition maps directly into action. In this case, connectionist models work better.

  [ 32 ] Complexity and the Economy

Each agent will normally keep track of the performance of a private collection of such belief-models. When it comes time to make choices, he acts upon his currently most credible (or possibly most profitable) one. The others he keeps at the back of his mind, so to speak. Alternatively, he may act upon a combination of several. (However, humans tend to hold in mind many hypotheses and act on the most plausible one [Julian Feldman, 1962].) Once actions are taken, the aggregative picture is updated, and agents update the track record of all their hypotheses.

This is a system in which learning takes place. Agents “learn” which of their hypotheses work, and from time to time they may discard poorly performing hypotheses and generate new “ideas” to put in their place. Agents linger with their currently most believable hypothesis or belief model but drop it when it no longer functions well, in favor of a better one. This causes a built-in hysteresis. A belief model is clung to not because it is “correct”—there is no way to know this—but rather because it has worked in the past and must cumulate a record of failure before it is worth discarding. In general, there may be a constant slow turnover of hypotheses acted upon. One could speak of this as a system of temporarily fulfilled expectations—beliefs or models or hypotheses that are temporarily fulfilled (though not perfectly), which give way to different beliefs or hypotheses when they cease to be fulfilled.
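The keep-track/act/update cycle described above can be sketched compactly. All names and the scoring scheme here are illustrative assumptions (credibility is measured as accumulated negative prediction error), not the text's own specification:

```python
# Sketch of an agent that monitors several belief-models,
# acts on the currently most credible one, and updates all track records.
class Agent:
    def __init__(self, hypotheses):
        # each hypothesis maps an observed history to a numeric prediction
        self.hypotheses = list(hypotheses)
        self.scores = [0.0] * len(self.hypotheses)

    def act(self, history):
        """Act on the currently most credible hypothesis."""
        best = max(range(len(self.hypotheses)), key=lambda i: self.scores[i])
        return self.hypotheses[best](history)

    def update(self, history, outcome):
        """Strengthen hypotheses that predicted well, weaken the rest."""
        for i, h in enumerate(self.hypotheses):
            self.scores[i] -= abs(h(history) - outcome)

# Two rival hypotheses about a time series
agent = Agent([lambda h: h[-1],          # "same as last time"
               lambda h: 100 - h[-1]])   # "mirror image around 50"

history = [40, 60]
prediction = agent.act(history)          # ties broken by list order here
agent.update(history, outcome=60)        # first hypothesis was exact, second off by 20
```

Because credibility accumulates, a hypothesis that has worked in the past is not abandoned after a single miss, which is the hysteresis noted above.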

If the reader finds this system unfamiliar, he or she might think of it as generalizing the standard economic learning framework which typically has agents sharing one expectational model with unknown parameters, acting upon the parameters’ currently most plausible values. Here, by contrast, agents differ, and each uses several subjective models instead of a continuum of one commonly held model. This is a richer world, and one might ask whether, in a particular context, it converges to some standard equilibrium of beliefs; or whether it remains open-ended, always leading to new hypotheses, new ideas.

It is also a world that is evolutionary, or more accurately, coevolutionary. Just as species, to survive and reproduce, must prove themselves by competing and being adapted within an environment created by other species, in this world hypotheses, to be accurate and therefore acted upon, must prove themselves by competing and being adapted within an environment created by other agents’ hypotheses. The set of ideas or hypotheses that are acted upon at any stage therefore coevolves.2

A key question remains. Where do the hypotheses or mental models come from? How are they generated? Behaviorally, this is a deep question in psychology, having to do with cognition, object representation, and pattern recognition. I will not go into it here. However, there are some simple and practical options for modeling. Sometimes one might endow agents with focal models: patterns or hypotheses that are obvious, simple, and easily dealt with mentally. One might generate a “bank” of these and distribute them among the agents. Other times, given a suitable model-space one might allow the genetic algorithm or some similar intelligent search device to generate ever “smarter” models. One might also allow agents the possibility of “picking up” mental models from one another (in the process psychologists call transfer). Whatever option is taken, it is important to be clear that the framework described above is independent of the specific hypotheses or beliefs used, just as the consumer-theory framework is independent of the particular products chosen among. Of course, to use the framework in a particular problem, some system of generating beliefs must be adopted.

2. A similar statement holds for strategies in evolutionary game theory; but there, instead of a large number of private, subjective expectational models, a small number of strategies compete.

The El Farol Problem [ 33 ]

  II. THE BAR PROBLEM

Consider now a problem I will construct to illustrate inductive reasoning and how it might be modeled. N people decide independently each week whether to go to a bar that offers entertainment on a certain night. For concreteness, let us set N at 100. Space is limited, and the evening is enjoyable if things are not too crowded—specifically, if fewer than 60% of the possible 100 are present. There is no sure way to tell the numbers coming in advance; therefore a person or agent goes (deems it worth going) if he expects fewer than 60 to show up or stays home if he expects more than 60 to go. Choices are unaffected by previous visits; there is no collusion or prior communication among the agents; and the only information available is the numbers who came in past weeks. (The problem was inspired by the bar El Farol in Santa Fe which offers Irish music on Thursday nights; but the reader may recognize it as applying to noontime lunch-room crowding, and to other commons or coordination problems with limits to desired coordination.) Of interest is the dynamics of the numbers attending from week to week.

Notice two interesting features of this problem. First, if there were an obvious model that all agents could use to forecast attendance and base their decisions on, then a deductive solution would be possible. But this is not the case here. Given the numbers attending in the recent past, a large number of expectational models might be reasonable and defensible. Thus, not knowing which model other agents might choose, a reference agent cannot choose his in a well-defined way. There is no deductively rational solution—no “correct” expectational model. From the agents’ viewpoint, the problem is ill-defined, and they are propelled into a world of induction. Second, and diabolically, any commonality of expectations gets broken up: if all believe few will go, all will go. But this would invalidate that belief. Similarly, if all believe most will go, nobody will go, invalidating that belief.3 Expectations will be forced to differ.

At this stage, I invite the reader to pause and ponder how attendance might behave dynamically over time. Will it converge, and if so to what? Will it become chaotic? How might predictions be arrived at?

  A. A Dynamic Model

To answer the above questions, I shall construct a model along the lines of the framework sketched above. Assume the 100 agents can individually form several predictors or hypotheses, in the form of functions that map the past d weeks’ attendance figures into next week’s. For example, recent attendance might be:

  . . . , 44, 78, 56, 15, 23, 67, 84, 34, 45, 76, 40, 56, 22, 35.

Particular hypotheses or predictors might be: predict next week’s number to be

• the same as last week’s [35]
• a mirror image around 50 of last week’s [65]
• a (rounded) average of the last four weeks [49]
• the trend in last 8 weeks, bounded by 0, 100 [29]
• the same as 2 weeks ago (2-period cycle detector) [22]
• the same as 5 weeks ago (5-period cycle detector) [76]
• etc.
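Several of these focal predictors are unambiguous enough to check directly against the sample attendance sequence above. (The averaging and trend rules depend on the exact convention used, so this sketch implements only the four whose bracketed values follow immediately from the sequence:)

```python
# Focal predictors checked against the sample attendance history.
# Each maps the past attendance sequence to next week's predicted number.
history = [44, 78, 56, 15, 23, 67, 84, 34, 45, 76, 40, 56, 22, 35]

predictors = {
    "same as last week":       lambda h: h[-1],        # [35]
    "mirror image around 50":  lambda h: 100 - h[-1],  # [65]
    "2-period cycle detector": lambda h: h[-2],        # [22]
    "5-period cycle detector": lambda h: h[-5],        # [76]
}

for name, predict in predictors.items():
    print(f"{name}: {predict(history)}")
```

Each predictor reproduces the value shown in brackets in the list above.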

Assume that each agent possesses and keeps track of an individualized set of k such focal predictors. He decides to go or stay according to the currently most accurate predictor in his set. (I will call this his active predictor.) Once decisions are made, each agent learns the new attendance figure and updates the accuracies of his monitored predictors.

Notice that in this bar problem, the set of hypotheses currently most credible and acted upon by the agents (the set of active hypotheses) determines the attendance. But the attendance history determines the set of active hypotheses. To use John Holland’s term, one can think of these active hypotheses as forming an ecology. Of interest is how this ecology evolves over time.

  3. This is reminiscent of Yogi Berra’s famous comment on why he no longer went to Ruggeri's, a restaurant in St. Louis: “Nobody goes there anymore. It’s too crowded.”


  B. Computer Experiments

For most sets of hypotheses, analytically this appears to be a difficult question. So in what follows I will proceed by computer experiments. In the experiments, to generate hypotheses, I first create an “alphabet soup” of predictors, in the form of several dozen focal predictors replicated many times. I then randomly ladle out k (6 or 12 or 23, say) of these to each of 100 agents. Each agent then possesses k predictors or hypotheses or “ideas” he can draw upon. We need not worry that useless predictors will muddy behavior. If predictors do not “work” they will not be used; if they do work they will come to the fore. Given starting conditions and the fixed set of predictors available to each agent, in this problem the future accuracies of all predictors are predetermined. The dynamics here are deterministic.
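A compact simulation of the setup just described might look like the following. This is a sketch, not Arthur's original code: the predictor families, the value of k, the seed weeks, and the accuracy measure (accumulated absolute error) are all illustrative choices.

```python
import random

N, CAPACITY, K, WEEKS = 100, 60, 6, 100

def make_predictor_bank():
    """An 'alphabet soup' of focal predictors (illustrative families)."""
    bank = [lambda h, p=p: h[-p] for p in range(1, 9)]                   # p-period cycle detectors
    bank += [lambda h, w=w: round(sum(h[-w:]) / w) for w in (2, 4, 8)]   # moving averages
    bank += [lambda h: 100 - h[-1]]                                      # mirror image around 50
    return bank

random.seed(0)
bank = make_predictor_bank()
agents = [random.sample(bank, K) for _ in range(N)]    # ladle out K predictors per agent
scores = [[0.0] * K for _ in range(N)]                 # accumulated prediction errors
history = [random.randint(0, 100) for _ in range(8)]   # arbitrary seed weeks

for week in range(WEEKS):
    attendance = 0
    for a in range(N):
        best = min(range(K), key=lambda i: scores[a][i])   # active (most accurate) predictor
        if agents[a][best](history) < CAPACITY:            # go if fewer than 60 expected
            attendance += 1
    history.append(attendance)
    for a in range(N):                                     # update every monitored predictor
        for i in range(K):
            scores[a][i] += abs(agents[a][i](history[:-1]) - attendance)

print("mean attendance:", sum(history[8:]) / WEEKS)
```

With the random seed fixed, the run is deterministic, matching the observation above that the future accuracies of all predictors are predetermined once the starting conditions and predictor assignments are given.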

[Figure 1: Bar attendance in the first 100 weeks. Numbers attending (vertical axis, 0–100) plotted against time in weeks (horizontal axis, 0–100).]

The results of the experiments are interesting (Figure 1). Where cycle-detector predictors are present, cycles are quickly “arbitraged” away so there are no persistent cycles. (If several people expect many to go because many went three weeks ago, they will stay home.) More interestingly, mean attendance converges always to 60. In fact the predictors self-organize into an equilibrium pattern or “ecology” in which, of the active predictors (those most accurate and therefore acted upon), on average 40% are forecasting above 60, 60% below 60.

This emergent ecology is almost organic in nature. For, while the population of active predictors splits into this 60/40 average ratio, it keeps changing in membership forever. This is something like a forest whose contours do not change, but whose individual trees do. These results appear throughout the experiments and are robust to changes in types of predictors created and in numbers assigned.


How do the predictors self-organize so that 60 emerges as average attendance and forecasts split into a 60/40 ratio? One explanation might be that 60 is a natural “attractor” in this bar problem; in fact, if one views it as a pure game of predicting, a mixed strategy of forecasting above 60 with probability 0.4 and below it with probability 0.6 is a Nash equilibrium. Still, this does not explain how the agents approximate any such outcome, given their realistic, subjective reasoning. To get some understanding of how this happens, suppose that 70% of their predictors forecasted above 60 for a longish time. Then on average only 30 people would show up; but this would validate predictors that forecasted close to 30 and invalidate the above-60 predictors, restoring the “ecological” balance among predictions, so to speak. Eventually the 40–60-percent combination would assert itself. (Making this argument mathematically exact appears to be nontrivial.) It is important to be clear that one does not need any 40–60 forecasting balance in the predictors that are set up. Many could have a tendency to predict high, but aggregate behavior calls the equilibrium predicting ratio to the fore. Of course, the result would fail if all predictors could only predict below 60; then all 100 agents would always show up. Predictors need to “cover” the available prediction space to some modest degree. The reader might ponder what would happen if all agents shared the same set of predictors.
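The mixed-strategy arithmetic can be checked directly: if each agent independently forecasts above 60 with probability 0.4 (and therefore stays home), expected attendance is 100 × 0.6 = 60. A quick Monte Carlo confirms this; the sketch below uses an arbitrary seed and trial count of my choosing:

```python
import random

random.seed(42)
N, TRIALS = 100, 10_000

total = 0
for _ in range(TRIALS):
    # each agent forecasts "above 60" with probability 0.4 and stays home if so;
    # the remaining agents (probability 0.6 each) attend
    attendance = sum(1 for _ in range(N) if random.random() >= 0.4)
    total += attendance

print("average attendance:", total / TRIALS)  # close to the expected value of 60
```

This checks only the equilibrium attendance level; it says nothing about how the inductive agents find it, which is the harder question discussed above.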

It might be objected that I lumbered the agents in these experiments with fixed sets of clunky predictive models. If they could form more open-ended, intelligent predictions, different behavior might emerge. One could certainly test this using a more sophisticated procedure, say, genetic programming (John Koza, 1992). This continually generates new hypotheses, new predictive expressions, that adapt “intelligently” and often become more complicated as time progresses. However, I would be surprised if this changes the above results in any qualitative way.

The bar problem introduced here can be generalized in a number of ways (see E. R. Grannan and G. H. Swindle, 1994). I encourage the reader to experiment.

  III. CONCLUSION

The inductive-reasoning system I have described above consists of a multitude of “elements” in the form of belief-models or hypotheses that adapt to the aggregate environment they jointly create. Thus it qualifies as an adaptive complex system. After some initial learning time, the hypotheses or mental models in use are mutually coadapted. Thus one can think of a consistent set of mental models as a set of hypotheses that work well with each other under some criterion—that have a high degree of mutual adaptedness. Sometimes there is a unique such set, it corresponds to a standard rational expectations equilibrium, and beliefs gravitate into it. More often there is a high, possibly very high, multiplicity of such sets. In this case one might expect inductive-reasoning systems in the economy—whether in stock-market speculating, in negotiating, in poker games, in oligopoly pricing, or in positioning products in the market—to cycle through or temporarily lock into psychological patterns that may be nonrecurrent, path-dependent, and increasingly complicated. The possibilities are rich.

Economists have long been uneasy with the assumption of perfect, deductive rationality in decision contexts that are complicated and potentially ill-defined. The level at which humans can apply perfect rationality is surprisingly modest. Yet it has not been clear how to deal with imperfect or bounded rationality. From the reasoning given above, I believe that as humans in these contexts we use inductive reasoning: we induce a variety of working hypotheses, act upon the most credible, and replace hypotheses with new ones if they cease to work. Such reasoning can be modeled in a variety of ways. Usually this leads to a rich psychological world in which agents’ ideas or mental models

 
