λ̂_J/2, prediction and judgment are substitutes.
3.5 Complexity
Thus far, the model illustrates the interplay between knowing the reward
function (judgment) and prediction. While those results show that prediction and judgment can be substitutes, there is a sense in which they are
more naturally complements. The reason is this: what prediction enables is a
form of state-contingent decision-making. Without a prediction, a decision
maker is forced to make the same choice regardless of the state that might
arise. In the spirit of Herbert Simon, one might call this a heuristic. And in
the absence of prediction, the role of judgment is to make that choice. Moreover, that choice is easier—that is, more likely to be optimal—when there exist dominant (or "near-dominant") choices. Thus, when either the state space or the action space expands (as may happen in more complex situations), it is
less likely that there will exist a dominant choice. In that regard, faced with
complexity, in the absence of prediction, the value of judgment diminishes
and we are more likely to see decision makers choose default actions that,
on average, are likely to be better than others.
Suppose now we add a prediction machine to the mix. While in our
model such a machine, when it renders a prediction, can perfectly signal
the state that will arise, let us consider a more convenient alternative that
may arise in complex situations: the prediction machine can perfectly signal
some states (should they arise), but for other states no precise prediction is
possible except for the fact that one of those states is the correct one. In
other words, the prediction machine can sometimes render a fine prediction and otherwise a coarse one. Here, an improvement in the prediction machine means an increase in the number of states in which the machine can render a fine prediction.
Thus, consider an N-state model where the probability of state i is μ_i. Suppose that states {1, . . . , m} can be finely predicted by an AI, while the remainder cannot be distinguished. Suppose that in the states that cannot be distinguished, applying judgment is not worthwhile, so that the optimal choice is the safe action. Also, assume that when a prediction is available, judgment is worthwhile; that is, λ̂ ≥ S/[vR + (1 − v)S]. In this situation, the expected present discounted value when both prediction and judgment are
available is
V_PJ = λ̂ Σ_{i=1}^{m} μ_i (vR + (1 − v)S) + Σ_{i=m+1}^{N} μ_i S.
Similarly, it is easy to see that V_P = V_J = S = V_0, since vR + (1 − v)r ≤ S. Note that as m increases (perhaps because the prediction machine learns to predict more states), the marginal value of better judgment increases. That is, the gain from predicting state m, λ̂μ_m(vR + (1 − v)S) − μ_mS, is increasing in λ̂.
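As a concrete illustration, the following Python sketch computes V_PJ under uniform state probabilities μ_i = 1/N. All parameter values are hypothetical, chosen only to satisfy the model's assumptions (λ̂(vR + (1 − v)S) ≥ S and vR + (1 − v)r ≤ S); the point is simply that the gain from improving judgment (raising λ̂) grows with the number of finely predicted states m.

```python
# Illustrative parameters (hypothetical, not from the chapter); they satisfy
#   lam * (v*R + (1 - v)*S) >= S   -- judgment worthwhile when prediction is fine
#   v*R + (1 - v)*r <= S           -- otherwise the safe action is optimal
v, R, r, S = 0.5, 2.4, -1.0, 1.0

def v_pj(lam, m, N):
    """V_PJ = lam * sum_{i<=m} mu_i*(v*R + (1-v)*S) + sum_{i>m} mu_i*S,
    with uniform state probabilities mu_i = 1/N."""
    mu = 1.0 / N
    return lam * m * mu * (v * R + (1 - v) * S) + (N - m) * mu * S

N = 20
for m in (2, 5, 10):
    # The payoff to better judgment (lam 0.8 -> 0.9) grows with m.
    gain = v_pj(0.9, m, N) - v_pj(0.8, m, N)
    print(f"m={m:2d}: V_PJ={v_pj(0.9, m, N):.3f}  gain from better judgment={gain:.3f}")
```

In this parameterization, raising λ̂ from 0.8 to 0.9 is worth 0.017 when m = 2 but 0.085 when m = 10: better prediction raises the marginal value of judgment.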
What happens as the situation becomes more complex (that is, as N increases)? An increase in N will weakly reduce μ_i for any given i. Holding m fixed (so that the quality of the prediction machine does not improve with the complexity of the world), this reduces the value of prediction and judgment, as greater weight is placed on states where prediction is unavailable; that is, it is assumed that the increase in complexity does not, ceteris paribus, create a state where prediction is available. Thus, complexity appears to be associated with lower returns to both prediction and judgment. Put differently, an improvement in prediction machines would mean that m increases with N fixed. In this case, the returns to judgment rise as greater weight is put on states where prediction is available.
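For instance, with uniform probabilities μ_i = 1/N (a special case, used here only to make the comparative statics transparent), the value above becomes

V_PJ = (m/N) λ̂ (vR + (1 − v)S) + ((N − m)/N) S,

so that ∂V_PJ/∂N = (m/N²)[S − λ̂(vR + (1 − v)S)] ≤ 0, while ∂V_PJ/∂m = (1/N)[λ̂(vR + (1 − v)S) − S] ≥ 0: greater complexity (holding prediction fixed) lowers value, while better prediction (holding complexity fixed) raises it.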
This insight is useful because there are several places in the economics
literature where complexity has interacted with other economic decisions.
These include automation, contracting, and firm boundaries. We discuss
each of these in turn, highlighting potential implications.
3.5.1 Automation
The literature on automation is sometimes treated as synonymous with AI. This arises because AI may power new robots that are able to operate in open environments thanks to machine learning. For instance, while automated trains have been possible for some time because they run on tracks, automated cars are new because they need to operate in far more complex environments. It is prediction in those open environments that has allowed the emergence of environmentally flexible capital equipment. Note that this leads to the implication that, as AI improves, tasks in more complex environments can be handled by machines (Acemoglu and Restrepo 2017).
However, this story masks the message that emerges from our analysis: recent AI developments are all about prediction. Prediction enables automated vehicles because it is relatively straightforward to describe (and hence, program) what those vehicles should do in different situations. In other words, if prediction enables "state-contingent decisions," then automated vehicles arise because someone knows what decision is optimal in each state; automation means that judgment can be encoded in machine behavior. Prediction added to that means that automated capital can be moved into more complex environments. In that respect, it is perhaps natural to suggest that improvements in AI will lead to a substitution of machines for humans as more tasks in more complex environments become capable of being programmed in a state-contingent manner.
That said, there is another dimension of substitution that arises in complex environments. As noted above, when states cannot be predicted (something that, for a given technology, is more likely to be the case in more complex environments), the actions chosen are more likely to be defaults or the results of heuristics that perform well on average. Many, including Acemoglu and Restrepo (2017), argue that it is for more complex tasks that humans have a comparative advantage relative to machines. However, this is not at all obvious. If it is known that a particular default or heuristic should be used, then a machine can be programmed to undertake it. In this regard, the most complex tasks—precisely because little is known about how to take better actions when the prediction of the state is coarse—may be more, not less, amenable to automation.
If we had to speculate, imagine that states were ordered in terms of diminishing likelihood (i.e., μ_i ≥ μ_j for all i < j). The lowest-index states might be ones for which, because they arise frequently, the optimal action in each is known, and so they can be programmed to be handled by a machine. The highest-index states can similarly be handled by a machine: because the optimal action cannot be determined, the default action can be programmed. It is the intermediate states, which arise less frequently but not infrequently, that could be handled by humans applying judgment if a reliable prediction existed. Thus, the payoff could be written
V_PJ = Σ_{i=1}^{k} μ_i (vR + (1 − v)S) + λ̂ Σ_{i=k+1}^{m} μ_i (vR + (1 − v)S) + Σ_{i=m+1}^{N} μ_i S,
where tasks 1 through k are automated using prediction because the optimal action is known. If this were the matching of tasks to machines and humans, then it is not at all clear whether an increase in complexity would be associated with more or less human employment.
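To make this concrete, here is a hypothetical sketch of the payoff above, with state probabilities declining geometrically (an illustrative assumption, as are all parameter values): machines handle the frequent states 1, . . . , k and the rare states m + 1, . . . , N, while humans apply judgment on the intermediate states. The probability weight on those intermediate states is one crude measure of the human role.

```python
# Hypothetical sketch of
#   V_PJ = sum_{i<=k} mu_i*(v*R+(1-v)*S) + lam*sum_{k<i<=m} mu_i*(v*R+(1-v)*S)
#        + sum_{i>m} mu_i*S
# with states ordered by decreasing probability (geometric decay, illustrative).
v, R, S, lam, decay = 0.5, 2.4, 1.0, 0.9, 0.9

def payoff_and_human_weight(k, m, N):
    """Return total payoff and the probability weight on the human-judgment
    states k+1..m (machines cover states 1..k and m+1..N)."""
    weights = [decay ** i for i in range(N)]
    total = sum(weights)
    mu = [w / total for w in weights]             # normalized probabilities
    automated = sum(mu[:k]) * (v * R + (1 - v) * S)        # optimal action known
    judged = lam * sum(mu[k:m]) * (v * R + (1 - v) * S)    # humans apply judgment
    default = sum(mu[m:]) * S                              # safe/default action
    return automated + judged + default, sum(mu[k:m])

for N in (20, 40):
    V, w = payoff_and_human_weight(k=4, m=10, N=N)
    print(f"N={N}: V_PJ={V:.3f}, weight on human-judgment states={w:.3f}")
```

With k = 4 and m = 10 held fixed, doubling N from 20 to 40 shrinks the human-judgment weight (from about 0.35 to about 0.31 here), illustrating why added complexity need not favor human work in the intermediate states.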
That said, the issue for the automation literature is not subtleties over the term "complex tasks," but rather where, as AI becomes more prevalent, the substitution of machines for humans might arise. As noted above, an increase in AI increases m. At this margin, humans are able to take on the marginal tasks and, because a prediction machine is available, use judgment to make state-contingent decisions in those situations. Absent other effects, therefore, an increase in AI is associated with more human labor on any given task. However, as the weight on those marginal tasks is falling in the level of complexity, it may not be the more complex tasks that humans are performing more of. On the other hand, one can imagine that, in a model with a full labor-market equilibrium, an increase in AI that enables more human judgment at the margin may also create opportunities to study that judgment to see if it can be programmed into lower-index states and handled by machines. So, while AI does not necessarily cause more routine tasks to be handled by machines, it might create the economic conditions that lead to just that.
3.5.2 Contracting
Contracting shares much with programming. Here is Jean Tirole (2009,
265) on the subject:
Its general thrust goes as follows. The parties to a contract (buyer, seller) initially avail themselves of an available design, perhaps an industry standard. This design or contract is the best contract under existing knowledge. The parties are unaware, however, of the contract's implications, but they realize that something may go wrong with this contract; indeed, they may exert cognitive effort in order to find out about what may go wrong and how to draft the contract accordingly: put differently, a contingency is foreseeable (perhaps at a prohibitively high cost), but not necessarily foreseen. To take a trivial example, the possibility that the price of oil increases, implying that the contract should be indexed on it, is perfectly foreseeable, but this does not imply that parties will think about this possibility and index the contract price accordingly.
Tirole argues that some contingencies can be planned for in contracts using cognitive effort (akin to what we have termed judgment here), while others may be optimally left out because the effort is too costly relative to the return given, say, the low likelihood that the contingency arises.
This logic can assist us in understanding what prediction machines might
do to contracts. If an AI becomes available, then, because fine state predictions are available, it becomes worthwhile when writing contracts to incur the cognitive costs of determining what the contingencies should be should those states arise. For other states, the contract will be left incomplete—relying perhaps on a default action or, alternatively, some renegotiation process. A direct implication of this is that contracts may well become less incomplete.
Of course, when it comes to employment contracts, the effects may be different. As Herbert Simon (1951) noted, employment contracts differ from other contracts precisely because it is often not possible to specify what actions should be performed in what circumstance. Hence, what those contracts often allocate are different decision rights.
What is of interest here is the notion that contracts can be specified clearly—that is, programmed—but also that prediction can activate the use of human judgment. The latter means that the relevant actions cannot easily be contracted upon: by definition, contractibility amounts to programming, and needing judgment implies that programming was not possible. Thus, as prediction machines improve and more human judgment becomes optimal, that judgment will be applied outside of objective contract measures—including objective performance measures. If we had to speculate, this would favor more subjective performance processes, including relational contracts (Baker, Gibbons, and Murphy 1999).9

9. A recent paper by Dogan and Yildirim (2017) considers how automation might affect worker contracts. However, they do not examine AI per se, and they focus on how automation might change objective performance measures in teams, moving from joint performance evaluation to more relative performance evaluation.
3.5.3 Firm Boundaries
We now turn to consider what impact AI may have on firm boundaries (that is, the make-or-buy decision). Suppose that it is a buyer (B) who receives the value from a decision taken—that is, the payoff from the risky or safe action, as the case may be. To make things simple, let's assume that μ_i = μ for all i (and normalize μ to 1), so that

V = k(vR + (1 − v)S) + λ̂(m − k)(vR + (1 − v)S) + (N − m)S.
We suppose that the tasks are undertaken by a seller (S). The tasks {1, . . . , k} and {m + 1, . . . , N} can be contracted upon, while the intermediate tasks require the seller to exercise judgment. We suppose that the cost of providing judgment is a function c(λ̂), which is nondecreasing and convex. (We write this function in terms of λ̂ just to keep the notation simple.) The costs can be anticipated by the buyer. So if one of the intermediate states arises, the buyer can choose to give the seller a fixed-price contract (and bear none of the costs) or a cost-plus contract (and bear all of them).
Following Tadelis (2002), we assume that the seller market is competitive
and so all surplus accrues to the buyer. In this case, the buyer return is
k(vR + (1 − v)S) + max{λ̂(m − k)(vR + (1 − v)S), (m − k)S} + (N − m)S − p − zc(λ̂),

while the seller return is p − (1 − z)c(λ̂). Here p + zc(λ̂) is the contract price, and z is 0 for a fixed-price contract and 1 for a cost-plus contract. Note that only with a cost-plus contract does the seller exercise any judgment. Thus, the buyer chooses a cost-plus over a fixed-price contract if

k(vR + (1 − v)S) + max{λ̂(m − k)(vR + (1 − v)S), (m − k)S} + (N − m)S − c(λ̂) > k(vR + (1 − v)S) + (N − k)S.

It is easy to see that as m rises (i.e., prediction becomes cheaper), a cost-plus contract is more likely to be chosen. That is, incentives fall as prediction becomes more abundant.
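To see this, note that (N − k)S = (m − k)S + (N − m)S, so the condition above reduces to

λ̂(m − k)(vR + (1 − v)S) − (m − k)S > c(λ̂).

The left-hand side is increasing in m (each additional finely predicted state adds λ̂(vR + (1 − v)S) − S ≥ 0), which gives the result.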
Now we can consider the impact of integration. We assume that the buyer can choose to make the decisions themselves, but at a higher cost. That is, c(λ̂, I) > c(λ̂), where I denotes integration. We also assume that ∂c(λ̂, I)/∂λ̂ > ∂c(λ̂)/∂λ̂. Under integration, the buyer's value is

k(vR + (1 − v)S) + λ̂*(m − k)(vR + (1 − v)S) + (N − m)S − c(λ̂*, I),

where λ̂* maximizes the buyer payoff in this case. Given this, it can easily be seen that as m increases, the returns to integration rise.
By contrast, notice that as k increases, the incentives for a cost-plus contract are diminished and the returns to integration fall. Thus, the more prediction machines allow contingencies to be placed directly in the contract (the larger k relative to m), the higher powered seller incentives will be and the less likely there is to be integration.
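A hypothetical numerical sketch of the buyer's three options may help fix ideas. The functional forms for c(λ̂) and c(λ̂, I) and all parameter values below are our own illustrative assumptions (chosen so that integration carries both a higher level and a higher marginal cost of judgment, and so that the seller's λ̂ is fixed under contracting while the integrated buyer chooses λ̂* itself):

```python
# Hypothetical sketch of the firm-boundary choice. Cost functions and parameters
# are illustrative assumptions, not taken from the chapter.
v, R, S = 0.5, 2.4, 1.0
VJ = v * R + (1 - v) * S                 # value of a finely predicted, judged state

def c(lam):                              # seller's judgment cost (convex)
    return 2.0 * lam ** 2

def c_int(lam):                          # integrated buyer's cost: higher level
    return 0.5 + 2.2 * lam ** 2          # and higher marginal cost

def buyer_payoffs(k, m, N, lam_seller=0.8):
    fixed_price = k * VJ + (N - k) * S   # seller exercises no judgment
    cost_plus = k * VJ + lam_seller * (m - k) * VJ + (N - m) * S - c(lam_seller)
    integration = max(                   # buyer picks lam* on a coarse grid
        k * VJ + lam * (m - k) * VJ + (N - m) * S - c_int(lam)
        for lam in (x / 100 for x in range(101))
    )
    return fixed_price, cost_plus, integration

N, k = 20, 2
for m in (4, 8, 14):
    fp, cp, ig = buyer_payoffs(k, m, N)
    print(f"m={m:2d}: fixed-price={fp:.2f}  cost-plus={cp:.2f}  integration={ig:.2f}")
```

In this illustrative parameterization, the fixed-price contract is best when few states require judgment, the cost-plus contract overtakes it as m rises, and the payoff to integration grows fastest in m − k, matching the comparative statics above.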
Forbes and Lederman (2009) showed that airlines are more likely to vertically integrate with regional partners when scheduling is more complex: specifically, where bad weather is more likely to lead to delays. The impact of prediction machines will depend on whether they increase the number of states where the action can be automated in a state-contingent manner (k) more than the number of states where the state becomes known but the action cannot be automated (m − k). If the former, then we will see less vertical integration with the rise of prediction machines. If the latter, we will see more. The difference is driven by the need for more costly judgment, which is undertaken in the vertically integrated case, as m − k rises.
3.6 Conclusions
In this chapter, we explore the consequences of recent improvements in
machine-learning technology that have advanced the broader field of artificial intelligence. In particular, we argue that these advances in the ability of machines to conduct mental tasks are driven by improvements in machine prediction. In order to understand how improvements in machine prediction will impact decision-making, it is important to analyze how the payoffs of the model arise. We label the process of learning payoffs "judgment."
By modeling judgment explicitly, we derive a number of useful insights
into the value of prediction. We show that prediction and judgment are gen-