We show that prediction and judgment are generally complements, as long as judgment is not too difficult. We also show that improvements in judgment change the type of prediction quality that is most useful: better judgment means that more accurate predictions are valuable relative to more frequent predictions. Finally, we explore the role of complexity, demonstrating that, in the presence of complexity, the impact of improved prediction on the value of judgment depends on whether improved prediction leads to automated decision-making. Complexity is a key aspect of economic research in automation, contracting, and the boundaries of the firm. As prediction machines improve, our model suggests that the consequences in complex environments are particularly fruitful to study.
There are numerous directions research in this area could proceed. First, the chapter does not explicitly model the form of the prediction, including what measures might be the basis for decision-making. In reality, this is an important design variable that impacts the accuracy of predictions and of decision-making. In computer science, this is referred to as the choice of surrogates, and it appears to be a topic amenable to economic theoretical investigation. Second, the chapter treats judgment as largely a human-directed activity. However, we have noted that it can also be encoded, but have not been explicit about the process by which this occurs. Endogenising this, perhaps by relating it to the accumulation of experience, would be an avenue for further investigation. Finally, this is a single-agent model. It would be interesting to explore how judgment and prediction mix when each is affected by the actions and decisions of other agents in a game-theoretic setting.
References
Acemoglu, Daron. 2003. “Labor- and Capital-Augmenting Technical Change.” Journal of the European Economic Association 1 (1): 1–37.
Acemoglu, Daron, and Pascual Restrepo. 2017. “The Race between Machine and Man: Implications of Technology for Growth, Factor Shares, and Employment.” Working paper, Massachusetts Institute of Technology.
Agrawal, Ajay, Joshua S. Gans, and Avi Goldfarb. 2018a. “Human Judgment and AI Pricing.” AEA Papers and Proceedings 108: 58–63.
———. 2018b. Prediction Machines: The Simple Economics of Artificial Intelligence. Boston, MA: Harvard Business Review Press.
Alpaydin, Ethem. 2010. Introduction to Machine Learning, 2nd ed. Cambridge, MA: MIT Press.
Autor, David. 2015. “Why Are There Still So Many Jobs? The History and Future of Workplace Automation.” Journal of Economic Perspectives 29 (3): 3–30.
Baker, George, Robert Gibbons, and Kevin Murphy. 1999. “Informal Authority in Organizations.” Journal of Law, Economics, and Organization 15: 56–73.
Belloni, Alexandre, Victor Chernozhukov, and Christian Hansen. 2014. “High-Dimensional Methods and Inference on Structural and Treatment Effects.” Journal of Economic Perspectives 28 (2): 29–50.
Benzell, Seth G., Laurence J. Kotlikoff, Guillermo LaGarda, and Jeffrey D. Sachs. 2015. “Robots Are Us: Some Economics of Human Replacement.” NBER Working Paper no. 20941, Cambridge, MA.
Bolton, P., and A. Faure-Grimaud. 2009. “Thinking Ahead: The Decision Problem.” Review of Economic Studies 76: 1205–38.
Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age. New York: W. W. Norton.
Dogan, M., and P. Yildirim. 2017. “Man vs. Machine: When Is Automation Inferior to Human Labor?” Unpublished manuscript, The Wharton School of the University of Pennsylvania.
Domingos, Pedro. 2015. The Master Algorithm. New York: Basic Books.
Forbes, Silke, and Mara Lederman. 2009. “Adaptation and Vertical Integration in the Airline Industry.” American Economic Review 99 (5): 1831–49.
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. New York: Springer.
Hawkins, Jeff. 2004. On Intelligence. New York: Times Books.
Jha, S., and E. J. Topol. 2016. “Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists.” Journal of the American Medical Association 316 (22): 2353–54.
Lusted, L. B. 1960. “Logical Analysis in Roentgen Diagnosis.” Radiology 74: 178–93.
Markoff, John. 2015. Machines of Loving Grace. New York: HarperCollins Publishers.
Ng, Andrew. 2016. “What Artificial Intelligence Can and Can’t Do Right Now.” Harvard Business Review Online. Accessed Dec. 8, 2016. https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now.
Simon, H. A. 1951. “A Formal Theory of the Employment Relationship.” Econometrica 19 (3): 293–305.
Tadelis, S. 2002. “Complexity, Flexibility and the Make-or-Buy Decision.” American Economic Review 92 (2): 433–37.
Tirole, J. 2009. “Cognition and Incomplete Contracts.” American Economic Review 99 (1): 265–94.
Varian, Hal R. 2014. “Big Data: New Tricks for Econometrics.” Journal of Economic Perspectives 28 (2): 3–28.
Comment    Andrea Prat

Andrea Prat is the Richard Paul Richman Professor of Business at Columbia Business School and professor of economics at Columbia University. For acknowledgments, sources of research support, and disclosure of the author’s material financial relationships, if any, please see http://www.nber.org/chapters/c14022.ack.
One of the key activities of organizations is to collect, process, combine,
and utilize information (Arrow 1974). A modern corporation exploits
the vast amounts of data that it accumulates from marketing, operations,
human resources, finance, and other functions to grow faster and be more
productive. This exploitation process depends on the kind of information technology (IT) that is available to the firm. If IT undergoes a revolution, we should expect deep structural changes in the way firms are organized (Milgrom and Roberts 1990).
Agrawal, Gans, and Goldfarb explore the effects that an IT revolution centered on artificial intelligence could have on organizations. Their analysis highlights an insightful distinction between prediction, the process of forecasting a state of the world θ given observable information, and judgment, the assessment of the effects of the state of the world and of the possible action x the organization can take in response to it, namely, the value of the payoff function u(θ, x).
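The distinction can be written compactly. In the sketch below, the state θ, the action x, and the payoff u(θ, x) follow the chapter; the signal s and the decision rule are added shorthand for “observable information” and the choice it induces:

```latex
% Prediction vs. judgment: a minimal formalization of the authors' distinction.
% s (the observable signal) is added notation; \theta, x, and u follow the text.
\begin{align*}
\text{Prediction:} \quad & p(\theta \mid s)
  && \text{a forecast of the state given observable information,} \\
\text{Judgment:}   \quad & u(\theta, x)
  && \text{knowledge of the payoff from action } x \text{ in state } \theta, \\
\text{Decision:}   \quad & x^{*}(s) \in \arg\max_{x}\;
  \mathbb{E}_{\theta \sim p(\cdot \mid s)}\bigl[u(\theta, x)\bigr].
\end{align*}
```

Better prediction sharpens p(θ | s); better judgment means knowing u(θ, x) more precisely; the decision x*(s) requires both.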
This is an important point of departure from existing work. Almost all
economists—as well as computer scientists and decision scientists—assume
that the payoff function u(θ, x) is known: the decision maker is presumed to have a good sense of how actions and states combine to create outcomes.
This assumption, however, is highly unrealistic. The credit card fraud ex-
ample supplied by the authors is convincing. What is the long-term cost
to a bank of approving a fraudulent transaction or labeling a legitimate
transaction a suspected fraud?
Organizations can spend resources to improve both their prediction precision and their judgment quality. Agrawal, Gans, and Goldfarb characterize
the solution to this optimization problem. Their main result is that, under reasonable assumptions, investment in prediction and investment in judgment are complementary (Proposition 2). Investing in prediction makes investment in judgment more beneficial in expected value.
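The flavor of the result can be checked numerically with a deliberately crude toy model. Everything below is my construction rather than the authors’ formal setup: rho stands in for prediction accuracy, lam for judgment quality, and the payoff assumes judgment is what allows a prediction to be acted on, with a safe default otherwise. The loop verifies increasing differences, the defining property of complements:

```python
# Toy check of prediction-judgment complementarity via increasing differences.
# The payoff function is invented for illustration (not the chapter's model):
# with probability lam the decision maker has the judgment needed to act on
# the prediction, which is correct with probability rho; otherwise she takes
# a safe default action with a fixed payoff.
import itertools

def expected_payoff(rho: float, lam: float, default: float = 0.3) -> float:
    return lam * rho + (1.0 - lam) * default

grid = [0.5, 0.6, 0.7, 0.8, 0.9]
pairs = list(itertools.combinations(grid, 2))  # (low, high) pairs

for (r_lo, r_hi), (l_lo, l_hi) in itertools.product(pairs, pairs):
    gain_judgment_hi = expected_payoff(r_hi, l_hi) - expected_payoff(r_lo, l_hi)
    gain_judgment_lo = expected_payoff(r_hi, l_lo) - expected_payoff(r_lo, l_lo)
    # Complementarity: better prediction is worth more when judgment is better.
    assert gain_judgment_hi > gain_judgment_lo

print("Increasing differences hold on the whole grid: complements.")
```

In this toy the cross effect is built in by hand; the point is only that the authors’ complementarity result is a statement of exactly this increasing-differences form.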
This complementarity suggests that moving from a situation where prediction is prohibitively expensive to one where it is economical should increase the returns to judgment. From this perspective, the AI revolution will lead to an increase in the demand for judgment. However, judgment is an intrinsically different problem, one that cannot be solved through the analysis of big data.
Let me suggest an example. Admissions offices of many universities are turning to AI to choose which applicants to make offers to. Algorithms
can be trained on past admissions data. We observe the characteristics of
applicants and the grades of past and present students. Leaving aside the
censored observations problem arising from the fact that we only see the
grades of successful applicants who decide to enroll, we can hope that AI
can provide a fairly accurate prediction of an applicant’s future grades given
his or her observable characteristics. The obvious problem is that we do not
know how admitting someone who is likely to get high grades is going to
affect the long-term payoff of our university. The latter is a highly complex
object that depends on whether our alums become the kind of inspiring,
successful, and ethical people that will add to the academic reputation and
financial sustainability of our university. There is likely to be a connection
between grades and this long-term goal, but we are not sure what it is. In
this setting, Agrawal, Gans, and Goldfarb teach us an important lesson.
Progress in AI should induce our university leaders to ask deeper questions
about the relationship between student quality and the long-term goals of our higher-learning institutions. These questions cannot be answered within AI, but call instead for more theory-driven retrospective approaches or perhaps more qualitative methodologies.
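A small simulation makes the gap concrete. The setup below is entirely my own stylization (the traits, weights, and noise are invented): even with a near-perfect ranking of applicants by expected grades, admitting on that proxy leaves long-run value on the table, because the payoff function linking grades to the university’s objective is unknown to the algorithm:

```python
# Toy illustration (my construction, not the authors'): optimizing a proxy
# (predicted grades) versus the unknown long-run objective.
import random

random.seed(1)
N, SLOTS = 10_000, 1_000  # applicant pool and admission slots

applicants = []
for _ in range(N):
    aptitude = random.gauss(0, 1)    # drives grades; visible in test scores
    character = random.gauss(0, 1)   # drives long-run value; hard to observe
    grades = aptitude + random.gauss(0, 0.3)
    long_run_value = 0.4 * aptitude + 0.6 * character  # unknown to the algorithm
    applicants.append((grades, long_run_value))

def avg_value(admitted):
    return sum(v for _, v in admitted) / len(admitted)

by_grades = sorted(applicants, key=lambda a: a[0], reverse=True)[:SLOTS]
by_value = sorted(applicants, key=lambda a: a[1], reverse=True)[:SLOTS]

print(f"admit on predicted grades: avg long-run value {avg_value(by_grades):.2f}")
print(f"admit on true objective:   avg long-run value {avg_value(by_value):.2f}")
```

Better grade prediction cannot close this gap; only judgment about how grades map into the university’s objective can.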
As an organizational economist, I am particularly interested in the impli-
cations of Agrawal, Gans, and Goldfarb’s model for the study of organi-
zations. First, this chapter highlights the importance of the dynamics of
decision-making, a seriously underresearched topic. In a complex world,
organizations are not going to immediately collect all the information they
could possibly need about all possible contingencies they may face. Bolton
and Faure-Grimaud (2009), a source of inspiration for Agrawal, Gans, and Goldfarb, model a decision maker who can “think ahead” about decisions in yet-unrealized states of nature. They show that the typical decision maker does not want to think through a complete action plan, but rather prefers to focus on key short- and medium-term decisions. Agrawal, Gans, and Goldfarb show that Bolton and Faure-Grimaud’s ideas are highly relevant
for understanding how organizations are likely to respond to changes in
information technology.
Second, Agrawal, Gans, and Goldfarb also speak to the organizational
economics literature on mission. Dewatripont, Jewitt, and Tirole (1999)
develop a model where organizational leaders are agents whose type is
unknown, as in Holmstrom’s (1999) career concerns paradigm. Each agent
is assigned a mission, a set of measured variables that are used to evaluate
and reward the agent. Dewatripont, Jewitt, and Tirole identify a tension between selecting a simple one-dimensional mission, which will provide the agent with a strong incentive to perform well, and a “fuzzy” multidimensional mission, which will dampen the agent’s incentive to work hard but will more closely mirror the true objective of the organization.
This tension is also present in Agrawal, Gans, and Goldfarb’s world.
Should we give the organization a mission that is close to a pure prediction
problem, like admitting students who will get high grades? The pro is that
it will be relatively easy to assess the leader’s performance. The con is that
the outcome may be weakly related to the organization’s ultimate objective.
Or should we give the organization a mission that also comprises the judg-
ment problem, like furthering the long-term academic reputation of our
university? This mission would be more representative of the organization’s
ultimate objective, but may make it hard to assess our leaders and give them
a weak incentive to adopt new prediction technologies. One possible lesson
from Agrawal, Gans, and Goldfarb is that, as the cost of adopting AI goes
down, the moral hazard problem connected with judgment becomes relatively more important, thus militating in favor of incentive schemes that
reward judgment rather than prediction.
Third, Agrawal, Gans, and Goldfarb’s section on reliability touches on
an important topic. Is it better to have a technology that returns accurate
predictions with a low probability or less accurate predictions with a higher
probability? The answer to this question depends on the available judgment
technology. Better judgment technology increases the marginal benefit of prediction accuracy rather than prediction frequency. More broadly, this
type of analysis can guide the design of AI algorithms. Given the mapping
between states, actions, and outcomes, and given the cost of various pre-
diction technologies, what prediction technology should the organization
select? A general analysis of this question may require using information-theoretic concepts, introduced to economics by Sims (2003).
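To make the trade-off concrete, consider a toy decision problem; all functional forms and numbers below are invented for illustration and are not taken from the chapter. A prediction arrives with some frequency and, when it arrives, is correct with some accuracy; the decision maker acts on it only when, given her judgment quality, doing so beats a safe default:

```python
# Toy comparison of two prediction technologies under varying judgment quality.
# All parameters are invented for illustration; this is not the authors' model.
SAFE = 0.2             # payoff of the safe default action
GAIN, LOSS = 1.0, 1.0  # payoff of acting on a correct / wrong prediction

def value(acc: float, freq: float, judgment: float) -> float:
    """Expected payoff when a prediction arrives with probability `freq` and
    is correct with probability `acc`. `judgment` scales how much of the
    prediction's value the decision maker can realize; she acts on a
    prediction only when that beats the safe default."""
    act = judgment * (acc * GAIN - (1 - acc) * LOSS)
    return freq * max(SAFE, act) + (1 - freq) * SAFE

accurate_rare = dict(acc=0.95, freq=0.4)   # accurate but often silent
noisy_frequent = dict(acc=0.75, freq=0.9)  # noisier but almost always on

for judgment in (0.1, 0.3, 0.5, 0.7, 0.9):
    va = value(judgment=judgment, **accurate_rare)
    vf = value(judgment=judgment, **noisy_frequent)
    if abs(va - vf) < 1e-12:
        best = "tie (both ignored)"
    else:
        best = "accurate-but-rare" if va > vf else "noisy-but-frequent"
    print(f"judgment={judgment:.1f}  accurate={va:.3f}  frequent={vf:.3f}  -> {best}")
```

In this parameterization, weak judgment means neither technology’s predictions clear the default, so accuracy and frequency are both worthless; as judgment improves, the accurate-but-rare technology is the first to become useful and pulls further ahead, echoing the comparative static described above.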
Fourth, Agrawal, Gans, and Goldfarb show that economic theory can
make important contributions to the debate over how AI will affect optimal
organization. There is a related area where the interaction between econo-
mists and computer scientists can be beneficial. Artificial intelligence typically assumes a stable flow of instances. When a bank develops an AI-based
system to detect fraud, it assumes that the available data, which is used to
build and test the detection algorithm, comes from the same data-generating process as future data on which the algorithm will be applied. However, the underlying data-generating process is not an exogenously given natural phenomenon: it is the output of a set of human beings who are pursuing their own goals, like maximizing the chance of getting their nonfraudulent application accepted or maximizing their chance of defrauding the bank. These sentient creatures will in the long term respond to the fraud-detection
algorithm by modifying their application strategy, for instance, by providing
different information or by exerting effort to modify the reported variables.
This means that the data-generating process will be subject to a structural change and that this change will be endogenous to the fraud-detection algorithm chosen by the bank. A similar phenomenon occurs in the university
admission example discussed above: a whole consulting industry is devoted
to understanding admissions criteria and advising applicants on how to
maximize their success chances. A change in admissions practices is likely
to be reflected in the choices that high school students make.
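The mechanism is easy to simulate. The sketch below is a minimal invented illustration (a one-dimensional “suspiciousness score” with arbitrary distributions): a threshold rule tuned to historical data loses power once fraudsters shift their behavior in response to it:

```python
# Minimal simulation of an endogenous data-generating process in fraud
# detection. The score, distributions, and shift are invented for illustration.
import random

random.seed(0)

def draw(mean: float, n: int) -> list[float]:
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Historical data: legitimate transactions score low, fraudulent ones high.
legit, fraud = draw(0.0, 10_000), draw(3.0, 10_000)

# "Train" a detector: a threshold halfway between the two sample means.
threshold = (sum(legit) / len(legit) + sum(fraud) / len(fraud)) / 2

def detection_rate(scores: list[float]) -> float:
    return sum(s > threshold for s in scores) / len(scores)

print(f"detection on historical fraud: {detection_rate(fraud):.1%}")

# Strategic response: fraudsters learn roughly where the threshold sits and
# mimic legitimate behavior, shifting their score distribution downward.
adapted_fraud = draw(1.0, 10_000)
print(f"detection after adaptation:    {detection_rate(adapted_fraud):.1%}")
```

Retraining on the post-adaptation data merely restarts the cycle; anticipating the response requires modeling the applicants’ incentives directly, which is where the structural approach discussed next comes in.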
If the data-generating process is endogenous and depends on the prediction technology adopted by the organization, the judgment problem identified by Agrawal, Gans, and Goldfarb becomes even more complex. The
organization must evaluate how other agents will respond to changes in the
prediction technology. As, by definition, no data is available about not yet realized data-generating processes, the only way to approach this problem
is by estimating a structural model that allows other agents to respond to
changes in our prediction technology.
In conclusion, Agrawal, Gans, and Goldfarb make a convincing case
that the AI revolution should increase the benefit of improving our judg-
ment ability. They also provide us with a tractable yet powerful framework
to understand the interaction between prediction and judgment. Future
research should focus on further understanding the implications of improvements in prediction technology for the optimal structure of organizations.
References
Arrow, Kenneth J. 1974. The Limits of Organization. New York: W. W. Norton.
Bolton, P., and A. Faure-Grimaud. 2009. “Thinking Ahead: The Decision Problem.” Review of Economic Studies 76: 1205–38.
Dewatripont, Mathias, Ian Jewitt, and Jean Tirole. 1999. “The Economics of Career Concerns, Part II: Application to Missions and Accountability of Government Agencies.” Review of Economic Studies 66 (1): 199–217.
Holmstrom, Bengt. 1999. “Managerial Incentive Problems: A Dynamic Perspective.” Review of Economic Studies 66 (1): 169–82.
Milgrom, Paul, and John Roberts. 1990. “The Economics of Modern Manufacturing: Technology, Strategy, and Organization.” American Economic Review 80 (3): 511–28.
Sims, Christopher. 2003. “Implications of Rational Inattention.” Journal of Monetary Economics 50 (3): 665–90.