References
Acemoglu, D., and P. Restrepo. 2017. “Robots and Jobs: Evidence from US Labor Markets.” NBER Working Paper no. 23285, Cambridge, MA.
———. 2018. “Artificial Intelligence, Automation and Work.” NBER Working Paper no. 24196, Cambridge, MA.
Aghion, P., and P. Howitt. 1992. “A Model of Growth through Creative Destruction.” Econometrica 60 (2): 323–51.
Arrow, K. 1962. “Economic Welfare and the Allocation of Resources for Invention.” In The Rate and Direction of Inventive Activity: Economic and Social Factors, edited by R. R. Nelson. Princeton, NJ: Princeton University Press.
Bresnahan, T., E. Brynjolfsson, and L. Hitt. 2002. “Information Technology, Workplace Organization, and the Demand for Skilled Labor: Firm-Level Evidence.” Quarterly Journal of Economics 117 (1): 339–76.
Bresnahan, T., and S. Greenstein. 1999. “Technological Competition and the Structure of the Computer Industry.” Journal of Industrial Economics 47 (1): 1–40.
Bresnahan, T., and M. Trajtenberg. 1995. “General Purpose Technologies ‘Engines of Growth’?” Journal of Econometrics 65:83–108.
Brooks, R. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6:3–15.
Brynjolfsson, E., and L. M. Hitt. 2000. “Beyond Computation: Information Technology, Organizational Transformation and Business Performance.” Journal of Economic Perspectives 14 (4): 23–48.
David, P. 1990. “The Dynamo and the Computer: An Historical Perspective on the Productivity Paradox.” American Economic Review 80 (2): 355–61.
The Impact of Artificial Intelligence on Innovation 145
Evenson, R. E., and Y. Kislev. 1976. “A Stochastic Model of Applied Research.” Journal of Political Economy 84 (2): 265–82.
Furman, J. L., and S. Stern. 2011. “Climbing atop the Shoulders of Giants: The Impact of Institutions on Cumulative Research.” American Economic Review 101 (5): 1933–63.
Griliches, Z. 1957. “Hybrid Corn: An Exploration in the Economics of Technological Change.” Econometrica 25 (4): 501–22.
Hall, B. H., A. B. Jaffe, and M. Trajtenberg. 2001. “The NBER Patent Citation Data File: Lessons, Insights and Methodological Tools.” NBER Working Paper no. 8498, Cambridge, MA.
Hinton, G. E., and R. R. Salakhutdinov. 2006. “Reducing the Dimensionality of Data with Neural Networks.” Science 313 (5786): 504–07.
Jones, B. F. 2009. “The Burden of Knowledge and the ‘Death of the Renaissance Man’: Is Innovation Getting Harder?” Review of Economic Studies 76 (1): 283–317.
Krizhevsky, A., I. Sutskever, and G. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” Advances in Neural Information Processing Systems 25 (2). MIT Press. https://www.researchgate.net/journal/1049-5258_Advances_in_neural_information_processing_systems.
Leung, M. K. K., A. Delong, B. Alipanahi, and B. J. Frey. 2016. “Machine Learning in Genomic Medicine: A Review of Computational Problems and Data Sets.” Proceedings of the IEEE 104 (1): 176–97.
Marco, A., M. Carley, S. Jackson, and A. Myers. 2015. “The USPTO Historical Patent Data Files.” USPTO Working Paper no. 2015-01, United States Patent and Trademark Office, 1–57.
Marco, A., A. Myers, S. Graham, P. D’Agostino, and K. Apple. 2015. “The USPTO Patent Assignment Dataset: Descriptions and Analysis.” USPTO Working Paper no. 2015-02, United States Patent and Trademark Office, 1–53.
Mokyr, J. 2002. Gifts of Athena. Princeton, NJ: Princeton University Press.
Murray, F., and S. O’Mahony. 2007. “Exploring the Foundations of Cumulative Innovation: Implications for Organization Science.” Organization Science 18 (6): 1006–21.
Nelson, Richard. 1959. “The Simple Economics of Basic Scientific Research.” Journal of Political Economy 67 (3): 297–306.
Newell, A., J. C. Shaw, and H. A. Simon. 1958. “Elements of a Theory of Human Problem Solving.” Psychological Review 65 (3): 151–66.
Newell, A., and H. A. Simon. 1976. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19 (3): 113–26.
Nilsson, N. 2010. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge: Cambridge University Press.
Romer, P. 1990. “Endogenous Technological Change.” Journal of Political Economy 98 (5): S71–102.
Rosenblatt, F. 1958. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review 65 (6): 386–408.
———. 1962. The Principles of Neurodynamics. New York: Spartan Books.
Rumelhart, D., G. Hinton, and R. Williams. 1986. “Learning Internal Representations by Error Propagation.” In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2: Psychological and Biological Models, edited by J. McClelland and D. Rumelhart, 7–57. Cambridge, MA: MIT Press.
Scotchmer, S. 1991. “Standing on the Shoulders of Giants: Cumulative Research and the Patent Law.” Journal of Economic Perspectives 5 (1): 29–41.
Turing, A. 1950. “Computing Machinery and Intelligence.” Mind 59:433–60.
Wallach, I., M. Dzamba, and A. Heifets. 2015. “AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction in Structure-Based Drug Discovery.” arXiv:1510.02855 [cs.LG]. https://arxiv.org/abs/1510.02855.
Williams, H. 2013. “Intellectual Property Rights and Innovation: Evidence from the Human Genome.” Journal of Political Economy 121 (1): 1–27.
Comment Matthew Mitchell
In their very interesting chapter, Cockburn, Henderson, and Stern make the
case that artificial intelligence (AI) might serve as a general purpose technology in the production of innovations. My discussion centers on what this
might mean for policy, and especially policies surrounding intellectual prop-
erty (IP) protection. In particular, AI is likely to bring up new questions that
are familiar from old IP debates about the balance between rewarding inno-
vation and fears that this protection might in turn deter future innovation.
Is AI a Technology for Innovation or Imitation?
It is not obvious whether AI is a general purpose technology for innovation or a very efficient method of imitation. The answer has direct relevance for policy. A technology that made innovation cheaper would often (but not always) imply less need for strong IP protection, since the balance would swing toward limiting monopoly power and away from compensating innovation costs. To the extent that a technology reduces the cost of imitation, however, it typically necessitates greater protection.
New technology is often useful for both innovation and imitation. For instance, technologies like plastic molds, which can offer the possibility of new designs and therefore foster innovation, also lead to greater possibilities for reverse engineering. Machine learning is, in a sense, a sophisticated sort of mimicking; it sees what “works” (by some criterion) and finds ways to exploit that relationship. Therefore it seems that AI might be a general purpose technology for either innovation or imitation.
Consider a news aggregator. Many of these aggregators work because
of some form of machine learning; they match the user to news stories that
are predicted to be of interest. This is clearly a service that generates value,
and would not exist in anything like its realized form in the absence of the
underlying AI technology. But some news sites have argued that this con-
stitutes infringement of their copyright. Semantically there is a question: Is
the aggregator technology an innovation or is it imitation?
Matthew Mitchell is professor of economic analysis and policy at the University of Toronto.
For acknowledgments, sources of research support, and disclosure of the author’s material financial relationships, if any, please see http://www.nber.org/chapters/c14023.ack.
Of course the answer is that it is both. It is much like the case of sequential innovations, where a later innovation builds on the earlier one, and at the same time uses and improves upon the prior. In those cases, to decide whether the new innovation is a sufficient breakthrough on the old, words like “nonobvious” are employed in patent law. It is not completely clear how such words would apply to innovations that are made by machines; nonobviousness is defined in terms of a “person having ordinary skill in the art” and therefore is fundamentally about the human brain. How we will answer semantic questions like “what is obvious?” in a world where innovations are generated by machines will be central, and difficult, if we are to balance IP rewards and costs.
Situations like that of news aggregators have largely been managed, in practice, by the internet version of contracts. A news source can make its articles visible or invisible to the aggregator by blocking the content through a robots.txt file. That leaves only a competition concern: if news aggregators are few, they may still have monopoly power over creators of underlying content, making it difficult to solve problems simply by allowing content providers to opt out. The aggregator might control so much consumer attention that a news source cannot be viable without it.
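For concreteness, a minimal robots.txt sketch of this opt-out mechanism (the crawler name NewsAggregatorBot and the /articles/ path are hypothetical, standing in for a real aggregator’s user agent and a site’s content directory):

```
# Hypothetical robots.txt for a news site opting out of one aggregator.
User-agent: NewsAggregatorBot
Disallow: /articles/

# All other crawlers may continue to index the site.
User-agent: *
Disallow:
```

An empty Disallow line grants access, so the file excludes only the named aggregator while leaving the site visible to everyone else.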
Hammers That Make Nails
The aggregator example brings up the question of what policies might foster competition in a world where innovations are made using AI. Cockburn, Henderson, and Stern highlight the importance of data sharing and availability as an essential input in a world where the data itself is an input into the production of innovation by AI. This is clearly of critical importance. One issue that complicates policy is that the innovations may not only be produced from data, but also generate new data. Google’s search engine generated data from users because it was a superior engine in the first place, but this can undoubtedly cement Google’s market position. In a sense, asking the right questions or solving the right problems initially can generate users and data that lead to more innovations in the future. It is like a hammer that both needs nails to be productive, and also produces nails; being the first user of the hammer magnifies the advantage by creating more of the complementary input.
Here the economics literature on IP highlights two effects to balance: giving property rights to data (and not forcing the nails to be shared) encourages using the hammer in the first place (since it increases the value of the nails it produces), but it can also make the hammer-nail technology less efficient for other firms (since they have less access to nails as an input). Striking the right balance on property rights for data goes to the heart of the classic debate on how much competition is good for innovation.
Competition, Innovation, and Privacy
Whinston (2012) summarizes the classic forces of competition before and after innovation: Arrow (1962) suggests that ex ante competition is good for innovation, whereas Schumpeter (1942) argues that ex post competition is bad for innovation. Because today’s innovations tend to lead to future innovations, for instance through the data they generate when AI is involved, there is unfortunately no clear distinction between ex ante and ex post to serve as a rule. In the case of data, there is another force: privacy. It may be distasteful to enforce a data-sharing standard that would lead to multiple firms having the inputs necessary to attack the same problem. Goldfarb and Tucker (2012) point out that this means that privacy policy is connected to innovation policy more generally. Restrictions on data ownership will mean restrictions on a vital input into the innovation production process when innovations are produced with AI.
Since privacy concerns will likely mean less competition for innovation technologies built on AI, policymakers will have to be vigilant about insufficient competition. Since concern about insufficient competition harming innovation is largely about a lack of ex ante competition, the most important areas will be innovations in early-stage, relatively uncluttered areas of the technology space. Tailoring innovation policy in a new world of AI-generated innovations will require taking care to heed the general lessons of balancing the benefits and costs of market power, while at the same time taking seriously the important new issues that are specific to the AI context. Cockburn, Henderson, and Stern’s work helps us to better understand that context.
References
Arrow, K. 1962. “Economic Welfare and the Allocation of Resources to Invention.” In The Rate and Direction of Inventive Activity: Economic and Social Factors, edited by Universities-National Bureau Committee for Economic Research and the Committee on Economic Growth of the Social Science Research Councils, 467–92. Princeton, NJ: Princeton University Press.
Goldfarb, Avi, and Catherine Tucker. 2012. “Privacy and Innovation.” In Innovation Policy and the Economy, vol. 12, edited by Josh Lerner and Scott Stern, 65–89. Chicago: University of Chicago Press.
Schumpeter, Joseph. 1942. Capitalism, Socialism and Democracy. New York: Harper & Brothers.
Whinston, Michael D. 2012. “Comment on ‘Competition and Innovation: Did Arrow Hit the Bull’s Eye?’” In The Rate and Direction of Inventive Activity Revisited, edited by Josh Lerner and Scott Stern, 404–10. Chicago: University of Chicago Press.
5
Finding Needles in Haystacks
Artificial Intelligence and
Recombinant Growth
Ajay Agrawal, John McHale, and Alexander Oettl
The potential for continued economic growth comes from the vast
search space that we can explore. The curse of dimensionality is, for
economic purposes, a remarkable blessing. To appreciate the potential
for discovery, one need only consider the possibility that an extremely
small fraction of the large number of potential mixtures may be valu-
able. (Romer 1993, 68– 69)
Deep learning is making major advances in solving problems that
have resisted the best attempts of the artificial intelligence community
for years. It has turned out to be very good at discovering intricate
structure in high- dimensional data and is therefore applicable to many
domains of science, business, and government. (LeCun, Bengio, and
Hinton 2015, 436)
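The arithmetic behind Romer’s epigraph can be made concrete with a short illustrative sketch; the component counts and mixture size below are arbitrary choices for illustration, not figures from the chapter.

```python
import math

# Romer's "vast search space": with n existing components, the number of
# distinct k-component mixtures is C(n, k), which grows explosively in n.
def possible_combinations(n_ideas: int, k: int) -> int:
    """Number of distinct k-element mixtures drawn from n_ideas components."""
    return math.comb(n_ideas, k)

for n in (10, 50, 100):
    # C(10, 5) = 252; C(50, 5) = 2,118,760; C(100, 5) = 75,287,520
    print(n, possible_combinations(n, 5))
```

Even if only one mixture in a million were valuable, 100 components would already yield roughly seventy-five valuable five-idea combinations in expectation: the needles in the haystack of the chapter’s title.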
Ajay Agrawal is the Peter Munk Professor of Entrepreneurship at the Rotman School of Management, University of Toronto, and a research associate of the National Bureau of Economic Research. John McHale is Established Professor of Economics and Dean of the College of Business, Public Policy, and Law at the National University of Ireland. Alexander Oettl is associate professor of strategy and innovation at the Georgia Institute of Technology and a research associate of the National Bureau of Economic Research.
We thank Kevin Bryan, Joshua Gans, and Chad Jones for thoughtful input on this chapter. We gratefully acknowledge financial support from Science Foundation Ireland, the Social Sciences Research Council of Canada, the Centre for Innovation and Entrepreneurship at the Rotman School of Management, and the Whitaker Institute for Innovation and Societal Development. For acknowledgments, sources of research support, and disclosure of the authors’ material financial relationships, if any, please see http://www.nber.org/chapters/c14024.ack.

5.1 Introduction

What are the prospects for technology-driven economic growth? Technological optimists point to the ever-expanding possibilities for combining existing knowledge into new knowledge (Romer 1990, 1993; Weitzman 1998; Arthur 2009; Brynjolfsson and McAfee 2014). The counter case
put forward by technological pessimists is primarily empirical: growth at
the technological frontier has been slowing down rather than speeding up