by Ajay Agrawal
(Cowen 2011; Gordon 2016). Gordon (2016, 575) highlights this slowdown
for the US economy. Between 1920 and 1970, total factor productivity grew
at an annual average compound rate of 1.89 percent, falling to 0.57 percent
between 1970 and 1994, then rebounding to 1.03 percent during the
information technology boom between 1994 and 2004, before falling again
to just 0.40 percent between 2004 and 2014. Even the maintenance of this
lowered growth rate has only been possible due to exponential growth in
the number of research workers (Jones 1995). Bloom et al. (2017) document
that total factor productivity in knowledge production itself has been
falling, both in the aggregate and in key specific knowledge domains such as
transistors, health care, and agriculture.
Economists have given a number of explanations for the disappointing
growth performance. Cowen (2011) and Gordon (2016) point to a “fishing
out” or “low-hanging fruit” effect—good ideas are simply becoming harder
to find. Jones (2009) points to the headwind created by an increased “burden
of knowledge.” As the technological frontier expands, it becomes harder for
individual researchers to know enough to find the combinations of knowledge
that produce useful new ideas. This is reflected in PhDs being awarded
at older ages and a rise in team size as ever-more specialized researchers must
combine their knowledge to produce breakthroughs (Agrawal, Goldfarb,
and Teodoridis 2016). Other evidence points to the physical, social, and
institutional constraints that limit access to knowledge, including the need
to be physically close to the sources of knowledge (Jaffe, Trajtenberg, and
Henderson 1993; Catalini 2017), the importance of social relationships in
accessing knowledge (Mokyr 2002; Agrawal, Cockburn, and McHale 2006;
Agrawal, Kapur, and McHale 2008), and the importance of institutions in
facilitating—or limiting—access to knowledge (Furman and Stern 2011).
Despite the evidence of a growth slowdown, one reason to be hopeful
about the future is the recent explosion in data availability under the rubric
of “big data” and computer-based advances in capabilities to discover and
process those data. We can view these technologies in part as “meta
technologies”—technologies for the production of new knowledge. If part of the
challenge is dealing with the combinatorial explosion in the potential ways
that existing knowledge can be combined as the knowledge base grows, then
meta technologies such as deep learning hold out the potential to partially
overcome the challenges of fishing out, the rising burden of knowledge, and
the social and institutional constraints on knowledge access.
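The scale of that combinatorial explosion is easy to illustrate. Here is a minimal sketch (our own toy illustration, not from the chapter), assuming every subset of two or more existing ideas counts as a potential combination:

```python
def candidate_combinations(n_ideas: int) -> int:
    """Count subsets of size >= 2 among n_ideas existing ideas:
    2**n total subsets, minus the empty set and the n singletons."""
    return 2 ** n_ideas - n_ideas - 1

# The space of potential combinations explodes as knowledge accumulates.
for n in (10, 20, 50):
    print(n, candidate_combinations(n))
```

Even at fifty ideas there are already more than 10^15 candidate subsets, which is why unaided search is hopeless and technologies that predict which combinations are promising become valuable.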
Of course, meta technologies that aid in the discovery of new knowledge
are nothing new. Mokyr (2002, 2017) gives numerous examples of how
scientific instruments such as microscopes and x-ray crystallography significantly
Artificial Intelligence and Recombinant Growth 151
aided the discovery process. Rosenberg (1998) provides an account of how
technology-embodied chemical engineering altered the path of discovery in
the petrochemical industry. Moreover, the use of artificial intelligence (AI)
for discovery is itself not new and has underpinned fields such as
cheminformatics, bioinformatics, and particle physics for decades. However, recent
breakthroughs in AI such as deep learning have given a new impetus to these
fields.1 The convergence of graphical processing unit (GPU)-accelerated
computing power, exponential growth in data availability buttressed in part
by open data sources, and the rapid advance in AI-based prediction
technologies is leading to breakthroughs in solving many needle-in-a-haystack
problems (chapter 3, this volume). If the curse of dimensionality is both
the blessing and curse of discovery, advances in AI offer renewed hope of
breaking the curse while helping to deliver on the blessing.
Understanding how these technologies could affect future growth dynamics
is likely to require an explicitly combinatorial framework. Weitzman’s
(1998) pioneering development of a recombinant growth model has
unfortunately not been well incorporated into the corpus of growth theory
literature. Our contribution in this chapter is thus twofold. First, we develop a
relatively simple combinatorial-based knowledge production function that
converges in the limit to the Romer/Jones function. The model allows for
the consideration of how existing knowledge is combined to produce new
knowledge and also how researchers combine to form teams. Second, while
this function can be incorporated into existing growth models, the specific
combinatorial foundations mean that the model provides insights into how
new meta technologies such as artificial intelligence might matter for the path
of future economic growth.
The starting point for the model we develop is the Romer/Jones knowledge
production function. This function—a workhorse of modern growth
theory—models the output of new ideas as a Cobb-Douglas function with
the existing knowledge stock and labor resources devoted to knowledge
production as inputs. Implicit in the Romer/Jones formulation is that new
knowledge production depends on access to the existing knowledge stock
and the ability to combine distinct elements of that stock into valuable new
ideas. The promise of AI as a meta technology for new idea production
is that it facilitates the search over complex knowledge spaces, allowing
for both improved access to relevant knowledge and improved capacity to
predict the value of new combinations. It may be especially valuable where
the complexity of the underlying biological or physical systems has stymied
technological advance, notwithstanding the apparent promise of new fields
such as biotechnology or nanotechnology. We thus develop an explicitly
combinatorial-based knowledge production function. Separate parameters
1. See, for example, the recent survey of the use of deep learning in computational chemistry by Garrett Goh, Nathan Hodas, and Abhinav Vishnu (2017).
152 Ajay Agrawal, John McHale, and Alexander Oettl
control the ease of knowledge access, the ability to search the complex space
of potential combinations, and the ease of forming research teams to pool
knowledge access. An attractive feature of our proposed function is that the
Romer/Jones function emerges as a limiting case. By explicitly delineating
the knowledge access, combinatorial, and collaboration aspects of knowledge
production, we hope that the model can help elucidate how AI could
improve the chances of solving needle-in-a-haystack-type challenges and
thus influence the path of economic growth.
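For reference, a standard statement of the Romer/Jones knowledge production function, in the notation of Jones (1995), is the following (this display is our addition, not part of the chapter text):

```latex
\dot{A} = \delta L_A^{\lambda} A^{\phi},
\qquad \delta > 0,\quad 0 < \lambda \le 1,\quad \phi < 1,
```

where \dot{A} is the flow of new ideas, L_A is labor devoted to research, \lambda captures duplication among contemporaneous researchers, and \phi governs how the existing stock A raises (or, if negative, lowers) the productivity of research.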
Our chapter thus contributes to a recent but rapidly expanding literature
on the effects of AI on economic growth. Much of the focus of this new
literature is on how increased automation substitutes for labor in the
production process. Building on the pioneering work of Zeira (1998), Acemoglu
and Restrepo (2017) develop a model in which AI substitutes for workers in
existing tasks, but also creates new tasks for workers to do. Aghion, Jones,
and Jones (chapter 9, this volume) show how automation can be consistent
with relatively constant factor shares when the elasticity of substitution
between goods is less than one. Central to their results is Baumol’s “cost
disease,” which posits the ultimate constraint on growth to be from goods
that are essential but hard to improve rather than goods whose production
benefits from AI-driven technical change. In a similar vein, Nordhaus (2015)
explores the conditions under which AI would lead to an “economic
singularity” and examines the empirical evidence on the elasticity of substitution
on both the demand and supply sides of the economy.
Our focus is different from these papers in that instead of emphasizing the
potential substitution of machines for workers in existing tasks, we emphasize
the importance of AI in overcoming a specific problem that impedes
human researchers—finding useful combinations in complex discovery
spaces. Our chapter is closest in spirit to Cockburn, Henderson, and Stern
(chapter 4, this volume), which examines the implications of AI—and deep
learning in particular—as a general purpose technology (GPT) for invention.
We provide a suggested formalization of this key idea. Nielsen (2012)
usefully illuminates the myriad ways in which “big data” and associated
technologies are changing the mechanisms of discovery in science. Nielsen
emphasizes the increasing importance of “collective intelligence” in formal
and informal networked teams, the growth of “data-driven intelligence”
that can solve problems that challenge human intelligence, and the
importance of increased technology facilitating access to knowledge and data. We
incorporate all of these elements into the model developed in this chapter.
The rest of the chapter is organized as follows. In the next section, we
outline some examples of how advances in artificial intelligence are changing
both knowledge access and the ability to combine knowledge in
high-dimensional data across a number of domains. In section 5.3, we develop an
explicitly combinatorial-based knowledge production function and embed
it in the growth model of Jones (1995), which itself is a modification of
Romer (1990). In section 5.4, we extend the basic model to allow for knowledge
production by teams. We discuss our results in section 5.5 and conclude
in section 5.6 with some speculative thoughts on how an “economic
singularity” might emerge.
5.2 How Artificial Intelligence Is Impacting the Production of Knowledge: Some Motivating Examples
Breakthroughs in AI are already impacting the productivity of scientific
research and technology development. It is useful to distinguish between
such meta technologies that aid in the process of search (knowledge access)
and discovery (combining existing knowledge to produce new knowledge).
For search, we are interested in AIs that solve problems that meet two
conditions: (a) potential knowledge relevant to the process of discovery is subject
to an explosion of data that an individual researcher or team of researchers
finds increasingly difficult to stay abreast of (the “burden of knowledge”);
and (b) the AI predicts which pieces of knowledge will be most relevant to
the researcher, typically through the input of search terms. For discovery,
we also identify two conditions: (a) potentially combinable knowledge for
the production of new knowledge is subject to combinatorial explosion,
and (b) the AI predicts which combinations of existing knowledge will yield
valuable new knowledge across a large number of domains. We now consider
some specific examples of how AI-based search and discovery technologies
may change the innovation process.
5.2.1 Search
Meta produces AI-based search technologies for identifying relevant
scientific papers and tracking the evolution of scientific ideas. The company
was acquired by the Chan-Zuckerberg Foundation, which intends to make
it available free of charge to researchers. This AI-based search technology
meets our two conditions for a meta technology for knowledge access: (a) the
stock of scientific papers is subject to exponential growth at an estimated
8–9 percent per year (Bornmann and Mutz 2015), and (b) the AI-based
search technology helps scientists identify relevant papers, thereby reducing
the “burden of knowledge” associated with the exponential growth of
published output.
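To put that growth rate in perspective, a quick back-of-the-envelope calculation (ours, not the chapter's) converts an 8–9 percent annual growth rate into a doubling time for the stock of papers:

```python
from math import log

def doubling_time(annual_growth_rate: float) -> float:
    """Years for a stock growing at a constant compound rate to double."""
    return log(2) / log(1 + annual_growth_rate)

# At 8-9 percent a year, the literature doubles roughly every 8-9 years.
print(round(doubling_time(0.08), 1))  # ~9.0 years
print(round(doubling_time(0.09), 1))  # ~8.0 years
```

A literature that doubles every eight to nine years makes the burden-of-knowledge problem concrete: no individual reading program can keep pace.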
BenchSci is an AI-based search technology for the more specific task of
identifying effective compounds used in drug discovery (notably antibodies
that act as reagents in scientific experiments). It again meets our two
conditions: (a) reports on compound efficacy are scattered through millions
of scientific papers with little standardization in how these reports are
provided, and (b) an AI extracts compound-efficacy information, allowing
scientists to more effectively identify appropriate compounds to use in
experiments.
5.2.2 Discovery
Atomwise is a deep learning-based AI for the discovery of drug molecules
(compounds) that have the potential to yield safe and effective new drugs.
This AI meets our two conditions for a meta technology for discovery: (a) the
number of potential compounds is subject to combinatorial explosion, and
(b) the AI predicts how basic chemical features combine into more intricate
features to identify potential compounds for more detailed investigation.
Deep Genomics is a deep learning-based AI that predicts what happens
in a cell when DNA is altered by natural or therapeutic genetic variation.
It again meets our two conditions: (a) genotype-phenotype variations are
subject to combinatorial explosion, and (b) the AI “bridges the
genotype-phenotype divide” by predicting the results of complex biological processes
that relate variations in the genotype to observable characteristics of an
organism, thus helping to identify potentially valuable therapeutic
interventions for further testing.
5.3 A Combinatorial-Based Knowledge Production Function
Figure 5.1 provides an overview of our modeling approach and how it
relates to the classic Romer/Jones knowledge production function. The solid
lines capture the essential character of the Romer/Jones function. Researchers
use existing knowledge—the standing-on-shoulders effect—to produce
Fig. 5.1 Romer/Jones and combinatorial-based knowledge production functions
new knowledge. The new knowledge then becomes part of the knowledge
base from which subsequent discoveries are made. The dashed lines capture
our approach. The existing knowledge base determines the potential new
combinations that are possible, the majority of which are likely to have no
value. The discovery of valuable new knowledge is made by searching among
the massive number of potential combinations. This discovery process is
aided by meta technologies such as deep learning that allow researchers to
identify valuable combinations in spaces where existing knowledge interacts
in often highly complex ways. As with the Romer/Jones function, the new
knowledge adds to the knowledge base—and thus the potential combinations
of that knowledge base—which subsequent researchers have to work
with. A feature of our new knowledge production function will be that the
Romer/Jones function emerges as a limiting case both with and without
team production of new knowledge. In this section, we first develop the new
function without team production of new knowledge; in the next section,
we extend the function to allow for team production.
The total stock of knowledge in the world is denoted as A, which we
assume initially is measured discretely. An individual researcher has access
to an amount of knowledge, A^ϕ (also assumed to be an integer), so that the
share of the stock of knowledge available to an individual researcher is A^(ϕ–1).2
We assume that 0 < ϕ < 1. This implies that the share of total knowledge
accessible to an individual researcher is falling with the total stock of
knowledge. This is a manifestation in the model of the “burden of knowledge”
effect identified by Jones (2009)—it becomes more difficult to access all the
available knowledge as the total stock of knowledge grows. The knowledge
access parameter, ϕ, is assumed to capture not only what a researcher knows
at a point in time, but also their ability to find existing knowledge should they
require it. The value of the parameter will thus be affected by the extent to
which knowledge is available in codified form and can be found as needed
by researchers. The combination of digital repositories of knowledge and
search technologies that can predict what knowledge will be most relevant
to the researcher given the search terms they input—think of the ubiquitous
Google as well as more specialized search technologies such as Meta and
BenchSci—should increase the value of ϕ.
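The falling accessible share is easy to see numerically. A minimal sketch (our own illustration; the value ϕ = 0.8 is an arbitrary assumption, not taken from the chapter):

```python
def accessible_share(total_knowledge: int, phi: float = 0.8) -> float:
    """Share of the knowledge stock an individual researcher can access:
    A**phi accessible items out of A, i.e., a share of A**(phi - 1)."""
    return total_knowledge ** (phi - 1)

# With 0 < phi < 1 the share falls as the stock A grows: the
# "burden of knowledge" effect of Jones (2009).
for A in (10**2, 10**4, 10**6):
    print(A, round(accessible_share(A), 4))
```

With ϕ = 0.8, a researcher can reach about 40 percent of a stock of 100 ideas but only about 6 percent of a stock of one million, which is the sense in which knowledge growth itself becomes a headwind.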
2. Paul Romer emphasized the importance of distinguishing between ideas (a nonrival good) and human capital (a rival good). “Ideas are . . . the critical input in the production of more valuable human and non-human capital. But human capital is also the most important input in the production of new ideas. . . . Because human capital and ideas are so closely related as inputs and outputs, it is tempting to aggregate them into a single type of good. . . . It is important, nevertheless, to distinguish ideas and human capital because they have different fundamental attributes as economic goods, with different implications for economic theory”