(Romer 1993, 71). In our model, A^φ is a measure of a researcher's human capital. Clearly, human capital depends on the existing technological and other knowledge and the researcher's access to that knowledge. In turn, the production of new knowledge depends on the researcher's human capital.
Innovations occur as a result of combining existing knowledge to produce new knowledge. Knowledge can be combined a ideas at a time, where a = 0, 1, . . . , A^φ. For a given individual researcher, the total number of possible combinations of units of existing knowledge (including singletons and the null set)³ given their knowledge access is

(1)   Z_i = \sum_{a=0}^{A^{\phi}} \binom{A^{\phi}}{a} = 2^{A^{\phi}}.
The total number of potential combinations, Z_i, grows exponentially with A^φ. Clearly, if A is itself growing exponentially, Z_i will be growing at a double exponential rate. This is the source of combinatorial explosion in the model. Since it is more convenient to work with continuously measured variables in the growth model, from this point on we treat A and Z_i as continuously measured variables. However, the key assumption is that the number of potential combinations grows exponentially with knowledge access.
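To fix ideas, a short numerical sketch in Python (the value of φ and the grid of A values are purely illustrative, not taken from the text) shows how quickly Z_i = 2^{A^φ} explodes as the knowledge stock grows:

phi = 0.5  # knowledge access parameter, assumed for illustration
for A in [10, 100, 1_000, 10_000]:
    access = A ** phi          # knowledge the researcher can reach, A^phi
    Z = 2 ** access            # potential combinations, Z_i = 2^(A^phi)
    print(f"A = {A:>6}: A^phi = {access:8.1f}, Z_i = 2^(A^phi) = {Z:.3e}")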
The next step is to specify how potential combinations map to discoveries. We assume that a large share of potential combinations do not produce useful new knowledge. Moreover, of those combinations that are useful, many will have already been discovered and thus are already part of A. This latter feature reflects the fishing-out phenomenon. The per-period translation of potential combinations into valuable new knowledge is given by the (asymptotically) constant elasticity discovery function

(2)   \dot{A}_i = \delta\left[\frac{Z_i^{\theta} - 1}{\theta}\right] = \delta\left[\frac{(2^{A^{\phi}})^{\theta} - 1}{\theta}\right] \quad \text{for } 0 < \theta \le 1,
      \dot{A}_i = \delta \ln Z_i = \delta \ln(2^{A^{\phi}}) = \delta \ln(2) A^{\phi} \quad \text{for } \theta = 0,

where δ is a positively valued knowledge discovery parameter, θ (with 0 ≤ θ ≤ 1) is the fishing-out/complexity parameter discussed below, and use is made of L'Hôpital's rule for the limiting case of θ = 0.⁴
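A minimal numerical sketch of the discovery function in equation (2), using assumed values of δ, θ, φ, and A (none taken from the text), also confirms the L'Hôpital limit δ ln(2) A^φ as θ approaches zero:

import math

# Discovery function, equation (2); delta, theta, phi, and A are assumed
# illustrative values.
def discoveries(A, delta=1.0, theta=0.2, phi=0.5):
    Z = 2 ** (A ** phi)              # potential combinations, Z_i
    if theta > 0:
        return delta * (Z ** theta - 1) / theta
    return delta * math.log(Z)       # theta = 0 case: delta * ln(2) * A^phi

A = 100.0
for theta in [0.5, 0.1, 0.01, 0.001, 0.0]:
    print(f"theta = {theta:5.3f}: A_dot_i = {discoveries(A, theta=theta):8.4f}")
# As theta -> 0 the value approaches delta*ln(2)*A^phi (L'Hopital's rule):
print(f"limit: {math.log(2) * A ** 0.5:8.4f}")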
For θ > 0, the elasticity of new discoveries with respect to the number of possible combinations, Z_i, is

(3)   \frac{\partial \dot{A}_i}{\partial Z_i}\,\frac{Z_i}{\dot{A}_i} = \delta Z_i^{\theta-1}\,\frac{Z_i}{\delta\left(Z_i^{\theta}-1\right)/\theta} = \frac{\theta Z_i^{\theta}}{Z_i^{\theta}-1},
3. Excluding the singletons and the null set, the total number of potential combinations would be 2^{A^φ} − A^φ − 1. As singletons and the null set are not true "combinations," we take equation (1) to be an approximation of the true number of potential combinations. The relative significance of this approximation will decline as the knowledge base grows, and we ignore it in what follows.
4. L'Hôpital's rule is often useful where the limit of a quotient is indeterminate. The limit of the term in brackets on the right-hand side of equation (2) as θ goes to zero is 0 divided by 0 and is thus indeterminate. However, by L'Hôpital's rule, the limit of this quotient is equal to the limit of the quotient formed by dividing the derivative of the numerator with respect to θ by the derivative of the denominator with respect to θ. This limit is equal to ln(2)A^φ.
which converges to θ as the number of potential combinations goes to infinity. For θ = 0, the elasticity of new discoveries is

(4)   \frac{\partial \dot{A}_i}{\partial Z_i}\,\frac{Z_i}{\dot{A}_i} = \frac{\delta}{Z_i}\,\frac{Z_i}{\delta \ln Z_i} = \frac{1}{\ln Z_i},

which converges to zero as the number of potential combinations goes to infinity.
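A quick numerical check of equations (3) and (4), with an assumed θ and an illustrative grid of Z_i values, shows the two limiting behaviours:

import math

theta = 0.3  # assumed illustrative value
for Z in [1e2, 1e6, 1e12, 1e24]:
    elasticity_pos = theta * Z ** theta / (Z ** theta - 1)   # equation (3), theta > 0
    elasticity_zero = 1 / math.log(Z)                        # equation (4), theta = 0
    print(f"Z_i = {Z:.0e}: eq.(3) -> {elasticity_pos:.4f}, eq.(4) -> {elasticity_zero:.4f}")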
A number of factors seem likely to affect the value of the fishing-out/complexity parameter, θ. First are basic constraints relating to natural phenomena that limit what is physically possible in terms of combining existing knowledge to produce scientifically or technologically useful new knowledge. Pessimistic views on the possibilities for future growth tend to emphasize such constraints. Second is the ease of discovering new useful combinations that are physically possible. The potentially massive size and complexity of the space of potential combinations means that finding useful combinations can be a needle-in-the-haystack problem. Optimistic views of the possibilities for future growth tend to emphasize how the combination of AI (embedded in algorithms such as those developed by Atomwise and DeepGenomics) and increases in computing power can aid prediction in the discovery process, especially where it is difficult to identify patterns of cause and effect in high-dimensional data. Third, recognizing that future opportunities for discoveries are path dependent (see, e.g., Weitzman 1998), the value of θ will depend on the actual path that is followed. To the extent that AI can help identify productive paths, it will limit the chances of economies going down technological dead ends.
There are L_A researchers in the economy, each working independently, where L_A is assumed to be measured continuously. (In section 5.4, we consider the case of team production in an extension of the model.) We assume that some researchers will duplicate each other's discoveries—the standing-on-toes effect. To capture this effect, new discoveries are assumed to take place "as if" the actual number of researchers is equal to L_A^λ, where 0 ≤ λ ≤ 1. Thus the aggregate knowledge production function for θ > 0 is given by

(5)   \dot{A} = \delta L_A^{\lambda}\left[\frac{(2^{A^{\phi}})^{\theta} - 1}{\theta}\right].
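As a rough illustration of equation (5) (all parameter values below are assumed for the example only), the standing-on-toes exponent λ < 1 implies that doubling the number of researchers less than doubles the flow of new knowledge:

# Aggregate knowledge production, equation (5); parameter values assumed.
delta, theta, phi, lam, A = 1.0, 0.1, 0.5, 0.75, 100.0

def aggregate_A_dot(L_A):
    Z = 2 ** (A ** phi)                                   # potential combinations
    return delta * L_A ** lam * (Z ** theta - 1) / theta  # equation (5)

for L_A in [100, 200, 400]:
    print(f"L_A = {L_A:4d}: A_dot = {aggregate_A_dot(L_A):10.2f}")
# Doubling L_A scales A_dot by 2**lam < 2: the standing-on-toes effect.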
At a point in time (with given values of A and L_A), how does an increase in θ affect the rate of discovery of new knowledge, Ȧ? The partial derivative of Ȧ with respect to θ is

(6)   \frac{\partial \dot{A}}{\partial \theta} = \frac{\delta L_A^{\lambda}\left[\theta \ln(2) A^{\phi} - 1\right] 2^{\theta A^{\phi}}}{\theta^{2}} + \frac{\delta L_A^{\lambda}}{\theta^{2}}.

A sufficient condition for this partial derivative to be positive is that the term in square brackets is greater than zero, which requires
(7)   A > \left(\frac{1}{\theta \ln(2)}\right)^{1/\phi}.

Fig. 5.2 Relationships between new knowledge production, φ, and θ
We assume this condition holds. Figure 5.2 shows an example of how Ȧ (and also the percentage growth of A, given that A is assumed to be equal to 100) varies with φ for different assumed values of θ. Higher values of θ are associated with a faster growth rate. The figure also shows how φ and θ interact positively: greater knowledge access (as reflected in a higher value of φ) increases the gain associated with a given increase in the value of θ.
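The comparative statics behind figure 5.2 can be sketched numerically as follows; A = 100 follows the text, while δ, λ, L_A, and the grids of φ and θ are assumed for illustration only:

import math

delta, lam, L_A, A = 1.0, 0.75, 100.0, 100.0

def A_dot(phi, theta):
    Z = 2 ** (A ** phi)
    return delta * L_A ** lam * (Z ** theta - 1) / theta      # equation (5)

for theta in [0.2, 0.3]:
    for phi in [0.5, 0.6, 0.7]:
        threshold = (1 / (theta * math.log(2))) ** (1 / phi)  # equation (7)
        print(f"theta = {theta}, phi = {phi}: A_dot = {A_dot(phi, theta):10.2f} "
              f"(condition (7): A > {threshold:.1f})")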
We assume, however, that θ itself evolves with A. A larger A means a bigger and more complex discovery search space. We further assume that this complexity will eventually overwhelm any discovery technology given the power of the combinatorial explosion as A grows. This is captured by assuming that θ is a declining function of A; that is, θ = θ(A), where θʹ(A) < 0. In the limit as A goes to infinity, we assume that θ(A) goes to zero, or

(8)   \lim_{A \to \infty} \theta(A) = 0.
This means that the discovery function converges asymptotically (given sustained growth in A) to

(9)   \dot{A} = \delta \ln(2) L_A^{\lambda} A^{\phi}.
This mirrors the functional form of the Romer/Jones function and allows for decreasing returns to scale in the number of researchers, depending on the size of λ. While the form of the function is familiar by design, its combinatorial-based foundations have the advantage of providing richer motivations for the key parameters in the knowledge discovery function.
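A small numerical check (assumed parameter values) confirms that equation (5) approaches the Romer/Jones form in equation (9) as θ goes to zero:

import math

delta, lam, phi, L_A, A = 1.0, 0.75, 0.5, 100.0, 100.0   # assumed values

def eq5(theta):
    Z = 2 ** (A ** phi)
    return delta * L_A ** lam * (Z ** theta - 1) / theta

for theta in [0.1, 0.01, 0.001]:
    print(f"theta = {theta}: equation (5) = {eq5(theta):10.4f}")
print(f"equation (9) limit       = {delta * math.log(2) * L_A ** lam * A ** phi:10.4f}")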
We use the fact that the functional form of equation (9) is the same as that
used in Jones (1995) to solve for the steady state of the model. More pre-
cisely, given that the limiting behaviour of our knowledge production func-
tion mirrors the function used by Jones and all other aspects of the economy
are assumed to be identical, the steady state along a balanced growth path
with constant exponential growth will be the same as in that model.
As we have nothing to add to the other elements of the model, we here
simply sketch the growth model developed by Jones (1995), referring the
reader to the original for details. The economy is composed of a final goods
sector and a research sector. The final goods sector uses labor, L_Y, and intermediate inputs to produce its output. Each new idea (or "blueprint") supports the design of an intermediate input, with each input being supplied by a profit-maximizing monopolist. Given the blueprint, capital, K, is transformed unit for unit in producing the input. The total labor force, L, is fully allocated between the final goods and research sectors, so that L_Y + L_A = L. We assume the labor force to be equal to the population and growing at rate n (> 0).
Building on Romer (1990), Jones (1995) shows that the production function for final goods can be written as

(10)   Y = (A L_Y)^{\sigma} K^{1-\sigma},
where Y is final goods output. The intertemporal utility function of a representative consumer in the economy is given by

(11)   U = \int_{0}^{\infty} u(c)\, e^{-\rho t}\, dt,

where c is per capita consumption and ρ is the consumer's discount rate. The instantaneous utility function is assumed to exhibit constant relative risk aversion, with a coefficient of risk aversion equal to γ and a (constant) intertemporal elasticity of substitution equal to 1/γ.
Jones (1995) shows that the steady-state growth rate of this economy along a balanced growth path with constant exponential growth is given by

(12)   g_A = g_y = g_c = g_k = \frac{\lambda n}{1 - \phi},

where g_A = Ȧ/A is the growth rate of the knowledge stock, g_y is the growth rate of per capita output y (where y = Y/L), g_c is the growth rate of per capita consumption c (where c = C/L), and g_k is the growth rate of the capital-labor ratio (where k = K/L).
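For concreteness, the balanced growth rate in equation (12) can be computed directly; the values of λ, n, and φ below are illustrative only:

# Balanced growth rate from equation (12): g = lam * n / (1 - phi).
lam, n = 0.75, 0.01   # assumed illustrative values
for phi in [0.25, 0.50, 0.75]:
    g = lam * n / (1 - phi)
    print(f"phi = {phi}: g_A = g_y = g_c = g_k = {g:.4f} ({100 * g:.2f}% per period)")
# Better knowledge access (higher phi) raises the steady-state growth rate.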
Finally, the steady-state share of labor allocated to the research sector is given by

(13)   s = \frac{1}{1 + (1/\sigma)\left\{\rho(1-\phi)/(\lambda n) + \gamma(1/\lambda) - (1-\phi)/\lambda\right\}}.
We can now consider how changes in the parameters of knowledge production given by equation (5) will affect the dynamics of growth in the economy. We start with improvements in the availability of AI-based search technologies that improve a researcher's access to knowledge. In the context of the model, the availability of AI-based search technologies—for example, Google, Meta, BenchSci, and so forth—should increase the value of φ and reduce the "burden of knowledge" effect. From equation (12), an increase in this parameter will increase the steady-state growth rate and also the growth rate and the level of per capita output along the transition path to the steady state.
We next consider AI-based technologies that increase the value of the discovery parameter, δ. As δ does not appear in the steady state in equation (12), the steady-state growth rate is unaffected. However, such an increase will raise the growth rate (and level) along the path to that steady state.
The most interesting potential changes to the possibilities for growth come about if we allow a change to the fishing-out/complexity parameter, θ. We assume that the economy is initially in a steady state and then experiences an increase in θ as the result of the discovery of a new AI technology. Recall that we assume that θ will eventually converge back to zero as the complexity that comes with combinatorial explosion eventually overwhelms the new AI. Thus, the steady state of the economy is unaffected. However, the transition dynamics are again quite different, with larger increases in knowledge for any given starting value of the knowledge stock along the path back to the steady state.
Using Jones (1995) as the limiting case of the model is appealing because
we avoid unbounded increases in the growth rate, which would lead to the
breakdown of any reasonable growth model and indeed a breakdown in the
normal operations of any actual economy. It is interesting to note, however,
what happens to growth in the economy if, instead of assuming that θ converges asymptotically to zero, it stays at some positive value (even if very
small). Dividing both sides of equation (5) by A gives an expression for the
growth rate of the stock of knowledge
(14)   \frac{\dot{A}}{A} = \delta L_A^{\lambda}\,\frac{(2^{A^{\phi}})^{\theta} - 1}{\theta A}.
The partial derivative of this growth rate with respect to A is
(15)   \frac{\partial(\dot{A}/A)}{\partial A} = \delta L_A^{\lambda}\,\frac{1 + 2^{\theta A^{\phi}}\left(\phi\,\theta \ln(2) A^{\phi} - 1\right)}{\theta A^{2}}.
The key to the sign of this derivative is the sign of the term inside the last
round brackets. This term will be positive for a large enough A. As A is growing over time (for any positive number of researchers and existing knowledge
stock), the growth rate must eventually begin to rise once A exceeds some
threshold value. Thus, with a fi xed positive value of (or with converging
asymptotically to a positive value), the growth rate will eventually begin to
grow without bound.
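The following sketch illustrates this numerically with assumed parameter values: holding θ fixed at a small positive value, the growth rate from equation (14) first falls and then rises once A passes the threshold at which the term in round brackets in equation (15) turns positive:

import math

# Growth rate of the knowledge stock, equation (14), with theta fixed at a
# small positive value; all parameter values are assumed for illustration.
delta, lam, L_A, theta, phi = 1.0, 0.75, 100.0, 0.01, 0.5

def growth_rate(A):
    return delta * L_A ** lam * (2 ** (theta * A ** phi) - 1) / (theta * A)

threshold = (1 / (phi * theta * math.log(2))) ** (1 / phi)   # sign switch in eq. (15)
print(f"growth rate starts rising once A exceeds roughly {threshold:,.0f}")
for A in [1e2, 1e4, 1e5, 1e6, 1e8]:
    print(f"A = {A:>13,.0f}: A_dot/A = {growth_rate(A):12.4f}")
# The growth rate first falls (fishing out) but eventually rises without bound.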
A possible deeper foundation for our combinatorial-based knowledge production function is provided by the work on "rugged landscapes" (Kauffman 1993). Kauffman's NK model has been fruitfully applied to questions of organizational design (Levinthal 1997), strategy (Rivkin 2000), and science-driven technological search (Fleming and Sorenson 2004). In our setting, each potential combination of existing ideas accessible to a researcher is a point in the landscape, represented by a binary string indicating whether each idea in the set of accessible knowledge is in the combination (a 1 in the string) or not (a 0 in the string). The complexity—or "ruggedness"—of the landscape depends on the total number of ideas that can be combined and also on the way that the elements of the binary string interact. For any given element, its impact on the value of the combination will depend on the value of X other elements.⁵ The larger the value of X, the more interrelated are the various elements of the string, creating a more rugged knowledge landscape and thus a harder search problem for the innovator.
We can think of would-be innovators as starting from some already known valuable combination and searching for other valuable combinations in the vicinity of that combination (see, e.g., Nelson and Winter 1982). Purely local search can be thought of as varying one component of the binary string at a time for some given fraction of the total elements of the string. This implies that the total number of combinations that can be searched is a linear function of the innovator's knowledge. This is consistent with the Romer/Jones knowledge production function, where the discovery of new knowledge is a linear function of knowledge access, A^φ. Positive values of θ are then associated with the capacity to search a larger fraction of the space of possible combinations, which in turn increases the probability of discovering a valuable combination. Meta technologies such as deep learning can be thought of as expanding the capacity to search a given space of potential combinations—that is, as increasing the value of θ—thereby increasing the chance of new discoveries. Given its ability to deal with complex nonlinear spaces, deep learning may be especially valuable for search over highly rugged landscapes.
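To make the search framing concrete, a minimal sketch of purely local search over binary strings follows; the string length, the starting point, and the random value function are illustrative stand-ins (not Kauffman's NK payoff structure). Starting from a known combination, the search flips one element at a time, so the number of candidates examined per step grows linearly with the number of accessible ideas even though the full space contains 2^N combinations:

import random

# Illustrative local search over binary strings (combinations of ideas).
# The value function below is a random stand-in for the payoff of a combination.
random.seed(0)
N = 20                                     # number of accessible ideas
values = {}                                # memoised "value" of each combination

def value(combo):
    if combo not in values:
        values[combo] = random.random()
    return values[combo]

def local_search(start, steps=50):
    current = start
    for _ in range(steps):
        # Examine the N one-bit-flip neighbours (linear in N, vs. 2**N overall).
        neighbours = [current[:i] + ('1' if current[i] == '0' else '0') + current[i + 1:]
                      for i in range(N)]
        best = max(neighbours, key=value)
        if value(best) <= value(current):
            break                          # local peak on the rugged landscape
        current = best
    return current, value(current)

start = '0' * N                            # a known (trivial) starting combination
combo, v = local_search(start)
print(f"space size 2^N = {2 ** N:,}; local optimum value = {v:.3f} at {combo}")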