$(\dot{K}_t/K_t)$, so combining this with equation (31) yields

(32) $\left(\frac{\dot{A}_t}{A_t}\right)^2 > A_t^{\phi}$,

implying that

(33) $\frac{\dot{A}_t}{A_t} > A_t^{\phi/2}$.

That is, the growth rate of $A_t$ grows at least as fast as $A_t^{\phi/2}$. But we know from the analysis of the simple differential equation given earlier (see equation [28]) that even if equation (33) held with equality, this would be enough to deliver the singularity. Because $A_t$ grows faster than that, it also exhibits a singularity.
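To see concretely why a growth rate of at least $A_t^{\phi/2}$ is enough, one can integrate the bounding differential equation in closed form (a standard calculation, included here for completeness): if $\dot{A}_t = A_t^{1+\phi/2}$ with $\phi > 0$, then $\frac{d}{dt}A_t^{-\phi/2} = -\phi/2$, so

$A_t = \left(A_0^{-\phi/2} - \tfrac{\phi}{2}\,t\right)^{-2/\phi},$

which diverges to infinity at the finite date $t^* = (2/\phi)A_0^{-\phi/2}$: a Type II singularity.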
Because ideas are nonrival, the overall economy is characterized by increasing returns, à la Romer (1990). Once the production of ideas is fully automated, these increasing returns apply to “accumulable factors,” which then leads to a Type II growth explosion, that is, a mathematical singularity.

13. It is easy to rule out the opposite case of $(\dot{A}_t/A_t) < (\dot{K}_t/K_t)$.
Example 3: Singularities without Complete Automation
The above examples consider complete automation of goods production
(Example 1) and ideas production (Example 2). With the CES case and an
elasticity of substitution less than one, we require that all tasks are auto-
mated. If only a fraction of the tasks are automated, then the scarce factor
(labor) will dominate, and growth rates do not explode. We show in this
section that with Cobb-Douglas production, a Type II singularity can occur
as long as a sufficient fraction of the tasks are automated. In this sense, the
singularity might not even require full automation.
Suppose the production function for goods is $Y_t = A_t K_t^{\alpha} L^{1-\alpha}$ (a constant population simplifies the analysis, but exogenous population growth would not change things). The capital accumulation equation and the idea production function are then specified as

(34) $\dot{K}_t = sL^{1-\alpha} A_t K_t^{\alpha} - \delta K_t$,

(35) $\dot{A}_t = K_t^{\beta} S A_t^{\phi}$,

where $0 < \alpha < 1$ and $0 < \beta < 1$, and where we also take $S$ (research effort) to be constant. Following the Zeira (1998) model discussed earlier, we interpret $\alpha$ as the fraction of goods tasks that have been automated and $\beta$ as the fraction of tasks in idea production that have been automated.
The standard endogenous growth result requires “constant returns to accumulable factors.” To see what this means, it is helpful to define a key parameter:

(36) $\gamma := \frac{\beta}{(1-\alpha)(1-\phi)}$.

In this setup, the endogenous growth case corresponds to $\gamma = 1$. Not surprisingly, then, the singularity case occurs if $\gamma > 1$. Importantly, notice that this can occur with both $\alpha$ and $\beta$ less than one, that is, when tasks are not fully automated. For example, in the case in which $\alpha = \beta = \phi = 1/2$, then $\gamma = 2 > 1$, so explosive growth and a singularity will occur even though only $1/2$ of the tasks in each activity are automated. We show that $\gamma > 1$ delivers a Type II singularity in the remainder of this section. The argument builds on the argument given in the previous subsection.
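As a numerical illustration, equations (34) and (35) can be integrated forward with a crude Euler scheme. The following is a minimal sketch, not from the chapter: the parameter values and step size are illustrative assumptions, and the normalizations $\delta = 0$, $S = 1$, $sL^{1-\alpha} = 1$ anticipate the simplifications used below.

    # Forward-Euler sketch of equations (34)-(35), with delta = 0, S = 1, s*L^(1-alpha) = 1.
    # Illustrative parameters: gamma = beta/((1-alpha)(1-phi)) = 2 > 1, the explosive case.
    alpha, beta, phi = 0.5, 0.5, 0.5
    A, K, dt = 1.0, 1.0, 1e-4

    for step in range(200_000):
        gA = K**beta * A**(phi - 1.0)    # growth rate of A, as in equation (38)
        gK = A / K**(1.0 - alpha)        # growth rate of K, as in equation (37)
        if step % 40_000 == 0:
            print(f"t = {step * dt:5.2f}   gA = {gA:12.4f}   gK = {gK:12.4f}")
        if gA > 1e6:                     # growth rate has effectively exploded
            print(f"growth rate of A explodes by t = {step * dt:.2f}")
            break
        A += A * gA * dt                 # Euler step for ideas
        K += K * gK * dt                 # Euler step for capital

Setting beta = 0.25 instead (so that $\gamma = 1$) should make the printed growth rates settle down to constants rather than explode.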
In growth rates, the laws of motion for capital and ideas are

(37) $\frac{\dot{K}_t}{K_t} = sL^{1-\alpha}\,\frac{A_t}{K_t^{1-\alpha}} - \delta$,

(38) $\frac{\dot{A}_t}{A_t} = S\,\frac{K_t^{\beta}}{A_t^{1-\phi}}$.
It is easy to show that these growth rates cannot be constant if $\gamma > 1$.14 If the growth rates are rising over time to infinity, then eventually either $g_{At} > g_{Kt}$, or the reverse, or the two growth rates are the same. Consider the first case, that is, $g_{At} > g_{Kt}$; the other cases follow the same logic. Once again, to simplify the algebra, set $\delta = 0$, $S = 1$, and $sL^{1-\alpha} = 1$. Multiplying the growth rates together in this case gives
(39) $\frac{\dot{A}_t}{A_t} \cdot \frac{\dot{K}_t}{K_t} = \frac{K_t^{\beta}}{A_t^{1-\phi}} \cdot \frac{A_t}{K_t^{1-\alpha}}$.
Since $g_{At} > g_{Kt}$, we then have

$\left(\frac{\dot{A}_t}{A_t}\right)^2 > \frac{K_t^{\beta}}{A_t^{1-\phi}} \cdot \frac{A_t}{K_t^{1-\alpha}}$

$= \frac{A_t^{\phi}}{K_t^{1-\alpha-\beta}}$  (rewriting)

$> \frac{A_t^{\phi}}{A_t^{1-\alpha-\beta}}$  (since $A_t > K_t > 1$ eventually; we take the case $\beta < 1-\alpha$ here, and the case $\beta \geq 1-\alpha$ follows the same logic)

$= A_t^{\alpha+\beta+\phi-1}$  (collecting terms).
Therefore,

(40) $\frac{\dot{A}_t}{A_t} > A_t^{(\alpha+\beta+\phi-1)/2}$.

With $\gamma > 1$, the growth rate grows at least as fast as $A_t$ raised to a positive power: $\gamma > 1$ means $\beta > (1-\alpha)(1-\phi)$, which implies $\alpha + \beta + \phi - 1 > \alpha\phi > 0$. But even if it grew just this fast we would have a singularity, by the same arguments given before. The case with $g_{Kt} > g_{At}$ can be handled in the same way, using $K$'s instead of $A$'s. QED.

14. If the growth rate of $K$ is constant, then $g_A = (1-\alpha)\,g_K$, so $K_t$ is proportional to $A_t^{1/(1-\alpha)}$. Making this substitution in equation (35) and using $\gamma > 1$ then implies that the growth rate of $A$ would explode, and this requires the growth rate of $K$ to explode.
Example 4: Singularities via Superintelligence
The examples of growth explosions above are based on automation. These examples can also be read as creating “superintelligence” as an artifact of automation, in the sense that advances of $A_t$ across all tasks include, implicitly, advances across cognitive tasks, and hence a resulting singularity can be conceived of as commensurate with an intelligence explosion. It is interesting that automation itself can provoke the emergence of superintelligence.
However, in the telling of many futurists, the story runs differently, where an intelligence explosion occurs first and then, through the insights of this superintelligence, a technological singularity may be reached. Typically the AI is seen as “self-improving” through a recursive process.
This idea can be modeled using similar ideas to those presented above. To do so in a simple manner, divide tasks into two types: physical and cognitive. Define a common level of intelligence across the cognitive tasks by a productivity term $A_{\text{cognitive}}$, and further define a common productivity at physical tasks, $A_{\text{physical}}$. Now imagine we have a unit of AI working to improve itself, where progress follows

(41) $\dot{A}_{\text{cognitive}} = A_{\text{cognitive}}^{1+\phi}$.

We have studied this differential equation above, but now we apply it to cognition alone. If $\phi > 0$, then the process of self-improvement explodes, resulting in an unbounded intelligence in finite time.
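The blowup date can be computed explicitly by the same integration as before (a one-line calculation under the stated assumption $\phi > 0$): equation (41) gives $\frac{d}{dt}A_{\text{cognitive}}^{-\phi} = -\phi$, so $A_{\text{cognitive},t} = \left(A_{\text{cognitive},0}^{-\phi} - \phi t\right)^{-1/\phi}$, which becomes unbounded at the finite date $t^* = A_{\text{cognitive},0}^{-\phi}/\phi$.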
The next question is how this superintelligence would affect the rest of the economy. Namely, would such superintelligence also produce an output singularity? One route to a singularity could run through the goods production function: to the extent that physical tasks are not essential (i.e., $\rho \geq 0$), the intelligence explosion will drive a singularity in output. However, it seems noncontroversial to assert that physical tasks are essential to producing output, in which case the singularity will have potentially modest effects directly on the goods production channel.
The second route lies in the idea production function. Here the question is how the superintelligence would advance the productivity at physical tasks, $A_{\text{physical}}$. For example, if we write

(42) $\dot{A}_{\text{physical}} = A_{\text{cognitive}}^{\lambda}\, F(K, L)$,

where $\lambda > 0$, then it is clear that $A_{\text{physical}}$ will also explode with the intelligence explosion. That is, we imagine that the superintelligent AI can figure out ways to vastly increase the rate of innovation at physical tasks. In the above specification, the output singularity would then follow directly upon the advent of the superintelligence. Of course, the idea production functions (41) and (42) are particular, and there are reasons to believe they would not be the correct specifications, as we will discuss in the next section.
9.4.2 Objections to Singularities
The above examples show ways in which automation may lead to rapid
accelerations of growth, including ever-increasing growth rates or even a
singularity. Here we can consider several possible objections to these sce-
narios, which can broadly be characterized as “bottlenecks” that AI cannot
resolve.
Automation Limits
One kind of bottleneck, which has been discussed above, emerges when
some essential input(s) to production are not automated. Whether AI can
ultimately perform all essential cognitive tasks, or more generally achieve
human intelligence, is widely debated. If not, then growth rates may still be
larger with more automation and capital intensity (sections 9.2 and 9.3),
but the “labor free” singularities featured above (section 9.4.1) become out
of reach.
Search Limits
A second kind of bottleneck may occur even with complete automation. This type of bottleneck occurs when the creative search process itself prevents especially rapid productivity gains. To see this, consider again the idea production function. In the second example above, we allow for complete automation and show that a true mathematical singularity can ensue. But note also that this result depends on the parameter $\phi$. In the differential equation

$\dot{A}_t = A_t^{1+\phi}$

we will have explosive growth only if $\phi > 0$. If $\phi \leq 0$, then the growth rate declines as $A_t$ advances. Many models of growth and associated evidence suggest that, on average, innovation may be becoming harder, which is consistent with low values of $\phi$ on average.15 Fishing out or burden of knowledge processes can point toward $\phi < 0$. Interestingly, the burden of knowledge
mechanism (Jones 2009), which is based on the limits of human cognition,
may not restrain an AI if an AI can comprehend a much greater share of the
knowledge stock than a human can. Fishing-out processes, however, viewed as a fundamental feature of the search for new ideas (Kortum 1997), would presumably also apply to an AI seeking new ideas. Put another way, AI may resolve a problem with the fishermen, but it would not change what is in the pond. Of course, fishing-out search problems can apply not only to overall productivity but also to the emergence of a superintelligence, limiting the potential rate of an AI program's self-improvement (see equation [41]), and
hence limiting the potential for growth explosions through the superintel-
ligence channel.
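The knife-edge role of $\phi$ can be seen in a worked form (the same closed-form integration used for the explosive case, now applied to $\phi < 0$): writing $\phi = -|\phi|$, the solution of $\dot{A}_t = A_t^{1+\phi}$ is

$A_t = \left(A_0^{|\phi|} + |\phi|\,t\right)^{1/|\phi|},$

so $A_t$ grows only polynomially, like $t^{1/|\phi|}$, and the growth rate $\dot{A}_t/A_t$ falls toward zero. Fishing out, in other words, converts explosive dynamics into ever-slowing growth.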
Baumol Tasks and Natural Laws
A third kind of bottleneck may occur even with complete automation
and even with a superintelligence. This type of bottleneck occurs when an
essential input does not see much productivity growth. That is, we have an-
other form of Baumol’s cost disease.
To see this, generalize slightly the task-based production function (5) of section 9.2 as

$Y_t = \left(\int_0^1 (a_{it} X_{it})^{\rho}\, di\right)^{1/\rho}, \qquad \rho < 0,$

where we have introduced task-specific productivity terms, $a_{it}$.

15. See, for example, Jones (1995), Kortum (1997), Jones (2009), Gordon (2016), and Bloom et al. (2017).
In contrast to our prior examples, where we considered a common technology term, $A_t$, that affected all of aggregate production, here we imagine that productivity at some tasks may be different than others and may proceed at different rates. For example, machine computation speeds have increased by a factor of about $10^{11}$ since World War II.16 By contrast, power plants have seen modest efficiency gains and face limited prospects given constraints like Carnot's theorem. This distinction is important, because with $\rho < 0$, output and growth end up being determined not by what we are good at, but by what is essential but hard to improve.

16. This ratio compares Bletchley Park's Colossus, the 1943 vacuum tube machine that performed $5 \times 10^5$ floating point operations per second, with the Sunway TaihuLight computer, which in 2016 peaked at $9 \times 10^{16}$ operations per second.
In particular, let's imagine that some superintelligence somehow does emerge, but that it can only drive productivity to (effectively) infinity in a share $\theta$ of tasks, which we index by $i \in [0, \theta]$. Output thereafter will be

$Y_t = \left(\int_{\theta}^{1} (a_{it} X_{it})^{\rho}\, di\right)^{1/\rho}.$
Clearly, if these remaining technologies $a_{it}$ cannot be radically improved, we no longer have a mathematical singularity (Type II growth explosion) and may not even have much future growth. We might still end up with an AK model, if all the remaining tasks can be automated at low cost, and this can produce at least accelerating growth if the $a_{it}$ can be somewhat improved. But, again, in the end we are still held back by the productivity growth in the essential things that we are worst at improving. In fact, Moore's Law, which stands in part behind the rise of artificial intelligence, may be a cautionary tale along these lines. Computation, in the sense of arithmetic operations per second, has improved at mind-boggling rates and is now mind-bogglingly fast. Yet economic growth has not accelerated, and may even be in decline.
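The cost-disease logic can be made concrete with a small numerical sketch (an illustrative construction, not the chapter's model: two tasks standing in for the continuum, equal weights, and $\rho = -1$, i.e., an elasticity of substitution of 1/2):

    # Baumol bottleneck: CES aggregate of two tasks with rho < 0.
    # As productivity on the automated task explodes, output is capped by the stagnant task.
    rho = -1.0              # elasticity of substitution = 1/(1 - rho) = 0.5 < 1
    x1, x2 = 1.0, 1.0       # task inputs, held fixed
    a2 = 1.0                # stagnant productivity on the essential, hard-to-improve task

    for a1 in [1.0, 1e2, 1e4, 1e8]:   # runaway productivity on the automated task
        Y = (0.5 * (a1 * x1)**rho + 0.5 * (a2 * x2)**rho)**(1.0 / rho)
        print(f"a1 = {a1:10.0e}   Y = {Y:.4f}")

    # Y rises from 1.0 only to its ceiling of (0.5 * a2**rho)**(1/rho) = 2.0:
    # a 10^8-fold improvement on task 1 roughly doubles output, and no more.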
Through the lens of essential tasks, the ultimate constraint on growth
will then be the capacity for progress at the really hard problems. These
constraints may in turn be determined less by the limits of cognition (i.e.,
traditionally human intelligence limits, which an AI superintelligence may
overcome) and more by the limits of natural laws, such as the second law of
thermodynamics, which constrain critical processes.17

17. Returning to example 4 above, note that equation (42) assumes that all physical constraints can be overcome by superintelligence. However, one might alternatively specify $\max(A_{\text{physical}}) = c$, representing a firm physical constraint.
Creative Destruction
Moving away from technological limits per se, the positive effect of AI
(and super AI) on productivity growth may be counteracted by another
effect working through creative destruction and its impact on innovation incentives. Thus in the appendix we develop a Schumpeterian model in which: (a) new innovations displace old innovations; and (b) innovation involves two steps, where the first step can be performed by machines but the second step requires human research input. In a singularity-like limit where successive innovations come with no time in between, the private returns to human research and development (R&D) fall to zero, and as a result innovation and growth taper off. More generally, the faster the first step of each successive innovation as a result of AI, the lower the return to human investment in stage-two innovation, which in turn counteracts the direct effect of AI and super-AI on innovation-led growth pointed out above.
9.4.3 Some Additional Thoughts
We conclude this section with additional thoughts on how AI and its