One of the characteristics of any sufficiently intelligent entity—no matter what final objectives are programmed into it by evolution or by its creator—is that it will act by pursuing intermediate objectives or “basic drives” that are instrumental for any final objective (Omohundro 2008). These intermediate objectives include self-preservation, self-improvement, and resource accumulation, which all make it likelier and easier for the entity to achieve its final objectives.
It may be worthwhile pursuing the logic of what happens if humans do not or cannot assert ownership rights over artificially intelligent or superintelligent entities.33 That would imply that sufficiently advanced AI is likely to operate autonomously.

33. If humans and artificially intelligent entities are somewhat close in their levels of intelligence, it may still be possible for humans to assert ownership rights over the AI—in fact, throughout the history of mankind, those determining and exerting property rights have not always been the most intelligent. For example, humans could still threaten to turn off or destroy the computers on which AI entities are running. However, if the gap between humans and superintelligent AI entities grows too large, it may be impossible for humans to continue to exert control, just like a two-year-old would not be able to effectively exert property rights over adults.
To describe the resulting economic system, Korinek (2017) assumes that there are two types of entities, unenhanced humans and AI entities, which are in a Malthusian race and differ—potentially starkly—in how they are affected by technological progress. At the heart of Malthusian models is the notion that survival and reproduction require resources, which are potentially scarce.34 Formally, traditional Malthusian models capture this by describing how limited factor supplies interact with two related sets of technologies, a production and a consumption/reproduction technology: First, humans supply the factor labor, which is used in a production technology to generate consumption goods. Second, a consumption/reproduction technology converts consumption goods into the survival and reproduction of humans, determining the future supply of the factor labor.
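To make this structure concrete, the two technologies can be written in a minimal stylized form. The notation below is an illustrative sketch of the textbook Malthusian setup just described, not the specific model of Korinek (2017):

```latex
% A minimal stylized sketch of the two technologies described in the text
% (assumed notation; not the functional forms of Korinek 2017).
\begin{align*}
  \text{Production:}\quad & Y_t = F(L_t, X) \\
  \text{Consumption/reproduction:}\quad & L_{t+1} = g\!\left(\frac{Y_t}{L_t}\right) L_t
\end{align*}
% Here $L_t$ is the human labor force, $X$ a fixed factor such as land,
% $Y_t$ the output of consumption goods, and $g(\cdot)$ an increasing
% function with $g(c) \gtrless 1$ as per-capita consumption $c$ exceeds
% or falls short of a subsistence level $\bar{c}$.
```

The comparison drawn in the remainder of this section is that, for AI entities, both technologies (and especially the analogue of $g(\cdot)$) improve rapidly over time, whereas for unenhanced humans $g(\cdot)$ is essentially fixed.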
Throughout human history Malthusian dynamics, in which scarce consumption goods limited the survival and reproduction of humans, provided a good description of the state of humanity, roughly until Malthus (1798) published his Essay on the Principle of Population to describe the resulting Iron Law of Population. Over the past two centuries, humanity, at least in advanced countries, was lucky to escape its Malthusian constraints: capital accumulation and rapid labor-augmenting technological progress generated by the Industrial Revolution meant that our technology to produce consumption goods was constantly ahead of the consumption goods required to guarantee our physical survival. Moreover, human choices to limit physical reproduction meant that the gains of greater productivity were only partly dissipated in increased population. However, this state of affairs is not guaranteed to last forever.
Korinek (2017) compares the production and consumption/reproduction technologies of humans and AI entities and observes that they differ starkly: On the production side, the factor human labor is quickly losing ground to the labor provided by AI entities, captured by the notion of worker-replacing technological progress that we introduced earlier. In other words, AI entities are becoming more and more efficient in the production of output compared to humans. On the consumption/reproduction side, the human technology to convert consumption goods such as food and housing into future humans has experienced relatively little technological change—the basic biology of unenhanced humans is slow to change. By contrast, the reproduction technology of AI entities—to convert AI consumption goods such as energy, silicon, and aluminum into future AI—is subject to exponential progress, as described, for example, by Moore’s Law and its successors, which postulate that computing power per dollar (i.e., per unit of “AI consumption good”) doubles roughly every two years.35
34. If AI directs its enhanced capabilities at binding resource constraints, it is conceivable that such constraints might successively be lifted, just as we seem to have avoided the constraints that might have been imposed by the limited supply of fossil fuels. At present, humans consume only a small fraction—about 0.1 percent—of the energy that earth receives from the sun. However, astrophysicists such as Tegmark (2017) note that according to the laws of physics as currently known, there will be an ultimate resource constraint on superintelligent AI given by the availability of energy (or, equivalently, matter, since E = mc²) accessible from within our event horizon.
35. The original version of Moore’s Law, articulated by the cofounder of Intel, Gordon Moore (1965), stated that the number of components that can be fit on an integrated circuit (IC) would double every year. Moore revised his estimate to every two years in 1975. In recent years, companies such as Intel have predicted that the literal version of Moore’s Law may come to an end over the coming decade, as the design of traditional single-core ICs has reached its physical limits. However, the introduction of multidimensional ICs, multicore processors and other specialized chips for parallel processing implies that a broader version of Moore’s Law, expressed in terms of computing power per dollar, is likely to continue for several decades to come. Quantum computing may extend this time span even further into the future.
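The upshot of these differing reproduction technologies can be illustrated with a toy simulation. Everything below is an assumption made purely for illustration: the functional forms, the parameter values, and the rule that the scarce resource flow is allocated in proportion to income. It is not the model of Korinek (2017); it is only meant to reproduce, qualitatively, the dynamics described in the text, in which an AI reproduction technology improving along a Moore's-Law path gradually crowds unenhanced humans out of a scarce resource.

```python
"""
Toy Malthusian race (illustrative only; not the model of Korinek 2017).

Unenhanced humans and AI entities compete for a fixed resource flow R.
Human income stagnates (worker-replacing progress) and human biology is
static; AI productivity grows and the cost of replicating an AI
efficiency unit falls along a Moore's-Law path (halving every 2 periods).
All parameters are arbitrary assumptions chosen to make the mechanism
in the text concrete.
"""

R = 1000.0            # fixed flow of the scarce factor per period (e.g., energy)
subsistence = 1.0     # resources per human needed to sustain the population
ai_unit_cost = 1.0    # resource cost of replicating one AI efficiency unit
moore_factor = 2 ** 0.5   # unit cost halves every two periods

humans, ai_units = 100.0, 1.0
human_wage, ai_productivity = 1.0, 1.0   # income per human / per AI unit

for t in range(31):
    # Incomes determine each group's claim on the scarce resource flow.
    total_income = humans * human_wage + ai_units * ai_productivity
    human_share = humans * human_wage / total_income
    human_resources = human_share * R
    ai_resources = (1.0 - human_share) * R

    if t % 5 == 0:
        print(f"t={t:2d}  humans={humans:7.1f}  AI units={ai_units:14.1f}  "
              f"human resource share={human_share:.3f}")

    # Human consumption/reproduction technology: essentially static biology;
    # population adjusts slowly (fertility is a choice, not starvation).
    growth = human_resources / humans / subsistence
    humans *= min(1.02, max(0.98, growth))

    # AI consumption/reproduction technology: resources buy new efficiency
    # units at a cost that falls along a Moore's-Law path.
    ai_units += ai_resources / ai_unit_cost
    ai_unit_cost /= moore_factor

    # Worker-replacing progress: AI productivity rises, human wages do not.
    ai_productivity *= 1.2
```

With these (arbitrary) numbers, the AI entities' income quickly outgrows the stagnant human wage bill, the human share of the resource flow collapses, and per-capita human consumption eventually falls below subsistence, after which the human population begins a slow decline, the transition traced out in the following paragraphs.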
Taken together, these two dynamics imply—unsurprisingly—that humans may lose the Malthusian race in the long run, unless counteracting steps are taken, to which we will turn shortly. In the following paragraphs we trace out what this might entail and how we might respond to it. (Fully following the discussion requires a certain suspension of disbelief. However, we should begin by recognizing that machines can already engage in a large variety of economic transactions—trading financial securities, placing orders, making payments, and so forth. It is not a stretch of the mind to assume that they could in fact engage in all of what we now view as economic activities. In fact, if an outside observer from a different planet were to witness the interactions among the various intelligent entities on earth, it might not be clear to her whether, for example, artificially intelligent entities such as Apple or Google control what we humans do [via a plethora of control devices called smartphones that we carry with us] or whether we intelligent humans control what entities such as Apple and Google do. See also the discussion in Turing [1950].) The most interesting aspects of the economic analysis concern the transition dynamics and the economic mechanisms through which the Malthusian race plays out.
In the beginning, those lacking the skills that are useful in an AI-dominated world may find that they are increasingly at a disadvantage in competing for scarce resources, and they will see their incomes decline, as we noted earlier. The proliferation of AI entities will at first put only modest price pressure on scarce resources, and most of the scarce factors are of relatively little interest to humans (such as silicon), so humanity as a whole will benefit from the high productivity of AI entities and from large gains from trade. From a human perspective, this will look like AI leading to significant productivity gains in our world. Moreover, any scarce factors that are valuable for the reproduction and improvement of AI, such as human labor skilled in programming, or intellectual property, would experience large gains.
As time goes on, the superior production and consumption technologies of AI entities imply that they will proliferate. Their ever-increasing efficiency units will lead to fierce competition over any nonreproducible factors that are in limited supply, such as land and energy, pushing up the prices of such factors and making them increasingly unaffordable for regular humans, given their limited factor income. It is not hard to imagine an outcome where the AI entities, living for themselves, absorb (i.e., “consume”) more and more of our resources.
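In budget-constraint terms (an illustrative formulation with assumed notation, not an equation from the chapter), the mechanism is simply that the quantity of a nonreproducible factor that unenhanced humans can afford is their roughly stagnant factor income divided by a factor price that AI demand keeps bidding up:

```latex
% Illustrative sketch (assumed notation): humans' affordable quantity of a
% scarce factor, given factor income w L_H and factor price p_X.
x_H \;=\; \frac{w\,L_H}{p_X}
```

As $p_X$ rises while $w L_H$ stagnates, $x_H$ falls toward zero, which is the sense in which such factors become increasingly unaffordable to humans even as aggregate output grows.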
Eventually, this may force humans to cut back on their consumption to
the point where their real income is so low that they decline in numbers.
Technologists have described several dystopian ways in which humans could survive for some time—ranging from uploading themselves into a simulated (and more energy-efficient) world,36 to taking drugs that reduce their energy intake. The decline of humanity may not play out in the traditional way described by Malthus—that humans are literally starving—since human fertility is increasingly a matter of choice rather than nutrition. It is sufficient that a growing number of unenhanced humans decide that given the prices they face, they cannot afford sufficient offspring to meet the human replacement rate while providing their offspring with the space, education, and prospects that they aspire to.

36. See, for example, Hanson (2016). In fact, Aguiar et al. (2017) document that young males with low education have already shifted a considerable part of their time into the cyber world rather than supplying labor to the market economy—at wages that they deem unattractive.
One question that these observations bring up is whether it might be desirable for humanity to slow down or halt progress in AI beyond a certain point. However, even if such a move were desirable, it may well be technologically infeasible—progress may have to be stopped well short of the point where general artificial intelligence could occur. Furthermore, it cannot be ruled out that a graduate student working under the radar in a garage will create the world’s first superhuman AI.
If progress in AI cannot be halted, our description above suggests mechanisms that may ensure that humans can afford a separate living space and remain viable: because humans start out owning some of the factors that are in limited supply, if they are prohibited from transferring these factors, they could continue to consume them without suffering from their price appreciation. This would create a type of human “reservation” in an AI-dominated world. Humans would likely be tempted to sell their initial factor holdings, for two reasons: First, humans may be less patient than artificially intelligent entities. Second, superintelligent AI entities may earn higher returns on factors and thus be willing to pay more for them than other humans. That is why, for the future of humanity, it may be necessary to limit the ability of humans to sell their factor allocations to AI entities. Furthermore, for factors such as energy that correspond to a flow that is used up in consumption, it would be necessary to allocate permanent usage rights to humans. Alternatively, we could provide an equivalent flow income to humans that is adjusted regularly to keep pace with factor prices.37

37. All of this assumes that the superintelligent AI entities don’t use their powers in one way or another to abrogate these property rights.
14.7 Conclusions
The proliferation of AI and other forms of worker-replacing technological change can be unambiguously positive in a first-best economy in which individuals are fully insured against any adverse effects of innovation, or if it is coupled with the right form of redistribution.
In the absence of such intervention, worker-replacing technological change may not only lead to workers getting a diminishing fraction of national income, but may actually make them worse off in absolute terms.
The scope for redistribution is facilitated by the fact that the changes in factor prices create windfall gains on the complementary factors, which should make it feasible to achieve Pareto improvements. If there are limits on redistribution, the calculus worsens and a Pareto improvement can no longer be ensured. This may lead to resistance from those in society who are losing. As a result, there is a case for using as broad a set of second-best policies as possible, including changes in intellectual property rights, to maximize the likelihood that AI (or technological progress more generally) generates a Pareto improvement.
Artificial intelligence and other changes in technology necessitate large adjustments, and while individuals and the economy more broadly may be able to adjust to slow changes, this may not be so when the pace is rapid. Indeed, in such situations, outcomes can be Pareto inferior. The more willing society is to support the necessary transition and to provide support to those who are “left behind,” the faster the pace of innovation that society can accommodate while still ensuring that the outcomes are Pareto improvements. A society that is not willing to engage in such actions should expect resistance to innovation, with uncertain political and economic consequences.
References
Acemoglu, Daron. 1998. “Why Do New Technologies Complement Skills? Directed Technical Change and Wage Inequality.” Quarterly Journal of Economics 113 (4): 1055–89.
———. 2002. “Directed Technical Change.” Review of Economic Studies 69 (4): 781–809.
Acemoglu, Daron, and Pascual Restrepo. 2018. “The Race between Machine and Man: Implications of Technology for Growth, Factor Shares and Employment.” American Economic Review 108 (6): 1488–542.
Aghion, Philippe, Benjamin Jones, and Charles Jones. 2017. “Artificial Intelligence and Economic Growth.” NBER Working Paper no. 23928, Cambridge, MA.
Aguiar, Mark, Mark Bils, Kerwin Kofi Charles, and Erik Hurst. 2017. “Leisure Luxuries and the Labor Supply of Young Men.” NBER Working Paper no. 23552, Cambridge, MA.
Akerlof, George, and Janet Yellen. 1990. “The Fair Wage-Effort Hypothesis and Unemployment.” Quarterly Journal of Economics 105 (2): 255–83.
Arrow, Kenneth. 1962. “Economic Welfare and the Allocation of Resources for Invention.” In The Rate and Direction of Inventive Activity: Economic and Social Factors, edited by Richard R. Nelson, 609–26. Princeton, NJ: Princeton University Press.
Baker, Dean, Arjun Jayadev, and Joseph E. Stiglitz. 2017. “Innovation, Intellectual Property, and Development: A Better Set of Approaches for the 21st Century.” AccessIBSA: Innovation & Access to Medicines in India, Brazil & South Africa.
Barrat, James. 2013. Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: St. Martin’s Press.
Berg, Andrew, Edward F. Buffie, and Luis-Felipe Zanna. 2018. “Should We Fear the Robot Revolution? (The Correct Answer is Yes).” Journal of Monetary Economics 97. www.doi.org/10.1016/j.jmoneco.2018.05.012.
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Dasgupta, Partha, and Joseph E. Stiglitz. 1980a. “Uncertainty, Industrial Structure and the Speed of R&D.” Bell Journal of Economics 11 (1): 1–28.
———. 1980b. “Industrial Structure and the Nature of Innovative Activity.” Economic Journal 90 (358): 266–93.
———. 1988. “Potential Competition, Actual Competition and Economic Welfare.” European Economic Review 32:569–77.
Dávila, Eduardo, and Anton Korinek. 2018. “Pecuniary Externalities in Economies with Financial Frictions.” Review of Economic Studies 85 (1): 352–95.
Delli Gatti, Domenico, Mauro Gallegati, Bruce C. Greenwald, Alberto Russo, and Joseph E. Stiglitz. 2012a. “Mobility Constraints, Productivity Trends, and Extended Crises.” Journal of Economic Behavior & Organization 83 (3): 375–93.
———. 2012b. “Sectoral Imbalances and Long-run Crises.” In The Global Macro Economy and Finance, edited by Franklin Allen, Masahiko Aoki, Jean-Paul Fitoussi, Nobuhiro Kiyotaki, Robert Gordon, and Joseph E. Stiglitz. International Economic Association Series. London: Palgrave Macmillan.
Dosi, Giovanni, and Joseph E. Stiglitz. 2014. “The Role of Intellectual Property Rights in the Development Process, with Some Lessons from Developed Countries: An Introduction.” In Intellectual Property Rights: Legal and Economic