to rule their countries many decades after they had lost the cooperation
of their masses. And they did not have super-smart robots to help them.
If the future elite of countries who are willing to protect the rents they earn from
owning the economy's productive assets (machines) study history's successful
autocrats well enough (or their machines do), this could go on for quite
a while.
In contrast, where the machines are nationally owned, and where the
rents are shared by all of society's members, in what I will call inclusive societies,
there is no reason that we cannot have equality in consumption. The very
good, incentive-based reasons for inequality to exist under capitalism will
no longer apply.
The Political Economic Source of Future Human Work
What will humans do for work in a world where machines are better at
doing everything than humans? It would seem that the obvious answer is
nothing: we will have to learn to create meaning from non-work-related
activities, and hopefully overcome our evolved proclivity toward equating
personal value with social productivity. I am going to argue that this obvious
answer is wrong. There will actually be vital and important work for humans
to do in this world, and the amount of it to be done will be greatest in
the most inclusive societies.
Managing the Machines Will Be the Source of Human Work
Why would machines need managing? The machines will be self-replicating,
self-maintaining, self-creating, self-repairing, and self-improving,
so what else needs to be done? What is not so clear is which ends the
machines are pursuing.
Usually we tend to think in terms of well-defined human objectives, and
for most of these there is no real question as to what machines should do. For
example, oncology machines will read MRIs, diagnose potential cancers,
and order more tests, operations, drugs, and so forth, based on protocols
they have learned by being run millions of times on training data. They can
learn what to do because the objectives here are relatively simple, and success
in meeting them can be used to determine optimal actions easily. So these
machines, with very narrow objectives, need relatively little managing.
But machines will be producing all output and services in our economy,
and while doing this they will continually reinvent and modify themselves
in pursuit of objectives that were programmed into them by their
human masters. So we will have a complex set of evolving machines that are
not only running all production, but doing all inventing as well. We could
think of these machines as designed, but through the process of machine
learning and machine-based innovation the designs will become far
removed from anything imagined by the last generation of human designers
that worked on them. Even understanding what they are doing will be
difficult for us humans. Perhaps we will develop intuitions about them, a
richer human language, or narratives about what they do that will give us
some vague understanding of what they are about, but it is reasonable to
suppose that no human will fully understand them.
The question is, Will we be willing to let this design direction simply continue
without human intervention? I would argue that we will not. We (our
societal "we") will be greatly concerned about the direction that this design
takes, and managing this direction will require immense human oversight.
This will be all the more true, the more inclusive a society is. But why would we need to
manage it if we have already programmed into these advanced machines
a set of objectives that are human-centred? If we have already delegated
that to the machines? I am assuming that, as part of this programming, we
will find fail-safes to short-circuit rogue machines following objectives that
do not advance human welfare, as interestingly sketched by Nick Bostrom
(2014), so I am explicitly excluding that particular dystopia.
But even with such fail-safes, additional human involvement will be
required. This is because we cannot delegate a particular objective function
to machines and be done with it: whatever delegation we implement at
time t, based on an objective articulated with the knowledge we have at
time t, may well be outdated by time t′ > t, because either our knowledge
or our values will have changed by t′. We will need people (obviously greatly
aided by machines) charged with working out what our social consensus is
at time t′, informing other citizens at t′ of the relevant information they
need to make their decisions then, and then implementing those changes at
time t′. These actions, which would of course be simple for machines to do
since they will be so much smarter than us, will be inherently nonimplementable
by the machines that are doing all our inventing and production at time
t′, because those machines will have been programmed with the objective
functions of time-t society, which is precisely what we wish to countenance
changing at time t′.
The whole problem is that writing objectives at time t may lead machines
to evolve capacities, based on those objectives, that become outdated at t′. In
order for us to know whether they are outdated at t′, we have to first develop
a conception of what the machines should be doing at t′ and how that differs
from what we thought at t, and we need to somehow have a sense of what the
machines are actually doing at t′ and how it differs from what they were doing
at t. All of these things are collective human decisions, and will require immense
human effort.
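To make the timing problem concrete, here is a minimal sketch of the re-delegation loop the argument implies (in Python; every name and type here is hypothetical, invented for illustration rather than drawn from any real system): a machine optimizes a frozen time-t objective, while a separate human process periodically re-derives the social objective at t′ and decides whether to re-delegate.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical abstraction: an "objective" is just a scoring function
# over candidate outcomes.
Objective = Callable[[Dict], float]

@dataclass
class DelegatedMachine:
    """Optimizes whatever objective it was given when last delegated.
    By construction it cannot revise that objective itself: it pursues
    the time-t objective until humans re-delegate."""
    objective: Objective
    delegated_at: int

    def act(self, candidates: List[Dict]) -> Dict:
        # Pick the outcome that scores best under the (possibly stale) objective.
        return max(candidates, key=self.objective)

def divergence(old: Objective, new: Objective, probes: List[Dict]) -> float:
    """Crude audit: how differently do the two objectives score a set of
    probe outcomes? A stand-in for the hard human work of understanding
    what the machines are actually doing."""
    return sum(abs(old(x) - new(x)) for x in probes) / len(probes)

def oversight_cycle(machine: DelegatedMachine,
                    elicit_consensus: Callable[[int], Objective],
                    probes: List[Dict],
                    now: int,
                    tolerance: float = 0.1) -> DelegatedMachine:
    """One oversight cycle at time t' > t, mirroring the three tasks in the
    text: (1) work out the social consensus at t', (2) form a sense of what
    the machine is actually optimizing, (3) re-delegate if the two drift."""
    objective_now = elicit_consensus(now)                          # task (1)
    drift = divergence(machine.objective, objective_now, probes)   # task (2)
    if drift > tolerance:                                          # task (3)
        return DelegatedMachine(objective=objective_now, delegated_at=now)
    return machine
```

The point the sketch makes is structural: `elicit_consensus`, and the choice of `probes` and `tolerance`, are irreducibly human inputs. Nothing inside the machine can supply them, because they encode exactly what is being revised.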
For example, suppose we program into these advanced machines an
objective of maximizing human welfare, defined in a utilitarian way, in the
year 2035. The designing machines will then set off to come up with machine
improvements that advance our utilitarian human objectives. But in doing
so, they may end up doing some violence to other objectives which, on the
whole, we were ready as a society to subordinate to sound utilitarian ones in
2035, but are no longer willing to subordinate in 2050. For instance, it may
be the case that the utilitarian-based inventing machines put no weight on
animal welfare, other than how it indirectly advances the utilitarian goal.
But it could be that our societal objectives, beliefs, views, and so forth have
evolved in the intervening years. Maybe we come to learn something more
about animal neurology, or maybe we just change our values as we become
richer, and then people, on the whole, start to want to privilege other mammals
as much as ourselves. Or perhaps we become so impressed
with the complexity of machines that we want to countenance nonorganic
life as of value in itself. In either case, we as human decision makers will
need to understand enough of what machines are doing in pursuit of
some of our earlier objectives to be able to see whether the societal objectives
left unstated in 2035 are being trampled on in 2050. They may not
be, and in that case nothing much needs to change. But how will we know
without checking?
That will be very complicated to do. It first requires some humans trying
to understand just what it is that the machines are doing in 2050: how they
are evolving and what they have been up to. We then need to work out what
the relevant parts of that information are for our societal decision makers
to know, and in inclusive societies "societal decision makers" are a lot of
people. We then need to find a way of communicating this perhaps highly
sophisticated information to these decision makers, some, and perhaps
many, of whom have very little technical training about machine function,
so that they can make their decisions based on the knowledge and training
that they do have.
This process also, of course, raises the question as to who "we" as a set
of societal decision makers are in this context, and what "we" want. Some
humans must be involved in making these ethical and social decisions. And
here I do not mean decisions of the form whether a car should collide with
and kill three elderly citizens instead of a pregnant mother (which is of course
difficult, but which we at least implicitly grapple with every day). I mean
the more basic decision as to what societal objective the network
of machines, which are not only producing everything for us but also designing
and inventing everything for us, is trying to attain. One could argue that
we also implicitly engage in such decisions today as a society, for example,
when we elect politicians or parties with competing platforms. However,
in the future it will be much more explicit, as our collective stance on these
things will be needed to determine precisely what direction we orient
our machine inventors toward, every single day.
It will not be possible (or prudent if it were possible) to delegate this set
of conversations and tasks to machines alone. Even though they may be
demonstrably smarter, and hence better at making those decisions given
a well-defined objective function, the point is that there is not, and never will
be, such a well-defined social objective function (we have known this since
Arrow's impossibility theorem). We will need to modify it continually via our political
processes, and the objective function followed by the
machines will need to be adjusted to reflect a social conversation that
occurs amongst humans. In inclusive societies, where presumably all citizens
will have a voice in those decisions, this will involve a lot of people, all
of whom will have to be informed so that they can weigh in on that social
consensus.
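Arrow's point can be made concrete with the classic Condorcet cycle: three voters with perfectly transitive individual rankings, yet pairwise majority voting yields an intransitive social ranking, so no single objective function can represent "what society wants." A minimal illustration (Python; the voter profile is the textbook example, chosen purely for illustration):

```python
from itertools import combinations

# Three voters, each with a transitive ranking over options A, B, C
# (earlier in the list = more preferred). This is the classic
# Condorcet-cycle profile.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Prints: A over B, C over A, B over C. Every individual ranking is
# transitive, but the majority relation cycles, so no single utility
# function over {A, B, C} can represent the group's preferences.
```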
Managing that conversation, reporting back to "us" what is relevant for
that conversation emerging from the self-directed world of machines, and
then adjusting the trajectory of the machines in light of what "we" decide,
via whatever social mechanisms we come up with to express our collective
will, must require humans at certain critical points. Human decision making
here will not be replicable or replaceable by machines, almost by definition.
So, to summarize, I am describing a world that we are admittedly far from
today: a world in which most human labor is involved in the set of essentially
political tasks related to managing the machines that will be doing all the
production in our economy, and hence determining much of our societies'
directions. A set of people will need to work at determining just what our
current machines are doing and making that intelligible to social decision
makers (which in inclusive societies will be a lot of citizens). Another set of
people will need to work out how the diverse opinions manifested
by citizens map back to a consensus about what our machines should be
doing, and what directions they should be heading toward. All of these
workers will be helped by machines, but the machines helping them will
need human guidance, since the protocols those machines follow can never
be fixed once and for all: it is the very protocols that the
machines are using that we humans must constantly be discussing and changing.
Humans, though immeasurably dumber than machines, will be essential and
nonsubstitutable in that process.
References

Acemoglu, Daron, and Pascual Restrepo. 2016. "The Race between Man and Machine: Implications of Technology for Growth, Factor Shares and Employment." Unpublished manuscript, Massachusetts Institute of Technology.
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
10
Artificial Intelligence and Jobs
The Role of Demand
James Bessen
There is widespread concern today that artificial intelligence technologies
will create mass unemployment during the next ten or twenty years. One
recent paper concluded that new information technologies will put "a substantial
share of employment, across a wide range of occupations, at risk in
the near future" (Frey and Osborne 2017).
The example of manufacturing decline provides good reason to be concerned
about technology and job losses. In 1958, the broadwoven textile
industry in the United States employed over 300,000 production workers,
and the primary steel industry employed over 500,000. By 2011, broadwoven
textiles employed only 16,000, and steel employed only 100,000 production
workers.1 Some of these losses can be attributed to trade, especially since the
mid-1990s. However, overall since the 1950s, most of the decline appears to
come from technology and changing demand (Rowthorn and Ramaswamy
1999).
But the example of manufacturing also demonstrates that the effect of
technology on employment is more complicated than a simple story of
"automation causes job losses" in the affected industries. Indeed, figure 10.1
shows how textiles, steel, and automotive manufacturing all enjoyed strong
employment growth during many decades that also experienced very rapid
productivity growth. Despite persistent and substantial productivity growth,
these industries have spent more decades with growing employment than
with job losses. This "inverted-U" pattern appears to be quite general for
manufacturing industries (Buera and Kaboski 2009; Rodrik 2016).2

Fig. 10.1 Production employment in three industries

James Bessen is Executive Director of the Technology & Policy Research Initiative at Boston University School of Law.
For acknowledgments, sources of research support, and disclosure of the author's material financial relationships, if any, please see http://www.nber.org/chapters/c14029.ack.
1. These figures are for the broadwoven fabrics industry using cotton and manmade fibers, SIC 2211 and 2221, and the steel works, blast furnaces, and rolling mills industry, SIC 3312.
The reason automation in textiles, steel, and automotive manufacturing
led to strong job growth has to do with the effect of technology on
demand, as I explore below. New technologies do not just replace labor with
machines: in a competitive market, automation will reduce prices. In
addition, technology may improve product quality, customization, or speed
of delivery. All of these things can increase demand. If demand increases
sufficiently, employment will grow even though the labor required per unit
of output declines.
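The mechanism can be written in one line of algebra. As a hedged sketch (standard textbook reasoning, not the chapter's own model, which comes later): let a be the labor required per unit of output, let the competitive price track unit labor cost, p = wa, and let demand D(p) have price elasticity ε. Then:

```latex
% Employment = (labor per unit of output) x (units demanded at price p)
L = a \, D(p), \qquad p = w a .
% Log-differentiating with respect to a, using d\ln D / d\ln p = -\varepsilon:
\frac{d\ln L}{d\ln a}
  = 1 + \frac{d\ln D}{d\ln p}\cdot\frac{d\ln p}{d\ln a}
  = 1 - \varepsilon .
% Automation lowers a. Employment therefore rises when demand is elastic
% (\varepsilon > 1) and falls when demand is inelastic (\varepsilon < 1).
```

On this arithmetic, whether automation destroys or creates jobs within an industry turns entirely on where demand sits relative to unit elasticity.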
Of course, job losses in one industry might be offset by employment
growth in other industries. Such macroeconomic effects are covered by
other articles in this volume (chapter 13, chapter 9). This chapter explores
the effect of technology on employment in the affected industry itself. The
rise and fall of employment poses an important puzzle. While a substantial
literature has looked at structural change associated with technology, I
argue that the most widely accepted explanations for deindustrialization are
inconsistent with the observed historical pattern. To explain the inverted-U
pattern, I present a very simple model that shows why demand for these
products was highly elastic during the early years and why demand became
inelastic over time. This model forecasts the rise and fall of employment in
these industries with reasonable accuracy: the solid line in figure 10.1 shows
those predictions. I then explore the implications of this model for the future
impact of artificial intelligence over the next two decades.
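The chapter's actual model is presented below. As a purely illustrative stand-in, a toy simulation shows how a demand elasticity that starts above one and falls as the market saturates generates exactly this inverted-U (all functional forms and parameter values here are invented for the sketch, not estimated or taken from the chapter):

```python
import math

# Toy parameters, invented for illustration (this is NOT the chapter's model).
K, p_half, sigma = 100.0, 0.5, 3.0  # demand ceiling, midpoint price, steepness
g, a0, years = 0.03, 2.0, 200       # productivity growth, initial labor/unit

def demand(p: float) -> float:
    """Log-logistic demand: elastic (elasticity -> sigma) when the price is
    high and the market is far from saturated; inelastic (elasticity -> 0)
    once the price is low and demand approaches the ceiling K."""
    return K / (1.0 + (p / p_half) ** sigma)

employment = []
for t in range(years):
    a = a0 * math.exp(-g * t)   # unit labor requirement falls steadily
    p = a                       # competitive price tracks unit labor cost (wage = 1)
    employment.append(a * demand(p))  # employment = labor/unit x units sold

peak = max(range(years), key=employment.__getitem__)
print(f"employment peaks in year {peak}: "
      f"L(0)={employment[0]:.1f}, L(peak)={employment[peak]:.1f}, "
      f"L({years - 1})={employment[-1]:.2f}")
# Employment rises while demand is elastic, peaks, then falls as demand
# saturates: the inverted-U of figure 10.1, in miniature.
```

With these invented numbers, employment roughly triples before peaking around year 54 (where the elasticity passes through one) and then declines toward zero as demand approaches its ceiling.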
10.1 Structural Change
The inverted-U pattern in figure 10.1 is also seen in the relative share of
employment in the whole manufacturing sector, shown in figure 10.2. Logically,
the rise and fall of the sector as a whole in this chart results from the
aggregate rise and fall of separate manufacturing industries such as those in
figure 10.1. Yet explanations of this phenomenon based on broad sector-level
factors face a challenge because individual industries show rather disparate
patterns. For example, employment in the automotive industry appears
to have peaked nearly a century after textile employment peaked. Data on
individual industries are needed to analyze such disparate responses.
The literature on structural change provides two sorts of accounts for
the relative size of the manufacturing sector, one based on differential rates
of productivity growth, the other based on different income elasticities of