The Economics of Artificial Intelligence
hybridization had a huge impact on agricultural productivity.
One of the important insights to be gained from thinking about IMIs,
therefore, is that the economic impact of some types of research tools is not
limited to their ability to reduce the costs of specific innovation activities—
perhaps even more consequentially, they enable a new approach to innovation
itself, by altering the “playbook” for innovation in the domains where
the new tool is applied. For example, prior to the systematic understanding
of the power of “hybrid vigor,” a primary focus in agriculture had been
improved techniques for self-fertilization (i.e., allowing for more and more
specialized natural varietals over time). Once the rules governing hybridization
(i.e., heterosis) were systematized, and the performance advantages of
hybrid vigor demonstrated, the techniques and conceptual approaches for
agricultural innovation shifted, ushering in a long period of systematic
innovation using these new tools and knowledge.
Advances in machine learning and neural networks appear to have great
potential as a research tool in problems of classification and prediction.
These are both important limiting factors in a variety of research tasks,
and, as exemplified by the Atomwise example, application of “learning”
approaches to AI holds out the prospect of dramatically lower costs and
improved performance in R&D projects where these are significant
challenges. But as with hybrid corn, AI-based learning may be more usefully
understood as an IMI than as a narrowly limited solution to a specific
problem. On the one hand, AI-based learning may be able to substantially
“automate discovery” across many domains where classification and
prediction tasks play an important role.
The Impact of Artificial Intelligence on Innovation 121
On the other hand, it may also “expand the playbook” in the sense of opening
up the set of problems that can be feasibly addressed, and radically altering
scientific and technical communities’
conceptual approaches and framing of problems. The invention of optical
lenses in the seventeenth century had important direct economic impact in
applications such as spectacles. But optical lenses in the form of microscopes
and telescopes also had enormous and long-lasting indirect effects on the
progress of science, technological change, growth, and welfare: by making
very small or very distant objects visible for the first time, lenses opened
up entirely new domains of inquiry and technological opportunity. Leung
et al. (2016), for example, evocatively characterize machine learning as an
opportunity to “learn to read the genome” in ways that human cognition
and perception cannot.
Of course, many research tools are neither IMIs nor GPTs, and their
primary impact is to reduce the cost or enhance the quality of an existing
innovation process. For example, in the pharmaceutical industry new kinds
of materials promise to enhance the efficiency of specific research processes.
Other research tools can indeed be thought of as IMIs but are nonetheless
relatively limited in application. For example, the development of genetically
engineered research mice (such as the OncoMouse) is an IMI that has had
a profound impact on the conduct and playbook of biomedical research,
but has no obvious relevance to innovation in areas such as information
technology, energy, or aerospace. The challenge presented by advances in
AI is that they appear to be research tools that not only have the potential
to change the method of innovation itself, but also have implications across
an extraordinarily wide range of fields. Historically, technologies with these
characteristics—think of digital computing—have had large and unantici-
pated impacts across the economy and society in general. Mokyr (2002)
points to the profound impact of IMIs that take the form not of tools per
se, but innovations in the way research is organized and conducted, such
as the invention of the university. General purpose technologies that are
themselves IMIs (or vice versa) are particularly complex phenomena, whose
dynamics are as yet poorly understood or characterized.
From a policy perspective, a further important feature of research tools is
that it may be particularly difficult to appropriate their benefits. As
emphasized by Scotchmer (1991), providing appropriate incentives for an upstream
innovator that develops only the first “stage” of an innovation (such as a
research tool) can be particularly problematic when contracting is imperfect
and the ultimate application of the new products whose development is
enabled by the upstream innovation is uncertain. Scotchmer and her co-
authors emphasized a key point about a multistage research process: when
the ultimate innovation that creates value requires multiple steps, providing
appropriate innovation incentives is not only a question of whether and
how to provide property rights in general, but also of how best to distribute
property rights and incentives across the multiple stages of the innovation
122 Iain M. Cockburn, Rebecca Henderson, and Scott Stern
process. Lack of incentives for early stage innovation can therefore mean
that the tools required for subsequent innovation do not even get invented;
strong early stage property rights without adequate contracting opportuni-
ties may result in “hold-up” for later- stage innovators and so reduce the
ultimate impact of the tool in terms of commercial application.
The vertical research spillovers created by new research tools (or IMIs) are
not just a challenge for designing appropriate intellectual property policy.1
They are also exemplars of the core innovation externality highlighted by
endogenous growth theory (Romer 1990; Aghion and Howitt 1992); a cen-
tral source of underinvestment in innovation is the fact that the intertem-
poral spillovers from innovators today to innovators tomorrow cannot be
easily captured. While tomorrow’s innovators benefit from “standing on the
shoulders of giants,” their gains are not easily shared with their predecessors.
This is not simply a theoretical idea: an increasing body of evidence sug-
gests that research tools and the institutions that support their development
and diffusion play an important role in generating intertemporal spillovers
(among others, Furman and Stern 2011; Williams 2013). A central insight
of this work is that control—both in the form of physical exclusivity, as well
as in the form of formal intellectual property rights—over tools and data
can shape both the level and direction of innovative activity, and that rules
and institutions governing control over these areas have a powerful influence
on the realized amount and nature of innovation.
Of course, these frameworks cover only a subset of the key informational
and competitive distortions that might arise when considering whether and
how to provide optimal incentives for the type of technological change
represented by some areas of AI. But these two areas in particular seem
likely to be important for understanding the implications of the current
dramatic advances in AI-supported learning. We therefore turn in the next
section to a brief outline of the ways in which AI is changing, with an eye
toward bringing the framework here to bear on how we might outline a
research agenda exploring the innovation policy challenges that they create.
4.3 The Evolution of Artificial Intelligence:
Robotics, Symbolic Systems, and Neural Networks
In his omnibus historical account of AI research, Nilsson (2010) defines
AI as “that activity devoted to making machines intelligent, and intelligence
is that quality that enables an entity to function appropriately and with fore-
sight in its environment.” His account details the contributions of multiple
fields to achievements in AI, including but not limited to biology, linguistics,
psychology and cognitive sciences, neuroscience, mathematics, philosophy
1. Challenges presented by AI-enabled invention for legal doctrine and the patent process are beyond the scope of this chapter.
and logic, engineering, and computer science. And, of course, regardless
of their particular approach, artificial intelligence research has been united
from the beginning by its engagement with Turing (1950) and his discussion
of the possibility of mechanizing intelligence.
Though often grouped together, the intellectual history of AI as a
scientific and technical field is usefully informed by distinguishing between
three interrelated but separate areas: robotics, neural networks, and symbolic
systems. Perhaps the most successful line of research in the early years of
AI—dating back to the 1960s—falls under the broad heading of symbolic
systems. Although early pioneers such as Turing had emphasized the impor-
tance of teaching a machine as one might a child (i.e., emphasizing AI as a
learning process), the “symbol processing hypothesis” (Newell, Shaw, and
Simon 1958; Newell and Simon 1976) was premised on the attempt to
replicate the logical flow of human decision-making through processing
symbols. Early attempts to instantiate this approach yielded striking success
in demonstration projects, such as the ability of a computer to navigate
elements of a chess game (or other board games) or engage in relatively
simple conversations with humans by following specific heuristics and rules
embedded into a program. However, while research based on the concept
of a “general problem solver” has continued to be an area of significant
academic interest, and there have been periodic explosions of interest in the
use of such approaches to assist human decision-making (e.g., in the
context of early-stage expert systems to guide medical diagnosis), the symbolic
systems approach has been heavily criticized for its inability to meaningfully
impact real-world processes in a scalable way. It is, of course, possible that
this field will see breakthroughs in the future, but it is fair to say that while
symbolic systems continues to be an area of academic research, it has not
been central to the commercial application of AI. Nor is it at the heart of the
recent reported advances in AI that are associated with the area of machine
learning and prediction.
A second influential trajectory in AI has been broadly in the area of
robotics. While the concept of “robots” as machines that can perform
human tasks dates back at least to the 1940s, the field of robotics began
to meaningfully flourish from the 1980s onward through a combination of
advances in numerically controlled machine tools and the development
of more adaptive but still rules-based robotics that rely on the active
sensing of a known environment. Perhaps the most economically consequential
application of AI to date has been in this area, with large-scale
deployment of “industrial robots” in manufacturing applications. These machines
are precisely programmed to undertake a given task in a highly controlled
environment. Often located in “cages” within highly specialized industrial
processes (most notably automobile manufacturing), these purpose-built
tools are perhaps more aptly described as highly sophisticated numerically
controlled machines rather than as robots with significant AI content. Over
the past twenty years, innovation in robotics has had an important impact
on manufacturing and automation, most notably through the introduction
of more responsive robots that rely on programmed response algorithms
that can respond to a variety of stimuli. This approach, famously pioneered
by Rod Brooks (1990), focused the commercial and innovation orientation
of AI away from the modeling of human-like intelligence toward providing
feedback mechanisms that would allow for practical and effective robotics
for specified applications. This insight led, among other applications, to the
Roomba and to other adaptable industrial robots that could interact with
humans such as Rethink Robotics’ Baxter. Continued innovation in robot-
ics technologies (particularly in the ability of robotic devices to sense and
interact with their environment) may lead to wider application and adoption
outside industrial automation.
These advances are important, and the most advanced robots continue
to capture public imagination when the term AI is invoked. But innova-
tions in robotics are not, generally speaking, IMIs. The increasing auto-
mation of laboratory equipment certainly improves research productivity,
but advances in robotics are not (yet) centrally connected to the under-
lying ways in which researchers themselves might develop approaches to
undertake innovation itself across multiple domains. There are, of course,
counterexamples to this proposition: robotic space probes have been a very
important research tool in planetary science, and the ability of automated
remote sensing devices to collect data at very large scale or in challenging
environments may transform some fields of research. But robots continue to
be used principally in specialized end- use “production” applications.
Finally, a third stream of research that has been a central element of AI
since its founding can be broadly characterized as a “learning” approach.
Rather than being focused on symbolic logic, or precise sense-and-react
systems, the learning approach attempts to create reliable and accurate
methods for the prediction of particular events (either physical or logical)
in the presence of particular inputs. The concept of a neural network has
been particularly important in this area. A neural network is a program that
uses a combination of weights and thresholds to translate a set of inputs
into a set of outputs, measures the “closeness” of these outputs to reality,
and then adjusts the weights it uses to narrow the distance between outputs
and reality. In this way, neural networks can learn as they are fed more
inputs (Rosenblatt 1958, 1962). Over the course of the 1980s, Hinton and
his coauthors further advanced the conceptual framework on which neural
networks are based through the development of “back-propagating
multilayer” techniques that further enhance their potential for supervised
learning (Rumelhart, Hinton, and Williams 1986).
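The weight-adjustment loop described above can be made concrete with a minimal sketch in the spirit of Rosenblatt’s perceptron. This is an illustrative example, not code from the chapter: a single “neuron” computes a thresholded weighted sum of its inputs, compares the output to the known answer, and nudges its weights to shrink the error on each labelled example (here, a toy task of learning logical AND).

```python
def step(z):
    # Threshold activation: the unit "fires" (1) if its weighted input exceeds 0.
    return 1 if z > 0 else 0

def predict(weights, bias, x):
    # Weighted sum of inputs plus bias, passed through the threshold.
    return step(sum(w * xi for w, xi in zip(weights, x)) + bias)

def train_perceptron(data, epochs=20, lr=0.1):
    # Rosenblatt-style learning rule: measure the gap between output and
    # target, then adjust each weight in the direction that closes the gap.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Labelled examples for logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print([predict(weights, bias, x) for x, _ in data])  # → [0, 0, 0, 1]
```

A single thresholded unit of this kind can only learn linearly separable rules; the multilayer back-propagation techniques discussed above were developed precisely to move past that limitation.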
After being initially heralded as having significant promise, the field of
neural networks has come in and out of fashion, particularly within the
United States. From the 1980s through the middle of the first decade of the
twenty-first century, their challenge seemed to be that there were significant
limitations to the technology that could not be easily fixed by using larger
training data sets or through the introduction of additional layers of
“neurons.” However, in the early twenty-first century, a small number of new
algorithmic approaches demonstrated the potential to enhance prediction
through back propagation through multiple layers. These neural networks
increased their predictive power as they were applied to larger and larger
data sets and were able to scale to an arbitrary level (among others, a key
reference here is Hinton and Salakhutdinov [2006]). These advances
exhibited a surprising level of performance improvement, notably in the
context of the ImageNet visual recognition competition pioneered by
Fei-Fei Li at Stanford (Krizhevsky, Sutskever, and Hinton 2012).
4.4 How Might Different Fields within
Artificial Intelligence Impact Innovation?
Distinguishing between these three streams of AI is a critical first step
toward developing a better understanding of how AI is likely to influence
the innovation process going forward, since the three differ significantly in
their potential to be either GPTs or IMIs—or both.
First, though a significant amount of public discussion of AI focuses on
the potential for AI to achieve superhuman performance over a wide range
of human cognitive capabilities, it is important to note that, at least so far,
the significant advances in AI have not been in the form of the “general
problem solver” approaches that were at the core of early work in symbolic
systems (and that were the motivation for considerations of human
reasoning such as the Turing test). Instead, recent advances in both robotics and in