…educating their workforce about security practices, and so on, they have a safe harbor with respect to liability for costs associated with security incidents.
But where does the due care standard come from? One possibility is from
the government, particularly from military or law enforcement practices.
The Orange Book and its successor, the Common Criteria standard, are
good examples. Another possibility is that insurance agencies offer insurance to parties that implement good security practices. Just as an insurer may require a sprinkler system to offer fire insurance, cyber insurance may only be offered to those companies that engage in best practices (see Varian 2000 for more discussion).
This model is an appealing approach to the problem. However, we know that there are many issues involving insurance, such as adverse selection and moral hazard, that need to be addressed. See the archives of the Workshop on the Economics of Information Security for more work in this area, and Anderson (2017) for an overview.
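To see the adverse selection problem concretely, consider a minimal numerical sketch (all figures hypothetical, firms assumed risk neutral): if the insurer cannot observe security practices and must charge a single pooled premium, low-risk firms find the premium unattractive and exit, leaving only high-risk firms in the pool.

```python
# Toy model of adverse selection in cyber insurance; all numbers are
# hypothetical and firms are assumed risk neutral.
loss = 1_000_000                  # cost of a security incident
p_careful, p_lax = 0.01, 0.10     # breach probability by (unobservable) type
share_careful = 0.5               # fraction of careful firms in the population

# Actuarially fair premium if every firm buys into one pool.
pooled_premium = (share_careful * p_careful
                  + (1 - share_careful) * p_lax) * loss
print(f"Pooled premium: ${pooled_premium:,.0f}")       # $55,000

# A careful firm expects to lose only p_careful * loss; if that is below
# the pooled premium, careful firms drop out and only lax firms insure.
if p_careful * loss < pooled_premium:
    print("Careful firms exit; premium must rise toward the lax-firm rate:")
    print(f"  ${p_lax * loss:,.0f}")                    # $100,000
```

Conditioning coverage on verifiable practices, as with the sprinkler-system requirement above, is one way for the insurer to separate the types.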
16.5.2 Privacy
Privacy policy is a large and sprawling area. Acquisti, Taylor, and Wagman (2016) provide a comprehensive review of the economic literature. Several policy questions arise in the machine-learning area. For example, do firms have adequate incentives to provide appropriate levels of privacy? What is the trade-off between privacy and economic performance? It is widely recognized that privacy regulations may limit the ability of ML vendors to combine data from multiple sources, and there may be limits on the transfer of data across corporate boundaries and/or the sale of data.
There is a tendency to promulgate regulation in this area that leads to unintended consequences. An example is the Health Insurance Portability and Accountability Act of 1996, commonly known as HIPAA. The original intent of the legislation was to stimulate competition among insurers by establishing standards for medical record keeping. However, many researchers argue that it has had a significant negative impact on the quantity and quality of medical research.
16.5.3 Explanations
European regulators are examining the idea of a "right to an explanation." Suppose information about a consumer is fed into a model to predict whether or not he or she will default on a loan. If the consumer is refused the loan, are they owed an "explanation" of why? If so, what would count as an explanation? Can an organization keep a predictive model secret because, if it were revealed, it could be manipulated? A notable example is the Discriminant Inventory Function, better known as the DIF function, that the IRS uses to trigger audits. Is it legitimate to reverse engineer the DIF function? See CavQM (2011) for a collection of links on the DIF function.
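One candidate for what might count as an explanation is a report of how each input pushed the prediction up or down. Here is a minimal sketch using a logistic regression on invented loan features; the data, feature names, and applicant are all hypothetical, and real credit models are considerably more complex.

```python
# Sketch of a feature-attribution "explanation" for a loan decision.
# Everything here is synthetic; it only illustrates the concept.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))     # columns: income, debt ratio, late payments
y = (X[:, 1] + X[:, 2] - X[:, 0] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)    # 1 = predicted default

applicant = np.array([[-1.0, 0.8, 1.5]])  # hypothetical standardized inputs
names = ["income", "debt_ratio", "late_payments"]

# For a linear model, coefficient * feature value decomposes the log-odds
# exactly, so each term can be reported as that feature's contribution.
contributions = model.coef_[0] * applicant[0]
for name, c in sorted(zip(names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.2f} log-odds")
print("predicted default probability:",
      model.predict_proba(applicant)[0, 1].round(2))
```

For nonlinear models no such exact decomposition exists, which is part of what makes the legal question difficult; it is also easy to see how publishing such a report would help an applicant manipulate the inputs, the concern raised by the DIF example.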
Can we demand more of an ML model than we can of a person? Suppose we show you a photo and you correctly identify it as a picture of your spouse. Now we ask, "How do you know?" The best answer might be "Because I've seen a lot of pictures that I know are pictures of my spouse, and that photo looks a lot like those pictures!" Would this explanation be satisfactory coming from a computer?
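The machine analogue of that answer is an example-based explanation: justify a prediction by exhibiting the most similar items the system was trained on. A minimal sketch follows, with random vectors standing in for the learned image embeddings a real system would use.

```python
# Example-based "explanation": point at the nearest labeled training items.
# The embeddings are random stand-ins for real image features.
import numpy as np

rng = np.random.default_rng(1)
train_embeddings = rng.normal(size=(1000, 64))
train_labels = rng.choice(["spouse", "not_spouse"], size=1000)
query = rng.normal(size=64)        # embedding of the photo being classified

# Cosine similarity between the query photo and every training photo.
sims = train_embeddings @ query / (
    np.linalg.norm(train_embeddings, axis=1) * np.linalg.norm(query))
nearest = np.argsort(sims)[-5:][::-1]

print("predicted label:", train_labels[nearest[0]])
print("because it most resembles training examples:", nearest.tolist())
```

Whether "it looks like these five photos" would satisfy a regulator is precisely the open question.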
16.6 Summary
This chapter has only scratched the surface of how AI and ML might
impact industrial structure. The technology is advancing rapidly, with the
main bottleneck now being analysts who can implement these machine-
learning systems. Given the huge popularity of college classes in this area
and the wealth of online tutorials, we expect this bottleneck will be alleviated
in the next few years.
References
Acquisti, Alessandro, Curtis R. Taylor, and Liad Wagman. 2016. "The Economics of Privacy." Journal of Economic Literature 54 (2): 442–92.
Acquisti, Alessandro, and Hal Varian. 2004. "Conditioning Prices on Purchase History." Marketing Science 24 (4): 367–81.
Anderson, Ross. 1993. "Why Cryptosystems Fail." Proceedings of the 1st ACM Conference on Computer and Communications Security. https://dl.acm.org/citation.cfm?id=168615.
———. 2017. "Economics and Security Resource Page." Working paper, Cambridge University. http://www.cl.cam.ac.uk/~rja14/econsec.html.
Arrow, Kenneth J. 1962. "The Economic Implications of Learning by Doing." Review of Economic Studies 29 (3): 155–73.
Axelrod, Robert. 1984. The Evolution of Cooperation. New York: Basic Books.
Bessen, James. 2016. Learning by Doing: The Real Connection between Innovation,
Wages, and Wealth. New Haven, CT: Yale University Press.
———. 2017. "Information Technology and Industry Concentration." Law and Economics Research Paper no. 17-41, Boston University School of Law.
Borenstein, Severin. 1997. "Rapid Communication and Price Fixing: The Airline Tariff Publishing Company Case." Working paper. http://faculty.haas.berkeley.edu/borenste/download/atpcase1.pdf.
Bughin, Jacques, and Erik Hazan. 2017. "The New Spring of Artificial Intelligence." Vox CEPR Policy Portal. https://voxeu.org/article/new-spring-artificial-intelligence-few-early-economics.
CavQM. 2011. "Reverse Engineering the IRS DIF-Score." Comparative Advantage via Quantitative Methods blog, July 10. http://cavqm.blogspot.com/2011/07/reverse-engineering-irs-dif-score.html.
Christie, William G., and Paul H. Schultz. 1995. "Did Nasdaq Market Makers Implicitly Collude?" Journal of Economic Perspectives 9 (3): 199–208.
DellaVigna, Stefano, and Matthew Gentzkow. 2017. “Uniform Pricing in US Retail
Chains.” NBER Working Paper no. 23996, Cambridge, MA.
Dubé, Jean- Pierre, and Sanjog Misra. 2017. “Scalable Price Targeting.” NBER
Working Paper no. 23775, Cambridge, MA.
Eckersley, Peter, and Yomna Nassar. 2017. "Measuring the Progress of AI Research." Electronic Frontier Foundation. https://eff.org/ai/metrics.
Etzioni, Oren, Rattapoom Tuchinda, Craig Knoblock, and Alexander Yates. 2003. "To Buy or Not to Buy: Mining Airfare Data to Minimize Ticket Purchase Price." Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. www.doi.org/10.1145/956750.956767.
Ezrachi, A., and M. E. Stucke. 2017. "Algorithmic Collusion: Problems and Counter-Measures—Note." OECD Roundtable on Algorithms and Collusion. https://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DAF/COMP/WD%282017%2925&docLanguage=En.
Fudenberg, Drew. 1992. "Explaining Cooperation and Commitment in Repeated Games." In Advances in Economic Theory: Sixth World Congress, Econometric Society Monographs, edited by Jean-Jacques Laffont. Cambridge: Cambridge University Press.
Goodfellow, Ian, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, and Jack Clark. 2017. "Attacking Machine Learning with Adversarial Examples." OpenAI blog, Feb. 26. https://blog.openai.com/adversarial-example-research/.
Hutson, Matthew. 2017. "Will Make AI Smarter for Cash." Bloomberg Businessweek, Sept. 11.
Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. 2016. "Adversarial Examples in the Physical World." Cornell University Library, arXiv 1607.02533. https://arxiv.org/abs/1607.02533.
Organisation for Economic Co-operation and Development (OECD). 2017. "Algorithms and Collusion: Competition Policy in the Digital Age." www.oecd.org/competition/algorithms-collusion-competition-policy-in-the-digital-age.htm.
Pearce, David G. 1992. "Repeated Games: Cooperation and Rationality." In Advances in Economic Theory: Sixth World Congress, Econometric Society Monographs, edited by Jean-Jacques Laffont. Cambridge: Cambridge University Press.
Rubinstein, Ariel. 1986. "Finite Automata Play the Repeated Prisoner's Dilemma." Journal of Economic Theory 39:83–96.
Segal, Ilya. 2003. "Optimal Pricing Mechanisms with Unknown Demand." American Economic Review 93 (3): 509–29.
Shiller, Benjamin Reed. 2013. “First Degree Price Discrimination Using Big Data.”
Working Paper no. 58, Department of Economics and International Business
School, Brandeis University.
Spiegel, Yossi, and Igal Hendel. 2014. "Small Steps for Workers, A Giant Leap for Productivity." American Economic Journal: Applied Economics 6 (1): 73–90.
Sreevallabh, Chivukula, and Wei Liu. 2017. "Adversarial Learning Games with Deep Learning Models." International Joint Conference on Neural Networks. www.doi.org/10.1109/IJCNN.2017.7966196.
Stiglitz, Joseph E., and Bruce C. Greenwald. 2014. Creating a Learning Society. New York: Columbia University Press.
Varian, Hal. 2000. "Managing Online Security Risks." New York Times, June 1.
Comment Judith Chevalier
Varian provides an excellent overview of industrial organization issues arising out of the adoption of machine learning and artificial intelligence. A number of these issues have potential competition policy implications. For example, exploitation of AI technologies may either increase or decrease economies of scale, leading potentially to situations of market power. Ownership of data, if crucial to competition in a specific industry, may create barriers to entry. The potential for algorithmic collusion clearly leads to antitrust enforcement concerns. Here, I briefly address one of these issues, data ownership, and highlight some potential antitrust policy responses. While I focus here on data ownership as a barrier to entry, some of the policy trade-offs I discuss are germane to the other potential market structure changes highlighted in Varian.
Artificial intelligence and machine-learning processes often use raw data as an input. As Varian points out, it is not at all clear that data defies our usual expectation that a scarce asset or resource will eventually face decreasing returns to scale. Nonetheless, one can certainly imagine circumstances where exclusive ownership of a body of data will create a nearly insurmountable advantage to a market incumbent. While the concern that access to a
scarce asset creates entry barriers may be relatively new as it applies to data, the underlying fundamental economic issue is not new. Antitrust authorities in all jurisdictions have long wrestled with optimal policy toward firms for which the ownership of scarce assets creates barriers to entry. In the United States, analysis of this issue dates back at least to United States v. Terminal Railroad Association (224 U.S. 383 (1912)), a case in which a consortium of railroads denied rivals access to the only railroad bridges crossing the Mississippi River at St. Louis. In that case and subsequent ones, courts have occasionally articulated a duty to deal for a firm with market power that controls access to an asset (or facility) that is essential to competition and for which it is impractical for rivals to duplicate the asset. However, determining the precise circumstances under which a monopolist has an affirmative duty to deal with a rival remains an unsettled area of antitrust law.
In principle, this very kind of antitrust essential facilities doctrine could be applied to data ownership. Indeed, while Varian remains silent on the issue of remedies, recent legal literature in the United States has shown some enthusiasm for essential facilities doctrine as applied to data (see, e.g., Meadows 2015; Abrahamson 2014). Further, European antitrust authorities have begun to articulate principles for the control of big data that suggest an essential facilities doctrine. For example, Margrethe Vestager (2016), the EU Commissioner for Competition, recently stated in a speech: "It's true that we shouldn't be suspicious of every company which holds a valuable set of data. But we do need to keep a close eye on whether companies control unique data, which no one else can get hold of, and can use it to shut their rivals out of the market." In the speech, she highlighted a 2014 case in which the French competition authority required a French energy producer, GDF Suez, to share a customer list with industry rivals.
Despite enthusiasm in some quarters, the application of essential facilities doctrine to data sharing creates both important trade-offs and important practical concerns. I begin with the trade-offs. In evaluating antitrust policies in innovative industries, it is important to recognize that consumer benefits from new technologies arise not just from obtaining goods and services at competitive prices, but also from the flow of new and improved products and services that arise from innovation. Thus, antitrust policy should be evaluated not just in terms of its effect on prices and outputs, but also on its effect on the speed of innovation. Indeed, in high-technology industries, it seems likely that these dynamic efficiency considerations dwarf the static efficiency considerations. In the case of an application of the essential facilities doctrine to data, the trade-offs are numerous and they are directionally unclear.
An often-cited criticism of essential facilities doctrine is that creating an ex post duty to share diminishes the incentive to invest in the essential facility in the first place (see, e.g., Pate 2006). In this case, creating an ex post duty to share data could diminish the incumbent's incentive to invest in data creation, thus slowing the pace of innovation. However, the overall incentive trade-offs are not as simple as that. In circumstances in which new entrants are an important source of potential innovation, exclusionary conduct by incumbents that reduces the incentive of entrants to invest in R&D can slow the pace of innovation. That is, in the case of data, if particular data is an essential complement to an AI innovation, exclusive ownership of the data by an incumbent can slow the pace of innovation by entrants. The impact of antitrust enforcement on the pace of innovation remains a nascent area of research, but is explored theoretically in, for example, Segal and Whinston (2007). Thus, in sum, while a broad application of the essential facilities doctrine to proprietary data may be tempting from an ex post static efficiency perspective, caution about ex ante incentives is warranted.
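The direction of the trade-off can be made concrete with a stylized two-stage sketch; the payoff numbers below are purely hypothetical. The incumbent first decides whether to sink the cost of assembling the dataset; a duty to share then moves it from exclusive to shared profits while enabling an entrant's data-dependent innovation.

```python
# Stylized payoffs (hypothetical units) for the ex post / ex ante trade-off.
cost = 60                 # incumbent's cost of assembling the dataset
profit_exclusive = 100    # incumbent profit with exclusive use of the data
profit_shared = 50        # incumbent profit under a duty to share
entrant_surplus = 80      # surplus from the entrant's data-dependent innovation

for duty_to_share in (False, True):
    incumbent_profit = profit_shared if duty_to_share else profit_exclusive
    invests = incumbent_profit >= cost          # ex ante investment decision
    surplus = 0
    if invests:
        surplus = incumbent_profit - cost
        if duty_to_share:
            surplus += entrant_surplus          # sharing unlocks entry
    print(f"duty to share = {duty_to_share}: invests = {invests}, "
          f"total surplus = {surplus}")
```

With these numbers, the duty to share destroys the incumbent's incentive to build the dataset, and the entrant's innovation is lost along with it; raise profit_shared above cost and the ranking flips in favor of sharing, which is exactly why the trade-offs are directionally unclear.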
In addition to the trade-offs already discussed, any application of an essential facilities doctrine to data sharing also implies a host of practical considerations. As in any essential facilities scenario, once a court or antitrust authority establishes a duty to deal, it must also articulate terms of trade. Clearly, absent some articulation of terms, an incumbent can de facto refuse to deal by establishing transaction terms that are unattractive to any potential rival user of the data. Given that market conditions are continually changing, ongoing regulation of the terms of trade will become unavoidable. There are certainly instances in which US courts have become ongoing regulators of the transactions of companies for which a court has imposed a duty to deal. The continuing oversight of the contracts of the music licensing firms ASCAP and BMI is a good example of a duty to deal leading to de facto regulation by the courts. However, the creation of such an ongoing regulatory structure brings with it costs to both the regulatory entity and the regulated firms. Essential facilities is not a quick fix.
Finally, while essential facilities doctrine may not always be the best tool for addressing data whose ownership has become concentrated, the potential for mergers to create problematic concentrations of data should be considered in merger analysis, just as merger analysis considers the potential for mergers to substantially concentrate some other element of productive capacity.
Clearly, there are important trade-offs in implementing antitrust solutions to the problems potentially created by exclusive ownership of key data.