The Economics of Artificial Intelligence


edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb


  the most habitual or who do not shop cleverly, but will help savvy consumers who can hijack the personalization algorithms to look like low-WTP consumers and save money. See Gabaix and Laibson (2006) for a carefully worked-out model of hidden ("shrouded") product attributes.

  24.5 Conclusion

  This chapter discussed three ways in which AI, particularly machine learning, connects with behavioral economics. One way is that ML can be used to mine the large set of features that behavioral economists think could improve prediction of choice. I gave examples of simple kinds of ML (with much smaller data sets than are often used) in predicting bargaining outcomes, risky choice, and behavior in games.

  The second way is by construing typical patterns in human judgment as the output of implicit machine-learning methods that are inappropriately applied. For example, if there is no correction for overfitting, then the gap between training-set accuracy and test-set accuracy will grow and grow if more features are used. This could be a model of human overconfidence.

  21. I put the word "hurts" in quotes here as a way to conjecture, through punctuation, that in many industries the AI-driven capacity to personalize pricing will harm consumer welfare overall.

  22. A feature of their fairness framework is that people do not mind price increases or surcharges if they are even partially justified by cost differentials. I have a recollection of Kahneman and Thaler joking that a restaurant could successfully charge higher prices on Saturday nights if there is some enhancement, such as a mariachi band—even if most people don't like mariachi.

  Artificial Intelligence and Behavioral Economics 605
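  That growing train-test gap is easy to reproduce in a small simulation. The sketch below (my illustration, not from the chapter; all parameter values are made up) fits ordinary least squares to a target driven by a single real feature, adds more and more irrelevant features, and records training-set fit minus test-set fit:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, n_features):
    # Only the first feature actually predicts y; the rest are pure noise.
    X = rng.normal(size=(n, n_features))
    y = 0.5 * X[:, 0] + rng.normal(size=n)
    return X, y

def r2(y, yhat):
    # Fraction of variance explained (can be negative out of sample).
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

gaps = {}
for k in (2, 10, 40):
    X_train, y_train = make_data(60, k)
    X_test, y_test = make_data(60, k)
    # OLS with no correction for overfitting (no penalty, no validation).
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    gaps[k] = r2(y_train, X_train @ beta) - r2(y_test, X_test @ beta)
```

  With no penalty for complexity, in-sample fit keeps improving while out-of-sample fit deteriorates, so the gap widens with the feature count, which is the analogue of the overconfidence story in the text.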

  The third way is that AI methods can help people “assemble” preference

  predictions about unfamiliar products (e.g., through recommender systems)

  and can also harm consumers by extracting more surplus than ever before

  (through better types of price discrimination).

  References

  Andrade, E. B., and T.-H. Ho. 2009. "Gaming Emotions in Social Interactions." Journal of Consumer Research 36 (4): 539–52.

  Babcock, L., and G. Loewenstein. 1997. "Explaining Bargaining Impasse: The Role of Self-Serving Biases." Journal of Economic Perspectives 11 (1): 109–26.

  Babcock, L., G. Loewenstein, S. Issacharoff, and C. Camerer. 1995. "Biased Judgments of Fairness in Bargaining." American Economic Review 85 (5): 1337–43.

  Bhatt, M. A., T. Lohrenz, C. F. Camerer, and P. R. Montague. 2010. "Neural Signatures of Strategic Types in a Two-Person Bargaining Game." Proceedings of the National Academy of Sciences 107 (46): 19720–25.

  Binmore, K., J. McCarthy, G. Ponti, A. Shaked, and L. Samuelson. 2002. "A Backward Induction Experiment." Journal of Economic Theory 104 (1): 48–88.

  Binmore, K., A. Shaked, and J. Sutton. 1985. "Testing Noncooperative Bargaining Theory: A Preliminary Study." American Economic Review 75 (5): 1178–80.

  ———. 1989. "An Outside Option Experiment." Quarterly Journal of Economics 104 (4): 753–70.

  Blattberg, R. C., and S. J. Hoch. 1990. "Database Models and Managerial Intuition: 50% Database + 50% Manager." Management Science 36 (8): 887–99.

  Brocas, I., J. D. Carrillo, S. W. Wang, and C. F. Camerer. 2014. "Imperfect Choice or Imperfect Attention? Understanding Strategic Thinking in Private Information Games." Review of Economic Studies 81 (3): 944–70.

  Camerer, C. F. 1981a. "General Conditions for the Success of Bootstrapping Models." Organizational Behavior and Human Performance 27:411–22.

  ———. 1981b. "The Validity and Utility of Expert Judgment." Unpublished PhD diss., Center for Decision Research, University of Chicago Graduate School of Business.

  ———. 2003. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton, NJ: Princeton University Press.

  Camerer, C. F., T.-H. Ho, and J.-K. Chong. 2004. "A Cognitive Hierarchy Model of Games." Quarterly Journal of Economics 119 (3): 861–98.

  Camerer, C., E. Johnson, T. Rymon, and S. Sen. 1994. "Cognition and Framing in Sequential Bargaining for Gains and Losses." In Frontiers of Game Theory, edited by A. Kirman, K. Binmore, and P. Tani, 101–20. Cambridge, MA: MIT Press.

  Camerer, C. F., G. Nave, and A. Smith. 2017. "Dynamic Unstructured Bargaining with Private Information and Deadlines: Theory and Experiment." Working paper.

  Chapman, L. J., and J. P. Chapman. 1969. "Illusory Correlation as an Obstacle to the Use of Valid Psychodiagnostic Signs." Journal of Abnormal Psychology 74 (3): 271–80.

  Costa-Gomes, M. A., and V. P. Crawford. 2006. "Cognition and Behavior in Two-Person Guessing Games: An Experimental Study." American Economic Review 96 (5): 1737–68.

  Costa-Gomes, M. A., V. P. Crawford, and B. Broseta. 2001. "Cognition and Behavior in Normal-Form Games: An Experimental Study." Econometrica 69 (5): 1193–235.

  606 Colin F. Camerer

  Crawford, V. P. 2003. "Lying for Strategic Advantage: Rational and Boundedly Rational Misrepresentation of Intentions." American Economic Review 93 (1): 133–49.

  Crawford, V. P., M. A. Costa-Gomes, and N. Iriberri. 2013. "Structural Models of Nonequilibrium Strategic Thinking: Theory, Evidence, and Applications." Journal of Economic Literature 51 (1): 5–62.

  Crawford, V. P., and N. Iriberri. 2007. "Level-k Auctions: Can a Nonequilibrium Model of Strategic Thinking Explain the Winner's Curse and Overbidding in Private-Value Auctions?" Econometrica 75 (6): 1721–70.

  Dana, J., R. Dawes, and N. Peterson. 2013. "Belief in the Unstructured Interview: The Persistence of an Illusion." Judgment and Decision Making 8 (5): 512–20.

  Dawes, R. M. 1971. "A Case Study of Graduate Admissions: Application of Three Principles of Human Decision Making." American Psychologist 26:180–88.

  ———. 1979. "The Robust Beauty of Improper Linear Models in Decision Making." American Psychologist 34 (7): 571–82.

  Dawes, R. M., and B. Corrigan. 1974. "Linear Models in Decision Making." Psychological Bulletin 81 (2): 95–106.

  Dawes, R. M., D. Faust, and P. E. Meehl. 1989. "Clinical versus Actuarial Judgment." Science 243:1668–74.

  Doi, E., J. L. Gauthier, G. D. Field, J. Shlens, A. Sher, M. Greschner, T. A. Machado, et al. 2012. "Efficient Coding of Spatial Information in the Primate Retina." Journal of Neuroscience 32 (46): 16256–64.

  Edwards, D. D., and J. S. Edwards. 1977. "Marriage: Direct and Continuous Measurement." Bulletin of the Psychonomic Society 10:187–88.

  Einhorn, H. J. 1986. "Accepting Error to Make Less Error." Journal of Personality Assessment 50:387–95.

  Einhorn, H. J., and R. M. Hogarth. 1975. "Unit Weighting Schemas for Decision Making." Organizational Behavior and Human Performance 13:171–92.

  Forsythe, R., J. Kennan, and B. Sopher. 1991. "An Experimental Analysis of Strikes in Bargaining Games with One-Sided Private Information." American Economic Review 81 (1): 253–78.

  Fudenberg, D., and A. Liang. 2017. "Predicting and Understanding Initial Play." Working paper, Massachusetts Institute of Technology and the University of Pennsylvania.

  Gabaix, X., and D. Laibson. 2006. "Shrouded Attributes, Consumer Myopia, and Information Suppression in Competitive Markets." Quarterly Journal of Economics 121 (2): 505–40.

  Goeree, J., C. Holt, and T. Palfrey. 2016. Quantal Response Equilibrium: A Stochastic Theory of Games. Princeton, NJ: Princeton University Press.

  Goldberg, L. R. 1959. "The Effectiveness of Clinicians' Judgments: The Diagnosis of Organic Brain Damage from the Bender-Gestalt Test." Journal of Consulting Psychology 23:25–33.

  ———. 1968. "Simple Models or Simple Processes?" American Psychologist 23:483–96.

  ———. 1970. "Man versus Model of Man: A Rationale, Plus Some Evidence for a Method of Improving on Clinical Inferences." Psychological Bulletin 73:422–32.

  Goldfarb, A., and M. Xiao. 2011. "Who Thinks about the Competition? Managerial Ability and Strategic Entry in US Local Telephone Markets." American Economic Review 101 (7): 3130–61.

  Gomez-Uribe, C., and N. Hunt. 2016. "The Netflix Recommender System: Algorithms, Business Value, and Innovation." ACM Transactions on Management Information Systems (TMIS) 6 (4): article 13.

  Grove, W. M., D. H. Zald, B. S. Lebow, B. E. Snitz, and C. E. Nelson. 2000. "Clinical versus Mechanical Prediction: A Meta-analysis." Psychological Assessment 12:19–30.

  Hartford, J. S., J. R. Wright, and K. Leyton-Brown. 2016. "Deep Learning for Predicting Human Strategic Behavior." Advances in Neural Information Processing Systems. https://dl.acm.org/citation.cfm?id=3157368.

  Hortacsu, A., F. Luco, S. L. Puller, and D. Zhu. 2017. "Does Strategic Ability Affect Efficiency? Evidence from Electricity Markets." NBER Working Paper no. 23526, Cambridge, MA.

  Johnson, E. J. 1980. "Expertise in Admissions Judgment." Unpublished PhD diss., Carnegie-Mellon University.

  ———. 1988. "Expertise and Decision under Uncertainty: Performance and Process." In The Nature of Expertise, edited by M. T. H. Chi, R. Glaser, and M. I. Farr, 209–28. Hillsdale, NJ: Erlbaum.

  Johnson, E. J., C. F. Camerer, S. Sen, and T. Rymon. 2002. "Detecting Failures of Backward Induction: Monitoring Information Search in Sequential Bargaining." Journal of Economic Theory 104 (1): 16–47.

  Kahneman, D., J. L. Knetsch, and R. Thaler. 1986. "Fairness as a Constraint on Profit Seeking: Entitlements in the Market." American Economic Review 76 (4): 728–41.

  Kahneman, D., and D. Lovallo. 1993. "Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking." Management Science 39 (1): 17–31.

  Kahneman, D., P. Slovic, and A. Tversky, eds. 1982. Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

  Karagözoğlu, E. Forthcoming. "On 'Going Unstructured' in Bargaining Experiments." In Future of Economic Design, Studies in Economic Design. Springer.

  Klayman, J., and Y. Ha. 1985. "Confirmation, Disconfirmation, and Information in Hypothesis Testing." Psychological Review 94 (2): 211–28.

  Kleinberg, J., H. Lakkaraju, J. Leskovec, J. Ludwig, and S. Mullainathan. 2017. "Human Decisions and Machine Predictions." NBER Working Paper no. 23180, Cambridge, MA.

  Kleinberg, J., A. Liang, and S. Mullainathan. 2015. "The Theory Is Predictive, but Is It Complete? An Application to Human Perception of Randomness." Unpublished manuscript.

  Krajbich, I., C. Camerer, J. Ledyard, and A. Rangel. 2009. "Using Neural Measures of Economic Value to Solve the Public Goods Free-Rider Problem." Science 326 (5952): 596–99.

  Lewis, M. 2016. The Undoing Project: A Friendship That Changed Our Minds. New York: W. W. Norton.

  Lohrenz, T., J. McCabe, C. F. Camerer, and P. R. Montague. 2007. "Neural Signature of Fictive Learning Signals in a Sequential Investment Task." Proceedings of the National Academy of Sciences 104 (22): 9493–98.

  Meehl, P. E. 1954. Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis: University of Minnesota Press.

  ———. 1986. "Causes and Effects of My Disturbing Little Book." Journal of Personality Assessment 50 (3): 370–75.

  Mullainathan, S., and J. Spiess. 2017. "Machine Learning: An Applied Econometric Approach." Journal of Economic Perspectives 31 (2): 87–106.

  Neelin, J., H. Sonnenschein, and M. Spiegel. 1988. "A Further Test of Noncooperative Bargaining Theory: Comment." American Economic Review 78 (4): 824–36.

  Oskamp, S. 1965. "Overconfidence in Case-Study Judgments." Journal of Consulting Psychology 29 (3): 261–65.

  Östling, R., J. Wang, E. Chou, and C. F. Camerer. 2011. "Strategic Thinking and Learning in the Field and Lab: Evidence from Poisson LUPI Lottery Games." American Economic Journal: Microeconomics 3 (3): 1–33.

  Peysakhovich, A., and J. Naecker. 2017. "Using Methods from Machine Learning to Evaluate Behavioral Models of Choice under Risk and Ambiguity." Journal of Economic Behavior & Organization 133:373–84.

  Roth, A. E. 1995. "Bargaining Experiments." In Handbook of Experimental Economics, edited by J. Kagel and A. Roth, 253–348. Princeton, NJ: Princeton University Press.

  Rumelhart, D. E., and J. L. McClelland. 1986. "On Learning the Past Tenses of English Verbs." In Parallel Distributed Processing, vol. 2, edited by D. Rumelhart, J. McClelland, and the PDP Research Group, 216–71. Cambridge, MA: MIT Press.

  Sawyer, J. 1966. "Measurement and Prediction, Clinical and Statistical." Psychological Bulletin 66:178–200.

  Stahl, D. O., and P. W. Wilson. 1995. "On Players' Models of Other Players: Theory and Experimental Evidence." Games and Economic Behavior 10 (1): 218–54.

  Thornton, B. 1977. "Linear Prediction of Marital Happiness: A Replication." Personality and Social Psychology Bulletin 3:674–76.

  Tversky, A., and D. Kahneman. 1992. "Advances in Prospect Theory: Cumulative Representation of Uncertainty." Journal of Risk and Uncertainty 5 (4): 297–323.

  von Winterfeldt, D., and W. Edwards. 1973. "Flat Maxima in Linear Optimization Models." Working Paper no. 011313-4-T, Engineering Psychology Lab, University of Michigan, Ann Arbor.

  Wang, J., M. Spezio, and C. F. Camerer. 2010. "Pinocchio's Pupil: Using Eyetracking and Pupil Dilation to Understand Truth Telling and Deception in Sender-Receiver Games." American Economic Review 100 (3): 984–1007.

  Wright, J. R., and K. Leyton-Brown. 2014. "Level-0 Meta-models for Predicting Human Behavior in Games." In Proceedings of the Fifteenth ACM Conference on Economics and Computation, 857–74.

  Comment

  Daniel Kahneman

  Below is a slightly edited version of Professor Kahneman’s spoken remarks.

  During the talks yesterday, I couldn’t understand most of what was going

  on, and yet I had the feeling that I was learning a lot. I will have some

  remarks about Colin (Camerer) and then some remarks about the few things


  that I noticed yesterday that I could understand.

  Colin had a lovely idea that I agree with. It is that if you have a mass of data and you use deep learning, you will find out much more than your theory is designed to explain. And I would hope that machine learning can be a source of hypotheses. That is, that some of these variables that you identify are genuinely interesting.

  At least in my field, the bar for successful publishable science is very low. We consider theories confirmed even when they explain very little of the variance, so long as they yield statistically significant predictions. We treat the residual variance as noise, so a deeper look into the residual variance, which machine learning is good at, is an advantage. So as an outsider, actually, I was surprised not to hear more about that aspect of the superiority of artificial intelligence (AI) compared to what people can do. Perhaps, as a psychologist, this is what interests me most. I'm not sure that new signals will always be interesting, but I suppose that some may lead to new theory, and that would be useful.

  Daniel Kahneman is professor emeritus of psychology and public affairs at the Woodrow Wilson School and the Eugene Higgins Professor of Psychology emeritus, Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem.

  For acknowledgments, sources of research support, and disclosure of the author's material financial relationships, if any, please see http://www.nber.org/chapters/c14016.ack.
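  The low bar Kahneman describes can be made concrete with a quick simulation (my illustration, not his): with a hundred thousand observations, a predictor that truly explains only about a tenth of a percent of the variance still yields a t-statistic far beyond conventional significance thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100_000
x = rng.normal(size=n)
y = 0.03 * x + rng.normal(size=n)  # x truly explains only ~0.1% of var(y)

# Simple OLS slope, its standard error, and R^2, computed by hand.
xc, yc = x - x.mean(), y - y.mean()
beta = np.sum(xc * yc) / np.sum(xc ** 2)
resid = yc - beta * xc
r2 = 1.0 - np.sum(resid ** 2) / np.sum(yc ** 2)
se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(xc ** 2))
t = beta / se  # "significant" even though r2 is tiny
```

  The regression is "confirmed" by any significance test, yet essentially all of the variance remains in the residual, which is exactly where Kahneman suggests machine learning could look next.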

  I do not fully agree with Colin's second idea: that it is useful to view human intelligence as a weak version of artificial intelligence. There certainly are similarities, and certainly you can model some of human overconfidence in that way. But I do think that the processes that occur in human judgment are quite different than the processes that produce overconfidence in software.

  Now I turn to some general remarks of my own based on what I learned

  yesterday. One of the recurrent issues, both in talks and in conversations,

  was whether AI could eventually do whatever people can do. Will there be

  anything that is reserved for human beings?

  Frankly, I don’t see any reason to set limits on what AI can do. We have in

  our heads a wonderful computer. It is made of meat, but it’s a computer. It’s

 
