The Big Nine


by Amy Webb

The Big Nine is the result of hundreds of face-to-face meetings, interviews, and dinners with people working in and adjacent to artificial intelligence. Sewell Chan, Noriyuki Shikata, Arfiya Eri, Joel Puckett, Erin McKean, Bill McBain, Frances Colon, Torfi Frans Olafsson, Latoya Peterson, Rob High, Anna Sekaran, Kris Schenck, Kara Snesko, Nadim Hossain, Megan Carroll, Elena Grewal, John Deutsch, Neha Narula, Toshi Ezoe, Masao Takahashi, Mary Madden, Shintaro Yamaguchi, Lorelei Kelly, Hiro Nozaki, Karen Ingram, Kirsten Graham, Francesca Rossi, Ben Johnson, Paola Antonelli, Yoav Schlesinger, Hardy Kagimoto, John Davidow, Rachel Sklar, Glynnis MacNicol, Yohei Sadoshima, and Eiko Ooka have been generous with their time, perspectives, and insights. Several made introductions to others working on AI and policy to help me further investigate the geopolitical balance and to better understand AI’s opportunities and risks.

  It is because of the US-Japan Leadership Foundation that I met Lieutenant Colonel Sea Thomas, retired Army Major DJ Skelton, Defense Innovation Board executive director Joshua Marcuse, and national security analyst John Noonan. We’ve now spent many days together as USJLP Fellows, and I’m indebted to each of them for their patience explaining the future of warfare, the US military’s role in the Pacific Rim, and China’s various strategic initiatives. I’m especially in awe of the work Joshua has done to bridge the divide between Silicon Valley and Washington, DC. He’s one of AI’s present-day heroes.

  The Aspen Strategy Group offered me an opportunity to present on the future of AI and geopolitics during their annual summer meeting in Colorado, and those conversations helped shape my analysis. My sincerest thanks to Nicholas Burns, Condoleezza Rice, Joseph Nye, and Jonathon Price for the invitation and to Carla Anne Robbins, Richard Danzig, James Baker, Wendy Sherman, Christian Brose, Eric Rosenbach, Susan Schwab, Anne-Marie Slaughter, Bob Zoellick, Philip Zelikow, Dov Zakheim, Laura Rosenberger, and Mike Green for all of their valuable feedback.

  A lot of my thinking happened on the campus of NYU’s Stern School of Business, which has been a tremendously supportive professional home for my research. I’m grateful to Professor Sam Craig for bringing me into the MBA program and for advising me the past few years. I cannot say enough about the incredibly bright, creative MBA students who have taken my classes. Three recent Stern graduates in particular—Kriffy Perez, Elena Giralt, and Roy Levkovitz—were wonderful sounding boards as I modeled the futures of AI.

  I’m lucky to have in my life a group of sages who offer counsel and advice. All of the work I do is better because of them. Danny Stern changed my life a few years ago when he asked me to meet him one day on the NYU campus. He taught me how to think more exponentially and showed me how to make my research connect with much wider audiences. His partner at Stern Strategy Group, Mel Blake, has spent hundreds of hours mentoring me, shaping my ideas, and helping me to see the world around me differently. They are a continual source of inspiration, motivation, and (as they know) perspiration. James Geary and Ann Marie Lipinski at Harvard have been incredibly generous for many years, making it possible for me to host gatherings to talk about the future and to further develop my foresight methodology. James and Ann Marie are consummate advisors. My dear friend and personal champion Maria Popova makes me think bigger thoughts, and then she contextualizes those ideas within her encyclopedic knowledge of literature, arts, and sciences. My incredible daughter, Petra Woolf, never stops asking “what if,” reminding me often of my own cognitive biases when thinking about the future. And as always, I’m grateful to Professor Samuel Freedman at Columbia University.

  My enduring thanks to Cheryl Cooney, who works tirelessly on my behalf and without whom I would get very little done. Regardless of what AGIs might someday be built, I cannot imagine one that could ever replace Cheryl. Emily Caufield—whose patience appears to know no bounds—is the artistic force powering my foresight work, trends, and scenarios. Thanks to Phillip Blanchard for working with me again on fact checking, copy editing, and compiling all of the sources and endnotes for this book, and to Mark Fortier, who helped make sure it was read by the news media and by newsmakers alike, and whose advice was invaluable during the launch process.

  Finally, I owe zettabytes of appreciation to Carol Franco, Kent Lineback, and John Mahaney. As my literary agent, Carol managed the contract for this book. But as my friend, she and her husband, Kent, hosted me at their beautiful home in Santa Fe so that we could develop the architecture and central thesis about the Big Nine. We spent days and nights distilling all of my research and ideas into core arguments, and in between work sessions we strolled around town and had lively discussions at terrific restaurants. It’s because of Carol that a few years ago I met my editor John Mahaney, with whom I was fortunate enough to work on my previous book. John is an ideal editor—he asks lots of questions, demands quality reporting, and will keep pushing until the analysis, examples, and details are just right. I wrote this book because I want to shift the conversation about AI’s future, but my motivation wasn’t entirely selfless: working with John again meant an opportunity to spend a year learning from him and improving my writing. John, Kent, and Carol, you’re a formidable team, and I can’t believe how fortunate I am to know you.


  AMY WEBB is one of America’s leading futurists and is the bestselling, award-winning author of The Signals Are Talking: Why Today’s Fringe Is Tomorrow’s Mainstream, which explains her method for forecasting the future. She is a professor of strategic foresight at the NYU Stern School of Business and the founder of the Future Today Institute, a leading foresight and strategy firm that helps leaders and their organizations prepare for complex, uncertain futures. Webb is a winner of the Thinkers50 Radar Award, a fellow in the United States–Japan Leadership Program, and a former delegate on the US-Russia Bilateral Presidential Commission, and she was a Visiting Nieman Fellow at Harvard University. She serves as a script consultant for films and shows about technology, science, and the future and also publishes the annual FTI Emerging Tech Trends Report, which has now garnered more than 7.5 million cumulative views worldwide. Learn more at http://www.amywebb.io.

  PRAISE FOR THE BIG NINE

  “The Big Nine is provocative, readable, and relatable. Amy Webb demonstrates her extensive knowledge of the science driving AI and the geopolitical tensions that could result between the US and China in particular. She offers deep insights into how AI could reshape our economies and the current world order, and she details a plan to help humanity chart a better course.”

  —Anja Manuel, Stanford University, cofounder and partner, RiceHadleyGates

  “The Big Nine is an important and intellectually crisp work that illuminates the promise and peril of AI. Will AI serve its three current American masters in Washington, Silicon Valley, and Wall Street, or will it serve the interests of the broader public? Will it concentrate or disperse economic and geopolitical power? We can thank Amy Webb for helping us understand the questions and how to arrive at answers that will better serve humanity than our current path. The Big Nine should be discussed in classrooms and boardrooms around the world.”

  —Alec Ross, author of The Industries of the Future

  “The Big Nine makes bold predictions regarding the future of AI. But unlike many other prognosticators, Webb sets sensationalism aside in favor of careful arguments, deep historical context, and a frightening degree of plausibility.”

  —Jonathan Zittrain, George Bemis Professor of International Law and professor of computer science, Harvard University

  “The Big Nine is thoughtful and provocative, taking the long view and most of all raising the right issues around AI and providing a road map for an optimistic future with AI.”

  —Peter Schwartz, author of The Art of the Long View

  “The Big Nine provides seminal arguments on eschewing ‘nowist’ mindsets to avoid allocating human agency to the corporations developing AI. Webb’s potential scenarios for specific futures are superb, providing detailed visions for society to avoid as well as achieve.”

  —John C. Havens, executive director, IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines

 
