
Army of None


by Paul Scharre

whole brain emulations: Anders Sandberg and Nick Bostrom, “Whole Brain Emulation: A Roadmap,” Technical Report #2008-3, Oxford, UK, 2008, http://www.fhi.ox.ac.uk/Reports/2008-3.pdf.

  232

  “When people say a technology”: Andrew Herr, email to the author, October 22, 2016.

  232

  “last invention”: Irving J. Good, “Speculations Concerning the First Ultraintelligent Machine,” May 1964, https://web.archive.org/web/20010527181244/http://www.aeiveos.com/~bradbury/Authors/Computing/Good-IJ/SCtFUM.html. See also James Barrat, Our Final Invention (New York: Thomas Dunne Books, 2013).

  232

  “development of full artificial intelligence”: Rory Cellan-Jones, “Stephen Hawking Warns Artificial Intelligence Could End Mankind,” BBC News, December 2, 2014, http://www.bbc.com/news/technology-30290540.

  232

  “First the machines will”: Peter Holley, “Bill Gates on Dangers of Artificial Intelligence: ‘I Don’t Understand Why Some People Are Not Concerned,’ ” Washington Post, January 29, 2015, https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/.

  232

  “summoning the demon”: Matt McFarland, “Elon Musk: ‘With Artificial Intelligence We Are Summoning the Demon,’ ” Washington Post, October 24, 2014, https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/.

  233

  “I am in the camp that is concerned”: Holley, “Bill Gates on Dangers of Artificial Intelligence: ‘I Don’t Understand Why Some People Are Not Concerned.’ ”

  233

  “Let an ultraintelligent machine be defined”: Good, “Speculations Concerning the First Ultraintelligent Machine.”

  233

  lift itself up by its own bootstraps: “Intelligence Explosion FAQ,” Machine Intelligence Research Institute, accessed June 15, 2017, https://intelligence.org/ie-faq/.

  233

  “AI FOOM”: Robin Hanson and Eliezer Yudkowsky, “The Hanson-Yudkowsky AI Foom Debate,” http://intelligence.org/files/AIFoomDebate.pdf.

  233

  “soft takeoff” scenario: Vincent C. Müller and Nick Bostrom, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” in Vincent C. Müller, ed., Fundamental Issues of Artificial Intelligence (Berlin: Springer Synthese Library, 2016), http://www.nickbostrom.com/papers/survey.pdf.

  234

  “the dissecting room and the slaughter-house”: Mary Shelley, Frankenstein, Or, The Modern Prometheus (London: Lackington, Hughes, Harding, Mavor & Jones, 1818), 43.

  234

  Golem stories: Executive Committee of the Editorial Board, Ludwig Blau, Joseph Jacobs, and Judah David Eisenstein, “Golem,” JewishEncyclopedia.com, http://www.jewishencyclopedia.com/articles/6777-golem#1137.

  235

  “the dream of AI”: Micah Clark, interview, May 4, 2016.

  235

  “building human-like persons”: Ibid.

  236

  “Why would we expect a silica-based intelligence”: Ibid.

  236

  Turing test: The Loebner Prize runs the Turing test every year. While no computer has passed the test by fooling all of the judges, some programs have fooled at least one judge in the past. Tracy Staedter, “Chat-Bot Fools Judges Into Thinking It’s Human,” Seeker, June 9, 2014, https://www.seeker.com/chat-bot-fools-judges-into-thinking-its-human-1768649439.html. Each year the Loebner Prize is awarded to the “most human” AI. You can chat with the 2016 winner, “Rose,” here: http://ec2-54-215-197-164.us-west-1.compute.amazonaws.com/speech.php.

  236

  AI virtual assistant called “Amy”: “Amy the Virtual Assistant Is So Human-Like, People Keep Asking It Out on Dates,” Mic, accessed June 15, 2017, https://mic.com/articles/139512/xai-amy-virtual-assistant-is-so-human-like-people-keep-asking-it-out-on-dates.

  236

  “If we presume an intelligent alien life”: Micah Clark, interview, May 4, 2016.

  237

  “any level of intelligence could in principle”: Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” http://www.nickbostrom.com/superintelligentwill.pdf.

  237

  “The AI does not hate you”: Eliezer S. Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” http://www.yudkowsky.net/singularity/ai-risk.

  238

  “[Y]ou build a chess playing robot”: Stephen M. Omohundro, “The Basic AI Drives,” https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf.

  238

  “Without special precautions”: Ibid.

  238

  lead-lined coffins connected to heroin drips: Patrick Sawer, “Threat from Artificial Intelligence Not Just Hollywood Fantasy,” The Telegraph, June 27, 2015, http://www.telegraph.co.uk/news/science/science-news/11703662/Threat-from-Artificial-Intelligence-not-just-Hollywood-fantasy.html.

  239

  “its final goal is to make us happy”: Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014), Chapter 8.

  239

  “a system that is optimizing a function”: Stuart Russell, “Of Myths and Moonshine,” Edge, November 14, 2014, https://www.edge.org/conversation/the-myth-of-ai#26015.

  239

  “perverse instantiation”: Bostrom, Superintelligence, Chapter 8.

  239

  learned to pause Tetris: Tom Murphy VII, “The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel . . . after that it gets a little tricky,” https://www.cs.cmu.edu/~tom7/mario/mario.pdf. The same AI also uncovered and exploited a number of bugs, such as one in Super Mario Brothers that allowed it to stomp goombas from underneath.

  239

  EURISKO: Douglas B. Lenat, “EURISKO: A Program That Learns New Heuristics and Domain Concepts,” Artificial Intelligence 21 (1983), http://www.cs.northwestern.edu/~mek802/papers/not-mine/Lenat_EURISKO.pdf, 90.

  240

  “not to put a specific purpose into the machine”: Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, Stuart Russell, “Cooperative Inverse Reinforcement Learning,” 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, November 12, 2016, https://arxiv.org/pdf/1606.03137.pdf.

  240

  correctable by their human programmers: Nate Soares et al., “Corrigibility,” in AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015, https://intelligence.org/files/Corrigibility.pdf.

  240

  indifferent to whether they are turned off: Laurent Orseau and Stuart Armstrong, “Safely Interruptible Agents,” https://intelligence.org/files/Interruptibility.pdf.

  240

  designing AIs to be tools: Holden Karnofsky, “Thoughts on the Singularity Institute,” May 11, 2012, http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/.

  240

  “they might not work”: Stuart Armstrong, interview, November 18, 2016.

  240

  Tool AIs could still slip out of control: Bostrom, Superintelligence, 184–193.

  240

  “We also have to consider . . . whether tool AIs”: Stuart Armstrong, interview, November 18, 2016.

  241

  “Just by saying, ‘we should only build . . .’”: Ibid.

  241

  AI risk “doesn’t concern me”: Sharon Gaudin, “Ballmer Says Machine Learning Will Be the Next Era of Computer Science,” Computerworld, November 13, 2014, http://www.computerworld.com/article/2847453/ballmer-says-machine-learning-will-be-the-next-era-of-computer-science.html.

  241

  “There won’t be an intelligence explosion”: Jeff Hawkins, “The Terminator Is Not Coming. The Future Will Thank Us,” Recode, March 2, 2015, https://www.recode.net/2015/3/2/11559576/the-terminator-is-not-coming-the-future-will-thank-us.

  241

  Mark Zuckerberg: Alanna Petroff, “Elon Musk Says Mark Zuckerberg’s Understanding of AI Is ‘Limited,’ ” CNN.com, July 25, 2017.

  241

  “not concerned about self-awareness”: David Brumley, interview, November 24, 2016.

  242

  “has been completely contradictory”: Stuart Armstrong, interview, November 18, 2016.

  242

  poker became the latest game to fall: Olivia Solon, “Oh the Humanity! Poker Computer Trounces Humans in Big Step for AI,” The Guardian, January 30, 2017, sec. Technology, https://www.theguardian.com/technology/2017/jan/30/libratus-poker-artificial-intelligence-professional-human-players-competition.

  242

  “imperfect information” game: Will Knight, “Why Poker Is a Big Deal for Artificial Intelligence,” MIT Technology Review, January 23, 2017, https://www.technologyreview.com/s/603385/why-poker-is-a-big-deal-for-artificial-intelligence/.

  242

  world’s top poker players had handily beaten: Cameron Tung, “Humans Out-Play an AI at Texas Hold ’Em—For Now,” WIRED, May 21, 2015, https://www.wired.com/2015/05/humans-play-ai-texas-hold-em-now/.

  242

  upgraded AI “crushed”: Cade Metz, “A Mystery AI Just Crushed the Best Human Players at Poker,” WIRED, January 31, 2017, https://www.wired.com/2017/01/mystery-ai-just-crushed-best-human-players-poker/.

  242

  “as soon as something works”: Micah Clark, interview, May 4, 2016.

  242

  “as soon as a computer can do it”: Stuart Armstrong, interview, November 18, 2016. This point was also made by the authors of a Stanford study of AI. Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, “Artificial Intelligence and Life in 2030,” One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016, 13, http://ai100.stanford.edu/2016-report.

  243

  “responsible use”: AAAI.org, http://www.aaai.org/home.html.

  243

  “most of the discussion about superintelligence”: Tom Dietterich, interview, April 27, 2016.

  243

  “runs counter to our current understandings”: Thomas G. Dietterich and Eric J. Horvitz, “Rise of Concerns about AI: Reflections and Directions,” Communications of the ACM 58, no. 10 (October 2015): 38–40, http://web.engr.oregonstate.edu/~tgd/publications/dietterich-horvitz-rise-of-concerns-about-ai-reflections-and-directions-CACM_Oct_2015-VP.pdf.

  243

  “The increasing abilities of AI”: Tom Dietterich, interview, April 27, 2016.

  244

  “robust to adversarial attack”: Ibid.

  244

  “The human should be taking the actions”: Ibid.

  244

  “The whole goal in military doctrine”: Ibid.

  245

  AGI as “dangerous”: Bob Work, interview, June 22, 2016.

  245

  more Iron Man than Terminator: Sydney J. Freedberg Jr., “Iron Man, Not Terminator: The Pentagon’s Sci-Fi Inspirations,” Breaking Defense, May 3, 2016, http://breakingdefense.com/2016/05/iron-man-not-terminator-the-pentagons-sci-fi-inspirations/. Matthew Rosenberg and John Markoff, “The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own,” New York Times, October 25, 2016, https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html.

  245

  “impose obligations on persons”: Office of General Counsel, Department of Defense, “Department of Defense Law of War Manual,” June 2015, https://www.defense.gov/Portals/1/Documents/law_war_manual15.pdf, 330.

  245

  “the ultimate goal of AI”: “The ultimate goal of AI (which we are very far from achieving) is to build a person, or, more humbly, an animal.” Eugene Charniak and Drew McDermott, Introduction to Artificial Intelligence (Boston: Addison-Wesley Publishing Company, 1985), 7.

  245

  “what they’re aiming at are human-level”: Selmer Bringsjord, interview, November 8, 2016.

  245

  “we can plan all we want”: Ibid.

  246

  “adversarial AI” and “AI security”: Stuart Russell, Daniel Dewey, and Max Tegmark, “Research Priorities for Robust and Beneficial Artificial Intelligence,” Association for the Advancement of Artificial Intelligence (Winter 2015), http://futureoflife.org/data/documents/research_priorities.pdf.

  246

  malicious applications of AI: One of the few articles to tackle this problem is Federico Pistono and Roman V. Yampolskiy, “Unethical Research: How to Create a Malevolent Artificial Intelligence,” September 2016, https://arxiv.org/pdf/1605.02817.pdf.

  246

  Elon Musk’s reaction: Elon Musk, Twitter post, July 14, 2016, 2:42am, https://twitter.com/elonmusk/status/753525069553381376.

  246

  “adaptive and unpredictable”: David Brumley, interview, November 24, 2016.

  247

  “Faustian bargain”: Richard Danzig, “Surviving on a Diet of Poisoned Fruit: Reducing the National Security Risks of America’s Cyber Dependencies,” Center for a New American Security, Washington, DC, July 21, 2014, https://www.cnas.org/publications/reports/surviving-on-a-diet-of-poisoned-fruit-reducing-the-national-security-risks-of-americas-cyber-dependencies, 9.

  247

  “placing humans in decision loops”: Ibid., 21.

  247

  “abnegation”: Ibid., 20.

  247

  “ecosystem”: David Brumley, interview, November 24, 2016.

  247

  Armstrong estimated: Stuart Armstrong, interview, November 18, 2016.

  16 Robots on Trial: Autonomous Weapons and the Laws of War

  251

  biblical book of Deuteronomy: Deuteronomy 20:10–19. Laws of Manu 7:90–93.

  251

  principle of distinction: “Article 51: Protection of the Civilian Population” and “Article 52: General Protection of Civilian Objects,” Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977, https://ihl-databases.icrc.org/ihl/webart/470-750065 and https://ihl-databases.icrc.org/ihl/WebART/470-750067.

  251

  principle of proportionality: Article 51(5)(b), Protocol Additional to the Geneva Conventions of 12 August 1949 (Protocol I); and “Rule 14: Proportionality in Attack,” Customary IHL, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_cha_chapter4_rule14.

  251

  principle of avoiding unnecessary suffering: “Practice Relating to Rule 70. Weapons of a Nature to Cause Superfluous Injury or Unnecessary Suffering,” Customary IHL, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v2_rul_rule70.

  252

  precautions in the attack: “Article 57: Precautions in Attack,” Protocol Additional to the Geneva Conventions of 12 August 1949 (Protocol I), https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/9ac284404d38ed2bc1256311002afd89/50fb5579fb098faac12563cd0051dd7c; “Rule 15: Precautions in Attack,” Customary IHL, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule15.

  252

  ‘hors de combat’: “Article 41: Safeguard of an Enemy Hors de Combat,” Protocol Additional to the Geneva Conventions of 12 August 1949 (Protocol I), https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/WebART/470-750050?OpenDocument; “Rule 47: Attacks Against Persons Hors de Combat,” Customary IHL, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule47.

  252

  by their nature, indiscriminate or uncontrollable: “Article 51: Protection of the Civilian Population,” and “Rule 71: Weapons That Are By Nature Indiscriminate,” Customary IHL, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule71.

  252

  “lots of civilian dying”: Steve Goose, interview, October 26, 2016.

  255

  “there is no accepted formula”: Kenneth Anderson, Daniel Reisner, and Matthew Waxman, “Adapting the Law of Armed Conflict to Autonomous Weapon Systems,” International Law Studies 90 (2014): 386–411, https://www.usnwc.edu/getattachment/a2ce46e7-1c81-4956-a2f3-c8190837afa4/dapting-the-Law-of-Armed-Conflict-to-Autonomous-We.aspx, 403.

  257

  Ancient Sanskrit texts: Dharmaśāstras 1.10.18.8, as quoted in A. Walter Dorn, The Justifications for War and Peace in World Religions Part III: Comparison of Scriptures from Seven World Religions (Toronto: Defence R&D Canada, March 2010), 20, http://www.dtic.mil/dtic/tr/fulltext/u2/a535552.pdf. Mahabharata, Book 11, Chapter 841, “Law, Force, and War,” verse 96.10, from James L. Fitzgerald, ed., Mahabharata, Volume 7, Book 11 and Book 12, Part One, 1st ed. (Chicago: University of Chicago Press, 2003), 411.

  257

  “blazing with fire”: Chapter VII: 90, Laws of Manu, translated by G. Buhler, http://sourcebooks.fordham.edu/halsall/india/manu-full.asp.

  257

  “sawback” bayonets: Sawback bayonets are not illegal, however, provided the purpose is to use the saw as a tool and not for unnecessarily injuring the enemy. Bill Rhodes, An Introduction to Military Ethics: A Reference Handbook (Santa Barbara, CA: Praeger, 2009), 13–14.

  258

  because of the wounds they cause: “Protocol on Non-Detectable Fragments (Protocol I), United Nations Conference on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May be Deemed to be Excessively Injurious or to Have Indiscriminate Effects,” Geneva, 1980, https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=1AF77FFE8082AE07C12563CD0051EDF5; “Rule 79: Weapons Primarily Injuring by Non-Detectable Fragments,” Customary IHL, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule79.

  258

  Is being blinded by a laser really worse: Charles J. Dunlap, “Is it Really Better to be Dead than Blind?,” Just Security, January 13, 2015, https://www.justsecurity.org/19078/dead-blind/.

  258

  “take all feasible precautions”: Article 57(2)(a)(ii), Protocol Additional to the Geneva Conventions of 12 August 1949 (Protocol I); and “Rule 15: Precautions in Attack,” Customary IHL.

 
