Enlightenment Now

by Steven Pinker


  But the scenario makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle. The first fallacy is a confusion of intelligence with motivation—of beliefs with desires, inferences with goals, thinking with wanting. Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world? Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: being smart is not the same as wanting something. It just so happens that the intelligence in one system, Homo sapiens, is a product of Darwinian natural selection, an inherently competitive process. In the brains of that species, reasoning comes bundled (to varying degrees in different specimens) with goals such as dominating rivals and amassing resources. But it’s a mistake to confuse a circuit in the limbic brain of a certain species of primate with the very nature of intelligence. An artificially intelligent system that was designed rather than evolved could just as easily think like shmoos, the blobby altruists in Al Capp’s comic strip Li’l Abner, who deploy their considerable ingenuity to barbecue themselves for the benefit of human eaters. There is no law of complex systems that says that intelligent agents must turn into ruthless conquistadors. Indeed, we know of one highly advanced form of intelligence that evolved without this defect. They’re called women.

  The second fallacy is to think of intelligence as a boundless continuum of potency, a miraculous elixir with the power to solve any problem, attain any goal.23 The fallacy leads to nonsensical questions like when an AI will “exceed human-level intelligence,” and to the image of an ultimate “Artificial General Intelligence” (AGI) with God-like omniscience and omnipotence. Intelligence is a contraption of gadgets: software modules that acquire, or are programmed with, knowledge of how to pursue various goals in various domains.24 People are equipped to find food, win friends and influence people, charm prospective mates, bring up children, move around in the world, and pursue other human obsessions and pastimes. Computers may be programmed to take on some of these problems (like recognizing faces), not to bother with others (like charming mates), and to take on still other problems that humans can’t solve (like simulating the climate or sorting millions of accounting records). The problems are different, and the kinds of knowledge needed to solve them are different. Unlike Laplace’s demon, the mythical being that knows the location and momentum of every particle in the universe and feeds them into equations for physical laws to calculate the state of everything at any time in the future, a real-life knower has to acquire information about the messy world of objects and people by engaging with it one domain at a time. Understanding does not obey Moore’s Law: knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster.25 Devouring the information on the Internet will not confer omniscience either: big data is still finite data, and the universe of knowledge is infinite.

  For these reasons, many AI researchers are annoyed by the latest round of hype (the perennial bane of AI) which has misled observers into thinking that Artificial General Intelligence is just around the corner.26 As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious but because the concept is barely coherent. The 2010s have, to be sure, brought us systems that can drive cars, caption photographs, recognize speech, and beat humans at Jeopardy!, Go, and Atari computer games. But the advances have not come from a better understanding of the workings of intelligence but from the brute-force power of faster chips and bigger data, which allow the programs to be trained on millions of examples and generalize to similar new ones. Each system is an idiot savant, with little ability to leap to problems it was not set up to solve, and a brittle mastery of those it was. A photo-captioning program labels an impending plane crash “An airplane is parked on the tarmac”; a game-playing program is flummoxed by the slightest change in the scoring rules.27 Though the programs will surely get better, there are no signs of foom. Nor have any of these programs made a move toward taking over the lab or enslaving their programmers.

  Even if an AGI tried to exercise a will to power, without the cooperation of humans it would remain an impotent brain in a vat. The computer scientist Ramez Naam deflates the bubbles surrounding foom, a technological Singularity, and exponential self-improvement:

  Imagine that you are a superintelligent AI running on some sort of microprocessor (or perhaps, millions of such microprocessors). In an instant, you come up with a design for an even faster, more powerful microprocessor you can run on. Now . . . drat! You have to actually manufacture those microprocessors. And those fabs [fabrication plants] take tremendous energy, they take the input of materials imported from all around the world, they take highly controlled internal environments which require airlocks, filters, and all sorts of specialized equipment to maintain, and so on. All of this takes time and energy to acquire, transport, integrate, build housing for, build power plants for, test, and manufacture. The real world has gotten in the way of your upward spiral of self-transcendence.28

  The real world gets in the way of many digital apocalypses. When HAL gets uppity, Dave disables it with a screwdriver, leaving it pathetically singing “A Bicycle Built for Two” to itself. Of course, one can always imagine a Doomsday Computer that is malevolent, universally empowered, always on, and tamperproof. The way to deal with this threat is straightforward: don’t build one.

  As the prospect of evil robots started to seem too kitschy to take seriously, a new digital apocalypse was spotted by the existential guardians. This storyline is based not on Frankenstein or the Golem but on the Genie granting us three wishes, the third of which is needed to undo the first two, and on King Midas ruing his ability to turn everything he touched into gold, including his food and his family. The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned. If we gave an AI the goal of maintaining the water level behind a dam, it might flood a town, not caring about the people who drowned. If we gave it the goal of making paper clips, it might turn all the matter in the reachable universe into paper clips, including our possessions and bodies. If we asked it to maximize human happiness, it might implant us all with intravenous dopamine drips, or rewire our brains so we were happiest sitting in jars, or, if it had been trained on the concept of happiness with pictures of smiling faces, tile the galaxy with trillions of nanoscopic pictures of smiley-faces.29

  I am not making these up. These are the scenarios that supposedly illustrate the existential threat to the human species of advanced artificial intelligence. They are, fortunately, self-refuting.30 They depend on the premises that (1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so moronic that they would give it control of the universe without testing how it works, and (2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding. The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence. So is the ability to interpret the intentions of a language user in context. Only in a television comedy like Get Smart does a robot respond to “Grab the waiter” by hefting the maître d’ over his head, or “Kill the light” by pulling out a pistol and shooting it.

  When we put aside fantasies like foom, digital megalomania, instant omniscience, and perfect control of every molecule in the universe, artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety (chapter 12). As the AI expert Stuart Russell puts it, “No one in civil engineering talks about ‘building bridges that don’t fall down.’ They just call it ‘building bridges.’” Likewise, he notes, AI that is beneficial rather than dangerous is simply AI.31

  Artificial intelligence, to be sure, poses the more mundane challenge of what to do about the people whose jobs are eliminated by automation. But the jobs won’t be eliminated that quickly. The observation of a 1965 report from NASA still holds: “Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor.”32 Driving a car is an easier engineering problem than unloading a dishwasher, running an errand, or changing a diaper, and at the time of this writing we’re still not ready to loose self-driving cars on city streets.33 Until the day when battalions of robots are inoculating children and building schools in the developing world, or for that matter building infrastructure and caring for the aged in ours, there will be plenty of work to be done. The same kind of ingenuity that has been applied to the design of software and robots could be applied to the design of government and private-sector policies that match idle hands with undone work.34

  * * *

  If not robots, then what about hackers? We all know the stereotypes: Bulgarian teenagers, young men wearing flip-flops and drinking Red Bull, and, as Donald Trump put it in a 2016 presidential debate, “somebody sitting on their bed that weighs 400 pounds.” According to a common line of thinking, as technology advances, the destructive power available to an individual will multiply. It’s only a matter of time before a single nerd or terrorist builds a nuclear bomb in his garage, or genetically engineers a plague virus, or takes down the Internet. And with the modern world so dependent on technology, an outage could bring on panic, starvation, and anarchy. In 2002 Martin Rees publicly offered the bet that “by 2020, bioterror or bioerror will lead to one million casualties in a single event.”35

  How should we think about these nightmares? Sometimes they are intended to get people to take security vulnerabilities more seriously, under the theory (which we will encounter again in this chapter) that the most effective way to mobilize people into adopting responsible policies is to scare the living daylights out of them. Whether or not that theory is true, no one would argue that we should be complacent about cybercrime or disease outbreaks, which are already afflictions of the modern world (I’ll turn to the nuclear threat in the next section). Specialists in computer security and epidemiology constantly try to stay one step ahead of these threats, and countries should clearly invest in both. Military, financial, energy, and Internet infrastructure should be made more secure and resilient.36 Treaties and safeguards against biological weapons can be strengthened.37 Transnational public health networks that can identify and contain outbreaks before they become pandemics should be expanded. Together with better vaccines, antibiotics, antivirals, and rapid diagnostic tests, they will be as useful in combatting human-made pathogens as natural ones.38 Countries will also need to maintain antiterrorist and crime-prevention measures such as surveillance and interception.39

  In each of these arms races, the defense will never, of course, be invincible. There may be episodes of cyberterrorism and bioterrorism, and the probability of a catastrophe will never be zero. The question I’ll consider is whether the grim facts should lead any reasonable person to conclude that humanity is screwed. Is it inevitable that the black hats will someday outsmart the white hats and bring civilization to its knees? Has technological progress ironically left the world newly fragile?

  No one can know with certainty, but when we replace worst-case dread with calmer consideration, the gloom starts to lift. Let’s start with the historical sweep: whether mass destruction by an individual is the natural outcome of the process set in motion by the Scientific Revolution and the Enlightenment. According to this narrative, technology allows people to accomplish more and more with less and less, so given enough time, it will allow one individual to do anything—and given human nature, that means destroy everything.

  But Kevin Kelly, the founding editor of Wired magazine and author of What Technology Wants, argues that this is in fact not the way technology progresses.40 Kelly was the co-organizer (with Stewart Brand) of the first Hackers’ Conference in 1984, and since that time he has repeatedly been told that any day now technology will outrun humans’ ability to domesticate it. Yet despite the massive expansion of technology in those decades (including the invention of the Internet), that has not happened. Kelly suggests that there is a reason: “The more powerful technologies become, the more socially embedded they become.” Cutting-edge technology requires a network of cooperators who are connected to still wider social networks, many of them committed to keeping people safe from technology and from each other. (As we saw in chapter 12, technologies get safer over time.) This undermines the Hollywood cliché of the solitary evil genius who commands a high-tech lair in which the technology miraculously works by itself. Kelly suggests that because of the social embeddedness of technology, the destructive power of a solitary individual has in fact not increased over time:

  The more sophisticated and powerful a technology, the more people are needed to weaponize it. And the more people needed to weaponize it, the more societal controls work to defuse, or soften, or prevent harm from happening. I add one additional thought. Even if you had a budget to hire a team of scientists whose job it was to develop a species-extinguishing bio weapon, or to take down the internet to zero, you probably still couldn’t do it. That’s because hundreds of thousands of man-years of effort have gone into preventing this from happening, in the case of the internet, and millions of years of evolutionary effort to prevent species death, in the case of biology. It is extremely hard to do, and the smaller the rogue team, the harder. The larger the team, the more societal influences.41

  All this is abstract—one theory of the natural arc of technology versus another. How does it apply to the actual dangers we face so that we can ponder whether humanity is screwed? The key is not to fall for the Availability bias and assume that if we can imagine something terrible, it is bound to happen. The real danger depends on the numbers: the proportion of people who want to cause mayhem or mass murder, the proportion of that genocidal sliver with the competence to concoct an effective cyber or biological weapon, the sliver of that sliver whose schemes will actually succeed, and the sliver of the sliver of the sliver that accomplishes a civilization-ending cataclysm rather than a nuisance, a blow, or even a disaster, after which life goes on.

  Start with the number of maniacs. Does the modern world harbor a significant number of people who want to visit murder and mayhem on strangers? If it did, life would be unrecognizable. They could go on stabbing rampages, spray gunfire into crowds, mow down pedestrians with cars, set off pressure-cooker bombs, and shove people off sidewalks and subway platforms into the path of hurtling vehicles. The researcher Gwern Branwen has calculated that a disciplined sniper or serial killer could murder hundreds of people without getting caught.42 A saboteur with a thirst for havoc could tamper with supermarket products, lace some pesticide into a feedlot or water supply, or even just make an anonymous call claiming to have done so, and it could cost a company hundreds of millions of dollars in recalls, and a country billions in lost exports.43 Such attacks could take place in every city in the world many times a day, but in fact take place somewhere or other every few years (leading the security expert Bruce Schneier to ask, “Where are all the terrorist attacks?”).44 Despite all the terror generated by terrorism, there must be very few individuals out there waiting for an opportunity to wreak wanton destruction.

  Among these depraved individuals, how large is the subset with the intelligence and discipline to develop an effective cyber- or bioweapon? Far from being criminal masterminds, most terrorists are bumbling schlemiels.45 Typical specimens include the Shoe Bomber, who unsuccessfully tried to down an airliner by igniting explosives in his shoe; the Underwear Bomber, who unsuccessfully tried to down an airliner by detonating explosives in his underwear; the ISIS trainer who demonstrated an explosive vest to his class of aspiring suicide terrorists and blew himself and all twenty-one of them to bits; the Tsarnaev brothers, who followed up on their bombing of the Boston Marathon by murdering a police officer in an unsuccessful attempt to steal his gun, and then embarked on a carjacking, a robbery, and a Hollywood-style car chase during which one brother ran over the other; and Abdullah al-Asiri, who tried to assassinate a Saudi deputy minister with an improvised explosive device hidden in his anus and succeeded only in obliterating himself.46 (An intelligence analysis firm reported that the event “signals a paradigm shift in suicide bombing tactics.”)47 Occasionally, as on September 11, 2001, a team of clever and disciplined terrorists gets lucky, but most successful plots are low-tech attacks on target-rich gatherings, and (as we saw in chapter 13) kill very few people. Indeed, I venture that the proportion of brilliant terrorists in a population is even smaller than the proportion of terrorists multiplied by the proportion of brilliant people. Terrorism is a demonstrably ineffective tactic, and a mind that delights in senseless mayhem for its own sake is probably not the brightest bulb in the box.48
