Architects of Intelligence


by Martin Ford


  I do agree that a lot of jobs that have existed for the last few hundred years are repetitive, and the humans who are doing them are basically exchangeable. If it’s a job where you hire people by the hundred or by the thousand to do it, and you can identify what that person does as a particular task that is then repeated over and over again, those kinds of jobs are going to be susceptible. That’s because you could say that, in those jobs, we are using humans as robots. So, it’s not surprising that when we have real robots, they’re going to be able to do those jobs.

  I also think that the current mindset among governments is: “Oh, well then. I guess we really need to start training people to be data scientists, because that’s the job of the future—or robot engineers.” This clearly isn’t the solution because we don’t need a billion data scientists and robot engineers: we just need a few million. This might be a strategy for a small country like Singapore, or for where I am currently, in Dubai; but it’s not a viable strategy for any major country because there are simply not going to be enough jobs in those areas. That’s not to say that there are no such jobs now: there certainly are, and training more people to do them makes sense; but this simply is not a solution to the long-term problem.

  There are really only two futures for the human economy that I see in the long run.

  The first is that, effectively, most people are not doing anything that’s considered economically productive. They’re not involved in the economic exchange of work for pay in any form, and this is the vision of the universal basic income: that there is a sector of the economy that is largely automated and incredibly productive, and that productivity generates wealth, in the form of goods and services, which in one way or another ends up subsidizing the economic viability of everyone else. That to me does not seem like a very interesting world to live in, at least not by itself, without a lot of other things going on to make life worth living and to provide sufficient incentive for people to do all of the things that we do now: going to school, learning and training, and becoming experts in various areas. It’s hard to see the motivation for acquiring a good education when it doesn’t have any economic function.

  The second of the two futures I can see in the long run is that even though machines will be producing most goods and providing basic services like transportation, there are still things that people can do which improve the quality of life for themselves and for others. There are people who are able to teach, to inspire people to live richer, more interesting, more varied and more fulfilling lives, whether that’s teaching people to appreciate literature or music, how to build, or even how to survive in the wilderness.

  MARTIN FORD: Do you think we can navigate as individuals and as a species towards a positive future, once AI has changed our economy?

  STUART J. RUSSELL: Yes, I really do, but I think that a positive future will require human intervention to help people live positive lives. We need to start actively navigating, right now, towards a future that can present the most constructive challenges and the most interesting experiences in life for people. A world that can build emotional resilience and nurture a generally constructive and positive attitude to one’s own life—and to the lives of others. At the moment, we are pretty terrible at doing that. So, we have to start changing that now.

  I think that we’ll also need to fundamentally change our attitude about what science is for and what it can do for us. I have a cell phone in my pocket, and the human race probably spent on the order of a trillion dollars on the science and engineering that went into ultimately creating things like my cell phone. And yet we spend almost nothing on understanding how people can live interesting and fulfilling lives, and how we can help people around us do that. I think that, as a race, we will need to start acknowledging that if we help another person in the right way, it creates enormous value for them for the rest of their lives. Right now, we have almost no science base for how to do this, we have no degree programs in how to do it, we have very few journals about it, and those that are trying are not taken very seriously.

  The future can have a perfectly functioning economy where people who are expert in living life well, and in helping other people, can provide those kinds of services. Those services may be coaching, they may be teaching, they may be consoling, or they may be collaborating, so that we can all really have a fantastic future.

  It’s not a grim future at all: it’s a far better future than what we have at present; but it requires rethinking our education system, our science base, and our economic structures.

  We now need to understand how this will function from an economic point of view in terms of the future distribution of income. We want to avoid a situation where there are the super-rich who own the means of production—the robots and the AI systems—and then there are their servants, and then there is the rest of the world doing nothing. That’s sort of the worst possible outcome from an economic point of view.

  So, I do think that there is a positive future that makes sense once AI has changed the human economy, but we need to get a better handle on what that’s going to look like now, so that we can construct a plan for getting there.

  MARTIN FORD: You’ve worked on applying machine learning to medical data at both Berkeley and nearby UCSF. Do you think artificial intelligence will create a more positive future for humans through advances in healthcare and medicine?

  STUART J. RUSSELL: I think so, yes, but I also think that medicine is an area where we know a great deal about human physiology—and so to me, knowledge-based or model-based approaches are more likely to succeed than data-driven machine learning systems.

  I don’t think that deep learning is going to work for a lot of important medical applications. The idea that today we can just collect terabytes of data from millions of patients and then throw that data into a black-box learning algorithm doesn’t make sense to me. There may be some areas of medicine where data-driven machine learning works very well, of course. Genomic data is one area, as is predicting human susceptibility to various kinds of genetically-related diseases. I also think deep learning will be strong at predicting the potential efficacy of particular drugs.

  But these examples are a long way from an AI being able to act like a doctor and being able to decide, perhaps, that a patient has a blocked ventricle in the brain that’s interfering with the circulation of cerebrospinal fluid. To really do that is more like diagnosing which part of a car is not working. If you have no idea how cars work, then figuring out that it’s the fan belt that’s broken is going to be very, very difficult.

  Of course, if you’re an expert car mechanic and you know how it all works, and you’ve got some symptoms to work with, maybe there’s a kind of flapping noise and the car’s overheating, then you can generally figure it out quickly. And it’s going to be the same with human physiology, except that there is a significant effort that must be put into building these models of human physiology.

  A lot of effort was already put into these models in the ‘60s and ‘70s, and they have helped AI systems in medicine progress to some degree. But today we have technology that can, in particular, represent the uncertainty in those models. Those earlier mechanistic models are deterministic and have specific parameter values: they represent exactly one completely predictable, fictitious human.

  Today’s probabilistic models, on the other hand, can represent an entire population, and they can accurately reflect the degree of uncertainty we might have about being able to predict, for example, exactly when someone is going to have a heart attack. It’s very hard to predict things like heart attacks on an individual level, but we can predict that there’s a certain probability per person, which might be increased during extreme exercise or stress, and that this probability would depend on various characteristics of the individual.

  This more modern and probabilistic approach behaves much more reasonably than previous systems. Probabilistic systems enable us to combine the classical models of human physiology with observation and real-time data, to make strong diagnoses and plan treatments.
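  To make that concrete, here is a minimal, purely illustrative sketch, not from the book, of the kind of probabilistic reasoning being described: a population-level prior for an individual’s risk, adjusted for personal characteristics, then combined with an observation using Bayes’ rule. Every function name and number below is invented for the example.

```python
# Illustrative toy model (all rates and names are assumptions, not real clinical values).

def prior_risk(age: int, under_stress: bool) -> float:
    """Hypothetical baseline probability of a condition for one individual,
    adjusted for a couple of personal characteristics."""
    base = 0.02 if age < 50 else 0.08          # assumed population rates
    return min(1.0, base * (2.0 if under_stress else 1.0))

def posterior_risk(prior: float, test_positive: bool,
                   sensitivity: float = 0.9, specificity: float = 0.95) -> float:
    """Bayes' rule: P(condition | test result) from the prior and the
    assumed test characteristics."""
    if test_positive:
        num = sensitivity * prior
        den = num + (1.0 - specificity) * (1.0 - prior)
    else:
        num = (1.0 - sensitivity) * prior
        den = num + specificity * (1.0 - prior)
    return num / den

p0 = prior_risk(age=62, under_stress=True)    # model-based prior for this person
p1 = posterior_risk(p0, test_positive=True)   # prior combined with observed data
print(f"prior={p0:.3f}, posterior={p1:.3f}")
```

  A real system would of course use far richer physiological models and continuous real-time data, but the structure is the same: a model-based prior over a whole population of possible patients, updated by evidence about the individual.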

  MARTIN FORD: I know you’ve focused a lot on the potential risks of weaponized AI. Could you talk more about that?

  STUART J. RUSSELL: Yes, I think we are now facing the prospect of a new arms race, one that may already be leading towards the development of lethal autonomous weapons: weapons that can be given some mission description which they have the ability to achieve by themselves, such as identifying, selecting, and attacking human targets.

  There are moral arguments that this will cross a fundamental line for artificial intelligence: that we’re handing over the power of life and death to a machine, and that this is a fundamental reduction in the way we value human life and the dignity of human life.

  But I think a more practical argument is that a logical consequence of autonomy is scalability. Since no individual autonomous weapon requires supervision by an individual human, someone could launch as many weapons as they want. Five guys in a control room could launch 10,000,000 weapons and wipe out all males between the ages of 12 and 60 in some country. So, these can be weapons of mass destruction, and they have this property of scalability: someone could launch an attack with 10, or 1,000, or 1,000,000, or 10,000,000 weapons.

  With nuclear weapons, if they were used at all, someone would be crossing a major threshold, one which we have managed, by the skin of our teeth, to avoid crossing since 1945. But autonomous weapons don’t have such a threshold, and so things can escalate more smoothly. They are also easily proliferated, so once they are manufactured in very large numbers it’s quite likely they’ll be on the international arms market and they’ll be accessible to people who have fewer scruples than, you know, the Western powers.

  MARTIN FORD: There’s a lot of technology transfer between commercial applications and potential military applications. You can buy a drone on Amazon that could potentially be weaponized...

  STUART J. RUSSELL: So, at the moment, you can buy a drone that’s remotely piloted, maybe with first-person vision. You could certainly attach a little bomb to it and deliver it and kill someone, but that’s a remotely piloted vehicle, which is different. It’s not scalable because you can’t launch 10,000,000 of those unless you’ve got 10,000,000 pilots. So, someone would need a whole country trained to do that, of course, or they could also give those 10,000,000 people machine guns and then go and kill people. Thankfully we have an international system of control of sanctions, and military preparedness, and so on—to try to prevent these things from happening. But we don’t have an international system of control that would work against autonomous weapons.

  MARTIN FORD: Still, couldn’t a few people in a basement somewhere develop their own autonomous control system and then deploy it on commercially available drones? How would we be able to control those kinds of homemade AI weapons?

  STUART J. RUSSELL: Yes, something resembling the software that controls a self-driving car could conceivably be deployed to control a quadcopter that delivers a bomb. Then you might have something like a homemade autonomous weapon. It could be that under a treaty, there would be a verification mechanism that would require the cooperation of drone manufacturers and the people who make chips for self-driving cars and so on, so that anyone ordering large quantities would be noticed—in the same way that anyone ordering large quantities of precursor chemicals for chemical weapons is not going to get away with it because the corporation is required, by the chemical weapons treaty, to know its customer and to report any unusual attempts that are made to purchase large quantities of certain dangerous products.

  I think it will be possible to have a fairly effective regime that could prevent very large diversions of civilian technology to create autonomous weapons. Bad things would still happen, and I think this may be inevitable, because in small numbers it will likely always be feasible for homemade autonomous weapons to be built. In small numbers, though, autonomous weapons don’t have a huge advantage over a piloted weapon. If you’re going to launch an attack with ten or twenty weapons, you might as well pilot them because you can probably find ten or twenty people to do that.

  There are other risks of course with AI and warfare, such as where an AI system may accidentally escalate warfare when machines misinterpret some signal and start attacking each other. And the future risk of a cyber-infiltration means that you may think you have a robust defense based on autonomous weapons when in fact, all your weapons have been compromised and are going to turn on you instead when a conflict begins. So that all contributes to strategic uncertainty, which is not great at all.

  MARTIN FORD: These are scary scenarios. You’ve also produced a short film called Slaughterbots, which is quite a terrifying video.

  STUART J. RUSSELL: We made the video really just to illustrate these concepts because I felt that, despite our best efforts to write about them and give presentations about them, somehow the message wasn’t getting through. People were still saying, “oh, autonomous weapons are science fiction.” They were still imagining it as Skynet and Terminators, as a technology that doesn’t exist. So, we were simply trying to point out that we’re not talking about spontaneously evil weapons, and we’re not talking about machines taking over the world—but we’re also not talking about science fiction anymore.

  These AI warfare technologies are feasible today, and they bring some new kinds of extreme risks. We’re talking about scalable weapons of mass destruction falling into the wrong hands. These weapons could inflict enormous damage on human populations. So, that’s autonomous weapons.

  MARTIN FORD: In 2014, you published a letter, along with the late Stephen Hawking and the physicists Max Tegmark and Frank Wilczek, warning that we aren’t taking the risks associated with advanced AI seriously enough. It’s notable that you were the only computer scientist among the authors. Could you tell the story behind that letter and what led you to write it? (https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html)

  STUART J. RUSSELL: So, it’s an interesting story. It started when I got a call from National Public Radio, who wanted to interview me about this movie called Transcendence. I was living in Paris at the time and the movie wasn’t out in Paris, so I hadn’t seen it yet.

  I happened to have a stopover in Boston on the way back from a conference in Iceland, so I got off the plane in Boston and I went to the movie theatre to watch the movie. I’m sitting there towards the front of the theatre, and I don’t really know what’s going to happen in the movie at all and then, “Oh, look! It’s showing the Berkeley computer science department. That’s kind of funny.” Johnny Depp is playing the AI professor, “Oh, that’s kind of interesting.” He’s giving a talk about AI, and then someone, some anti-AI terrorist, decides to shoot him. So, I’m sort of involuntarily shrinking down in my seat seeing this happening, because that could really have been me at the time. Then the basic plot of the movie is that before he dies they manage to upload his brain into a big quantum computer and the combination of those two things creates a super-intelligent entity that threatens to take over the world because it very rapidly develops all kinds of amazing new technologies.

  So anyway, we wrote an article that was, at least superficially, a review of the movie, but it was really saying, “You know, although this is just a movie, the underlying message is real: which is that if—or when—we create machines that can have a dominant effect on the real world, then that can present a very serious problem for us: that we could, in fact, cede control over our futures to other entities besides humans.”

  The problem is very straightforward: our intelligence is what gives us our ability to control the world; and so, intelligence represents power over the world. If something has a greater degree of intelligence, then it has more power.

  We are already on the way to creating things that are much more powerful than us; but somehow, we have to make sure that they never, ever, have any power. So, when we describe the AI situation like that, people say, “Oh, I see. OK, there’s a problem.”

  MARTIN FORD: And yet, a lot of prominent AI researchers are quite dismissive of these concerns...

  STUART J. RUSSELL: Let me talk about these AI denialists. There are various arguments that people put forward as to why we shouldn’t pay any attention to the AI problem; there are almost too many of them to count. I’ve collected somewhere between 25 and 30 distinct arguments, but they all share a single property, which is that they simply do not make any sense. They don’t really stand up to scrutiny. Just to give you one example, something you’ll often hear is, “well, you know, it’s absolutely not a problem because we’ll just be able to switch them off.” That is like saying that beating AlphaZero at Go is absolutely not a problem: you just put the white pieces in the right place, you know? It just doesn’t stand up to five seconds of scrutiny.

  A lot of these AI denialist arguments, I think, reflect a kind of a knee-jerk defensive reaction. Perhaps some people think, “I’m an AI researcher. I feel threatened by this thought, and therefore I’m going to keep this thought out of my head and find some reason to keep it out of my head.” That’s one of my theories about why some otherwise very informed people will try to deny that AI is going to become a problem for humans.

  This even extends to some mainstream people in the AI community who deny that AI will ever be successful, which is ironic because we’ve spent 60 years fending off philosophers who have denied that the AI field will ever be successful. We’ve also spent those 60 years demonstrating and proving, one time after another, how things that the philosophers said would be impossible can indeed happen—such as beating the world champion in chess.

 
