In academia, there are a number of excellent places. Canada has Montreal and Toronto, both home to world-leading deep learning universities, and the likes of Berkeley, Oxford, Stanford, and Carnegie Mellon also have a lot of researchers in the field. It’s not just a Western thing; countries like China are investing heavily in building up their domestic capacity.
MARTIN FORD: Those are not focused specifically on AGI, though.
NICK BOSTROM: Yes, but it’s a fuzzy boundary. Among those groups currently overtly working towards AGI, aside from DeepMind, I guess OpenAI would be another group that one could point to.
MARTIN FORD: Do you think the Turing test is a good way to determine if we’ve reached AGI, or do we need another test for intelligence?
NICK BOSTROM: It’s not so bad if what you want is a rough-and-ready criterion for when you have fully succeeded. I’m talking about a full-blown, difficult version of the Turing test. Something where you can have experts interrogate the system for an hour, or something like that. I think that’s an AI-complete problem. It can’t be solved other than by developing general artificial intelligence. If what you’re interested in is gauging the rate of progress, say, or establishing benchmarks to know what to shoot for next with your AI research team, then the Turing test is maybe not such a good objective.
MARTIN FORD: Because it turns into a gimmick if it’s at a smaller scale?
NICK BOSTROM: Yes. There’s a way of doing it right, but that’s too difficult, and we don’t know at all how to do that right now. If you wanted incremental progress on the Turing test, what you would get would be these systems that have a lot of canned answers plugged in, and clever tricks and gimmicks, but that actually don’t move you any closer to real AGI. If you want to make progress in the lab, or if you want to measure the rate of progress in the world, then you need other benchmarks that plug more into what is actually getting us further down the road, and that will eventually lead to fully general AI.
MARTIN FORD: What about consciousness? Is that something that might automatically emerge from an intelligent system, or is that an entirely independent phenomenon?
NICK BOSTROM: It depends on what you mean by consciousness. One sense of the word is the ability to have a functional form of self-awareness, that is, you’re able to model yourself as an actor in the world and reflect on how different things might change you as an agent. You can think of yourself as persisting through time. These things come more or less as a side effect of creating more intelligent systems that can build better models of all kinds of aspects of reality, and that includes themselves.
Another sense of the word “consciousness” is this phenomenal experiential field that we have that we think has moral significance. For example, if somebody is actually consciously suffering, then it’s a morally bad thing. It means something more than just that they tend to run away from noxious stimuli because they actually experience it inside of themselves as a subjective feeling. It’s harder to know whether that phenomenal experience will automatically arise just as a side effect of making machine systems smarter. It might even be possible to design machine systems that don’t have qualia but could still be very capable. Given that we don’t really have a very clear grasp of what the necessary and sufficient conditions are for morally relevant forms of consciousness, we must accept the possibility that machine intelligences could attain consciousness, maybe even long before they become human-level or superintelligent.
We think many non-human animals have morally relevant forms of experience. Even with something as simple as a mouse, if you want to conduct medical research on mice, there is a set of protocols and guidelines that you have to follow. You have to anesthetize a mouse before you perform surgery on it, for example, because we think it would suffer if you just carved it up without anesthesia. If we have machine-intelligent systems, say, with the same behavioral repertoire and cognitive complexity as a mouse, then it seems to be a live question whether at that point it might not also start to reach levels of consciousness that would give it some degree of moral status and limit what we can do to it. At least it seems we shouldn’t be dismissing that possibility out of hand. The mere possibility that it could be conscious might already be sufficient grounds for some obligation on our part to do things, at least if they’re easy to do, that would give the machine a better-quality life.
MARTIN FORD: So, in a sense, the risks here run both ways? We worry about the risk of AI harming us, but there’s also the risk that perhaps we’re going to enslave a conscious entity or cause it to suffer. It sounds to me that there is no definitive way that we’re ever going to know if a machine is truly conscious. There’s nothing like the Turing test for consciousness. I believe you’re conscious because you’re the same species I am, and I believe I’m conscious, but you don’t have that kind of connection with a machine. It’s a very difficult question to answer.
NICK BOSTROM: Yes, I think it is difficult. I wouldn’t say species membership is the main criterion we use to posit consciousness; there are a lot of human beings that are not conscious. Maybe they are in a coma, or they are fetuses, or they could be brain dead, or under deep anesthesia. Most people also think that non-human beings, certain animals for instance, have various degrees and forms of conscious experience. So, we are able to project it outside our own species, but I think it is true that it will be a challenge for human empathy to extend the requisite level of moral consideration to digital minds, should such come to exist.
We have a hard enough time with animals. Our treatment of animals, particularly in meat production, leaves much to be desired, and animals have faces and can squeak! If you have an invisible process inside a microprocessor, it’s going to be much harder for humans to recognize that there could be a sentient mind in there that deserves consideration. Even today, it seems like one of those crazy topics that you can’t really take seriously. It’s like a discussion for a philosophical seminar rather than a real issue, like algorithmic discrimination is, or killer drones.
Ultimately, it needs to be moved out of this sphere of crazy topics that only professional philosophers talk about, and into a topic that you could have a reasonable public debate about. It needs to happen gradually, but I think maybe it’s time to start effecting that shift, just as the topic of what AI might do for the human condition has moved from science fiction into a more mainstream conversation over the last few years.
MARTIN FORD: What do you think about the impact on the job market and the economy that artificial intelligence might have? How big a disruption do you think that could be and do you think that’s something we need to be giving a lot of attention to?
NICK BOSTROM: In the very short term, I think that there might be a tendency to exaggerate the impacts on the labor market. It is going to take time to really roll out systems on a large enough scale to have a big impact. Over time, though, I do think that advances in machine learning will have an increasingly large impact on human labor markets, and if you fully succeed with artificial intelligence, then yes, artificial intelligence could basically do everything. In some respects, the ultimate goal is full unemployment. The reason why we do technology, and why we do automation, is so that we don’t have to put in so much effort to achieve a given outcome. You can do more with less, and that’s the gestalt of technology.
MARTIN FORD: That’s the utopian vision. So, would you support, for example, a basic income as a mechanism to make sure that everyone can enjoy the fruits of all this progress?
NICK BOSTROM: Some functional analog of that could start to look increasingly desirable over time. If AI truly succeeds, and we resolve the technical control problem and have some reasonable governance, then an enormous bonanza of explosive economic growth takes place. Even a small slice of that would be ample enough to give everybody a really great life, so it seems one should at the minimum do that. If we develop superintelligence, we will all carry a slice of the risk of this development, whether we like it or not. It seems only fair, then, that everybody should also get some slice of the upside if things go well.
I think that should be part of the vision of how machine superintelligence should be used in the world; at least a big chunk of it should be for the common good of all of humanity. That’s also consistent with having private incentives for developers, but the pie, if we really hit the jackpot, would be so large that we should make sure that everybody has a fantastic quality of life. That could take the form of some kind of universal basic income or there could be other schemes, but the net result of that should be that everybody sees a great gain in terms of their economic resources. There will also be other benefits—like better technologies, better healthcare, and so forth—that superintelligence could enable.
MARTIN FORD: What are your thoughts on the concern that China could reach AGI first, or at the same time as us? It seems to me that the values of whatever culture develops this technology do matter.
NICK BOSTROM: I think it might matter less which particular culture happens to develop it first. It matters more how competent the particular people or group that are developing it are, and whether they have the opportunity to be careful. This is one of the concerns with a racing dynamic, where you have a lot of different competitors racing to get to some kind of finish line first—in a tight race you are forced to throw caution to the wind. The race would go to whoever squanders the least effort on safety, and that would be a very undesirable situation.
We would rather have whoever develops the first superintelligence have the option, at the end of the development process, to pause for six months, or maybe a couple of years, to double-check their systems and install whatever extra safeguards they can think of. Only then would they slowly and cautiously amplify the system’s capabilities up to the superhuman level. You don’t want them to be rushed by the fact that some competitor is nipping at their heels. When thinking about what the most desirable strategic situation for humanity is when superintelligence arises in the future, it seems that one important desideratum is that the competitive dynamics should be allayed as much as possible.
MARTIN FORD: If we do have a “fast takeoff” scenario where the intelligence can recursively improve itself, though, then there is an enormous first-mover advantage. Whoever gets there first could essentially be uncatchable, so there’s a huge incentive for exactly the kind of competition that you’re saying isn’t a good thing.
NICK BOSTROM: In certain scenarios, yes, you could have dynamics like that, but I think the earlier point I made about pursuing this with a credible commitment to using it for the global good is important here, not only from an ethical point of view but also from the point of view of reducing the intensity of the racing dynamic. It would be good if all the competitors feel that even if they don’t win the race, they’re still going to benefit tremendously. That will then make it more feasible to have some arrangement in the end where the leader can get a clean shot at this without being rushed.
MARTIN FORD: That calls for some sort of international coordination, and humanity’s track record isn’t that great. Compared to the chemical weapons ban and the nuclear non-proliferation treaty, it sounds like AI would be an even greater challenge in terms of verifying that people aren’t cheating, even if you did have some sort of agreement.
NICK BOSTROM: In some respects it would be more challenging, and in other respects maybe less challenging. The human game has often been played around scarcity—there is a very limited set of resources, and if one person or country has those resources, then somebody else does not have them. With AI there is the opportunity for abundance in many respects, and that can make it easier to form cooperative arrangements.
MARTIN FORD: Do you think that we will solve these problems and that AI will be a positive force overall?
NICK BOSTROM: I’m full of both hopes and fears. I would like to emphasize the upsides here, both in the short term and longer term. Because of my job and my book, people always ask me about the risks and downsides, but a big part of me is also hugely excited and eager to see all the beneficial uses that this technology could be put to and I hope that this could be a great blessing for the world.
NICK BOSTROM is a Professor at Oxford University, where he is the founding Director of the Future of Humanity Institute. He also directs the Governance of Artificial Intelligence Program. Nick studied at the University of Gothenburg, Stockholm University and King’s College London prior to receiving his PhD in philosophy from the London School of Economics in 2000. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller.
Nick has a background in physics, artificial intelligence, and mathematical logic as well as philosophy. He is a recipient of the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed on Foreign Policy’s Top 100 Global Thinkers list twice, and he was included on Prospect magazine’s World Thinkers list as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages, and there have been more than 100 translations and reprints of his works.
Chapter 6. YANN LECUN
A human can learn to drive a car in 15 hours of training without crashing into anything. If you want to use the current reinforcement learning methods to train a car to drive itself, the machine will have to drive off cliffs 10,000 times before it figures out how not to do that.
VP & CHIEF AI SCIENTIST, FACEBOOK
PROFESSOR OF COMPUTER SCIENCE, NYU
Yann LeCun has been involved in the academic and industry side of AI and Machine Learning for over 30 years. Prior to joining Facebook, Yann worked at AT&T’s Bell Labs, where he is credited with developing convolutional neural networks—a machine learning architecture inspired by the brain’s visual cortex. Along with Geoff Hinton and Yoshua Bengio, Yann is part of a small group of researchers whose effort and persistence led directly to the current revolution in deep learning neural networks.
MARTIN FORD: Let’s jump right in and talk about the deep learning revolution that’s been unfolding over the past decade or so. How did that get started? Am I right that it was the confluence of some refinements to neural network technology, together with much faster computers and an explosion in the amount of training data available?
YANN LECUN: Yes, but it was more deliberate than that. With the emergence of the backpropagation algorithm in 1986-87, people were able to train neural nets with multiple layers, which was something that the old models didn’t do. This resulted in a wave of interest that lasted right through to around 1995 before petering out.
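[As a minimal illustration of the point about multiple layers, the following numpy sketch trains a tiny two-layer network with backpropagation on XOR, a function that a single linear unit cannot represent. It is not from the interview; the architecture, learning rate, and iteration count are arbitrary illustrative choices.]

```python
import numpy as np

# Illustrative sketch: a two-layer network trained by backpropagation on XOR,
# something a single-layer model cannot learn.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient layer by layer
    d_out = (out - y) * out * (1 - out)     # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)      # gradient at hidden pre-activation

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # typically converges toward [0, 1, 1, 0]
```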
Then in 2003, Geoffrey Hinton, Yoshua Bengio, and I got together and said, we know these techniques are eventually going to win out, and we need to get together and hash out a plan to renew the community interest in these methods. That’s what became deep learning. It was a deliberate conspiracy, if you will.
MARTIN FORD: Looking back, did you imagine the extent to which you would be successful? Today, people think artificial intelligence and deep learning are synonymous.
YANN LECUN: Yes and no. Yes, in the sense that we knew eventually those techniques would come to the fore for computer vision, speech recognition, and maybe a couple of other things—but no, we didn’t realize that AI would become synonymous with deep learning.
We didn’t realize that there would be so much of an interest from the wider industry that it would create a new industry altogether. We didn’t realize that there would be so much interest from the public, and that it would not just revolutionize computer vision and speech recognition, but also natural language understanding, robotics, medical imaging analysis, and that it would enable self-driving cars that actually work. That took us by surprise, that’s for sure.
Back in the early ‘90s, I would have thought that this kind of progress would have happened slightly earlier but more progressively, rather than the big revolution that occurred around 2013.
MARTIN FORD: How did you first become interested in AI and machine learning?
YANN LECUN: As a kid, I was interested in science and engineering and the big scientific questions—life, intelligence, the origin of humanity. Artificial intelligence was something that fascinated me, even though it didn’t really exist as a field in France during the 1960s and 1970s. Even with a fascination for those questions, when I finished high school I believed that I would eventually become an engineer rather than a scientist, so I began my studies in the field of engineering.
Early on in my studies, around 1980, I stumbled on a philosophy book called Language and Learning: The Debate Between Jean Piaget and Noam Chomsky, which was a transcription of a debate between Jean Piaget, the developmental psychologist, and Noam Chomsky, the linguist. The book contained a really interesting debate about nature versus nurture and the emergence of language and intelligence.
On the side of Piaget in the debate was Seymour Papert, a professor of computer science at MIT who was involved with early machine learning and who had arguably helped kill off the first wave of neural nets in the late 1960s. Here he was, 10 years later, singing the praises of a very simple machine learning model called the perceptron, which had been invented in the 1950s and which he had worked on in the 1960s. That was the first time I read about the concept of a learning machine, and I was absolutely fascinated by the idea that a machine could learn. I thought learning was an integral part of intelligence.
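[For readers unfamiliar with the model Papert was praising, here is a minimal sketch of the classic perceptron learning rule. It is not from the interview; the toy OR dataset, learning rate, and epoch count are illustrative assumptions.]

```python
import numpy as np

# Illustrative sketch of a Rosenblatt-style perceptron: a single linear
# threshold unit trained with the classic error-driven update rule.
def train_perceptron(X, y, epochs=20, lr=0.1):
    """X: (n_samples, n_features), y: labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred        # -1, 0, or +1
            w += lr * err * xi         # nudge weights toward the target
            b += lr * err
    return w, b

# Linearly separable toy data: the OR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # expected: [0, 1, 1, 1]
```

A single unit like this can only separate classes with a line or hyperplane, which is why multi-layer networks trained with backpropagation, as discussed above, were such an important step.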
As an undergrad, I dug up all the literature I could find about machine learning and did a couple of projects on it. I discovered that nobody in the West was working on neural nets. A few Japanese researchers were working on what became known as neural networks, but no one in the West was, because the field had been killed in the late ‘60s in part by Seymour Papert and Marvin Minsky, the famous American AI researcher.