JOSH TENENBAUM: Yes, it’s trying to take our reverse engineering approach and understand this one simple aspect of “self.” I call it simple, but it’s only one small aspect of the big set of things you could mean by consciousness: to understand what the basic sense of self is that humans have, and what it would mean to build a machine that way. It’s a really interesting question. AI people, especially those who are interested in AGI, will tell you that they are trying to build machines that think for themselves or learn for themselves, but you should ask, “What does it mean to build a machine that actually thinks for itself or learns for itself?” Can you do that unless it has a sense of self?
If we look at today’s AI systems, whether it’s self-driving cars or systems like AlphaGo that in some sense are advertised as “learning for themselves,” they don’t actually have a self; that’s not part of them. They don’t really understand what they’re doing, in the sense that I understand when I get into a car, and I’m in the car, and I’m driving somewhere. If I played Go, I would understand that I’m playing a game, and if I decide to learn Go, I’ve made a decision for myself. I might learn Go by asking someone to teach me; I might practice by myself or with others. I might even decide I want to become a professional Go player and go to the Go Academy. Maybe I decide I’m really serious and I want to try to become one of the best in the world. When a human becomes a world-class Go player, that’s how they do it. They make a bunch of decisions for themselves, very much guided by their sense of self, at many different time scales.
At the moment we don’t have any notion like that in AI. We don’t have systems that do anything for themselves, even at a high level. We don’t have systems that have real goals the way a human has goals; rather, we have systems that humans built to achieve their own goals. I think it’s absolutely essential, if we want systems with human-like, human-level AI, that they do a lot of things for themselves that engineers are doing right now, but I think it’s possible that they could do that.
We’re trying to understand, in engineering terms, what it means for an agent to make these large decisions for itself: to set up the problems it’s trying to solve, or the learning problems it’s trying to solve, all of which are currently set up by engineers. I think it’s likely that we would have to have machines like that if they were going to be intelligent at the human level. I think it’s also a real question whether we want to do that, because we don’t have to. We can decide what level of selfness or autonomy we really want to give to our machine systems. They might well be able to do useful things for us without having the full sense of self that humans have. That might be an important decision for us to make. We might think that’s the right way to go for technology and society.
MARTIN FORD: I want to ask you about some of the potential risks associated with AI. What should we really be concerned about, both in the relatively near term and in the longer term, with regard to the impact that artificial intelligence could have on society and the economy?
JOSH TENENBAUM: Some of the risks that people have advertised a lot are that we’ll see some kind of singularity, or superintelligent machines that take over the world or have their own goals that are incompatible with human existence. It’s possible that could happen in the far future, but I’m not especially worried about that, in part because of the things I was just talking about. We still don’t know how to give machines any sense of self at all. The idea that they would decide for themselves to take over the world at our expense is so far down the line, and there are a lot of steps between now and then.
Honestly, I’m a lot more worried about the shorter-term steps. I think between where we are right now, and any kind of human-level AGI, let alone super-human level, we are going to develop increasingly powerful algorithms, which can have all sorts of risks. These are algorithms that will be used by people for goals, some of which are good, and some of which are not good. Many of those not good goals are just people pursuing their own selfish ends, but some of them might actually be evil or bad actors. Like any technology, they can be used for good, but they can also be used for selfish purposes, and for evil or bad deeds. We should worry about those things because these are very powerful technologies, which are already being used in all of these ways, for example, in machine learning.
The near-term risks that we need to think about are the ones that everybody’s talking about. I wish I had good ideas on how to think about those, but I don’t. I think that the broader AI community increasingly realizes that it needs to think about the near-term risks now, whether it’s privacy, human rights, or even topics like how AI, or automation more generally, is reshaping the economy and the job landscape. It’s much bigger than AI; it’s technology more broadly.
If we want to point to new challenges, I think one has to do with jobs, which is important. For pretty much all of human history, my understanding is that most people found some kind of livelihood, whether it was hunting and gathering, farming, working in a manufacturing plant, or whatever kind of business. You would spend the first part of your life learning some things, including a trade or skills that would then set up some kind of livelihood for you, which you could pursue until you died. You could develop a new skill set or change your line of work, but you didn’t have to.
Now, what we’re increasingly seeing is that technology has advanced to the point that many jobs and livelihoods change, come into existence, or go out of existence on a faster time scale than an individual adult’s working life. There was always technological change that made whole lines of work disappear and others come to be, but it used to happen across generations. Now it’s happening within generations, which puts a different kind of stress on the workforce.
More and more people will have to confront the fact that you can’t just learn a certain set of skills and then use those to work for the rest of your life. You might have to be continually retraining yourself because technology is changing. It’s not just more advanced, but it’s advancing faster than it ever has. AI is part of that story, but it’s much bigger than just AI. I think those are things that we as a society have to think about.
MARTIN FORD: Given that things could progress so rapidly, do you worry that a lot of people inevitably are going to be left behind? Is a universal basic income something that we should be giving serious consideration to?
JOSH TENENBAUM: We should think about a basic income, yes, but I don’t think anything is inevitable. Humans are a resilient and flexible species. Yes, it might be that our abilities to learn and retrain ourselves have limits. If technology keeps advancing, especially at this pace, it might be that we have to do things like that. But again, we’ve seen that happen in previous eras of human history; it’s just unfolded more slowly.
I think it would be fair to say that most of us who work for a living in the socio-economic bracket that you and I live in, as writers, scientists, or technologists, would find that if we went back thousands of years in human history, people would say, “That’s not work, that’s just playing! If you’re not laboring in the fields from dawn till dusk, you’re not actually working.” So, we don’t know what the future of work is going to be like.
Just because it might change fundamentally, it doesn’t mean that the idea that you would spend eight hours a day doing something economically valuable goes away. Whether we’re going to have to have some kind of universal basic income, or just see the economy working in a different way, I don’t know about that, and I’m certainly no expert on that, but I think that AI researchers should be part of that conversation.
Another conversation, one that’s much larger and much more urgent, is climate change. We don’t know what the future of human-caused climate change will look like, but we do know that AI researchers are increasingly contributing to it. Whether it’s AI or Bitcoin mining, just look at what computers are increasingly being used for, and the massive and accelerating energy consumption.
I think we as AI researchers should think about the ways in which what we’re doing is actually contributing to climate change, and ways we might contribute positively to solving some of those problems. I think that’s an example of an urgent problem for society that AI researchers maybe don’t think about too much, but they are increasingly part of the problem and maybe part of the solution.
There are also similar issues, like human rights and ways that AI technologies could be used to spy on people, but researchers could also use those technologies to help people figure out when they’re being spied on. We can’t, as researchers, prevent the things that we in our field invent from being used for bad purposes, but we can work harder to develop the good purposes, and also to develop and to use those technologies to push back against bad actors or uses. These are really moral issues that AI researchers need to be engaging in.
MARTIN FORD: Do you think there’s a role for regulation to help ensure that AI remains a positive force for society?
JOSH TENENBAUM: I think Silicon Valley can be very libertarian, with an ethos that says we should break things and let other people pick up the pieces. Honestly, I wish that governments and the tech industry were not so far apart and hostile to each other, and saw more of a common purpose.
I am an optimist, and I do think that these different parties can and should be working more closely together, and that AI researchers can be standard bearers for that kind of cooperation. I think we need it as a community, not to mention as a society.
MARTIN FORD: Let me ask you to comment more specifically on the prospect for superintelligence and the alignment or control problem that Nick Bostrom has written about. I think his concern is that, while it might be a long time before superintelligence is achieved, it might take us even longer to work out how to maintain control of a superintelligent system, and that’s what underlies his argument that we should be focusing on this issue now. How would you respond to that?
JOSH TENENBAUM: I think it’s reasonable for people to be thinking about that. We think about that same thing. I wouldn’t say it should be the overriding goal of our thinking, because while you could imagine some kind of superintelligence that would pose an existential risk to humanity, I just think we have other existential risks that are much more urgent. There are already ways that machine learning technologies and other kinds of AI technologies are contributing to big problems confronting us right now as a human species, and some of these rise to the level of existential risk.
I want to put that in context and say that people should be thinking about problems on all timescales. The issue of value alignment is difficult to address, and one of the challenges in addressing it right now is that we don’t know what values are. Personally, I think that when AI safety researchers talk about value alignment, they have a very simplistic and maybe naive idea of what a value even is. In some of the work that we do in computational cognitive science, we’re actually trying to understand and reverse engineer what values are to humans. What are moral principles, for example? These are not things that we understand in engineering terms.
We should think about these issues, but my approach is that we have to understand ourselves better before we can work on the technology side. We have to understand what our values actually are. How do we as humans come to learn them and come to know them? What are our moral principles? How do they work in engineering terms? If we can understand that, it’s an important part of understanding ourselves, and an important part of the cognitive science agenda.
It will be both useful and probably essential as machines not only become more intelligent but come to have more of an actual sense of self and become autonomous actors. It will be important for addressing the issues you’re talking about. I just think we’re far from understanding how to address them, and how they work in natural intelligence.
We are also recognizing that there are nearer-term, really big risks and problems that are not of the AI value alignment sort, but are things like, what are we doing to our climate? How are governments or companies using AI technologies today to manipulate people?
Those are things we should worry about now. Part of our effort should go into thinking about how we become good moral actors, and how we do things that really make the world better and not worse. We should be engaging with issues like that, or climate change, which are current or near-term risks that AI can make better or worse. That’s in contrast to superintelligence value alignment, which we should also be thinking about, but more from a basic science perspective: what does it even mean to have a value?
AI researchers should work on all of these things. It’s just that the value alignment questions are very basic research ones that are far from being put into practice, or needing to be. We need to make sure that we don’t lose sight of the real current moral issues that AI needs to be engaged with.
MARTIN FORD: Do you think that we’ll succeed in making sure that the benefits of artificial intelligence outweigh the downsides?
JOSH TENENBAUM: I’m an optimist by nature, so my first response is to say yes, but we can’t take it for granted. It’s not just AI, but technology, whether it’s smartphones or social media, is transforming our lives and changing how we interact with each other. It really is changing the nature of human experience. I’m not sure it’s always for the better. It’s hard to be optimistic when you see a family where everybody’s just on their phones, or when you see some of the negative things that social media has led to.
I think it’s important for us to realize, and to study, all the ways these technologies are doing crazy things to us! They are hacking our brains, our value systems, our reward systems, and our social interaction systems in ways that are pretty clearly not just positive. I think we need more active immediate research to try to understand this and to think about it. This is a place where I feel that we can’t be guaranteed that the technology is leading us to a good outcome, and AI right now, with machine learning algorithms, is not necessarily on the side of good.
I’d like the community to think about that in a very active way. In the long term, yes, I’m optimistic that we will build the kinds of AI that are, on balance, forces for good, but I think this is really a key moment for all of us who work in this field to really be serious about this.
MARTIN FORD: Do you have any final thoughts, or is there anything I didn’t ask about that you feel is important?
JOSH TENENBAUM: The questions that animate the work we’re doing, and that animate many of us in this field, are questions that people have thought about for as long as people have thought about anything. What is the nature of intelligence? What are thoughts? What does it mean to be human? It’s tremendously exciting that we have the opportunity to work on these questions now in ways where we can make both real engineering and real scientific progress, rather than simply considering them as abstract philosophical questions.
When we think about building AI of any ambitious sort, but especially AGI, we should see it not just as a technology and engineering problem, but as one side of one of the biggest scientific questions humanity has ever thought about: what is the nature of intelligence, and what are its origins in the universe? The idea of pursuing AI as part of that larger program is one that I think is tremendously exciting, and one that we should all be excited and inspired by. That means thinking about ways of making technology that makes us smarter, and doesn’t make us stupider.
We have the opportunity to both understand more about what it means to be intelligent in a human way, and learn how to build technology that can make us smarter individually and collectively. It’s super exciting to be able to do that, but it’s also imperative that we take that seriously when we work on technology.
JOSH TENENBAUM is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. He is also a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM). Josh studies perception, learning, and common-sense reasoning in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing artificial intelligence closer to human-level capabilities. Josh received his undergraduate degree in physics from Yale University in 1993, and his PhD from MIT in 1999. After a brief postdoc with the MIT AI Lab, he joined the Stanford University faculty as Assistant Professor of Psychology and (by courtesy) Computer Science. He returned to MIT as a faculty member in 2002.
He and his students have published extensively in cognitive science, machine learning and other AI-related fields, and their papers have received awards at venues across the AI landscape, including leading conferences in computer vision, reinforcement learning and decision-making, robotics, uncertainty in AI, learning and development, cognitive modeling and neural information processing. They have introduced several widely used AI tools and frameworks, including models for nonlinear dimensionality reduction, probabilistic programming, and Bayesian approaches to unsupervised structure discovery and program induction. Individually, he is the recipient of the Howard Crosby Warren Medal from the Society of Experimental Psychologists, the Distinguished Scientific Award for Early Career Contribution to Psychology from the American Psychological Association, and the Troland Research Award from the National Academy of Sciences, and is a fellow of the Society of Experimental Psychologists and the Cognitive Science Society.
Chapter 23. OREN ETZIONI
If you look at a question like, “Would an elephant fit through a doorway?”, while most people can answer that question almost instantaneously, machines will struggle. What’s easy for one is hard for the other, and vice versa. That is what I call the AI paradox.