MARTIN FORD: What do you think about the potential impact on the job market? Are we on the leading edge of a new Industrial Revolution? Is there potential for a massive impact on employment or on the economy?
CYNTHIA BREAZEAL: AI is a powerful tool that can accelerate technology-driven change. It’s something that right now very few people know how to design, and very few entities have the expertise and resources to deploy. We’re living in a time of a growing socioeconomic divide, and one of my biggest concerns is whether AI is going to be applied to close that divide or to exacerbate it. If only a few people know how to develop it, design with it, and apply it to the problems they care about, you’ve got a hell of a lot of people in the world who aren’t going to be able to really benefit from it.
One solution to democratizing the benefits of AI for everyone is education. Right now, I have put significant effort into trying to address things like K-12 AI education. Today’s children are growing up with AI; they’re no longer just digital natives, they are AI natives. They’re growing up in a time when they will always have been able to interact with intelligent machines, so it’s imperative that these not be black-box systems to them. Today’s children need to be educated about these technologies and to be able to create things with them, and in doing that, grow up with an attitude of empowerment so that they can apply these technologies to solve problems that matter to them and their communities on a global scale. In an increasingly AI-powered society, we need an AI-literate society. This is something that has to happen, and from the industry standpoint, there’s already a shortage of highly qualified people with this level of expertise; you can’t hire these people fast enough. People’s fears about AI can be manipulated because they don’t understand it.
Even from that standpoint, I think there’s a lot of stakeholder interest from current organizations in opening the tent and being much more inclusive of a far broader diversity of people who can develop that expertise and understanding. Just as you can have early math and early literacy, I think you can have early AI. It’s about working out the right level of curriculum, the sophistication of the concepts, and the hands-on activities and communities, so that students can grow up with increasing levels of sophistication in understanding AI and making things with it. They shouldn’t have to wait until university to get access to this material. We need a much broader diversity of people able to understand and apply these technologies to problems that matter to them.
MARTIN FORD: You seem to be focusing on people headed toward professional or technical careers, but most people are not college graduates. There could be a huge impact on jobs like driving a truck or working in a fast food restaurant, for example. Do we need policies to address that?
CYNTHIA BREAZEAL: I think clearly there’s going to be disruption, and right now the big one people talk about is autonomous vehicles. The problem is that the people whose jobs either change or are displaced need to be retrained so that they can continue to be competitive in the workforce.
AI can also be applied to retrain people in an affordable, scalable way to keep our workforce vibrant, and AI education can be developed for vocational programs. For me, one of the big application areas we should be focusing on is AI-powered, personalized education. A lot of people can’t afford a personal tutor or to go to an institution to get educated. If you could leverage AI to make access to those skills, knowledge, and capabilities much more scalable and affordable, then you’re going to have far more people who are agile and resilient over their lifetimes. To me, that argues that we need to double down and really think about the role of AI in empowering people and helping our citizens be resilient and adaptive to the reality of jobs that continue to change.
MARTIN FORD: How do you feel about regulation of the AI field? Is that something you would support going forward?
CYNTHIA BREAZEAL: In my particular research field, it’s still pretty early; we need to understand it more before we can come up with any policies or regulations that would be sensible for social robots. I do feel that the dialogs happening right now around AI are absolutely important ones to have, because we’re starting to see some major unintended consequences. We need a serious ongoing dialog to figure these things out, getting down to privacy, security, and all of these issues, which are critically important.
For me, it really just gets down to the specifics. I think we’re going to start with a few high-impact areas, and then maybe from that experience we will be able to think more broadly about what the right thing to do is. You’re obviously trying to balance the ability to ensure human values and civil rights are supported with these technologies, as well as wanting to support innovation to open up opportunities. It’s always that balancing act, and so, to me, it gets down to the specifics of how you walk that line so that you achieve both of those goals.
CYNTHIA BREAZEAL is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology, where she founded and directs the Personal Robots Group at the Media Lab. She is also the founder of Jibo, Inc. She is a pioneer of social robotics and human-robot interaction. She authored the book Designing Sociable Robots, and she has published over 200 peer-reviewed articles in journals and conferences on social robotics, human-robot interaction, autonomous robotics, artificial intelligence, and robot learning. She serves on several editorial boards in the areas of autonomous robots, affective computing, entertainment technology, and multi-agent systems. She is also an Overseer at the Museum of Science, Boston.
Her research focuses on developing the principles, techniques, and technologies for personal robots that are socially intelligent, interact and communicate with people in human-centric terms, work with humans as peers, and learn from people as an apprentice would. She has developed some of the world’s most famous robotic creatures, ranging from small hexapod robots, to robotic technologies embedded in familiar everyday artifacts, to highly expressive humanoid robots and robot characters.
Cynthia is recognized as a prominent global innovator, designer, and entrepreneur. She is a recipient of the National Academy of Engineering’s Gilbreth Lecture Award and an ONR Young Investigator Award. She has received Technology Review’s TR100/35 Award and was featured in TIME magazine’s Best Inventions of 2008 and 2017. She has received numerous design awards, including being named a finalist in the National Design Awards in Communication. In 2014, Fortune Magazine named her one of its Most Promising Women Entrepreneurs, and she also received the L’Oréal USA Women in Digital NEXT Generation Award. The same year, she received the George R. Stibitz Computer and Communications Pioneer Award for seminal contributions to the development of social robotics and human-robot interaction.
Chapter 22. JOSHUA TENENBAUM
If we could just get something at the level of the mind of a one-and-a-half-year-old into the robotic hardware that we already have, that would be incredibly useful as a technology.
PROFESSOR OF COMPUTATIONAL COGNITIVE SCIENCE, MIT
Josh Tenenbaum is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. He studies learning and reasoning in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing artificial intelligence closer to human-level capacities. He describes his research as an attempt to “reverse engineer the human mind” and to answer the question “How do humans learn so much from so little?”
MARTIN FORD: Let’s begin by talking about AGI or human-level AI. Do you consider that to be feasible and something that we will ultimately achieve?
JOSH TENENBAUM: Let’s be concrete about what we mean by that. Do you mean something like an android robot, similar to C-3PO or Commander Data?
MARTIN FORD: Not necessarily in terms of being able to walk around and manipulate things physically, but an intelligence that can clearly pass a Turing test with no time limit. Something you could have a wide-ranging conversation with for hours, so that you’d be convinced that it’s genuinely intelligent.
JOSH TENENBAUM: Yes, I think it’s completely possible. Whether or when we will build it is hard to know, because that all depends on choices that we make as individuals in society. It’s definitely possible, though—our brains and our existence prove that you can have machines that do this.
MARTIN FORD: What does progress toward AGI look like? What are the most important hurdles that you think we would need to overcome to reach that point?
JOSH TENENBAUM: One question is whether it’s possible, but the other question is what version of it is most interesting or desirable? That has a lot to do with what is likely to happen sooner, because we can decide which versions of AGI are interesting and desirable and we can pursue those. I’m not actively working on machines that will do what you’re saying—that will just be a disembodied language system that you can talk to for hours. I think it’s exactly right to say that the system must reach the heights of human intelligence to have that kind of conversation. What we mean by intelligence is inextricably linked to our linguistic ability—our ability to communicate and to express our thoughts to others, and to ourselves, using the tools of language.
Language is absolutely at the heart of human intelligence, but I think we have to start with the earlier stages of intelligence that are there before language and that language builds on. If I were to sketch out a high-level roadmap to building some form of AGI of the sort you’re talking about, I would say you could roughly divide it into three stages corresponding to three rough stages of human cognitive development.
The first stage is basically the first year and a half of a child’s life, which is about building all the intelligence we have prior to really being linguistic creatures. The main achievement is developing a common-sense understanding of the physical world and of other people’s actions: what we call intuitive physics and intuitive psychology, including goals, plans, tools, and the concepts around those. The second stage, from about one and a half to three, uses that foundation to build language: to really understand how phrases work and to be able to construct sentences. Then there’s the third stage, from the age of three and up, which is, now that you’ve built language, using language to build and learn everything else.
So, when you talk about an AGI system that can pass a Turing test, and that you could have conversations with for hours, I would agree that reflects in some sense the height of human intelligence. However, my view is that it’s most interesting and valuable to get there by going through these other stages, both because that’s how we’re going to understand the construction of human intelligence, and because, if we’re using human intelligence and its development as a guide and an inspiration for AI, that’s the way to do it.
MARTIN FORD: Very often, we think about AGI in binary terms: either we’ve got true human-level intelligence, or else it’s just narrow AI of the type that we have now. I think that what you are saying is that there might be a big middle ground there, is that right?
JOSH TENENBAUM: Yes. For example, in talks I often show videos of 18-month-old humans doing remarkably intelligent things, and it’s very clear to me that if we could build a robot that had the intelligence of a one-and-a-half-year-old, I would call that a kind of AGI. It’s not adult-human level, but one-and-a-half-year-olds have a flexible, general-purpose understanding of the world that they live in, which is not the same world that adults live in.
You and I live in a world that extends backward in time thousands of years to the earliest recorded human history, and we can imagine hundreds of years forward into the future. We live in a world that includes many different cultures that we understand because we’ve heard about them and we’ve read about them. The typical one-and-a-half-year-old doesn’t live in that world, because we only have access to that world through language. And yet, in the world that they live in, in the world of their immediate spatial and temporal environment, they do have a flexible, general-purpose, common-sense intelligence. That, to me, is the first thing to understand, and if we could build a robot that had that level of intelligence, it would be amazing.
If you look at today’s robots, robotics on the hardware side is making great progress, and basic control algorithms allow robots to walk around. You only have to think about Boston Dynamics, which was founded by Marc Raibert. Have you heard about them?
MARTIN FORD: Yeah. I’ve seen the videos of their robots walking and opening doors and so forth.
JOSH TENENBAUM: That stuff is real, and it’s biologically inspired. Marc Raibert always wanted to understand legged locomotion in animals as well as in humans, and he was part of a field that built engineering models of how biological systems walked. He understood that the best way to test those models was to build real robots and see whether they captured how biological legged locomotion works. He realized that to do that, he needed the resources of a company to actually build those machines. So, that’s what led to Boston Dynamics.
At this point, whether it’s Boston Dynamics or other robots, such as Rodney Brooks’ work with the Baxter robot, we’ve seen these robots do impressive things with their bodies, like pick up objects and open doors, yet their minds and brains hardly exist at all. The Boston Dynamics robots are mostly steered by a human with a joystick, and the human mind is setting their high-level goals and plans. If we could just get something at the level of the mind of a one-and-a-half-year-old into the robotic hardware that we already have, that would be incredibly useful as a technology.
MARTIN FORD: Who would you point to as being at the absolute forefront of progress toward AGI now? Is DeepMind the primary candidate, or are there other initiatives out there that you think are demonstrating remarkable progress?
JOSH TENENBAUM: Well, I think we’re at the forefront, but everybody does what they do because they think it’s the right approach. That being said, I have a lot of respect for what DeepMind is doing. They certainly do a lot of cool things and get a lot of well-deserved attention for what they’re doing, motivated by trying to build AGI. But I do have a different view than they do about the right way to approach more human-like AI.
DeepMind is a big company and represents a diversity of opinion, but in general, their center of gravity is on building systems that try to learn everything from scratch, which is just not the way humans work. Humans, like other animals, are born with a lot of structure in our brains, just as in our bodies, and my approach is to be more inspired by human cognitive development in that way.
There are some people within DeepMind who think similarly, but the focus of what the company has been doing, and really the ethos of deep learning, is that we should learn as much as we can from scratch, and that’s the basis for building the most robust AI systems. That’s something that I just think is not true. I think that’s a story that people tell themselves, and I think it’s not the way biology works.
MARTIN FORD: It seems clear that you believe there’s a lot of synergy between AI and neuroscience. How did your interest in the two fields evolve?
JOSH TENENBAUM: Both of my parents were deeply interested in things related to intelligence and AI. My father, Jay Tenenbaum, often known as Marty, was an early AI researcher. He was an MIT undergraduate and earned one of Stanford’s first PhDs in AI after John McCarthy went there to set up the AI lab. He was an early leader in computer vision, one of the founders of AAAI, the professional organization for AI in America, and he also ran an early industry AI lab. Essentially, as a child I lived through the previous big wave of excitement in AI in the late 1970s and 1980s, which meant I got to go to AI conferences as a kid.
We grew up in the Bay Area, and one time my father took us to Southern California because there was an Apple AI conference taking place; this was in the Apple II era. I remember that Apple had bought out Disneyland for the evening for all of the attendees of the big AI conference. So, we flew down for the day just to be able to go on Pirates of the Caribbean 13 times in a row, which, looking back, tells you something about just how big AI was even then.
It’s hyped now, but it was the same back then. There were startups, there were big companies, and AI was going to change the world. Of course, that period didn’t lead to the kinds of successes that were promised in the short term. My dad was also for a while the director of the Schlumberger Palo Alto Research Lab, a major industry AI lab. I hung out around there as a kid, and through that I got to meet many great AI leaders. At the same time, my mother, Bonnie Tenenbaum, was a teacher who earned a PhD in education. She was very interested in children’s learning and intelligence from that perspective, and she would expose me to various puzzles and brainteasers, things that were not too different from some of the problems we work on now in the AI field.
I was always interested in thinking and intelligence while I was growing up, and so when I was looking at college, I thought I would major in philosophy or physics. I wound up as a physics major, but I never thought of myself as a physicist. I took psychology and philosophy classes, and I was interested in neural networks, which were at the peak of their first wave in 1989 when I was at college. Back then, it seemed that if you wanted to study the brain or the mind, you had to learn how to apply math to the world, which is what people advertise physics as being about, so physics seemed like a generally good thing to do.