Architects of Intelligence

by Martin Ford


  RODNEY BROOKS: Security is the big one. I worry about the security of these digital chains and the privacy that we have all given up willingly in return for a certain ease of use. We’ve already seen the weaponization of social platforms. Rather than worry about a self-aware AI doing something willful or bad, it’s much more likely that we’re going to see bad stuff happen from human actors figuring out how to exploit the weaknesses in these digital chains, whether they be nation states, criminal enterprises, or even lone hackers in their bedrooms.

  MARTIN FORD: What about the literal weaponization of robots and drones? Stuart Russell, one of the interviewees in this book, made a quite terrifying film called Slaughterbots about those concerns.

  RODNEY BROOKS: I think that kind of thing is very possible today because it doesn’t rely on AI. Slaughterbots was a knee-jerk reaction saying that robots and war are a bad combination. There’s another reaction that I have. It always seemed to me that a robot could afford to shoot second. A 19-year-old kid just out of high school in a foreign country in the dark of night with guns going off around them can’t afford to shoot second.

There’s an argument that keeping AI out of the military will make the problem go away. I think you instead need to think about what it is you don’t want to happen and legislate against that, rather than against the particular technology that is used. A lot of these things could be built without AI.

As an example, when we go to the Moon next, it will rely heavily on AI and machine learning, but in the ‘60s we got there and back without either of those. It’s the action itself that we need to think about, not which particular technology is being used to perform that action. Legislating against a technology is naive: it fails to take into account the good things you can do with it, like having the system shoot second, not shoot first.

  MARTIN FORD: What about the AGI control problem and Elon Musk’s comments about summoning the demon? Is that something that we should be having conversations about at this point?

RODNEY BROOKS: In 1783, when the people of Paris saw hot-air balloons for the first time, they worried that those people’s souls would get sucked out up at altitude. That’s the same level of understanding that’s going on here with AGI. We don’t have a clue what it would look like.

I wrote an essay on The Seven Deadly Sins of Predicting the Future of AI (https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/), and they are all wrapped up in this stuff. It’s not going to be a case of having exactly the same world as today, but with an AI superintelligence dropped into the middle of it. It’s going to come very gradually over time. We have no clue at all what the world or that AI system will be like. Predicting an AI future is just a power game for isolated academics who live in a bubble away from the real world. That’s not to say that these technologies aren’t coming, but we won’t know what they will look like before they arrive.

  MARTIN FORD: When these technology breakthroughs do arrive, do you think there’s a place for regulation of them?

RODNEY BROOKS: As I said earlier, the place where regulation is required is on what these systems are and are not allowed to do, not on the technologies that underlie them. Should we stop research today on optical computers because they let you perform matrix multiplication much faster, and so would let you apply deep learning at much greater scale? No, that’s crazy. Are self-driving delivery trucks allowed to double-park in congested areas of San Francisco? That seems to be a good thing to regulate, not what the technology is.

MARTIN FORD: Taking all of this into account, I assume that you’re an optimist overall? You continue to work on this, so you must believe that the benefits of all this are going to outweigh any risks.

RODNEY BROOKS: Yes, absolutely. We have overpopulated the world, so we have to go this way to survive. I’m very worried about the standard of living dropping because there won’t be enough labor as I get older. I’m worried about security and privacy, to name two more. All of these are real and present dangers, and we can see the contours of what they look like.

  The Hollywood idea of AGIs taking over is way in the future, and we have no clue even how to think about that. We should be worried about the real dangers and the real risks that we are facing right now.

RODNEY BROOKS is a robotics entrepreneur who holds a PhD in Computer Science from Stanford University. He’s currently the Chairman and CTO of Rethink Robotics. For a decade, from 1997 to 2007, Rodney was the Director of the MIT Artificial Intelligence Laboratory and then of its successor, the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

He’s a fellow of several organizations, including the Association for the Advancement of Artificial Intelligence (AAAI), where he is a founding fellow. Over his career he has won a number of awards for his work in the field, including the Computers and Thought Award, the IEEE Inaba Technical Award for Innovation Leading to Production, the Robotics Industry Association’s Engelberger Robotics Award for Leadership, and the IEEE Robotics and Automation Award.

Rodney even starred as himself in the 1997 Errol Morris movie Fast, Cheap & Out of Control, which is named after one of his papers and currently holds a 91% Rotten Tomatoes score.

  Chapter 21. CYNTHIA BREAZEAL

I am not nearly as concerned about superintelligence enslaving humanity as I am about people using the technology to do harm.

DIRECTOR OF THE PERSONAL ROBOTS GROUP, MIT MEDIA LABORATORY; FOUNDER, JIBO, INC.

Cynthia Breazeal is the Director of the Personal Robots Group at the MIT Media Lab, as well as the founder of Jibo, Inc. She is a pioneer of social robotics and human-robot interaction. In 2000 she designed Kismet, the world’s first social robot, as part of her doctoral research at MIT. Jibo was featured on the cover of TIME magazine, which named it one of the Best Inventions of 2017. At the Media Lab, she has developed a variety of technologies focused on human-machine social interaction, including new algorithms, studies of the psychology of human-robot interaction, and new social robot designs for applications in early childhood learning, home AI and personal robots, aging, healthcare and wellness, and more.

  MARTIN FORD: Do you have a sense of when personal robots will become a true mass consumer product, so that we’ll all want one in the same way we have a television set or a smartphone?

CYNTHIA BREAZEAL: Yes, I actually think we’re already starting to see it. Back in 2014, when I was raising funds for my startup Jibo, a social robot for the home, everybody thought that our competitor was the smartphone, and that the technology people were going to use to interact with and control everything in the home was going to be a touchscreen. That Christmas, Amazon announced Alexa, and now we know that these VUI (Voice User Interface) assistants are actually the machines that people will use in their homes. It’s opened up the whole opportunity space, because you can see that people are willing to use voice devices because it’s easy and convenient.

Back in 2014, most people interacting with AI at a consumer level were those with Siri or Google Assistant on their phones. Now, only four years later, you’ve got everyone from young children to 98-year-olds talking to their voice-enabled AI smart devices. The type of people who are interacting with AI is fundamentally different now than it was even back in 2014. So, are the current talking speakers and devices going to be where it ends? Of course not. We’re in the primordial age of this new way of interacting with ambient AIs that coexist with us. A lot of the data and evidence that we have gathered, even through Jibo, shows very clearly that this deeper collaborative, social-emotional, personalized, proactive engagement supports the human experience in a much deeper way.

We’re starting with these transactional VUI AIs that get the weather or the news, but you can see how that’s going to grow and change into critical domains of real value for families, like extending education from the school to the home, scaling affordable healthcare from healthcare institutions to the home, allowing people to age in place, and so on. When you’re talking about those huge societal challenges, it’s about a new kind of intelligent machine that can collaboratively engage you in an extended, longitudinal relationship and personalize, grow, and change with you. That’s what a social robot is about, and that’s clearly where this is all going to go; right now I think we’re at the beginning.

  MARTIN FORD: There are real risks and concerns associated with this kind of technology, though. People worry about the developmental impact on children if they’re interacting with Alexa too much, or take a dystopian view of robots being used as companions for elderly people. How do you address those concerns?

CYNTHIA BREAZEAL: Let’s just say there’s the science that needs to be done, and there’s the fact of what these machines do now. Together, those present a design opportunity and challenge: to create these technologies in a way that is ethical and beneficial and supports our human values. Those machines don’t really exist yet. So yes, you can have dystopian conversations about what may happen 20 to 50 years from now, but the problem to be solved at this moment is this: we have these societal challenges, and we have a range of technologies that have to be designed in the context of human support systems. The technologies alone are not the solution; they have to support our human support systems, and they have to make sense in the lives of everyday people. The work to be done is to understand how to do that in the right way.

  So yes, of course there will always be critics and people wringing their hands and thinking, “oh my god, what could happen,” and you need that dialog. You need those people being able to throw up the flares to say watch out for this, watch out for that. In a way, we’re living in a society where the alternative is unaffordable; you can’t afford the help. These technologies have the opportunity for scalable, affordable, effective, personalized support and services. That’s the opportunity, and people do need help. Going without help is not a solution, so we’ve got to figure out how to do it.

  There needs to be a real dialog and a real collaboration with the people who are trying to create solutions that are going to make a difference in people’s lives—you can’t just critique it. At the end of the day, everybody ultimately wants the same thing; people building the systems don’t want a dystopian future.

  MARTIN FORD: Can you talk a bit more about Jibo and your vision for where you see that going? Do you anticipate that Jibo will eventually evolve into a robot that runs around the house doing useful things, or is it intended to be focused more on the social side?

CYNTHIA BREAZEAL: I think there’s going to be a whole bunch of different kinds of robots, and Jibo is the first of its kind that’s out there and is leading the way. We’re going to see other companies with other types of robots. Jibo is meant to be a platform with extensible skills, but other robots may be more specialized. There’ll be those kinds of robots, but there are also going to be physical assistance robots. A great example is the Toyota Research Institute, which is looking at mobile, dexterous robots to provide physical support for elderly people, while completely acknowledging that those robots also need social and emotional skills.

  In terms of what comes into people’s homes, it’s going to depend on what the value proposition is. If you’re a person aging in place, you’re probably going to want a different robot than parents of a child who want that child to learn a second language. In the end, it’s all going to be based on what the value proposition is and what role that robot has in your home, including all the other factors like the price point. This is an area that’s going to continue to grow and expand, and these systems are going to be in homes, in schools, in hospitals, and in institutions.

  MARTIN FORD: How did you become interested in robotics?

CYNTHIA BREAZEAL: I grew up in Livermore, California, which has two National Labs. Both my parents worked at those labs as computer scientists, so I was really brought up in a home where engineering and computer science were seen as a great career path with a lot of opportunities. I also had toys like Lego, because my parents valued those kinds of constructive media.

When I was growing up, there wasn’t nearly as much around for kids to do with computers as there is now, but I could go into the National Labs, where they would have various activities for kids to do—I remember the punch cards! Because of my parents, I was able to get into computers at a much earlier age than a lot of my peers and, not surprisingly, my parents were some of the first people to bring home personal computers.

The first Star Wars movie came out when I was around 10 years old, and that was the first epiphany moment that set me on my particular career trajectory. I remember just being fascinated by the robots. It was the first time I had seen robots presented as full-fledged, collaborative characters, not just drones or automatons, but mechanical beings who had emotions and relationships with each other and with people. It really wasn’t just about the amazing things they could do; it was also about the interpersonal connection they formed with those around them that struck an emotional chord. Because of that film, I grew up with the attitude that robots could be like that, and I think that’s shaped a lot of what my research has been about.

  MARTIN FORD: Rodney Brooks, who is also interviewed in this book, was your doctoral adviser at MIT. How did that influence your career path?

CYNTHIA BREAZEAL: At the time, I had decided that what I really wanted to be when I grew up was an astronaut mission specialist, so I knew I needed to get a PhD in a relevant field, and I decided that mine was going to be space robotics. I applied to a bunch of graduate schools, and one of the schools I was admitted to was MIT. I went to a visit week at MIT, and I remember my first experience in Rodney Brooks’ mobile robot lab.

I remember walking into his lab and seeing all these insect-inspired robots, completely autonomous, going around doing a variety of different tasks depending on what the graduate students were working on. For me, that was the Star Wars moment all over again. I remember thinking that if there were ever going to be robots like the ones I saw in Star Wars, it was going to happen in a lab like that. That’s where it was going to begin, quite possibly in that very lab, and I decided I had to be there. That’s really what clinched the deal for me.

So, I went to MIT for graduate school, where Rodney Brooks was my academic adviser. Back then, Rod’s approach to intelligence was always very biologically inspired, which was not typical for the overall field. During the course of my graduate degree, I started reading a lot of literature on intelligence, not just on AI and computational methods, but on natural forms of intelligence and models of intelligence. The deep interplay between psychology, what we can learn from ethology and other forms of intelligence, and machine intelligence has always been a thread and a theme of my work.

At that time, Rodney Brooks was working on small, legged robots, and he wrote a paper, Fast, Cheap and Out of Control: A Robot Invasion of the Solar System, in which, instead of sending up one or two very large, very expensive rovers, he advocated sending many, many small autonomous rovers; if you did that, then you could actually explore Mars and other celestial bodies much more easily. That was a very influential paper, and my master’s thesis was actually developing the first primordial planetary micro-rover-inspired robots. I had the opportunity as a graduate student to work with JPL (the Jet Propulsion Laboratory), and I like to think that some of that research contributed to Sojourner and Pathfinder.

Years later, I was finishing up my master’s thesis and about to embark on my doctoral work when Rod went on sabbatical. When he came back, he pronounced that we were going to do humanoids. This came as a shock, because we all thought we would go from insects to reptiles, and maybe then to mammals. We thought we were going to develop up the evolutionary chain of intelligence, so to speak, but Rod insisted it had to be humanoids. That was because when he was in Asia, particularly Japan, he had seen that they were already developing humanoids. I was one of the senior graduate students at that time, so I stepped up to lead the effort on developing these humanoid robots to explore theories of embodied cognition. The hypothesis was that the nature of a machine’s physical embodiment strongly constrains and influences the nature of the intelligence it can have or learn to develop.

The next step occurred literally on the day in July 1997 when NASA’s Mars Pathfinder mission landed and deployed the Sojourner rover. On that day, I was working on my doctorate on a very different topic, and I remember thinking at that moment: here we are in this field where we’re sending robots to explore the oceans and volcanoes, because the value proposition of autonomy is that machines can do tasks that are far too dull, dirty, and dangerous for people. The rover was really about autonomy allowing work to be done in hazardous environments, apart from people, and that’s why you needed it. We could land a robot on Mars, but robots weren’t in our homes.

It was from that moment that I started thinking quite a lot about how we in academia were developing these amazing autonomous robots for experts, but nobody was really embracing the scientific challenge of designing and researching the nature of the intelligent robots you need in order to have them coexist with people in society, from children to seniors and everyone in between. It’s like how computers used to be huge, very expensive devices that experts used, and then there was a shift to thinking about a computer on every desk in every home. This was that moment in autonomous robotics.

We already recognized that when people interacted with or talked about autonomous robots, they would anthropomorphize them. They would engage their social thinking mechanisms to try to make sense of them, so the hypothesis was that the social, interpersonal interface would be the universal interface. Up to that time, the focus on the nature of machine intelligence was more about how you engage and manipulate the physical, inanimate world. This was a complete shift to thinking about building a robot that can actually collaborate, communicate, and interact with people in a way that’s natural for people. That’s a very different kind of intelligence. If you look at human intelligence, we have all these different kinds of intelligences, and social and emotional intelligence is profoundly important; of course, it underlies how we collaborate, how we live in social groups, and how we coexist, empathize, and harmonize. At the time, no one was really working on that.

 
