
Architects of Intelligence


by Martin Ford


  MARTIN FORD: It sounds to me like there was a big gap between where the physical machines were and where the algorithms were.

  DANIELA RUS: Exactly. At the time, I realized that a machine is really a close coupling between body and brain: for any task you want that machine to execute, you need a body capable of the task, and then you need a brain to control the body to deliver what it was meant to do.

  As a result, I became very interested in the interaction between body and brain, and challenging the notion of what a robot is. So industrial manipulators are excellent examples of robots, but they are not all that we could do with robots; there are so many other ways to envision robots.

  Today in my lab, we have all kinds of very non-traditional robots. There are modular cellular robots, soft robots, robots built out of food, and even robots built out of paper. We’re looking at new types of materials, new types of shapes, new types of architectures and different ways of imagining what the machine body ought to be. We also do a lot of work on the mathematical foundations of how those bodies operate, and I’m very interested in understanding and advancing the engineering of both the science of autonomy and of intelligence.

  I became very interested in the connection between the hardware of the device and the algorithms that control the hardware. When I think about algorithms, I think that while it’s very important to consider the solutions, it’s also important to consider the mathematical foundations for those solutions because that’s in some sense where we create the nuggets of knowledge that other people can build on.

  MARTIN FORD: You’re the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), which is one of the most important research endeavors in not just robotics, but in AI generally. Could you explain what exactly CSAIL is?

  DANIELA RUS: Our objective at CSAIL is to invent the future of computing, to use computing to make the world better, and to educate some of the best students in the world through research.

  CSAIL is an extraordinary organization. When I was a student, I looked up to it as the Mount Olympus of technology and never imagined that I’d become a part of it. I like to think of CSAIL as the prophet for the future of computing, and the place where people envision how computing can be used to make the world better.

  CSAIL actually has two parts, Computer Science (CS) and AI, both having a really deep history. The AI side of our organization goes back to 1956 when the field was invented and founded. In 1956, Marvin Minsky gathered his friends in New Hampshire where they spent a month, no doubt hiking in the woods, drinking wine and having great conversations, uninterrupted by social media, email, and smartphones.

  When they emerged from the woods, they told the world that they had founded a new field of study: artificial intelligence. AI refers to the science and engineering of creating machines that exhibit human-level skills in how they perceive the world; in how they move in the world; in how they play games; in how they reason; in how they communicate; and even in how they learn. Our researchers at CSAIL have been thinking about these questions and making groundbreaking contributions ever since, and it’s an extraordinary privilege to be part of this community.

  The computer science side goes back to 1963, when Bob Fano, a computer scientist and MIT professor, had the crazy idea that two people might use the same computer at the same time. You have to understand this was a big dream back then when computers were the size of rooms and you had to book time on them. Originally, it was set up as Project MAC, which stood for Machine-Aided Cognition, but there was a joke that it was actually named MAC after Minsky and Corby (Fernando “Corby” Corbató), who were the two technical leads for the CS and the AI side. Ever since the founding of the laboratory in 1963, our researchers have put a lot of effort into imagining what computing looks like and what it can accomplish.

  Many of the things that you take for granted today have their roots in the research developed at CSAIL, such as the password, RSA encryption, the computer time-sharing systems that inspired Unix, the optical mouse, object-oriented programming, speech systems, mobile robots with computer vision, the free software movement, the list goes on. More recently CSAIL has been a leader in defining the cloud and cloud computing, and in democratizing education through Massive Open Online Courses (MOOCs) and in thinking about security, privacy, and many other aspects of computing.

  MARTIN FORD: How big is CSAIL today?

  DANIELA RUS: CSAIL is the largest research laboratory at MIT, with over 1,000 members, and it cuts across 5 schools and 11 departments. CSAIL today has 115 faculty members, and each of these faculty members has a big dream about computing, which is such an important part of our ethos here. Some of our faculty members want to make computing better through algorithms, systems or networks, while others want to make life better for humanity with computing. For example, Shafi Goldwasser wants to make sure that we can have private conversations over the internet; and Tim Berners-Lee wants to create a bill of rights, a Magna Carta of the World Wide Web. We have researchers who want to make sure that if we get sick, the treatments that are available to us are personalized and customized to be as effective as they can be. We have researchers who want to advance what machines can do: Leslie Kaelbling wants to make Lieutenant-Commander Data, and Russ Tedrake wants to make robots that can fly. I want to make shape-shifting robots because I want to see a world with pervasive robots that support us in our cognitive and physical tasks.

  This aspiration is really inspired by looking back at history and observing that only 20 years ago, computation was a task reserved for the expert few, because computers were large, expensive, and difficult to handle, and it took real expertise to know what to do with them. All of that changed a decade ago, when smartphones, cloud computing, and social media came along.

  Today, so many people compute. You don’t have to be an expert to use computing, and you use it so much that you don’t even realize how much you depend on it. Try to imagine a day in your life without the World Wide Web and everything it enables: no social media, no communication through email, no GPS, no diagnostics in hospitals, no digital media, no digital music, no online shopping. It’s just incredible to see how computation has permeated the fabric of life. To me, this raises a very exciting and important question: in this world that has been so changed by computation, what might life look like with robots and cognitive assistants helping us with physical and cognitive tasks?

  MARTIN FORD: As a university-based organization, what’s the balance between what you would classify as pure research and things that are more commercial and that end up actually developing into products? Do you spin off startups or work with commercial companies?

  DANIELA RUS: We don’t house companies; instead, we focus on training our students and giving them various options for what they could do when they graduate, whether that be joining the academic life, going into high-tech industry, or becoming entrepreneurs. We fully support all of those paths. For example, say a student creates a new type of system after several years of research, and all of a sudden there is an immediate application for the system. This is the kind of technological entrepreneurship that we embrace, and hundreds of companies have been spun out of CSAIL research, but the actual companies do not get housed by CSAIL.

  We also don’t create products, but that’s not to say we ignore them. We’re very excited about how our work could be turned into products, but generally, our mission is really to focus on the future. We think about problems that are 5 to 10 years out, and that’s where most of our work is, but we also embrace the ideas that matter today.

  MARTIN FORD: Let’s talk about the future of robotics, which sounds like something you spend a great deal of your time thinking about. What’s coming down the line in terms of future innovations?

  DANIELA RUS: Our world has already been transformed by robotics. Today, doctors can connect with patients, and teachers can connect with students, who may be thousands of miles away. We have robots that help with packing on factory floors, networked sensors that we deploy to monitor facilities, and 3D printing that creates customized goods. When we consider adding even more extensive capabilities to our AI and robot systems, extraordinary things will be possible.

  At a high level, we have to picture a world where routine tasks are taken off our plate, because this is the sweet spot for where technology is today. These routine tasks could be physical, or they could be computational or cognitive.

  You already see some of that in the rise of machine learning applications across various industries, but I like to imagine a world where more of the mundane, routine tasks are taken off your plate. Picture garbage cans that take themselves out, with smart infrastructure to make the trash disappear, or robots that will fold your laundry. We will have transportation available in the same way that water or electricity is available, and you will be able to go anywhere at any time. We will have intelligent assistants who will enable us to make the most of our time at work and to optimize our lives so that we live better, more healthily, and more efficiently. It will be extraordinary.

  MARTIN FORD: What about self-driving cars? When will I be able to call a robot taxi in Manhattan and have it take me anywhere?

  DANIELA RUS: I’m going to qualify my answer and say that certain autonomous driving technologies are available right now. Today’s solutions are good for certain level 4 autonomy situations (the level just below full autonomy, as defined by the Society of Automotive Engineers). We already have robot cars that can deliver people and packages, operating at low speeds in low-complexity environments with little interaction. Manhattan is a challenging case because its traffic is super chaotic, but we do already have robot cars that could operate in retirement communities, on business campuses, or in general in places where there is not too much traffic. Nevertheless, those are still real-world places where you can expect other traffic, other people, and other vehicles.

  Next, we have to think about how we extend this capability to make it applicable to bigger and more complex environments where you’ll face more complex interactions at higher speeds. That technology is slowly coming, but there are still some serious challenges ahead. For instance, the sensors that we use in autonomous driving today are not very reliable in bad weather. We’ve still got a long way to go to reach level 5 autonomy, where the car is fully autonomous in all weather conditions. These systems also have to be able to handle the kind of congestion that you find in New York City, and we have to become much better at integrating robot cars with human-driven cars. This is why thinking about mixed human/machine environments is very exciting and very important. Every year we see gradual improvements in the technology, but getting to a complete solution, if I were to estimate, could take another decade.

  There are, though, specific applications where we will see autonomy used commercially sooner than others. I believe that a retirement community could use autonomous shuttles today, and that long-distance driving with autonomous trucks is coming soon. Highway trucking is a little simpler than driving in New York, but harder than driving in a retirement community, because you have to drive at high speed, and there are a lot of corner cases and situations where a human driver might have to step in. Let’s say it’s raining torrentially and you are on a treacherous mountain pass in the Rockies. To face that, you need a collaboration between a really great sensor and control system and the human’s reasoning and control capabilities. With autonomous driving on highways, we will see patches of autonomy interleaved with human assistance, or vice versa, and that will come sooner than 10 years for sure, maybe 5.

  MARTIN FORD: So in the next decade a lot of these problems will be solved, but not all of them. Maybe the service will be confined to specified routes or areas that are really well mapped?

  DANIELA RUS: Well, not necessarily. Progress is happening. In our group, we just released a paper demonstrating one of the first systems capable of driving on country roads. So, on the one hand, the challenges are daunting, but on the other hand, 10 years is a long time. 20 years ago, Mark Weiser, then the chief scientist at Xerox PARC, talked about pervasive computing and was seen as a dreamer. Today, we have solutions for all of the situations he envisioned in which computing would be used to support us.

  I want to be a technology optimist. I want to say that I see technology as something that has the huge potential to unite people rather than divide people, and to empower people rather than estrange people. In order to get there, though, we have to advance science and engineering to make technology more capable and more deployable.

  We also have to embrace programs that enable broad education and allow people to become familiar with technology to the point where they can take advantage of it and where anyone could dream about how their lives could be better by the use of technology. That’s something that’s not possible with AI and robotics today because the solutions require expertise that most people don’t have. We need to revisit how we educate people to ensure that everyone has the tools and the skills to take advantage of technology. The other thing that we can do is to continue to develop the technology side so that machines begin to adapt to people, rather than the other way around.

  MARTIN FORD: In terms of ubiquitous personal robots that can actually do useful things, it seems to me that the limiting factor is really dexterity. The cliché is being able to ask a robot to go to the refrigerator and get you a beer. That’s a real challenge in terms of the technology that we have today.

  DANIELA RUS: Yes, I think you’re right. We currently see significantly greater success in navigation than in manipulation, and these are the two major types of capabilities for robots. The advances in navigation were enabled by hardware advances. When the LIDAR sensor—the laser scanner—was introduced, all of a sudden the algorithms that didn’t work with sonar started working, and that was transformational. We finally had a reliable sensor that control algorithms could use in a robust way. As a result, mapping, planning, and localization took off, and that fueled the great enthusiasm in autonomous driving.

  Coming back to dexterity, on the hardware side, most of our robot hands still look like they did 50 years ago: very rigid industrial manipulators with a two-pronged pincer. We need something different. I personally believe that we are getting closer, because we are beginning to reimagine what a robot is. In particular, we have been working on soft robots and soft robot hands. We’ve shown that with soft robot hands—the kind that we can design and build in my lab—we are able to pick up and handle objects much more reliably and much more intuitively than is possible with traditional two-finger grasps.

  It works as follows: if you have a traditional robot hand where the fingers are all made out of metal, then they are capable of what is technically called “hard finger contact”—you put your finger on the object you’re trying to grasp at a single point, and that is the point through which you can exert forces and torques. If you have that kind of setup, then you really need to know the precise geometry of the object that you’re trying to pick up. You then need to calculate very precisely where to put your fingers on the surface of the object so that all the forces and torques balance out and can resist external forces and torques. In the technical literature, this is called the “force closure and form closure” problem. It requires very heavy computation, very precise execution, and very accurate knowledge of the object that you’re trying to grasp.
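To make the force-closure computation Rus describes concrete, here is a minimal editorial sketch (not from the interview; the function names and the example grasp geometry are hypothetical). It covers the simplest planar, frictionless case with exactly four point contacts: the grasp is in force closure when the four contact wrenches (force x, force y, torque) positively span that three-dimensional space, which for four vectors can be checked with signed 3x3 minors.

```python
# Hypothetical sketch: frictionless planar force closure with four
# point contacts. A grasp is in force closure when the contact
# wrenches positively span R^3 = (force_x, force_y, torque).

def det3(a, b, c):
    # determinant of the 3x3 matrix with columns a, b, c
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def contact_wrench(p, n):
    # planar contact at point p with inward unit normal n produces the
    # wrench (n_x, n_y, p x n), torque taken about the object origin
    return (n[0], n[1], p[0] * n[1] - p[1] * n[0])

def force_closure_4(contacts, tol=1e-9):
    """Check force closure for exactly four frictionless point contacts.

    Four wrenches positively span R^3 iff they linearly span R^3 and
    the origin is a strictly positive combination of them; for four
    vectors in R^3 the combination coefficients are signed 3x3 minors
    (a null-space vector of the 3x4 wrench matrix, via Cramer's rule).
    """
    w = [contact_wrench(p, n) for p, n in contacts]
    k = [det3(w[1], w[2], w[3]),
         -det3(w[0], w[2], w[3]),
         det3(w[0], w[1], w[3]),
         -det3(w[0], w[1], w[2])]
    if all(abs(ki) < tol for ki in k):
        return False  # wrenches do not span R^3 (rank deficient)
    # a strictly positive combination exists iff all coefficients
    # are nonzero and share the same sign
    return all(ki > tol for ki in k) or all(ki < -tol for ki in k)

# Contacts on a unit square, offset from the side midpoints so the
# grasp can also resist rotation: force closure holds.
good_grasp = [((1.0, 0.5), (-1.0, 0.0)), ((-1.0, -0.5), (1.0, 0.0)),
              ((0.5, 1.0), (0.0, -1.0)), ((-0.5, -1.0), (0.0, 1.0))]

# Contacts exactly at the midpoints: every wrench has zero torque,
# so the object can still spin freely and force closure fails.
bad_grasp = [((1.0, 0.0), (-1.0, 0.0)), ((-1.0, 0.0), (1.0, 0.0)),
             ((0.0, 1.0), (0.0, -1.0)), ((0.0, -1.0), (0.0, 1.0))]
```

Even in this stripped-down setting, the check needs the exact contact points and normals on the object surface, which illustrates Rus’s point: rigid-finger grasping demands precise geometric knowledge that soft fingers simply do not require.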

  That’s not something that humans do when they grasp an object. As an experiment, try to grasp a cup with just your fingernails: it is a surprisingly difficult task, even though you have perfect knowledge of the object and where it is located. With soft fingers, you actually don’t need to know the exact geometry of the object you’re trying to grasp, because the fingers comply with whatever the object surface is. Contact along a wider surface area means that you don’t have to think precisely about where to place the fingers in order to reliably envelop and lift the object.

  That translates into much more capable robots and much simpler algorithms. As a result, I’m very bullish about the future progress in grasping and manipulation. I think that soft hands, and in general, soft robots are going to be a very critical aspect of advancement in dexterity, just like the laser scanner was a critical aspect of advancing the navigation capabilities of robots.

  That goes back to my observation that machines are made up of bodies and brains. If you change the body of the machine and make it more capable, then you will be able to use different types of algorithms to control that robot. I’m very excited about soft robotics, and about its potential to impact an area of robotics that has been stagnant for many years. Some progress has been made in grasping and manipulation, but we still do not have capabilities that compare with those of natural systems, such as people or animals.

 
