David graduated from Manhattan College with a BS degree in biology, and from Rensselaer Polytechnic Institute with a PhD degree in computer science, specializing in knowledge representation and reasoning. He has over 50 patents and has published papers in the areas of AI, automated reasoning, NLP, intelligent systems architectures, automatic story generation, and automatic question-answering.
David was awarded the title of IBM Fellow (fewer than 100 of 450,000 hold this technical distinction) and has won many awards for his work creating UIMA and Watson, including the Chicago Mercantile Exchange’s Innovation Award and the AAAI Feigenbaum Prize.
Chapter 20. RODNEY BROOKS
We don’t have anything anywhere near as good as an insect, so I’m not afraid of superintelligence showing up anytime soon.
CHAIRMAN, RETHINK ROBOTICS
Rodney Brooks is widely recognized as one of the world’s foremost roboticists. Rodney co-founded iRobot Corporation, an industry leader in both consumer robotics (primarily the Roomba vacuum cleaner) and military robots, such as those used to defuse bombs in the Iraq war (iRobot divested its military robotics division in 2016). In 2008, Rodney co-founded a new company, Rethink Robotics, focused on building flexible, collaborative manufacturing robots that can safely work alongside human workers.
MARTIN FORD: While at MIT, you started the iRobot company, which is now one of the world’s biggest distributors of commercial robots. How did that come about?
RODNEY BROOKS: I started iRobot back in 1990 with Colin Angle and Helen Greiner. At iRobot we had a run of 14 failed business models and didn’t get a successful one until 2002, at which point we hit on two business models that worked in the same year. The first one was robots for the military. They were deployed in Afghanistan to go into caves to see what was in them. Then, during the Afghanistan and Iraq conflicts, around 6,500 of them were used to deal with roadside bombs.
At the same time in 2002, we launched the Roomba, which was a vacuum cleaning robot. In 2017, the company recorded full-year revenue of $884 million and has, since launch, shipped over 20 million units. I think it’s fair to say the Roomba is the most successful robot ever in terms of numbers shipped, and that was really based on the insect-level intelligence that I had started developing at MIT around 1984.
When I left MIT in 2010, I stepped down completely and started a company, Rethink Robotics, where we build robots that are used in factories throughout the world. We’ve shipped thousands of them to date. They’re different from conventional industrial robots in that they’re safe to be with, they don’t have to be caged, and you can show them what you want them to do.
In the latest version of the software we use, Intera 5, when you show the robots what you want them to do, they actually write a program. It's a graphical program that represents behavior trees, which you can then manipulate if you want, but you don't have to. Since its launch, more sophisticated companies have wanted to get in and tweak exactly what the robot is doing after it has been shown what to do, but you don't have to know what the underlying representation is. These robots use force feedback, they use vision, and they operate in real environments with real people around them 24 hours a day, seven days a week, 365 days a year, all over the world. I think they are certainly the most advanced artificial intelligence robots currently in mass deployment.
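To give a flavor of what a behavior tree is, here is a minimal sketch in Python. The node types, the tick protocol, and the pick-and-place actions are invented for illustration; this is not Intera 5's actual representation or API, just the general idea of composing robot behavior from a tree of simple nodes.

```python
# Minimal behavior-tree sketch (illustrative only; not Intera 5's actual API).
# A tree is built from composite nodes (here, Sequence) and leaf actions.
from typing import Callable, List

SUCCESS, FAILURE = "success", "failure"

class Node:
    def tick(self) -> str:
        raise NotImplementedError

class Action(Node):
    """Leaf node wrapping a single robot action, e.g. a move or a grip."""
    def __init__(self, name: str, fn: Callable[[], bool]):
        self.name, self.fn = name, fn
    def tick(self) -> str:
        return SUCCESS if self.fn() else FAILURE

class Sequence(Node):
    """Runs children in order; fails as soon as any child fails."""
    def __init__(self, children: List[Node]):
        self.children = children
    def tick(self) -> str:
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

# Hypothetical pick-and-place task assembled as a tree. The lambdas stand
# in for real sensing and actuation calls.
tree = Sequence([
    Action("move_to_part", lambda: True),
    Action("close_gripper", lambda: True),
    Action("move_to_bin", lambda: True),
    Action("open_gripper", lambda: True),
])

print(tree.tick())  # "success" when every step succeeds
```

The appeal of this representation for the kind of system Brooks describes is that the tree is both executable and inspectable: a graphical editor can show the same structure that the robot runs, so a user can tweak it without writing code.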
MARTIN FORD: How did you come to be at the forefront of robotics and AI? Where does your story begin?
RODNEY BROOKS: I grew up in Adelaide, South Australia, and in 1962 my mother found two American How and Why Wonder Books. One was called Electricity and the other, Robots and Electronic Brains. I was hooked, and I spent the rest of my childhood using what I’d learned from the books to explore and try to build intelligent computers, and ultimately robots.
I did an undergraduate degree in mathematics and started a PhD in artificial intelligence in Australia, but realized there was a little problem: there were no computer science departments or artificial intelligence researchers in the country. I applied to the three places that I'd heard of that did artificial intelligence: MIT (Massachusetts Institute of Technology), Carnegie Mellon (Pittsburgh, USA), and Stanford University. I got rejected by MIT but was accepted to Carnegie Mellon and Stanford, starting in 1977. I chose Stanford because it was closer to Australia.
My PhD at Stanford was on computer vision with Tom Binford. Following on from that, I was at Carnegie Mellon for a postdoc, then moved on to another postdoc at MIT, finally ending back at Stanford in 1983 as a member of the tenure-track faculty. In 1984 I moved back to MIT as a member of the faculty, where I stayed for 26 years.
While at MIT as a postdoc, I started working more on intelligent robots. By the time I moved back to MIT in 1984, I realized just how little progress we'd made in modeling robot perception. I was inspired by insects, which, with a hundred thousand neurons, outperformed any robot we had by fantastic amounts. I then started trying to model robot intelligence on insect intelligence, and that's what I did for the first few years.
I then ran the Artificial Intelligence Lab at MIT that Marvin Minsky had founded. Over time, that merged with the Laboratory for Computer Science and formed CSAIL, the Computer Science and Artificial Intelligence Lab, which is, today, still the largest lab at MIT.
MARTIN FORD: Looking back, what would you say is the highlight of your career with either robots or AI?
RODNEY BROOKS: The thing I’m proudest of was in March 2011, when the earthquake hit Japan and the tsunami knocked out the Fukushima Nuclear Power Plant. About a week after it happened, we got word that the Japanese authorities were really having problems in that they couldn’t get any robots into the plant to figure out what was going on. I was still on the board of iRobot at that time, and we shipped six robots to the Fukushima site within 48 hours and trained up the power company’s tech team. As a result, they acknowledged that the shutdown of the reactors relied on our robots being able to do things for them that they were unable to do on their own.
MARTIN FORD: I remember that story about Japan. It was a bit surprising because Japan is generally perceived as being on the very leading edge of robotics, and yet they had to turn to you to get working robots.
RODNEY BROOKS: I think there’s a real lesson there: the press hyped things up to make them seem far more advanced than they really were. Everyone thought Japan had incredible robotic capabilities, and this was led by an automobile company or two, when really what they had was great videos and nothing grounded in reality.
Our robots had been in war zones for nine years, used in the thousands every day. They weren’t glamorous, and their AI capability would be dismissed as being almost nothing, but that’s what’s real and applicable today. I spend a large part of my life telling people that they are being delusional when they see videos and think that great things are around the corner, or that there will be mass unemployment tomorrow due to robots taking over all of our jobs.
At Rethink Robotics, I say, if there was no lab demo 30 years ago, then it’s too early to think that we could make it into a practical product now. That’s how long it takes from a lab demo to a practical product. It’s certainly true of autonomous driving; everyone’s really excited about autonomous driving now. People forget that the first automobile that drove autonomously on a freeway at over 55 miles an hour for 10 miles was in 1987 near Munich. The first time a car drove across the US, hands off the wheel, feet off the pedals coast to coast, was No Hands Across America in 1995. Are we going to see mass-produced self-driving cars tomorrow? No. It takes a long, long, long time to develop something like this, and I think people are still overestimating how quickly this technology will be deployed.
MARTIN FORD: It sounds to me like you don’t really buy into the Kurzweil Law of Accelerating Returns. The idea that everything is moving faster and faster. I get the feeling that you think things are moving at the same pace?
RODNEY BROOKS: Deep learning has been fantastic, and people who are outside the field come in and say, wow. We’re used to exponentials because we had exponentials in Moore’s Law, but Moore’s Law is slowing down because you can no longer halve the feature size. What it’s leading to, though, is a renaissance of computer architecture. For 50 years, you couldn’t afford to do anything out of the ordinary, because the other guys would overtake you just because of Moore’s Law. Now we’re starting to see a flourishing of computer architecture, and I think it’s a golden era for computer architecture because of the end of Moore’s Law. That gets back to Ray Kurzweil and the people who saw those exponentials and think that everything is exponential.
Certain things are exponential, but not everything. If you read Gordon Moore’s 1965 paper, The Future of Integrated Electronics (published as Cramming More Components onto Integrated Circuits), where Moore’s Law originated, the last part was devoted to what the law doesn’t apply to. Moore said it doesn’t apply to power storage, for example, where it’s not about the information abstraction of zeroes and ones; it’s about bulk properties.
Take green tech as an example. A decade ago, venture capitalists in Silicon Valley got burned because they thought Moore’s Law was everywhere and that it would apply to green tech. No, that’s not how it works. Green tech relies on bulk, it relies on energy; it’s not something you can physically halve while keeping the same information content.
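To make the geometry behind that exponential concrete, here is a back-of-the-envelope sketch in Python. The figures are simple illustrative arithmetic, not measured industry data.

```python
# Why halving the feature size is exponential: transistor density scales
# with area, so each halving of the linear feature size roughly quadruples
# the number of transistors that fit in the same silicon area (2 x 2 = 4).
for halvings in range(6):
    print(f"after {halvings} halvings: {4 ** halvings}x transistors per unit area")
```

A battery or a solar panel has no analogous trick: its output is tied to bulk material and energy, so shrinking it shrinks what it delivers, which is Brooks's point about where the law does and doesn't apply.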
Getting back to deep learning, people think that because one thing happened and then another thing happened, it’s just going to get better and better. For deep learning, the fundamental algorithm of backpropagation was developed in the 1980s, and those people eventually got it to work fantastically after 30 years of work. It was largely written off in the 1980s and the 1990s for lack of progress, but there were 100 other things that were also written off at the same time. No one predicted which one of those 100 things would pop. It happened that backpropagation came together with a few extra things, such as clamping, more layers, and a lot more computation, and provided something great. You could never have predicted that backpropagation, and not one of those 99 other things, would be the one to pop through. It was by no means inevitable.
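For readers who have never seen what that fundamental algorithm actually does, here is a minimal sketch of backpropagation on a toy XOR problem. The architecture, learning rate, and task are illustrative choices made for this sketch, not anything from the large-scale systems discussed here.

```python
# Minimal backpropagation sketch: a one-hidden-layer network learning XOR.
# Toy illustration only; the gradients are derived for a squared-error loss.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```

The mechanics were known in the 1980s; what changed, as Brooks notes, was the extras layered on top, many more layers, and vastly more computation.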
Deep learning has had great success, and it will have more success, but it won’t go on forever providing more or greater success. It has limits. Ray Kurzweil is not going to be uploading his consciousness any time soon. It’s not how biological systems work. Deep learning will do some things, but biological systems rely on hundreds of algorithms, not just one algorithm. We will need hundreds more algorithms before we can make that progress, and we cannot predict when they will pop. Whenever I see Kurzweil I remind him that he is going to die.
MARTIN FORD: That’s mean.
RODNEY BROOKS: I’m going to die too. I have no doubt about it, but he doesn’t like to have it pointed out, because he’s one of these techno-religion people. There are different versions of techno-religion. There are the life-extension companies being started by the billionaires in Silicon Valley, and then there are the upload-yourself-to-a-computer people like Ray Kurzweil. I think that, probably for a few more centuries, we’re still mortal.
MARTIN FORD: I tend to agree with that. You mentioned self-driving cars, let me just ask you specifically how fast you see that moving? Google supposedly has real cars with nobody inside them on the road now in Arizona.
RODNEY BROOKS: I haven’t seen the details of that yet, but it has taken a lot longer than anyone thought. Both Mountain View (California) and Phoenix (Arizona) are different sorts of cities from much of the rest of the US. We may see some demos there, but it’s going to be a few years before there is a practical mobility-as-a-service operation that turns out to be anything like profitable. By profitable, I mean making money at almost the rate at which Uber is losing money, which was $4.5 billion last year.
MARTIN FORD: The general thought is that since Uber loses money on every ride, if they can’t go autonomous it’s not a sustainable business model.
RODNEY BROOKS: I just saw a story this morning, saying that the median hourly wage of an Uber driver is $3.37, so they’re still losing money. That’s not a big margin to get rid of and replace with those expensive sensors required for autonomous driving. We haven’t even figured out what the practical solution is for self-driving cars. The Google cars have piles of expensive sensors on the roof, and Tesla tried and failed with just built-in cameras. We will no doubt see some impressive demonstrations and they will be cooked. We saw that with robots from Japan, those demonstrations were cooked, very, very cooked.
MARTIN FORD: You mean faked?
RODNEY BROOKS: Not faked, but there’s a lot behind the curtain that you don’t see. You infer, or you make generalizations about what’s going on, but it’s just not true. There’s a team of people behind those demonstrations, and there will be teams of people behind the self-driving demonstrations in Phoenix for a long time, which is a long way from it being real.
Also, a place like Phoenix is different from where I live in Cambridge, Massachusetts, where it’s all cluttered one-way streets. This raises questions, such as: where does the driving service pick you up in my neighborhood? Does it pick you up in the middle of the road? Does it pull into a bus lane? It’s usually going to be blocking the road, so it’s got to be fast, people will be tooting their horns at it, and so on. It’s going to be a while before fully autonomous systems can operate in that world, so I think even in Phoenix we’re going to see designated pickup and drop-off places for a long time; they won’t be able to just slot nicely into the existing road network.
We’ve started to see Uber rolling out designated pickup spots for their services. They now have a new system, which they were trying in San Francisco and Boston and have now expanded to six cities, where you can stand in line at an Uber rank with other people, getting cold and wet waiting for your car. We imagine self-driving cars are going to be just like the cars of today, except with no driver. No, there are going to be transformations in how they’re used.
Our cities got transformed by cars when they first came along, and we’re going to need a transformation of our cities for this technology. It’s not going to be just like today but with no drivers in the cars. That takes a long time, and it doesn’t matter how much of a fanboy you are in Silicon Valley, it isn’t going to happen quickly.
MARTIN FORD: Let’s speculate. How long will it take to have something like what we have with Uber today, a mass driverless product where you could be in Manhattan or San Francisco and it will pick you up somewhere and take you to another place you specify?
RODNEY BROOKS: It’s going to come in steps. The first step may be that you walk to a designated pickup place and they’re there. It’s like when you pick up a Zipcar (an American car-sharing service) today: there are designated parking spots for Zipcars. That will come earlier than the service I currently get from an Uber, where they pull up and double-park right outside my house. At some point, and I don’t know whether it’s going to be in my lifetime, we’ll see a lot of self-driving cars moving around our regular cities, but it’s going to be decades in the making, and there are going to be transformations required that we haven’t quite figured out yet.
For instance, if you’re going to have self-driving cars everywhere, how do you refuel them or recharge them? Where do they go to recharge? Who plugs them in? Well, some startups have started to think about how fleet management systems for electric self-driving cars might work. They will still require someone to do the maintenance and the normal daily operations. A whole bunch of infrastructure like that would have to come about for autonomous vehicles to be a mass product, and it’s going to take a while.
MARTIN FORD: I’ve had other estimates more in the range of five years until something roughly the equivalent to Uber is ready. I take it that you think that’s totally unrealistic?
RODNEY BROOKS: Yes, that’s totally unrealistic. We might get to see certain aspects of it, but not the equivalent. It’s going to be different, and there’s a whole bunch of new companies and new operations that have to support it that haven’t happened yet. Let’s start with the fundamentals. How are you going to get in the car? How’s it going to know who you are? How do you tell it if you’ve changed your mind while you’re driving and want to go to a different location? Probably with speech; Amazon Alexa and Google Home have shown us how good speech recognition is, so I think we will expect the speech to work.
Let’s look at the regulatory system. What can you tell the car to do? What can you tell the car to do if you don’t have a driver’s license? What can a 12-year-old, who’s been put in the car by their parents to go to soccer practice, tell the car to do? Does the car take voice commands from 12-year-olds, or does it not listen to them? There’s an incredible number of practical and regulatory problems that people have not been talking about that remain to be solved. At the moment, you can put a 12-year-old in a taxi and it will take him somewhere. That isn’t going to happen for a long time with self-driving cars.
MARTIN FORD: Let’s go back to your earlier comments about your research into insects. That’s interesting because I’ve often thought that insects are very good biological robots. I know you’re no longer a researcher yourself, but I was wondering what’s currently happening in terms of building a robot or an intelligence that begins to approach what an insect is capable of, and how does that influence our steps toward superintelligence?
RODNEY BROOKS: Simply put, we don’t have anything anywhere near as good as an insect, so I’m not afraid of superintelligence showing up anytime soon. We can’t replicate the learning capabilities of insects using only a small number of unsupervised examples. We can’t achieve the resilience of an insect in being able to adapt in the world. We certainly can’t replicate the mechanics of an insect, which are amazing. No one has anything that approaches an insect’s level of intent. We have great models that can look at something, classify it, and even put a label on it in certain cases, but that’s very different from even the intelligence of an insect.