by Martin Ford
It’s really a slightly different approach, but ultimately, we’re all working together on trying to build really intelligent, flexible AI systems. We want those systems to be able to come into a new problem and use pieces of knowledge that they’ve developed from solving many other problems to all of a sudden be able to solve that new problem in a flexible way, which is essentially one of the hallmarks of human intelligence. The question is, how can we build that capability into computer systems?
MARTIN FORD: What was the path that led to you becoming interested in AI and then to your current role at Google?
JEFF DEAN: When I was 9, my dad got a computer that he assembled from a kit, and I learned to program on it through middle school and high school. From there, I went on to do a double degree in Computer Science and Economics at the University of Minnesota. My senior thesis was on parallel training of neural networks, back when neural networks were hot and exciting in the late 1980s and early 1990s. At that time, I liked the abstraction that they provided; it felt good.
I think a lot of other people felt the same way, but we just didn’t have enough computational power. I felt like if we could get 60 times the speed of the 64-processor machines we were using, then we could actually do great things. It turns out that we needed more like a million times the speed, but we have that now.
I then went to work for the World Health Organization for a year, doing statistical software for HIV and AIDS surveillance and forecasting. After that, I went to graduate school at the University of Washington, where I got a PhD in Computer Science, doing mostly compiler optimization work. I went on to work for DEC in Palo Alto in their industrial research lab, before joining a startup—I lived in Silicon Valley, and that was the thing to do!
Eventually, I ended up at Google back when it only employed around 25 people, and I’ve been here ever since. I’ve worked on a number of things at Google. The first thing I did here was work on our first advertising system. I then worked for many years on our search systems: the crawling system, the query-serving system, the indexing system, the ranking functions, and so on. I then moved on to our infrastructure software, things like MapReduce, Bigtable, and Spanner, as well as our indexing systems.
In 2011, I started to work on more machine learning-oriented systems, because I had become very interested in how we could apply the very large amounts of computation we had to training very large and powerful neural nets.
MARTIN FORD: You’re the head, and one of the founders, of Google Brain, which was one of the first real applications of deep learning and neural networks. Could you sketch out the story of Google Brain, and the role it plays at Google?
JEFF DEAN: Andrew Ng was a consultant in Google X for one day a week, and I bumped into him in the kitchen one day, and I said, “What are you up to?” He said, “Oh, I’m still figuring things out here, but at Stanford, my students are starting to look at how neural networks can be applied to different kinds of problems, and they’re starting to work.” I had experience with neural networks from my undergraduate thesis 20 years earlier, so I said, “That’s cool, I like neural networks. How are they working?” We started talking, and we came up with the relatively ambitious plan of using as much computation as we could throw at the problem to train neural networks.
We tackled two problems: the first was the unsupervised learning of image data. Here, we took 10 million frames from random YouTube videos and tried to use unsupervised learning algorithms to see what would happen if we trained a very large network. Maybe you’ve seen the famous cat neuron visualization?
MARTIN FORD: Yes. I remember that got a lot of attention at the time.
JEFF DEAN: That was a sign that there was something interesting going on there when you trained these models at scale with large amounts of data.
MARTIN FORD: Just to emphasize, this was unsupervised learning, in the sense that it figured out the concept of a cat organically, from unstructured, unlabeled data?
JEFF DEAN: Correct. We gave it the raw images from a bunch of YouTube videos, and had an unsupervised algorithm that was trying to build a representation that would allow it to reconstruct those images from that compact representation. One of the things it learned to do was to discover a pattern that would fire if there was a cat of some sort in the center of the frame because that’s a relatively common occurrence in YouTube videos, so that was pretty cool.
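As a concrete illustration of the reconstruction idea Dean describes, here is a minimal sketch of an unsupervised autoencoder in TensorFlow/Keras. This is not the 2012 system, which used a far larger architecture and custom infrastructure; it only shows the principle of training a network to rebuild its input from a compact code, with no labels involved, and the frame data here is a placeholder:

```python
import tensorflow as tf

# A minimal convolutional autoencoder: squeeze each frame through a
# compact code, then try to reconstruct the original pixels from it.
encoder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128),  # the compact representation
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(16 * 16 * 64, activation="relu"),
    tf.keras.layers.Reshape((16, 16, 64)),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid"),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Placeholder batch standing in for 64x64 video frames scaled to [0, 1].
# The target is the input itself -- that is what makes it unsupervised.
frames = tf.random.uniform((256, 64, 64, 3))
autoencoder.fit(frames, frames, epochs=1, batch_size=32)
```

Units in the learned code end up responding to recurring visual patterns in the data; at sufficient scale, one such unit famously responded to cats.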
The other thing we did was to work with the speech recognition team on applying deep learning and deep neural networks to some of the problems in the speech recognition system. At first, we worked on the acoustic model, where you try to go from raw audio waveforms to a part-of-word sound, like “buh,” or “fuh,” or “ss”—the things that form words. It turned out we could use neural networks to do that much better than the previous system they were using.
That produced very significant decreases in the word error rate of the speech recognition system. We then started collaborating with other teams around Google on the kinds of interesting perception problems they had, whether in the speech space or in image recognition or video processing. We also started to build software systems to make it easy for people to apply these approaches to new problems, systems that could automatically map these large computations onto multiple computers without the programmer having to specify the details. They’d just say, “Here’s a big model and I want to train it, so please go off and use 100 computers for it.” And that would happen. That was the first generation of software we built to address these kinds of problems.
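That first-generation system (known as DistBelief) was internal and predates TensorFlow, so the sketch below expresses the same “just ask for more machines” intent using a modern tf.distribute strategy instead; the model and data are placeholders:

```python
import tensorflow as tf

# One line chooses the distribution strategy; the model code is unchanged.
# MirroredStrategy spreads training over all local GPUs; swapping in
# MultiWorkerMirroredStrategy spreads it over many machines.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Training is written exactly as on a single device; the strategy
# handles splitting batches across devices and aggregating gradients.
x = tf.random.normal((1024, 100))
y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, batch_size=128)
```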
We then built the second generation, TensorFlow, and we decided to open source that system. We designed it with three objectives. One was to be really flexible, so that we could quickly try out lots of different research ideas in the machine learning space. The second was to be able to scale and tackle problems where we had lots of data and wanted very large, computationally expensive models. The third was to be able to go from a research idea to a production serving system for a model within the same underlying software system. We open sourced it at the end of 2015, and since then it’s had quite a lot of adoption externally. Now there’s a large community of TensorFlow users spanning companies, academic institutions, and hobbyists.
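A minimal sketch of that third objective, going from an experiment to a production-servable artifact in one code path; the toy model, random data, and output path are placeholders:

```python
import tensorflow as tf

# Train a toy model exactly as you would in a research experiment...
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
x = tf.random.normal((128, 4))
y = tf.random.normal((128, 1))
model.fit(x, y, epochs=1)

# ...then write it out as a SavedModel, the format that production
# serving systems such as TensorFlow Serving load directly.
tf.saved_model.save(model, "/tmp/toy_model/1")
```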
MARTIN FORD: Is TensorFlow going to become a feature of your cloud services, so that your customers have access to machine learning?
JEFF DEAN: Yes, but there’s a bit of nuance here. TensorFlow itself is an open source software package. We want our cloud to be the best place to run TensorFlow programs, but you can run them wherever you want. You can run them on your laptop, you can run them on a machine with GPU cards that you bought, you can run them on a Raspberry Pi, and on Android.
MARTIN FORD: Right, but on Google Cloud, you’ll have tensor processors and the specialized hardware to optimize it?
JEFF DEAN: That’s correct. In parallel with the TensorFlow software development, we’ve been working on designing custom processors for these kinds of machine learning applications. These processors are specialized for essentially low-precision linear algebra, which forms the core of all of these applications of deep learning that you’ve been seeing over the last 6 to 7 years.
The processors can train models very fast, and they can do it more power-efficiently. They can also be used for inference, where you actually have a trained model, and now you just want to apply it very quickly with high throughput for some production use, like Google Translate, or our speech recognition systems, or even Google Search.
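The “low-precision linear algebra” Dean mentions can be approximated in user code with TensorFlow’s mixed-precision API, as in this sketch; the actual TPU internals are more involved, and the model here is a placeholder:

```python
import tensorflow as tf

# Run the bulk of the math in bfloat16, the 16-bit format the TPU's
# matrix units consume, while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation="relu"),
    # Keep the final layer in float32 so the outputs stay full precision.
    tf.keras.layers.Dense(10, dtype="float32"),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```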
We’ve also made the second-generation Tensor Processing Units (TPUs) available to cloud customers in several ways. One is under the covers in a few of our cloud products; the other is that customers can get a raw virtual machine with a cloud TPU device attached and then run their own machine learning computations, expressed in TensorFlow, on that device.
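For readers curious what “a virtual machine with a cloud TPU device attached” looks like from the TensorFlow side, this is roughly the documented connection pattern; the TPU name “my-tpu” is a placeholder, and the exact setup varies by runtime:

```python
import tensorflow as tf

# "my-tpu" stands in for the name or address of the attached Cloud TPU;
# on some runtimes an empty string is used instead.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built under this scope has its variables and computation
# placed on the TPU cores; the training loop itself is unchanged.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
```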
MARTIN FORD: With all of this technology integrated into the cloud, are we getting close to the point where machine learning becomes available to everybody, like a utility?
JEFF DEAN: We have a variety of cloud products that are meant to appeal to different constituencies in this space. If you’re fairly experienced with machine learning, then you can get a virtual machine with one of these TPU devices on it, and write your own TensorFlow programs to solve your particular problem in a very customizable way.
If you’re not as much of an expert, we have a couple of other things. We have pre-trained models that you can use that require no machine learning expertise. You can just send us an image or a clip of audio, and we will tell you what’s in that image. For instance, “that’s a picture of a cat,” or “people seem happy in the image,” or “we extracted these words from the image.” In the audio case, it’s “we think this is what the people said in this audio clip.” We also have translation models and video models. Those are very good if what you want is a general-purpose task, like reading the words in an image.
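A hedged sketch of what calling such a pre-trained model looks like with the google-cloud-vision Python client; it assumes Google Cloud credentials are configured, and “cat.jpg” is a placeholder for any local image:

```python
from google.cloud import vision

# Requires Google Cloud credentials to be set up in the environment.
client = vision.ImageAnnotatorClient()
with open("cat.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the pre-trained model what is in the image -- no ML expertise
# or training data needed on the caller's side.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)
```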
We also have a suite of AutoML products, which are designed for people who may not have much machine learning expertise but want a customized solution for a particular problem. Imagine you have a set of images of parts going down your assembly line, there are 100 kinds of parts, and you want to identify which part it is from the pixels in an image. There, through a technique called AutoML, we can train a custom model for you without you having to know any machine learning. Essentially, it repeatedly tries lots and lots of machine learning experiments, the way a human machine learning expert would, but in an automated way, and then gives you a very high-accuracy model for that particular problem.
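The AutoML products themselves are proprietary, but the core loop they automate, trying many candidate models and keeping the best, can be sketched as a simple random search; the search space, the random placeholder data, and the 100-part classification task here are all illustrative assumptions:

```python
import random
import tensorflow as tf

# Placeholder data standing in for the user's labeled images of parts.
train_x = tf.random.uniform((512, 28, 28))
train_y = tf.random.uniform((512,), maxval=100, dtype=tf.int32)
val_x = tf.random.uniform((128, 28, 28))
val_y = tf.random.uniform((128,), maxval=100, dtype=tf.int32)

def build_candidate(width, depth, lr):
    """Build one candidate model from a sampled configuration."""
    model = tf.keras.Sequential([tf.keras.layers.Flatten()])
    for _ in range(depth):
        model.add(tf.keras.layers.Dense(width, activation="relu"))
    model.add(tf.keras.layers.Dense(100))  # e.g. 100 kinds of parts
    model.compile(
        optimizer=tf.keras.optimizers.Adam(lr),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    return model

# The loop a human expert would run by hand: sample a configuration,
# train it, evaluate it, keep the best. Production AutoML systems search
# far larger spaces (including architectures) with smarter strategies.
best_acc, best_model = 0.0, None
for _ in range(10):
    model = build_candidate(width=random.choice([64, 128, 256]),
                            depth=random.choice([1, 2, 3]),
                            lr=random.choice([1e-2, 1e-3, 1e-4]))
    model.fit(train_x, train_y, epochs=3, verbose=0)
    _, acc = model.evaluate(val_x, val_y, verbose=0)
    if acc > best_acc:
        best_acc, best_model = acc, model
```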
I think that’s really important, because if you think about the world today, there are between 10,000 and 20,000 organizations in the world that have hired machine learning expertise in-house and are productively employing it. I’m making up that number, but it’s roughly the right order of magnitude. Then, if you think about all the organizations in the world that have data that could be used for machine learning, it’s probably 10 million organizations that have some sort of machine learning problem.
Our aim is to make that approach much easier to use, so that you don’t need a master’s-level course on machine learning to do this. It’s more at the level of someone who could write a database query. If users with that level of expertise were able to get a working machine learning model, that would be quite powerful. For example, every small city has lots of interesting data about how it should set its stoplight timers. Right now, cities don’t really do that with machine learning, but they probably should.
MARTIN FORD: So, a democratization of AI is one of the goals that you’re working toward. What about the route to general intelligence, what are some of the hurdles that you see there?
JEFF DEAN: One of the big problems with the use of machine learning today is that we typically find a problem we want to solve with machine learning, and then we collect a supervised training dataset. We then use that to train a model that’s very good at that particular thing, but it can’t do anything else.
If we really want generally intelligent systems, we want a single model that can do a hundred thousand things. Then, when the 100,001st thing comes along, it builds on the knowledge it gained from solving the other things and develops new techniques that are effective at solving the new problem. That has several advantages. One is that you get an incredible multitask benefit from using the wealth of your experience to solve new problems more quickly and better, because many problems share some aspects. It also means that you need much less data, or fewer observations, to learn to do a new thing.
Unscrewing one kind of jar lid is a lot like unscrewing another kind of jar lid, except for maybe a slightly different turning mechanism. Solving this math problem is a lot like those other math problems, except with some sort of twist. I think that’s the approach we really need to be taking, and I think experimentation is a big part of it. So, how can systems learn from demonstrations? Supervised data is like that, and we’re doing a bit of work in this space in robotics as well: we can have humans demonstrate a skill, and robots can then learn that skill from video demonstrations, learning to pour things with relatively few examples of humans pouring things.
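One common, concrete form of this “wealth of experience” idea is transfer learning: reuse a network trained on one large task and learn only a small new piece for the new task. A minimal sketch in TensorFlow/Keras, where the choice of MobileNetV2 and the five new categories are illustrative assumptions:

```python
import tensorflow as tf

# Reuse a trunk trained on a large prior task (ImageNet) as the
# shared knowledge, and keep it frozen.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False

# Learn only a small head for the new task, so far fewer labeled
# examples are needed than when training from scratch.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5),  # e.g. 5 new categories
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```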
Another hurdle is that we need very large computational systems, because if we really want a single system that solves all of our machine learning problems, that’s a lot of computation. Also, if we really want to try different approaches to this, the turnaround time on those kinds of experiments needs to be very fast. Part of the reason we’re investing in building large-scale machine learning accelerator hardware, like our TPUs, is that we believe that if you want these kinds of large, single, powerful models, it’s really important to have enough computational capability to do interesting things and to make fast progress.
MARTIN FORD: What about the risks that come along with AI? What are the things that we really need to be concerned about?
JEFF DEAN: Changes in the labor force are going to be significant, and governments and policymakers should really be paying attention to them. It’s very clear that even without significant further advances, the fact that computers can now automate a lot of things that weren’t automatable even four or five years ago is a pretty big change. It’s not confined to one sector; it cuts across many different jobs and kinds of employment.
I was on a White House Office of Science and Technology Policy committee, convened at the end of the Obama administration in 2016, which brought together about 20 machine learning people and 20 economists. In that group, we discussed what kinds of impact this would have on the labor markets. It’s definitely the kind of thing where you want governments to be paying attention and figuring out how people whose jobs change or shift can acquire new skills or new kinds of training that let them do things that are not at risk of automation. That’s an important aspect, and governments have a strong, clear role to play in it.
MARTIN FORD: Do you think someday we may need a universal basic income?
JEFF DEAN: I don’t know. It’s very hard to predict, because any time we’ve gone through technological change, that kind of shift has happened; it’s not like this is a new thing. The Industrial Revolution, the Agricultural Revolution, all of these disrupted society as a whole. What people do in their daily jobs has shifted tremendously. I think this is going to be similar, in that entirely new kinds of things will be created for people to do, and it’s somewhat hard to predict what those things will be.
So, I do think it’s important that people be flexible and learn new things throughout their career. I think that’s already true today. Whereas 50 years ago, you could go to school and then start a career and be in that career for many, many years, today you might work in one role for a few years and pick up some new skills, then do something a bit different. That kind of flexibility is, I think, important.
In terms of other kinds of risks, I’m not as worried about the Nick Bostrom superintelligence aspect. I do think that as computer scientists and machine learning researchers we have the opportunity and the ability to shape how we want machine learning systems to be integrated and used in our society.
We can make good choices there, or we can make some not-so-good choices. As long as we make good choices, where these things are actually used for the benefit of humanity, then it’s going to be fantastic. We’ll get better healthcare, and we’ll be able to make all kinds of new scientific discoveries in collaboration with human scientists by generating new hypotheses automatically. Self-driving cars are clearly going to transform society in very positive ways, but at the same time, they are going to be a source of disruption in the labor markets. There are nuances to many of these developments that are important.
MARTIN FORD: One cartoon view of this is that a small team—maybe at Google—develops AGI, and that small group of people, who are not necessarily tied into these broader issues, ends up making the decision for everyone. Do you think there is a place for regulation of some AI research or applications?
JEFF DEAN: It’s possible. I think regulation has a role to play, but I want regulation to be informed by people with expertise in the field. Regulation sometimes has a bit of a lag, as governments and policymakers catch up to what is now possible. Knee-jerk reactions in terms of regulation or policymaking are probably not helpful, but informed dialog with people in the field is important as government figures out what role it wants to play in how things play out.
With respect to the development of AGI, I think it’s really important that we do this ethically and with sound decision-making. That’s one reason Google has put out a clear document of the principles by which we’re approaching these sorts of issues (https://www.blog.google/technology/ai/ai-principles/). Our AI principles document is a good example of the thought we’re putting not just into the technical development of this, but into how we want to be guided in which kinds of problems we will tackle with these approaches, how we will approach them, and what we will not do.
JEFFREY DEAN joined Google in 1999, and is currently a Google Senior Fellow in the Research Group, where he leads the Google Brain project and is the overall director of artificial intelligence research at the company.
Jeff received a PhD in Computer Science from the University of Washington in 1996, working with Craig Chambers on whole-program optimization techniques for object-oriented languages. He received a BS, summa cum laude, in Computer Science & Economics from the University of Minnesota in 1990. From 1996 to 1999, he worked at Digital Equipment Corporation’s Western Research Lab in Palo Alto on low-overhead profiling tools, the design of profiling hardware for out-of-order microprocessors, and web-based information retrieval. From 1990 to 1991, Jeff worked for the World Health Organization’s Global Programme on AIDS, developing software for statistical modeling, forecasting, and analysis of the HIV pandemic.