Architects of Intelligence


by Martin Ford


  If we’re talking about identifying a cat in a picture, it’s very clear what the phenomenon is, and we would get a bunch of labeled data, and we would train the neural network. If you say: “How do I produce an understanding of this content?”, it’s not even clear I can get humans to agree on what an understanding is. Novels and stories are complex, multilayered things, and even when there is enough agreement on the understanding, it’s not written down enough for a system to learn the immensely complex function represented by the underlying phenomenon, which is human intelligence itself.

  Theoretically, if you had the data you needed that mapped every kind of English story to its meaning, and there was enough there to learn the meaning mapping—to learn what the brain does given an arbitrary collection of sentences or stories—then could a neural network learn it? Maybe, but we don’t have that data, we don’t know how much data is required, and we don’t know what it takes to learn it in terms of the complexity of the function a neural network could potentially learn. Humans can do it, but that’s because the human brain is constantly interacting with other humans and it’s prewired for doing this kind of thing.

  I would never take a theoretical position that says, “I have a general function finder. I can do anything with it.” At some levels, sure, but where’s the data to produce the function that represents human understanding? I don’t know.

  Engaging and acquiring that information is something I don’t know how to do with a neural network right now. I do have ideas on how to do that, and that doesn’t mean I don’t use neural networks and other machine learning techniques as part of that overarching architecture.

  MARTIN FORD: You had a part in a documentary called Do You Trust This Computer?, where you said, “In three to five years, we’ll have a computer system that can autonomously learn how to understand and how to build understanding, not unlike the way a human mind works.” That really struck me. That sounds like AGI, and yet you’re giving it a three- to five-year time frame. Is that really what you’re saying?

  DAVID FERRUCCI: It’s a very aggressive timeline, and I’m probably wrong about that, but I would still argue that it’s something that we could see within the next decade or so. It’s not going to be a 50- or a 100-year wait.

  I think that we will see two paths. We will see the perception side and the control side continue to get better in leaps and bounds. That is going to have a dramatic impact on society, on the labor market, on national security, and on productivity, which is all going to be very significant, and that’s not even addressing the understanding side.

  I think that will lead to a greater opportunity for AI to engage humans, with things like Siri and Alexa engaging humans more and more in language and thinking tasks. It’s through those ideas, and with architectures like we’re building at Elemental Cognition, that we will start to be able to learn how to develop that understanding side.

  My three- to five-year estimate was a way of saying, this is not something that we have no idea how to do. This is something we do have an idea how to do, and it’s a matter of investing in the right approach and putting in the engineering necessary to achieve it. I would make a different estimate if it were something I thought was possible but had no idea how to achieve.

  How long the wait is depends a lot on where the investment goes. A lot of the investment today is going into the pure statistical machine learning stuff because it’s so short-term and so hot. There’s just a lot of low-hanging fruit there. One of the things I’m doing is getting investment for another technology that I think we need in order to develop that understanding side. It all depends on how the investment gets applied and over what time frame. I don’t think, as other people might, that we don’t know how to do it and we’re waiting for some enormous breakthrough. I don’t think that’s the case; I think we do know how to do it, we just need to prove that.

  MARTIN FORD: Would you describe Elemental Cognition as an AGI company?

  DAVID FERRUCCI: It’s fair to say we’re focused on building a natural intelligence with the ability to autonomously learn, read, and understand, and we’re achieving our goals for fluent dialog with humans in that way.

  MARTIN FORD: The only other company I’m aware of that is also focused on that problem is DeepMind, but I’m struck by how different your approach is. DeepMind is focused on deep reinforcement learning through games and simulated environments, whereas what I hear from you is that the path to intelligence is through language.

  DAVID FERRUCCI: Let’s restate the goal a little bit. Our goal is to produce an intelligence that is anchored in logic, language, and reason because we want to produce a compatible human intelligence. In other words, we want to produce something that can process language the way humans process language, can learn through language, and can deliver knowledge fluently through language and reason. This is very specifically the goal.

  We do use a variety of machine learning techniques. We use neural networks to do a variety of different things. The neural networks, however, do not alone solve the understanding problem. In other words, it’s not an end-to-end solution. We also use continuous dialog, formal reasoning, and formal logic representations. For things that we can learn efficiently with neural networks, we do. For the things we can’t, we find other ways to acquire and model that information.
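
  As a rough illustration of the hybrid approach described above, here is a minimal sketch, assuming a toy domain: a stand-in for a learned extractor proposes candidate facts from text, and a small forward-chaining reasoner derives conclusions from explicit, hand-authored rules. This is not Elemental Cognition’s actual architecture; every name, rule, and the toy extractor below are invented for illustration.

```python
# Illustrative sketch only (not Elemental Cognition's system): a learned
# component proposes facts from text; a symbolic reasoner derives more.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    predicate: str
    subject: str
    obj: str

def neural_extractor(sentence: str) -> list[Fact]:
    """Toy stand-in for a trained model that maps text to candidate facts.
    A real system would put a neural relation extractor here."""
    words = sentence.rstrip(".").split()
    facts = []
    if "is" in words and "a" in words:   # handles "Fido is a dog."
        i = words.index("is")
        facts.append(Fact("isa", words[i - 1], words[i + 2]))
    return facts

# Hand-authored rules: the explicit symbolic side that a network alone
# doesn't give us. Each rule: if (pred, obj) matches, derive a new fact.
RULES = [
    (("isa", "dog"), ("isa", "mammal")),
    (("isa", "mammal"), ("has", "fur")),
]

def reason(facts: set[Fact]) -> set[Fact]:
    """Forward-chain over RULES until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for fact in list(derived):
            for (pred, obj), (new_pred, new_obj) in RULES:
                if fact.predicate == pred and fact.obj == obj:
                    new_fact = Fact(new_pred, fact.subject, new_obj)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

if __name__ == "__main__":
    extracted = set(neural_extractor("Fido is a dog."))
    for f in sorted(reason(extracted), key=str):
        print(f)   # includes derived facts: Fido isa mammal, Fido has fur
```

  The point of the sketch is the division of labor: the learned component handles the mapping from raw language to structure, while the symbolic side carries explicit rules whose derivations can be inspected and explained.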

  MARTIN FORD: Are you also working on unsupervised learning? Most AI that we have today is trained with labeled data, and I think real progress will probably require getting these systems to learn the way that a person does, organically from the environment.

  DAVID FERRUCCI: We do both. We do unsupervised learning from large corpora, and we also do supervised learning from annotated content.
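
  To make the two regimes concrete, here is a toy sketch with invented data (not Elemental Cognition’s pipeline): the unsupervised half gathers co-occurrence statistics from raw, unannotated text, while the supervised half fits a trivial classifier to labeled examples.

```python
# Toy contrast between the two regimes; all data and names are invented.
from collections import Counter, defaultdict

# Unsupervised: statistics from raw text alone, no labels required.
raw_corpus = ["the cat sat on the mat", "the dog sat on the rug"]
cooccur = defaultdict(Counter)
for sentence in raw_corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for ctx in tokens[:i] + tokens[i + 1:]:
            cooccur[w][ctx] += 1
print(cooccur["sat"].most_common(2))   # words that co-occur with "sat"

# Supervised: annotated (text, label) pairs train a minimal classifier,
# here just per-word label counts.
annotated = [("great movie", "pos"), ("terrible movie", "neg")]
word_label = defaultdict(Counter)
for text, label in annotated:
    for w in text.split():
        word_label[w][label] += 1

def classify(text: str) -> str:
    votes = Counter()
    for w in text.split():
        votes.update(word_label[w])
    return votes.most_common(1)[0][0] if votes else "unknown"

print(classify("great acting"))   # "pos", driven by the labeled examples
```

  The contrast is that the first half needs only raw text, while the second cannot be trained at all without the annotations.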

  MARTIN FORD: Let’s talk about the future implications of AI. Do you think there is the potential for a big economic disruption in the near future, where a lot of jobs are going to be deskilled or disappear?

  DAVID FERRUCCI: I think it’s definitely something that we need to pay attention to. I don’t know if it’ll be more dramatic than previous examples of a new technology rolling in, but I think this AI revolution will be significant and comparable to the Industrial Revolution.

  I think there will be displacements and there will be the need to transition the workforce, but I don’t think it’s going to be catastrophic. There’s going to be some pain in that transition, but in the end, my guess is that it’s likely to create more jobs. I think that’s also what has happened historically. Some people might get caught in that and they have to retrain; that certainly happens, but it doesn’t mean there’ll be fewer jobs overall.

  MARTIN FORD: Do you think there’s likely to be a skill mismatch problem? For instance, if a lot of the new jobs created are for robotics engineers, deep learning experts, and so forth?

  DAVID FERRUCCI: Certainly, those jobs will get created, and there’ll be a skills mismatch, but I think other jobs will be created as well where there’ll be greater opportunities just for refocusing and saying, “What do we want humans doing if machines are doing these other things?” There are tremendous opportunities in healthcare and caregiving, where things like human contact are important.

  The future we envision at Elemental Cognition has human and machine intelligence tightly and fluently collaborating. We think of it as thought-partnership. Through thought-partnership with machines that can learn, reason, and communicate, humans can do more because they don’t need as much training and as much skill to get access to knowledge and to apply it effectively. In that collaboration, we are also training the computer to be smarter and more understanding of the way we think.

  Look at all the data that people are giving away for free today; that data has value. Every interaction you have with a computer has value because that computer’s getting smarter. So, to what extent do we start paying for that, and paying for that more regularly? We want computers to interact in ways that are more compatible with humans, so why aren’t we paying humans to help us achieve that? I think the economics of the human-machine collaboration is interesting in and of itself, but there will be big transitions. Driverless cars are inevitable, and there are quite a few people who have decent blue-collar jobs driving, and I think that’ll evolve. I don’t know if that will be a trend, but it will certainly be a transition.

  MARTIN FORD: How do you feel about the risks of superintelligence that Elon Musk and Nick Bostrom have both been talking about?

  DAVID FERRUCCI: I think there’s a lot of cause for concern anytime you give a machine leverage, that is, when you put it in control of something that can amplify an error or the effect of a bad actor. For instance, if I put machines in control of the electrical grid, of weapon systems, or of the driverless car network, then any mistake there can be amplified into a significant disaster. If there’s a cybersecurity problem or an evil actor hacks the system, it’s going to amplify the impact of the error or the hack. That’s what we should be super concerned about. As we put machines in control of more and more things like transportation systems, food systems, and national security systems, we need to be super careful. This doesn’t have anything specifically to do with AI, except that you must design those systems with concern for error cases and cybersecurity.

  The other thing that people like Nick Bostrom talk about is how the machine might develop its own goals and decide it’s going to lay waste to the human race to achieve its goals. That’s something I’m less concerned about because there are fewer incentives for machines to react like that. You’d have to program the computer to do something like that.

  Nick Bostrom talks about the idea that you could give the machine a benign goal, but because it’s smart enough, it will find a complex plan that has unintended consequences when it executes that plan. My response to that is simple: why would you do that? I mean, you don’t give a machine that has to make paper clips leverage over the electrical grid. It comes back to thoughtful design and design for security. There are many other human problems I would put higher on the list of concerns than the notion that an AI would suddenly come up with its own desires and goals, and/or plan to sacrifice the human race to make more paper clips.

  MARTIN FORD: What do you think about the regulation of AI? Is there a need for that?

  DAVID FERRUCCI: The idea of regulation is something we do have to pay attention to. As an industry, we have to decide broadly who’s liable for what when we have machines making decisions that affect our lives. That’s the case whether it’s in healthcare, policymaking, or any of the other fields. Are we, as individuals who are affected by decisions that are made by machines, entitled to an explanation that we can understand?

  In some sense, we already face these kinds of things today. For example, in healthcare we’re sometimes given explanations that say, “We think you should do this and we highly recommend it because 90% of the time this is what happens.” They’re giving you a statistical average rather than particulars about an individual patient. Should you be satisfied with that? Can you request an explanation as to why they’re recommending that treatment for this individual patient? It’s not about the probabilities; it’s about the possibilities for an individual case. It raises very interesting questions.

  That is one area where governments will need to step in and say, “Where does the liability fall and what are we owed as individuals who are potential subjects of machine decision-making?”

  The other area, which we talked a little bit about, is the criteria for designing systems that have dramatic leverage, where negative effects like errors or hacking can be dramatically amplified and have broad societal impact. You don’t want to slow down the advancement of technology, but at the same time, you don’t want to be too casual about the controls around deploying systems like that.

  Another area for regulation that’s a little dicey is the labor market. Do you slow things down and say, “You can’t put machines in this job because we want to protect the labor market”? I think there’s something to be said for helping society transition smoothly and avoiding dramatic impacts, but at the same time, you don’t want to slow down our advance as a society over time.

  MARTIN FORD: Since you departed IBM, they’ve built a big business unit around Watson and are trying to commercialize that with mixed results. What do you think of IBM’s experience and the challenges they’ve faced, and does that relate to your concern about building machines that can explain themselves?

  DAVID FERRUCCI: I’m many miles away from what’s going on there nowadays, but my sense of it from a business perspective is that they seized on Watson as a brand to help them get into the AI business, and I think it’s given them that opportunity. When I was at IBM, they were doing all kinds of AI technology; it was very spread out throughout the company in different areas. I think that when Watson won the Jeopardy! competition and demonstrated to the public a really palpable AI capability, all that excitement and momentum helped IBM to organize and integrate all their technology under a single brand. That demonstration gave them the ability to position themselves well, both internally and externally.

  With regard to the businesses, I think IBM is in a unique place regarding the way they can capitalize on this kind of AI. It’s very different than the consumer space. IBM can approach the market broadly through business intelligence, data analytics, and optimization. And they can deliver targeted value, for example in healthcare applications.

  It’s tough to measure how successful they’ve been because it depends on what you count as AI and where you are in the business strategy. We will see how it plays out. As far as consumer mindshare goes, these days it seems to me that Siri and Amazon’s Alexa are in the limelight. Whether or not they’re providing good value on the business side is a question I can’t answer.

  MARTIN FORD: There are concerns that China may have an advantage given that they have a larger population, more data, and fewer concerns about privacy. Is that something we should worry about? Do we need more industrial policy in the United States in order to be more competitive?

  DAVID FERRUCCI: I think that there is a bit of an arms race in the sense that these things will affect productivity, the labor markets, national security, and consumer markets, so it matters a lot. To stay competitive as a nation you do have to invest in a broad portfolio of AI. You don’t want to put all your eggs in one basket. You have to attract and retain talent to stay competitive, so I think there’s no question that national boundaries create a certain competition because of how much this affects economic competitiveness and security.

  The challenging balancing act is how you remain competitive while, at the same time, thinking carefully about controls, regulation, and other kinds of impacts, such as privacy. Those are tough issues, and I think one of the things the world is going to need is more thoughtful and knowledgeable leaders in this space who can help set policy and make some of those calls. That’s a very important service, and the more knowledgeable you are, the better, because if you look under the hood, this is not simple stuff. There are a lot of tough questions and a lot of technology issues to make choices on. Maybe you need AI for that!

  MARTIN FORD: Given these risks and concerns, are you optimistic with regard to the future of artificial intelligence?

  DAVID FERRUCCI: Ultimately, I’m an optimist. I think it’s our destiny to pursue this kind of thing. Step back to what interested me when I first started on my path in AI: understanding human intelligence; understanding it in a mathematical and systematic way; understanding what the limitations are, how to enhance it, how to grow it, and how to apply it. The computer provides us with a vehicle through which we can experiment with the very nature of intelligence. You can’t say no to that. We associate our sense of self with our intelligence, and so how do we not do everything we can to understand it better, to apply it more effectively, and to understand its strengths and its weaknesses? It’s more our destiny than anything else. It’s the fundamental exploration—how do our minds work?

  It’s funny because we think about how humanity wants to explore space and beyond to find other intelligences, when in fact, we have one growing right next to us. What does it even mean? What’s the very nature of intelligence? Even if we were to find another species, we’d know more about what to expect, and what’s both possible and impossible, as we explore the very fundamental nature of intelligence. It’s our destiny to cope with this, and I think that ultimately, it will dramatically enhance our creativity and our standard of living in ways we can’t even begin to imagine today.

  There is this existential risk, and I think it’s going to drive a change in how we think about ourselves and what we consider unique about being human. Coming to grips with that is going to be a very interesting question. For any given task, we can get a machine that does it better, so where does our self-esteem go? Where does our sense of self go? Does it fall back into empathy, emotion, understanding, and things that might be more spiritual in nature? I don’t know, but these are the interesting questions as we begin to understand intelligence in a more objective way. You can’t escape it.

  DAVID FERRUCCI is the award-winning AI researcher who built and led the IBM Watson team from its inception in 2006 to its landmark success in 2011 when Watson defeated the greatest Jeopardy! players of all time.

  In 2013, David joined Bridgewater Associates as Director of Applied AI. His nearly 30 years in AI and his passion to see computers fluently think, learn, and communicate inspired him to found Elemental Cognition LLC in 2015 in partnership with Bridgewater. Elemental Cognition is focused on creating novel AI systems that dramatically accelerate automated language understanding and intelligent dialog.

 
