Architects of Intelligence

by Martin Ford


  GARY MARCUS: Yes, I think so. I will be surprised if we get there that quickly, but that’s exactly why I think we should have some people thinking about these problems. I just don’t think that they’re our most pressing problems right now. Even when we do get to an AGI system, who’s to say that the AGI system is going to have any interest whatsoever in meddling in human affairs?

  We’ve gone from AI not being able to win at checkers about 60 years ago to being able to win at Go, which is a much harder game, in the last year. You could plot a game IQ and make up a scale to say that game IQ has gone from 0 to 60 in 60 years. You could then do a similar thing for machine malevolence. Machine malevolence has not changed at all over that time. There’s no correlation; there is zero machine malevolence. There was none, and there is none. It doesn’t mean that it’s impossible—I don’t want to make the inductive argument that because it never happened, it never will—but there’s no indication of it.

  MARTIN FORD: It sounds to me like a threshold problem, though: you can’t have machine malevolence until you have AGI.

  GARY MARCUS: Possibly. Some of it has to do with motivational systems and you could try to construct an argument saying that AGI is a prerequisite for machine malevolence, but you couldn’t say that it’s a necessary and sufficient condition.

  Here’s a thought experiment. I can name a single genetic factor that will increase your chance of committing violent acts by a factor of 5. If you don’t have it, your proclivity to violence is pretty low. Are machines going to have this genetic factor, or not? The genetic factor, of course, is male versus female.

  MARTIN FORD: Is that an argument for making AGI female?

  GARY MARCUS: The gender is a proxy, it’s not the real issue, but it is an argument for making AI nonviolent. We should have restrictions and regulations to reduce the chance of AI being violent or of coming up with ideas of its own about what it wants to do with us. These are hard and important questions, but they’re much less straightforward than Elon’s quotes might lead a lot of people to think.

  MARTIN FORD: What he’s doing in terms of making an investment in OpenAI doesn’t sound like a bad thing, though. Somebody ought to be doing that work. It would be hard to justify having the government invest massive resources in working on AI control issues, but having private entities do that work seems positive.

  GARY MARCUS: The US Department of Defense does spend some money on these things, as they should, but you have to have a risk portfolio. I’m more worried about certain kinds of bioterrorism than I am about these particular AI threats, and I’m more worried about cyber warfare, which is a real, ongoing concern.

  There are two key questions here. One is, do you think that the probability of X is greater than 0? The answer is clearly yes. The other is, relative to the other risks that you might be concerned about, where would you rank this? To which I would say, these are somewhat unlikely scenarios, and there are other scenarios that are more likely.

  MARTIN FORD: If at some point we succeed in building an AGI, do you think it would be conscious, or is it possible to have an intelligent zombie with no inner experience?

  GARY MARCUS: I think it’s the latter. I don’t think that consciousness is a prerequisite. It might be an epiphenomenon in humans or maybe some other biological creatures. There’s another thought experiment that says, could we have something that behaves just like me but isn’t conscious? I think the answer is yes. We don’t know for sure, because we don’t have any independent measure of what consciousness is, so it’s very hard to ground these arguments.

  How would we tell if a machine was conscious? How do I know that you’re conscious?

  MARTIN FORD: Well, you can assume I am because we’re the same species.

  GARY MARCUS: I think that’s a bad assumption. What if it turns out that consciousness is randomly distributed through our population to one-quarter of the people? What if it’s just a gene? I have the supertaster gene that makes me sensitive to bitter compounds, but my wife doesn’t. She looks like she’s from the same species as me, but we differ in that property, and so maybe we differ in the consciousness property as well? I’m kidding, but we can’t really use an objective measure here.

  MARTIN FORD: It sounds like an unknowable problem.

  GARY MARCUS: Maybe someone will come up with a cleverer answer, but so far, most of the academic research is focused on the part of consciousness we call awareness. At what point does your central nervous system realize that certain information is available?

  Research has shown that if you only see something for 100 milliseconds, you might not realize you’d seen it. If you see it for half a second, you’re pretty sure you actually saw it. With that data we can start to build up a characterization of which neural circuits, at which time frames, contribute information that you can reflect on, and we can call that awareness. We’re making progress on that, but not yet on consciousness in general.

  MARTIN FORD: You clearly think AGI is achievable, but do you think it’s inevitable? Do you think there is any probability that maybe we can never build an intelligent machine?

  GARY MARCUS: It’s almost inevitable. I think the primary things that would keep us from getting there are other extinction-level existential risks, such as getting hit by an asteroid, blowing ourselves up, or engineering a super-disease. We’re continuously accumulating scientific knowledge, we’re getting better at building software and hardware, and there’s no principled reason why we can’t do it. I think it will almost certainly happen unless we reset the clock, which I can’t rule out.

  MARTIN FORD: What do you think about the international arms race toward advanced AI, particularly with countries like China?

  GARY MARCUS: China has made AI a major center of its ambitions and been very public about it. The United States for a while had no response whatsoever, and I found that disturbing and upsetting.

  MARTIN FORD: It does seem that China has many advantages, such as a much larger population and fewer privacy controls, which means more data.

  GARY MARCUS: They’re much more forward-thinking because they realize how important AI is, and they are investing in it as a nation.

  MARTIN FORD: How do you feel about regulation of the field? Do you think that the government should get involved in regulating AI research?

  GARY MARCUS: I do, but it’s not clear to me what those regulations should be. I think a significant portion of AI funding should address those questions. They’re hard questions.

  For example, I don’t love the idea of autonomous weapons, but simply banning them outright may be naive and may create more problems, where some people have them and others don’t. What should those regulations be, and how should we enforce them? I’m afraid I don’t have the answer.

  MARTIN FORD: Do you believe that AI is going to be positive for humanity?

  GARY MARCUS: Hopefully, but I don’t think that is a given. The best way in which AI could help humanity is by accelerating scientific discovery in healthcare. Instead, AI research and implementation right now are mostly about ad placement.

  AI has a lot of positive potential, but I don’t think there’s enough focus on that side of it. We do some, but not enough. I also understand that there are going to be risks, job losses, and social upheaval. I’m an optimist in a technical sense in that I do think AGI is achievable, but I would like to see a change in what we develop and how we prioritize those things. Right now, I’m not totally optimistic that we’re heading in the right direction in terms of how we’re using AI and how we’re distributing it. I think there’s serious work to be done there to make AI have the positive impact on humanity that it could.

  GARY MARCUS is a professor of psychology and neural science at New York University. Much of Gary’s research has focused on understanding how children learn and assimilate language, and how these findings might inform the field of artificial intelligence.

  He is the author of several books, including The Birth of the Mind, Kluge: The Haphazard Construction of the Human Mind, and the bestselling Guitar Zero, in which he explores the cognitive challenges involved as he learns to play the guitar. Gary has also contributed numerous articles on AI and brain science to The New Yorker and the New York Times. In 2014 he founded and served as CEO of Geometric Intelligence, a machine learning startup that was later acquired by Uber.

  Gary is known for his criticism of deep learning and has written that current approaches may soon “hit a wall.” He points out that the human mind is not a blank slate, but comes preconfigured with significant structure to enable learning. He believes that neural networks alone will not succeed in achieving more general intelligence, and that continued progress will require incorporating more innate cognitive structure into AI systems.

  Chapter 15. BARBARA J. GROSZ

  I’m thrilled that AI is actually out there in the world making a difference because I didn’t think that it would happen in my lifetime—because it seemed the problems were so hard.

  HIGGINS PROFESSOR OF NATURAL SCIENCES, HARVARD UNIVERSITY

  Barbara J. Grosz is Higgins Professor of Natural Sciences at Harvard University. Over the course of her career, she has made ground-breaking contributions in artificial intelligence that have led to the foundational principles of dialogue processing that are important for personal assistants like Apple’s Siri or Amazon’s Alexa. In 1993, she became the first woman to serve as president of the Association for the Advancement of Artificial Intelligence.

  MARTIN FORD: What initially drove you to be interested in artificial intelligence, and how did your career progress?

  BARBARA GROSZ: My career was a series of happy accidents. I went to college thinking I would be a 7th-grade math teacher because my 7th-grade math teacher was the only person I had met in my first 18 years of life who thought that women, in general, could do mathematics, and he told me that I was quite good at math. My world really opened up though when I went to Cornell for college, as they had just started a computer science faculty.

  At the time there was no undergraduate major in computer science anywhere in the US, but Cornell provided the opportunity to take a few classes. I started in numerical analysis, a rather mathematical area of computer science, and ended up going to Berkeley to graduate school, initially for a master’s, then I moved into the PhD program.

  I worked in what would come to be called computational science and then briefly in theoretical computer science. I decided that I liked the solutions in the mathematical areas of computer science, but not the problems. So when I needed a thesis topic, I talked with many people. Alan Kay said to me, “Listen. You have to do something ambitious for your thesis. Why don’t you write a program that will read a children’s story and tell it back from one of the character’s points of view?” That’s what spurred my interest in natural language processing and is the root of my becoming an AI researcher.

  MARTIN FORD: Alan Kay? He invented the graphical user interface at Xerox PARC, right? That’s where Steve Jobs got the idea for the Macintosh.

  BARBARA GROSZ: Yes, right, Alan was a key player in that Xerox PARC work. I actually worked with him on developing a programming language called Smalltalk, which was an object-oriented language. Our goal was to build a system suitable for students [K-12] and learning. My children’s story program was to be written in Smalltalk. Before the Smalltalk system was finished, though, I realized that children’s stories were not just stories to be read and understood, but that they’re meant to inculcate a culture, and that Alan’s challenge to me was going to be really hard to meet.

  During that time, the first group of speech-understanding systems were also being developed through DARPA projects, and the people at SRI International who were working on one of them said to me, “If you’re willing to take the risk of working on children’s stories, why don’t you come work with us on a more objective kind of language, task-oriented dialogues, but using speech not text?” As a result, I got involved in the DARPA speech work, which was on systems that would assist people in getting tasks done, and that’s really when I started to do AI research.

  It was that work which led to my discovery of how dialogue among people, when they’re working on a task together, has a structure that depends on the task structure—and that a dialogue is much more than just question-answer pairs. From that insight, I came to realize that as human beings we don’t in general ever speak in a sequence of isolated utterances, but that there’s always a larger structure, much like there is for a journal article, a newspaper article, a textbook, even for this book, and that we can model that structure. This was my first major contribution to natural-language processing and AI.

  MARTIN FORD: You’ve touched on one of the natural language breakthroughs that you’re most known for: an effort to somehow model a conversation. The idea that a conversation can be computed, and that there’s some structure within a conversation that can be represented mathematically.

  I assume that this has become very important, because we’ve seen a lot of progress in the field. Maybe you could talk about some of the work you’ve done there and how things have progressed. Has it astonished you where things are at now in terms of natural language processing, compared to where they were back when you started your research?

  BARBARA GROSZ: It absolutely has astonished me. My early work was exactly in this area of how we might be able to build a computer system that could carry on a dialogue with a person fluently and in a way that seemed natural. One of the reasons I got connected to Alan Kay, and did that work with him, was because we shared an interest in building computer systems that would work with and adapt to people, rather than require people to adapt to them.

  At the time that I took that work on, there was a lot of work in linguistics on syntax and on formal semantics in philosophy and linguistics, and on parsing algorithms in computer science. People knew there was more to language understanding than an individual sentence, and they knew that context mattered, but they had no formal tools, no mathematics, and no computational constructs to take that context into account in speech systems.

  I said to people at the time that we couldn’t afford to just hypothesize about what was going on, that we couldn’t just carry on introspecting, that we had to get samples of how people actually carry on a dialogue when they’re doing a task. As a result, I invented this approach, which was later dubbed the “Wizard of Oz” approach by some psychologists. In this work, I sat two people—in this case, an expert and an apprentice—in two different rooms, and I had the expert explain to the apprentice how to get something done. It was by studying the dialogues that resulted from their working together that I recognized the structure in these dialogues and its dependence on task structure.

  Later, I co-wrote a paper with Candy Sidner titled Attention, Intentions, and the Structure of Discourse. In that paper we argue that dialogues have a structure that is in part the language itself and is in part the intentional structure of why you’re speaking, and what your purposes are when speaking. This intentional structure was a generalization of task structure. These structural aspects are then moderated by a model of the attentional state.

  MARTIN FORD: Let’s fast forward and talk about today. What’s the biggest difference that you’ve seen?

  BARBARA GROSZ: The biggest difference I see is going from speech systems that were essentially deaf, to today’s systems that are incredibly good at processing speech. In the early days we really could not get much out of speech, and it proved very hard to get the right kinds of parses and meaning back then. We’ve also come a long way forward with how incredibly well today’s technology can process individual utterances or sentences, which you can see in modern search engines and machine translation systems.

  If you consider any of the systems that purport to carry on dialogues, however, the bottom line is they essentially don’t work. They seem to do well if the dialogue system constrains the person to following a script, but people aren’t very good at following a script. There are claims that these systems can carry on a dialogue with a person, but in truth, they really can’t. For instance, the Barbie doll that supposedly can converse with a child is script-based and gets in trouble if the child responds in a way the designers didn’t anticipate. I’ve argued that the mistakes it makes actually raise some serious ethical challenges.

  Similar examples arise with all the phone personal assistant systems. For example, if you ask where the nearest emergency room is, you’ll get an answer, the nearest hospital to wherever you are when you ask, but if you ask where you can go to get a sprained ankle treated, the system is likely to just take you to a web page that tells you how to treat a sprained ankle. That’s not a problem for a sprained ankle, but if you’re asking about a heart attack because you think someone’s had one, it could actually lead to death. People would assume that a system that can answer one of those questions can answer the other.

  A related problem arises with dialogue systems based on learning from data. Last summer (2017), I was given the Association for Computational Linguistics Lifetime Achievement Award, and almost all the people listening to my talk at the conference work on deep-learning-based natural-language systems. I told them, “If you want to build a dialogue system, you have to recognize that Twitter is not a real dialogue.” To build a dialogue system that can handle dialogues of the sort people actually engage in, you need real data of real people having real dialogues, and that’s much harder to get than Twitter data.

  MARTIN FORD: When you talk about going off script, it seems to me that this is the blurry line between pure language processing and real intelligence. The ability to go off script and deal with unpredictable situations is what true intelligence is all about; it’s the difference between an automaton or robot and a person.

  BARBARA GROSZ: You’re exactly right, and that’s exactly the problem. With deep learning and a lot of data, you can, say, go from a sentence in one language to the same sentence in another language; or from a sentence with a question in it to an answer to that question; or from one sentence to a possible following sentence. But there’s no real understanding of what those sentences actually mean, so there’s no way to work off script with them.
