
Architects of Intelligence


by Martin Ford


  MARTIN FORD: But is that just because the example you’re using is facial recognition and they’re feeding in photographs of white people mostly? If they expanded and had data from a more diverse population, then that would be fixed, right?

  BARBARA GROSZ: Right, but that’s just the easiest example I can give you. Let’s take healthcare. Until only a few years ago, medical research was done only on males, and I’m not talking only about human males, I’m even talking about only male mice in basic biomedical research. Why? Because the females had hormones! If you’re developing a new medicine, a related problem arises with young people versus old people, as older people don’t need the same dosages as young people. If most of your studies are on younger people, you again have a problem of biased data. The face data is an easy example, but the problem of data bias permeates everything.
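  To make the mechanism concrete, here is a minimal, hypothetical Python sketch (not from the interview; the groups, numbers, and features are all illustrative assumptions, and it assumes numpy and scikit-learn are available). It shows how a classifier trained on data dominated by one group can look accurate on that group while performing near chance on the under-represented one, which is why per-group evaluation matters:

```python
# Hypothetical illustration of data bias: train on 95% group A / 5% group B,
# then measure accuracy separately for each group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Toy data: `shift` moves both the features and the true decision
    boundary, so a model fit to one group transfers poorly to the other."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) > 5 * shift).astype(int)
    return X, y

# Skewed training set: group A dominates, group B is barely represented.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Overall accuracy would look fine; per-group accuracy reveals the gap.
for name, (Xt, yt) in [("A", make_group(2000, 0.0)),
                       ("B", make_group(2000, 2.0))]:
    print(f"group {name} accuracy: {model.score(Xt, yt):.2f}")
```

  On this toy setup, group A typically scores well above 0.9 while group B hovers near 0.5; the same pattern appears whenever a model must generalize to a population its training data underrepresents.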

  MARTIN FORD: Of course, that’s not a problem that’s exclusive to AI; humans are subject to the same issues when confronted with flawed data. It’s a bias in the data that results from past decisions made by the people doing the research.

  BARBARA GROSZ: Right, but now look what’s going on in some areas of medicine. The computer system can “read all the papers” (more than a person could) and do certain kinds of information retrieval from them and extract results, and then do statistical analyses. But if most of the papers are on scientific work that was done only on male mice, or only on male humans, then the conclusions the system is coming to are limited.

  We’re also seeing this problem in the legal realm, with policing and fairness. So, as we build these systems, we have to think, “OK. What about how my data can be used?” Medicine is a place where I think it’s really dangerous to not be careful about the limitations of the data that you’re using.

  MARTIN FORD: I want to talk about the path to AGI. I know you feel very strongly about building machines that work with people, but I can tell you from having done these interviews that a lot of your colleagues are very interested in building machines that are going to be independent, alien intelligences.

  BARBARA GROSZ: They read too much science fiction!

  MARTIN FORD: But just in terms of the technical path to true intelligence, I guess the first question is whether you think AGI is achievable. Maybe you think it can’t be done at all. What are the technical hurdles ahead?

  BARBARA GROSZ: The first thing I want to tell you is that in the late 1970s, as I was finishing my dissertation, I had this conversation with another student who said, “Good thing we don’t care about making a lot of money, because AI will never amount to anything.” I reflect on that prediction often, and I know I have no crystal ball about the future.

  I don’t think AGI is the right direction to go. I think the focus on AGI is actually ethically dangerous because it raises all sorts of issues of people not having jobs and robots running amok. Those are fine issues to think about, but they are very far in the future. They’re a distraction. The real point is that we have any number of ethical issues right now, with the AI systems we have now, and I think it’s unfortunate to distract attention from those because of scary futuristic scenarios.

  Is AGI a worthwhile direction to go or not? You know, people have been wondering for hundreds of years, since at least The Golem of Prague and Frankenstein, whether humanity could create something as smart as a human. I mean, you can’t stop people from fantasizing and wondering, and I am not going to try, but I don’t think that thinking about AGI is the best use of the resources we have, including our intelligence.

  MARTIN FORD: What are the actual hurdles to AGI?

  BARBARA GROSZ: I mentioned one hurdle, which is getting the wide range of data that would be needed, and getting that data ethically, because you’re essentially being Big Brother, watching a lot of behavior and, from that, taking a lot of data from a lot of people. I think that may be one of the biggest issues and biggest hurdles.

  The second hurdle is that every AI system that exists today is an AI system with specialized abilities: robots that can clean your house, or systems that can answer questions about travel or restaurants. To go from that kind of individualized intelligence to general intelligence that flexibly moves from one domain to another, draws analogies between domains, and can think not just about the present but also about the future: those are really hard questions.

  MARTIN FORD: One major concern is that AI is going to unleash a big economic disruption and that there might be a significant impact on jobs. That doesn’t require AGI, just narrow AI systems that do specialized things well enough to displace workers or deskill jobs. Where do you fall on the spectrum of concern about the potential economic impact? How worried should we be?

  BARBARA GROSZ: So yes, I am concerned, but I’m concerned in a somewhat different way from how many other people are concerned. The first thing I want to say is that it’s not just an AI problem, but a wider technology problem. It’s a problem where those of us who are technologists of various sorts are partially responsible, but the business world carries a lot of responsibility as well.

  Here’s an example. You used to call in to get customer service when something wasn’t working, and you got to talk to a human being. Not all of those human customer service agents were good, but the ones who were good understood your problem and got you an answer.

  Of course, human beings are expensive, so now they’ve been replaced in many customer service settings by computer systems. At one stage, companies got rid of more intelligent people and hired the cheaper people who could only follow a script, and that wasn’t so good. But now, who needs a person who can only follow a script when you have a system? This approach makes for bad jobs, and it makes for bad customer service interactions.

  When you think about AI and the increasingly intelligent systems, there are going to be more and more opportunities where you can think, “OK, we can replace the people.” But it’s problematic to do that if the system isn’t fully capable of doing the task it’s been assigned. It’s also why I’m on the soapbox about building systems that complement people.

  MARTIN FORD: I’ve written quite a lot about this, and I guess the point I would make is that this is very much at the intersection of technology and capitalism.

  BARBARA GROSZ: Exactly!

  MARTIN FORD: There is an inherent drive within capitalism to make more money by cutting costs and historically that has been a positive thing. My view is that we need to adapt capitalism so that it can continue to thrive, even if we are at an inflection point, where capital will really start to displace labor to an unprecedented extent.

  BARBARA GROSZ: I’m with you entirely on that. I spoke about this recently at the American Academy of Arts and Sciences, and for me there are two key points.

  My first point was that it’s not a question of just what systems we can build but what systems we should build. As technologists, we have a choice about that, even in a capitalist system that will buy anything that saves money.

  My second point was that we need to integrate ethics into the teaching of computer science, so students learn to think about this dimension of systems along with efficiency and elegance of code.

  To the corporate and marketing people at this meeting, I gave the example of Volvo, which made a competitive advantage out of building cars that were safe. We need it to be a competitive advantage for companies to make systems that work well with people. But to do that is going to require engineers who don’t just think about replacing people, but who work with social scientists and ethicists to figure out, “OK. I can put this kind of capability in, but what does it mean if I do that? How does it fit with people?”

  We need to support building the kind of systems we should build, not just the systems that in the short-term look like they’ll sell and save money.

  MARTIN FORD: What about AI risks beyond the economic impact? What do you think we should be genuinely concerned about in terms of artificial intelligence, both in the near term and further out?

  BARBARA GROSZ: From my perspective, there is a set of questions around the capabilities AI provides, the methods it has and what they can be used for, and the design of AI systems that go out in the world.

  And there’s a choice. Even with weapons, there’s a choice. Are they fully autonomous? Where are the people in the loop? Even with cars, Elon Musk had a choice. He could have said that what Tesla cars had was driver-assist instead of saying he had a car with autopilot, because of course he doesn’t have a car with autopilot. People get in trouble because they buy into the autopilot idea, trust it will work, and then have accidents.

  So, we have a choice in what we put in the systems, what claims we make about the systems, and how we test, verify and set up the systems. Will there be a disaster? That depends on what choices we make.

  Now is an absolutely crucial time for everyone involved in building systems that incorporate AI in some way—because those are not just AI systems: they’re computer systems that have some AI involved. Everyone needs to sit down and have, as part of their design teams, people who are going to help them think more broadly about the unintended consequences of the systems they’re building.

  I mean, the law talks about unintended consequences, and computer scientists talk about side effects. It’s time to stop, across technology development, as far as I’m concerned, saying, “Oh, I wonder if I can build a thing that does thus and such,” and then build it and foist it on the world. We have to think about the long-range implications of the systems we’re building. That’s a societal problem.

  I have gone from teaching a course on Intelligent Systems: Design and Ethical Challenges to now mounting an effort with colleagues at Harvard, which we call Embedded EthiCS, to integrate the teaching of ethics into every computer science course. I think that people who are designing systems should not only be thinking about efficient algorithms and efficient code, but should also be thinking about the ethical implications of the system.

  MARTIN FORD: Do you think there’s too much focus on existential threats? Elon Musk has set up OpenAI, which I think is an organization focused on working on this problem. Is that a good thing? Are these concerns something that we should take seriously, even though they may only be realized far in the future?

  BARBARA GROSZ: Somebody could very easily put something very bad on a drone, and it could be very damaging. So yes, I’m in favor of people who are thinking about how they can design safe systems and what systems to build as well as how they can teach students to design programs that are more ethical. I would never say not to do that.

  I do think it’s too extreme, however, to say, as some people are, that we shouldn’t be doing any more AI research or development until we have figured out how to avoid all such threats. It would be harmful to stop all of the wonderful ways in which AI can make the world a better place because of perceived existential threats in the longer term.

  I think we can continue to develop AI systems, but we have to be mindful of the ethical issues and be honest about the capabilities and limitations of AI systems.

  MARTIN FORD: One phrase that you’ve used a lot is “we have a choice.” Given your strong feeling that we should build systems that work with people, are you suggesting that these choices should be made primarily by computer scientists and engineers, or by entrepreneurs? Decisions like that are pretty heavily driven by the incentives in the market. Should these choices be made by society as a whole? Is there a place for regulation or government oversight?

  BARBARA GROSZ: One thing I want to say is that even if you don’t design the system to work with people, it’s got to eventually work with people, so you’d better think about people. I mean, the Microsoft Tay bot and Facebook fake news disasters are examples of systems whose designers didn’t think enough about what it meant to release them into the “wild,” into a world that is full of people, not all of whom are trying to be helpful and agreeable. You can’t ignore people!

  So, I absolutely think there’s room for legislation, there’s room for policy, and there’s room for regulation. One of the reasons I have this hobbyhorse about designing systems to work well with people is that I think if you get social scientists and ethicists in the room when you’re thinking about your design, then you design better. As a result, the policies and the regulations will be needed only to do what you couldn’t do by design as opposed to over-reacting or retrofitting badly designed systems. I think we’ll always wind up with better systems if we design them to be the best systems they can be, and then the policy is on top of that.

  MARTIN FORD: One concern that would be raised about regulation, within a country, or even in the West, is that there is an emerging competitive race with China. Is that something we should worry about, that the Chinese are going to leap ahead of us and set the pace, and that too much regulation might leave us at a disadvantage?

  BARBARA GROSZ: There are two separate answers here right now. I know I sound like a broken record, but if we stop all AI research and development or severely restrict it, then the answer is yes.

  If, however, we develop AI in a context that takes ethical reasoning and thinking into account as well as the efficiency of code, then no, because we’ll keep developing AI.

  The one place where there’s extraordinary danger is with weapons systems. A key issue is what would happen if we didn’t build AI-driven weapons and an enemy did, but that topic is so large that it would take another hour of conversation.

  MARTIN FORD: To wind up, I wanted to ask you about women in the field. Is there any advice you would offer to women, or men, or to students just getting started? What would you want to say about the role of women in the field of AI and how things have evolved over the course of your career?

  BARBARA GROSZ: The first thing I would say to everybody is that this field has some of the most interesting questions of any field in the world. The set of questions that AI raises has always required a combination of thinking analytically, thinking mathematically, thinking about people and behavior, and thinking about engineering. You get to explore all sorts of ways of thinking and all sorts of design. I’m sure other people think their fields are the most exciting, but I think it’s even more exciting now for us in AI because we have much stronger tools: just look at our computing power. When I started in the field I had a colleague who’d knit a sweater waiting for a carriage return to echo!

  Like all of computer science and all of technology, I think it’s essential that we have the broadest spectrum of people involved in designing our AI systems. I mean not just women as well as men, I mean people from different cultures, people of different races, because that’s who’s going to use the systems. If you don’t, you have two big dangers. One is that the systems you design are only appropriate for certain populations, and the second is that you have work climates that aren’t welcoming to the broadest spectrum of people, so the field benefits from only certain subpopulations. We’ve got to all work together.

  As for my experience, there were almost no women involved in AI at the beginning, and my experience depended entirely on what the men with whom I worked were like. Some of my experiences were fantastic, and some were horrible. Every university, every company that has a group doing technology, should take on the responsibility of making sure the environments encourage women as well as men, and people from under-represented minorities because, in the end, we know that the more diverse the design team, the better the design.

  BARBARA GROSZ is Higgins Professor of Natural Sciences in the School of Engineering and Applied Sciences at Harvard University and a member of the External Faculty of the Santa Fe Institute. She has made groundbreaking contributions to the field of artificial intelligence through pioneering research in natural language processing and in theories of multi-agent collaboration and their application to human-computer interaction. Her current research explores ways to use models developed in this research to improve health care coordination and science education.

  Barbara received an AB in mathematics from Cornell University, and a master’s and PhD in computer science from the University of California, Berkeley. Her many awards and distinctions include election to the National Academy of Engineering, the American Philosophical Society, and the American Academy of Arts and Sciences, and as a fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. She received the 2009 ACM/AAAI Allen Newell Award, the 2015 IJCAI Award for Research Excellence, and the 2017 Association for Computational Linguistics Lifetime Achievement Award. She is also known for her leadership of interdisciplinary institutions and contributions to the advancement of women in science.

  Chapter 16. JUDEA PEARL

  The current machine learning concentration on deep learning and its non-transparent structures is a hang-up. They need to liberate themselves from this data-centric philosophy.

  PROFESSOR OF COMPUTER SCIENCE AND STATISTICS, UNIVERSITY OF CALIFORNIA, LOS ANGELES; DIRECTOR OF THE COGNITIVE SYSTEMS LABORATORY

  Judea Pearl is known internationally for his contributions to artificial intelligence, human reasoning, and philosophy of science. He is particularly well known in the AI field for his work on probabilistic (or Bayesian) techniques and causality. He is the author of more than 450 scientific papers and three landmark books: Heuristics (1984), Probabilistic Reasoning (1988), and Causality (2000; 2009). His 2018 book, The Book of Why, makes his work on causation accessible to a general audience. In 2011, Judea received the Turing Award, which is the highest honor in the field of computer science and is often compared to the Nobel Prize.

 
