Architects of Intelligence


by Martin Ford


  An even more subtle question is that of relating emotionally to other beings. I’m not sure that’s even well defined, because as a human you can fake it. There are people who fake an emotional connection to others. So, the question is, if you can get a computer to fake it well enough, how do you know that’s not real? That brings to mind the Turing test regarding consciousness, to which the answer is that we can never know for sure if another being is really conscious, or what that even means; if the behavior aligns with what we consider to be “conscious,” we just take it on faith.

  MARTIN FORD: That’s a good question. In order to have true artificial general intelligence, does that imply consciousness or could you have a superintelligent zombie? Could you have a machine that’s incredibly intelligent, but with nothing there in terms of an inner experience?

  DAPHNE KOLLER: If you go back to Turing’s hypothesis, which is what gave rise to the Turing test, he says that consciousness is unknowable. I don’t know for a fact that you are conscious; I just take that on faith. You look like me, I feel like I’m conscious, and because of that surface similarity, I believe that you’re conscious too.

  His argument was that when we get to a certain level of performance in terms of behavior we will not be able to know whether an intelligent entity is conscious or not. If it’s not a falsifiable hypothesis then it’s not science, and you just have to take it on faith. There is an argument that says that we will never know because it is unknowable.

  MARTIN FORD: I want to ask now about the future of artificial intelligence. What would you point to as a demonstration of things that are currently at the forefront of AI?

  DAPHNE KOLLER: The whole deep learning framework has done an amazing job of addressing one of the key bottlenecks in machine learning: having to engineer a feature space that captures enough about the domain to achieve very high performance, especially in contexts where you don’t have strong intuition for the domain. Prior to deep learning, applying machine learning meant spending months or even years tweaking the representation of the underlying data to get higher performance.

  Now, with deep learning combined with the amount of data that we are able to bring to bear, you can really let the machine pick out those patterns for itself. That is remarkably powerful. It’s important to recognize, though, that a lot of human insight is still required in constructing these models. It just shows up in a different place: in figuring out the architecture of the model that captures the fundamental aspects of a domain.

  If you look at the kind of networks, for instance, that one applies to machine translation, they’re very different to the architectures that you apply to computer vision, and a lot of human intuition went into designing those. It’s still, as of today, important to have a human in the loop designing these models, and I’m not convinced yet by the efforts to get a computer to design those networks as well as a human can. You can certainly get a computer to tweak the architecture and modify certain parameters, but the overall architecture is still one that a human has designed. That being said, there are a couple of key advances that are changing this. The first is being able to train these models with very large amounts of data. The second is the end-to-end training that I mentioned earlier, where you define the task from beginning to end, and you train the entire architecture to optimize the goal that you actually care about.

  This is transformative because the performance differential turns out to be quite dramatic. Both AlphaGo and AlphaZero are really good examples of that. The model there was trained to win in a game, and I think end-to-end training, combined with unlimited training data (which is available in that context) is what’s driven a lot of the huge performance gains in those applications.
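  To make that contrast concrete, here is a minimal, hypothetical sketch (written in Python with PyTorch, which is not named in the interview): the pre-deep-learning workflow hand-engineers a small feature vector and fits a shallow model on top of it, while the deep learning workflow consumes the raw input, learns its own representation, and is trained end-to-end against the single loss that measures the task you actually care about.

```python
import torch
import torch.nn as nn

# Pre-deep-learning workflow (hypothetical): a domain expert hand-engineers a
# few features from the raw input, and a shallow model is fit on top of them.
def hand_engineered_features(images: torch.Tensor) -> torch.Tensor:
    brightness = images.mean(dim=(1, 2, 3))                                      # overall intensity
    dx = (images[:, :, :, 1:] - images[:, :, :, :-1]).abs().mean(dim=(1, 2, 3))  # horizontal edges
    dy = (images[:, :, 1:, :] - images[:, :, :-1, :]).abs().mean(dim=(1, 2, 3))  # vertical edges
    return torch.stack([brightness, dx, dy], dim=1)

shallow_model = nn.Linear(3, 10)  # can only be as good as the hand-picked features

# Deep learning workflow: the network sees raw pixels, learns its own features,
# and every layer is optimized jointly against the final task loss (end-to-end).
deep_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

# Toy batch standing in for a real labeled dataset.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

shallow_logits = shallow_model(hand_engineered_features(images))  # old-style pipeline

optimizer = torch.optim.Adam(deep_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # the objective we actually care about

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(deep_model(images), labels)  # gradient flows through every layer
    loss.backward()
    optimizer.step()
```

  The same recipe underlies the translation and vision systems mentioned above; what a human still chooses is the architecture, not the hand-crafted features.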

  MARTIN FORD: Following these advances, how much longer will it be before we reach AGI, and how will we know when we’re close to it?

  DAPHNE KOLLER: There are a number of big leaps forward that need to happen in the technology to get us there, and those are stochastic events that you can’t predict. Someone could have a brilliant idea next month or it could take 150 years. Predicting when a stochastic event is going to happen is a fool’s errand.

  MARTIN FORD: But if these breakthroughs take place, then it could happen quickly?

  DAPHNE KOLLER: Even if the breakthrough happens, it’s going to require a lot of engineering work to make AGI a reality. Think back to those advances in deep learning and end-to-end training. The seeds of those were planted in the 1950s, and the ideas kept coming back up every decade or so. We’ve made continual progress over time, but it took years of engineering effort to get us to the current point. And we’re still far from AGI.

  I think it’s unpredictable when the big step forward will come. We might not even recognize it the first, second, or third time we see it. For all we know, it might already have been made, and we just don’t know it. There are still going to be decades of work after that discovery to really engineer this to the point where it works.

  MARTIN FORD: Let’s talk about some of the risks of AI, starting with economics. There is an idea that we’re on the leading edge of something on the scale of a new industrial revolution, but I think a lot of economists actually disagree with that. Do you think that we are looking at a big disruption?

  DAPHNE KOLLER: Yes, I think that we are looking at a big disruption on the economic side. The biggest risk, and the biggest opportunity, of this technology is that a lot of jobs currently done by humans will be taken over, to a greater or lesser extent, by machines. There are social obstacles to adoption in many cases, but as robustly better performance is demonstrated, it will follow the standard disruptive innovation cycle.

  It is already happening to paralegals and cashiers at the supermarket, and it will soon happen to the people who stack the shelves. I think that all of that is going to be taken over in five or ten years by robots or intelligent agents. The question is to what extent we can carve out meaningful jobs around that for humans to do. You can identify those opportunities in some cases, and in others it’s less clear.

  MARTIN FORD: One of the disruptive technologies that people focus on is self-driving cars and trucks. What’s your sense of when you’ll be able to call a driverless Uber and have it take you to your destination?

  DAPHNE KOLLER: I think that it’ll be a gradual transition, where you might have a fallback human remote driver. I think that is where a lot of these companies are heading as an intermediate step to full autonomy.

  You’ll have a remote driver sitting in an office and controlling three or four vehicles at once. These vehicles would call for help when they get stuck in a situation that they simply don’t recognize. With that safeguard in place, I would say probably within five years we’ll have a self-driving service available in some places. Full autonomy is more of a social evolution than a technical evolution, and those are harder to predict.

  MARTIN FORD: Agreed, but even so that’s a big disruption coming quite soon in one industry with a lot of drivers losing their jobs. Do you think a universal basic income is a possible solution to this job loss?

  DAPHNE KOLLER: It is just too early to make that decision. If you look back at some of the previous significant revolutions in history, such as the Agricultural Revolution and the Industrial Revolution, there were the same predictions of massive workforce disruption and huge numbers of people being out of jobs. The world changed, and those people found other jobs. It is too early to say that this one is going to be completely different to the others, because every disruption is surprising.

  Before we focus on universal basic income, we need to be a lot more thoughtful and deliberate about education. The world in general, with a few exceptions, has underinvested in educating people for this new reality, and I think it’s really important to consider the kind of skills that people will need in order to be successful moving forward. If, after doing that, we still have no idea how to keep the majority of the human population employed, then that’s when we need to think about a universal basic income.

  MARTIN FORD: Let’s move on to some of the other risks associated with artificial intelligence. There are two broad categories: near-term risks, such as privacy issues, security, and the weaponization of drones and AI, and long-term risks, such as AGI and what that means.

  DAPHNE KOLLER: I’d say that all of those short-term risks already exist without artificial intelligence. For instance, there are already many complex, critical systems today that enemies could hack into.

  Our electricity grid is not artificially intelligent at this point, but it’s a significant security risk for someone to hack into that. People can currently hack into your pacemaker—again, it’s not an artificially intelligent system, but it’s an electronic system with the opportunity for hacking. As for weapons, is it impossible for someone to hack into the nuclear response system of one of the major superpowers and cause a nuclear attack to take place? So yes, there are security risks to AI systems, but I don’t know that they’re qualitatively different to the same risks with older technologies.

  MARTIN FORD: As the technology expands, though, doesn’t that risk expand? Can you imagine a future where self-driving trucks deliver all our food to stores, and someone then hacks into those and brings them to a halt?

  DAPHNE KOLLER: I agree, it’s just that it’s not a qualitative difference. It’s a risk that grows as we rely more on electronic solutions that, by virtue of being larger and more interconnected, are more vulnerable to a single point of failure. We started with individual drivers delivering goods to stores. If you wanted to disrupt those, you’d have to disrupt every single driver. We then moved on to large shipping companies directing large numbers of trucks. Disrupt one of those and you disrupt a larger proportion of deliveries. AI-controlled driverless trucks are the next step. As you increase centralization, you increase the risk of a single point of failure.

  I’m not saying those systems aren’t more of a risk, I’m just saying that to me AI doesn’t seem qualitatively different in that regard. It’s the same progression of increasing risk as we rely more and more on complex technologies with a single point of failure.

  MARTIN FORD: Going back to the military and the weaponization of AI and robotics, there’s a lot of concern about advanced commercial technologies being used in nefarious ways. I’ve also interviewed Stuart Russell, who made a video, Slaughterbots, about that subject. Are you concerned that this technology could be used in threatening ways?

  DAPHNE KOLLER: Yes, I think it is possible that this technology can get into the hands of anyone, but of course that is true for other dangerous technologies as well. The ability to kill larger numbers of people in increasingly easy ways has been another aspect of human evolution. In the early days, you needed a knife, and you could kill one person at a time. Then you had guns, and you could kill five or six. Then you had assault rifles, and you could kill 40 or 50. Now you have the ability to create dirty bombs in ways that don’t require a huge amount of technological know-how. If you think about biological weapons and the ability to edit and print genomes to the point where people can now create their own viruses, that’s another way of killing a lot of people with an accessible modern technology.

  So yes, the risks of misusing technology are there, but we need to think about them more broadly than just AI. I wouldn’t say that the scenario of intelligent killer drones is more dangerous than someone synthesizing a version of smallpox and letting it loose. I don’t think we currently have a solution for either of those scenarios, but the latter actually seems much more likely to kill a lot of people quickly.

  MARTIN FORD: Let’s move on to those long-term risks, and in particular AGI. There’s the notion of a control problem where a superintelligence might set its own goals or implement the goals we set it in ways that we don’t expect or that are harmful. How do you feel about that concern?

  DAPHNE KOLLER: I think it is premature. In my opinion, there are several breakthroughs that need to happen before we are at that point, and too many unknowns before we can come to a conclusion. What nature of intelligence might be formed? Will it have an emotional component? What will determine its goals? Will it even want to interact with us humans, or will it just go off on its own?

  There are just so many unknowns that it seems premature to start planning for it. I don’t think it is on the horizon, and even once we get to that breakthrough point there’s going to be years or decades of engineering work that needs to be done. This is not going to be an emergent phenomenon that we just wake up to one day. This is going to be an engineered system, and once we figure out what the key components are, that would be a good time to start thinking about how we modulate and structure them so as to get the best outcomes. Right now, it’s just very ephemeral.

  MARTIN FORD: There are already a number of think-tank organizations springing up, such as OpenAI. Do you think those are premature in terms of the resources being invested, or do you think it’s a productive thing to start working on?

  DAPHNE KOLLER: OpenAI does multiple things. A lot of what it does is to create open-source AI tools to democratize access to a truly valuable technology. In that respect, I think it’s a great thing. There’s also a lot of work being done at those organizations on the other important risks of AI. For instance, at a recent machine learning conference (NIPS 2017) there was a very interesting talk about how machine learning takes implicit biases in our training data and amplifies them, to the point that the resulting systems capture some of the worst human behaviors (e.g., racism or sexism). Those are things that are important for us to be thinking about today, because those are real risks and we need to come up with real solutions to ameliorate them. That’s part of what these think tanks are doing.

  That’s very different from your question of how we build safeguards into an as-yet-nonexistent technology to prevent it from consciously trying to exterminate humans for reasons that are unclear at this point. Why would it even care about exterminating humans? It just seems too early to start worrying about that.

  MARTIN FORD: Do you think there’s a need for government regulation of AI?

  DAPHNE KOLLER: Let’s just say that I think the level of understanding that the government has of this technology is limited at best, and it’s a bad idea for governments to regulate something that they don’t understand.

  AI is also a technology that is easy to use and already available to other governments that have access to a lot of resources and are not necessarily bound by the same ethical scruples as our government might be. I don’t think regulating this technology is the right solution.

  MARTIN FORD: There’s a lot of focus in particular on China. In some ways, they have an advantage: they’ve got enormous amounts of data because their population is so large, and they don’t have to worry so much about privacy. Are we at risk of falling behind there, and should we be worried?

  DAPHNE KOLLER: I think the answer to that is yes, and I think it’s important. If you’re looking for a place for government intervention that would be beneficial, I would say it’s in enabling technological advancements that could maintain competitiveness not only with China but also with other governments. That includes an investment in science. It includes an investment in education. It includes the ability to get access to data in a way that is privacy-respecting and enables progress to be made.

  In the healthcare space that I’m interested in, there are things that one can do that would hugely ease the ability to make progress. For instance, if you talk to patients you’ll find that most of them are happy to have their data used for research purposes to drive progress toward cures. They realize that even if it doesn’t help them it can help others down the line, and they really want to do that. However, the legal and technological hoops that one needs to jump through before medical data is shared are so onerous right now that it just doesn’t happen. That really slows down our progress towards the ability to aggregate data for multiple patients and to figure out likely cures for certain subpopulations, and so on.

  This is a place where government-level policy change, as well as a change in societal norms, can make a difference. As an example of what I mean, look at the difference in organ donation rates between countries where there is an opt-in for organ donation versus countries where there’s an opt-out. Both give people equal control over whether their organs are donated should they die, but the countries with opt-out have a much higher organ donation rate than the countries with opt-in. You create the expectation that people will naturally opt in, while still giving them every opportunity to opt out. A similar system for data sharing would make data much more available and would make new research much faster.

  MARTIN FORD: Do you believe that the benefits of AI, machine learning, and all these technologies are going to outweigh these risks?

  DAPHNE KOLLER: Yes, I do. I also think that stopping progress by stopping technology is the wrong approach. If you want to ameliorate risks, you need to be thoughtful about how to change societal norms and how to put in appropriate safeguards. Stopping technology is just not a feasible approach. If you don’t make progress technologically, someone else will, and their intent might be considerably less beneficial than yours. We need to let technology progress and then think about the mechanisms to channel it towards good rather than bad.

  DAPHNE KOLLER was the Rajeev Motwani Professor of Computer Science at Stanford University. Daphne has made significant contributions to AI, especially in the field of Bayesian (probabilistic) machine learning and knowledge representation. In 2004, she was the recipient of a MacArthur Foundation fellowship for her work in this area.

  In 2012, Daphne, along with her Stanford colleague Andrew Ng, founded the online education company Coursera. Daphne served as the company’s co-CEO and president. Her current research focuses especially on the use of machine learning and data science in healthcare, and she served as Chief Computing Officer at Calico, a Google/Alphabet company that is reportedly working on increasing human longevity. Daphne is currently founder and CEO of insitro, a biotech startup focused on using machine learning for drug discovery.

 
