Architects of Intelligence

by Martin Ford


  MARTIN FORD: How do you feel about the role of government regulation, both for self-driving cars and AI more generally?

  ANDREW NG: The automotive industry has always been heavily regulated because of safety, and I think that the regulation of transportation needs to be rethought in light of AI and self-driving cars. Countries with more thoughtful regulation will advance faster to embrace the possibilities enabled by, for example, AI-driven healthcare systems, self-driving cars, or AI-driven educational systems, and I think countries that are less thoughtful about regulation will risk falling behind.

  Regulation should be in these specific industry verticals because we can have a good debate about the outcomes. We can more easily define what we do and do not want to happen. I find it less useful to regulate AI broadly. I think that the act of thinking through the impact of AI in specific verticals for regulation will not only help the verticals grow but will also help AI develop the right solutions and be adopted faster across verticals.

  I think self-driving cars are only a microcosm of a broader theme here, which is the role of government. Every time there is a technological breakthrough, regulators must act. Regulators have to act to make sure that democracy is defended, even in the era of the internet and the era of artificial intelligence. In addition to defending democracy, governments must act to make sure that their countries are well positioned for the rise of AI.

  Assuming that one of governments’ primary responsibilities is the well-being of their citizens, I think that governments that act wisely can help their nations ride the rise of AI to much better outcomes for their people. In fact, even today, some governments use the internet much better than others. This applies to external websites and services for citizens as well as internal ones: how well are your government IT services organized?

  Singapore has an integrated healthcare system, where every patient has a unique patient ID, and this allows for the integration of healthcare records in a way that is the envy of many other nations. Now, Singapore’s a small country, so maybe it’s easier for Singapore than for a larger country, but the way the Singapore government has shifted the healthcare system to use the internet better has a huge impact on the healthcare system and on the health of Singaporean citizens.

  MARTIN FORD: It sounds like you think the relationship between government and AI should extend beyond just regulating the technology.

  ANDREW NG: I think governments have a huge role to play in the rise of AI and in making sure that, first, governance is done well with AI. For instance, should we better allocate government personnel using AI? How about forestry resources: can we allocate those better using AI? Can AI help us set better economic policies? Can the government weed out fraud—maybe tax fraud—better and more efficiently using AI? I think AI will have hundreds of applications in governance, just as AI has hundreds of applications in the big AI companies. Governments should use AI well for themselves.

  For the ecosystem as well, I think public-private partnerships will accelerate the growth of domestic industry, and governments that make thoughtful regulation about self-driving cars will see self-driving accelerate in their communities. I’m very committed to my home state of California, but California regulations do not allow self-driving car companies to do certain things, which is why many of them can’t have their home bases in the state and are now almost forced to operate outside of California.

  I think that both at the state level and at the national level, countries that have thoughtful policies about self-driving cars, about drones, and about the adoption of AI in payment systems and in healthcare systems, for example—those countries with thoughtful policies in all of these verticals will see much faster progress in how these amazing new tools can be brought to bear on some of the most important problems for their citizens. Beyond regulation and public-private partnership, to accelerate the adoption of these amazing tools, I think governments also need to come up with solutions in education and on the jobs issue.

  MARTIN FORD: The impact on jobs and the economy is an area that I’ve written about a lot. Do you think we may be on the brink of a massive disruption that could result in widespread job losses?

  ANDREW NG: Yes, and I think it’s the biggest ethical problem facing AI. Whilst the technology is very good at creating wealth in some segments of society, we have frankly left large parts of the United States and also large parts of the world behind. If we want to create not just a wealthy society but a fair one, then we still have a lot of important work to do. Frankly, that’s one of the reasons why I remain very engaged in online education.

  I think our world is pretty good at rewarding people who have the required skills at a particular time. If we can educate people to reskill even as their jobs are displaced by technology, then we have a much better chance of making sure that this next wave of wealth creation ends up being distributed in a more equitable way. A lot of the hype about evil AI killer robots distracts leaders from the much harder, but much more important conversation about what we do about jobs.

  MARTIN FORD: What do you think of a universal basic income as part of a solution to that problem?

  ANDREW NG: I don’t support a universal basic income, but I do think a conditional basic income is a much better idea. There’s a lot to be said for the dignity of work, and I actually favor a conditional basic income in which unemployed individuals can be paid to study. This would increase the odds that someone who’s unemployed will gain the skills they need to re-enter the workforce and contribute back to the tax base that is paying for the conditional basic income.

  I think in today’s world, there are a lot of jobs in the gig economy, where you can earn enough of a wage to get by, but there isn’t much room to lift yourself or your family up. I am very concerned about an unconditional basic income causing a greater proportion of the human population to become trapped doing this low-wage, low-skilled work.

  A conditional basic income that encourages people to keep learning and keep studying will make many individuals and families better off because we’re helping people get the training they need to then do higher-value and better-paying jobs. We see economists write reports with statistics like “in 20 years, 50% of jobs are at risk of automation,” and that’s really scary, but the flip side is that the other 50% of jobs are not at risk of automation.

  In fact, we can’t find enough people to do some of these jobs. We can’t find enough healthcare workers, we can’t find enough teachers in the United States, and surprisingly we can’t seem to find enough wind turbine technicians.

  The question is, how do people whose jobs are displaced take on these other great-paying, very valuable jobs that we just can’t find enough people to do? The answer is not for everyone to learn to program. Yes, I think a lot of people should learn to program, but we also need to skill up more people in healthcare, in education, in wind turbine maintenance, and in other rising, in-demand categories of jobs.

  I think we’re moving away from a world where you have one career in your lifetime. Technology changes so fast that there will be people who went to college thinking they would do one thing, only to realize that the career they set out toward when they were 17 years old is no longer viable, and that they should branch into a different career.

  We’ve seen how millennials are more likely to hop among jobs, where you go from being a product manager at one company to the product manager at a different company. I think that in the future we’ll increasingly see people going from being a material scientist in one company to being a biologist in a different company, to being a security researcher in a third company. This won’t happen overnight; it will take a long time to change. Interestingly, though, in my world of deep learning, I already see many people doing deep learning who did not major in computer science; they studied subjects like physics, astronomy, or pure mathematics.

  MARTIN FORD: Is there any particular advice you’d give to a young person who is interested in a career in AI, or in deep learning specifically? Should they focus entirely on computer science, or is brain science, or the study of cognition in humans, also important?

  ANDREW NG: I would say to study computer science, machine learning, and deep learning. Knowledge of brain science or physics is useful, but the most time-efficient route to a career in AI is computer science, machine learning, and deep learning. Because of YouTube videos, talks, and books, I think it’s easier than ever for someone to find materials and study by themselves, step by step. Things don’t happen overnight, but step by step, I think it’s possible for almost anyone to become great at AI.

  There are a couple of pieces of advice that I tend to give to people. Firstly, people don’t like to hear that it takes hard work to master a new field, but it does take hard work, and the people who are willing to work hard at it will learn faster. I know that it’s not possible for everyone to study a certain number of hours every week, but people who are able to find more time to study will just learn faster.

  The other piece of advice I tend to give people is this: let’s say you’re currently a doctor and you want to break into AI. As a doctor, you’d be uniquely positioned to do very valuable work in healthcare that very few others can do. If you are currently a physicist, see if there are ideas for applying AI to physics. If you’re a book publisher, see if there’s work you can do with AI in book publishing, because that’s one way to leverage your unique strengths and to complement them with AI, rather than competing on a more even playing field with the fresh college grad stepping into AI.

  MARTIN FORD: Beyond the possible impact on jobs, what are the other risks associated with AI that you think we should be concerned about now or in the relatively near future?

  ANDREW NG: I like to relate AI to electricity. Electricity is incredibly powerful and on average has been used for tremendous good, but it can also be used to harm people. AI is the same. In the end, it’s up to individuals, as well as companies and governments, to try to make sure we use this new superpower in positive and ethical ways.

  I think that bias in AI is another major issue. AI that learns from human-generated text data can pick up on unhealthy gender and racial stereotypes. AI teams are aware of this and are actively working on it, and I am very encouraged that today we have better ideas for reducing bias in AI than we do for reducing bias in humans.

  MARTIN FORD: Addressing bias in people is very difficult, so it does seem like it might be an easier problem to solve in software.

  ANDREW NG: Yes, you can zero out a number in a piece of AI software and it will exhibit much less gender bias; we don’t have similarly effective ways of reducing gender bias in people. I think that soon we might see AI systems that are less biased than many humans. That is not to say that we should be satisfied with just having less bias; there’s still a lot of work to do, and we should keep working to reduce that bias.
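  To make the “zero out a number” idea concrete: one published approach to reducing gender bias in word embeddings (Bolukbasi et al., 2016) projects out the component of each word vector that lies along a learned “he-she” direction. Below is a minimal, hypothetical sketch of that projection step; the toy vectors and the gender axis here are illustrative assumptions, not data from any real model.

```python
import numpy as np

def remove_gender_component(vectors, gender_direction):
    """Project each embedding onto the subspace orthogonal to the gender axis."""
    g = gender_direction / np.linalg.norm(gender_direction)  # unit vector
    return {word: v - np.dot(v, g) * g for word, v in vectors.items()}

# Hypothetical toy 3-d embeddings (illustrative only):
vecs = {
    "doctor": np.array([0.8, 0.3, 0.1]),
    "nurse": np.array([0.7, -0.4, 0.2]),
}
# Stand-in for a learned gender direction, e.g. normalize(vec["he"] - vec["she"])
gender_axis = np.array([0.0, 1.0, 0.0])

debiased = remove_gender_component(vecs, gender_axis)
# After the projection, both words have a zero component along the gender
# axis, so that axis no longer separates "doctor" from "nurse".
```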

  MARTIN FORD: What about the concern that a superintelligent system might someday break free of our control and pose a genuine threat to humanity?

  ANDREW NG: I’ve said before that worrying about AGI evil killer robots today is like worrying about overpopulation on the planet Mars. A century from now I hope that we will have colonized the planet Mars. By that time, it may well be overpopulated and polluted, and we might even have children dying on Mars from pollution. It’s not that I’m heartless and don’t care about those dying children—I would love to find a solution to that, but we haven’t even landed on the planet yet, so I find it difficult to productively work on that problem.

  MARTIN FORD: You don’t think then that there’s any realistic fear of what people call the “fast takeoff” scenario, where an AGI system goes through a recursive self-improvement cycle and rapidly becomes superintelligent?

  ANDREW NG: A lot of the hype about superintelligence and exponential growth was based on very naive and very simplistic extrapolations. It’s easy to hype almost anything. I don’t think that there is a significant risk of superintelligence coming out of nowhere and happening in the blink of an eye, in the same way that I don’t see Mars becoming overpopulated overnight.

  MARTIN FORD: What about the question of competition with China? It’s often pointed out that China has certain advantages, like access to more data due to a larger population and fewer concerns about privacy. Are they going to outrun us in AI research?

  ANDREW NG: How did the competition for electricity play out? Some countries like the United States have a much more robust electrical grid than some developing economies, so that’s great for the United States. However, I think the global AI race is much less of a race than the popular press sometimes presents it to be. AI is an amazing capability, and I think every country should figure out what to do with this new capability.

  MARTIN FORD: AI clearly does have military applications, though, and potentially could be used to create automated weapons. There’s currently a debate in the United Nations about banning fully autonomous weapons, so it’s clearly something people are concerned about. That’s not futuristic AGI-related stuff, but rather something we could see quite soon. Should we be worried?

  ANDREW NG: The internal combustion engine, electricity, and integrated circuits all created tremendous good, but they were all useful for the military. It’s the same with any new technology, including AI.

  MARTIN FORD: You’re clearly an optimist where AI is concerned. I assume you believe that the benefits are going to outweigh the risks as artificial intelligence advances?

  ANDREW NG: Yes, I do. I’ve been fortunate to be on the front lines, shipping AI products for the last several years, and I’ve seen firsthand the way that better speech recognition, better web search, and better-optimized logistics networks help people.

  This is the way that I think about the world, which may be a very naïve way. The world’s gotten really complicated, and the world’s not the way I want it to be. Frankly, I miss the times when I could listen to political leaders and business leaders, and take much more of what they said at face value.

  I miss the times when I had greater confidence in many companies and leaders to behave in an ethical way and to mean what they say and say what they mean. If you think about your as-yet-unborn grandchildren or your unborn great-great-grandchildren, I don’t think the world is yet the way that you want it to be for them to grow up in. I want democracy to work better, and I want the world to be fairer. I want more people to behave ethically and to think about their actual impact on other people, and I want everyone to have access to an education. I want people to work hard, but also to keep studying and to do work that they find meaningful, and I think many parts of the world are not yet the way we would all like them to be.

  Every time there’s a technological disruption, it gives us the opportunity to make a change. I would like my teams, as well as other people around the world, to take a shot at making the world a better place in the ways that we want it to be. I know that sounds like I’m a dreamer, but that’s what I actually want to do.

  MARTIN FORD: I think that’s a great vision. I guess the problem is that it’s a decision for society as a whole to set us on the path to that kind of optimistic future. Are you confident that we’ll make the right choices?

  ANDREW NG: I don’t think it will be in a straight line, but I think there are enough honest, ethical, and well-meaning people in the world to have a very good shot at it.

  ANDREW NG is one of the most recognizable names in AI and machine learning. He co-founded the Google Brain deep learning project as well as the online education company Coursera. Between 2014 and 2017, he was a vice president and chief scientist at Baidu, where he built the company’s AI group into an organization with several thousand people. He is generally credited with playing a major role in the transformation of both Google and Baidu into AI-driven companies.

  Since leaving Baidu, Andrew has undertaken a number of projects, including launching deeplearning.ai, an online education platform geared toward educating deep learning experts, as well as Landing AI, which seeks to transform enterprises with AI. He’s currently the chairman of Woebot, a startup focused on mental health applications for AI, and is on the board of directors of the self-driving car company Drive.ai. He is also the founder and General Partner at AI Fund, a venture capital firm that builds new AI startups from the ground up.

  Andrew is currently an adjunct professor at Stanford University, where he was formerly an associate professor and Director of the AI Lab. He received his undergraduate degree in computer science from Carnegie Mellon University, his master’s degree from MIT, and his PhD from the University of California, Berkeley.

  Chapter 10. RANA EL KALIOUBY

  I feel that this view, about the existential threat that robots are going to take over humanity, takes away our agency as humans. At the end of the day, we’re designing these systems, and we get to say how they are deployed, we can turn the switch off.

  CEO & CO-FOUNDER OF AFFECTIVA

  Rana el Kaliouby is the co-founder and CEO of Affectiva, a startup company that specializes in AI systems that sense and understand human emotions. Affectiva is developing cutting-edge AI technologies that apply machine learning, deep learning, and data science to bring new levels of emotional intelligence to AI. Rana is an active participant in international forums that focus on ethical issues and the regulation of AI to help ensure the technology has a positive impact on society. She was selected as a Young Global Leader by the World Economic Forum in 2017.

 
