by Martin Ford
RAY KURZWEIL: I just had a discussion with my team here about what we have to do to pass the Turing test beyond what we’ve already done. We already have some level of language understanding. One key requirement is multi-chain reasoning—being able to consider the inferences and implications of concepts—that’s a high priority. That’s one area where chatbots routinely fail.
If I say I’m worried about my daughter’s performance in nursery school, you wouldn’t want the chatbot to ask, three turns later, “Do you have any children?” Chatbots do that kind of thing because they’re not considering all the inferences of everything that has been said. As I mentioned, there is also the issue of real-world knowledge, but if we could understand all the implications of language, then real-world knowledge could be gained by reading and understanding the many documents available online. I think we have very good ideas on how to do those things, and we have plenty of time to do them.
MARTIN FORD: You’ve been very straightforward for a long time that the year when you think human-level AI is going to arrive is 2029. Is that still the case?
RAY KURZWEIL: Yes. In my book, The Age of Intelligent Machines, which came out in 1989, I put a range around 2029 plus or minus a decade or so. In 1999 I published The Age of Spiritual Machines and made the specific prediction of 2029. Stanford University held a conference of AI experts to deal with this apparently startling prediction. At that time, we didn’t have instant polling machines, so we basically had a show of hands. The consensus view then was it would take hundreds of years, with about a quarter of the group saying it would never happen.
In 2006 there was a conference at Dartmouth College celebrating the 50th anniversary of the 1956 Dartmouth conference, which I mentioned earlier, and there we did have instant polling devices, and the consensus was about 50 years. Twelve years later, in 2018, the consensus view is about 20 to 30 years out, so anywhere from 2038 to 2048. I’m still more optimistic than the consensus of AI experts, but only slightly. My view and the consensus view of AI experts are getting closer together, but not because I’ve changed my view. There’s a growing group of people who think I’m too conservative.
MARTIN FORD: 2029 is only 11 years away, which is not that far away really. I have an 11-year-old daughter, which really brings it into focus.
RAY KURZWEIL: The progress is exponential; look at the startling progress just in the last year. We’ve made dramatic advances in self-driving cars, language understanding, playing Go and many other areas. The pace is very rapid, both in hardware and software. In hardware, the exponential progression is even faster than for computation generally. We have been doubling the available computation for deep learning every three months over the past few years, compared to a doubling time of one year for computation in general.
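As a back-of-envelope check (not from the interview itself), the gap between the two doubling rates he cites compounds quickly; over just three years, a three-month doubling period yields a factor of 4,096 versus 8 for an annual doubling:

```python
def growth(years, months_per_doubling):
    """Multiplicative growth factor after `years` at the given doubling period."""
    return 2 ** (years * 12 / months_per_doubling)

years = 3
deep_learning = growth(years, 3)   # doubling every 3 months -> 2**12
general = growth(years, 12)        # doubling every year     -> 2**3

print(deep_learning)  # 4096.0
print(general)        # 8.0
```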
MARTIN FORD: Some very smart people with a deep knowledge of AI are still predicting that it will take over 100 years, though. Do you think that is because they are falling into that trap of thinking linearly?
RAY KURZWEIL: A) they are thinking linearly, and B) they are subject to what I call the engineer’s pessimism—that is being so focused on one problem and feeling that it’s really hard because they haven’t solved it yet, and extrapolating that they alone are going to solve the problem at the pace they’re working on. It’s a whole different discipline to consider the pace of progress in a field and how ideas interact with each other and study that as a phenomenon. Some people are just not able to grasp the exponential nature of progress, particularly when it comes to information technology.
Halfway through the human genome project, 1% had been collected after 7 years, and mainstream critics said, “I told you this wasn’t going to work. 1% in 7 years means it’s going to take 700 years, just like we said.” My reaction was, “We finished 1%; we’re almost done. We’re doubling every year. 1% is only 7 doublings from 100%.” And indeed, it was finished 7 years later.
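The arithmetic behind the anecdote is easy to verify (this check is mine, not from the interview): going from 1% to 100% is a factor of 100, and since 2^7 = 128 > 100, seven annual doublings are enough.

```python
import math

# Doublings needed to go from 1% coverage to 100% coverage.
doublings_needed = math.log2(100 / 1)     # ~6.64, so about 7 doublings

print(math.ceil(doublings_needed))        # 7
print(0.01 * 2 ** 7)                      # 1.28 -> past 100% within 7 doublings
```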
A key question is why some people readily get this and others don’t. It’s definitely not a function of accomplishment or intelligence. Some people who are not in professional fields understand it very readily because they can experience this progress just in their smartphones, while other people who are very accomplished and at the top of their field have this very stubborn linear thinking. So, I really don’t have an answer for that.
MARTIN FORD: You would agree, though, that it’s not just about exponential progress in terms of computing speed or memory capacity? There are clearly some fundamental conceptual breakthroughs that have to happen in terms of teaching computers to learn from real-time, unstructured data the way that human beings do, or in reasoning and imagination?
RAY KURZWEIL: Well, progress in software is also exponential, even though it has that unpredictable aspect that you’re alluding to. There’s a cross-fertilization of ideas that is inherently exponential, and once we have established performance at one level, ideas emerge to get to the next level.
There was a study done by the Obama administration scientific advisory board on this question. They examined how hardware and software progress compare. They took a dozen classical engineering and technical problems and looked at the advance quantitatively to see how much was attributable to hardware. Generally, over the previous 10 years from that point, it was about 1,000 to 1 in hardware, which is consistent with the implication of doubling in price performance every year. The software, as you might expect, varied, but in every case, it was greater than the hardware. Advances tend to be exponential. If you make an advance in software, it doesn’t progress linearly; it progresses exponentially. So the overall progress is the product of the progress in hardware and software.
MARTIN FORD: The other date that you’ve given as a projection is 2045 for what you referred to as the singularity. I think most people associate that with an intelligence explosion or the advent of a true superintelligence. Is that the right way to think about it?
RAY KURZWEIL: There are actually two schools of thought on the singularity: there’s a hard take off school and a soft take off school. I’m actually in the soft take off school that says we will continue to progress exponentially, which is daunting enough. The idea of an intelligence explosion is that there is a magic moment where a computer can access its own design and modify it and create a smarter version of itself, and that it keeps doing that in a very fast iterative loop and just explodes in its intelligence.
I think we’ve actually been doing that for thousands of years, ever since we created technology. We are certainly smarter as a result of our technology. Your smartphone is a brain extender, and it does make us smarter. It’s an exponential process. A thousand years ago paradigm shifts and advances took centuries, and it looked like nothing was happening. Your grandparents lived the same lives you did, and you expected your grandchildren to do the same. Now, we see changes on an annual basis if not faster. It is exponential and that results in an acceleration of progress, but it’s not an explosion in that sense.
I think we will achieve a human level of intelligence by 2029, and it’s immediately going to be superhuman. Take, for example, our Talk to Books: you ask it a question, and it reads 600 million sentences, from 100,000 books, in half a second. Personally, it takes me hours to read 100,000 books!
Your smartphone right now is able to do searching based on keywords and other methods and search all human knowledge very quickly. Google search already goes beyond keyword search and has some semantic capability. The semantic understanding is not yet at human levels, but it’s a billion times faster than human thinking. And both the software and the hardware will continue to improve at an exponential pace.
MARTIN FORD: You’re also well known for your thoughts on using technology to expand and extend human life. Could you let me know more about that?
RAY KURZWEIL: One thesis of mine is that we’re going to merge with the intelligent technology that we are creating. The scenario that I have is that we will send medical nanorobots into our bloodstream. One application of these medical nanorobots will be to extend our immune systems. That’s what I call the third bridge to radical life extension. The first bridge is what we can do now, and bridge two is the perfecting of biotechnology and reprogramming the software of life. Bridge three constitutes these medical nanorobots to perfect the immune system. These robots will also go into the brain and provide virtual and augmented reality from within the nervous system rather than from devices attached to the outside of our bodies. The most important application of the medical nanorobots is that we will connect the top layers of our neocortex to synthetic neocortex in the cloud.
MARTIN FORD: Is this something that you’re working on at Google?
RAY KURZWEIL: The projects I have done with my team here at Google use what I would call crude simulations of the neocortex. We don’t have a perfect understanding of the neocortex yet, but we’re approximating it with the knowledge we have now. We are able to do interesting applications with language now, but by the early 2030s we’ll have very good simulations of the neocortex.
Just as your phone makes itself a million times smarter by accessing the cloud, we will do that directly from our brain. It’s something that we already do through our smartphones, even though they’re not inside our bodies and brains, which I think is an arbitrary distinction. We use our fingers and our eyes and ears, but they are nonetheless brain extenders. In the future, we’ll be able to do that directly from our brains, but not just to perform tasks like search and language translation directly from our brains, but to actually connect the top layers of our neocortex to synthetic neocortex in the cloud.
Two million years ago, we didn’t have these large foreheads, but as we evolved we got a bigger enclosure to accommodate more neocortex. What did we do with that? We put it at the top of the neocortical hierarchy. We were already doing a very good job at being primates, and now we were able to think at an even more abstract level.
That was the enabling factor for us to invent technology, science, language, and music. Every human culture that we have discovered has music, but no primate culture has music. Now that was a one-shot deal, we couldn’t keep growing the enclosure because birth would have become impossible. This neocortical expansion two million years ago actually made birth pretty difficult as it was.
This new extension in the 2030s to our neocortex will not be a one-shot deal. Even as we speak, the cloud is doubling in power every year. It’s not limited by a fixed enclosure, so the non-biological portion of our thinking will continue to grow. If we do the math, we will multiply our intelligence a billion-fold by 2045, and that’s such a profound transformation that it’s hard to see beyond that event horizon. So, we’ve borrowed this metaphor from physics of the event horizon and the difficulty of seeing beyond it.
Technologies such as Google Search and Talk to Books are at least a billion times faster than humans. They’re not at human levels of intelligence yet, but once we get to that point, AI will take advantage of the enormous speed advantage that already exists and an ongoing exponential increase in capacity and capability. So that’s the meaning of the singularity: it’s a soft take off, but exponentials nonetheless become quite daunting. If you double something 30 times, you’re multiplying by a billion.
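The closing arithmetic is worth spelling out (my check, not from the interview): 30 doublings is 2^30, which is just over a billion, so roughly 30 annual doublings account for the billion-fold figure.

```python
# 30 doublings multiply capacity by 2**30, slightly more than a billion.
factor = 2 ** 30

print(factor)            # 1073741824
print(factor > 10 ** 9)  # True
```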
MARTIN FORD: One of the areas where you’ve talked a lot about the singularity having an impact is in medicine, and especially in the longevity of human life, and this is maybe one area where you’ve been criticized. I heard a presentation you gave at MIT last year where you said that within 10 years, most people might be able to achieve what you call “longevity escape velocity,” and you also said that you think you personally might have achieved that already. Do you really believe it could happen that soon?
RAY KURZWEIL: We are now at a tipping point in terms of biotechnology. People look at medicine, and they assume that it is just going to plod along at the same hit or miss pace that they have been used to in the past. Medical research has essentially been hit or miss. Drug companies will go through a list of several thousand compounds to find something that has some impact, as opposed to actually understanding and systematically reprogramming the software of life.
It’s not just a metaphor to say that our genetic processes are software. It is a string of data, and it evolved in an era where it was not in the interest of the human species for each individual to live very long because there were limited resources such as food. We are transforming from an era of scarcity to an era of abundance.
Every aspect of biology treated as an information process has been doubling in power every year. For example, genetic sequencing has done that. The first genome cost US $1 billion to sequence, and now we’re close to $1,000. But our ability not only to collect this raw object code of life but to understand it, to model it, to simulate it, and most importantly to reprogram it, is also doubling in power every year.
We’re now getting clinical applications—it’s a trickle today, but it’ll be a flood over the next decade. There are hundreds of profound interventions in process that are working their way through the regulatory pipeline. We can now fix a broken heart from a heart attack, that is, rejuvenate a heart with a low ejection fraction after a heart attack using reprogrammed adult stem cells. We can grow organs and are installing them successfully in primates. Immunotherapy is basically reprogramming the immune system. On its own, the immune system does not go after cancer because it did not evolve to go after diseases that tend to get us later in life. We can actually reprogram it and turn it on to recognize cancer and treat it as a pathogen. This is a huge bright spot in cancer treatment, and there are remarkable trials where virtually every person in the trial goes from stage 4 terminal cancer to being in remission.
Medicine is going to be profoundly different in a decade from now. If you’re diligent, I believe you will be able to achieve longevity escape velocity, which means that we’ll be adding more time than is going by, not just to infant life expectancy but to your remaining life expectancy. It’s not a guarantee, because you can still be hit by the proverbial bus tomorrow, and life expectancy is actually a complicated statistical concept, but the sands of time will start running in rather than running out. In another decade further out, we’ll be able to reverse aging processes as well.
MARTIN FORD: I want to talk about the downsides and the risks of AI. I would say that sometimes you are unfairly criticized as being overly optimistic, maybe even a bit Pollyannaish, about all of this. Is there anything we should worry about in terms of these developments?
RAY KURZWEIL: I’ve written more about the downsides than anyone, and this was decades before Stephen Hawking or Elon Musk were expressing their concerns. There was extensive discussion of the downsides of GNR—Genetics, Nanotechnology, and Robotics (which means AI)—in my book, The Age of Spiritual Machines, which came out in 1999 and led Bill Joy to write his famous January 2000 Wired cover story, “Why the Future Doesn’t Need Us.”
MARTIN FORD: That was based upon a quote from Ted Kaczynski, the Unabomber, wasn’t it?
RAY KURZWEIL: I have a quote from him on one page that sounds like a very level-headed expression of concern, and then you turn the page, and you see that this is from the Unabomber Manifesto. I discussed in quite some detail in that book the existential risk of GNR. In my 2005 book, The Singularity is Near, I go into the topic of GNR risks in a lot of detail. Chapter 8 is titled, “The Deeply Intertwined Promise versus Peril of GNR.”
I’m optimistic that we’ll make it through as a species. We get far more profound benefit than harm from technology, but you don’t have to look very far to see the profound harm that has manifested itself, for example, in all of the destruction in the 20th century—even though the 20th century was actually the most peaceful century up to that time, and we’re in a far more peaceful time now. The world is getting profoundly better, for example, poverty has been cut 95% in the last 200 years and literacy rates have gone from under 10% to over 90% in the world.
People’s algorithm for whether the world is getting better or worse is “how often do I hear good news versus bad news?”, and that’s not a very good method. There was a poll taken of 24,000 people in about 26 countries asking this question: “Is poverty worldwide getting better or worse over the last 20 years?” 87% said, incorrectly, that it’s getting worse. Only 1% said, correctly, that it’s fallen by half or more in the last 20 years. Humans have an evolutionary preference for bad news. 10,000 years ago, it was very important that you paid attention to bad news, for example, that little rustling in the leaves that might be a predator. That was more important to pay attention to than noting that your crops were half a percent better than last year, and we continue to have this preference for bad news.
MARTIN FORD: There’s a step-change, though, between real risks and existential risks.
RAY KURZWEIL: Well, we’ve also done reasonably well with existential risks from information technology. Forty years ago, a group of visionary scientists saw both the promise and the peril of biotechnology, neither of which was close at hand at the time, and they held the first Asilomar Conference on biotechnology ethics. These ethical standards and strategies have been updated on a regular basis. That has worked very well. The number of people who have been harmed by intentional or accidental abuse or problems with biotechnology has been close to zero. We’re now beginning to get the profound benefit that I alluded to, and that’s going to become a flood over the next decade.