Architects of Intelligence


by Martin Ford


  We would expect this to happen just like it did with social media; after all, humans are humans, and we’ll always be humans. What I’m suggesting is that this is exactly why we need to enhance ourselves. We know what humans do with stuff; it’s a very proven model, with thousands and thousands of years of data behind it. We need to go beyond humans, to something akin to humanity 3.0 or 4.0. We need to radically improve ourselves as a species, beyond what we can imagine, but the issue is that we don’t have the tools to do that right now.

  MARTIN FORD: Are you suggesting that all of this in some sense would have to be regulated? There’s a possibility that as an individual, I might not want my morality to be enhanced. Perhaps I just want to enhance my intelligence, my speed, or something similar, so that I can profit from that without buying in to the other beneficial stuff that you perceive happening. Wouldn’t you need some overall regulation or control of this to be sure that it’s used in a way that benefits everyone?

  BRYAN JOHNSON: May I adjust the framing of your question in two ways? First, your statement about regulation implicitly assumes that our government is the only group that can arbitrate interests. I do not agree with that assumption. The government is not the only group in the entire world that can regulate interests. We could potentially create self-sustaining communities of regulation; we do not have to rely on government. New regulating bodies, or self-regulating bodies, can emerge that keep the government from being the sole keeper of that role.

  Second, your statement on morals and ethics assumes that you as a human have the luxury to decide what morals and ethics you want. What I’m suggesting is that if you look back through history, almost every biological species that has ever existed on this earth over its four-plus billion years has gone extinct. Humans are in a tough spot, and we need to realize we’re in a tough spot because we are not born into an inherent position of luxury. We need to think about this very seriously, which does not mean that we won’t have morals and ethics; it just means that they need to be balanced against the recognition that we are in a tough spot.

  For example, there are a couple of books that have come out, like Hans Rosling’s Factfulness: Ten Reasons We’re Wrong About the World and Why Things Are Better Than You Think, and Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. Those books basically say that the world’s not bad; although everyone says how terrible it is, all the data says it’s getting better, and it’s getting better faster. What they’re not contemplating is that the future is dramatically different from the past. We’ve never had a form of intelligence, in the form of AI, that has progressed this fast. Humans have never had tools this destructive. We have not experienced this future before, and it’s our very first time going through this.

  That’s why I don’t buy the historical determinism argument that because we’ve done well in the past, we’re somehow guaranteed to do well in the future. I would say that I’m equal parts optimistic about what the future can bring and equal parts cautious. Cautious in the sense that I acknowledge that in order for us to be successful in the future, we must achieve future literacy. We must be able to start planning for, thinking about, and creating models for the future that enable us to become future literate.

  If you look at us as a species now, we fly by the seat of our pants. We only pay attention to things when they become a crisis, we don’t plan ahead, and as humans, we know this. We typically do not get ahead in life if we don’t plan for it, and as a species, we have no plan. So again, if we are hoping to survive in the future, what gives us confidence that we can? We don’t plan for it, we don’t think about it, and we don’t look at anything beyond individuals, individual states, companies, or countries. We’ve never done it before. How do we deal with that in a thoughtful way so that we can maintain the things we care about?

  MARTIN FORD: Let’s talk more generally about artificial intelligence. First of all, is there anything that you can talk about in terms of your portfolio companies and what they are doing?

  BRYAN JOHNSON: The companies that I invested in are using AI to push scientific discovery forward. That’s the one thing they all have in common, whether they’re developing new drugs to cure disease, or finding new proteins for use in everything from agriculture and food to drugs, pharmaceuticals, and physical products. Whether these companies are designing microorganisms, as in synthetic biology, or designing new materials, as in true nanotech, they’re all using some form of machine learning.

  Machine learning is a tool that is enabling discovery faster and better than anything we’ve ever had before. A couple of months ago, Henry Kissinger wrote an essay in The Atlantic saying that when he became aware of what AlphaGo and its successor AlphaZero had done in Go and chess, he was worried about “strategically unprecedented moves.” He literally sees the world as a board game because he was in politics in the Cold War era, when the US and Russia were arch rivals, and we literally were, both in chess and as nation states. He saw that when you apply AI to chess and Go, games that human geniuses have been playing for thousands of years, within a matter of days the AI came up with genius moves that we had never seen before.

  So, sitting underneath our nose the entire time was undiscovered genius. We didn’t know, and we couldn’t see it ourselves, but AI showed it to us. Henry Kissinger saw that and said it made him scared. I see that, and I say it’s the best thing in the entire world, because AI has the ability to show us what we cannot see ourselves. This matters because humans have a real limitation: we cannot imagine the future. We cannot imagine what radically enhancing ourselves means, and we can’t imagine what the possibilities are, but AI can fill this gap. That’s why I think it’s the best thing that could ever happen to us; it is absolutely critical for our survival. The issue is that most people, of course, have accepted the narrative of fear promoted by a few outspoken voices, and I think it’s terribly damaging to society that this narrative is ongoing.

  MARTIN FORD: There is a concern expressed by people like Elon Musk and Nick Bostrom, where they talk about the fast take-off scenario, and the control problem related to superintelligence. Their focus is on the fear that AI could get away from us. Is that something we should worry about? I have heard the case made that by enhancing cognitive capability we will be in a better position to control the AI. Is that a realistic view?

  BRYAN JOHNSON: I’m appreciative of Nick Bostrom for being as thoughtful as he has been about the risks that AI presents. He started this whole discussion, and he’s been fantastic in framing it. It is a good use of time to contemplate how we might anticipate undesired outcomes and work to fend those off, and I am very appreciative that he allocated his brain to do that.

  Regarding Elon, I think the fearmongering he has done is a negative for society, because in comparison it has not been as thorough and thoughtful as Nick’s work. Elon has basically just taken it out to the world, creating and inflicting fear among a class of people who can’t comment intelligently on the topic, which I think is unfortunate. I also think we would be well suited as a species to be humbler in acknowledging our cognitive limitations and in contemplating how we might improve ourselves in every imaginable way. The fact that this is not our number one priority as a species demonstrates just how much we need that humility.

  MARTIN FORD: The other thing I wanted to ask you about is that there is a perceived race with other countries, and in particular China both in terms of AI, and potentially with the kind of neural interface technology you’re working on with Kernel. What’s your view on that? Could competition be positive since it will result in more knowledge? Is it a security issue? Should we pursue some sort of industrial policy to make sure that we don’t fall behind?

  BRYAN JOHNSON: It’s how the world works currently. People are competitive, nation states are competitive, and everybody pursues their self-interest above the other. This is exactly how humans will behave, and I come back to the same observation every single time.

  The future that I imagine for humans, the one that paves the way for our success, is one in which we are radically improved. Could it mean we live in harmony instead of a competition-based society? Maybe. Could it mean something else? Maybe. Could it mean a rewiring of our ethics and morals so far-reaching that we won’t even be able to recognize it from our viewpoint today? Maybe. What I am suggesting is that we may need a level of imagination about our own potential and the potential of the entire human race to change this game, and I don’t think the game we’re playing now is going to end well.

  MARTIN FORD: You’ve acknowledged that if the kinds of technologies that you are thinking about fell into the wrong hands, then that could pose a great risk. We’d need to address that globally, and that seems to present a coordination problem.

  BRYAN JOHNSON: I totally agree; I think we absolutely need to focus on that possibility with the utmost attention and care. Based on historical data, that is how humans and nation states are going to behave.

  An equal part to that is that we need to extend our imagination to the point where we can alter that fundamental reality, so that we may not have to assume that everyone will simply pursue their own interests and do whatever they can to other people to achieve what they want. What I am suggesting is that calling those fundamentals into question is something we are not doing as a society. Our brain keeps us trapped in our current perception of reality because it’s very hard to imagine that the future could be different from what we currently live in.

  MARTIN FORD: You have discussed your concern that we might all become extinct, but overall, are you an optimist? Do you think that as a race we will rise to these challenges?

  BRYAN JOHNSON: Yes, I would definitely say I’m an optimist. I’m absolutely bullish on humanity. The statements I make about the difficulties we face are meant to create a proper assessment of our risk. I don’t want us to have our heads in the sand. We have some very serious challenges as a species, and I think we need to reconsider how we approach these problems. That’s one of the reasons why I founded OS Fund: we need to invent new ways to solve the problems at hand.

  As you’ve heard me say many times now, I think we need to rethink the first principles of our existence as humans and of what we can become as a species. To that end, we need to prioritize our own improvement above everything else, and AI is absolutely essential for that. If we can prioritize our improvement and engage fully with AI, in a way that lets us progress together, I think we can solve all the problems we are facing, and I think we can create an existence that’s far more magical and fantastic than anything we can imagine.

  BRYAN JOHNSON is the founder of Kernel, OS Fund, and Braintree.

  In 2016, he founded Kernel, investing $100M to build advanced neural interfaces to treat disease and dysfunction, illuminate the mechanisms of intelligence, and extend cognition. Kernel is on a mission to dramatically increase our quality of life as healthy lifespans extend. He believes that the future of humanity will be defined by the combination of human and artificial intelligence (HI+AI).

  In 2014, Bryan invested $100M to start OS Fund, which invests in entrepreneurs commercializing breakthrough discoveries in genomics, synthetic biology, artificial intelligence, precision automation, and the development of new materials.

  In 2007, Bryan founded Braintree (and acquired Venmo), which he sold to PayPal in 2013 for $800M. Bryan is an outdoor-adventure enthusiast, pilot, and the author of a children’s book, Code 7.

  Chapter 25. When Will Human-Level AI Be Achieved? Survey Results

  As part of the conversations recorded in this book, I asked each participant to give me his or her best guess for a date when there would be at least a 50 percent probability that artificial general intelligence (or human-level AI) will have been achieved. The results of this very informal survey are shown below.

  A number of the individuals I spoke with were reluctant to attempt a guess at a specific year. Many pointed out that the path to AGI is highly uncertain and that there are an unknown number of hurdles that will need to be surmounted. Despite my best persuasive efforts, five people declined to give a guess. Most of the remaining 18 preferred that their individual guess remain anonymous.

  As I noted in the introduction, the guesses are neatly bracketed by two people willing to provide dates on the record: Ray Kurzweil at 2029 and Rodney Brooks at 2200.

  Here are the 18 guesses:

  2029: 11 years from 2018

  2036: 18 years

  2038: 20 years

  2040: 22 years

  2068 (3 guesses): 50 years

  2080: 62 years

  2088: 70 years

  2098 (2 guesses): 80 years

  2118 (3 guesses): 100 years

  2168 (2 guesses): 150 years

  2188: 170 years

  2200: 182 years

  Mean: 2099, 81 years from 2018
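  As a quick sanity check on that figure, here is a minimal Python sketch that recomputes the mean from the 18 guesses listed above; the years and repeat counts are taken directly from the list, and nothing else is assumed:

  # Recompute the mean AGI guess from the 18 survey responses,
  # expanding repeated years according to the counts given above.
  guesses = (
      [2029, 2036, 2038, 2040]
      + [2068] * 3
      + [2080, 2088]
      + [2098] * 2
      + [2118] * 3
      + [2168] * 2
      + [2188, 2200]
  )

  assert len(guesses) == 18
  mean_year = sum(guesses) / len(guesses)  # 2099.4
  print(f"Mean: {round(mean_year)}, {round(mean_year) - 2018} years from 2018")

  Running this prints “Mean: 2099, 81 years from 2018”, matching the figure above.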

  Nearly everyone I spoke to had quite a lot to say about the path to AGI, and many people—including those who declined to give specific guesses—also gave intervals for when it might be achieved, so the individual interviews offer a lot more insight into this fascinating topic.

  It is worth noting that the average date of 2099 is quite pessimistic compared with other surveys that have been done. The AI Impacts website (https://aiimpacts.org/ai-timeline-surveys/) shows results for a number of other surveys.

  Most other surveys have generated results that cluster in the 2040 to 2050 range for human-level AI with a 50 percent probability. It’s important to note that most of these surveys included many more participants and may, in some cases, have included people outside the field of AI research.

  For what it’s worth, the much smaller, but also very elite, group of people I spoke with does include several optimists; taken as a whole, though, they see AGI as something that remains at least 50 years away, and perhaps 100 or more. So if you hope to live to see a true thinking machine, eat your vegetables.

  Chapter 26. Acknowledgments

  This book has truly been a team effort. Packt acquisitions editor Ben Renow-Clarke proposed this project to me in late 2017, and I immediately recognized the value of a book that would attempt to get inside the minds of the foremost researchers responsible for building the technology that will very likely reshape our world.

  Over the past year, Ben has been instrumental in guiding and organizing the project, as well as editing the individual interviews. My role primarily centered on arranging and conducting the interviews. The massive undertaking of transcribing the audio recordings and then editing and structuring the interview text was handled by the very capable team at Packt. In addition to Ben, this includes Dominic Shakeshaft, Alex Sorrentino, Radhika Atitkar, Sandip Tadge, Amit Ramadas, and Rajveer Samra, as well as Clare Bowyer for her work on the cover.

  I am very grateful to the 23 individuals I interviewed, all of whom were very generous with their time, despite extraordinarily demanding schedules. I hope and believe that the time they invested in this project has produced a result that will be an inspiration for future AI researchers and entrepreneurs, as well as a significant contribution to the emerging discourse about artificial intelligence, how it will impact society, and what we need to do to ensure that impact is a positive one.

  Finally, I thank my wife Xiaoxiao Zhao and my daughter Elaine for their patience and support as I worked to complete this project.

  mapt.io

  Mapt is an online digital library that gives you full access to over 5,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.


  Why subscribe?

  Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

  Learn better with Skill Plans built especially for you

  Get a free eBook or video every month

  Mapt is fully searchable

  Copy and paste, print, and bookmark content

  Packt.com

  Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us for more details.

  At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
