
Architects of Intelligence


by Martin Ford


  MARTIN FORD: How did you come to the idea of starting Kernel and OS Fund? What route did your early career take to bring you to that point?

  BRYAN JOHNSON: The starting point for my career was when I was 21 and had just returned from my Mormon mission to Ecuador, where I had lived among and witnessed extreme poverty and suffering. During those two years, the one question weighing on my mind was, what could I do that would create the most value for the greatest number of people in the world? I wasn’t motivated by fame or money, I just wanted to do good in the world. I looked at all the options I could find, and none of them satisfied me. Because of that, I determined to become an entrepreneur, build a business, and retire by the age of 30. In my 21-year-old mind that made sense. I got lucky, and fourteen years later, in 2013, I sold my company Braintree to eBay for $800 million in cash.

  By that point, I had also left Mormonism, which had defined my entire reality of what life was about, and when I left I had to recreate myself from scratch. I was 35, fourteen years on from my initial life decisions, and that drive to benefit humanity hadn’t left me. I asked myself the question, what’s the one single thing I can do that will maximize the probability that the human race survives? In that moment of observation, it wasn’t clear to me that humans have what we need to survive ourselves and the challenges we face. I saw two answers to that question, and they were Kernel and the OS Fund.

  The idea behind OS Fund is that most people in the world who manage or have money do not have scientific expertise, and therefore they typically invest in things that they are more comfortable with, such as finance or transportation. That means that there is insufficient capital going to science-based endeavors. My observation was that if I could demonstrate, as a non-scientist, that I could invest in some of the hardest science in the world and be successful in doing so, I would create a model that others could follow. So, I invested $100 million in my OS Fund to do that, and five years in, we are in the top decile of performance among US firms. We’ve made 28 investments, and we’ve been able to demonstrate that we can successfully invest in science-based entrepreneurs who are building world-changing technology.

  The second thing was Kernel. In the beginning, I talked to over 200 really smart people, asking them what they were doing in the world and why. From there, I’d ask follow-on questions to understand the entire assumptions stack of how they think, and the one thing I walked away with is that the brain is the originator of all things; everything we do as humans stems from our brains. Everything we build, everything we’re trying to become, and every problem we’re trying to solve. It lives upstream from everything else, yet it was absent from everybody’s focus. There were efforts, for example from DARPA and the Allen Brain Institute, but most were focused on specific medical applications or basic research in neuroscience. There was nobody in the world that I could identify who basically said, the brain is the most important thing in existence because everything sits downstream from it. It’s a really simple observation, but it was a blind spot everywhere.

  Our brain sits right behind our eyes, yet we focus on everything downstream from it. There is no endeavor at a relevant scale, something that lets us read and write neural code, to read and write our cognition. So, with Kernel, I set out to do for the brain what we did for the genome, which was to sequence the genome and then create tools to write it. In 2018, we can read and edit the DNA—the software—that makes us human, and I wanted to do the same thing for the brain: read and write our code.

  There are a bunch of reasons why I want to be able to read and write the human brain. My fundamental belief behind all of this is that we need to radically up-level ourselves as a species. AI is moving very quickly, and what the future of AI holds is anyone’s guess. Expert opinions are all across the board. We don’t know if AI is growing on a linear curve, an S curve, an exponential curve, or a punctuated equilibrium, but we do know that the promise of AI is up and to the right.

  The rate of our improvement as humans is flat. People hear this and say that we’re hugely improved over people 500 years ago, but we’re not. Yes, we understand greater complexity, for example, more complex concepts in physics and mathematics, but as a species we are generally exactly the same as we were thousands of years ago. We have the same proclivities and we make the same mistakes. Even if you were to make the case that we are improving as a species, compared to AI, humans are flatlining. If you simply look at the graph, AI is up and to the right, while humans might be a little bit to the right. So the question is, how big will that delta between AI and ourselves become before we begin to feel incredibly uncomfortable? It’s going to just run right by us, and then what are we as a species? It is an important question to ask.

  Another reason is based on the concept that we have an impending job crisis with AI. The most creative thing people are coming up with is universal basic income, which is basically waving the white flag and saying we can’t cope and we need some money from the government. Nowhere in the conversation is radical human improvement discussed. We need to figure out how to not just nudge ourselves forward, but to make a radical transformation. What we need to do is acknowledge that the reason we need to improve ourselves radically is that we cannot imagine the future. We are constrained in our imagination to what we are familiar with.

  If you were to take humans and put them back with Gutenberg and the printing press, and say, paint me a miraculous vision of what’s possible, they wouldn’t be able to do it. They would never have guessed at what has evolved since, like the internet or computers. The same is true of radical human enhancement. We don’t know what’s on the other side. What we do know is that if we are to be relevant as a species, we must advance ourselves significantly.

  One more reason is the idea that somehow AI became the biggest threat that we should all care about, which in my mind is just silly. The biggest thing I’m worried about is humans. We have always been our own biggest threat. Look at all of history; we have done awful things to each other. Yes, we’ve done remarkable things with our technology, but we have also inflicted tremendous harm on each other. So, in terms of asking whether AI is a risk and whether we should prioritize it, I would say AI is the best thing since sliced bread. We should embrace it wholeheartedly and unlock the secrets of the human brain by embracing AI. We can’t do it by ourselves.

  MARTIN FORD: There are a number of other companies in the same general space as Kernel. Elon Musk has Neuralink and I think both Facebook and DARPA are also working on something. Do you feel that there are direct competitors out there, or is Kernel unique in its approach?

  BRYAN JOHNSON: DARPA has done a wonderful job. They have been looking at the brain for quite some time now, and they’ve been a galvanizer of success. Another visionary in the field is Paul Allen and the Allen Institute for Brain Science. The gap I identified was not in understanding that the brain matters, but in identifying the brain as the primary entry point to everything in existence we care about, and then, through that frame, creating the tools to read and write neural code. To read and write human.

  I started Kernel, and then less than a year later both Elon Musk and Mark Zuckerberg did similar things. Elon started a company that was roughly in a similar vein as mine, on a similar trajectory of trying to figure out how to rewrite human to play well with AI, and then Facebook decided to do theirs focused on further engagement with their users within the Facebook experience. Though it’s still to be determined whether Neuralink, Facebook, and Kernel will be successful over the next couple of years, at least there are a few of us going at it, which I think is an encouraging situation for the entire industry.

  MARTIN FORD: Do you have a sense of how long all this could take? When do you imagine that there will be some sort of device, or chip that is readily available that will enhance human intelligence?

  BRYAN JOHNSON: It really depends upon the modality. If it’s implantable, there is a longer time frame, but if it’s not invasive, then that is a shorter time frame. My guess on the time frame is that within 15 years neural interfaces will be as common as smartphones are today.

  MARTIN FORD: That seems pretty aggressive.

  BRYAN JOHNSON: When I say neural interfaces, I am not specifying the type. I am not saying that people will have a chip implanted in their brain. I’m just saying that people will be able to bring their brain online.

  MARTIN FORD: What about the specific idea that you might be able to download information or knowledge directly into your brain? A simple interface is one thing. But to actually download information seems especially challenging because I don’t believe we have any real understanding of how information is stored in the brain. So, the idea that you could take information from another source and inject it directly into your brain really seems like a strictly science-fiction concept.

  BRYAN JOHNSON: I agree with that, I don’t think anybody could intelligently speculate on that ability. We have demonstrated methods for enhanced learning or enhanced memory, but the ability to decode thoughts in the brain has not been demonstrated. It’s impossible to give a date because we are inventing the technology as we speak.

  MARTIN FORD: One of the things that I have written a lot about is the potential for a lot of jobs to be automated and the potential for rising unemployment and workforce inequality. I have advocated the idea of a basic income, but you’re saying the problem would be better solved by enhancing the cognitive capabilities of people. I think there are a number of problems that come up there.

  One is that it wouldn’t address the issue that a large fraction of jobs are routine and predictable and will eventually be automated by specialized machines. Increasing the cognition of workers won’t help them keep those jobs. Also, everyone has different levels of ability to begin with, and if you add some technology that enhances cognition, that might raise the floor, but it probably wouldn’t make everyone equal. Therefore, many people might still fall below the threshold that would make them competitive.

  Another point that is often raised with this kind of technology is that access to it is not going to be equal. Initially, it’s going to only be accessible to wealthy people. Even if the devices get cheaper and more people can afford them, it seems certain that there would be different versions of this technology, with the better models only accessible to the wealthy. Is it possible that this technology could actually increase inequality, and maybe add to the problem rather than address it?

  BRYAN JOHNSON: Two points about this. At the top of everybody’s minds are questions around inequality, around the government owning your brain, around people hacking your brain, and around people controlling your thoughts. The moment people contemplate the possibility of interfacing with their brain, they immediately jump into loss mitigation mode—what’s going to go wrong?

  Then, different scenarios come to mind: Will things go wrong? Yes. Will people do bad things? Yes. That’s part of the problem; humans always do those things. Will there be unintended consequences? Yes. Once you get past all these conversations, it opens up another area of contemplation. When we ask those questions, we assume that somehow humans are in an undisputed, secure position on this planet, and that we can therefore set aside our survival considerations as a species and optimize for equality and other things.

  My fundamental premise is that we are at risk of going extinct by doing harm to ourselves, and by exterior factors. I’m coming to this conversation with the belief that whether we enhance ourselves is not a question of luxury. It’s not like should we, or shouldn’t we? Or what are the pros and cons? I’m saying that if humans do not enhance themselves, we will go extinct. By saying that, though, I’m not saying that we should be reckless, or not thoughtful, or that we should embrace inequality.

  What I’m suggesting is that the first principle of this conversation is that enhancement is an absolute necessity. Once we acknowledge that, then we can contemplate and say, “Now, given this constraint, how do we best accommodate everyone’s interests within society? How do we make sure that we march forward at a steady pace together? How do we design the system knowing that people are going to abuse it?” There is a famous quote that the internet was designed with criminals in mind, so the question is, how do we design neural interfaces knowing that people are going to abuse them? How do we design them knowing that the government is going to want to get into your brain? How do we do all of those things? That is a conversation that is not currently happening. People stop at this luxury argument, which I think is short-sighted, and one of the reasons why we’re in trouble as a species.

  MARTIN FORD: It sounds like you’re making a practical argument that realistically we may have to accept more radical inequality. We may have to enhance a group of people so that they can solve the problems we face. Then after the problems are solved, we can turn our attention to making the system work for everyone. Is that what you’re saying?

  BRYAN JOHNSON: No, what I am suggesting is that we need to develop the technology. As a species we need to upgrade ourselves to be relevant in the face of artificial intelligence, and to avoid destroying ourselves as a species. We already possess the weaponry to destroy ourselves today, and we’ve been on the verge of doing that for decades.

  Let me put it in a new frame. I think it’s possible that in 2050, humans will look back and say, “Oh my goodness, can you believe that humans in 2017 thought it was acceptable to maintain weapons that could annihilate the entire planet?” What I am suggesting is that there’s a future of human existence that is more remarkable than we can even imagine. Right now, we’re stuck in our current conception of reality, and we can’t get to the contemplation that we might be able to create a future based on harmoniousness instead of competition, and that we might somehow have a sufficient amount of resources, and the mindset, for all of us to thrive together.

  We immediately jump to the fact that we always strive to hurt one another. What I am suggesting is that this is why we need enhancement: to get past these limits and cognitive biases that we have. So, I am in favor of enhancing everybody at the same time. That puts a burden on the development of the technology, but that’s what the burden needs to be.

  MARTIN FORD: When you describe this, I get the sense that you’re thinking in terms of not just enhancing intelligence, but also morality and ethical behavior and decision making. Do you think that there’s potential for technology to make us more ethical and altruistic as well?

  BRYAN JOHNSON: To be clear, I find that intelligence is such a limiting word in its conception. People associate intelligence with IQ, and I’m not doing that at all. I don’t want to suggest only intelligence. When I talk about humans radically improving themselves, I mean in every possible realm. For example, let me paint a picture of what I think could happen with AI. AI is extremely good at performing the logistical components of our society; an example being that it will be a lot better at driving cars than humans are. Give AI enough time, and it will be substantially better, and there will be fewer deaths on the road. We’ll look back and say, “Can you believe humans used to drive?” AI is a lot better at flying an airplane on autopilot; it’s a lot better at playing Go and chess.

  Imagine a scenario where we can develop AI to a point where AI largely runs the logistical aspects of everyone’s lives: transportation, clothing, personal care, health—everything is automated. In that world, our brain is now freed from doing what it does for 80% of the day. It’s free to pursue higher-order complexities. The question now is, what will we do? For example, what if studying physics and quantum theory produced the same reward system that watching the Kardashians does today? What if we found out that our brains could extend to four, five, or ten dimensions? What would we create? What would we do?

  What I’m suggesting is the hardest concept in the entire world to grasp, because our brain convinces us that we are an all-seeing eye, that we understand all of the things around us, and that current reality is the only reality. What I am suggesting is that there is a future in cognitive enhancement that we can’t even see, and that’s what limits our imaginations in contemplating it. It’s like going back in time and asking Gutenberg to imagine all the kinds of books that would be written; since his time, the literary world has flourished over the centuries. The same thing is true for neural enhancement, and so you start to get a sense of how gigantic a topic this is.

  In traveling through this topic, we get into the constraints of our imagination, and we get into human enhancement; people will have to address all their fears even to get to a point where they’d be open to thinking about this. They have to reconcile with AI; they have to figure out if AI is a good thing or a bad thing. If we did enhance ourselves, what would it look like? To squeeze this all into one topic is really hard, and that’s why this stuff is so complex, but also so important. Getting to a level where we can talk about this as a society is very hard, because you have to scaffold your way up through all the different pieces. We have to find people who are willing to scaffold through these different layers, and that’s the hardest part of this.

  MARTIN FORD: Assuming you could actually build this technology, then how as a society do we talk about it and really wrestle with the implications, particularly in a democracy? Just look at what’s happened with social media, where a lot of unintended and unanticipated problems have clearly developed. What we’re talking about here could be an entirely new level of social interaction and interconnection, perhaps similar to today’s social media, but greatly amplified. What would address that? How should we prepare for that problem?

  BRYAN JOHNSON: The first question is, why would we expect anything different than what’s happened with social media? It’s entirely predictable that humans will use the tools they are given to pursue their own self-interests: making money, gaining status, respect, and an advantage over others. That’s what humans do, it’s how we’re wired, and it’s how we’ve always done it. That’s what I am saying; we haven’t improved ourselves. We’re the same.

 
