Valley of Genius
Tony Fadell: Change is going to be continual, and today is the slowest day society will ever move.
Kristina Woolsey: Technology is changing fundamental things. It changes where you can live and work; it changes who you know; it changes who you can collaborate with. Commerce has completely changed. Those things change the nature of society.
Scott Hassan: Never ever try to compete with a computer on doing something, because if you don’t lose today, you’ll lose tomorrow.
Tony Fadell: Tomorrow will get faster, and every year after, it’s only going to continue to get faster in terms of the amount of change. That means all of these incumbents with these big businesses that have been around for a hundred or two hundred years can be unseated, because technology is the unseating element. Technology is the leveler.
Carol Bartz: We are very arrogant out here; we think nothing can change unless technology is involved, and technology will drive any business out there to a disruption point.
Marc Porat: Technology, that’s what we do here in Silicon Valley. We just push technology until someone figures out what to do with it.
Andy Hertzfeld: Right now the Valley is particularly excited about two things. One of them is machine learning; incredible progress has been made in machine learning in the last three or four years. A broader way of saying it is artificial intelligence.
Kevin Kelly: The fundamental disruption, the central event of this coming century, is going to be artificial intelligence, which will be underpinning and augmenting everything that we do—it will be pervasive, cheap, and ubiquitous.
Marissa Mayer: I’m incredibly optimistic about what AI can do. I think right now we are just at the early stages, and a lot of fears are overblown. Technologists are terrible marketers. This notion of artificial intelligence, even the acronym itself, is scary.
Tiffany Shlain: There’s all this hysteria about AI taking over. But here’s the thing: The skills we need most in today’s world—skills like empathy, creativity, taking initiative, and cross-disciplinary thinking—are all things that machines will never have. Those are the skills that will be most needed in the future, too.
Marissa Mayer: If we’d had better marketing, we would have said, “Wait, can we talk about enhanced intelligence or computer-augmented intelligence, where the human being isn’t replaced in the equation?” The people who are working on artificial intelligence are looking at how they can take a repetitive menial task and make a computer do it faster and better. To me, that’s a much less threatening notion than creating an artificially intelligent being.
Andy Hertzfeld: You know, there wasn’t so much a breakthrough algorithm that made it happen, although there was some improved algorithm work. Mainly it was just the ability to apply the algorithm to billions of data inputs—it was just a difference of scale. And so that scale just came naturally from Moore’s law.
Alvy Ray Smith: Moore’s law is astonishing, it’s beyond belief, it’s the dynamo of the modern age.
Andy Hertzfeld: It couldn’t have happened twenty years ago, no matter how brilliant your algorithm was, because billions were just out of reach. But what they found was that by just increasing the scale, suddenly things started working incredibly well. It kind of shocked the computer science world that things started working so well.
Kevin Kelly: So just as we made artificial flying, we’re going to make new types of thinking that don’t occur in nature. It’s not that it’s going to be humanlike but faster. It’s an alien intelligence—and that turns out to be its chief benefit. The reason we employ an AI to drive our cars is that it doesn’t drive them like a human.
Jim Clark: It’s bound to happen, the robotic driving of cars.
Nolan Bushnell: Self-driving cars are going to change everything and help cities to literally become gardens because streets, in some ways, kind of go away.
Jaron Lanier: Once we have automated transportation you might stop thinking about home in the same way. The idea is that everybody would be in self-driving RVs forever, so there would just be like this constant streaming of people living a mobile lifestyle going from here to there all over the world. Why not? And indeed, there’s something very attractive about that. I could see raising a family where you have a bunch of families with young kids of similar ages, all traveling together seeing different parts of the world, convening for their education and working remotely. That could actually be really nice, because what we do these days is we spend hours a day moving kids around from this lesson to that school to this soccer thing or whatever and it’s kind of an insane way to live. That makes no sense to anybody. And so I could imagine something that’s actually pretty nice coming together. I like that vision a lot.
Scott Hassan: I like the concept of self-driving cars, but I worry that our legal system can’t really handle them. The problem with autonomous cars is that it’s the manufacturer who is driving that car.
Kevin Kelly: Humans should not be allowed to drive! We’re just terrible drivers. In the last twelve months, humans behind the wheel killed a million other humans.
Scott Hassan: If we can bring that down to, like, three a day, that will be amazing, right? It will be a thousand times less, right? But the problem is that those three deaths a day are going to be caused by autonomous cars. And that could sink any company making autonomous cars, because the payout—three per day—that’s a lot of lawsuits. So from a liability point of view, it might actually be easier to skip over autonomous cars and go directly to flying cars, because there’s less liability involved. And so I think we’re going to start seeing flying cars in the next five to ten years, and then they’re going to get widespread in like twenty to thirty years. Because the nice thing about a flying car is that all you have to do is convince one agency that it’s cool—the FAA—and there’s really nothing anyone can do about it, because if they say, “It’s okay,” then it’s okay.
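Hassan’s “thousand times less” is easy to sanity-check against Kelly’s figure of a million road deaths a year:

$$
\frac{10^{6}\ \text{deaths/year}}{365\ \text{days/year}} \approx 2{,}740\ \text{deaths/day}, \qquad \frac{2{,}740\ \text{deaths/day}}{3\ \text{deaths/day}} \approx 913 \approx 10^{3}.
$$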
Andy Hertzfeld: So the second thing Silicon Valley is particularly excited about right now is artificial reality, or you might say mixed reality or whatever you want to call it.
Kevin Kelly: That VR vision of the alternative world is still there, but the new thing is this other version of “augmented” or “mixed” reality, where artificial things are inserted into the real world, whether they be objects or characters or people.
Scott Hassan: VR blocks off your field of vision, and everything has to be reconstructed digitally. And so MR, which is mixed reality, is a technology that can selectively draw on any part of your vision. It can actually include all your vision, if that’s what’s required. So MR is, I believe, the next step in how we interface with computers and information and people. It’s all going to be through mixed reality. And VR is a special case of mixed reality.
Nolan Bushnell: All of this is on a continuum, and right now augmented reality is a little bit harder than virtual reality, technically.
Steve Wozniak: Because of Moore’s law, we always have more bits and more speed to handle those more bits on the screen. Well, we now have finally gotten to the point where we have enough computer power that you can put the screen on your head, and it’s like you’re living in a different world, and it fools you. It’s enough to fool the brain.
Nolan Bushnell: I’ve seen how technology has moved from Pong to what we’re playing today, and I expect the same kind of pathway to virtual reality. I think that twenty years from now we will be shocked at how good VR is. I like to say we are at the “Pong phase” of virtual reality. Twenty years from now, VR is going to be old hat. Everybody will be used to it by then. Maybe living there permanently.
Brenda Laurel: The only way I can see that happening is if we completely trash the planet.
Jim Clark: Nolan is a good friend, I know him well, and he can get hyperbolic. I do not think people are going to be living in virtual reality. That might be true in a hundred years—not in twenty.
Nolan Bushnell: So when is VR indistinguishable from reality? I’ve actually put a little plot together on that. I think we’re about 70 percent of the way there visually. I think we’re 100 percent of the way there with audio. I think we’re 100 percent of the way there in smell. I think we’re just scratching the surface on touch and fooling your inner ear, and acceleration, and the thing that I think will break the illusion will be food. I think that’s going to be the hardest one to simulate in VR. So when you see the guy in The Matrix having a great bottle of wine and steak? That’s going to be hard.
Jaron Lanier: Just on some spiritual level, it just seems terribly wrong to say, “Well, we know enough about reality that living in this simulation is just as good.” Giving up that mystery of what the real world is just seems like a form of suicide or something.
Jim Clark: Plus I’d rather have real sex than virtual sex.
Nolan Bushnell: That’s really a matter of haptics—a full haptic body suit, where the suit simulates temperature and pressure on your skin, and various things…
Brenda Laurel: You know what? If the boys can objectify software instead of people, then it’s good for everyone—except the boys.
Scott Hassan: That same type of technology will be used in tele-operated robots; some people call them Waldos. Think of this device as a set of arms that rolls around and is able to do stuff—two hands that can be manipulated from afar. Let’s say it fits where your dishwasher used to be, and whenever you need it, it comes out of there and it unfolds, and it’s operated by somebody else in another location who has expertise that you want at that time. You want dinner made? Well, it’s just remote-operated by a chef, in some type of rig, so that when they move their arms, the robot moves its arms in the exact same way. And that person is wearing these gloves, so that whenever the robot touches something, they can feel that touch. So that person can pick up things and chop things and go into the refrigerator and open the doors, as if they were there. Then that same Waldo, when that person is done making dinner for you, instantly switches over to this other person who loves to clean up, and then they go and clean up the whole kitchen for you. It’s the service economy, but it’s in your own home, right? And so you basically have expertise on demand. So, well, something like that is going to be widespread in maybe ten years.
Nolan Bushnell: In twenty years, 80 percent of homes will have some kind of a robot.
Carol Bartz: Every inflection point really followed from the fact that you could make something affordable, so that the public or industry could do something with it. You could get this in the hands of more people, which meant it was a bigger market, and on and on, and off you went.
Scott Hassan: They’ll probably be the same price as a refrigerator. It’s going to be one of those things: You got your car, you got your house, and you got your Waldo. But the cool thing about that is, once that kind of stuff comes out, then people will write all these applications that help those people do certain tasks. So you would install an app so that you click on the potato, and then your Waldo takes over and does it for you automatically, really fast, right? So you would have all these application makers making little things that can make someone’s job easier, and then eventually you get to a point where you’re not just controlling one of these Waldos, you will be controlling maybe three or ten or one hundred of these simultaneously, and you’re more managing these Waldos now, not controlling them individually. Does that make sense? So you’ve got this huge scaling effect.
Jim Clark: Yeah, I don’t get excited about the virtual reality stuff, the car driving and robotics and stuff like that. It’s just going to happen. The parts that really get my juices going are the human-computer interface, through the nervous system, and biology transformation. If I was a young man just getting a PhD, I would definitely do biology, because I think that’s where it’s going. A biologist armed with all this knowledge of computer science and technology can make a huge impact on humanity.
Adele Goldberg: If you were to predict the future based on seeing what is in the labs today and extrapolate, you would believe synthetic biology is the future, not electronics.
Andy Hertzfeld: Because the idea of bio being the next frontier is based on the silicon, really. There are an estimated one hundred billion neurons in most people’s heads, and the world knew that thirty years ago, and I remember thinking, Boy, a hundred billion, that’s enormous! And now I think, A hundred billion? Hey, that’s not so much! Right? If there were a byte per neuron, that’s not even a terabyte. I have thirty terabytes on my desktop computer upstairs! So it’s just that Moore’s law has gotten us to the point where we’re up to dealing with the biological scale of complexity.
Alvy Ray Smith: Moore’s law means one order of magnitude every five years—that’s the way I define it. And so what do you do with another two to three orders of magnitude increase in Moore’s law? We humans can’t answer that question. We don’t know. An order of magnitude is sort of a natural barrier. Or another way to say it is, if you’ve got just enough vision to go beyond the order of magnitude, you would probably become a billionaire.
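Taken at face value, the two estimates reduce to quick arithmetic, assuming Hertzfeld’s one byte per neuron and Smith’s rate of one order of magnitude per five years:

$$
10^{11}\ \text{neurons} \times 1\ \text{byte/neuron} = 10^{11}\ \text{bytes} \approx 0.1\ \text{TB}, \qquad \text{capacity}(t) \approx C_{0} \cdot 10^{t/5},
$$

so the two to three additional orders of magnitude Smith asks about arrive in roughly ten to fifteen years at his stated rate.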
Jim Clark: I think that connecting humans to computers, having that interface, is increasingly going to be possible with a helmet that’s measuring neurological signals from the brain and using that to control things. I’m pretty sure that twenty years from now we’re going to be well into getting the human-computer interface wrapped around a direct kind of brain-fed interface.
Scott Hassan: We’re going to tap right into the optic nerve and insert things that your eyes don’t see, but your brain doesn’t know that your eyes don’t see them. We’re just going to insert it right into your optic nerve. We really don’t understand how memory works and stuff like that, but we understand somewhat how the optic nerve works, because it’s just a cable going back to your brain, and, you know, we know in theory how to insert things into it, so it’s just a bunch of engineering work to make that happen.
Jim Clark: And as time goes on, I think we’ll get more and more refined at being able to map and infer and project those signals, on the cortex, on the brain, and I feel as certain about that as I feel about anything.
Larry Page: Eventually we’ll have the implant, where if you think about a fact, it will just tell you the answer.
Scott Hassan: It’s maybe twenty years away. I mean it depends on how well the market takes up MR, mixed reality. If it really loves it, then it’s going to be sooner, so if it’s slow to pick up then it’s going to be longer. But I think eventually it’s going to be there.
Jim Clark: We will for sure be controlling computers with thoughts, and I think increasingly we’re going to have kind of hybrid systems that are kind of biological- and computer-like, and they’re going to be there to make humans more effective at whatever…
Tony Fadell: So then after that I think it’s really going to be how we coevolve with AI. How do we as humans coevolve with the artificial intelligence machine-learning kinds of technologies we are creating now? So, look—chess champions, right? They got beat by Deep Blue back in the nineties, right?
Kevin Kelly: When Garry Kasparov lost to Deep Blue he complained to the organizers, saying, “Look, Deep Blue had access to this database of every single chess move that was ever made. If I had access to that same database in real time, I could have beaten Deep Blue.” So he said, “I want to make a whole new chess league where you can play as a human with access to that database.” And it’s kind of like mixed martial arts, where anything goes: you could play as a human with access to the database, you could play as a human alone, or you could play as an AI alone. And he called that combination of the AI and the human a centaur. That team was a centaur, and for the last four years the world’s best chess player on the planet has been not an AI, not a human, but a centaur: a team.
Tony Fadell: And so the chess champions have now coevolved with the technology and they’ve gotten smarter and better. Too many times we get stuck in our own way of thinking, and these things can give us a shot in the arm to think dramatically differently. It’s like an Einstein showed up to help us, right?
Kevin Kelly: Then I would say that in thirty years people will be beginning to get used to the idea that you can have artificial consciousness… And when you give a body to an AI you have a robot.
Nolan Bushnell: Now, when the robot manifests self-awareness, and then becomes aware of its existence and wants to preserve it, we’ve got some interesting issues that we’re going to have to solve. What actions can it take in self-preservation? In a self-aware, self-programming, self-understanding, self-learning bot, can we truly control the limits on its actions? I think we’re fifty years from that, but it’s a bridge we’re going to have to cross.
Steve Wozniak: But even if it got smarter than us, it would take us on as partners, because we’re the creators; you know, we’re going to be its first friend. Right now humans are still in control, and I’ve never heard anyone talk like we’re going to be out of control very soon.
Tony Fadell: And then I think there’s going to be another big split, when we decide on biological means of locomotion, or biological means of manipulation. Because robots are incredibly fragile, they’re hard to repair, they don’t self-heal, and—this is a crazy thing—we’re going to figure out a way to actually turn biological systems into the robots we want: robots that self-heal, that train more easily, that actually are much more energy efficient. Literally, the ratio of how much you eat to how much energy you can expend is so much better in a biological system than in a mechanical system. So when we want superhuman things, how are they not going to be biological as opposed to a megatron? And then there’s going to be all kinds of societal and philosophical and governmental and ethical issues that are going to arise, just like with AI.
Steve Wozniak: Hundreds of years downstream machines will be a superior species, but what will we humans be doing? Can a human be satisfied just being taken care of, like a family dog?
Kevin Kelly: I kind of reject the idea of this super-AI that becomes God-like; I think it’s unfeasible for a number of different reasons. But there will be aspects of it that may not be understandable, and that’s one of the definitions of the singularity.