Brief Answers to the Big Questions
As development in these areas and others moves from laboratory research to economically valuable technologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investments in research. There is now a broad consensus that AI research is progressing steadily and that its impact on society is likely to increase. The potential benefits are huge; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide. The eradication of disease and poverty is possible. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the risks. Used as a toolkit, AI can augment our existing intelligence to open up advances in every area of science and society. However, it will also bring dangers. While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. The concern is that AI would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours. Others believe that humans can control the pace of technological development for a decently long time, and that the potential of AI to solve many of the world’s problems will be realised. Although I am well known as an optimist regarding the human race, I am not so sure.
In the near term, for example, world militaries are considering starting an arms race in autonomous weapon systems that can choose and eliminate their own targets. While the UN is debating a treaty banning such weapons, autonomous-weapons proponents usually forget to ask the most important question. What is the likely end-point of an arms race and is that desirable for the human race? Do we really want cheap AI weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market? Given concerns about our ability to maintain long-term control of ever more advanced AI systems, should we arm them and turn over our defence to them? In 2010, computerised trading systems created the stock-market Flash Crash; what would a computer-triggered crash look like in the defence arena? The best time to stop the autonomous-weapons arms race is now.
In the medium term, AI may automate our jobs, to bring both great prosperity and equality. Looking further ahead, there are no fundamental limits to what can be achieved. There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it may play out differently than in the movies. As mathematician Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, in what science-fiction writer Vernor Vinge called a technological singularity. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. We should plan ahead. If a superior alien civilisation sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here, we’ll leave the lights on”? Probably not, but this is more or less what has happened with AI. Little serious research has been devoted to these issues outside a few small non-profit institutes.
Fortunately, this is now changing. Technology pioneers Bill Gates, Steve Wozniak and Elon Musk have echoed my concerns, and a healthy culture of risk assessment and awareness of societal implications is beginning to take root in the AI community. In January 2015, I, along with Elon Musk and many AI experts, signed an open letter on artificial intelligence, calling for serious research into its impact on society. In the past, Elon Musk has warned that superhuman artificial intelligence is capable of providing incalculable benefits, but if deployed incautiously will have an adverse effect on the human race. He and I sit on the scientific advisory board for the Future of Life Institute, an organisation working to mitigate existential risks facing humanity, and which drafted the open letter. This called for concrete research on how we could prevent potential problems while also reaping the potential benefits AI offers us, and is designed to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public the letter was meant to be informative but not alarmist. We think it is very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues. For example, AI has the potential to eradicate disease and poverty, but researchers must work to create AI that can be controlled.
In October 2016, I also opened a new centre in Cambridge, which will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. The Leverhulme Centre for the Future of Intelligence is a multi-disciplinary institute, dedicated to researching the future of intelligence as crucial to the future of our civilisation and our species. We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence. We are aware of the potential dangers, but perhaps with the tools of this new technological revolution we will even be able to undo some of the damage done to the natural world by industrialisation.
Recent developments in the advancement of AI include a call by the European Parliament for drafting a set of regulations to govern the creation of robots and AI. Somewhat surprisingly, this includes a form of electronic personhood, to ensure the rights and responsibilities for the most capable and advanced AI. A European Parliament spokesman has commented that, as a growing number of areas in our daily lives are increasingly affected by robots, we need to ensure that robots are, and will remain, in the service of humans. A report presented to the Parliament declares that the world is on the cusp of a new industrial robot revolution. It examines whether or not providing legal rights for robots as electronic persons, on a par with the legal definition of corporate personhood, would be permissible. But it stresses that at all times researchers and designers should ensure all robotic design incorporates a kill switch.
This didn’t help the scientists on board the spaceship with Hal, the malfunctioning robotic computer in Stanley Kubrick’s 2001: A Space Odyssey, but that was fiction. We deal with fact. Lorna Brazell, a consultant at the multinational law firm Osborne Clarke, says in the report that we don’t give whales and gorillas personhood, so there is no need to jump at robotic personhood. But the wariness is there. The report acknowledges the possibility that within a few decades AI could surpass human intellectual capacity and challenge the human–robot relationship.
By 2025, there will be about thirty mega-cities, each with more than ten million inhabitants. With all those people clamouring for goods and services to be delivered whenever they want them, can technology help us keep pace with our craving for instant commerce? Robots will definitely speed up the online retail process. But to revolutionise shopping they need to be fast enough to allow same-day delivery on every order.
Opportunities for interacting with the world, without having to be physically present, are increasing rapidly. As you can imagine, I find that appealing, not least because city life for all of us is so busy. How many times have you wished you had a double who could share your workload? Creating realistic digital surrogates of ourselves is an ambitious dream, but the latest technology suggests that it may not be as far-fetched an idea as it sounds.
When I was younger, the rise of technology pointed to a future where we would all enjoy more leisure time. But in fact the more we can do, the busier we become. Our cities are already full of machines that extend our capabilities, but what if we could be in two places at once? We’re used to automated voices on phone systems and public announcements. Now inventor Daniel Kraft is investigating how we can replicate ourselves visually. The question is, how convincing can an avatar be?
Interactive tutors could prove useful for massive open online courses (MOOCs) and for entertainment. It could be really exciting—digital actors that would be forever young and able to perform otherwise impossible feats. Our future idols might not even be real.
How we connect with the digital world is key to the progress we’ll make in the future. In the smartest cities, the smartest homes will be equipped with devices that are so intuitive they’ll be almost effortless to interact with.
When the typewriter was invented, it liberated the way we interact with machines. Nearly 150 years later, touch screens have unlocked new ways to communicate with the digital world. Recent AI landmarks, such as self-driving cars, or a computer winning at the game of Go, are signs of what is to come. Enormous levels of investment are pouring into this technology, which already forms a major part of our lives. In the coming decades it will permeate every aspect of our society, intelligently supporting and advising us in many areas including healthcare, work, education and science. The achievements we have seen so far will surely pale against what the coming decades will bring, and we cannot predict what we might achieve when our own minds are amplified by AI.
Perhaps with the tools of this new technological revolution we can make human life better. For instance, researchers are developing AI that would help reverse paralysis in people with spinal-cord injuries. Using silicon chip implants and wireless electronic interfaces between the brain and the body, the technology would allow people to control their body movements with their thoughts.
I believe the future of communication is brain–computer interfaces. There are two ways: electrodes on the skull and implants. The first is like looking through frosted glass, the second is better but risks infection. If we can connect a human brain to the internet it will have all of Wikipedia as its resource.
The world has been changing even faster as people, devices and information are increasingly connected to each other. Computational power is growing and quantum computing is quickly being realised. This will revolutionise artificial intelligence with exponentially faster speeds. It will advance encryption. Quantum computers will change everything, even human biology. There is already one technique to edit DNA precisely, called CRISPR. The basis of this genome-editing technology is a bacterial defence system. It can accurately target and edit stretches of genetic code. The best intention of genetic manipulation is that modifying genes would allow scientists to treat genetic causes of disease by correcting gene mutations. There are, however, less noble possibilities for manipulating DNA. How far we can go with genetic engineering will become an increasingly urgent question. We can’t see the possibilities of curing motor neurone diseases—like my ALS—without also glimpsing its dangers.
Intelligence is characterised as the ability to adapt to change. Human intelligence is the result of generations of natural selection of those with the ability to adapt to changed circumstances. We must not fear change. We need to make it work to our advantage.
We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race. We need to take learning beyond a theoretical discussion of how AI should be and to make sure we plan for how it can be. We all have the potential to push the boundaries of what is accepted, or expected, and to think big. We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be, and we are the pioneers.
When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technologies such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get. Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.
Why are we so worried about artificial intelligence? Surely humans are always able to pull the plug?
People asked a computer, “Is there a God?” And the computer said, “There is now,” and fused the plug.
10
HOW DO WE SHAPE THE FUTURE?
A century ago, Albert Einstein revolutionised our understanding of space, time, energy and matter. We are still finding awesome confirmations of his predictions, like the gravitational waves detected by the LIGO experiment and announced in 2016. When I think about ingenuity, Einstein springs to mind. Where did his ingenious ideas come from? A blend of qualities, perhaps: intuition, originality, brilliance. Einstein had the ability to look beyond the surface to reveal the underlying structure. He was undaunted by common sense, the idea that things must be the way they seemed. He had the courage to pursue ideas that seemed absurd to others. And this set him free to be ingenious, a genius of his time and every other.
A key element for Einstein was imagination. Many of his discoveries came from his ability to reimagine the universe through thought experiments. At the age of sixteen, when he visualised riding on a beam of light, he realised that from this vantage light would appear as a frozen wave. That image ultimately led to the theory of special relativity.
One hundred years later, physicists know far more about the universe than Einstein did. Now we have greater tools for discovery, such as particle accelerators, supercomputers, space telescopes and experiments such as the LIGO lab’s work on gravitational waves. Yet imagination remains our most powerful attribute. With it, we can roam anywhere in space and time. We can witness nature’s most exotic phenomena while driving in a car, snoozing in bed or pretending to listen to someone boring at a party.
As a boy, I was passionately interested in how things worked. In those days, it was more straightforward to take something apart and figure out the mechanics. I was not always successful in reassembling toys I had pulled to pieces, but I think I learned more than a boy or girl today would, if he or she tried the same trick on a smartphone.
My job now is still to figure out how things work, only the scale has changed. I don’t destroy toy trains any more. Instead, I try to figure out how the universe works, using the laws of physics. If you know how something works, you can control it. It sounds so simple when I say it like that! It is an absorbing and complex endeavour that has fascinated and thrilled me throughout my adult life. I have worked with some of the greatest scientists in the world. I have been lucky to be alive through what has been a glorious time in my chosen field, cosmology, the study of the origins of the universe.
The human mind is an incredible thing. It can conceive of the magnificence of the heavens and the intricacies of the basic components of matter. Yet for each mind to achieve its full potential, it needs a spark. The spark of enquiry and wonder.
Often that spark comes from a teacher. Allow me to explain. I wasn’t the easiest person to teach, I was slow to learn to read and my handwriting was untidy. But when I was fourteen my teacher at my school in St Albans, Dikran Tahta, showed me how to harness my energy and encouraged me to think creatively about mathematics. He opened my eyes to maths as the blueprint of the universe itself. If you look behind every exceptional person, there is an exceptional teacher. When each of us thinks about what we can do in life, chances are we can do it because of a teacher.
However, education and science and technology research are endangered now more than ever before. Due to the recent global financial crisis and austerity measures, funding is being significantly cut to all areas of science, with the fundamental sciences particularly badly affected. We are also in danger of becoming culturally isolated and insular, and increasingly remote from where progress is being made. At the level of research, the exchange of people across borders enables skills to transfer more quickly and brings new people with different ideas, derived from their different backgrounds. Restrict that exchange and such progress becomes far harder. Unfortunately, we cannot go back in time. With Brexit and Trump now exerting new forces in relation to immigration and the development of education, we are witnessing a global revolt against experts, which includes scientists. So what can we do to secure the future of science and technology education?
I return to my teacher, Mr Tahta. The basis for the future of education must lie in schools and inspiring teachers. But schools can only offer an elementary framework where sometimes rote-learning, equations and examinations can alienate children from science. Most people respond to a qualitative, rather than a quantitative, understanding, without the need for complicated equations. Popular science books and articles can also put across ideas about the way we live. However, only a small percentage of the population read even the most successful books. Science documentaries and films reach a mass audience, but they offer only one-way communication.