Quantum Computing


  Amit Katwala

  * * *

  QUANTUM COMPUTING

  How It Works, and Why It Could Change the World

  Contents

  Introduction

  1 What is quantum computing?

  2 Building the impossible

  3 Exponential power

  4 Cracking the code

  5 Simulating nature

  6 The quantum future

  Glossary

  Acknowledgements

  Bibliography

  Notes

  Index

  About the Author

  WIRED is the world’s most authoritative and respected publication reporting on the emerging trends, ideas and technologies shaping our world. Our mission is to tell the stories of the people driving this change and to understand its impact on business, society and individuals. WIRED has become synonymous with informed, intelligent analysis of these transformational forces and their significance for industries and individuals, and is a consistently reliable predictor of change.

  Amit Katwala is a senior editor at WIRED UK.

  Introduction

  Summit, the IBM supercomputer at Oak Ridge National Laboratory in Tennessee, weighs three times more than a blue whale and fills the space of two tennis courts. It has 219 kilometres of cabling and can perform more than three quintillion calculations per second. In June 2019, it was crowned the world’s fastest supercomputer for the second year in a row.

  But around the same time as it claimed that prize, Summit was being secretly outdone – not by Sierra, its closest US rival, nor by Fugaku, a Japanese project touted to overtake it when it comes fully online in 2021. Instead, this vast machine was quietly beaten by a tiny chip no bigger than a thumbnail, in a small private research lab near the beach in Santa Barbara, California.

  The chip, called Sycamore, was developed by researchers at the search giant Google. It forms the central part of a quantum computer – a new, fundamentally different type of device that works according to the laws of quantum physics.

  Quantum computers have vast potential. They could eventually revolutionise everything from artificial intelligence to the development of new drugs. A working quantum computer could help create powerful new materials, turbocharge the fight against climate change and completely upend the cryptography that keeps our secrets safe. It would, as Discover magazine put it in the early days of research, be ‘less a machine than a force of nature’.1

  If their potential can be fully realised, these devices won’t simply be more powerful versions of our existing computers. They work in a completely different way, which could enable them to do seemingly impossible things. Because their strengths are so alien to the way most of us perceive the universe, it can be difficult to explain them without resorting to slightly fuzzy analogies. But effectively, quantum computers could unlock a new set of abilities based on the deeper understanding of the universe that physicists have developed over the last century.

  ‘Imagine you’re playing chess and the rules are changed in your favour, enabling your rook an expanded range of moves,’ write quantum scientist Michael Nielsen and software engineer Andy Matuschak, in an analogy that attempts to make something astonishingly complex comprehensible to the average person.2 ‘That extra flexibility might enable you to achieve checkmate much faster because you can get to new positions much more quickly.’

  In certain situations, quantum computers could allow us to do things that are impossible right now, even with the power of a million supercomputers. Believe the hype, and we’re on the verge of a new technological era, in which quantum computers will help us create more efficient travel routes, and crunch complex sums in scientific experiments. They’ll potentially change the way banks analyse risk, and allow chemists and biologists to create detailed simulations of the natural world to develop new, more efficient materials and processes. ‘This is one of the biggest technology jumps ever, in history,’ says William Hurley (known as Whurley), tech entrepreneur and founder of Strangeworks, which is working to make quantum computing accessible to all. ‘Computing will change more in the next ten years than it has in the last hundred.’

  Until recently, there were doubts over whether quantum computers would ever actually work. Even now, some remain sceptical that a practically useful quantum computer will ever exist. They’re incredibly difficult to build, presenting huge engineering, manufacturing and mathematical challenges. But, over the last twenty years, some of the world’s biggest companies – Google, Amazon, Microsoft, IBM, Intel and others – have been racing to build working, practically useful quantum devices.

  In the summer of 2019, a team of researchers at Google, who had spent the better part of a decade trying to build quantum computers, reached a milestone known as ‘quantum supremacy’. The term, coined by the physicist John Preskill, describes the point at which a quantum computer can do something that the world’s best classical computer could never do.

  This seminal moment in the history of computing arrived on 13 June, when Google’s Sycamore chip – chilled to a temperature colder than outer space – performed, in just 3 minutes and 20 seconds, a series of complex calculations that would have taken the Summit supercomputer 10,000 years. ‘This is a wonderful achievement. The engineering here is just phenomenal,’ Peter Knight, a physicist at Imperial College London, told New Scientist when the research paper was published a few months later.3 ‘It shows that quantum computing is really hard but not impossible. It is a stepping stone toward a big dream.’

  It marked the moment when quantum computing went from neat theory to genuine possibility. Since Google’s announcement, millions of dollars in funding have poured into the field from governments and venture capitalists.

  Inside Google, they compare the achievement to the first flight by the Wright brothers at Kitty Hawk, which marked the birth of the aviation industry. ‘There are people that literally think that the thing we did or the next steps are not possible,’ says Google quantum hardware engineer Tony Megrant, who helped design and build the Sycamore chip. Others are less convinced. By the time the research was officially published, some of Google’s rivals in the quantum race – particularly IBM – had started to cast some doubt on whether the Sycamore chip was actually as far ahead of Summit as the search giant had claimed. They argued that the task Google set was too narrow and specific to count as quantum supremacy.

  But regardless of the technicalities, quantum supremacy is a huge technical achievement, and one that could mark a new era of technological progress: the dawn of the Quantum Age. The real race, however, has only just begun. ‘We’ve been working on the hardware aspect of this for ten years, so I always picture in my head the starting line of a race being at the top of a mountain,’ Megrant says. ‘We had to get up here to start the race.’

  This book will tell the story of that race, and explore the myriad ways in which quantum computers could reshape the worlds of finance, medicine and politics, and further our understanding of the universe. But we’ll start with the basics.

  1

  What is quantum computing?

  Until very recently, every computer in the world – from the room-sized codebreakers of the 1940s to the tiny processor in your smartphone – worked in essentially the same way. The birth of silicon chips and semiconductors has driven unbelievable progress, but the underlying principles governing today’s high-tech devices are exactly the same ones that Alan Turing and his colleagues worked with at Bletchley Park, the British codebreaking centre which gave rise to some of the first classical computers. They are the same ones that power everything that came in between, to the extent that even your creaking old desktop PC can theoretically do anything the Summit supercomputer can do (if you give it enough time and memory).

  These classical computers all work using bits. Bits are basically tiny switches that can be in the off position, represented by a 0, or in the on position, represented by a 1. Physically, they have taken three forms over the decades – valves, relays and, today, transistors etched in silicon. Every song you play, YouTube video you watch and app you download is ultimately made up of some combination of these 1s and 0s.

  This combination of 0s and 1s – on and off switches – is known as binary code. It worked fine when the things computers needed to do were simple. The number 2 in binary, for instance, is represented by the string 10, while 3 is 11 and 4 is 100. But as computer processes get more complex, the number of bits you need to encode them grows rapidly. For example, 15 is 1111, while 500 is 111110100.
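
  To see the pattern for yourself, the conversions above can be checked in a couple of lines of Python – an illustrative aside using the language’s built-in bin() function:

      # Print the binary representation of each number mentioned above.
      for n in (2, 3, 4, 15, 500):
          print(n, '->', bin(n)[2:])  # bin() returns e.g. '0b10'; strip the prefix

      # Output:
      # 2 -> 10
      # 3 -> 11
      # 4 -> 100
      # 15 -> 1111
      # 500 -> 111110100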

  Each element in that string of binary code requires a separate bit, which means a separate physical switch that can alternate between one and zero. The technological marvels of the computer age have been made possible by huge advances in our ability to make those switches smaller and more efficient, so we can cram more and more bits into the same amount of space.

  But there are still things we can’t do, even with millions of these chips and trillions of bits running together in a supercomputer like Summit. Bits are black and white, either/or. When things are uncertain or complex, you need a lot more bits to describe them, which means that some seemingly simple problems can become exponentially more difficult for normal computers to handle.

  ‘Say we want to send you from the UK to fourteen cities in the US and work out the optimal path – my laptop can do that in a second,’ says tech entrepreneur Whurley. ‘But if I made it twenty-two cities, using the same algorithm and the same laptop, it would take two thousand years.’ This is a classic example of what’s called the travelling salesman problem: the kind of problem where the challenge grows exponentially with each added variable.

  A classical device trying to plot the most efficient order in which to visit the cities has to check every single possible combination, so for every city you add to the journey, the amount of computing power balloons – 11 cities have 20 million routes between them, 12 cities have 240 million routes, 15 cities have more than 650 billion. Modelling complex interactions between molecules, as we need to do if we want to accurately simulate chemical reactions, or speed up the development of medicines, creates the same problem: with every variable you add, the challenge gets a lot bigger.
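
  Those figures follow from simple factorial growth: a round trip through n cities can be ordered in n! ways, and halving that (a route and its reverse cover the same ground) leaves roughly n!/2 distinct routes for a brute-force search to consider. A short Python sketch of the counting argument – not a route-planner – reproduces the numbers above:

      from math import factorial

      # Distinct round-trip routes through n cities: n! orderings,
      # halved because a route and its reverse are the same path.
      def route_count(n):
          return factorial(n) // 2

      for n in (11, 12, 15, 22):
          print(f'{n} cities: {route_count(n):,} routes')

      # 11 cities: 19,958,400 routes (~20 million)
      # 12 cities: 239,500,800 routes (~240 million)
      # 15 cities: 653,837,184,000 routes (~650 billion)
      # 22 cities: ~5.6 x 10**20 routes - the two-thousand-year laptop job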

  The American physicist Richard Feynman was one of the first to realise this. Feynman was something of a rock star in academic circles – he’d worked on the atomic bomb, won the Nobel Prize and done pioneering work on nanotechnology. With his long hair and outspoken manner, he’d even achieved the often impossible feat of breaking into the public consciousness – a 1999 poll ranked him as one of the ten most influential physicists of the twentieth century.

  But, most importantly, he was also a leading voice in quantum mechanics – the study of the strange things that start to happen in physics when you get down to a really small scale. In 1981, he gave a couple of lectures – one at Caltech in Pasadena, the other at MIT in Cambridge, Massachusetts – that marked the beginnings of the field of quantum computing.

  A (very) brief history of quantum mechanics

  For centuries, physicists viewed the universe as a kind of giant pool table, where atoms bounced off each other at perfect angles, following predictable paths determined by their speed and angle of impact. Their theories dated back to the work of the seventeenth-century mathematician Isaac Newton, who codified them in his famous laws of motion and gravitation.

  But around the dawn of the twentieth century, as physicists delved deeper into the inner workings of the atom, they started spotting things that didn’t match up with our understanding of Newtonian physics, or of thermodynamics, which governs how substances behave as their temperature changes. Below a certain scale, the laws of the universe seemed simply to stop applying.

  The field of quantum mechanics sprang up to describe the strange behaviour of the particles that make up an atom. (Atoms, as a reminder, consist of a nucleus of protons and neutrons orbited by electrons.) Sometimes, physicists discovered, electrons behave like continuous streams, spreading out like a beam of light. At other times, they seem to be broken down into individual ‘packets’ or ‘quanta’ (hence ‘quantum physics’), which, far from being continuous, are limited to discrete values – like a car that can only go at 30mph or 40mph, but nothing in between, or water from a tap that emerges not as a stream but only as individual drops. And sometimes they can appear to be in both states simultaneously – a phenomenon called quantum superposition.

  The classic illustration of this is the ‘double slit’ experiment. Imagine you’re standing facing a thin wall, with two vertical slits cut into it like windows, and that there’s another wall behind it. Now, imagine you have a paintball gun, and that you’re firing it at the wall in front of you, strafing back and forth like the muscled hero in a low-budget action movie. The wall in front of you would be covered in paint, of course, while the wall behind that would be completely clear except for two vertical strips of paint corresponding with the two gaps in the first wall. But – and this is the bit that’s basically driven the entire field of quantum physics for a century – if you were to shrink that experiment down and fire single electrons instead of paintballs, you’d find a completely different result.

  An illustration of the double slit experiment

  Each electron you fired through one of those two slits would appear at a seemingly random spot on the back wall – not necessarily behind one of the two slits. Only after repeating this hundreds or thousands of times would a pattern start to emerge – one of alternating light and dark stripes, like a barcode. This pattern – called an interference pattern – arises because electrons seem to behave like waves until the point at which they’re observed or measured, when they revert to behaving like particles again.

  When this initial electron wave passes through the two slits in the double-slit experiment, it creates two smaller waves, which then cancel out and reinforce each other. It’s as if you’d dropped two stones into a pond and watched the ripples overlap and change one another. But these aren’t physical waves like the ones on the surface of the water. Instead of tracking the movement or behaviour of an object over time, these ‘wave functions’, as they’re known, describe all the possible positions where an electron might be, and the varying probability of it occupying each position at a particular moment in time. When it’s measured – in this case, when the electron hits the back wall – the wave function is said to ‘collapse’ into a single specific outcome, like a spinning die coming to a stop. Physicists haven’t yet explained how or why this happens, or indeed whether the wave function is simply a mathematical tool or something that actually happens in the real world in some observable way.

  Imagine a relief map of the world – one that displays physical features like the rise and fall of mountain ranges and valleys. Each point on the map represents a probability amplitude: the higher the terrain, the greater the likelihood of the electron occupying that position in space. The wave function in this analogy is the map itself – but a map that can be twisted and distorted, like a wave of sound or water.

  In the double-slit experiment, when this wave function hits the gaps in the front wall, it gets squeezed and compressed like a swell of water would. The different probability amplitudes that make up the wave function begin to interfere with one another in complex ways, as if you’d folded the relief map in half, or crunched it up into a ball.

  The chance of an electron landing at a particular point on the back wall is determined by the way these probability amplitudes interfere with one another. In our normal lives, probability can only be a positive value – ranging from 0 per cent (impossible) to 100 per cent (certain). But in the quantum realm – the world of electrons and photons – probability can be ‘positive, negative, or even complex’, as Scott Aaronson, founding director of the Quantum Information Center at the University of Texas at Austin, explains in an article for the New York Times.1 Where events at the classical scale have probabilities of between 0 and 1, at the quantum level events have probability amplitudes instead – numbers that can be positive, negative or complex.

  This means that as well as reinforcing each other, these probability amplitudes can cancel one another out – so the event never happens at all, which explains the blank spaces in the barcode pattern of the double slit experiment. ‘This is “quantum interference”,’ Aaronson writes, ‘and it is behind everything else you’ve ever heard about the weirdness of the quantum world.’
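
  In symbols – a schematic sketch rather than anything from Aaronson’s article – if the paths through the two slits contribute amplitudes a_1 and a_2 at some point on the back wall, the probability of the electron landing there is the squared magnitude of their sum, not the sum of their separate probabilities:

      P = |a_1 + a_2|^2 = |a_1|^2 + |a_2|^2 + 2 Re(a_1* a_2)

  The final cross-term is the quantum interference. Where the two amplitudes are equal and opposite – say a_1 = 1/sqrt(2) and a_2 = -1/sqrt(2) – they sum to zero, P = 0, and the electron never lands there: a dark stripe in the barcode. Where they share the same sign, the cross-term is positive and the contributions reinforce: a bright stripe.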

  At the classical scale – the one we all live at – you can make specific, fairly accurate predictions about where an atom or an object will end up if you know its speed and direction. But these patterns of quantum interference mean that the subatomic scale has a degree of randomness baked in. You can only ever know the rough probability of something happening. You can never say for certain. The universe is not like a pool table. Nature has uncertainty built into its core.

  From bits to qubits

  By the 1980s, early computers were beginning to be used to run simulations of the weather or chemical reactions, but Richard Feynman spotted the flaw. To accurately simulate physics, chemistry or anything else both complex and minuscule, you need a simulation that adheres to the same probability-based laws of quantum mechanics. Feynman summed up the problem at the end of one of his talks, on the challenges of simulating nature using computers, in a passage that’s now part of quantum computing lore. ‘Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,’ he said. ‘And, by golly, it’s a wonderful problem, because it doesn’t look so easy.’2
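
  Feynman’s point can be put in numbers. The joint state of n two-level quantum systems is described by 2^n probability amplitudes, so the memory a classical simulation needs doubles with every particle added. A rough Python sketch – assuming, illustratively, 16 bytes per complex amplitude – shows how quickly that outruns any conceivable machine:

      # Memory to store the full quantum state of n two-level systems,
      # assuming each complex amplitude takes 16 bytes (two 64-bit floats).
      BYTES_PER_AMPLITUDE = 16

      def state_vector_bytes(n):
          return (2 ** n) * BYTES_PER_AMPLITUDE

      for n in (10, 30, 50, 80):
          print(f'{n} particles: {state_vector_bytes(n):.2e} bytes')

      # 10 particles: ~16 kilobytes - trivial
      # 30 particles: ~17 gigabytes - a well-equipped laptop
      # 50 particles: ~18 petabytes - several times Summit's entire memory
      # 80 particles: ~1.9 x 10**25 bytes - far beyond any classical machine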
