Hello World

by Hannah Fry


  The latter certainly presents more of a technical challenge than level 2, but some manufacturers have already started to build their cars to accommodate our inattention. Audi’s traffic-jam pilot is one such example.58 It can completely take over when you’re in slow-moving highway traffic, leaving you to sit back and enjoy the ride. Just be prepared to step in if something goes wrong.fn5

  There’s a reason why Audi has limited its system to slow-moving traffic on limited-access roads. The risks of catastrophe are lower in motorway congestion. And that’s important. Because as soon as a human stops monitoring the road, you’re left with the worst possible combination of circumstances when an emergency happens.

  A driver who’s not paying attention will have very little time to assess their surroundings and decide what to do. Imagine sitting in a self-driving car, hearing an alarm and looking up from your book to see a truck ahead shedding its load into your path. In an instant, you’ll have to process all the information around you: the motorbike in the left lane, the van braking hard ahead, the car in the blind spot on your right. You’d be most unfamiliar with the road at precisely the moment you need to know it best; add in the lack of practice, and you’ll be as poorly equipped as you could be to deal with the situations demanding the highest level of skill.

  It’s a fact that has also been borne out in experiments with driverless car simulations. One study, which let people read a book or play on their phones while the car drove itself, found that it took up to 40 seconds after an alarm sounded for them to regain proper control of the vehicle.59 That’s exactly what happened with Air France flight 447. Captain Dubois, who should have been easily capable of saving the plane, took around one minute too long to realize what was happening and arrive at the simple solution that would have saved it.60

  Ironically, the better self-driving technology gets, the worse these problems become. A sloppy autopilot that sets off an alarm every 15 minutes will keep a driver continually engaged and in regular practice. It’s the smooth and sophisticated automatic systems that are almost always reliable that you’ve got to watch out for.

  This is why Gill Pratt, who heads up Toyota’s research institute, has said:

  The worst case is a car that will need driver intervention once every 200,000 miles … An ordinary person who has a [new] car every 100,000 miles would never see it [the automation hand over control]. But every once in a while, maybe once for every two cars that I own, there would be that one time where it suddenly goes ‘beep beep beep, now it’s your turn!’ And the person, typically having not seen this for years and years, would … not be prepared when that happened.61

  Great expectations

  Despite all this, there’s good reason to push ahead into the self-driving future. The good still outweighs the bad. Driving remains one of the biggest causes of avoidable deaths in the world. If the technology is remotely capable of reducing the number of fatalities on the roads overall, you could argue that it would be unethical not to roll it out.

  And there’s no shortage of other advantages: even simple self-driving aids can reduce fuel consumption62 and ease traffic congestion.63 Plus – let’s be honest – the idea of taking your hands off the steering wheel while doing 70 miles an hour, even if it’s only for a moment, is just … cool.

  But think back to Bainbridge’s warnings: they hint at a problem with how current self-driving technology is being framed.

  Take Tesla, one of the first car manufacturers to bring an autopilot to the market. There’s little doubt that their system has had a net positive impact, making driving safer for those who use it – you don’t need to look far to find online videos of the ‘Forward Collision Warning’ feature recognizing the risk of an accident before the driver does, sounding an alarm and saving the car from crashing.64

  But there’s a slight mismatch between what the cars can do – with what’s essentially a fancy forward-facing parking sensor and clever cruise control – and the language used to describe them. For instance, in October 2016 the company announced that ‘all Tesla cars being produced now have full self-driving hardware’.fn6 According to an article in The Verge, Elon Musk, product architect of Tesla, added: ‘The full autonomy update will be standard on all Tesla vehicles from here on out.’65 And that phrase ‘full autonomy’ is arguably at odds with the warning users must accept before using the current autopilot: ‘You need to maintain control and responsibility of your vehicle.’66

  Expectations are important. You may disagree, but I think that people shoving oranges in their steering wheels – or worse, as I found in the darker corners of the Web, creating and selling devices that ‘[allow] early adopters to [drive, while] reducing or turning off the autopilot check-in warning’fn7 – is the inevitable corollary of a trusted brand using language that misleads.

  Of course, Tesla isn’t the only culprit in the car industry. And every company on earth appeals to our fantasies to sell their products. But for me, there’s a difference between buying a perfume because I think it will make me more attractive, and buying a car because I think its full autonomy will keep me safe.

  Marketing strategies aside, I can’t help but wonder if we’re thinking about driverless cars in the wrong way altogether.

  By now, we know that humans are really good at understanding subtleties, at analysing context, applying experience and distinguishing patterns. We’re really bad at paying attention, at precision, at consistency and at being fully aware of our surroundings. We have, in short, precisely the opposite set of skills to algorithms.

  So, why not follow the lead of the tumour-finding software in the medical world and let the skills of the machine complement the skills of the human, and advance the abilities of both? Until we get to full autonomy, why not flip the equation on its head and aim for a self-driving system that supports the driver rather than the other way around? A safety net, like ABS or traction control, that can patiently monitor the road and stay alert for a danger the driver has missed. Not so much a chauffeur as a guardian.

  That is the idea behind work being done by the Toyota research institute. They’re building two modes into their car. There’s the ‘chauffeur’ mode, which – like Audi’s traffic-jam pilot – could take over in heavy congestion; and there’s the ‘guardian’ mode, which runs in the background while a human drives, and acts as a safety net,67 reducing the risk of an accident if anything crops up that the driver hasn’t seen.
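  To make the distinction concrete, here is a toy sketch of what a guardian-style override might look like. It illustrates the idea only and is in no way Toyota’s actual system; the two-second time-to-collision threshold is an assumption invented for the example:

```python
def guardian(gap_m: float, closing_speed_mps: float,
             human_throttle: float) -> float:
    """Pass the human's input through untouched unless a collision
    looks imminent, in which case override with full braking.

    A toy illustration of 'guardian mode'; the 2-second
    time-to-collision threshold is purely illustrative.
    """
    if closing_speed_mps > 0 and gap_m / closing_speed_mps < 2.0:
        return -1.0               # under 2 s to collision: emergency brake
    return human_throttle         # otherwise the human stays in charge
```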

  Volvo has adopted a similar approach. Its ‘Autonomous Emergency Braking’ system, which automatically slows the car down if it gets too close to a vehicle in front, is widely credited with the impressive safety record of the Volvo XC90. Since the car first went on sale in the UK in 2002, over 50,000 vehicles have been purchased, and not a single driver or passenger within any of them has been killed in a crash.68

  As with much of the driverless technology that is so keenly discussed, we’ll have to wait and see how this turns out. But one thing is for sure – as time goes on, autonomous driving will have a few lessons to teach us that apply well beyond the world of motoring. Not just about the messiness of handing over control, but about being realistic in our expectations of what algorithms can do.

  If this is going to work, we’ll have to adjust our way of thinking. We’re going to need to throw away the idea that cars should work perfectly every time, and accept that, while mechanical failure might be a rare event, algorithmic failure almost certainly won’t be any time soon.

  So, knowing that errors are inevitable, knowing that if we proceed we have no choice but to embrace uncertainty, the conundrums within the world of driverless cars will force us to decide how good something needs to be before we’re willing to let it loose on our streets. That’s an important question, and it applies elsewhere. How good is good enough? Once you’ve built a flawed algorithm that can calculate something, should you use it?

  Crime

  IT WAS A warm July day in 1995 when a 22-year-old university student packed up her books, left the Leeds library and headed back to her car. She’d spent the day putting the finishing touches to her dissertation and now she was free to enjoy the rest of her summer. But, as she sat in the front seat of her car getting ready to leave, she heard the sound of someone running through the multi-storey car park towards her. Before she had a chance to react, a man leaned in through the open window and held a knife to her throat. He forced her on to the back seat, tied her up, super-glued her eyelids together, took the wheel of the car and drove away.

  After a terrifying drive, he pulled up at a grassy embankment. She heard a clunk as he dropped his seat down and then a shuffling as he started undoing his clothes. She knew he was intending to rape her. Fighting blind, she pulled her knees up to her chest and pushed outwards with all her might, forcing him backwards. As she kicked and struggled, the knife in his hand cut into his fingers and his blood dripped on to the seats. He hit her twice in the face, but then, to her immense relief, got out of the car and left. Two hours after her ordeal had begun, the student was found wandering down Globe Road in Leeds, distraught and dishevelled, her shirt torn, her face red from where he’d struck her and her eyelids sealed with glue.1

  Sexual attacks on strangers like this one are incredibly rare, but when they do occur they tend to form part of a series. And sure enough, this wasn’t the first time the same man had struck. When police analysed blood from the car they found its DNA matched a sample from a rape carried out in another multi-storey car park two years earlier. That attack had taken place some 100 kilometres further south, in Nottingham. And, after an appeal on the BBC Crimewatch programme, police also managed to link the case to three other incidents a decade before in Bradford, Leeds and Leicester.2

  But tracking down this particular serial rapist was not going to be easy. Together, these crimes spanned an area of 7,046 square kilometres3 – an enormous stretch of the country. They also presented the police with a staggering number of potential suspects – 33,628 in total – each one of whom would have to be eliminated from their enquiries or investigated.

  An enormous search would have to be made, and not for the first time. The attacks ten years earlier had led to a massive man-hunt; but despite knocking on 14,153 front doors, and collecting numerous swabs, hair samples and all sorts of other evidence, the police investigation had eventually led nowhere. There was a serious risk that the latest search would follow the same path, until a Canadian ex-cop, Kim Rossmo, and his newly developed algorithm were brought in to help.4

  Rossmo had a bold idea. Rather than taking into account the vast amount of evidence already collected, his algorithm would ignore virtually everything. Instead, it would focus its attention exclusively on a single factor: geography.

  Perhaps, said Rossmo, a perpetrator doesn’t randomly choose where they target their victims. Perhaps their choice of location isn’t an entirely free or conscious decision. Even though these attacks had taken place up and down the country, Rossmo wondered if there could be an unintended pattern hiding in the geography of the crimes – a pattern simple enough to be exploited. There was a chance, he believed, that the locations at which crimes took place could betray where the criminal actually came from. The case of the serial rapist was a chance to put his theory to the test.

  Operation Lynx and the lawn sprinkler

  Rossmo wasn’t the first person to suggest that criminals unwittingly create geographical patterns. His ideas have a lineage that dates back to the 1820s, when André-Michel Guerry, a lawyer-turned-statistician who worked for the French Ministry of Justice, started collecting records of the rapes, murders and robberies that occurred in the various regions of France.5

  Although collecting these kinds of numbers seems a fairly standard thing to do now, at the time maths and statistics had only ever been applied to the hard sciences, where equations are used to elegantly describe the physical laws of the universe: tracing the path of a planet across the sky, calculating the forces within a steam engine – that sort of thing. No one had bothered to collect crime data before. No one had any idea what to count, how to count or how often they should count it. And anyway – people thought at the time – what was the point? Man was strong and independent in nature, wandering around acting according to his own free will. His behaviour couldn’t possibly be captured by the paltry practice of statistics.6

  But Guerry’s analysis of his national census of criminals suggested otherwise. No matter where you were in France, he found, recognizable patterns appeared in what crimes were committed, how – and by whom. Young people committed more crimes than old, men more than women, poor more than rich. Intriguingly, it soon became clear that these patterns didn’t change over time. Each region had its own set of crime statistics that would barely change year on year. With an almost terrifying exactitude, the numbers of robberies, rapes and murders would repeat themselves from one year to the next. And even the methods used by the murderers were predictable. This meant that Guerry and his colleagues could pick an area and tell you in advance exactly how many murders by knife, sword, stone, strangulation or drowning you could expect in a given year.7

  So maybe it wasn’t a question of the criminal’s free will after all. Crime is not random; people are predictable. And it was precisely that predictability that, almost two centuries after Guerry’s discovery, Kim Rossmo wanted to exploit.

  Guerry’s work focused on the patterns found at the country and regional levels, but even at the individual level, it turns out that people committing crime still create reliable geographical patterns. Just like the rest of us, criminals tend to stick to areas they are familiar with. They operate locally. That means that even the most serious of crimes will probably be carried out close to where the offender lives. And, as you move further and further away from the scene of the crime, the chance of finding your perpetrator’s home slowly drops away,8 an effect known to criminologists as ‘distance decay’.

  On the other hand, serial offenders are unlikely to target victims who live very close by, wanting to avoid unnecessary police attention on their doorstep or being recognized by neighbours. The result is a so-called ‘buffer zone’ encircling the offender’s home: a region in which there’ll be a very low chance of their committing a crime.9

  These two key patterns – distance decay and the buffer zone – hidden among the geography of the most serious crimes, were at the heart of Rossmo’s algorithm. Starting with a crime scene pinned on to a map, Rossmo realized he could mathematically balance these two factors and sketch out a picture of where the perpetrator might live.
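  For a flavour of how those two factors can be balanced, here is a minimal sketch in Python, loosely modelled on Rossmo’s published formula. The grid, the use of Manhattan distance and the parameter values B, f and g are all illustrative assumptions for this example, not the carefully tuned values a real geoprofiling system would use:

```python
import numpy as np

def geoprofile(crimes, xs, ys, B=1.0, f=1.2, g=1.2):
    """Score each grid cell for how likely the offender is to live there.

    Balances the two patterns described in the text:
      - distance decay: beyond the buffer radius B, a crime scene's
        contribution falls off with distance;
      - buffer zone: within B of a crime scene the score is suppressed,
        rising towards the edge of the buffer.
    """
    score = np.zeros((len(ys), len(xs)))
    for cx, cy in crimes:                              # one pass per crime scene
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                d = abs(x - cx) + abs(y - cy)          # Manhattan distance
                if d > B:
                    score[j, i] += 1.0 / d**f          # distance decay
                else:
                    score[j, i] += B**(g - f) / (2*B - d)**g  # buffer zone
    return score
```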

  That picture isn’t especially helpful when only one crime has been committed. Without enough information to go on, the so-called geoprofiling algorithm won’t tell you much more than good old-fashioned common sense. But, as more crimes are added, the picture starts to sharpen, slowly bringing into focus a map of the city that highlights areas in which you’re most likely to catch your culprit.

  It’s as if the serial offender is a rotating lawn sprinkler. Just as it would be difficult to predict where the very next drop of water is going to fall, you can’t foresee where your criminal will attack next. But once the water has been spraying for a while and many drops have fallen, it’s relatively easy to observe from the pattern of the drops where the lawn sprinkler is likely to be situated.
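  Continuing the sketch above (with made-up coordinates), you can watch the lawn-sprinkler effect directly: one crime gives a broad, uninformative surface, while five crimes produce a distinct peak near the ‘sprinkler’:

```python
xs = ys = np.linspace(0.0, 8.0, 80)           # an illustrative 8 x 8 km grid
one = geoprofile([(3.0, 4.0)], xs, ys)        # one crime: broad and vague
five = geoprofile([(3.0, 4.0), (5.5, 2.0), (2.0, 6.5),
                   (6.0, 5.0), (4.0, 3.5)], xs, ys)   # five crimes: a sharp peak
j, i = np.unravel_index(five.argmax(), five.shape)
print(f'Highest-priority cell: x = {xs[i]:.1f} km, y = {ys[j]:.1f} km')
```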

  And so it was with Rossmo’s algorithm for Operation Lynx – the hunt for the serial rapist. The team now had the locations of five separate crimes, plus several places where a stolen credit card had been used by the attacker to buy alcohol, cigarettes and a video game. On the basis of just those locations, the algorithm highlighted two key areas in which it believed the perpetrator was likely to live: Millgarth and Killingbeck, both in the suburbs of Leeds.10

  Back in the incident room, police had one other key piece of evidence to go on: a partial fingerprint left by the attacker at the scene of an earlier crime. It was too small a sample for an automatic fingerprint recognition system to be able to whizz through a database of convicted criminals’ prints looking for a match, so any comparisons would need to be made meticulously by an expert with a magnifying glass, painstakingly examining one suspect at a time. By now the operation was almost three years old and – despite the best efforts of 180 different officers from five different forces – it was beginning to run out of steam. Every lead resulted in just another dead end.

  Officers decided to manually check all the fingerprints recorded in the two places the algorithm had highlighted. First up was Millgarth: but a search through the prints stored in the local police database returned nothing. Then came Killingbeck – and after 940 hours of sifting through the records here, the police finally came up with a name: Clive Barwell.

  Barwell was a 42-year-old married man and father of four, who had been in jail for armed robbery during the hiatus in the attacks. He now worked as a lorry driver and would regularly make long trips up and down the country in the course of his job; but he lived in Killingbeck and would often visit his mother in Millgarth, the two areas highlighted by the algorithm.11 The partial print on its own hadn’t been enough to identify him conclusively, but a subsequent DNA test proved that it was he who had committed these horrendous crimes. The police had their man. Barwell pleaded guilty in court in October 1999. The judge sentenced him to eight back-to-back life sentences.12

  Once it was all over, Rossmo had the chance to take stock of how well the algorithm had performed. It had never actually pinpointed Barwell by name, but it did highlight on a map the areas where the police should focus their attention. If the police had used the algorithm to prioritize their list of suspects on the basis of where each of them lived – checking the fingerprints and taking DNA swabs of each one in turn – there would have been no need to trouble anywhere near as many innocent people. They would have found Clive Barwell after searching only 3 per cent of the area.13
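  In code terms, and still using the hypothetical sketch from earlier, prioritizing a suspect list amounts to nothing more than sorting it by the profile score at each suspect’s home address (the names and coordinates here are invented for the example):

```python
def prioritise(suspects, profile, xs, ys):
    """Order suspects by the geoprofile score at their home address."""
    def score_at(x, y):
        i = int(np.abs(xs - x).argmin())      # nearest grid column
        j = int(np.abs(ys - y).argmin())      # nearest grid row
        return profile[j, i]
    return sorted(suspects, key=lambda name: score_at(*suspects[name]),
                  reverse=True)

homes = {'suspect A': (3.2, 4.1), 'suspect B': (7.6, 0.4), 'suspect C': (4.1, 3.6)}
print(prioritise(homes, five, xs, ys))        # check the top of this list first
```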

 
