by Hannah Fry
So, imagine for a moment: what if we accepted that perfection doesn’t exist? Algorithms will make mistakes. Algorithms will be unfair. That should in no way distract us from the fight to make them more accurate and less biased wherever we can – but perhaps acknowledging that algorithms aren’t perfect, any more than humans are, might just have the effect of diminishing any assumption of their authority.
Imagine that, rather than exclusively focusing our attention on designing our algorithms to adhere to some impossible standard of perfect fairness, we instead designed them to facilitate redress when they inevitably erred; that we put as much time and effort into ensuring that automatic systems were as easy to challenge as they are to implement. Perhaps the answer is to build algorithms to be contestable from the ground up. Imagine that we designed them to support humans in their decisions, rather than instruct them. To be transparent about why they came to a particular decision, rather than just inform us of the result.
In my view, the best algorithms are the ones that take the human into account at every stage. The ones that recognize our habit of over-trusting the output of a machine, while embracing their own flaws and wearing their uncertainty proudly front and centre.
This was one of the best features of the IBM Watson Jeopardy-winning machine. While the format of the quiz show meant it had to commit to a single answer, the algorithm also presented a series of alternatives it had considered in the process, along with a score indicating how confident it was in each being correct. Perhaps if likelihood of recidivism scores included something similar, judges might find it easier to question the information the algorithm was offering. And perhaps if facial recognition algorithms presented a number of possible matches, rather than just homing in on a single face, misidentification might be less of an issue.
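The design pattern Watson illustrates, surfacing several candidate answers together with an explicit confidence score rather than committing to a single verdict, is easy to express in code. Below is a minimal, hypothetical sketch (the `candidate_scores` data and the `top_candidates` function are invented for illustration and are not drawn from IBM's system) of how raw model scores can be turned into a ranked shortlist with probabilities, keeping the human in a position to question the output.

```python
# Hypothetical sketch: report a ranked shortlist of candidates with
# confidence scores instead of committing to a single answer.
# The raw scores below are invented for illustration only.

import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

def top_candidates(scores, k=3):
    """Return the k most likely answers, each with its confidence."""
    probs = softmax(scores)
    ranked = sorted(probs.items(), key=lambda item: item[1], reverse=True)
    return ranked[:k]

# Invented raw scores for three possible answers to a question.
candidate_scores = {"Toronto": 2.1, "Chicago": 1.7, "New York": 0.4}

for answer, confidence in top_candidates(candidate_scores):
    print(f"{answer}: {confidence:.0%} confident")
```

Presenting the runners-up alongside the winner costs almost nothing, but it changes the character of the output from a pronouncement into a suggestion.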
The same feature is what makes the neural networks that screen breast cancer slides so effective. The algorithm doesn’t dictate which patients have tumours. It narrows down the vast array of cells to a handful of suspicious areas for the pathologist to check. The algorithm never gets tired and the pathologist rarely misdiagnoses. The algorithm and the human work together in partnership, exploiting each other’s strengths and embracing each other’s flaws.
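The screening partnership follows a related pattern: the algorithm triages, the human decides. Here is a minimal sketch of that division of labour, assuming an invented `suspicion_score` stand-in for the trained model and an arbitrary threshold; none of the names or numbers come from a real screening system.

```python
# Hypothetical sketch of an algorithm-plus-pathologist triage loop.
# suspicion_score() stands in for a trained model; the threshold and
# the example regions are invented for illustration.

def suspicion_score(region):
    """Stand-in for a model that scores a tissue region from 0 to 1."""
    return region["score"]

def triage(regions, threshold=0.7):
    """Flag only the most suspicious regions for human review."""
    flagged = [r for r in regions if suspicion_score(r) >= threshold]
    return sorted(flagged, key=suspicion_score, reverse=True)

slide_regions = [
    {"id": "A1", "score": 0.95},
    {"id": "B4", "score": 0.12},
    {"id": "C7", "score": 0.81},
]

for region in triage(slide_regions):
    # The final diagnosis stays with the pathologist.
    print(f"Region {region['id']} flagged for pathologist review")
```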
There are other examples, too – including in the world of chess, where this book began. Since losing to Deep Blue, Garry Kasparov hasn’t turned his back on computers. Quite the opposite. Instead, he has become a great advocate of the idea of ‘Centaur Chess’, where a human player and an algorithm collaborate with one another to compete with another hybrid team. The algorithm assesses the possible consequences of each move, reducing the chance of a blunder, while the human remains in charge of the game.
Here’s how Kasparov puts it: ‘When playing with the assistance of computers, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.’2 The result is chess played at a higher level than has ever been seen before. Perfect tactical play and beautiful, meaningful strategies. The very best of both worlds.
This is the future I’m hoping for. One where the arrogant, dictatorial algorithms that fill many of these pages are a thing of the past. One where we stop seeing machines as objective masters and start treating them as we would any other source of power. By questioning their decisions; scrutinizing their motives; acknowledging our emotions; demanding to know who stands to benefit; holding them accountable for their mistakes; and refusing to become complacent. I think this is the key to a future where the net overall effect of algorithms is a positive force for society. And it’s only right that it’s a job that rests squarely on our shoulders. Because one thing is for sure. In the age of the algorithm, humans have never been more important.
Photograph Credits
1. Here: ‘Car–dog’, reproduced by permission of Danilo Vasconcellos Vargas, Kyushu University, Fukuoka, Japan.
2. Here: ‘Gorilla in chest’, reproduced by permission of Trafton Drew, University of Utah, Salt Lake City, USA.
3. Here: ‘Steve Talley images’ © Steve Talley (left) and FBI.
4. Here: ‘Neil Douglas and doppelgänger’, reproduced by permission of Neil Douglas.
5. Here: ‘Tortoiseshell glasses’, reproduced by permission of Mahmood Sharif, Carnegie Mellon University, Pittsburgh, USA; ‘Milla Jovovich at the Cannes Film Festival’ by Georges Biard.
Notes
A note on the title
1 Brian W. Kernighan and Dennis M. Ritchie. The C Programming Language (Upper Saddle River, NJ: Prentice-Hall, 1978).
Introduction
1 Robert A. Caro, The Power Broker: Robert Moses and the Fall of New York (London: Bodley Head, 2015), p. 318.
2 There are a couple of fantastic essays on this very idea that are well worth reading. First, Langdon Winner, ‘Do artifacts have politics?’, Daedalus, vol. 109, no. 1, 1980, pp. 121–36, https://www.jstor.org/stable/20024652, which includes the example of Moses’ bridges. And a more modern version: Kate Crawford, ‘Can an algorithm be agonistic? Ten scenes from life in calculated publics’, Science, Technology and Human Values, vol. 41, no. 1, 2016, pp. 77–92.
3 Scunthorpe Evening Telegraph, 9 April 1996.
4 Chukwuemeka Afigbo (@nke_ise) posted a short video of this effect on Twitter. Worth looking up if you haven’t seen it. It’s also on YouTube: https://www.youtube.com/watch?v=87QwWpzVy7I.
5 CNN interview, Mark Zuckerberg: ‘I’m really sorry that this happened’, YouTube, 21 March 2018, https://www.youtube.com/watch?v=G6DOhioBfyY.
Power
1 From a private conversation with the chess grandmaster Jonathan Rowson.
2 Feng-Hsiung Hsu, ‘IBM’s Deep Blue Chess grandmaster chips’, IEEE Micro, vol. 19, no. 2, 1999, pp. 70–81, http://ieeexplore.ieee.org/document/755469/.
3 Garry Kasparov, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (London: Hodder & Stoughton, 2017).
4 TheGoodKnight, ‘Deep Blue vs Garry Kasparov Game 2 (1997 Match)’, YouTube, 18 Oct. 2012, https://www.youtube.com/watch?v=3Bd1Q2rOmok&t=2290s.
5 Ibid.
6 Steven Levy, ‘Big Blue’s Hand of God’, Newsweek, 18 May 1997, http://www.newsweek.com/big-blues-hand-god-173076.
7 Kasparov, Deep Thinking, p. 187.
8 Ibid., p. 191.
9 According to Merriam–Webster. The Oxford English Dictionary’s definition makes more of the mathematical nature of algorithms: ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer’.
10 There are lots of different ways you could group algorithms, and I have no doubt that some computer scientists will complain that this list is too simplistic. It’s true that a more exhaustive list would have included several other category headers: mapping, reduction, regression and clustering, to name a few. But in the end, I chose this particular set of categories – from Nicholas Diakopoulos, Algorithmic Accountability Reporting: On the Investigation of Black Boxes (New York: Tow Center for Digital Journalism, Columbia University, 2014) – as it does a great job at covering the basics and offers a useful way to demystify and distil a vast, complex area of study.
11 Kerbobotat, ‘Went to buy a baseball bat on Amazon, they have some interesting suggestions for accessories’, Reddit, 28 Sept. 2013, https://www.reddit.com/r/funny/comments/1nb16l/went_to_buy_a_baseball_bat_on_amazon_they_have/.
12 Sarah Perez, ‘Uber debuts a “smarter” UberPool in Manhattan’, TechCrunch, 22 May 2017, https://techcrunch.com/2017/05/22/uber-debuts-a-smarter-uberpool-in-manhattan/.
13 I say ‘in theory’ deliberately. The reality might be a little different. Some algorithms have been built over years by hundreds, even thousands, of developers, each incrementally adding their own steps to the process. As the lines of code grow, so does the complexity of the system, until the logical threads become like a tangled plate of spaghetti. Eventually, the algorithm becomes impossible to follow, and far too complicated for any one human to understand.
In 2013 Toyota was ordered to pay $3 million in compensation after a fatal crash involving one of its vehicles. The car had accelerated uncontrollably, despite the driver having her foot on the brake rather than the throttle at the time. An expert witness told the jury that an unintended instruction, hidden deep within the vast tangled mess of software, was to blame. See Phil Koopman, A case study of Toyota unintended acceleration and software safety (Pittsburgh: Carnegie Mellon University, 18 Sept. 2014), https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_slides.pdf.
14 This illusion (the example here is taken from https://commons.wikimedia.org/wiki/File:Vase_of_rubin.png) is known as Rubin’s vase, after Edgar Rubin, who developed the idea. It is an example of an ambiguous image – right on the border between two shadowed faces looking towards each other, and an image of a white vase. As it’s drawn, it’s fairly easy to switch between the two in your mind, but all it would take is a couple of lines on the picture to push it in one direction or another. Perhaps a faint outline of the eyes on the faces, or the shadow on the neck of the vase.
The dog/car example of image recognition is a similar story. The team found a picture that was right on the cusp between two different classifications and used the smallest perturbation possible to shift the image from one category to another in the eye of the machine.
15 Jiawei Su, Danilo Vasconcellos Vargas and Kouichi Sakurai, ‘One pixel attack for fooling deep neural networks’, arXiv:1710.08864v4 [cs.LG], 22 Feb. 2018, https://arxiv.org/pdf/1710.08864.pdf.
16 Chris Brooke, ‘“I was only following satnav orders” is no defence: driver who ended up teetering on cliff edge convicted of careless driving’, Daily Mail, 16 Sept. 2009, http://www.dailymail.co.uk/news/article-1213891/Driver-ended-teetering-cliff-edge-guilty-blindly-following-sat-nav-directions.html#ixzz59vihbQ2n.
17 Ibid.
18 Robert Epstein and Ronald E. Robertson, ‘The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections’, Proceedings of the National Academy of Sciences, vol. 112, no. 33, 2015, pp. E4512–21, http://www.pnas.org/content/112/33/E4512.
19 David Shultz, ‘Internet search engines may be influencing elections’, Science, 7 Aug. 2015, http://www.sciencemag.org/news/2015/08/internet-search-engines-may-be-influencing-elections.
20 Epstein and Robertson, ‘The search engine manipulation effect (SEME)’.
21 Linda J. Skitka, Kathleen Mosier and Mark D. Burdick, ‘Accountability and automation bias’, International Journal of Human–Computer Studies, vol. 52, 2000, pp. 701–17, http://lskitka.people.uic.edu/IJHCS2000.pdf.
22 KW v. Armstrong, US District Court, D. Idaho, 2 May 2012, https://scholar.google.co.uk/scholar_case?case=17062168494596747089&hl=en&as_sdt=2006.
23 Jay Stanley, Pitfalls of Artificial Intelligence Decision-making Highlighted in Idaho ACLU Case, American Civil Liberties Union, 2 June 2017, https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case.
24 ‘K.W. v. Armstrong’, Leagle.com, 24 March 2014, https://www.leagle.com/decision/infdco20140326c20.
25 Ibid.
26 ACLU Idaho staff, https://www.acluidaho.org/en/about/staff.
27 Stanley, Pitfalls of Artificial Intelligence Decision-making.
28 ACLU, Ruling mandates important protections for due process rights of Idahoans with developmental disabilities, 30 March 2016, https://www.aclu.org/news/federal-court-rules-against-idaho-department-health-and-welfare-medicaid-class-action.
29 Stanley, Pitfalls of Artificial Intelligence Decision-making.
30 Ibid.
31 Ibid.
32 Ibid.
33 Ibid.
34 Kristine Phillips, ‘The former Soviet officer who trusted his gut – and averted a global nuclear catastrophe’, Washington Post, 18 Sept. 2017, https://www.washingtonpost.com/news/retropolis/wp/2017/09/18/the-former-soviet-officer-who-trusted-his-gut-and-averted-a-global-nuclear-catastrophe/?utm_term=.6546e0f06cce.
35 Pavel Aksenov, ‘Stanislav Petrov: the man who may have saved the world’, BBC News, 26 Sept. 2013, http://www.bbc.co.uk/news/world-europe-24280831.
36 Ibid.
37 Stephen Flanagan, Re: Accident at Smiler Rollercoaster, Alton Towers, 2 June 2015: Expert’s Report, prepared at the request of the Health and Safety Executive, Oct. 2015, http://www.chiark.greenend.org.uk/~ijackson/2016/Expert%20witness%20report%20from%20Steven%20Flanagan.pdf.
38 Paul E. Meehl, Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence (Minneapolis: University of Minnesota, 1996; first publ. 1954), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.6031&rep=rep1&type=pdf.
39 William M. Grove, David H. Zald, Boyd S. Lebow, Beth E. Snitz and Chad Nelson, ‘Clinical versus mechanical prediction: a meta-analysis’, Psychological Assessment, vol. 12, no. 1, 2000, p. 19.
40 Berkeley J. Dietvorst, Joseph P. Simmons and Cade Massey, ‘Algorithm aversion: people erroneously avoid algorithms after seeing them err’, Journal of Experimental Psychology, Sept. 2014, http://opim.wharton.upenn.edu/risk/library/WPAF201410-AlgorithmAversion-Dietvorst-Simmons-Massey.pdf.
Data
1 Nicholas Carlson, ‘Well, these new Zuckerberg IMs won’t help Facebook’s privacy problems’, Business Insider, 13 May 2010, http://www.businessinsider.com/well-these-new-zuckerberg-ims-wont-help-facebooks-privacy-problems-2010-5?IR=T.
2 Clive Humby, Terry Hunt and Tim Phillips, Scoring Points: How Tesco Continues to Win Customer Loyalty (London: Kogan Page, 2008).
3 Ibid., Kindle edn, 1313–17.
4 See Eric Schmidt, ‘The creepy line’, YouTube, 11 Feb. 2013, https://www.youtube.com/watch?v=o-rvER6YTss.
5 Charles Duhigg, ‘How companies learn your secrets’, New York Times, 16 Feb. 2012, https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html.
6 Ibid.
7 Sarah Buhr, ‘Palantir has raised $880 million at a $20 billion valuation’, TechCrunch, 23 Dec. 2015.
8 Federal Trade Commission, Data Brokers: A Call for Transparency and Accountability (Washington, DC, May 2014), https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf.
9 Ibid.
10 Wolfie Christl, Corporate Surveillance in Everyday Life, Cracked Labs, June 2017, http://crackedlabs.org/en/corporate-surveillance.
11 Heidi Waterhouse, ‘The death of data: retention, rot, and risk’, The Lead Developer, Austin, Texas, 2 March 2018, https://www.youtube.com/watch?v=mXvPChEo9iU.
12 Amit Datta, Michael Carl Tschantz and Anupam Datta, ‘Automated experiments on ad privacy settings’, Proceedings on Privacy Enhancing Technologies, no. 1, 2015, pp. 92–112.
13 Latanya Sweeney, ‘Discrimination in online ad delivery’, Queue, vol. 11, no. 3, 2013, p. 10, https://dl.acm.org/citation.cfm?id=2460278.
14 Jon Brodkin, ‘Senate votes to let ISPs sell your Web browsing history to advertisers’, Ars Technica, 23 March 2017, https://arstechnica.com/tech-policy/2017/03/senate-votes-to-let-isps-sell-your-web-browsing-history-to-advertisers/.
15 Svea Eckert and Andreas Dewes, ‘Dark data’, DEFCON Conference 25, 20 Oct. 2017, https://www.youtube.com/watch?v=1nvYGi7-Lxo.
16 The researchers based this part of their work on Arvind Narayanan and Vitaly Shmatikov, ‘Robust de-anonymization of large sparse datasets’, paper presented to IEEE Symposium on Security and Privacy, 18–22 May 2008.
17 Michal Kosinski, David Stillwell and Thore Graepel, ‘Private traits and attributes are predictable from digital records of human behavior’, Proceedings of the National Academy of Sciences, vol. 110, no. 15, 2013, pp. 5802–5.
18 Ibid.
19 Wu Youyou, Michal Kosinski and David Stillwell, ‘Computer-based personality judgments are more accurate than those made by humans’, Proceedings of the National Academy of Sciences, vol. 112, no. 4, 2015, pp. 1036–40.
20 S. C. Matz, M. Kosinski, G. Nave and D. J. Stillwell, ‘Psychological targeting as an effective approach to digital mass persuasion’, Proceedings of the National Academy of Sciences, vol. 114, no. 48, 2017, 201710966.
21 Paul Lewis and Paul Hilder, ‘Leaked: Cambridge Analytica’s blueprint for Trump victory’, Guardian, 23 March 2018.
22 ‘Cambridge Analytica planted fake news’, BBC, 20 March 2018, http://www.bbc.co.uk/news/av/world-43472347/cambridge-analytica-planted-fake-news.
23 Adam D. I. Kramer, Jamie E. Guillory and Jeffrey T. Hancock, ‘Experimental evidence of massive-scale emotional contagion through social networks’, Proceedings of the National Academy of Sciences, vol. 111, no. 24, 2014, pp. 8788–90.
24 Jamie Bartlett, ‘Big data is watching you – and it wants your vote’, Spectator, 24 March 2018.
25 Li Xiaoxiao, ‘Ant Financial Subsidiary Starts Offering Individual Credit Scores’, Caixin, 2 March 2015, https://www.caixinglobal.com/2015-03-02/101012655.html.
26 Rick Falkvinge, ‘In China, your credit score is now affected by your political opinions – and your friends’ political opinions’, Privacy News Online, 3 Oct. 2015, https://www.privateinternetaccess.com/blog/2015/10/in-china-your-credit-score-is-now-affected-by-your-political-opinions-and-your-friends-political-opinions/.