The Formula: How Algorithms Solve All Our Problems... and Create More


by Luke Dormehl


  What is it about the modern world that makes us demand easy answers? Is it that we are naturally pattern-seeking creatures, as the statistician Nate Silver argues in The Signal and the Noise? Or is there something about the march of technology itself that demands the kind of answers only an algorithm can provide?

  “[The algorithm does] seem to be a key metaphor for what matters now in terms of organizing the world,” acknowledges McKenzie Wark, a media theorist who has written about digital technologies for the last 20 years. “If one thinks of algorithms as processes which terminate and generate a result, there’s a moment when the process ceases and you have your answer. If something won’t terminate then it probably means that your computer has gone wrong. There’s a sense that we [increasingly] structure reality around processes that will yield results—that we’ve embedded machine logic in the world.”

  The idea of the black box is one that comes up a lot when discussing algorithms, and it is one that Bruno Latour seizes upon as a powerful metaphor in his work. The black box is, he notes, a term used by cyberneticians whenever a piece of machinery or a set of commands is too complex. In its place, the black box stands in as an opaque substitute for a technology about which nothing needs to be known other than its inputs and outputs. Once opened, it forces both creators and users to confront the subjective biases and processes that have produced a certain answer. Closed, it becomes the embodiment of objectivity: a machine capable of spitting out binary “yes” or “no” answers without further need of qualification. “Insofar as they consider all the black boxes well sealed, people do not, any more than scientists, live in a world of fiction, representation, symbol, approximation, convention,” Latour observes. “They are simply right.” In this vein, we might also turn to Slavoj Žižek’s conception of the toilet bowl: a seemingly straightforward technological mechanism through which excrement disappears from our reality and enters another space we phenomenologically perceive to be a messier, more chaotically primordial reality.34

  It is possible to see some of this thinking in the work of Richard Berk, whom I profiled in Chapter 3. “It frees me up,” Berk said of his future crime prediction algorithm. “I don’t care whether it makes sense in any kind of causal way.” While Berk’s aim is to extract actionable information with which to predict future criminality, one could argue that by black-boxing the inner workings of the technology, something similar has been done to the underlying social dynamics. In other areas—particularly those relating to law—a reliance on algorithms might simply justify existing bias and lack of understanding, in the same way that the “filter bubble” effect described in Chapter 1 can result in some people never being presented with certain pieces of information, information that may take the form of opportunities.

  “It’s not just you and I who don’t understand how these algorithms work—the engineers themselves don’t understand them entirely,” says scholar Ted Striphas. “If you look at the Netflix Prize, one of the things the people responsible for the winning entries said over and over again was that their algorithms worked, even though they couldn’t tell you why they worked. They might understand how they work from the point of view of mathematical principles, but that math is so complex that it is impossible for a human being to truly follow. That troubles me to some extent. The idea that we don’t know the world that we’re creating makes it very difficult for us to operate ethically and mindfully within it.”

  How to Stay Human in the World of The Formula

  One of the most disconcerting algorithms I came across during my research was the so-called TruthTeller algorithm, developed by the Washington Post in 2012 to coincide with that year’s presidential election season. Capable of scanning through political speeches in real time and informing us when we are being lied to, the TruthTeller is an uncomfortable reminder of both our belief in algorithmic objectivity and our desire for easy answers. In a breathless article, Geek.com described it as “the most robust, automated way to tell whether a politician is lying or not, even more [accurate] than a polygraph test . . . because politicians are so delusional they end up genuinely believing their lies.” The algorithm works by using speech recognition technology developed by Microsoft to convert audio signals into words, before handing the completed transcript over to a matching algorithm, which combs through it and compares alleged “facts” to a database of previously recorded, proven facts.35
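  To make that pipeline concrete, here is a toy sketch in Python of the matching stage. The speech-to-text step is stubbed out as a ready-made transcript, and the miniature “fact database” and function names are inventions for illustration; the Post’s actual system is considerably more sophisticated.

    # Toy sketch of a TruthTeller-style fact-matching stage (illustrative only).
    # Assumes the speech-recognition step has already produced a transcript.
    import difflib

    # Invented miniature database of previously verified claims.
    FACT_DATABASE = {
        "unemployment fell last year": True,
        "the deficit doubled under my opponent": False,
    }

    def check_claim(claim: str) -> str:
        """Fuzzily match a transcribed claim against known, verified claims."""
        match = difflib.get_close_matches(
            claim.lower(), FACT_DATABASE.keys(), n=1, cutoff=0.8
        )
        if not match:
            return "UNVERIFIED"
        return "TRUE" if FACT_DATABASE[match[0]] else "FALSE"

    transcript = ["Unemployment fell last year", "The moon is made of cheese"]
    for claim in transcript:
        print(claim, "->", check_claim(claim))

  Even in this toy form the crucial point is visible: the verdict is only ever as “objective” as the database of proven facts somebody chose to put behind it.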

  Imagine the potential for manipulation should such a technology ever ascend beyond simple gimmickry to enjoy the ubiquity of, for instance, automated spell-checking algorithms. If such a tool were implemented within a future edition of MS Word or Google Docs, it is not inconceivable that users might one day finish typing a document and hit a single button—at which point it is auto-checked for spelling, punctuation, formatting and truthfulness. Already there is widespread use of algorithms in academia for sifting through submitted work and pulling up passages that may or may not be plagiarized. Such tools will only become more widespread as natural language processing grows more sophisticated and able to move beyond simple passage comparison to detailed analysis of content and ideas.
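  Simple passage comparison of the kind such tools rely on can be sketched in a few lines of Python, using overlapping word “shingles” and a Jaccard similarity score. The example texts and the choice of trigram size are invented for illustration; commercial plagiarism detectors layer far more on top.

    # Toy plagiarism check: overlap of word trigrams between two texts.
    def shingles(text: str, n: int = 3) -> set:
        """Return the set of n-word sequences ("shingles") in a text."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a: set, b: set) -> float:
        """Similarity of two sets: size of overlap over size of union."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    source = "the quick brown fox jumps over the lazy dog"
    submission = "the quick brown fox jumps over a sleeping dog"

    score = jaccard(shingles(source), shingles(submission))
    print(f"Trigram overlap: {score:.2f}")  # flag for review above a threshold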

  There is no one-size-fits-all answer to how best to deal with algorithms. In some cases, increased transparency would appear to be the answer. Where algorithms are used to enforce laws, for instance, releasing the source code to the general public would both protect against the dangers of unchecked government policy-making and make it possible to determine how specific decisions have been reached. But this approach would not work in matters of national security, where revealing the inner workings of specific black boxes would enable certain individuals to “game” the system in such a way as to render the algorithm useless.36

  A not dissimilar paradox can be seen in what happened to Google’s Flu Trends algorithm in 2013. Heralded as a breakthrough thanks to its ability to track the spread of flu through semantic analysis of user searches, Flu Trends ran into an unexpected problem when it received so much media attention that its algorithm began to malfunction. The algorithm had been designed to spot searches like “medicine for cough and fever” and to assume that those users were sick; what Google found instead was that people were typing in flu-related searches to look for information about Google’s own algorithm. Seeing spikes in its data, Google predicted near-epidemic levels of flu, which then failed to materialize. The company wound up admitting that while its algorithms might be on top of the flu problem, they were also “susceptible to heightened media coverage.” “The lesson here is rich with irony,” wrote InformationWeek journalist Thomas Claburn when he reported the story. “To effectively assess data from a public source, the algorithm must remain private, or [else] someone will attempt to introduce bias.”37
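  The fragility is easy to simulate. In the toy Python model below, flu incidence is estimated as a fixed multiple of the share of all searches that look flu-related. The numbers and the calibration constant are invented, and Google’s real model was a carefully trained regression over many query terms, but the failure mode is the same: the estimator cannot tell sick searchers from curious ones.

    # Toy model of the Flu Trends failure mode: query share as a flu proxy.
    def estimate_flu_level(flu_queries: int, total_queries: int,
                           calibration: float = 5.0) -> float:
        """Estimated flu cases per 100,000, proportional to flu-query share."""
        return calibration * 100_000 * flu_queries / total_queries

    # A normal week: 2,000 flu-related searches out of 10 million.
    print(estimate_flu_level(2_000, 10_000_000))   # ~100 cases per 100,000

    # A week of heavy media coverage: curiosity-driven searches about
    # Flu Trends itself inflate the count without any extra illness.
    print(estimate_flu_level(6_000, 10_004_000))   # ~300: a phantom spike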

  Beyond this there is the question of how we stay human in the world of The Formula. After all, if the original dream of hard AI was to create a computer that could believably behave like a human, then the utilitarian opposite is somehow to reduce human activity to the point where it becomes as predictable as a computer’s.

  We can find many cases where the algorithmization of profoundly human activities risks losing what makes them special in the first place. As Jaron Lanier argues in his techno-skeptical book, You Are Not a Gadget, “Being a person is not a pat formula, but a quest, a mystery, a leap of faith.”38

  But this is easier said than done in a world increasingly subject to algorithmic procedure. So how, then, to survive? Certainly, there is a minority currently engaged in what we might term “algorithmic data jamming,” trying to develop tactics to obscure or evade algorithms’ attempts to know and categorize them. But to do this means losing out on some of the most valuable tools of the modern age. Since algorithms increasingly define what is relevant, it also means stepping away from many matters of public discourse.

  Instead, I propose that users learn more about the world of The Formula, since knowledge of these algorithmic processes is going to be key to many of the important debates going forward—in everything from human relationships to law. For instance, police must establish reasonable suspicion for a “stop and search” to take place, by pointing to “specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion.” In this case, does “the algorithm said so” (provided the algorithm is shown to work effectively) give rise to suspicion reasonable enough to justify such a stop?39

  Similarly, if human relationships with algorithms are not “real,” are they at least “real enough”? In 2013, a group of researchers in Spain successfully coded a piece of software designed to mimic the language and attitude of a 14-year-old girl. Negobot—also referred to as the “Virtual Lolita”—is designed to trap potential pedophiles in online chat rooms. Starting in a neutral mode in which it talks with strangers about ordinary subjects, Negobot switches into “game mode” when a stranger begins communicating in innuendo or with overt sexual overtones, and tries to provoke the person on the other end of the conversation to agree to a meet-up. Should such a technology be adopted by law enforcers, it would suggest that feelings toward an algorithm are at least “real enough” to warrant prosecution. As Al Gore has said, “The ability to code and understand the power of computing is crucial to success in today’s hyper-connected world.”40
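  Negobot’s mode-switching can be pictured as a simple state machine. The Python sketch below is a deliberately crude approximation: the trigger-word list and the canned replies are invented, and the real system reportedly relied on game theory and several graded conversation levels rather than simple keyword spotting.

    # Crude two-state sketch of a Negobot-style conversational agent.
    import re

    SUSPICIOUS_TERMS = {"secret", "alone", "meet", "photos"}  # invented list

    class DecoyBot:
        def __init__(self) -> None:
            self.mode = "neutral"  # begins by chatting about ordinary topics

        def respond(self, message: str) -> str:
            words = set(re.findall(r"[a-z]+", message.lower()))
            if self.mode == "neutral" and words & SUSPICIOUS_TERMS:
                self.mode = "game"  # escalate once innuendo is detected
            if self.mode == "game":
                # In game mode, steer toward a meet-up investigators can act on.
                return "Maybe. Where would we even meet up?"
            return "School was so boring today. What did you do?"

    bot = DecoyBot()
    print(bot.respond("Hi, how was your day?"))   # neutral mode
    print(bot.respond("Can you keep a secret?"))  # switches to game mode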

  Most important of all is asking questions—and not expecting simple answers. One of these questions involves not just what algorithms are doing to us—but what they are designed to do in the first place. This is a pressing question, and one that needs to be asked particularly in cases where a service is ostensibly free to users. Words like “relevant” and “newsworthy” are loaded terms that encourage (but often fail to answer) the seemingly obvious follow-up question: “relevant” and “newsworthy” to whom? In the case of a company like Google, the answer is simple: to the company’s shareholders, of course. Facebook’s algorithms can similarly be viewed as a formula for maintaining and building your friendship circle—but of course the reality is that Facebook’s purpose isn’t to make you friends, but rather to monetize your social graph through advertising.41

  Hopefully, this questioning process is starting to happen. A number of researchers working with recommender systems have told me how user expectations have changed in recent years. Where five or ten years ago people were happy with any recommendation, today an increasing number want to know why particular recommendations have been made for them. From asking why we are expected to take things at “interface value” will come the ability to critique the continued algorithmization of everything. Ultimately, there are no catchall answers. Our lives would look a lot different—and, most likely, be far worse—without algorithms. But that doesn’t mean we should stop asking the important questions.

  And particularly when easy answers seem to be on offer.

  A Note on Author Interviews

  (Conducted 2012–2013)

  NB: I was in the privileged position of speaking to a large number of individuals while researching The Formula. Not everyone is mentioned by name in the text, but below is a list of the individuals whose contributions are worth noting:

  Vincent Dean Boyce, Steve Carter, Kristin Celello, John Cheney-Lippold, Noam Chomsky, Danielle Citron, Kevin Conboy, Tom Copeman, Paul Dourish, Robert Epstein, Konrad Feldman, Lee Goldman, Graeme Gordon, Jonathan Gottschall, Guy Halfteck, David Hursh, John Kelly, Alexis Kirke, Alec Levenson, Jiwei Li, Benjamin Liu, Sean Malinowski, Lev Manovich, Nick Meaney, Vivienne Ming, George Mohler, John Parr (UK representative for PCM), Giles Pavey, Richard Posner, Ivan Poupyrev, Stephen Ramsay, Emily Riehl, Anna Ronkainen, Adam Sadilek, Matthew Salganik, Lior Shamir, Larry Smarr, Mark Smolinski, Celestino Soddu, Ted Striphas, Garth Sundem, Harry Surden, Francisco Vico, McKenzie Wark, Neil Clark Warren, Robert Wert.

  Notes

  An Explanation of the Title, and Other Cyberbole

  1 Tancer, Bill. Click: What We Do Online and Why It Matters (London: HarperCollins, 2009).

  Chapter 1: The Quantified Selves

  1 technologyreview.com/featuredstory/426968/the-patient-of-the-future/.

  2 Smarr, Larry. “Towards Digitally Enabled Genomic Medicine: A 10-Year Detective Story of Quantifying My Body.” September 2011. lsmarr.calit2.net/repository/092811_Special_Letter,_Smarr.final.pdf.

  Gorbis, Marina. The Nature of the Future: Dispatches from the Socialstructed World (New York: Free Press, 2013).

  3 Bowden, Mark. “The Measured Man.” The Atlantic, July 13, 2012. theatlantic.com/magazine/archive/2012/07/the-measured-man/309018/.

  4 quantifiedself.com.

  5 750words.com.

  6 Nafus, Dawn, and Jamie Sherman. “This One Does Not Go Up to Eleven: The Quantified Self Movement as an Alternate Big Data Practice” (Review draft). April 2013. quantifiedself.com/wp-content/uploads/2013/04/NafusShermanQSDraft.pdf.

  7 Friedman, Ted. Electric Dreams: Computers in American Culture (New York: New York University Press, 2005).

  8 media.mit.edu/wearables/.

  9 Coupland, Douglas. Generation X: Tales for an Accelerated Culture (New York: St. Martin’s Press, 1991).

  10 James, William. The Principles of Psychology, Vol. 1 (New York: Henry Holt, 1890).

  11 Cockerton, Paul. “Tesco Using Minority Report–Style Face Tracking Technology So Ads on Screens Can Be Tailored.” Irish Mirror, November 4, 2013. irishmirror.ie/news/tesco-using-minority-report-style-face-2674367.

  12 Toffler, Alvin. The Third Wave (New York: Morrow, 1980).

  Elias, Norbert. The Civilizing Process (New York: Urizen Books, 1978).

  13 This wave metaphor was not, in itself, new: the German sociologist Norbert Elias had referred to “a wave of advancing integration over several centuries” in his book The Civilizing Process, as had other writers over the previous century.

  14 Richtel, Matt. “How Big Data Is Playing Recruiter for Specialized Workers.” New York Times, April 27, 2013. nytimes.com/2013/04/28/technology/how-big-data-is-playing-recruiter-for-specialized-workers.html?_r=0.

  15 Kwoh, Leslie. “Facebook Profiles Found to Predict Job Performance.” Wall Street Journal, February 21, 2012. online.wsj.com/news/articles/SB10001424052970204909104577235474086304212.

  16 Bulmer, Michael. Francis Galton: Pioneer of Heredity and Biometry (Baltimore: Johns Hopkins University Press, 2003).

  17 Pearson, Karl. The Life, Letters and Labours of Francis Galton (Cambridge, UK: Cambridge University Press, 1914–1930).

  18 Galton, Francis. Statistical Inquiries into the Efficacy of Prayer (Melbourne: H. Thomas, Printer, c. 1872–1880).

  19 Gould, Stephen Jay. The Mismeasure of Man (New York: Norton, 1981).

  20 Dormehl, Luke. “This Algorithm Can Tell Your Life Story Through Twitter.” Fast Company, November 4, 2013. fastcolabs.com/3021091/this-algorithm-can-tell-your-life-story-through-twitter.

  21 Isaacson, Walter. Steve Jobs (New York: Simon & Schuster, 2011).

  22 Kosinski, Michal, David Stillwell, and Thore Graepel. “Private Traits and Attributes Are Predictable from Digital Records of Human Behavior.” PNAS, vol. 110, no. 15, April 9, 2013. pnas.org/content/110/15/5802.full.

  23 Larsen, Noah. “Bloggers Reveal Personalities with Word, Phrase Choice.” Colorado Arts & Sciences Magazine, January 2011. artsandsciences.colorado.edu/magazine/2011/01/bloggers-word-choice-bares-their-personalities/.

  24 Markham, Annette. “The Algorithmic Self: Layered Accounts of Life and Identity in the 21st Century.” Selected Papers of Internet Research, 14.0, 2013. spir.aoir.org/index.php/spir/article/download/891/466.

  25 Streitfeld, David. “Teacher Knows If You’ve Done the E-Reading.” New York Times, April 8, 2013. nytimes.com/2013/04/09/technology/coursesmart-e-textbooks-track-students-progress-for-teachers.html?hp&_r=0.

  26 Levy, Steven. In the Plex: How Google Thinks, Works, and Shapes Our Lives (New York: Simon & Schuster, 2011).

  27 Manjoo, Farhad. “The Happiness Machine.” Slate, January 21, 2013. slate.com/articles/technology/technology/2013/01/google_people_operations_the_secrets_of_the_world_s_most_scientific_human.html.

  28 Taylor, Frederick. The Principles of Scientific Management (New York: Norton, 1967).

  29 Hogge, Becky. Barefoot into Cyberspace: Adventures in Search of Techno-Utopia (Saffron Walden, UK: Barefoot, 2011).

  30 O’Connor, Sarah. “Amazon’s Human Robots: They Trek 15 Miles a Day Around a Warehouse, Their Every Move Dictated by Computers Checking Their Work. Is This the Future of the British Workplace?” Daily Mail, February 28, 2013. dailymail.co.uk/news/article-2286227/Amazons-human-robots-Is-future-British-workplace.html.

 
