
The Formula_How Algorithms Solve All Our Problems... and Create More


by Luke Dormehl


  The basis for Knack’s work is an insight that has been explored by psychologists for the last half century: that the way we play games can be used to predict how we behave in the real world. “Even though to your eye, your behavior in a game does not necessarily characterize your real-world behavior, it is highly likely that the way you play a game and another person plays that same game would reveal differences about personality and the way that your brain works,” Halfteck says. “Your working memory, your strategic thinking, your risk-taking—these are all things which are manifested in how we game.”

  Knack’s games currently include Wasabi Waiter and Balloon Brigade. Both are straightforward, pick-up-and-play affairs that nonetheless offer the player a number of different ways to compete. In Wasabi Waiter, for example, players take on the role of a combined waiter and chef, taking customers’ orders and then preparing the dish that matches each customer’s facial expression. This expression might be “happy,” “sad,” “angry” or “any mood” in the event that the player is unsure. When a customer finishes eating, the player brings the plate back to the sink and starts the process again with someone new.

  This may appear simple, but beneath the surface it is anything but. In a game, literally everything is measurable: each action, message, item and rule is composed of raw data. For every millisecond of play in Wasabi Waiter or Balloon Brigade, hundreds of data variables are gathered, processed and analyzed based on the decisions players make, the speed at which they do things, and the degree to which their game playing changes over time. What Halfteck perceives to be the accompanying behavioral qualities are then teased out using machine-learning tools and data-mining algorithms. “This is a very rich data stream we’re collecting,” Halfteck says. “It really allows us to get closer to the unique behavioral genome of a person.”

  There is, however, an innate danger in attempting to quantify the unquantifiable. When it comes to taking complex ideas and reducing these to measurable elements, the most famous critique came from evolutionary biologist Stephen Jay Gould. In his 1981 book, The Mismeasure of Man, Gould warned about the dangers of converting concepts such as “intelligence” into simplified measures such as IQ. Gould’s concerns weren’t just about the potential dangers of abstraction, but about the ways in which apparently objective truths can be used to back up human biases, rather than to expose genuine insights. In his own words, The Mismeasure of Man is an attack on “the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status.”19 Measurement and reductionism, of course, go hand in hand. Each time we begin to measure something, we lose whatever it is that the measurement tool is not designed to capture, or that the person measuring is not aware of the need to measure. As I will discuss in the coming chapters, technological attempts to create simple quantifiable measures for ideas like “creativity” and “love” meet with fierce opposition—largely because the concepts are far from simple.

  But Halfteck disagrees that the beauty of terms like “empathy” and “insightfulness” is in their abstract amorphousness. “We are talking about an ever-expanding universe of things that are being measured,” he says. “In the case of Knack, we’re moving in a positive direction from a paradigm where people are being measured on a single dimension, to one in which people are measured in a polydimensional way on exponentially more aspects of their personality. It’s not just about ‘intelligence,’ but rather the sum total of the human condition. It’s far more nuanced than anything we’ve seen before.”

  Your Life According to Twitter

  In November 2013, I wrote an article for Fast Company about two computer science researchers who had created an algorithm that used the information posted in your Twitter feed to generate customized user biographies.20 It was a fascinating experiment in the subject of “topic extraction” and—based on the feedback and the number of hits the article received—I was not alone in feeling that way. Twitter, explained one of the study’s authors, Jiwei Li, was the perfect tool for researchers. First, it encouraged its users to keep diaries in a publicly accessible medium. Second, the micro-blogging site’s imposed 140-character limit for messages forced users to be concise, compressing complex life events into a few short sentences.

  Since reading through a person’s Twitter feed going back a number of years could prove prohibitively time-consuming (particularly if you were doing this for a large number of people, as you might if you were a boss wanting to know more about your employees), the algorithm’s job was to scan through this information and output it in a more accessible, easily readable format, pulling out only the relevant tidbits that might inform you about the major events in a person’s life. The algorithm worked by analyzing the contents of each tweet, dividing these into “public” and “private” categories (say, your opinion on a major sporting event versus your own birthday), and then subdividing each of these categories again into “general” and “specific” headings. “General” events would typically be things like complaints about the commute to work or comments on your weekly yoga class, and would show their predictability by virtue of recurring over a long period of time. “Specific” events, on the other hand, would be life-changing moments like a person’s wedding or the offer of a new job, and would frequently be the subject of a large amount of activity taking place over a short time span.
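  The two-level categorization described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the researchers’ actual classifier: the field names, the 30-day recurrence threshold, and the notion of a precomputed `is_public_topic` flag are all hypothetical stand-ins for what, in the real system, would be learned from the data.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    is_public_topic: bool   # hypothetical: whether the topic is widely shared (news, sport)
    recurrence_days: int    # hypothetical: span of days over which similar tweets keep appearing

def categorize(tweet: Tweet) -> str:
    # First split: public vs. private (a major sporting event vs. your own birthday).
    scope = "public" if tweet.is_public_topic else "private"
    # Second split: "general" events recur over a long period (the daily commute);
    # "specific" events cluster into a short burst of activity (a wedding).
    kind = "general" if tweet.recurrence_days > 30 else "specific"
    return f"{scope}-{kind}"

def biography_events(tweets: list[Tweet]) -> list[Tweet]:
    # A mini-biography keeps only the "private-specific" events.
    return [t for t in tweets if categorize(t) == "private-specific"]
```

Run on a feed, the filter discards the recurring commute complaints and keeps the wedding announcement.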

  Unsurprisingly, it was this last category, the “private-specific” event, that was of most interest when it came to generating a mini-biography. As anyone who has ever read a popular biography will know, more space is typically given to the unique events in a person’s life than to a broader contextual look at where they fit within wider society. Walter Isaacson’s Steve Jobs, for example, focuses far more on Jobs’s work creating the iPhone than it does on what he saw on his drive to and from his Palo Alto home each day.21 But assuming that “public” or “general” observations are simply noise to be filtered out risks grossly simplifying a person’s life by viewing them as existing in a context-free void. Wherever you are on the political spectrum, it is impossible to deny that public events like general elections or overseas conflicts have a bearing on the life of the typical individual. The same is true of events that recur regularly but are nonspecific in nature. The worker’s commute to the office, or the inner-city family that has its power turned off or lives in a crime-ridden neighborhood, may not be “specific” but does as much, if not more, to explain their circumstances as which geo-tagged location they were in when they proposed to their partner.

  I am not entirely blaming the two researchers for this. Their conception of the self is a neat one, fitting comfortably within the Western value system that presents the individual as both a unique and fundamentally autonomous being. This idea forms not just the basis of the social sciences, but also of politics, economics and the legal system. In politics, the self is the citizen who participates in democracy through voting and other political activities. In a market economy, the self is the optimizer of costs and benefits in such a way that they correspond with a person’s best interests. In the legal system (which I explore in more detail in Chapter 3), the self is usually imagined as an agent who is responsible for his or her own behavior within society. What underlies all of these interpretations is the notion that at its root the self is a profoundly rational entity.

  In today’s digital world, the conception of the self relies largely on the inferences of algorithms—comparing individual qualities against large data sets of “knowable” qualities to find correlations. These are, in a very real sense, formulas of the self. Some are extremely complex and depend on gathering as many data points as possible about particular people before coming up with conclusions. Others are about simplicity: as with abstract art, taking the broadest possible “shapes” that define a human being, reductio ad absurdum. One start-up named Hunch claims that with just five data points (in other words, a user answering five questions) it can answer practically any consumer preference question with 80 to 85 percent accuracy. YouAreWhatYouLike, meanwhile, offers to create detailed profiles of particular users by analyzing the common associations of their Facebook “likes” with a data set of “social dimensions.” We are told, for instance, that users who “like” online art community deviantART.com are liberal in their political leanings, while those who “like” NASCAR tend toward the conservative. Other “likes” prove similarly predictive of personality types who might be “distrustful” or “reserved.” A similar study carried out by University of Cambridge researchers in 2013 suggested that algorithms provided with a dataset of 58,000 American Facebook users were able to accurately predict qualities and traits, including race, age, IQ, sexual preference, personality, substance use and political views—all based upon “likes.”22 Another service—TweetPsych—claims to use algorithms to score a person’s emotional and intellectual quotients based upon the topics they choose to tweet about, including learning, money, emotions and anxiety. Still other studies have shown that algorithms can deduce gender, sexual orientation, political preference, religion and race with a greater than 75 percent level of accuracy. A 2010 investigation by psychologist Tal Yarkoni of the University of Colorado at Boulder analyzed the words in 695 blogs and compared them with the personalities of their owners as revealed through personality tests. Yarkoni suggested that neurotic bloggers are more likely to use words like “awful” and “lazy,” conscientious ones overuse “completed,” and generally agreeable ones fall back on describing things as “wonderful.”23
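  The basic mechanism these services share, stripped of the statistics, is an association table: each “like” carries weights toward certain traits, and the weights are summed and ranked. The sketch below is a toy version of that idea; the table entries merely echo the deviantART and NASCAR examples from the text, whereas the published studies learned their weights from tens of thousands of real profiles using regression models.

```python
# Toy association table; entries are illustrative, not measured weights.
LIKE_TRAITS = {
    "deviantART": {"liberal": 1.0},
    "NASCAR": {"conservative": 1.0},
}

def predict_traits(likes: list[str], table: dict = LIKE_TRAITS) -> list[str]:
    """Sum the trait weights of every 'like' and rank traits by total score."""
    scores: dict[str, float] = {}
    for like in likes:
        for trait, weight in table.get(like, {}).items():
            scores[trait] = scores.get(trait, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)
```

Even this crude version captures the aggregate logic the chapter describes: the individual is scored not on anything they said, but on how their clicks correlate with everyone else’s.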

  It is the gulf between the idea of the autonomous individual and the algorithmic tendency to view the individual as one categorizable node in an aggregate mass that can result in The Formula’s equivalent of a crisis of self. Writing in 2012, a Facebook user commented on the new Timeline feature being rolled out in the social network’s user interface at the time. Unlike the previous Facebook user interface, the Timeline had the narrativizing effect of processing history into a series of events (jobs, relationships), with unknown categories marked by blank spaces, the implication being that the user should continue adding material to their Timeline, retroactively tracing their own personal narrative back until they reach the category “born.” “I feel betrayed by . . . an interface that appears to give so many choices on the surface, while limiting almost every bit of our creative endeavor to the predefined and prepackaged boxes and categories within which we’re supposed to find a place,” the user noted.

  It hurts us all, in different, small ways. Sure, I feel fine clicking the “female” category, but I know at least two dozen friends who wouldn’t be able to choose a box. I’m supposed to declare a “hometown” but I’ve not had one for more than twenty years, so that’s not useful at all. I now have to mark places on a map, or accept the default map that appeared on my profile just this afternoon . . . What if I don’t want to be defined by time or any other moment that Facebook has determined is “relevant” in my life?24

  Big Brother, Sort Of

  A number of cultural critics have commented upon the many ways in which bureaucratic measures have intensified under neoliberalism, despite neoliberalism’s presentation of itself as profoundly and fundamentally antibureaucratic.

  Such critiques can certainly be applied to those high-tech companies in thrall to The Formula. One example is the high-tech start-up CourseSmart, which allows teachers to surveil their students even when they are away from the classroom. Much like the e-book analytics that can be fed back to publishers (something that I will describe later on in this book), CourseSmart uses algorithms to track whether students are skipping pages in their textbooks, not highlighting significant passages, hardly bothering to take notes, or even failing to study at all. In April 2013, CourseSmart was the subject of an article in the New York Times, under the headline “Teacher Knows If You’ve Done the E-Reading.” The story related the experience of a teacher who tracked 70 students in three of his classes. One student regularly scored high marks in mini-tests, yet CourseSmart alerted the teacher that the student was doing all of his studying the night before tests, as opposed to taking a long-haul approach to learning. The article quoted the dean of the university’s school of business as describing the service as “Big Brother, sort of, but with a good intent.”25 According to the story:

  Students do not see their engagement indexes (CourseSmart’s proprietary analytics tool) unless a professor shows them, but they know the books are watching them. For a few, merely hearing the number is a shock. Charles Tejeda got a C on the last quiz, but the real revelation that he is struggling was a low CourseSmart index.

  “They caught me,” said Mr. Tejeda, 43. He has two jobs and three children, and can study only late at night. “Maybe I need to focus more,” he said.

  On the surface, CourseSmart offers considerably more freedom than the kind of factory model “industrial schooling” that rose to prominence with the Industrial Revolution, pitting warden-teachers against prisoner-students. Students are less classroom-bound and are afforded opportunities to study on their own. (Or, at least, ostensibly on their own.) However, such tools actually represent a more continuous form of control system based on increasingly abstract entities like “engagement.” After all, as the New York Times story demonstrates, a person could receive a “satisfactory” C grade, only to fail a class on engagement.

  A parallel to CourseSmart is the kind of deep data analytics Google uses to track its own workforce. Like many high-tech businesses, Google models itself as a libertarian utopia: the type of company where employees used to be allowed one extra day per week to pursue their own lines of inquiry, and are as likely to spend their time ascending Google’s indoor rock-climbing wall or having free food served up to them by a former Grateful Dead chef as they are to be coding. However, as Steven Levy points out in In the Plex, his 2011 study of Google, the search leviathan’s apparent loopiness is “the crazy-like-a-fox variety and not the kind calling for straightjackets.”26 Despite Google’s widely publicized quirks, its irreverent touches are data-driven to a fault. “At times Google’s largesse can sound excessive,” notes an article in Slate. “Yet it would be a mistake to conclude that Google doles out such perks just to be nice. [The company] rigorously monitors a slew of data about how employees respond to benefits, and . . . rarely throws money away.”27

  For instance, there is a dedicated team within Google called the People Analytics group, whose job is to quantify the “happiness” of employees working for the company. This is done using “Googlegeist,” a scientifically constructed employee survey, which is then mined for insights using state-of-the-art proprietary algorithms. An example of what the People Analytics team does occurred several years ago, when Google noticed that a larger-than-normal number of female employees were leaving the company. Drilling down with data-mining tools, the People Analytics group discovered that this wasn’t so much a “woman” problem as it was a “mother” problem: women who had recently given birth were leaving at twice Google’s average departure rate. The most cost-effective answer, it was concluded, was to increase maternity leave from the standard 12 weeks of paid absence to a full five months. Once the problem had been identified and acted upon, the company’s attrition rate for new mothers dropped by 50 percent.

  Similar data-driven insights are used to answer a plethora of other questions. Just how often, for example, should employees be reminded to contribute to pension plans, and what tone should best be used when addressing them? Do successful middle managers possess certain skills in common, and could these be taught to less successful managers? And what is the best way to maximize happiness and, thus, efficiency in staff? A salary increase? Cash bonus? Stock options? More time off?

  For all its hiding behind the image of soft “servant leadership,” what an entity such as Google’s People Analytics group really returns to prominence is the concept of “Taylorism.” Created in the early 20th century by engineer Frederick Taylor, the ideas behind Taylorism were outlined in a 1911 book called The Principles of Scientific Management.28 At the center of Taylor’s beliefs were the ideas that the goal of human labor and thought should be increased efficiency; that technical calculation is always superior to human judgment; that subjectivity represents a dumbing-down of clear-thinking objectivity; and that whatever cannot be quantified either does not exist or has no value. “It is,” he argued, “only through enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation that . . . faster work can be assured.”

  Work Faster and Happier

  Of course, it’s not just about faster work. As the quantification of “happiness” and “engagement” demonstrates, it is no longer enough to simply be an effective laborer. A person must also be an affective laborer, offering “service with a smile.” In this way it is necessary to ask to what degree The Formula is genuinely improving working conditions, or whether it is simply (to quote cultural critic Paul Virilio) transforming workers into unwitting participants in a Marxist state pageant, “miming the joys [of] being liberated”—with full knowledge that anything other than full enthusiasm will be noted down against their CourseSmart-style engagement index and unearthed by an algorithm in time for a future job interview.

 
