Weapons of Math Destruction

by Cathy O'Neil


  This is what happens when the immensely powerful network we share with 1.5 billion users is also a publicly traded corporation. While Facebook may feel like a modern town square, the company determines, according to its own interests, what we see and learn on its social network. As I write this, about two-thirds of American adults have a profile on Facebook. They spend thirty-nine minutes a day on the site, only four minutes less than they dedicate to face-to-face socializing. Nearly half of them, according to a Pew Research Center report, count on Facebook to deliver at least some of their news, which leads to the question: By tweaking its algorithm and molding the news we see, can Facebook game the political system?

  The company’s own researchers have been looking into this. During the 2010 and 2012 elections, Facebook conducted experiments to hone a tool they called the “voter megaphone.” The idea was to encourage people to spread word that they had voted. This seemed reasonable enough. By sprinkling people’s news feeds with “I voted” updates, Facebook was encouraging Americans—more than sixty-one million of them—to carry out their civic duty and make their voices heard. What’s more, by posting about people’s voting behavior, the site was stoking peer pressure to vote. Studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgment of friends and neighbors.

  At the same time, Facebook researchers were studying how different types of updates influenced people’s voting behavior. No researcher had ever worked in a human laboratory of this scale. Within hours, Facebook could harvest information from tens of millions of people, or more, measuring the impact that their words and shared links had on each other. And it could use that knowledge to influence people’s actions, which in this case happened to be voting.

  That’s a significant amount of power. And Facebook is not the only company to wield it. Other publicly held corporations, including Google, Apple, Microsoft, Amazon, and cell phone providers like Verizon and AT&T, have vast information on much of humanity—and the means to steer us in any way they choose.

  Usually, as we’ve seen, they’re focused on making money. However, their profits are tightly linked to government policies. The government regulates them, or chooses not to, approves or blocks their mergers and acquisitions, and sets their tax policies (often turning a blind eye to the billions parked in offshore tax havens). This is why tech companies, like the rest of corporate America, inundate Washington with lobbyists and quietly pour hundreds of millions of dollars in contributions into the political system. Now they’re gaining the wherewithal to fine-tune our political behavior—and with it the shape of American government—just by tweaking their algorithms.

  The Facebook campaign started out with a constructive and seemingly innocent goal: to encourage people to vote. And it succeeded. After comparing voting records, researchers estimated that their campaign had increased turnout by 340,000 people. That’s a big enough crowd to swing entire states, and even national elections. George W. Bush, after all, won in 2000 by a margin of 537 votes in Florida. The activity of a single Facebook algorithm on Election Day, it’s clear, could not only change the balance of Congress but also decide the presidency.

  Facebook’s potency comes not only from its reach but also from its ability to use its own customers to influence their friends. The vast majority of the sixty-one million people in the experiment received a message on their news feed encouraging them to vote. The message included a display of photos: six of the user’s Facebook friends, randomly selected, who had clicked the “I Voted” button. The researchers also studied two control groups, each numbering around six hundred thousand. One group saw the “I Voted” campaign, but without the pictures of friends. The other received nothing at all.

  By sprinkling its messages through the network, Facebook was studying the impact of friends’ behavior on our own. Would people encourage their friends to vote, and would this affect their behavior? According to the researchers’ calculations, seeing that friends were participating made all the difference. People paid much more attention when the “I Voted” updates came from friends, and they were more likely to share those updates. About 20 percent of the people who saw that their friends had voted also clicked on the “I Voted” button. Among those who saw the campaign without their friends’ photos, only 18 percent clicked. We can’t be sure that all the people who clicked the button actually voted, or that those who didn’t click it stayed home. Still, with sixty-one million potential voters on the network, a possible difference of two points can be huge.
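
The two-point gap is worth pausing on. A back-of-envelope sketch, using only the figures quoted above (the arithmetic is illustrative, not the researchers’ actual turnout model):

```python
# Rough effect-size arithmetic for the "I Voted" experiment.
# All figures come from the text; the calculation is illustrative only.

network_size = 61_000_000        # users shown the "I Voted" message
rate_with_friends = 0.20         # clicked after seeing friends' photos
rate_without_friends = 0.18      # clicked without the photos

lift = rate_with_friends - rate_without_friends   # two percentage points
extra_clicks = lift * network_size

print(f"{extra_clicks:,.0f} additional clicks")
```

Scaled across the whole network, two percentage points comes to well over a million extra clicks; the researchers’ more conservative figure of 340,000 additional voters reflects the fact that clicking the button is not the same as voting.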

  Two years later, Facebook went a step further. For three months leading up to the election between President Obama and Mitt Romney, a researcher at the company, Solomon Messing, altered the news feed algorithm for about two million people, all of them politically engaged. These people got a higher proportion of hard news, as opposed to the usual cat videos, graduation announcements, or photos from Disney World. If their friends shared a news story, it showed up high on their feed.

  Messing wanted to see if getting more news from friends changed people’s political behavior. Following the election, Messing sent out surveys. The self-reported results indicated that the voter participation in this group inched up from 64 to 67 percent. “When your friends deliver the newspaper,” said Lada Adamic, a computational social scientist at Facebook, “interesting things happen.” Of course, it wasn’t really the friends delivering the newspaper, but Facebook itself. You might argue that newspapers have exerted similar power for eons. Editors pick the front-page news and decide how to characterize it. They choose whether to feature bombed Palestinians or mourning Israelis, a policeman rescuing a baby or battering a protester. These choices can no doubt influence both public opinion and elections. The same goes for television news. But when the New York Times or CNN covers a story, everyone sees it. Their editorial decision is clear, on the record. It is not opaque. And people later debate (often on Facebook) whether that decision was the right one.

  Facebook is more like the Wizard of Oz: we do not see the human beings involved. When we visit the site, we scroll through updates from our friends. The machine appears to be only a neutral go-between. Many people still believe it is. In 2013, when a University of Illinois researcher named Karrie Karahalios carried out a survey on Facebook’s algorithm, she found that 62 percent of the people were unaware that the company tinkered with the news feed. They believed that the system instantly shared everything they posted with all of their friends.

  The potential for Facebook to hold sway over our politics extends beyond its placement of news and its Get Out the Vote campaigns. In 2012, researchers experimented on 680,000 Facebook users to see if the updates in their news feeds could affect their mood. It was already clear from laboratory experiments that moods are contagious. Being around a grump is likely to turn you into one, if only briefly. But would such contagions spread online?

  Using linguistic software, Facebook sorted positive (stoked!) and negative (bummed!) updates. They then reduced the volume of downbeat postings in half of the news feeds, while reducing the cheerful quotient in the others. When they studied the users’ subsequent posting behavior, they found evidence that the doctored news feeds had indeed altered their moods. Those who had seen fewer cheerful updates produced more negative posts. A similar pattern emerged on the positive side.
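
The sorting step described here can be imagined as a simple word-count pass over each update. This toy sketch uses invented word lists; the actual study relied on far larger dictionaries of positive and negative terms:

```python
# Toy word-count sentiment sorter, loosely in the spirit of the
# linguistic software the text describes. The word lists below are
# invented for illustration, not taken from the study.

POSITIVE = {"stoked", "great", "happy", "love", "awesome"}
NEGATIVE = {"bummed", "sad", "awful", "hate", "terrible"}

def classify(update: str) -> str:
    """Label an update by counting positive vs. negative words."""
    words = [w.strip(".,!?") for w in update.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify("Stoked about the weekend!"))    # positive
print(classify("Bummed, this week was awful"))  # negative
```

Once updates are labeled this way, suppressing "negative" posts in half the feeds and "positive" posts in the other half is a straightforward filtering step.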

  Their conclusion: “Emotional states can be transferred to others…, leading people to experience the same emotions without their awareness.” In other words, Facebook’s algorithms can affect how millions of people feel, and those people won’t know that it’s happening. What would occur if they played with people’s emotions on Election Day?

  I have no reason to believe that the social scientists at Facebook are actively gaming the political system. Most of them are serious academics carrying out research on a platform that they could only have dreamed about two decades ago. But what they have demonstrated is Facebook’s enormous power to affect what we learn, how we feel, and whether we vote. Its platform is massive, powerful, and opaque. The algorithms are hidden from us, and we see only the results of the experiments researchers choose to publish.

  Much the same is true of Google. Its search algorithm appears to be focused on raising revenue. But search results, if Google so chose, could have a dramatic effect on what people learn and how they vote. Two researchers, Robert Epstein and Ronald E. Robertson, recently asked undecided voters in both the United States and India to use a search engine to learn about upcoming elections. The engines they used were programmed to skew the search results, favoring one party over another. Those results, they said, shifted voting preferences by 20 percent.

  This effect was powerful, in part, because people widely trust search engines. Some 73 percent of Americans, according to a Pew Research report, believe that search results are both accurate and impartial. So companies like Google would be risking their own reputation, and inviting a regulatory crackdown, if they doctored results to favor one political outcome over another.

  Then again, how would anyone know? What we learn about these Internet giants comes mostly from the tiny proportion of their research that they share. Their algorithms represent vital trade secrets. They carry out their business in the dark.

  I wouldn’t yet call Facebook or Google’s algorithms political WMDs, because I have no evidence that the companies are using their networks to cause harm. Still, the potential for abuse is vast. The drama occurs in code and behind imposing firewalls. And as we’ll see, these technologies can place each of us into our own cozy political nook.

  By late spring of 2012, the former governor of Massachusetts, Mitt Romney, had the Republican nomination sewn up. The next step was to build up his war chest for the general election showdown with President Obama. And so on May 17, he traveled to Boca Raton, Florida, for a fund-raiser at the palatial home of Marc Leder, a private equity investor. Leder had already poured $225,000 into the pro-Romney Super PAC Restore Our Future and had given another $63,330 to the Romney Victory PAC. He had gathered a host of rich friends, most of them in finance and real estate, to meet the candidate. Naturally, the affair would be catered.

  Romney could safely assume that he was walking into a closed setting with a group of people who thought much like Marc Leder. If this had been a televised speech, Romney would have taken great care not to ruffle potential Republican voters. Those ranged from Evangelical Christians and Wall Street financiers to Cuban Americans and suburban soccer moms. Trying to please everyone is one reason most political speeches are boring (and Romney’s, even his supporters groused, were especially so). But at an intimate gathering at Marc Leder’s house, a small and influential group might get closer to the real Mitt Romney and hear what the candidate really believed, unfiltered. They had already given him large donations. A frank chat was the least they could expect for their investment.

  Basking in the company of people he believed to be supportive and like-minded, Romney let loose with his observation that 47 percent of the population were “takers,” living off the largesse of big government. These people would never vote for him, the governor said—which made it especially important to reach out to the other 53 percent. But Romney’s targeting, it turned out, was inexact. The caterers circulating among the donors, serving drinks and canapés, were outsiders. And like nearly everyone in the developed world, they carried phones equipped with video cameras. Romney’s dismissive remarks, captured by a bartender, went viral. The gaffe very likely cost Romney any chance he had of winning the White House.

  Success for Romney at that Boca Raton gathering required both accurate targeting and secrecy. He wanted to be the ideal candidate for Marc Leder and friends. And he trusted that Leder’s house represented a safe zone in which to be that candidate. In a dream world, politicians would navigate countless such targeted safe zones so that they could tailor their pitch for every subgroup—without letting the others see it. One candidate could be many candidates, with each part of the electorate seeing only the parts they liked.

  This duplicity, or “multiplicity,” is nothing new in politics. Politicians have long tried to be many things to many people, whether they’re eating kielbasa in Milwaukee, quoting the Torah in Brooklyn, or pledging allegiance to corn-based ethanol in Iowa. But as Romney discovered, video cameras can now bust them if they overdo their contortions.

  Modern consumer marketing, however, provides politicians with new pathways to specific voters so that they can tell them what they know they want to hear. Once they do, those voters are likely to accept the information at face value because it confirms their previous beliefs, a phenomenon psychologists call confirmation bias. It is one reason that none of the invited donors at the Romney event questioned his assertion that nearly half of voters were hungry for government handouts. It only bolstered their existing beliefs.

  This merging of politics and consumer marketing has been developing for the last half century, as the tribal rituals of American politics, with their ward bosses and long phone lists, have given way to marketing science. In The Selling of the President, which followed Richard Nixon’s 1968 campaign, the journalist Joe McGinniss introduced readers to the political operatives working to market the presidential candidate like a consumer good. By using focus groups, Nixon’s campaign was able to hone his pitch for different regions and demographics.

  But as time went on, politicians wanted a more detailed approach, one that would ideally reach each voter with a personalized come-on. This desire gave birth to direct-mail campaigns. Borrowing tactics from the credit card industry, political operatives built up huge databases of customers—voters, in this case—and placed them into various subgroups, reflecting their values and their demographics. For the first time, it was possible for next-door neighbors to receive different letters or brochures from the same politician, one vowing to protect wilderness and the other stressing law and order.

  Direct mail was microtargeting on training wheels. The convergence of Big Data and consumer marketing now provides politicians with far more powerful tools. They can target microgroups of citizens for both votes and money and appeal to each of them with a meticulously honed message, one that no one else is likely to see. It might be a banner on Facebook or a fund-raising email. But each one allows candidates to quietly sell multiple versions of themselves—and it’s anyone’s guess which version will show up for work after inauguration.

  In July of 2011, more than a year before President Obama would run for reelection, a data scientist named Rayid Ghani posted an update on LinkedIn:

  Hiring analytics experts who want to make a difference. The Obama re-election campaign is growing the analytics team to work on high-impact large-scale data mining problems.

  We have several positions available at all levels of experience. Looking for experts in statistics, machine learning, data mining, text analytics, and predictive analytics to work with large amounts of data and help guide election strategy.

  Ghani, a computer scientist educated at Carnegie Mellon, would be heading up the data team for Obama’s campaign. In his previous position, at Accenture Labs in Chicago, Ghani had developed consumer applications for Big Data, and he trusted that he could apply his skills to politics. The goal for the Obama campaign was to create tribes of like-minded voters, people as uniform in their values and priorities as the guests at Marc Leder’s reception—but without the caterers. Then they could target them with the messaging most likely to move them toward specific objectives, including voting, organizing, and fund-raising.

  One of Ghani’s projects at Accenture involved modeling supermarket shoppers. A major grocer had provided the Accenture team with a massive database of anonymized consumer purchases. The idea was to dig into this data to study each consumer’s buying habits and then to place the shoppers into hundreds of different consumer buckets. There would be the impulse shoppers who bought candy at the checkout counter and the health nuts who were willing to pay triple for organic kale. Those were the obvious categories. But others were more surprising. Ghani and his team, for example, could spot people who stuck close to a brand and others who would switch for even a tiny discount. There were buckets for these “persuadables,” too. The end goal was to come up with a different plan for each shopper and to guide them through the store, leading them to all the foods they were most likely to want and buy.

  Unfortunately for Accenture’s clients, this ultimate vision hinged upon the advent of computerized shopping carts, which haven’t yet caught on in a big way and maybe never will. But despite the disappointment in supermarkets, Ghani’s science translated perfectly into politics. Those fickle shoppers who switched brands to save a few cents, for example, behaved very much like swing voters. In the supermarket, it was possible to estimate how much it would cost to turn each shopper from one brand of ketchup or coffee to another more profitable brand. The supermarket could then pick out, say, the 15 percent most likely to switch and provide them with coupons. Smart targeting was essential. They certainly didn’t want to give coupons to shoppers who were ready to pay full price. That was like burning money.
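
A minimal sketch of that targeting step, with invented shoppers and switch probabilities: rank everyone by a modeled likelihood of switching brands, then send coupons only to the top 15 percent:

```python
# Hypothetical sketch of "persuadable" targeting as described above.
# The shopper names and probabilities are invented; in practice the
# probabilities would come from a model fit on purchase histories.

shoppers = {
    "alice": 0.92,   # modeled P(switches brand for a small discount)
    "bob":   0.08,
    "carol": 0.55,
    "dan":   0.71,
    "erin":  0.15,
}

cutoff = int(len(shoppers) * 0.15) or 1          # top 15%, at least one
ranked = sorted(shoppers, key=shoppers.get, reverse=True)
coupon_targets = ranked[:cutoff]

print(coupon_targets)   # loyal full-price shoppers get no coupon
```

The same ranking, applied to voters scored by likelihood of being persuaded, is the logic behind the swing-voter targeting the next paragraphs describe.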

  Would similar calculations work for swing voters? Armed with massive troves of consumer, demographic, and voting data, Ghani and his team set out to investigate. However, they faced one crucial difference. In the supermarket project, all of the available data related precisely to the shopping domain. They studied shopping patterns to predict (and influence) what people would buy. But in politics there was very little relevant data available. Data teams for both campaigns needed proxies, and this required research.

  They started out by interviewing several thousand people in great depth. These folks fell into different groups. Some cared about education or gay rights, others worried about Social Security or the impact of fracking on freshwater aquifers. Some supported the president unconditionally. Others sat on the fence. A good number liked him but didn’t usually get around to voting. Some of them—and this was vital—were ready to contribute money to Obama’s campaign.

 
