Data and Goliath

by Bruce Schneier


  This practice can get very intrusive. High-end restaurants are starting to Google their customers, to better personalize their dining experiences. They can’t give people menus with different prices, but they can certainly hand them the wine list with either the cheaper side up or the more expensive side up. Automobile insurance companies are experimenting with usage-based insurance. If you allow your insurance company to monitor when, how far, and how fast you drive, you could get a lower insurance rate.

  The potential for intrusiveness increases considerably when it’s an employer–employee relationship. At least one company negotiated a significant reduction in its health insurance costs by distributing Fitbits to its employees, which gave the insurance company an unprecedented view into its subscribers’ health habits. Similarly, several schools are requiring students to wear smart heart rate monitors in gym class; there’s no word about what happens to that data afterwards. In 2011, Hewlett-Packard analyzed employee data to predict who was likely to leave the company, then informed their managers.

  Workplace surveillance is another area of enormous potential harm. For many of us, our employer is the most dangerous power that has us under surveillance. Employees who are regularly surveilled include call center workers, truck drivers, manufacturing workers, sales teams, retail workers, and others. More of us have our corporate electronic communications constantly monitored. A lot of this comes from a new field called “workplace analytics,” which is basically surveillance-driven human resources management. If you use a corporate computer or cell phone, you have almost certainly given your employer the right to monitor everything you do on those devices. Some of this is legitimate; employers have a right to make sure you’re not playing Farmville on your computer all day. But you probably use those devices on your own time as well, for personal as well as work communications.

  Any time we’re monitored and profiled, there’s the potential for getting it wrong. You are already familiar with this; just think of all the irrelevant advertisements you’ve been shown on the Internet, on the basis of some algorithm misinterpreting your interests. For some people, that’s okay; for others, there’s low-level psychological harm from being categorized, whether correctly or incorrectly. The opportunity for harm rises as the judging becomes more important: our credit ratings depend on algorithms; how we’re treated at airport security depends partly on corporate-collected data.

  There are chilling effects as well. For example, people are refraining from looking up information about diseases they might have because they’re afraid their insurance companies will drop them.

  It’s true that a lot of corporate profiling starts from good intentions. Some people might be denied a bank loan because of their deadbeat Facebook friends, but Lenddo’s system is designed to enable banks to give loans to people without credit ratings. If their friends have good credit ratings, that’s a mark in their favor. Using personal data to determine insurance rates or credit card spending limits might cause some people to get a worse deal than they otherwise would have, but it also gives many people a better deal than they otherwise would have.

  In general, however, surveillance data is being used by powerful corporations to increase their profits at the expense of consumers. Customers don’t like this, but as long as (1) sellers are competing with each other for our money, (2) software systems make price discrimination easier, and (3) the discrimination can be hidden from customers, it is going to be hard for corporations to resist doing it.

  SURVEILLANCE-BASED MANIPULATION

  Someone who knows things about us has some measure of control over us, and someone who knows everything about us has a lot of control over us. Surveillance facilitates control.

  Manipulation doesn’t have to involve overt advertising. It can be product placement ensuring you see pictures that have a certain brand of car in the background. Or just an increase in how often you see that car. This is, essentially, the business model of search engines. In their early days, there was talk about how an advertiser could pay for better placement in search results. After public outcry and subsequent guidance from the FTC, search engines visually differentiated between “natural” results generated by their algorithms and paid placements. So now you get paid search results in Google framed in yellow, and paid search results in Bing framed in pale blue. This worked for a while, but recently the trend has shifted back. Google is now accepting money to insert particular URLs into search results, and not just in the separate advertising areas. We don’t know how extensive this is, but the FTC is again taking an interest.

  When you’re scrolling through your Facebook feed, you don’t see every post by every friend; what you see has been selected by an automatic algorithm that’s not made public. But people can pay to increase the likelihood that their friends or fans will see their posts. Payments for placement represent a significant portion of Facebook’s income. Similarly, a lot of those links to additional articles at the bottom of news pages are paid placements.

  The potential for manipulation here is enormous. Here’s one example. During the 2012 election, Facebook users had the opportunity to post an “I Voted” icon, much like the real stickers many of us get at polling places after voting. There is a documented bandwagon effect with respect to voting; you are more likely to vote if you believe your friends are voting, too. This manipulation had the effect of increasing voter turnout by 0.4% nationwide. So far, so good. But now imagine if Facebook had manipulated the visibility of the “I Voted” icon on the basis of either party affiliation or some decent proxy for it: ZIP code of residence, blogs linked to, URLs liked, and so on. It didn’t, but if it had, the effect would have been to increase voter turnout for one side only. It would be hard to detect, and it wouldn’t even be illegal. Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google might do something similar with its search results.

  A truly sinister social networking platform could manipulate public opinion even more effectively. By amplifying the voices of people it agrees with, and dampening those of people it disagrees with, it could profoundly distort public discourse. China does this with its 50 Cent Party: people hired by the government to post comments on social networking sites supporting party positions, and to challenge comments that oppose them. Samsung has done much the same thing.

  Many companies manipulate what you see according to your user profile: Google search, Yahoo News, even online newspapers like the New York Times. This is a big deal. The first listing in a Google search result gets a third of the clicks, and if you’re not on the first page, you might as well not exist. The result is that the Internet you see is increasingly tailored to what your profile indicates your interests are. This leads to a phenomenon that political activist Eli Pariser has called the “filter bubble”: an Internet optimized to your preferences, where you never have to encounter an opinion you don’t agree with. You might think that’s not too bad, but on a large scale it’s harmful. We don’t want to live in a society where everybody only ever reads things that reinforce their existing opinions, where we never have spontaneous encounters that enliven, confound, confront, and teach us.

  In 2012, Facebook ran an experiment in control. It selectively manipulated the newsfeeds of 680,000 users, showing them either happier or sadder status updates. Because Facebook constantly monitors its users—that’s how it turns its users into advertising revenue—it could easily monitor the experimental subjects and collect the results. It found that people who saw happier posts tended to write happier posts, and vice versa. I don’t want to make too much of this result. Facebook only did this for a week, and the effect was small. But once sites like Facebook figure out how to do this effectively, they will be able to monetize it. Not only do women feel less attractive on Mondays; they also feel less attractive when they feel depressed. We’re already seeing the beginnings of systems that analyze people’s voices and body language to determine mood; companies want to better determine when customers are getting frustrated, and when they can be most profitably upsold. Manipulating those emotions to market products better is the sort of thing that’s acceptable in the advertising world, even if it sounds pretty horrible to us.

  Manipulation is made easier because of the centralized architecture of so many of our systems. Companies like Google and Facebook sit at the center of our communications. That gives them enormous power to manipulate and control.

  Unique harms can arise from the use of surveillance data in politics. Election politics is very much a type of marketing, and politicians are starting to use personalized marketing’s capability to discriminate as a way to track voting patterns and better “sell” a candidate or policy position. Candidates and advocacy groups can create ads and fund-raising appeals targeted to particular categories: people who earn more than $100,000 a year, gun owners, people who have read news articles on one side of a particular issue, unemployed veterans . . . anything you can think of. They can target outraged ads to one group of people, and thoughtful policy-based ads to another. They can also fine-tune their get-out-the-vote campaigns on Election Day, and more efficiently gerrymander districts between elections. Such use of data will likely have fundamental effects on democracy and voting.

  Psychological manipulation—based both on personal information and on control of the underlying systems—will get better and better. Even worse, it will become so good that we won’t know we’re being manipulated. This is a hard reality for us to accept, because we all like to believe we are too smart to fall for any such ploy. We’re not.

  PRIVACY BREACHES

  In 1995, the hacker Kevin Mitnick broke into the network of an Internet company called Netcom and stole 20,000 customer credit card numbers. In 2004, hackers broke into the network of the data broker ChoicePoint, stole data on over 100,000 people, and used it to commit fraud. In late 2014, hackers broke into Home Depot’s corporate networks and stole over 60 million credit card numbers; a month later, we learned about a heist of 83 million households’ contact information from JPMorgan Chase. Two decades of the Internet, and it seems as if nothing has changed except the scale.

  One reasonable question to ask is: how well do the Internet companies, data brokers, and our government protect our data? In one way, the question makes little sense. In the US, anyone willing to pay for data can get it. In some cases, criminals have legally purchased and used data to commit fraud.

  Cybercrime is older than the Internet, and it’s big business. Numbers are hard to come by, but the cost to the US is easily in the tens of billions of dollars. And with that kind of money involved, the business of cybercrime is both organized and international.

  Much of this crime involves some sort of identity theft, which is the fancy Internet-era term for impersonation fraud. A criminal hacks into a database somewhere, steals your account information and maybe your passwords, and uses them to impersonate you to secure credit in your name. Or he steals your credit card number and charges purchases to you. Or he files a fake tax return in your name and collects a refund that should have gone to you.

  This isn’t personal. Criminals aren’t really after your intimate details; they just want enough information about your financial accounts to access them. Or sufficient personal information to obtain credit.

  A dozen years ago, the risk was that the criminals would hack into your computer and steal your personal data. But the scale of data thefts is increasing all the time. These days, criminals are more likely to hack into large corporate databases and steal your personal information, along with that of millions of other people. It’s just more efficient. Government databases are also regularly hacked. Again and again we have learned that our data isn’t well-protected. Thefts happen regularly, much more often than the news services report. Privacy lawyers I know tell me that there are many more data vulnerabilities and breaches than get reported—and many companies never even figure out that their networks have been hacked and their data is being stolen. It’s actually amazing how bad corporate security can be. And because institutions legitimately have your data, you often have no recourse if they lose it.

  Sometimes the hackers aren’t after money. Californian Luis Mijangos was arrested in 2010 for “sextortion.” He would hack into the computers of his female victims, search them for sexual and incriminating photos and videos, surreptitiously turn on the camera and take his own, then threaten to publish them if they didn’t provide him with more racy photos and videos. People who do this are known as “ratters,” for RAT, or remote access Trojan. That’s the piece of malware they use to take over your computer. The most insidious RATs can turn your computer’s camera on without turning the indicator light on. Not all ratters extort their victims; some just trade photos, videos, and files with each other.

  It’s not just hackers who spy on people remotely. In Chapter 7, I talked about a school district that spied on its students through their computers. In 2012, the Federal Trade Commission successfully prosecuted seven rent-to-own computer companies that spied on their customers through their webcams.

  While writing this book, I heard a similar story from two different people. A few years after a friend—or a friend’s daughter—applied to colleges, she received a letter from a college she’d never applied to—different colleges in each story. The letter basically said that the college had been storing her personal data and that hackers had broken in and stolen it all; it recommended that she place a fraud alert on her account with the major credit bureaus. In each instance, the college had bought the data from a broker back when she was a high school senior and had been trying to entice her to consider attending. In both cases, she hadn’t even applied to the college. Yet the colleges were still storing that data years later. Neither had secured it very well.

  As long as our personal data sloshes around like this, our security is at risk.

  9

  Business Competitiveness

  In 1993, the Internet was a very different place from what it is today. There was no electronic commerce; the World Wide Web was in its infancy. The Internet was a communications tool for techies and academics, and we used e-mail, newsgroups, and a chat protocol called IRC. Computers were primitive, as was computer security. For about 20 years, the NSA had managed to keep cryptography software out of the mainstream by classifying it as a munition and restricting its export. US products with strong cryptography couldn’t be sold overseas, which meant that US hardware and software companies put weak—and by that I mean easily breakable—cryptography into both their domestic and their international products, because that was easier than maintaining two versions.

  But the world was changing. Cryptographic discoveries couldn’t be quashed, and the academic world was catching up to the capabilities of the NSA. In 1993, I wrote my first book, Applied Cryptography, which made those discoveries accessible to a more general audience. It was a big deal, and I sold 180,000 copies in two editions. Wired magazine called it “the book the National Security Agency wanted never to be published,” because it taught cryptographic expertise to non-experts. Research was international, and non-US companies started springing up, offering strong cryptography in their products. One study from 1993 found over 250 cryptography products made and marketed outside the US. US companies feared that they wouldn’t be able to compete, because of the export restrictions in force.

  At the same time, the FBI started to worry that strong cryptography would make it harder for the bureau to eavesdrop on the conversations of criminals. It was concerned about e-mail, but it was most concerned about voice encryption boxes that could be attached to telephones. This was the first time the FBI used the term “going dark” to describe its imagined future of ubiquitous encryption. It was a scare story with no justification to support it, just as it is today—but lawmakers believed it. They passed the CALEA law I mentioned in Chapter 6, and the FBI pushed for them to ban all cryptography without a backdoor.

  Instead, the Clinton administration came up with a solution: the Clipper Chip. It was an encryption system with surveillance capabilities built in for FBI and NSA access. The encryption algorithm was alleged to be strong enough to prevent eavesdropping, but there was a backdoor that allowed someone who knew the special key to get at the plaintext. This was marketed as “key escrow” and was billed as a great compromise: trusted US companies could compete in the world market with strong encryption, and the FBI and NSA could maintain their eavesdropping capabilities.
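
  To make the key-escrow idea concrete, here is a minimal sketch in Python of the general scheme: each message is encrypted with a fresh session key, and a copy of that session key is wrapped both for the intended recipient and for an escrow agent. This is only an illustration of the concept, not the actual Clipper Chip design, which used the classified Skipjack algorithm and a more elaborate escrow arrangement; the package and names below are assumptions made for the sketch.

    # Conceptual key-escrow sketch (not the real Clipper Chip/Skipjack design).
    # Assumes the third-party "cryptography" package; all names are illustrative.
    from cryptography.fernet import Fernet

    escrow_key = Fernet.generate_key()     # held by the escrow agent (the "special key")
    recipient_key = Fernet.generate_key()  # shared by the two communicating parties

    def encrypt_with_escrow(plaintext: bytes) -> dict:
        session_key = Fernet.generate_key()  # fresh key per message
        return {
            "ciphertext": Fernet(session_key).encrypt(plaintext),
            # session key wrapped for the intended recipient...
            "for_recipient": Fernet(recipient_key).encrypt(session_key),
            # ...and the same session key wrapped for the escrow agent: the backdoor
            "for_escrow": Fernet(escrow_key).encrypt(session_key),
        }

    def escrow_decrypt(message: dict) -> bytes:
        # The escrow agent recovers the plaintext without the recipient's key.
        session_key = Fernet(escrow_key).decrypt(message["for_escrow"])
        return Fernet(session_key).decrypt(message["ciphertext"])

    message = encrypt_with_escrow(b"meet at noon")
    print(escrow_decrypt(message))  # b'meet at noon'

  Even in this toy version, the objection the market raised in 1993 is visible: whoever holds the escrow key can read everything, and every user has to trust that holder.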

  The first encryption device with the Clipper Chip built in was an AT&T secure phone. It wasn’t a cell phone; this was 1993. It was a small box that plugged in between the wired telephone and the wired handset and encrypted the voice conversation. For the time, it was kind of neat. The voice quality was only okay, but it worked.

  No one bought it.

  In retrospect, it was rather obvious. Nobody wanted encryption with a US government backdoor built in. Privacy-minded individuals didn’t want it. US companies didn’t want it. And people outside the US didn’t want it, especially when there were non-US alternatives available with strong cryptography and no backdoors. The US government was the only major customer for the AT&T devices, and most of those were never even used.

  Over the next few years, the government tried other key escrow initiatives, all designed to give the US government backdoor access to all encryption, but the market soundly rejected all of those as well.

  The demise of the Clipper Chip, and key escrow in general, heralded the death of US government restrictions on strong cryptography. Export controls were gradually lifted over the next few years, first on software in 1996 and then on most hardware a few years later. The change came not a moment too soon. By 1999, over 800 encryption products from 35 countries other than the US had filled the market.

 
