Future Crimes


by Marc Goodman


  The Siemens PLCs were key to the attack, but the authors of Stuxnet were not impetuous cyber warriors with a pillage-and-burn mentality. They were patient, strategic, and cunning in their attack on Natanz. In the first phase of the assault on Natanz, Stuxnet did nothing but observe, sitting there silently, stealthily gathering information to understand how the enrichment centrifuges worked. The worm recorded all of its findings in a masterful preplanned move that would prove crucial to the success of the operation.

  It was in phase two, however, that Stuxnet began to show its true powers as the worm established dominion over the industrial control systems at Natanz. Slowly, its puppet masters began manipulating the centrifuge valves and motors responsible for enriching U-235 at the facility. For months, and even years, the centrifuges sped up and slowed down, deviating from their designed speed of 100,000 RPM. Centrifuge pressure mounted, rotors failed, and yields of enriched uranium plummeted.

  Meanwhile, inside the highly secure operations control room at Natanz, all systems were in full working order—at least according to the computer screens monitored by the engineers at the facility. Every one of the thousands of centrifuges was represented by a light on a computer screen, and each was carefully monitored for system malfunctions. A green light meant the centrifuge was working as designed; a gray or red light indicated problems. Day after day, engineers dutifully watched their screens for any evidence of failure, as the lights continued to shine bright green on the safety systems before their eyes. Cascade protection system? Check. Centrifuge pressure? Check. Rotor speed? Check. Screens on the walls, screens on their desktops, screens on their control panels—all information systems inside their operations command center told the Iranians their nuclear ambitions were on track. Yet nothing could be further from the truth.

  The damage caused by the Stuxnet worm was designed to be subdued at first. Gradually, some centrifuges began spinning out of control, but the Iranians blamed bad parts or the incompetence of their engineers. Each centrifuge that failed seemed to have a different explanation: this device was too slow, that one was too fast, there was too much pressure in these. The uranium processed was increasingly of poor quality and not usable. Inspection after inspection of the facility was carried out, and researchers continued to closely observe the status of their entire operation from the computers inside their control room. As time passed, dozens and then hundreds of centrifuges began to fail. Iran’s nuclear ambitions were now in doubt. What the hell was going on? As it turned out, the Iranians had placed too much trust in the computer screens governing their prized secretive nuclear enrichment site.

  The data logging and computer recording of the industrial control systems stealthily perpetrated by the Stuxnet worm in phase one of the attack had a clear, if not immediately obvious, purpose: to fully document what the Siemens PLCs looked like when they were in full, proper working order. Rotors spinning according to plan and pressure at expected levels yielded all systems go, all maintenance lights green. Stuxnet captured all of that data and recorded it on the PLC equivalent of a VCR, carefully saved for posterity. What happened next was straight out of a Hollywood blockbuster, a ploy portrayed many times in films such as Ocean’s Eleven and National Treasure: the thieves prerecord video footage of the casino vault or safe room they intend to hit, then play it back on the screens of the watchers and security staff.

  As the uranium enrichment centrifuges spun out of control at Natanz, Stuxnet masterfully intercepted the actual input values from the pressure, rotation, and vibration sensors before they reached the operations control room monitored by the plant’s engineers. Rather than presenting the correct real-time data from the Siemens PLCs, Stuxnet merely replayed the prerecorded information it had captured during phase one of the operation, showing all systems in full working order. The brilliant move meant that even though in reality the industrial control systems were melting down and digitally screaming for help, the flashing red danger signs displayed by the system were supplanted by a sea of green calm on the monitors of the Iranians controlling Natanz. As the centrifuges spun out of control and tore themselves apart, the human operators in the digital control room had no idea their own reality had been hacked, hijacked by a computer worm with a funny name sent on a mission to search and destroy.
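
  To make the mechanics of this record-and-replay trick concrete, here is a minimal Python sketch of the two phases. It is emphatically not Stuxnet’s actual code, which ran on Windows and on Siemens S7 PLCs; the sensor names, the values, and the class structure below are all invented for illustration.

```python
import random

class CompromisedDataPath:
    """Hypothetical stand-in for the hooked reporting layer Stuxnet placed
    between the Siemens PLCs and the operators' monitoring screens."""

    def __init__(self):
        self.recording = []  # phase one: captured "all systems normal" frames

    def observe(self, frame):
        # Phase one: pass real telemetry through untouched, but record it.
        self.recording.append(frame)
        return frame

    def replay(self, real_frame):
        # Phase two: discard the real (alarming) telemetry and show the
        # operators a prerecorded "healthy" frame instead.
        return random.choice(self.recording)

def telemetry(healthy):
    # Invented sensor frame; the real signals and units would differ.
    if healthy:
        return {"rotor_rpm": 100_000, "pressure_ok": True, "vibration": "low"}
    return {"rotor_rpm": 140_000, "pressure_ok": False, "vibration": "severe"}

path = CompromisedDataPath()

# Phase one: months of silent observation and recording.
for _ in range(5):
    on_screen = path.observe(telemetry(healthy=True))

# Phase two: the centrifuges are being driven to destruction...
reality = telemetry(healthy=False)
on_screen = path.replay(reality)
print("reality:  ", reality)    # rotors failing, pressure alarms
print("on screen:", on_screen)  # all green, all normal
```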

  Life in a Mediated World

  Unfortunately, you have more in common with the Iranians than you realize. While you may not be producing U-235, you too depend on screens every day to translate the world around you. Your cell phone tells you who has called, your PC reminds you that you need to update your operating system, and the GPS in your car shows you how to get to your morning meeting. All of this and more transpire long before you finish your second cup of coffee. The result? We no longer live life through our own innate primary human sensory abilities. Rather, we experience it mediated through screens, virtual walls that sever us from our intrinsic senses and define the world for us. Screens interpose themselves between us and the real world, projecting information that is purportedly equal to reality but is at best only ever a rough approximation, one that is easily manipulated.

  At our airports, hospitals, banks, and ATMs, screens have become an omnipresent fixture in our lives. But screens today are dumb; they do little more than present the underlying information contained in data systems, systems which are eminently hackable. Those who control the computer code also control our screens and thereby our experiences and our perceptions. Everything from video games to voting machines can be tampered with, and in this brave new world seeing something with your own two eyes and hearing it with your own ears is by no means an indication that it is legitimate, correct, or safe. As a result, the screens we watch can deceive us in ways most have yet to understand.

  Whether or not you realize it, your entire experience of the online world, as displayed on digital screens, is being curated for you. Some of this filtering, of course, is good. With billions of tweets, Snapchats, status updates, and blog posts, there is no way any of us could consume the volume of data thrown our way on a daily basis. Knowing this, Internet companies go to great lengths to learn what you like and to customize your online experience using a series of computer algorithms. Facebook studies your Web links, images, pokes, messages, events, and Likes to customize what you see on your screen every day. As a result, you do not see most of what’s posted by your friends or on the pages you follow, and your friends see perhaps only 10 percent of your own updates on their news feeds. For as much effort as Facebook puts into studying and segmenting you for its advertisers, it works at least as hard to determine which of your friends’ posts you would most likely want to see every time you visit its site or launch its app. But why does it do this? Simply stated, Facebook, Google, and other Internet companies know that if they provide you the “right” stuff, you’ll spend more time on their sites and click on more links, allowing them to serve you up more ads.
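
  A toy version of this kind of engagement-driven filtering fits in a dozen lines of Python. The signals and weights below are invented for illustration; Facebook’s real ranking model is proprietary and vastly more complex, which is precisely the point of the paragraphs that follow.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    your_clicks_on_author: int  # how often you engage with this author
    likes: int

def affinity_score(post: Post) -> float:
    # Hypothetical weights; the real signals and weights are trade secrets.
    return 3.0 * post.your_clicks_on_author + 1.0 * post.likes

def build_feed(all_posts: list[Post], slots: int) -> list[Post]:
    # Only the top-scored sliver of what friends and pages post ever
    # reaches your screen; everything else is silently filtered out.
    return sorted(all_posts, key=affinity_score, reverse=True)[:slots]

feed = build_feed([Post("Ana", 42, 10), Post("Raj", 0, 3)], slots=1)
print([p.author for p in feed])  # only "Ana" makes the cut
```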

  Facebook is by no means alone in this game, and Google too quantifies all your prior searches and, more important, what you’ve clicked on, in order to customize your online experience. In his book The Filter Bubble, the technology researcher Eli Pariser carefully documented the phenomenon. Getting you the “right” results is big business, and millions of computer algorithms are dedicated to the task. Google reportedly tracks and considers at least fifty-seven separate personalization signals before answering your questions, potentially including the type of computer you are on, the browser you are using, the time of day, the resolution of your computer monitor, messages received in Gmail, videos watched on YouTube, and your physical location. Google alters, in real time, the search results it provides you based on what it knows about you. A search for the word “abortion” returns links to Planned Parenthood for some and Catholic.com for others; if your query is “Egypt,” you may receive results on the Arab Spring, while your mom sees info on the pyramids or Nile cruises. Like Pariser, you can run this experiment yourself, and the results will provide an illuminating perspective on how Google sees you.
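
  To see how even a handful of such signals can split two users’ screens apart, consider this deliberately simplified sketch. The interest profiles and the weighting scheme are made up; none of this reflects Google’s actual signals or code.

```python
# Hypothetical personalization: re-rank the same generic results for two
# different users. Signal names and weights are invented for illustration.
results_for_egypt = [
    {"title": "Arab Spring protests: the latest analysis", "topic": "politics"},
    {"title": "Nile cruises and pyramid tours",            "topic": "travel"},
]

def personalize(results, interest_profile):
    # Two people typing the identical query see differently ordered screens.
    return sorted(results,
                  key=lambda r: interest_profile.get(r["topic"], 0.0),
                  reverse=True)

you = {"politics": 0.9, "travel": 0.1}  # reads news, clicks political links
mom = {"politics": 0.1, "travel": 0.9}  # searches for vacations

print(personalize(results_for_egypt, you)[0]["title"])  # Arab Spring analysis
print(personalize(results_for_egypt, mom)[0]["title"])  # Nile cruises
```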

  The fact of the matter is that there is no such thing as “standard Google.” Eric Schmidt has publicly acknowledged that “it will be very hard for people to watch or consume something [online] that has not in some sense been tailored for them.” While none of this is necessarily malicious, there are important questions to be asked about how this information is being culled, sorted, and curated by others purportedly on your behalf. The challenge, however, is that Google, Facebook, Netflix, and Amazon do not publish their algorithms. In fact, the methods they use to filter the information you see are deeply proprietary and the “secret sauce” that drives each company’s profitability. The problem with this invisible “black box” algorithmic approach to information is that we do not know what has been edited out for us and what we are not seeing. As a result, our digital lives, mediated through a sea of screens, are being actively manipulated and filtered on a daily basis in ways that are both opaque and indecipherable. This fundamental shift in the way information flows online shapes not only the way we are informed but the way we view the world. Most of us are living in filter bubbles, and we don’t even realize it.

  Around the world, nations are increasingly deciding what data citizens should be able to access and what information should be prohibited. Using compelling arguments such as “protecting national security,” “ensuring intellectual property rights,” “preserving religious values,” and the perennial favorite, “saving the children,” governments are ever expanding their national firewalls for the purpose of Internet censorship. Some of these filtering techniques are disclosed to the general public. For example, in France and Germany, sites promoting Nazism or denying the Holocaust are openly censored. In Syria, YouTube, Facebook, Amazon, Hotmail, and pro-Kurdish sites have been blocked. In Saudi Arabia, 400,000 sites have been restricted, including those that discuss any political, religious, or social issue incompatible with Islam or the personal beliefs of the monarch. In many instances, however, there is no indication that your online information is being censored; instead, your content simply does not appear. In the United Arab Emirates, the government has even blocked all access to the entire .il domain of Israel, digitally erasing the existence of the Jewish state from the virtual world.

  Tech companies have collaborated in national censorship programs and acceded to state demands to filter offending content in real time, as Google did when entering the Chinese market in 2005. But perhaps no other government is as adept at and rigorous with its Internet-filtering programs as China. The “Great Firewall” of China ensures that its billion-plus residents are unable to see politically sensitive topics, such as the Tiananmen Square protests, embarrassing details about the Chinese leadership, or discussions of Tibetan rights, the Dalai Lama, Falun Gong, Taiwanese independence, political reform, or human rights. Internet censorship, however, is not restricted to autocratic regimes or despots. As of 2014, there were more than four billion people living in countries that practice Internet filtering of one sort or another.

  Screens tell you not what is really out there but what the government or Facebook thinks you should see. If you searched for something and it wasn’t there, how would you know it ever existed? To paraphrase an old philosophical question, if a tree falls on the Internet and no search engine indexes it, does it make any noise? As we live our lives increasingly mediated through screens, if something doesn’t exist online, it doesn’t exist. If an event is not listed in Google, it never happened. Conversely, if it does appear in Google, it still might not have happened. Welcome to the world of digital trickery, a virtual hall of mirrors represented as screens where all is magically possible.

  The profound risk of life in a technologically mediated world is that it creates mammoth opportunities for information to be manipulated in undetectable ways that most neither expect nor understand. Screens are everywhere, beeping, ringing, and blinking for our attention. But what if these screens were lying? Feeding us false information and misleading us? In today’s world, all that we see on screens can be faked and is easily spoofed. Ask anybody who has ever visited an online dating site, and he or she will tell you: what you see is not always what you get.

  Does Not Compute

  Why, sometimes I’ve believed as many as six impossible things before breakfast.

  LEWIS CARROLL, THROUGH THE LOOKING-GLASS

  What do hackers, fraudsters, and organized criminals have in common with Facebook, Google, and the NSA? Each is perfectly capable of mediating and controlling the information you see on your computer screens. In a world where information is power, the gatekeepers who control the flow of data to your screen can also control others. We encounter this behavior on a daily basis every time we go online. Most of us would not consider making a major purchase or reserving a table at a new restaurant for a special occasion without first doing our own Internet research. Who better to inform us than our fellow shoppers and diners? Nearly 90 percent of consumers say online reviews influence their buying decisions, and a Nielsen study found that a surprising 70 percent trust the reviews they read online as much as recommendations from a friend. Unfortunately, according to an investigation by the New York State attorney general, 25 percent of the reviews on Yelp, one of the most popular sites of this kind, are completely bogus. Worse, in September 2014, a federal appeals court ruled it was completely legal for Yelp to manipulate its ratings based on which companies advertised on the site; big spenders could legally get five stars, even if all users rated them a one. Reviews on eBay, Amazon, and TripAdvisor are also all easily faked, and many of those five-star postings you see were written by the businesses themselves or by paid proxies. There are even professional companies whose entire business model rests on gaming the online review system. The practice is known as astroturfing and is widespread. One company investigated by the State of New York, known as Zamdel Inc., was accused of writing more than fifteen hundred fake reviews on Yelp and Google Places.

  I Thought You Were My Friend

  According to Facebook’s own 2014 annual report, up to 11.2 percent of its accounts are fake. Considering the world’s largest social media company has 1.3 billion users, that means up to 140 million Facebook accounts are fraudulent; these users simply don’t exist. With 140 million inhabitants, fake Facebook-land would be the tenth-largest country in the world. Just as Nielsen ratings on television sets determine different advertising rates for The Walking Dead versus the Super Bowl, online ad sales are determined by how many eyeballs a Web site or social media service can command—if only the data could be believed.

  Want 4,000 followers on Twitter? They can be yours for $5. Want 100,000 fans on Facebook? No problem, you can buy them on SocialMediaCorp.org for a mere $1,500. Have even more cash to burn? How about a million new friends on Instagram? “For you we make special deal,” only $3,700. Whether you want favorites, Likes, retweets, up votes, or page views, all are for sale on Web sites like Swenzy, Fiverr, and Craigslist. These fraudulent social media accounts are then used to falsely endorse a product, service, or company, for a small fee of course. Most of the work is carried out in the developing world, in places such as India and Bangladesh, where actual humans may control the accounts. In other locales, such as Russia, Ukraine, and Romania, the entire process has been scripted by computer bots, little programs that carry out preprogrammed instructions, such as “click the Like button,” over and over again under different fake personas.

  Just as mythological shape-shifters were able to physically transform themselves from one being into another, these modern screen shifters have their own magical powers, and criminals are eager for a taste of the action, studying their techniques and deploying them against easy marks for massive profit. In fact, many of these clicks are done for the purposes of “click fraud.” Businesses pay companies such as Facebook and Google every time a potential customer clicks on one of those banner ads or links you see online, but organized crime groups have figured out how to game the system to drive profits their way via so-called ad networks, which capitalize on all those extra clicks. Stung by the criticism, social media companies have attempted to cut back on the number of fake profiles out there. The results of Facebook’s actions were revealing: Rihanna and Shakira lost 22,000 Facebook fans, Lady Gaga had 32,000 of hers removed, and Zynga’s Texas Hold ’Em Poker had 100,000 purported supporters vanish into thin air.

  If Facebook has 140 million fake profiles, there is no way they could have been created manually one by one; there has to be something much more sinister at work, and there is. The practice is called sock puppetry and is a reference to the children’s toy puppet created when a hand is inserted into a sock to bring the object to life. In the online world, organized crime groups create sock puppets by combining computer scripting, Web automation, and social networks to create legions of online personas. This can be done easily and cheaply enough to allow those with deceptive intentions to create hundreds of thousands of fake online citizens.

  One only needs to consult a readily available online directory of the most common names in any particular country or region. Have your scripted bot merely pick a first name and a last name, then choose a date of birth and let the bot sign up for a free e-mail account. Next, scrape online photo sites such as Picasa, Instagram, Facebook, Google, and Flickr to choose an age-appropriate image to represent your new sock puppet. Armed with an e-mail address, name, date of birth, and photograph, you just need to sign up for an account on Facebook, Twitter, or Instagram. As a final step, you teach your puppets how to talk by scripting them to reach out and send friend requests, repost other people’s tweets, and randomly “like” things they see online. Your bots can even communicate and cross post with one another. Before you know it, you have thousands of sock puppets at your disposal for use as you see fit. It is these armies of sock puppets that criminals use as key constituents in their phishing attacks, to fake online reviews, to trick users into downloading spyware, and to commit a wide variety of financial frauds—all based upon misplaced trust.
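
  To underline how little machinery this takes, here is a deliberately inert Python sketch that stops at the very first step: fabricating persona records from common-name lists. The names, age ranges, and field layout are invented; everything the paragraph above describes beyond this point (photo scraping, account signup, friending scripts) is omitted.

```python
import random
from datetime import date, timedelta

# Tiny stand-ins for the "readily available online directory" of common names.
FIRST_NAMES = ["Maria", "James", "Fatima", "Wei", "Olga", "Daniel"]
LAST_NAMES = ["Garcia", "Smith", "Khan", "Chen", "Ivanova", "Okafor"]

def fake_persona():
    """Generate one inert persona record: strings only, no accounts."""
    first, last = random.choice(FIRST_NAMES), random.choice(LAST_NAMES)
    # An age-appropriate date of birth, 18 to 60 years in the past.
    dob = date.today() - timedelta(days=random.randint(18 * 365, 60 * 365))
    handle = f"{first.lower()}.{last.lower()}{random.randint(1, 999)}"
    return {"name": f"{first} {last}",
            "dob": dob.isoformat(),
            "email": f"{handle}@example.com"}

# A "legion" of a hundred thousand personas takes one line and a few seconds.
legion = [fake_persona() for _ in range(100_000)]
print(legion[0])
```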

 
