
Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon


by Kim Zetter


  When distribution lines overheat, they sag or melt. Sagging lines were the cause of the 2003 Northeast blackout that cut power to 50 million people in eight states and parts of Canada. Although a digital attack wasn’t the cause of the outage, a software bug thwarted early detection and prevention of the cascade.

  The problem began in Ohio when sagging power lines tangled with trees, but it was exacerbated by the fact that the emergency alert system at FirstEnergy’s control center in Akron failed to register faults in the system, leaving operators ignorant about deteriorating conditions. About two and a half hours before the blackout occurred, industrial customers and even other power plants were calling FirstEnergy to report low voltages and tripping transmission lines—indications that major problems were brewing in the grid. But because FirstEnergy operators didn’t see any sign of trouble on their control screens, they assumed the problem lay elsewhere. “[American Electric Power] must have lost some major stuff,” one FirstEnergy operator told a caller, pointing the finger at another utility.53 It wasn’t until the lights in FirstEnergy’s own control room went dark that operators realized the problem was with their own system. They eventually traced the glitch in the alert system to a software bug. “[The bug] had never evidenced itself until that day,” a FirstEnergy spokesman later said. “This fault was so deeply embedded, it took them weeks of poring through millions of lines of code and data to find it.”54

  An even more destructive attack than targeting distribution lines, however, would be to target equipment at substations that feed electricity to those lines. The grid consists of more than 15,000 nodes, or substations, divided into three types—generator substations that create power, transmission substations that transfer it between power lines, and distribution substations that deliver it to consumers. The majority of these are transmission substations, which are responsible for “stepping up” the voltage to transmit it long distances and then “stepping down” the voltage before it gets distributed to end users. A recent study by the Federal Energy Regulatory Commission found that an attack that took out just nine critical substations—four in the Eastern grid, three in the Western grid, and two in the Texas grid—could cause a national power outage for weeks, possibly months, creating panic and leading to loss of life.55

  The good news is that because grid systems are owned and operated by different utilities, they use different equipment and configurations, thwarting a one-size-fits-all attack and making a single widespread attack on energy systems difficult to pull off. But regional attacks and blackouts are not out of the reach of average hackers. And an attack that also destroyed industrial-sized generators at power-generation plants would make recovery more difficult. This was precisely the point of the Aurora Generator Test.

  NAMED AFTER THE Roman goddess who was mother to the four winds, the test had its origins in the cascading Northeast blackout of 2003. That blackout lasted for only two days, but it got people thinking about the possibility of remote attacks against power-generation plants that might not be so recoverable. Mike Assante was in charge of pulling a team together to test the hypothesis.

  While a naval intelligence officer in 2001, Assante had been assigned to work at the FBI’s new National Infrastructure Protection Center in Washington, DC, to research the risks posed by cyberattacks against energy infrastructures. After a year, he left the Navy to take a job with American Electric Power (AEP) in Ohio, one of the largest electric utilities in the country. AEP wanted help developing an infrastructure protection program, and it was during this time that Assante began to think about attacks that might cause physical destruction to the grid.

  While at AEP, Assante was struck by a Washington Post story about the Idaho National Lab’s SCADA test-bed program, in which workers there terrified the chairman of the Federal Energy Regulatory Commission with a simulation showing him how easily a hacker could destroy a utility’s turbine by shutting down the mechanism responsible for lubricating the machine. Without oil greasing the moving metal parts, the turbine seized up and tore itself apart.56 The chairman’s reaction to the demo was visceral. “I wished I’d had a diaper on,” he told the Post after the test.57

  Assante visited the INL lab for himself and was impressed with the group of experts the lab had recruited for its program. In addition to control-system engineers, the lab had hired a group of code warriors fresh out of high school and college who knew how to hack them. They cut through the control-system networks with little resistance, exploiting weaknesses that were invisible to the engineers who had worked on them for years. The lab also had its own substations and mini-grid—a seven-mile section of redundant grid that researchers could isolate from the public grid—to run live tests. Assante was so intrigued by the possibilities for conducting real security research on the grid—not just simulated tests—that he quit his job at AEP in 2005 and took a position with the lab.

  Once there, Assante and his colleagues began to consider scenarios for how equipment might be destroyed. Until then, most of the cyber concern around the security of the grid had been focused on someone getting into a power network to open breakers and create an outage. A power outage, however, could be resolved fairly quickly by resetting the breakers. But what about an attack that defeated or bypassed security and safety systems to physically destroy a generator that couldn’t be easily fixed?

  They decided to get at the generator by focusing the attack on protective relays—safety devices that monitor changes in the grid and are responsible for tripping breakers if conditions enter a danger zone that could harm transmission lines. Disabled protective relays played a role in a large outage in February 2008, when nearly 600,000 people in Florida lost power after a field engineer with Florida Power and Light turned off the protective relays at a substation while investigating a malfunctioning switch.58 When a fault occurred on the line that he was examining, there was nothing to keep it from radiating out. The result was a cascading outage that spread to thirty-eight substations, including one that fed electricity to a nuclear plant, causing the plant to go into automatic shutdown.

  But protective relays don’t just trigger breakers on transmission lines, they also disconnect generators and other equipment from the grid if conditions grow dangerous. The power grid operates at 60 Hz—or sixty cycles a second—and devices connected to it have to be in sync or they can be damaged. Plug something into the grid when it’s out of sync and it creates torque that can destroy the equipment. When a generator connects to the grid, the load from the grid pushes back, like the gravitational force that pushes against a car climbing a hill. But when a breaker opens up and disconnects the generator from the grid, the still-running generator speeds up in the absence of any load pushing against it. Within just 10 milliseconds, the generator will be out of sync with the grid. If the breaker then closes, bringing the generator back onto the grid while the two are out of sync, the effect is similar to a car hitting a brick wall. The generator expends too much energy that has nowhere to go, and once it hits the slower grid, the force of that energy slams back against it. It’s a well-known phenomenon that has been the cause of accidents in the past.
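  The arithmetic behind that drift is simple enough to sketch in a few lines of Python. In the snippet below, the slip frequency (how much faster the unloaded generator spins than the 60 Hz grid) is an assumed, illustrative number, not a figure from the Aurora test; the real rate of drift depends on the machine and its governor.

# Illustrative sketch: the phase angle between a free-spinning generator and
# the grid grows in proportion to the frequency mismatch. The slip value is
# an assumption for demonstration, not a measurement.

def phase_drift_degrees(slip_hz: float, seconds: float) -> float:
    """Angle by which the generator leads the grid after `seconds` of free spin."""
    return (slip_hz * seconds * 360.0) % 360.0

if __name__ == "__main__":
    slip = 3.0  # Hz faster than the grid once the load is gone (assumed)
    for ms in (10, 50, 100, 250, 500):
        angle = phase_drift_degrees(slip, ms / 1000.0)
        print(f"{ms:3d} ms after the breaker opens: {angle:5.1f} degrees out of phase")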

  So the question Assante’s team posed for their test was simple: If protective relays were supposed to prevent equipment from being damaged, what if they could be subverted to aid the equipment’s destruction? Designing such an attack turned out to be only slightly more complicated than the question. The hack involved writing malicious code to change the settings of the digital relays so that the breaker for a generator opened and closed in rapid succession, causing the equipment to disconnect from the grid quickly and repeatedly and then reconnect when it was out of sync. Without the protection of the relays, there was nothing to prevent the generator from destroying itself. “This is what made it so damn insidious,” says Joe Weiss. “The thing that was supposed to stop an attack like this from happening was the thing they used to conduct the attack.” By abruptly opening and closing the protective circuit, the relay went from “providing maximum protection to inflicting maximum damage,” the DHS later wrote in a report about the test.59
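  The logic of the test can be pictured as a short timing loop: open the breaker, let the machine slip out of phase, reclose, and repeat. The toy model below is a loose sketch of that concept, not the code used at Idaho National Laboratory; the slip frequency, the time the breaker is held open, and the stress proxy are all invented for illustration.

# Toy model of the Aurora concept. Every number and name here is an assumed,
# illustrative value; this is not the INL test code.

import math

SLIP_HZ = 3.0        # assumed speed-up of the unloaded generator
OPEN_TIME_S = 0.25   # assumed time the breaker is held open each cycle
CYCLES = 5           # number of open/close cycles in this toy run

def reclose_stress(angle_deg: float) -> float:
    """Crude proxy for the torque transient at reclosure; worst near 180 degrees."""
    return abs(math.sin(math.radians(angle_deg / 2.0)))

accumulated = 0.0
for n in range(1, CYCLES + 1):
    angle = (SLIP_HZ * OPEN_TIME_S * 360.0) % 360.0   # phase slip while open
    stress = reclose_stress(angle)
    accumulated += stress
    print(f"cycle {n}: reclosed {angle:5.1f} degrees out of phase, "
          f"stress {stress:.2f}, cumulative {accumulated:.2f}")

Each pass through the loop recloses the breaker while the generator is far out of phase, which is why even this crude model shows the cumulative stress climbing with every cycle.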

  For their victim, they chose a Wärtsilä generator that had been retired from the oil fields in Alaska and was purchased through a broker for one-third of its $1 million brand-new price tag.60

  The attack lasted three minutes but could have achieved its aim in just fifteen seconds. The researchers had built in pauses to the attack to give engineers time to assess the damage and check safety systems at each stage. Each time the circuit breaker closed on the out-of-sync generator, connecting it back to the grid, the machine jumped and vibrated from the force of its own energy hitting back, until eventually the coupling between the diesel engine and the generator broke.61

  Workers in the operations center who monitored the grid for anomalies and weren’t told of the attack before it occurred never noticed anything amiss on their monitors. The safety system that was designed to ride out little spikes and valleys that normally occurred on the grid also never registered the destructive interruption. “We could do the attack, essentially open and close a breaker so quickly that the safety systems didn’t see it,” said Perry Pederson, who headed DHS’s control-system security program at the time and oversaw the test.62
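  A rough way to see why nothing registered is to compare the speed of the breaker cycling with the pace at which an operator console samples the field. The sketch below uses assumed timings (a two-second polling interval and quarter-second breaker events); it illustrates the sampling gap, not the actual monitoring systems at the lab.

# Illustration with assumed timings: sub-second breaker events fall between
# the samples taken by a slow polling loop, so the console never sees them.

POLL_INTERVAL_S = 2.0                       # assumed console polling rate
EVENTS = [(10.10, 10.35), (14.60, 14.85)]   # assumed (open, close) times in seconds

def breaker_open_at(t: float) -> bool:
    return any(start <= t < end for start, end in EVENTS)

samples = [i * POLL_INTERVAL_S for i in range(10)]   # polls at 0 s, 2 s, ... 18 s
caught = [t for t in samples if breaker_open_at(t)]
print("polls that caught the breaker open:", caught or "none")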

  Replacing a twenty-seven-ton generator destroyed in this way wouldn’t be trivial, but it would be doable. The 800-megawatt generators at large power plants and other facilities, however, would take months or even a year to replace, since generators that size are often built to order overseas. Not all generators powering the grid would be susceptible to this attack in the same way; it would depend on how the power on that part of the grid is balanced. But the same thing could happen to critical equipment that powers things other than the grid and isn’t easily replaced. Knocking out a substation that powers a bank of pumps delivering drinking water to a major metropolitan area, for example, would cause major disruptions. “I don’t know what would happen to a big 50,000 horsepower pump, but I could imagine it would be just as bad as a generator,” Pederson said.63

  Since the Aurora test took place in 2007, there have been other demonstrations of destructive cyberattacks. In a 2009 report on 60 Minutes, researchers at Sandia National Lab showed how they could cause components at an oil refinery to overheat by simply changing the settings of a heating element and disabling the recirculation pumps that helped regulate the temperature.64

  STUXNET AND THE Maroochy Shire incident aside, there have been no truly destructive digital attacks recorded anywhere to date. Experts have offered a number of possible reasons why: such attacks are more difficult to pull off than the evidence presented here seems to indicate; those who possess the skills and resources to conduct them have simply lacked the motivation to act thus far; and those who have the will to launch such an attack don’t yet have the way.

  One thing, however, seems certain: given the varied and extensive possibilities for conducting such attacks, and the proof of concept provided by Stuxnet, it is only a matter of time until the lure of a digital assault becomes too great for someone to pass up.

 
