Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon


by Kim Zetter


  Many SCADA field devices, if not connected directly to the public internet, are accessible via modem and are secured only with default passwords. Switches and breakers for the power grid, for example, are often set up this way with default passwords so that workers who need to access them in an emergency will remember the password. For the same reason, control systems aren’t generally designed to lock someone out after several failed password attempts—a standard security feature in many IT systems to prevent someone from brute-forcing a password with multiple guesses—because no one wants a control system to lock out an operator who mistypes a password a few times in a state of panic. In 2011, a test team led by security researcher Marc Maiffret penetrated the remote-access system for a Southern California water plant and was able to take control of equipment the facility used for adding chemicals to drinking water. They took control of the system in just a day, and Maiffret said it would have taken just a couple of additional steps to dump chemicals into the water to make it potentially undrinkable.38

  Making critical systems remotely accessible from the internet creates obvious security risks. But if Stuxnet proved anything, it’s that an attacker doesn’t need remote access to attack a system—instead, an autonomous worm can be delivered via USB flash drive or via the project files that engineers use to program PLCs. In 2012, Telvent Canada, a maker of control software used in the smart grid, was hacked by intruders linked to the Chinese military, who accessed project files for the SCADA system the company produced—a system installed in oil and gas pipelines in the United States as well as in water systems. Telvent used the project files to manage the systems of customers. Though the company never indicated whether the attackers modified the project files, the breach demonstrated how easily an attacker might target oil and gas pipelines by infecting the project files of a company like Telvent.39

  Direct computer network intrusions aren’t the only concern when it comes to critical infrastructure, however. There are documented cases of electromagnetic pulse (EMP) disturbances interfering with SCADA systems and field devices. In November 1999, the radar system of a US Navy ship conducting exercises twenty-five miles off the coast of San Diego disrupted the wireless networks of SCADA systems at local water and electric utilities. The disturbance prevented workers from opening and closing valves in a pipeline, forcing them to dispatch technicians to remote locations to activate the valves manually and prevent water from overflowing reservoirs. EMP disturbances were also responsible for a gas explosion near the Dutch naval port of Den Helder in the late ’80s, when a naval radar system caused the SCADA system for a natural gas pipeline to open and close a valve.40

  OVER THE YEARS, numerous doomsday scenarios have explored the possible consequences of a massive cyberattack.41 But to date, no such attack has occurred, and unintentional events involving control systems have far outnumbered intentional ones.

  But one need only look at accidental industrial disasters to see the extent of damage a cyberattack could wreak, since the consequences of an industrial accident can often be replicated in an intentional attack. A smart hacker could simply study the causes and effects of an accidental disaster reported in the news and use them to design an attack that would achieve the same destructive results.

  The NSA’s Keith Alexander has cited the catastrophic accident that occurred at the Sayano-Shushenskaya hydroelectric plant in southern Siberia as an example of what could occur in an attack.42 The thirty-year-old dam, the sixth largest in the world, was eight hundred feet high and spanned about half a mile across a picturesque gorge on the Yenisei River. In 2009, a catastrophic turbine failure destroyed the plant’s machine hall and killed seventy-five people.

  Just after midnight on August 17, a 940-ton turbine in the dam’s power-generation plant was hit with a sudden surge of water pressure that knocked it off its bolts and sent it shooting into the air. A geyser of water flooded the engine room from the shaft where the turbine had been, causing massive damage to more than half a dozen other turbines, triggering multiple explosions, and collapsing the roof.

  The catastrophe was attributed in part to a fire at the Bratsk power station some five hundred miles away that caused the energy output from Bratsk to drop, forcing the turbines at Sayano-Shushenskaya to pick up the load. But one of those turbines was already at the end of its life and had been vibrating dangerously on and off for a while. A new control system had been installed months earlier to stabilize the machine, but the vibrations from the added workload proved to be too much. The turbine sheared off the bolts holding it down and became unmoored. Surveillance images showed workers scrambling over equipment to flee the site. The disaster killed seventy-five workers, flooded the surrounding community, spilled 100 tons of oil into the Yenisei River, and killed 4,000 tons of trout in local fisheries. Experts calculated that repairs would take four years and cost $1.3 billion.43

  The June 1999 pipeline explosion in Washington state also presented a blueprint for hackers to follow. In that case, a 16-inch-diameter pipeline belonging to the Olympic Pipe Line Company in Bellingham ruptured and spewed more than 237,000 gallons of gasoline into a creek in Whatcom Falls Park. Gasoline poured out of the pipe for ninety minutes before it ignited into a fireball that stretched 1.5 miles downstream, killing two ten-year-old boys and a teenager and injuring eight others. Although multiple issues contributed to the disaster, including improperly configured valves and a backhoe that weakened part of the pipe, an unresponsive control system also played a role. “[I]f the SCADA system computers had remained responsive to the commands of the Olympic controllers,” investigators found, “the controller operating the accident pipeline probably would have been able to initiate actions that would have prevented the pressure increase that ruptured the pipeline.”44

  It took operators more than an hour to register the leak, and by then residents were already calling 911 to report a strong smell of petroleum in the creek. Although the leak wasn’t caused by hackers, investigators found a number of security problems with Olympic’s system that made it vulnerable to attack. For example, the company had set up remote dial-in access for its SCADA control system that was secured only with a username and password, and its business and SCADA networks were interconnected. Although the two networks were connected by a bridge that provided some protection against a casual intruder, the connection lacked a robust firewall, virus protection, and access monitoring, raising the possibility that a determined attacker could break into the business network from the internet, then jump to the critical SCADA network.

  The natural-gas pipeline explosion in San Bruno, California, in 2010 was another worst-case scenario that served as a cautionary tale. The explosion occurred after maintenance on an uninterruptible power supply, or UPS, caused the SCADA system to lose power. A control valve on the pipeline was programmed to fall open automatically if the SCADA system lost power; as a result, gas poured into the pipeline unimpeded, causing pressure to build in the aging structure until it burst. Since the SCADA system had lost power, operators couldn’t see what was happening in the pipeline.45

  Then there was the collapse of a dike in Missouri in December 2005. The disaster began when sensors on the dam wall became detached from their mounts and failed to detect when the dam’s 1.5 billion-gallon reservoir was full. As pumps continued to feed water to the reservoir, a “fail-safe” shutdown system also failed to work.46 The overflow began around 5:10 a.m. and within six minutes a 60-foot section of the parapet wall gave way. More than a billion gallons of water poured down Proffit Mountain, sweeping up rocks and trees in its massive embrace before entering Johnson’s Shut-Ins State Park and washing away the park superintendent’s home—with him and his family still in it—and depositing them a quarter of a mile away.47 No one was seriously injured, but cars on a nearby highway were also swept up in the torrent, and a campground at the park was flooded. Luckily, because it was winter, the campsite was empty.

  Railway accidents also provide blueprints for digital attacks. The systems that operate passenger trains combine multiple, often interconnected components that provide possible avenues for attack: access-control systems to keep nonticketed pedestrians out of stations, credit-card processing systems, digital advertising systems, lighting management, and closed-circuit TVs, not to mention the more critical systems for fire and emergency response, crossings and signals control, and the operation of the trains themselves. In the past, these systems were separate and did not communicate with one another except through wires. But today the systems are increasingly digital and interconnected, including systems that communicate via radio signals and transmit commands in the clear. Although rail systems have redundancies and fail-safe mechanisms to prevent accidents from occurring, interconnecting so many systems creates the opportunity for misconfigurations that could allow someone to access the safety systems and undermine them.

  On June 22, 2009, a passenger train in the DC Metro system collided during the afternoon rush hour with another train stopped on the tracks, killing one of the operators and eight passengers, and injuring eighty others. Malfunctioning sensors on the track had failed to detect the presence of the stopped train and communicate that to the moving train. Although the latter train was equipped with anti-collision sensors that should have triggered its brakes when it got within 1,200 feet of the other cars, that system had failed too, and for some reason the operator never applied the manual brakes. A decade earlier, communication relays on the same Metro system had sent incorrect instructions to trains on several occasions—one time telling a train to travel 45 miles per hour on a section of track with a 15-mile-per-hour speed limit.48

  These incidents were all accidental, but in 2008 a fourteen-year-old boy in the Polish city of Łódź caused several trams to derail when he used the infrared port of a modified TV remote control to hijack the tram network’s signaling system and switch tracks. Four trams derailed, and twelve people were injured.49

  ALTHOUGH THERE ARE many different ways to attack critical infrastructure, one of the most effective is to go after the power grid, since electricity is at the core of all critical infrastructure. Cut the power for a prolonged period, and the list of critical services and facilities affected is long—commuter trains and traffic lights; banks and stock exchanges; schools and military installations; refrigerators controlling the temperature of food and blood supplies; respirators, heart monitors, and other vital equipment in hospitals; runway lights and air traffic control systems at airports. Emergency generators would kick in at some critical facilities, but generators aren’t a viable solution for a prolonged outage, and in the case of nuclear power plants, a switch to generator power triggers an automatic, gradual shutdown of the plant, per regulations.

  One way to target electricity is to go after the smart meters that electric utilities have been installing in US homes and businesses by the thousands, thanks in part to a $3 billion government smart-grid program that has accelerated the rollout of the meters without first ensuring that the technology is secure.

  One of the main problems security researchers have found with these meters is a remote-disconnect feature that allows utility companies to initiate or cut off power to a building without having to send a technician. But an attacker who seized control of the meters could use the same feature to disconnect power to thousands of customers in a way that would not be easily recoverable. In 2009, a researcher named Mike Davis developed a worm that did just this.

  Davis was hired by a utility in the Pacific Northwest to examine the security of smart meters the company planned to roll out to customers. As with the Siemens PLCs that Beresford examined, Davis found that the smart meters were promiscuous and would communicate with any other smart meters in their vicinity as long as they used the same communication protocol. They would even accept firmware updates from other meters. All an attacker needed to update the firmware on a meter was a network encryption key. But since all the meters the company planned to install had the same network key embedded in their firmware, an attacker only had to compromise one meter to extract the key and use it to deliver malicious updates to other meters. “Once we had control of one device, we had pretty much everything we needed,” Davis said. “That was the case across a bunch of meters that we had looked at from different vendors.”50

  The meters communicated with one another via radio and were always in listening mode to detect other meters nearby. Some meters could communicate with one another from miles away. The ones Davis examined had a reach of about 400 feet, a little longer than the length of a football field—which was more than enough to propagate a malicious update between neighboring houses that would shut off the electricity and spread the worm to additional meters. Davis didn’t even need to compromise an existing meter at a house to get the infection going; he could simply buy his own meter of the same brand—as long as it spoke the same protocol—and load it with malware and the necessary encryption key, then place it in the vicinity of a metered house. “Because of the radio, it’s going to get picked up automatically [by other meters around it],” Davis says. Once the update was complete, the victim meter would restart with the new firmware in place and automatically begin spreading its update to other meters within range, setting off a chain reaction. Operators wouldn’t know anything had changed with the meters until power started dropping out in neighborhoods.

  Normally the vendor’s meters got upgraded remotely through a utility company’s central network, or via a technician in the field who used a special dongle connected to a laptop to communicate wirelessly with the meters. So when Davis and his team told the vendor they could write software that propagated automatically from one meter to another without using the central computer or a dongle, the vendor scoffed and said the meters didn’t have the ability to initiate a firmware update to other meters. “They told us … that wasn’t part of their feature set,” Davis recalls. “We said we know, we added the feature [to our malicious firmware update].” The vendor still didn’t believe a worm would have much effect, so Davis wrote a program to simulate an infection in a residential neighborhood of Seattle that in a day spread to about 20,000 smart meters.51 “We had pretty much full compromise by the end of the twenty-four-hour cycle,” he says. The infection spread one meter at a time, but a real-world attack would move much more quickly since an attacker could send out a plague of firmware updates from multiple patient zeros located strategically throughout a city.
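
  The mechanics of such a chain reaction are simple enough to sketch. The toy simulation below is not Davis’s model; the grid layout, the 400-foot radio range, and the per-wave update time are illustrative assumptions. It shows only how a single compromised meter within radio range of its neighbors can cascade through a neighborhood one wave of firmware updates at a time.

  # Toy model of meter-to-meter worm propagation (illustrative only; not
  # Davis's simulation). Each "wave" pushes the malicious firmware to every
  # clean meter within radio range of an already-infected one.
  import math

  RADIO_RANGE_FT = 400     # approximate reach of the meters Davis examined
  UPDATE_MINUTES = 3       # assumed time for one wave of firmware updates
  HOUSE_SPACING_FT = 100   # assumed spacing between metered houses

  # Lay out a 30 x 30 block of houses, one meter per house.
  meters = [(x * HOUSE_SPACING_FT, y * HOUSE_SPACING_FT)
            for x in range(30) for y in range(30)]
  infected = {0}           # "patient zero": one attacker-supplied meter

  minutes = 0
  while True:
      # Every clean meter within radio range of an infected one gets the update.
      reachable = {j for j in range(len(meters)) if j not in infected
                   and any(math.dist(meters[i], meters[j]) <= RADIO_RANGE_FT
                           for i in infected)}
      if not reachable:
          break
      infected |= reachable
      minutes += UPDATE_MINUTES
      print(f"{minutes:4d} min: {len(infected)} of {len(meters)} meters compromised")

  Even in this simplified form, the rate of spread is governed by the meters’ radio range and density rather than by anything on the utility’s central network, which is why operators in Davis’s scenario would see nothing amiss until the power started dropping out.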

  The vendor scoffed at Davis’s simulation, too, saying a worm would take two to four minutes to update each meter’s firmware, giving technicians time to spot the outage before too many customers lost electricity and to send out a remote firmware update to turn their power back on.

  That’s when Davis delivered his final blow and told the vendor that his malicious software didn’t just turn the power off, it also deleted the firmware update feature on the meters so they couldn’t be updated again to restore power. Technicians would have to replace the meters at each house or take them back to the lab and flash their chips with new firmware. “That actually seemed to get their attention more than anything,” he says. “We were able to prove the point that this could get out of hand well before they would be able to figure out what’s going on.”

  Since conducting the simulation, Davis has seen vendors improve their meters. Some vendors now use multiple network keys on their meters, assigning a different key for different neighborhoods to limit the damage an attacker could do with a single key. But the remote disconnect is still a problem with most smart meters, since an attacker who breaches a utility’s central server could do what Davis’s worm did, but in a much simpler way. “Were [the remote disconnect] not in there, none of this would really be all that much of an issue,” Davis says. “In my opinion, if it’s got the remote disconnect relay in it, whether it’s enabled or not … it’s a real big, ugly issue.”

  Going after smart meters is an effective way to cut electricity. But an even more effective and widespread attack would be to take out generators that feed the grid or the transmission systems that deliver electricity to customers. Defense Secretary Leon Panetta said at his confirmation hearing in June 2011 that the next Pearl Harbor the nation experiences could very well be a cyberattack that cripples the grid.

  The North American power grid is large and complex and actually consists of three large regional grids—known as the Eastern, Western, and Texas Interconnections. The grids are composed of more than 450,000 miles of high-voltage transmission lines owned and operated by about three thousand utilities. Because power is traded on energy markets, it sometimes gets routed long distances between and within states to fulfill demand, such as by Cal-ISO, the entity that was hacked in 2001. Although the existence of many independent systems means that an attack on one utility or substation will have a limited effect, their interconnectedness means that a coordinated and strategic attack on a number of systems could cause cascading blackouts that are difficult to fix and plunge users into darkness for weeks.52

  For example, circuit breakers that monitor distribution lines are designed to sense a dangerous surge on the lines and open to disconnect them from the grid to prevent them from being damaged. When one breaker trips, however, the power from that line gets redirected to other lines. If those lines reach capacity, their breakers will also trip, creating a blackout. But a well-crafted attack could trip the breakers on some lines while manipulating the settings on others to prevent them from tripping, causing the lines to overheat when they exceed capacity.
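
  The dynamic can be illustrated with a crude sketch. The line names, loads, and capacities below are invented, and a real grid would require proper power-flow analysis, but the toy model captures the two levers described above: tripping one breaker so its load shifts onto neighboring lines, and tampering with another line’s protection settings so that it overloads instead of disconnecting.

  # Crude illustration of a cascading overload (invented numbers; not a real
  # power-flow model). Load from a tripped line is shared evenly among the
  # lines still in service.
  lines = {
      "A": {"load": 90.0, "capacity": 100.0, "breaker": True,  "in_service": True},
      "B": {"load": 85.0, "capacity": 100.0, "breaker": True,  "in_service": True},
      "C": {"load": 85.0, "capacity": 100.0, "breaker": False, "in_service": True},  # trip settings tampered with
      "D": {"load": 65.0, "capacity": 100.0, "breaker": True,  "in_service": True},
  }

  def trip(name):
      """Open a line's breaker and shift its load onto the lines still in service."""
      shed = lines[name]["load"]
      lines[name].update(in_service=False, load=0.0)
      survivors = [n for n, l in lines.items() if l["in_service"]]
      if not survivors:
          print(f"line {name} tripped; nothing left in service: regional blackout")
          return
      for n in survivors:
          lines[n]["load"] += shed / len(survivors)
      print(f"line {name} tripped; {shed:.0f} MW shifted to {', '.join(survivors)}")

  trip("A")                    # step 1 of the hypothetical attack: force one breaker open

  cascading = True
  while cascading:             # step 2: the shifted load works its way through the rest
      cascading = False
      for name, line in lines.items():
          if line["in_service"] and line["load"] > line["capacity"]:
              if line["breaker"]:
                  trip(name)   # normal protection: the line disconnects and the blackout spreads
              else:
                  print(f"line {name} overloaded at {line['load']:.0f} MW with its protection disabled")
                  line.update(in_service=False, load=0.0)
              cascading = True
              break

  In this sketch the tampered line is the one left carrying an overload with no protection to save it, which is exactly the combination of tripped breakers and disabled trip settings described above.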

 
