The fact that the peaks of southern Vancouver Island, the Olympic Peninsula in Washington, and the Coast Range in Oregon and California are not towering piles like Mount Everest could be taken as one more piece of evidence that the nonstop pressure of subduction must have been released every few hundred years by very large earthquakes. The point of this latest study was to find the edges of the locked part of the zone and figure out how much real estate was likely to slip sideways when the two plates come unstuck next time.
Roy Hyndman and Kelin Wang, coauthors with Dragert and Rogers, had been working for several years on the idea that temperature could tell the story of where these two plates of rock were bonded together by friction and where they were sliding along smoothly. Wang told me that the dangerous part of a subduction zone—the area that can generate an earthquake—cannot be very far below the surface. “It doesn’t go very deep,” he said, “because when they go deeper and the temperature is too high, the rock becomes too soft to produce earthquakes.”
Rocks go from brittle to ductile as they heat up. The brittle ones will grind against each other and friction will lock them together. Hotter, softer rocks won’t stick together. The deeper the slab of ocean floor slides during subduction, the warmer and softer the surfaces become. So at a certain depth and temperature, two tectonic plates theoretically would slide past each other without the risk of rupture. For these reasons Hyndman and Wang thought measuring heat flow could show them where the seismogenic danger zone started and stopped.
First they studied published results of laboratory experiments on rock friction and figured out how warm the ocean floor and continental crust would have to be to stick together and get locked. They knew there was a wild card in the equation—something unusual about the Cascadia Subduction Zone—that would complicate their calculations. The accretionary wedge of sand and clay piled along the edge of the fault was getting dragged down with the slice of ocean floor. It was full of seawater. How did that affect the temperature on the surface of the subduction zone? Kelin Wang had a hunch that this sediment played a major role in the size and severity of subduction zone quakes.
“There’s a tendency for the rupture to be very long at these subduction zones,” he said. Plates can slip along hundreds of miles of rock surface—from the epicenter, ripping along the “strike” of the fault at more than a mile (2–3 km) per second—all in one earthquake. “I think the amount of sediments cause faults like this to be very smooth,” Wang continued. “When the fault is smooth, the rupture has a better chance to propagate for a long way.” It’s not that the sediments lubricate the movement, he clarified. “It just makes the fault property more uniform.” Wang’s idea was that instead of the fault having a million little asperities—rough spots, cracks, and jagged edges—the ocean sediment probably coats the surfaces of the two plates, making the point of contact smoother. That way, when stress finally builds up to the point where the rocks fail, long segments fail together.
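To get a feel for the scale Wang is describing, here is a rough back-of-the-envelope sketch in Python. The rupture length and speeds are assumptions chosen only to illustrate the arithmetic, not values from his work: a front racing along the strike of a fault at 2 to 3 kilometres per second still needs several minutes to unzip many hundreds of miles of locked rock.

```python
# Rough illustration with assumed numbers: time for a rupture front to
# travel the length of a long subduction fault at typical speeds.
rupture_length_km = 1000            # assumed end-to-end rupture length
for speed_km_per_s in (2.0, 3.0):   # rupture speeds quoted in the text
    duration_min = rupture_length_km / speed_km_per_s / 60.0
    print(f"{rupture_length_km} km at {speed_km_per_s} km/s -> "
          f"~{duration_min:.0f} minutes of rupture")
```

That multi-minute rupture time is one reason the shaking from a long subduction rupture lasts so much longer than an ordinary earthquake.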
Realizing that sediment must play an important role, they had to figure out how it affected the overall temperature of the locked subduction zone. The bottom layer—the subducting slab of ocean floor—started out warm because it was so young, having been created relatively recently (in geologic time) by the hot volcanic furnace of the Juan de Fuca Ridge. Wang and Hyndman figured the layer of sediment probably acted as an insulating blanket, trapping heat in the lower plate.
As the ocean slab moved slightly deeper and the sedimentary wedge dried out, the temperature would rise to 300 degrees Fahrenheit (150°C), which laboratory experiments and research at other subduction zones had suggested was usually the minimum necessary to cause two plates to stick together and generate earthquakes. Farther down, where the temperature rose to 660 degrees Fahrenheit (350°C), the lab studies showed that rocks became too soft to stick together. With this in mind, Wang and Hyndman took measurements of heat flow in the earth, marked the starting point where the thermometer topped 300 degrees Fahrenheit and the stopping point at 660 degrees, and drew a new set of wavering gridlines on the map of the West Coast.
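The logic can be sketched with a toy calculation. The dip angle and thermal gradient below are assumptions picked purely for illustration; this is not Hyndman and Wang's published thermal model, which also has to account for the warm, young slab and the insulating sediment blanket described above:

```python
import math

# Toy sketch with assumed values -- not the published thermal model.
dip_deg = 10.0          # assumed average dip of the plate interface
grad_c_per_km = 15.0    # assumed temperature rise per kilometre of depth

def downdip_distance_km(temp_c):
    """Distance down the dipping interface at which it reaches temp_c."""
    depth_km = temp_c / grad_c_per_km
    return depth_km / math.sin(math.radians(dip_deg))

start_km = downdip_distance_km(150.0)   # ~300 F: rocks begin to stick
stop_km = downdip_distance_km(350.0)    # ~660 F: too soft to stick
print(f"locked, quake-capable zone: ~{start_km:.0f} to ~{stop_km:.0f} km "
      f"down-dip (width ~{stop_km - start_km:.0f} km)")
```

Even with these made-up numbers the seismogenic band comes out a few tens of kilometres wide, the same order of magnitude as the locked zone Hyndman and Wang mapped.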
They found that the “fully locked zone” capable of generating quakes was about forty miles (65 km) wide, running roughly north–south parallel to the coast in the offshore region beneath the outer continental shelf. The inner edge of the transition zone barely reached the western beaches of Vancouver Island. The locked zone was slightly wider in Oregon and almost twice as wide along Washington’s Olympic Peninsula.
The report issued by the Canadian team pointed optimistically to the fact that the energy released in Cascadia’s next big event would be restricted to this relatively narrow band of rock beneath the continental shelf west of Vancouver Island. “This reduces the expected ground motion and amplitude at the major coastal cities,” they wrote. With the city of Vancouver ninety miles (150 km) east of the locked zone, the maximum intensity of shaking when the shockwave arrived might also be somewhat less than some pessimists were expecting. So that was the good news.
The bad news was that the rupture would be shallow and therefore have a greater potential to generate large tsunamis. And the long duration (three minutes or more) of shaking would not be reduced by distance from the zone. The quake would still cause major damage, especially to tall buildings, and “events well over magnitude 8 are still possible,” according to the report. A separate paper published a short time later by Garry Rogers also reminded readers that being ninety miles away from the locked zone didn’t mean cities like Vancouver would get an exemption from Cascadia’s effects. “Anchorage is about the same distance from the down-dip end of the Alaska seismogenic zone as Vancouver is from the Cascadia seismogenic zone,” he wrote.
The same kinds of cautionary words would also apply to Victoria, Seattle, Tacoma, Portland, and other cities as far south as Sacramento. Yes, the locked zone is a fair distance from the major urban areas—not directly underneath—but don’t forget that Mexico City was more than 200 miles (over 350 km) away from the 1985 epicenter and look what happened there. So “the zone” had been defined as a swath about forty miles (65 km) wide, locked and ready to rip.
Years later, in 2009, a study led by Timothy Melbourne of Central Washington University would suggest the locked part of the fault was even closer to the big urban areas—within fifty miles (80 km) of Seattle, Tacoma, and Portland. But the question of whether the entire plate boundary would break all at once from end to end remained unanswered. If Cascadia’s fault broke today, would it start as a magnitude 8.8 in northern California and continue northward with several more huge quakes over the next decade? Or would it slip all at once in a magnitude 9.2 mega-disaster? To this day, nobody really knows.
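The gap between those two scenarios is easier to see with the standard seismic-moment bookkeeping, in which the moment is rigidity times rupture area times average slip and the magnitude follows as Mw = (2/3)(log10 M0 - 9.1). The rupture dimensions, slip, and rigidity in the sketch below are assumptions chosen only to show the scale of the contrast, not figures from the studies described here:

```python
import math

def moment_magnitude(length_km, width_km, slip_m, rigidity_pa=3.0e10):
    """Moment magnitude Mw from rupture area and average slip (assumed values)."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    seismic_moment_nm = rigidity_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(seismic_moment_nm) - 9.1)

# Illustrative scenarios on a locked zone roughly 65 km (40 miles) wide:
print(f"one segment breaks:  M{moment_magnitude(350, 65, 12):.1f}")   # ~M8.5
print(f"full margin breaks:  M{moment_magnitude(1000, 65, 18):.1f}")  # ~M9.0
```

Roughly speaking, it is the difference between breaking one segment at a time and unzipping the whole margin at once that separates the high 8s from a 9.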
For Stephanie Fritts in Pacific County, Washington, the second alarm came eight years after the first. She was working as a volunteer on the ambulance squad on October 4, 1994, when another subduction earthquake (a magnitude 8.2 in the Kuril Islands north of Japan) ripped the seabed of the North Pacific. The Alaska Tsunami Warning Center in Palmer, Alaska, quickly issued a standby alert to emergency officials in Hawaii and to the entire west coast of North America. When the notification hit the desk of Stephanie’s boss, Sheriff Jerry Benning, jaws clenched and sparks began to fly.
Sheriff Benning was notified that if a wave was generated, it would take six or seven hours to reach the Washington shore. There were still no deep-ocean warning buoys, so there was still no way of knowing how big the wave might be—or even that a wave had been generated for sure—until it passed some place like Hawaii on its way toward North America. Benning clearly did not like having to second-guess the scientists.
Emergency officials all along the west coast monitored the situation nervously for the first several hours. Fritts recalls hearing Sheriff Benning and the county commissioners make a telephone call to some distant island across the Pacific. They were told there had been “no significant sea level change,” so they weighed the options, thought about what had happened last time, and decided not to issue an evacuation order.
When officials at the State Capitol in Olympia were informed of the Pacific County decision, they nearly blew a fuse. Two hours before the tsunami was due to arrive, they contacted county headquarters in South Bend and threatened that if the county didn’t issue an evacuation order immediately, the state would. But by that time it was already too late. The 1986 experience had convinced the sheriff and his team that a full evacuation would take at least four or five hours. Evacuations are dangerous. People get hurt. People could die if there was panic.
The Pacific County Emergency Management Council convened an emergency meeting. They were furious that the state government would make what they considered a rash decision at such a late hour. They rang Olympia and told state officials that according to their reading of state law, the authority to evacuate belongs to local government. It was Sheriff Benning’s call.
Benning resolutely stood his ground—no evacuation order. Meantime thousands of people had heard about the distant quake and possible tsunami on the radio. Confusion and anxiety spread as the wave got closer and closer. Finally, when the tsunami did arrive, it turned out to be only five and a half inches (14 cm) higher than the normal tide.
In early 1996, a little over a year after the second tsunami fiasco, Stephanie Fritts got hired full time to run the Pacific County Emergency Management Agency, a job she still holds today. Her primary day-to-day responsibility is the 911 call dispatch center, but she’s also in charge of emergency planning. On day one at work, the Emergency Management Council informed her, “Your number one task is to fix this!”—meaning the tsunami problem. They wanted her to come up with a strategy for dealing with potential killer waves, a rational evacuation plan. At the same time they wanted her to do some digging and find a way to solve the false alarm problem.
Fritts knew next to nothing about tsunamis, so she pulled out a phone book and started contacting people who were already working on the issue. She helped to organize a series of public meetings and invited earthquake program manager Chris Jonientz-Trisler from the Federal Emergency Management Agency to drive out to the coast from her office in Seattle and share what she knew about tsunamis. Complaints about the false or overstated warnings came up at every meeting.
Scores of people, including mayors and local officials, stood up in community halls expressing anger and frustration about the warning system. They wanted answers that nobody really had. The technology available at the time was simply incapable of detecting the size of a wave in the immediate aftermath of an offshore rupture. Jonientz-Trisler told them as much as she knew, based on her experience at FEMA, but the bottom line was that until newer, more sophisticated equipment was developed, pretty much the same confusing and scary uncertainty would spread like a virus down the coast every time a big ocean rupture happened. The citizens of Pacific County made it known that the status quo was simply unacceptable.
By the fall of 1994, Eddie Bernard and his team at NOAA’s Pacific Marine Environmental Laboratory in Seattle were quite far along in developing better technology that would eventually reduce tsunami false alarms. Work on the warning system had begun in 1946 after a tsunami generated in the Aleutian Islands devastated Hilo, Hawaii. The original Pacific Tsunami Warning Center was set up in Ewa Beach, Hawaii, and was operational by 1949. Unfortunately there was a 75 percent false alarm rate in the early years.
Upgrades were installed after the 1960 Chilean quake killed dozens in Hawaii and as many as two hundred people in Japan. After the 1964 Alaska event, yet another series of improvements was made and a second warning center was established in Palmer to concentrate on Alaska, British Columbia, and the U.S. west coast states.
The physical detection and measurement of waves in the deep ocean remained the primary technological challenge. While NASA satellites vastly improved the ability to study the atmosphere and the ocean’s surface from space—which in turn allowed hurricane scientists to look down from above through the eyes of storms and begin forecasting what big waves and seawater surges would do as they approached the coast—there was still no real-time alarm system that could tell the scientists at NOAA that a seafloor rupture had triggered a tsunami.
Up to that time the two tsunami warning centers depended on data from a network of seismographs to tell them exactly where an earthquake had occurred and what the magnitude was. If they confirmed that the epicenter was under the ocean floor—and if the magnitude was greater than 7—it was entirely possible (but not guaranteed) that a tsunami had been generated. Whether or not to issue a warning was still an educated guess because sometimes the sea floor moves horizontally more than it does vertically. And without the vertical upheaval, a mountain of water does not get lifted and no significant wave is created.
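Stripped to its bones, the judgment the duty officers had to make looked something like the sketch below. It is a caricature of the reasoning, not the warning centers' actual software, and the magnitude threshold is simply the one quoted above:

```python
def tsunami_call(magnitude, epicenter_under_ocean):
    """Caricature of the pre-tsunameter decision: seismographs give
    location and magnitude, but say nothing about the wave itself."""
    if not epicenter_under_ocean:
        return "no warning"
    if magnitude > 7.0:
        # A tsunami is possible, but mostly horizontal slip may have
        # lifted little or no water -- the root of the false alarms.
        return "issue warning (educated guess)"
    return "keep watching"

print(tsunami_call(8.2, epicenter_under_ocean=True))
```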
When a tsunami reaches the nearest shore, tide gauges register the sudden sea-level change. By the time the reading is taken, though, it’s often too late to issue a warning to nearby residents. So those living closest to the zone that triggers the wave are simply out of luck.
There would, of course, still be plenty of time to warn coastal communities on the far side of the ocean. The waves will take hours to cross the sea. But if those tide gauges closest to the zone did not survive the initial impact—if the equipment was destroyed by the force of the incoming wave—there would still be no measurement of how big the tsunami was. Decision makers on duty in the warning centers would not be able to tell people living thousands of miles away what to expect when the waves finally reached them. So the decision to issue a warning, or not, was frequently based on incomplete evidence. NOAA had no hard physical data of its own that confirmed the creation, size, and movement of big waves. The system could not avoid false alarms or the tendency to overstate potential threats.
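The hours of lead time on the far shore come straight from the physics of long ocean waves, which travel at roughly the square root of gravity times water depth. The depth and crossing distance below are assumed round numbers, used only to show the scale:

```python
import math

G = 9.81  # m/s^2

def tsunami_speed_kmh(depth_m):
    """Long-wave speed c = sqrt(g * depth), converted to km/h."""
    return math.sqrt(G * depth_m) * 3.6

depth_m = 5000.0        # assumed average depth along the path
distance_km = 5000.0    # assumed distance to the far shore
speed_kmh = tsunami_speed_kmh(depth_m)
print(f"~{speed_kmh:.0f} km/h, so {distance_km:.0f} km of open ocean "
      f"takes ~{distance_km / speed_kmh:.1f} hours")
```

A wave moving at jetliner speed gives the far side of an ocean hours of notice, and the near shore almost none.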
It was a problem that had to be fixed and Stephanie Fritts wasn’t the only one trying to deal with the downstream consequences. Emergency planners in British Columbia and all five Pacific states had already been through enough false alarms to know that evacuations were not only dangerous and disruptive—they were also expensive.
In full-scale evacuations, businesses were suspended. Factories had to be shut down. Restarting complex equipment and production lines always took time and money. In Hawaii, state officials estimated that a single false alarm cost the local economy nearly $60 million. Not surprisingly, they made it known to NOAA and other federal agencies that the need to “confirm potentially destructive tsunami impacts and reduce false alarms” was their top priority. Stephanie Fritts and Sheriff Jerry Benning in Pacific County knew exactly how the Hawaiian authorities felt.
What Eddie Bernard and a team of more than twenty-five PMEL engineers, technicians, and scientists, along with eighty-five partner companies and suppliers, came up with was a four-stage warning system they called a “tsunameter,” which does for wave detection what seismographs do for earthquake measurement.
They started with a device that records pressure changes at the bottom of the ocean. Waves whipped up by storms or hurricanes affect only the surface layer of the sea. A subduction earthquake lifts the entire water column from bottom to top. When a mound of seawater several miles deep is lifted and breaks into a series of waves that start to roll across the ocean, the weight of all that water can be measured as a change in pressure when the wave passes over the bottom pressure recorder (BPR) developed by the team at PMEL. The BPR had to be able to function under almost twenty thousand feet (6,000 m) of water without needing maintenance for at least two years.
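The measurement the BPR has to make is extraordinarily delicate. The signal is just the weight of the extra water in the passing wave (pressure change equals density times gravity times wave height), riding on top of the crushing static pressure of the deep ocean. The density, depth, and wave height below are assumed values for illustration:

```python
RHO = 1025.0   # kg/m^3, assumed seawater density
G = 9.81       # m/s^2
depth_m = 6000.0          # roughly the BPR's maximum working depth
wave_height_m = 0.02      # assumed 2 cm open-ocean tsunami amplitude

ambient_pa = RHO * G * depth_m        # static pressure of the water column
signal_pa = RHO * G * wave_height_m   # extra pressure as the wave passes over

print(f"ambient ~{ambient_pa / 1e5:.0f} bar; tsunami signal ~{signal_pa:.0f} Pa, "
      f"about {signal_pa / ambient_pa:.1e} of the total")
```

In other words, the recorder has to pick a change of a few parts per million out of a six-hundred-bar background, and keep doing it unattended for two years.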
The second stage of the system involved an acoustical transmission device that could beam the pressure data up to a buoy tethered by cable at the surface. This turned out to be the greatest engineering challenge of all, although they eventually found a way to do it. A deep-ocean buoy technology had already been developed for NOAA’s Tropical Atmosphere Ocean (TAO) weather forecasting system, but the gear needed modifications to make sure it could survive the frequent and more severe storms of the North Pacific.
In the third stage, the buoys would relay the pressure data from the BPR to an orbiting satellite that would beam the signal back to land. In the fourth and final stage, the data would be received and processed at the two Pacific tsunami warning centers.
That was the plan. Making it happen was something else. They had to build and deploy a new generation of buoys that could withstand an entire year on the wild and turbulent surface of the North Pacific. The equipment for each tsunameter—the BPR, acoustical transmitter, buoy, and satellite relay—cost roughly $250,000, plus another $30,000 per year for maintenance. The most expensive part of the process, however, would be delivering and anchoring the buoy systems in the deep ocean, using ships that cost roughly $22,000 per day to operate.
A prototype to be deployed two hundred miles (320 km) off the coast of Oregon was ready to go by September 1997. It quickly delivered an accurate stream of data, so NOAA decided to install two more. It would take eight different ships on eighteen cruises—more than ninety days of sea time—to set up this initial three-station array. The good news was that it worked better than expected. It was transmitting tsunami data with a reliability factor of 97 percent—much higher than the 80 percent success rate they had hoped for from a prototype.
That was just the beginning. Since the Ring of Fire’s subduction zones constantly eat slabs of sea floor, there was an enormous amount of real estate to cover: more than 5,600 miles (9,000 km) of plate boundaries and grinding trenches that could create large earthquakes and trigger tsunamis. NOAA figured they would need buoys spaced 125 to 250 miles (200–400 km) apart to “reliably assess the main energy beam” of a tsunami generated by a magnitude 8 event. Full coverage would require deployment of twenty-five to fifty tsunameter stations.
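The twenty-five-to-fifty figure is essentially the coverage arithmetic: divide the length of tsunami-generating boundary by the allowable spacing. A trivial sketch using the numbers quoted above:

```python
boundary_km = 9000.0                 # plate boundaries needing coverage
for spacing_km in (400.0, 200.0):    # NOAA's quoted spacing range
    stations = boundary_km / spacing_km
    print(f"{spacing_km:.0f} km spacing -> ~{stations:.0f} stations")
```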