Power Hungry


by Robert Bryce


  The physical production limits on oil and coal may keep carbon dioxide emissions far below the projections put forward by the Intergovernmental Panel on Climate Change, which has said that carbon dioxide concentrations could reach almost 1,000 parts per million by 2099.74 In his analysis, Rutledge predicted that due to peak coal, global carbon dioxide concentrations will not rise much above 450 parts per million by 2065. If his predictions are correct, then some of the worry about future carbon dioxide emissions may be misplaced.

  But N2N offers the most viable way to hedge our bets with regard to both peak oil and peak coal. To be sure, both oil and coal will continue to be key sources of primary energy throughout the world for the rest of the twenty-first century, but it is also apparent that the inevitable production plateaus of those fuels will force consumers to find alternatives. And natural gas and nuclear power are the only alternatives that can provide the scale of energy supplies needed to substitute for some of the expected declines in oil and coal production. In other words, natural gas and nuclear power can be used as a hedge against the looming “twin peaks.”

  Embracing N2N can also help the world deal with another megatrend: increasing urbanization. In his book Whole Earth Discipline, Stewart Brand wrote, “In 1800 the world was 3% urban; in 1900, 14% urban; in 2007, 50% urban. The world’s population crossed that threshold—from a rural majority to an urban majority—at a sprint. We are now a city planet.” According to Brand, “Every week there are 1.3 million new people in cities. That’s 70 million a year, decade after decade. It is the largest movement of people in history.”75

  The migration to cities requires more clean energy sources that people can use in their homes. Coal and wood won’t do; natural gas and nuclear power are the obvious choices to provide the cooking and heating fuel that city-dwellers need as well as the electricity they use to turn on their lights and keep their computers, entertainment centers, and appliances running.

  Given these megatrends—decarbonization, increased use and availability of gaseous fuels, concerns about peak oil and peak coal, increasing urbanization, and continuing worries about carbon dioxide emissions—it makes sense for the United States to begin promoting N2N as a winning long-term strategy. Natural gas and nuclear power plants require far less land than wind and solar installations; both have lower carbon emissions than oil or coal; they emit no air pollutants; and both pass the challenges of cost and scale.

  Although N2N is the obvious way forward, we must be realistic. Adding significant quantities of new nuclear power capacity in the United States will take decades. In the meantime, the United States should focus on the first N: natural gas. But emphasizing natural gas will be difficult, particularly given the strength of the coal lobby in Congress. It will also require Congress and federal regulators to overcome a decades-long parade of bad legislation that has had the perverse effect of stifling U.S. natural gas production and reducing gas’s share of the U.S. primary energy market. Much of that legislation was based on a mistaken notion that America’s natural gas resources were running out.

  Thus, before looking forward to what should happen next with the U.S. gas sector, we need to take a quick look back at the history of an energy source that was once viewed as being nearly worthless.

  CHAPTER 22

  A Very Short History of American Natural Gas and Regulatory Stupidity

  The choice is not between cheap and expensive natural gas, because there is no such thing as a plentiful supply of cheap gas.

  Editorial, Los Angeles Times, May 26, 19751

  FOR DECADES, the calculus in the oil field was simple: Oil was cash. Natural gas was trash.

  Oil was easily transported—by pipeline, rail, or truck—and it had a huge and growing market in the burgeoning fleet of trucks and automobiles. It could also be used for manufacturing, lubrication, heating, cooking, and lighting. Oil was relatively easy to handle. If a well blew out, the petroleum could be diverted into a hastily dug pit. Longer-term storage could be handled with relatively inexpensive wooden tanks, which would suffice until the oil could be transported.

  Gas, on the other hand, was difficult to transport and had a nasty habit of exploding. The fuel could only be profitably moved by pipeline, and other than heating and municipal lighting systems—which required good pipelines—it had few apparent uses. That meant the fuel was nearly worthless to the oil companies—and in the early days of the industry, that’s what they were, oil companies. Few companies targeted natural gas when drilling. Instead, they were forced to deal with the natural gas that was mixed with the oil. In the industry, that type of gas is known as “associated gas.” And given that it was mixed with oil and was hard to get rid of, it was viewed as more of a problem than an asset. Thus, during the first few decades of the U.S. oil industry, there was no “gas industry” to speak of. The idea of actually drilling for natural gas on purpose—or, in industry parlance, looking for “unassociated gas”—was rare.

  A few entrepreneurs did see value in natural gas. In 1891, America’s first significant high-pressure gas pipeline was built by Indiana Gas and Oil to carry natural gas from a field in Indiana to customers in Chicago, a distance of about 120 miles. However, the pipe was a leaky mess, and by 1907, it was shut down. Two decades later, there were only a handful of natural gas pipelines in the entire country, and many of them were plagued by leaks and inefficient operations. It wasn’t until the early 1920s—when manufacturers began making significant quantities of seamless steel pipe, which could be joined using new technologies such as oxyacetylene and electric welding—that pipeline builders were able to string together longer sections of high-strength steel pipe.2 But even with those technologies, by the mid-1920s the longest gas pipeline in the United States was only about 300 miles long.

  By 1930, when oil was selling for about $1 per barrel, natural gas was selling in some locales for as little as $0.03 per thousand cubic feet—if a market for the gas was even available.3 With scant pipeline capacity and little monetary interest in selling the gas, most producers looked for ways to simply get rid of the fuel. During the 1940s, gas was so abundant in the areas around Amarillo, Texas, that the city offered companies free natural gas for five years if they agreed to set up shop in the city and employ at least fifty people.4 The city got no takers. With few uses for the gas, huge quantities of the fuel were simply vented into the atmosphere. That worked, but it also came with the risk of explosion. (Furthermore, we now know that methane is a greenhouse gas that is about twenty times more effective at trapping heat in the atmosphere than carbon dioxide.)

  It was both safer and better, from an environmental standpoint, to burn the gas near the wellhead. Companies with too much gas on their hands put up tall pipes near their producing wells and set the gas aflame. During the 1930s and 1940s in Texas, gas flares were so common, and so bright, that in parts of the Lone Star State motorists could drive for hours at night without ever needing to flip on their headlights. According to historian David Prindle, “Miles away from any major oil field, newspapers could be read at night by the light” from the flares. In 1934 alone, about 1 billion cubic feet of gas per day were being flared in the Texas Panhandle.5

  Flaring might have continued but for the intervention of the Texas Railroad Commission, the Austin-based agency that from the 1930s through 1973 determined production levels for key U.S. oil fields, which had the effect of largely determining global oil prices. In 1947, after years of wrangling with the industry, the agency passed a rule that prohibited the flaring of gas. The agency determined that gas was a valuable natural resource and therefore should be conserved.6 Some producers fought the rule, but the courts sided with the agency. The Railroad Commission’s move forced Texas producers to either reinject the natural gas into their wells—a move that helped to maintain the oil reservoir’s natural pressure, and therefore its long-term productivity—or put it into a pipeline.7 The ban on flaring was later adopted by other oil-producing states in the United States, and the domestic gas industry began to mature.

  Between 1949 and 1957, U.S. gas consumption doubled, and in 1958, gas surpassed coal to become the second-largest source of primary energy in the country.8 By the late 1950s, gas looked ready to rob even greater market share away from coal. But just as that energy transition was beginning, natural gas became a favored target for federal regulators. And the hodgepodge of regulations that resulted would hamstring the U.S. gas industry for decades. The result was predictable: U.S. gas consumption fell, coal consumption rose, and along with increased coal use came increased emissions of carbon dioxide.

  The turning point was a case known as Phillips Petroleum Co. v. Wisconsin. In 1954, just four years before natural gas eclipsed coal as the second-biggest source of primary energy in the United States, the U.S. Supreme Court issued a ruling in the Phillips case that gave federal authorities the power to set prices on gas that was sold into the interstate market. Federal regulators were empowered to set prices based on the gas producers’ costs, plus a “fair” profit.9 But the prices deemed “fair” by federal regulators were often uneconomic for gas producers. The result was a downward spiral in the amount of gas destined for sale across state lines. Gas producers could sell their production to customers inside their own state (intrastate) for a mutually agreed-on price. But if those same producers—say, from gas-rich states such as Oklahoma or Texas—wanted to sell their gas to a pipeline for delivery to customers in, say, Ohio or Illinois, then that interstate gas was subject to federal price controls. In short, the price controls effectively prevented producers from sending their gas to the most lucrative markets, and that, predictably, slowed the growth of the U.S. gas industry.

  By 1971, a shortage of natural gas was looming. That year, Monty Hoyt, a reporter for the Christian Science Monitor, summed up the situation: “The wellhead price for interstate natural gas has been kept artificially low for more than a decade” by federal regulators, and “this has stimulated demand for the ‘Mr. Clean’ of fuels, while at the same time discouraging exploration and drilling of new wells. Reserves have dwindled as a result.” He went on to say that “cheap natural gas has caused utilities and major industries to switch to gas-burning boilers,” a move that depressed demand for other boiler fuels such as coal and oil.10

  In January 1973, the Washington Post reported that the United States was in the midst of a “chronic shortage of natural gas,” and when a cold snap hit the nation that month, energy shortages were widespread. Schools and factories in eleven states, from Colorado to Ohio, were forced to close. Homeowners in several states, including Michigan, Minnesota, and Illinois, were running short of natural gas. The havoc wasn’t limited to natural gas. Price controls imposed by the Nixon administration on refined oil products led to widespread shortages.11

  But despite the recurring shortages, some of the energy industry’s loudest critics simply couldn’t fathom the concept that government price controls might be the problem rather than the solution. In 1974, S. David Freeman, a lawyer who later became the head of the Tennessee Valley Authority, wrote an opinion piece for the Los Angeles Times, in which he insisted that price controls on natural gas were essential and must be continued. His solution to America’s energy crisis was more governmental intervention to the point where the government, he said, would “provide everyone with a smaller ration of gasoline.”12

  Fortunately, Freeman’s grand plan for rationing motor fuel didn’t gain any traction. But his belief in the need to keep energy scarce and expensive gained plenty of traction in California. About seven months after Freeman wrote his piece for the Los Angeles Times, the paper’s editorial board published its own piece, arguing that natural gas prices should be deregulated—but only for “new gas.” That is, producers of any gas discovered from 1975 onward should be allowed to sell it “in a fundamentally free market.” But existing gas supplies would continue to be sold at prices set by federal regulators. The Times went on to contend that a “windfall profits tax” should be imposed on the gas producers who “failed to plow most of the profits back into the hunt for new supplies.” The editors concluded their May 26, 1975, opinion piece by declaring that “the choice is not between cheap and expensive natural gas, because there is no such thing as a plentiful supply of cheap gas.”13

  That belief—that there’s no such thing as a plentiful supply of cheap gas—continued to be the dominant mindset throughout the mid- to late 1970s. And that mindset assured that the federal price controls stayed in place. In January 1977, when the United States was hit by another blast of nasty winter weather, natural gas shortages came roaring back. The shortages were so severe that a utility in Buffalo, New York, asked its business, industrial, and school customers to shut down for two days and suggested that residential customers turn their thermostats down to 55 degrees. In Ohio, gas utilities were forced to cut off supplies to 4,500 industrial customers, and automobile plants in Michigan and Ohio were closed, putting 56,000 people out of work.14 By late January, some 300,000 workers had been laid off as a result of the shortages of natural gas. The crisis was so acute that on January 26, 1977, President Jimmy Carter, who’d just been sworn into office about a week earlier, asked Congress for emergency legislation that would temporarily lift the federal price controls on interstate gas shipments.15

  Throughout the late 1970s, ideologues such as Freeman and other Carter-era energy bureaucrats were convinced that there was a geological shortage of natural gas and therefore the United States had to embark on an alternative path away from the fuel. In 1977, John O’Leary, the administrator of the Federal Energy Administration, told Congress that “it must be assumed that domestic natural gas supplies will continue to decline” and that the United States should “convert to other fuels just as rapidly as we can.” That same year, Gordon Zareski of the Federal Power Commission testified before Congress and stated that U.S. policies “should be based on the expectation of decreasing gas availability.” He went on to say that annual production of natural gas would “continue to decline, even assuming successful exploration and development of the frontier areas.”16

  What O’Leary and Zareski didn’t tell Congress was that within gas-rich states such as Texas, there was a surfeit of gas. Why? The answer was simple: Gas producers could get good prices for their product by keeping it within the state’s borders. In mid-1973, when federal interstate prices were about $0.25 per thousand cubic feet, some Texas producers were selling their production intrastate for as much as $0.82.17 By 1978, Texas had so much gas in its intrastate market that the Texas Railroad Commission began acting as a type of referee for the intrastate sales so that it could more closely match output with demand.18 The havoc caused by federal meddling in the interstate gas market was recognized by energy-industry historian Ruth Sheldon Knowles, who, in a 1977 opinion piece for the Los Angeles Times, wrote that “the nation’s natural gas shortage was created by interstate controls. By contrast, the free-market approach has actually increased supplies for intrastate use.” The reason federal price controls on interstate gas persisted was that “keeping the price artificially low—that is, below its true market value—has been politically popular.”19

  The mistaken belief that the United States was running short of natural gas was politically advantageous for the coal producers, who were eager to limit competition from the oil and gas business. And those coal producers had powerful friends in Congress, including, but not limited to, Senator Robert Byrd, the powerful Democrat from West Virginia.

  In 1978, Byrd and his allies convinced Congress to pass two bills that would haunt the gas industry for years: the Powerplant and Industrial Fuel Use Act and the Natural Gas Policy Act. The most important provision of the Fuel Use Act was that it prohibited the use of gas for electricity generation. Meanwhile, the Natural Gas Policy Act created a briar patch of categories for gas pricing based on whether it was sold in interstate or intrastate commerce, what type of wells were involved, and even how deep the wells were.20 In the wake of the legislation, gas consumption plummeted. By 1986, natural gas consumption had fallen to about 44 billion cubic feet per day—a level not seen in the United States since 1965.21

  In 1987, Congress finally reversed course and repealed the Powerplant and Industrial Fuel Use Act. Although the law was in effect for less than a decade, it did plenty of damage. The ban on the use of natural gas for power generation led many electric utilities—which were seeing booming demand for electricity—to build more coal plants.22 In 1978, natural gas was generating 13.8 percent of U.S. electricity. By 1988—a decade after the Powerplant and Industrial Fuel Use Act was passed—natural gas’s share of the U.S. electricity business had fallen to a modern low of just 9.3 percent.

  The winner of all this federal intervention was coal. Between 1973 and 2008, coal’s share of the primary energy market jumped from 18 percent to 25 percent.23 Much of that coal was used for electricity generation. Between 1978—the year that Congress passed the Powerplant and Industrial Fuel Use Act—and 1988, coal’s share of the U.S. electricity generation market soared, going from 44.2 percent to 56.9 percent, the highest level of the modern era.24

  The irony here is almost too great. Today, Congress is working mightily to impose a cap-and-trade system or some similar plan aimed at reducing carbon dioxide emissions, and in particular to reduce coal consumption. Had the natural gas sector that was booming back in the 1950s been allowed to continue without excessive federal regulation, U.S. carbon emissions would undoubtedly be lower today, because more gas would be employed for electricity generation and other uses. That would have meant less coal consumption, and therefore lower carbon emissions and less coal-related pollution of mercury, soot, and sulfur dioxide.
