So long as the nation-state dominated public affairs, it was inconceivable that states would willingly lose control of their national telecommunication industries. If it had been proposed in 1945 that the U.S. Bell System should be dismantled, objections on grounds of national security and law enforcement would almost certainly have trumped efficiency concerns. Today, the desire to bring better service at a lower cost to consumers has made the security and law enforcement arguments sound antiquated. In fact the distinctions between local and long-distance service and between wireline and wireless providers are beginning to disappear. All aspects of the public switched telephone network have now been opened to competition. In the traditionally monopolistic local markets, local exchange carriers have been required to allow alternative access providers to interconnect with them. The U.S. Telecommunications Act of 1996 cleared the way for cable television operators to offer telephone and other services over their cable systems. As the structure of the industry changes, fewer services will be delivered wholly by a single provider; more often services will involve interconnection and interworking among several providers, which will inevitably mean greater reliance on what I have called the superinfrastructure. As in banking, the industry will consolidate. The Pacific Telesis–SBC and NYNEX–Bell Atlantic mergers, completed in 1997, reflected this trend. Indeed future mergers will be international in scope, as was presaged by British Telecom's attempted takeover of MCI. Thus we will see a more diverse and decentralized system that is, at the same time, far more dependent on a smaller number of electronic gateways.
Banking and finance, having remained essentially unchanged since the Second World War, are being revolutionized through access to the superinfrastructure, interacting with a political environment that has radically changed regulatory policy. Until the 1980s, the financial services infrastructure of most countries was primarily the product of states that prohibited these institutions from entering specific lines of business, limited the ownership of various types of firms, and prevented banks from operating on an international or even national level. In some states, such as Japan and Germany, many of these constraints were still largely in force at the century's end. Once deregulation occurs, however, financial institutions need advanced telecommunications to remain competitive in the new environment. In the twenty-first century, the infrastructure of national banking and financial services will become heavily dependent on computer-controlled systems and the telecommunications systems that link them together to move instruments of value through the economy. Payment systems, perhaps the most crucial sector in banking and financial operations, rely on a small number of networked information systems to track, finalize, and account for transactions. Practically all communications in the industry use leased terrestrial circuits, and it is anticipated that the trading markets, electronic funds transfer, and other financial functions will migrate to shared networks like the Internet that are more cost-effective. The use of electronic cash is quickly increasing, with a significant impact on the volume and value of transactions flowing through electronic funds transfer systems. Visa and MasterCard are international systems of credit and debit that would be impossible without this electronic linkage. In the five years from 1990 to 1995 the share of cash in all transactions decreased 5 percent, and this trend is accelerating. The number of banks is expected to continue to decline. Many financial institutions are outsourcing activities, allowing them to focus on core business functions and reduce overhead. The result, however, is to concentrate back-office financial functions in a handful of third-party providers connected by the superinfrastructure, so that disruption of one major outsourcing provider would affect multiple companies.
Similarly, for nearly sixty years the electric power industry reflected a well-defined pattern of mutually exclusive regulated monopolies, each serving customers in its discrete area. Utilities in the United States and Britain must now unbundle generation, transmission, and other services, enabling rivals to lease lines to send power to their customers. Companies must post data on transmission availability and rates on the Internet. Moreover, in the past, steam-driven generators were the norm, whether fueled by coal or nuclear power. Now aero-derivative gas turbines generate power more cheaply, use less fuel, and are cleaner. With the new technology, power companies can achieve comparable output with plants one-tenth the size. This means that new, smaller companies can enter the market, increasing competition and penalizing older utilities with high sunk costs in outmoded plant and equipment. Today, telecommunications networks hook up to the giant Interconnects that are the islands of the electrical power infrastructure for the developed world. Electrical power generation, transmission, and distribution are largely controlled by a multitude of automated systems that monitor, report on, and in part control the flow of energy throughout these systems. Yet as more players enter the field, the SCADA—supervisory control and data acquisition—systems that manage the flow of energy are becoming more numerous. These standardized, automated systems are linked to control centers that are linked in turn to management systems responding to the increasingly competitive business environment. Thus we have the paradox of more open competition, and therefore more competitors, accompanied by greater centralization and dependence upon the superinfrastructure.
The pipelines that carry oil and gas, like the energy transmission lines, are also controlled by SCADA systems that rely on standardized, automated mechanisms as a way of meeting the pressures of intensified competition. In 2000 these systems controlled much of the 22,000 miles of oil pipelines and 1.2 million miles of gas pipelines, regulating the flow of oil and gas through an array of pumps, vents, valves, and storage facilities throughout the pipeline system. Here as elsewhere, the efforts toward standardization and the establishment of common protocols are driven by the high cost of maintaining multiple kinds of protocols, computer hardware, and software. Many infrastructure entities look to the day when virtually all of their operations will run on networks of large computers using standard communications software throughout.
The difficulty posed by these infrastructure developments is that a cyberattack on that structure can now be launched from anywhere on the globe and can have an impact that is compounded by the interconnectivity among essential elements of the infrastructure. The rapidly growing dependence on information that is sweeping the infrastructure is accompanied by mutual dependency among critical nodes, a dramatic reduction in their number, and a general standardization. These developments are largely responsible for the increase in wealth that the new deployment of information has brought about; unavoidably they have also created a situation of very high risk should that information be tampered with or interdicted. The use of information technology has grown from an option that enhances efficiency into a necessity without which many parts of the infrastructure cannot function.
The critical superinfrastructure provides the link between the processes of quite different organizations and thus, if compromised, has the ability to create a cascading effect, multiplying destruction exponentially. Thus, for example, a national outage of the U.S. public switched network (PSN) would not only bring almost all local service and all long-distance telephone service in North America to a halt; it would also disrupt Internet communications and cut off essential services such as air traffic control, banking and financial transactions, and even the emergency response to deal with the crisis caused by this outage.
Who would mount such an attack? Unlike conventional warfare, this type of operation would offer little strategic warning and few indications of an imminent assault. Physical attacks would be carried out by small, highly mobile units, while individuals equipped with laptop computers could launch attacks from any point on the global network. This form of warfare would be inexpensive, putting it within reach of most groups and most states. As in the world economy, the greatest asset in this conflict would be information: in this case, the information necessary to turn information technology against itself.
Where would such an attack on the critical superinfrastructure come from? It might be the result of a natural disaster, like an earthquake or flood, or of a simple accident at a critical node owing to design flaws, installation errors, or inadequate operation. Or it might be caused by an intentional act of terrorism, like the attacks on the World Trade Center that targeted both the American air traffic network and its financial services industry. The most insidious and conceivably the most damaging threat to cyber systems, however, is a cyber threat. Such threats are new, the product of the information age that gave rise to the superinfrastructure in the first place.
Cyber threats might arise from malicious insiders, from terrorists or military opponents, from organized crime, from hackers or competing industrial firms, or from the national intelligence or defense agencies of other countries.13 National intelligence agencies may wish to siphon off data or even to insert disinformation. In 1994, organized criminals operating from St. Petersburg, Russia, electronically robbed Citibank of some $10 million; it would be idle to suppose that criminal conspiracies will not explore the possibilities of falsifying criminal records, accounts, and other data stored electronically. Hackers are often students who penetrate government and private systems for the sheer thrill of beating the system. In an era of deep suspicion of the motives of governments and large corporations, the number of such persons will surely increase as the number of persons with computer expertise and experience increases.14 Insiders pose the most dangerous threat of all because they have detailed knowledge of the systems they attack and ready access to the target's own resources.
Even as economic competition drives globalization and a centralization of risks that amounts to placing very heavy bets on a few roulette numbers, this same competition provides a strong disincentive to private sector action to ensure information security. Steps that are sufficient from an economic point of view are not necessarily adequate from the viewpoint of national security and emergency preparedness, yet greater measures are more costly and therefore competitively penalize the company that undertakes them.
Nor is it yet entirely clear what government should do. There is no unified body of law devoted to critical infrastructure. Rather there are elaborate fiefdoms of regulation that have evolved in separate sectors seeking to ensure service, public safety, and competition. The government needs private partners to undertake the task of protecting the information superinfrastructure, yet these are the same corporations that are often reluctant even to report break-ins or breakdowns in their operations and that are deeply distrustful of joint operations with the government.
At bottom, this is a national security problem, but it is also a problem for international security, because the infrastructure we must protect is increasingly international. This fact is a by-product of a much desired goal of the market-state, the creation of a world economy. If states seek to expand the opportunities of every individual, this will necessarily lead to a globalization of the infrastructure. If a market-state attempted to interfere with this development in order to protect its national security—that is, the security of the national critical infrastructure—it would inevitably sacrifice the expansion of opportunity that is its purpose, and thus the very reason for which it claims the State must be kept secure in the first place.
The potency of particular threats to the State changes with each era. A modern army could be quickly suffocated if its logistical umbilical cord were severed by infrastructure attacks, while the mercenaries of the Thirty Years' War, who lived mainly by foraging, could have continued functioning. The reliance of modern armies on telecommunications and electronic computation* has created new and more valuable targets for cyberattack and weapons of mass destruction.
Yet the problem of attacks on critical infrastructure is in large part a private sector problem. If we bring to bear on this problem the strategic habits of the Long War—of the nation-state, that is—we may actually sacrifice the tort liability and corporate responsibility necessary for the innovative insurance and improved security practices that would arise from the private sector responding to economic incentives. If states (in a nation-state mentality) were to try to impose regulatory solutions, these might well be ineffective in any case: there will never be sufficient time or resources to write legal rules ahead of the imaginative cyber designer. Only experienced managers in the sectors themselves, acting daily and learning constantly, can stay ahead of this threat; regulations will always come too late.
These two aspects of the critical infrastructure problem—its private and international dimensions—are unwelcome to most states: internationalizing national security is only a little more distasteful than privatizing it. But there are really very few practical alternatives. Most of us are unlikely to be attackers in this new era, but we will probably all be defenders at one time or another. Cyber threats, in themselves, are poorly analogized to the wars of the past, which depended on violence for their essential character. Rather cyber threats are more like epidemiological threats, in which our ultimate security will lie in the good sense of private persons in many countries, cooperating through a central clearinghouse but assessing their own health and taking the appropriate measures to maintain it. To continue the metaphor, the U.S. Centers for Disease Control and Prevention (CDC), not the Pentagon, is the model the market-state should pursue in addressing this problem.
Nevertheless, the defense planners of many developed states have an important role to play. Their first step must be to free themselves from the habits they acquired planning for nuclear strategy (just as nuclear strategy fifty years ago required that they free themselves from the habits inculcated by theories of conventional bombing). We must learn to think in terms of vulnerabilities instead of threats; of mitigation instead of fortress defense; of reconstitution instead of retaliation. These changes in our ways of thinking are as crucial to dealing with the problem of critical infrastructure protection as are the technological aspects of the problem.
Vulnerability-based strategies against chemical/biological, nuclear, or cyberattacks will depend upon heterogeneity (the use of multiple means of protection and communication), reassessment (the use of dynamic systems that reallocate resources automatically), redundancy (which depends upon excess information), resilience (which depends upon excess capacity), integrity (which depends upon strong encryption), decentralization (which enables the quarantine of both persons and networks), and deception. None of these concepts is new to military planners, but they must now be applied in new, defensive modalities. Our current planning—which depends entirely on detecting a computer intrusion, monitoring it, and tracking down the attacker—is hopelessly ill-suited to our situation. Such retaliatory strategies surrender the initiative and permit the aggressor to soak up our resources at little more cost to him than the press of a key. Yet 90 percent of the proposed U.S. budget for 2000 in this arena was earmarked for intrusion detection and prevention. The developed market-states should instead be spending their resources on technologies that make the critical infrastructure more slippery, more difficult to damage, more quickly reconstituted, and, above all, more deceptive.
An historical analogy may also provide some help. At the beginning of the twentieth century, many industrial societies experienced unprecedented migration from rural to urban areas. In America this was augmented by large-scale immigration from Europe. One result was the construction of vast tracts of substandard housing in densely populated city areas. At about the same time the first modern housing codes were promulgated. These set minimal standards for building construction and emergency access. But the real work of protecting cities from fires was done by private insurance companies that required compliance with these codes as a condition for insurance (which was itself a condition for mortgage financing). The increased vulnerability of critical infrastructure has been brought about by the same sort of volcanic economic growth that the United States experienced early in the twentieth century. This vulnerability is also driven partly by consumer demand and partly by the familiar problem of single-actor transaction costs (which tend to jeopardize an entire neighborhood, for example, because the cost to any one actor of a fire does not justify the expense of organizing protection for all). A similar information security requirement, imposed as a condition of private insurance, could be useful in addressing the problem of critical infrastructure. Using the market in this way—because insurance is a globalized service—can internationalize a solution far more effectively (and more quickly) than a network of international treaties.
As with environmental threats imposed by a single irresponsible state on all others, it is entirely possible that a state linked by the Internet to all other states might threaten, however inadvertently, the critical infrastructure of the entire developed world. And as with global environmental threats, rules for timely intervention are needed. In their absence, we run the risk of introducing some of the classic and familiar causes of war that, when played across the dimension of constitutional change, make the strategic innovation of a cybernated infrastructure attack the kind of tinderbox that could ignite a war in the twenty-first century.
(6)
Owing to the development of public health measures, inoculations and vaccines, and modern antibiotics, the ancient practice of quarantine largely vanished from twentieth-century developed states. This coincided with the emergence in the United States of an omnicompetent nation-state government that replaced the more limited domestic authority of the state-nation. Thus even though potential federal authority over the states expanded, with respect to quarantine this authority lay fallow. Today we have the paradoxical situation that a threat to the entire nation by means of infectious agents in a terrorist war against the United States would confront a patchwork of state laws, only a few of which have been updated to apply to all diseases. The federal government is, statutorily, largely out of the picture, and the state laws are often antiquated. Many state laws require judicial approval in order to enforce a quarantine. Others restrict the authority of public health officials to share information about an individual's health status. Some states forbid the sharing of information among state agencies or even forbid informing another state of a health emergency. In October 2001 the CDC released a Model State Emergency Health Powers Act, a statute designed to give officials the power to act decisively in the event of a biological attack or the outbreak of an infectious epidemic.