Essays. FSF Columns


by Bruce Sterling


  Ancient Assyria also used cryptography, including the unique and curious custom of “funerary cryptography.” Assyrian tombs sometimes featured odd sets of cryptographic cuneiform symbols. The Assyrian passerby, puzzling out the import of the text, would mutter the syllables aloud, and find himself accidentally uttering a blessing for the dead. Funerary cryptography was a way to steal a prayer from passing strangers.

  Julius Caesar lent his name to the famous “Caesar cypher,” which he used to secure Roman military and political communications.
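
  The Caesar cypher simply shifts each letter of the alphabet a fixed number of places — three, by tradition. A few lines of Python (a modern sketch, not anything Caesar would have recognized) capture the whole scheme:

```python
# A toy Caesar cipher: shift each letter a fixed number of places,
# wrapping around the alphabet. Decryption is the same shift reversed.
def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)  # -> "DWWDFN DW GDZQ"
plaintext = caesar(ciphertext, -3)        # shifting back recovers the message
```

With only twenty-five possible shifts, the cypher falls to simple trial and error — which is why the science moved on.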

  Modern cryptographic science is deeply entangled with the science of computing. In 1949, Claude Shannon, the pioneer of information theory, gave cryptography its theoretical foundation by establishing the “entropy” of a message and a formal measurement for the “amount of information” encoded in any stream of digital bits. Shannon’s theories brought new power and sophistication to the codebreaker’s historic efforts. After Shannon, digital machinery could pore tirelessly and repeatedly over the stream of encrypted gibberish, looking for repetitions, structures, coincidences, any slight variation from the random that could serve as a weak point for attack.
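
  Shannon's measure is concrete enough to compute in a few lines. The Python sketch below tallies byte frequencies and sums -p·log2(p): a stream of pure repetition scores zero bits per byte, while well-scrambled data approaches the maximum of eight — exactly the "slight variation from the random" a codebreaker's machine hunts for:

```python
import math
from collections import Counter

# Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i)).
# Low entropy betrays structure an attacker can exploit; good
# ciphertext should score close to 8 bits per byte.
def entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

assert entropy(b"AAAAAAAA") == 0.0                    # perfectly predictable
assert abs(entropy(bytes(range(256))) - 8.0) < 1e-9   # indistinguishable from random
```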

  Computer pioneer Alan Turing, mathematician and proponent of the famous “Turing Test” for artificial intelligence, was a British cryptographer in the 1940s. In World War II, Turing and his colleagues in espionage used electronic machinery to defeat the elaborate mechanical wheels and gearing of the German Enigma code-machine. Britain’s secret triumph over Nazi communication security had a very great deal to do with the eventual military triumph of the Allies. Britain’s codebreaking triumph further assured that cryptography would remain a state secret and one of the most jealously guarded of all sciences.

  After World War II, cryptography became, and has remained, one of the crown jewels of the American national security establishment. In the United States, the science of cryptography became the high-tech demesne of the National Security Agency (NSA), an extremely secretive bureaucracy that President Truman founded by executive order in 1952, one of the chilliest years of the Cold War.

  Very little can be said with surety about the NSA. The very existence of the organization was not publicly confirmed until 1962. The first appearance of an NSA director before Congress was in 1975. The NSA is said to be based in Fort Meade, Maryland. It is said to have a budget much larger than that of the CIA, but this is impossible to determine since the budget of the NSA has never been a matter of public record. The NSA is said to be the largest single employer of mathematicians in the world. The NSA is estimated to have about 40,000 employees. The acronym NSA is aptly said to stand for “Never Say Anything.”

  The NSA almost never says anything publicly. However, the NSA’s primary role in the shadow-world of electronic espionage is to protect the communications of the US government, and crack those of the US government’s real, imagined, or potential adversaries. Since this list of possible adversaries includes practically everyone, the NSA is determined to defeat every conceivable cryptographic technique. In pursuit of their institutional goal, the NSA labors (in utter secrecy) to crack codes and cyphers and invent its own less breakable ones.

  The NSA also tries hard to retard civilian progress in the science of cryptography outside its own walls. The NSA can suppress cryptographic inventions through the little-known but often-used Invention Secrecy Act of 1952, which allows the Commissioner of Patents and Trademarks to withhold patents on certain new inventions and to order that those inventions be kept secret indefinitely, “as the national interest requires.” The NSA also seeks to control dissemination of information about cryptography, and to control and shape the flow and direction of civilian scientific research in the field.

  Cryptographic devices are formally defined as “munitions” by Title 22 of the United States Code, and are subject to the same import and export restrictions as arms, ammunition and other instruments of warfare. Violation of the International Traffic in Arms Regulations (ITAR) is a criminal affair investigated and administered by the Department of State. It is said that the Department of State relies heavily on NSA expert advice in determining when to investigate and/or criminally prosecute illicit cryptography cases (though this too is impossible to prove).

  The “munitions” classification for cryptographic devices applies not only to physical devices such as telephone scramblers, but also to “related technical data” such as software and mathematical encryption algorithms. This specifically includes scientific “information” that can be “exported” in all manner of ways, including simply verbally discussing cryptography techniques out loud. One does not have to go overseas and set up shop to be regarded by the Department of State as a criminal international arms trafficker. The security ban specifically covers disclosing such information to any foreign national anywhere, including within the borders of the United States.

  These ITAR restrictions have come into increasingly harsh conflict with the modern realities of global economics and everyday real life in the sciences and academia. Over a third of the grad students in computer science on American campuses are foreign nationals. Strictly applied ITAR regulations would prevent communication on cryptography, inside an American campus, between faculty and students. Most scientific journals have at least a few foreign subscribers, so an exclusively “domestic” publication about cryptography is also practically impossible. Even writing the data down on a cocktail napkin could be hazardous: the world is full of photocopiers, modems and fax machines, all of them potentially linked to satellites and undersea fiber-optic cables.

  In the 1970s and 1980s, the NSA used its surreptitious influence at the National Science Foundation to shape scientific research on cryptography through restricting grants to mathematicians. Scientists reacted mulishly, so in 1978 the Public Cryptography Study Group was founded as an interface between mathematical scientists in civilian life and the cryptographic security establishment. This Group established a series of “voluntary control” measures, the upshot being that papers by civilian researchers would be vetted by the NSA well before any publication.

  This was one of the oddest situations in the entire scientific enterprise, but the situation was tolerated for years. Most US civilian cryptographers felt, through patriotic conviction, that it was in the best interests of the United States if the NSA remained far ahead of the curve in cryptographic science. After all, were some other national government’s electronic spies to become more advanced than those of the NSA, then American government and military transmissions would be cracked and penetrated. World War II had proven that the consequences of a defeat in the cryptographic arms race could be very dire indeed for the loser.

  So the “voluntary restraint” measures worked well for over a decade. Few mathematicians were so enamored of the doctrine of academic freedom that they were prepared to fight the National Security Agency over their supposed right to invent codes that could baffle the US government. In any case, the mathematical cryptography community was a small group without much real political clout, while the NSA was a vast, powerful, well-financed agency unaccountable to the American public, and reputed to possess many deeply shadowed avenues of influence in the corridors of power.

  However, as the years rolled on, the electronic exchange of information became a commonplace, and users of computer data became intensely aware of the need for electronic security over transmissions and data. One answer was physical security — protect the wiring, keep the physical computers behind a physical lock and key. But as personal computers spread and computer networking grew ever more sophisticated, widespread and complex, this bar-the-door technique became unworkable.

  The volume and importance of information transferred over the Internet was increasing by orders of magnitude. But the Internet was a notoriously leaky channel of information — its packet-switching technology meant that packets of vital information might be dumped into the machines of unknown parties at almost any time. If the Internet itself could not be locked up and made leakproof — and this was impossible by the nature of the system — then the only secure solution was to encrypt the message itself, to make that message unusable and unreadable, even if it sometimes fell into improper hands.

  Computers outside the Internet were also at risk. Corporate computers faced the threat of computer-intrusion hacking, from bored and reckless teenagers, or from professional snoops and unethical business rivals both inside and outside the company. Electronic espionage, especially industrial espionage, was intensifying. The French secret services were especially bold in this regard, as American computer and aircraft executives found to their dismay as their laptops went missing during Paris air and trade shows. Transatlantic commercial phone calls were routinely tapped by French government spooks seeking commercial advantage for French companies in the computer industry, aviation, and the arms trade. And the French were far from alone when it came to government-supported industrial espionage.

  Protection of private civilian data from foreign government spies required that seriously powerful encryption techniques be placed into private hands. Unfortunately, an ability to baffle French spies also means an ability to baffle American spies. This was not good news for the NSA.

  By 1993, encryption had become big business. There were one and a half million copies of legal encryption software publicly available, including widely-known and commonly-used personal computer products such as Norton Utilities, Lotus Notes, StuffIt, and several Microsoft products. People all over the world, in every walk of life, were using computer encryption as a matter of course. They were securing hard disks from spies or thieves, protecting certain sections of the family computer from sticky-fingered children, or rendering entire laptops and portables into a solid mess of powerfully-encrypted Sanskrit, so that no stranger could walk off with those accidental but highly personal life-histories that are stored in almost every PowerBook.

  People were no longer afraid of encryption. Encryption was no longer secret, obscure, and arcane; encryption was a business tool. Computer users wanted more encryption, faster, sleeker, more advanced, and better.

  The real wild-card in the mix, however, was the new cryptography. A new technique arose in the 1970s: public-key cryptography. This was an element the codemasters of World War II and the Cold War had never foreseen.

  Public-key cryptography was invented by American civilian researchers Whitfield Diffie and Martin Hellman, who first published their results in 1976.

  Conventional classical cryptographic systems, from the Caesar cipher to the Nazi Enigma machine defeated by Alan Turing, require a single key. The sender of the message uses that key to turn his plain text message into cyphertext gibberish. He shares the key secretly with the recipients of the message, who use that same key to turn the cyphertext back into readable plain text.

  This is a simple scheme; but if the key is lost to unfriendly forces such as the ingenious Alan Turing, then all is lost. The key must therefore always remain hidden, and it must always be fiercely protected from enemy cryptanalysts. Unfortunately, the more widely that key is distributed, the more likely it is that some user in on the secret will crack or fink. As an additional burden, the key cannot be sent by the same channel as the communications are sent, since the key itself might be picked up by eavesdroppers.

  In the new public-key cryptography, however, there are two keys. The first is a key for writing secret text, the second the key for reading that text. The keys are related to one another through a complex mathematical dependency; they determine one another, but it is mathematically extremely difficult to deduce one key from the other.

  The user simply gives away the first key, the “public key,” to all and sundry. The public key can even be printed on a business card, or given away in mail or in a public electronic message. Now anyone in the public, any random personage who has the proper (not secret, easily available) cryptographic software, can use that public key to send the user a cyphertext message. However, that message can only be read by using the second key — the private key, which the user always keeps safely in his own possession.

  Obviously, if the private key is lost, all is lost. But only one person knows that private key. That private key is generated in the user’s home computer, and is never revealed to anyone but the very person who created it.

  To reply to a message, one has to use the public key of the other party. This means that a conversation between two people requires four keys. Before computers, all this key-juggling would have been rather unwieldy, but with computers, the chips and software do all the necessary drudgework and number-crunching.
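
  The drudgework is ordinary modular arithmetic. The Python sketch below uses deliberately tiny RSA-style numbers — hypothetical values, thousands of times too small to be secure — so that the two-key machinery is visible: the public key encrypts, only the private key decrypts, and a two-way conversation does indeed juggle four keys:

```python
# Toy public-key scheme in the RSA style. The primes here are tiny,
# chosen for illustration only; real keys use primes hundreds of
# digits long.
def make_keypair(p: int, q: int, e: int):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)
    return (n, e), d                   # (public key, private key)

alice_pub, alice_priv = make_keypair(61, 53, 17)
bob_pub, bob_priv = make_keypair(89, 97, 17)

# Bob encrypts a message (represented as a number) with Alice's
# PUBLIC key; only Alice's PRIVATE key can recover it.
m = 42
c = pow(m, alice_pub[1], alice_pub[0])
assert pow(c, alice_priv, alice_pub[0]) == m

# Alice replies using Bob's public key -- four keys in all.
reply = pow(7, bob_pub[1], bob_pub[0])
assert pow(reply, bob_priv, bob_pub[0]) == 7
```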

  The public/private dual keys have an interesting alternate application. Instead of the public key, one can use one’s private key to encrypt a message. That message can then be read by anyone with the public key, i.e., pretty much everybody, so it is no longer a “secret” message at all. However, that message, even though it is no longer secret, now has a very valuable property: it is authentic. Only the individual holder of the private key could have sent that message.

  This authentication power is a crucial aspect of the new cryptography, and may prove to be more socially important than secrecy. Authenticity means that electronic promises can be made, electronic proofs can be established, electronic contracts can be signed, electronic documents can be made tamperproof. Electronic impostors and fraudsters can be foiled and defeated — and it is possible for someone you have never seen, and will never see, to prove his bona fides through entirely electronic means.
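
  In Python, with toy-sized RSA values (hypothetical numbers, far too small for real use), signing is just the key roles reversed — the private exponent produces the signature, and anyone holding the public key can check it:

```python
# Toy RSA signature. n and e are the signer's public values; d is the
# private exponent. Real signatures hash the message first and use
# far larger keys; these numbers are illustrative only.
n, e, d = 3233, 17, 2753

message = 99
signature = pow(message, d, n)          # only the private-key holder can produce this
assert pow(signature, e, n) == message  # anyone can verify with (n, e)

# Tampering with the signature breaks verification:
forged = (signature + 1) % n
assert pow(forged, e, n) != message
```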

  That means that economic relations can become electronic. Theoretically, it means that digital cash is possible — that electronic mail, e-mail, can be joined by a strange and powerful new cousin, electronic cash, e-money.

  Money that is made out of text — encrypted text. At first consideration such money doesn’t seem possible, since it is so far outside our normal experience. But look at this:

  [ASCII picture of a US dollar bill]

  This parody US banknote made of mere letters and numbers is being circulated in e-mail as an in-joke in network circles. But electronic money, once established, would be no more a joke than any other kind of money. Imagine that you could store a text in your computer and send it to a recipient; and that once gone, it would be gone from your computer forever, and registered infallibly in his. With the proper use of the new encryption and authentication, this is actually possible. Odder yet, it is possible to make the note itself an authentic, usable, fungible, transferrable note of genuine economic value, without the identity of its temporary owner ever being made known to anyone. This would be electronic cash — like normal cash, anonymous — but unlike normal cash, lightning-fast and global in reach.

  There is already a great deal of electronic funds transfer occurring in the modern world, everything from gigantic currency-exchange clearinghouses to the individual’s VISA and MASTERCARD bills. However, charge-card funds are not so much “money” per se as a purchase via proof of personal identity. Merchants are willing to take VISA and MASTERCARD payments because they know that they can physically find the owner in short order and, if necessary, force him to pay up in a more conventional fashion. The VISA and MASTERCARD user is considered a good risk because his identity and credit history are known.

  VISA and MASTERCARD also have the power to accumulate potentially damaging information about the commercial habits of individuals, for instance, the video stores one patronizes, the bookstores one frequents, the restaurants one dines in, or one’s travel habits and one’s choice of company.

  Digital cash could be very different. With proper protection from the new cryptography, even the world’s most powerful governments would be unable to find the owner and user of digital cash. That cash would be secured by a “bank” (it needn’t be a conventional, legally established bank) through the use of an encrypted digital signature from the bank, a signature that neither the payer nor the payee could break.

  The bank could register the transaction. The bank would know that the payer had spent the e-money, and the bank could prove that the money had been spent once and only once. But the bank would not know that the payee had gained the money spent by the payer. The bank could track the electronic funds themselves, but not their location or their ownership. The bank would guarantee the worth of the digital cash, but the bank would have no way to tie the transactions together.
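
  The trick that makes this possible is the “blind signature,” devised by cryptographer David Chaum. The customer multiplies the banknote by a secret blinding factor before handing it to the bank; the bank signs the blinded blob; the customer divides the factor back out and is left with a valid bank signature on a note the bank never saw. A Python sketch with toy RSA numbers (hypothetical values, nothing like secure sizes):

```python
import math

# Bank's toy RSA keypair: (n, e) public, d private. Illustrative only.
n, e, d = 3233, 17, 2753

note = 1234   # serial number of the customer's banknote
r = 7         # customer's secret blinding factor, coprime to n
assert math.gcd(r, n) == 1

blinded = (note * pow(r, e, n)) % n          # customer blinds the note
blind_sig = pow(blinded, d, n)               # bank signs without seeing it
signature = (blind_sig * pow(r, -1, n)) % n  # customer unblinds the signature

# The result verifies as a genuine bank signature on the original note:
assert pow(signature, e, n) == note
```

The bank can record that it signed *something* — and refuse to honor the same note twice — without ever learning which note, or whose.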

  The potential therefore exists for a new form of network economics made of nothing but ones and zeroes, placed beyond anyone’s controls by the very laws of mathematics. Whether this will actually happen is anyone’s guess. It seems likely that if it did happen, it would prove extremely difficult to stop.

  Public-key cryptography uses prime numbers. It is a swift and simple matter to multiply prime numbers together and obtain a result, but it is an exceedingly difficult matter to take a large number and determine the prime numbers used to produce it. The RSA algorithm, the commonest and best-tested method in public-key cryptography, starts from two large prime numbers (“p” and “q”), each hundreds of bits long. From them, two further numbers (“d” and “e”) are derived so that (de-1) is divisible by (p-1) times (q-1). The product of p and q, together with “e”, forms the public key; “d” is the private key. The primes are easy to multiply together, but their product is extremely difficult to pull apart mathematically, and without those primes there is no practical way to recover the private key.
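
  The asymmetry can be felt even at toy scale. In the Python sketch below (illustrative numbers only), multiplying two five-digit primes is a single operation, while undoing it by brute-force trial division takes on the order of a hundred thousand steps — and the workload explodes as the primes grow:

```python
# Brute-force factorization by trial division. Fine for toy numbers;
# hopeless at real RSA sizes, which is the whole point of the scheme.
def factor(n: int):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n itself is prime

p, q = 104729, 104723        # two five-digit primes
n = p * q                    # one multiplication: instant
assert factor(n) == (104723, 104729)  # ~100,000 divisions to undo it
```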

  To date, there has been no way to mathematically prove that it is inherently difficult to crack this prime-number cipher. It might be very easy to do if one knew the proper advanced mathematical technique for it, and the clumsy brute-power techniques for prime-number factorization have been improving in past years. However, mathematicians have been working steadily on prime number factorization problems for many centuries, with few dramatic advances. An advance that could shatter the RSA algorithm would mean an explosive breakthrough across a broad front of mathematical science. This seems intuitively unlikely, so prime-number public keys seem safe and secure for the time being — as safe and secure as any other form of cryptography short of “the onetime pad.” (The onetime pad is a truly unbreakable cypher. Unfortunately it requires a key that is every bit as long as the message, and that key can only be used once. The onetime pad is solid as Gibraltar, but it is not much practical use.)
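
  The onetime pad itself fits in a few lines of Python: XOR the message with a truly random key of equal length, and XOR again to decrypt. The arithmetic is trivial — the impracticality lies entirely in generating, sharing, and destroying keys as long as the traffic itself:

```python
import secrets

# One-time pad: XOR with a random key as long as the message.
# Information-theoretically unbreakable, provided the key is truly
# random, kept secret, and never reused.
def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data)
    return bytes(a ^ b for a, b in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # used once, then destroyed
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message   # the same operation decrypts
```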

 
