This Machine Kills Secrets


by Andy Greenberg


  If the numbers on a one-time pad are truly random, and if it’s used just one time, that simple scheme is mathematically proven to be impossible to crack. But those are significant “ifs.” As early as 1942, for instance, U.S. intelligence found that the Soviets were carelessly reusing one-time pads for communication with different countries, and analyzed the multiple examples of the scrambled text to find patterns that allowed them to remove the pads’ random noise, breaking the ciphers.
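The arithmetic behind that failure is simple enough to sketch in a few lines of Python (an illustrative reconstruction, not code from the book): XOR a message with a truly random pad and the result is unbreakable, but XOR two messages with the same pad and the pad cancels out entirely.

```python
import secrets

def otp(message: bytes, pad: bytes) -> bytes:
    # XOR each message byte with the corresponding pad byte;
    # the same operation both encrypts and decrypts.
    assert len(pad) >= len(message), "pad must cover the whole message"
    return bytes(m ^ p for m, p in zip(message, pad))

pad = secrets.token_bytes(16)       # truly random and used once: unbreakable
c1 = otp(b"ATTACK AT DAWN!!", pad)
c2 = otp(b"RETREAT AT DUSK!", pad)  # the same pad reused: a fatal mistake

# XORing the two ciphertexts cancels the pad, leaving only the XOR of the
# two plaintexts -- exactly the patterns a cryptanalyst can pick apart.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == otp(b"ATTACK AT DAWN!!", b"RETREAT AT DUSK!")
```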

  The digital one-time pad that Zimmermann programmed as a hobby project generated its random numbers with FORTRAN’s random number generator. Never mind that FORTRAN actually used a pseudorandom number generator based on a math operation known as a linear congruential equation. Zimmermann thought he’d created an uncrackable encryption program before his senior year of college. “It was very simpleminded crypto, but I believed it was fiendishly clever,” Zimmermann says. A few years later, he would find that same scheme he had “invented” in the homework section of a textbook by Georgetown cryptography professor Dorothy Denning. The assignment, Zimmermann discovered to his embarrassment, was to break the code, and it was considered a relatively easy problem.
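A linear congruential generator is a one-line recurrence, which is precisely why it makes such poor crypto. The sketch below uses illustrative constants (not FORTRAN's actual parameters): anyone who sees a single output can predict every output that follows.

```python
# Hypothetical linear congruential generator of the kind behind old
# FORTRAN random-number routines (constants are illustrative only).
def lcg(seed: int, a: int = 1103515245, c: int = 12345, m: int = 2**31):
    state = seed
    while True:
        state = (a * state + c) % m    # the linear congruential equation
        yield state

gen = lcg(seed=42)
outputs = [next(gen) for _ in range(3)]

# Each output is fully determined by the previous one, so one leaked
# value (plus the public constants) replays the entire "random" stream:
predicted = (1103515245 * outputs[0] + 12345) % 2**31
assert predicted == outputs[1]
```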

  Zimmermann, a good-natured and humble crypto devotee, took that intellectual blow in stride and lost none of his gusto for the science of scrambling data. And in 1977, he read an article in Scientific American written by Martin Gardner that would change the course of his life as swiftly as Ellsberg’s speech in Colorado.

  Gardner’s article explained a revolutionary new breed of encryption called public key cryptography. And it solved a problem that had plagued cryptographers since the birth of codes: how to share secrets between two people who have never met.

  With traditional encryption, or what’s known as private key or sometimes symmetric key cryptography, the individuals communicating must somehow both have the secret bit of data, known as the key, that locks and unlocks the encryption on their messages, just as a one-time pad can be added to or subtracted from a message to scramble or unscramble it. If Alice in New York wants to send a private message to Bob in London, she uses a private key to encrypt her message and Bob uses the same key to decrypt it.

  But there’s an inherent Achilles’ heel in that scheme: If Bob has never met Alice, how does Bob get Alice’s key securely? She has to send it to him somehow. But they can’t encrypt the message that carries the key—they come up against the same problem of how to send a key that decrypts that message. If Alice gives up and mails Bob an unencrypted key, on the other hand, any sinister man-in-the-middle could intercept it, copy it, send it on its way, and then decode all their future messages. Unless Alice and Bob have already met in some dark alley and shared their key, private key encryption is hardly private at all. (In fact, it’s called “private key encryption” precisely because the key must be kept private, which is what makes actually using it so tough.)

  Public key encryption, on the other hand, uses some mathematical tricks that vaporize that private key problem as thoroughly as a used one-time pad in a burn bag. In the public key cryptographic scheme, Alice doesn’t need to use a private key to encrypt her message and then messenger a copy of the key to Bob. Instead, Bob performs some computational sleights of hand that generate two keys, one known as the public key and one as the private key. That public key isn’t for decrypting secrets. It’s only for encrypting them. And it has a unique, almost magical property: What’s encrypted with that key can only be decrypted with Bob’s private key.

  Suddenly the conundrum of how Alice mails the private key to Bob disappears. Bob already has the private key, and he can send his public key—the key Alice needs to encrypt messages that only Bob can unlock—to Alice on a postcard from London to New York. The sinister man-in-the-middle can read that postcard all he likes. Not only that, Bob posts his public key on his website, prints it on his business card, and even adds it to the signature of his e-mail. In fact, Bob wants everyone to see the public key, because it’s used for harmlessly scrambling secrets, not unscrambling them. Bob’s private key, meanwhile, remains cozily stored on his hard drive, and never has to be shipped across the Atlantic Ocean. Using Bob’s widely available public key, Alice can now send Bob messages that only he can read. Mission accomplished.
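The split between Bob's two keys can be sketched with "textbook RSA," the scheme Gardner's article described. The numbers here are toy values for illustration only; real RSA keys run to thousands of bits and use careful message padding.

```python
# Toy "textbook RSA" sketch of the public/private key split.
p, q = 61, 53                  # Bob's secret primes
n = p * q                      # 3233: the modulus, part of the public key
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent: (n, e) goes on the postcard
d = pow(e, -1, phi)            # private exponent: stays on Bob's hard drive

message = 65                            # Alice's plaintext, encoded as a number
ciphertext = pow(message, e, n)         # anyone can encrypt with the public key
assert pow(ciphertext, d, n) == message # only Bob's private d recovers it
```

Note the asymmetry doing the work: knowing `(n, e)` lets the man-in-the-middle encrypt all he likes, but recovering `d` requires factoring `n`, which for real key sizes is believed to be computationally infeasible.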

  In his article, Gardner quoted a dictum from Edgar Allan Poe, that “human ingenuity cannot concoct a cipher which human ingenuity cannot resolve.” Poe, in other words, believed no seemingly unbreakable cipher exists that can’t be outsmarted by some other, cleverer cryptographer. But Poe had been proven wrong, Gardner wrote, by the implementation of public key encryption invented by three MIT scientists, what would come to be known as RSA. He concluded that the scheme was no less than potential proof that just such a practical, unbreakable form of encryption was possible.

  “If the M.I.T. cipher withstands [cryptanalysts’] attacks, as it seems almost certain it will,” he wrote, “Poe’s dictum will be hard to defend in any form.” By Gardner’s calculation, cracking MIT’s code would take about forty quadrillion years. (In fact, that was a few zeros too many. But even so, requiring somewhere between twenty and forty times the age of humanity to crack meant the scheme remained fairly secure.)

  Here was an invention as boundlessly powerful, in its own way, as the atomic bomb, but one that could shield dissidents instead of arming despots. All that was needed was a tool to bring public key encryption out of the realm of academics and spooks, and into the hands of political troublemakers. Zimmermann, naturally, aimed to build it.

  Tim May read Gardner’s article the same year as Zimmermann, and its promise of unbreakable cryptography planted a seed deep in his science-fiction-fueled imagination. “I came to see [encryption] as a kind of force shield, where the energy to pierce it is more than the entire energy of the universe,” May says in the hurried tone that he adopts when building toward something that excites him. “It was a truly impenetrable bubble of privacy.”

  The seed was still there in 1987, when May learned of Phil Salin’s AMIX information exchange plan. Salin would die of stomach cancer several years later, with AMIX still unrealized. But May’s obsessive mind never let go of the idea’s subversive potential. If encryption could hide not only what was said, but who was saying it, he realized, that new flavor of secrecy could transform Salin’s innocent information market into a guerrilla bazaar for buying, selling, and distributing all the world’s secrets.

  With those inchoate thoughts of anonymous security breaches whispering in May’s ear, he discovered the article whose ideas would finally make his crypto-libertarian dreams possible. It was a 1985 cover story in Communications of the ACM, the journal of the Association for Computing Machinery, already years old when May came upon it. Its author: David Chaum, a man who would come to be known as the prophet and godfather of digital anonymity.

  Chaum, a bearish, bearded, and white-maned academic, today heads a foundation devoted to secure voting, and spent a decade pitching an anonymous transactions system called eCash. Despite signing up a few major banks, Chaum’s crypto-currency never quite caught on, a result of what some say is bad luck and others say was Chaum’s overly controlling style of doing business, which may have quashed many of his company’s attempts to find mainstream partnerships. But few in the computer security world doubt Chaum’s sheer cryptographic brilliance—his patents range from physical locks to software security systems to anonymity and pseudonymity mechanisms that would secure his reputation as a computer science and information security powerhouse.

  Growing up and attending high school in an L.A. suburb, Chaum lived the rebellious life of a child who understands he is smarter than everyone he knows. He would show up for shop class and then play hooky the rest of the day, crossing town to sneak into computer science classes at UCLA. He ordered technical manuals for IBM and Fairchild Semiconductor chipsets, and read them the way other kids read comic books. Since no engineers at tech firms would answer the questions of a teenage upstart, he even incorporated a shell company—Security Technology Corporation—and would use it as a front to call up firms and ask questions. “I sensed that secrecy was this powerful mechanism,” he says. “I was fascinated by all of it: dead drops, document security, burglar alarms, safes and vaults, locks, flaps and seals.”

  Attending the University of California, San Diego, in the early seventies, he breathed in the era’s liberal sense of privacy and left-wing distrust of power. Chaum later left a four-year graduate fellowship at UCLA after just one quarter, disgusted with the program’s military funding. Escaping to Berkeley, he focused on privacy and security features in computing, technologies that he argued were needed in a world where personal data would be ubiquitous and governments could mine it endlessly to track citizens.

  The department head at Berkeley, Manuel Blum, chastised Chaum for taking such a focused view of his work’s social goals—no scientist should attempt to predict the effects of his or her research, Blum warned. Chaum responded with a dry thank-you note in the introduction to his master’s thesis, writing that the urge to prove his adviser wrong had been a central motivating factor in working on the paper.

  Later, as a professor at New York University and the University of California, Chaum became obsessed with the problem of anonymity and its political implications, neglecting his teaching for a year to pore over the entire literature of the social benefits and evils of protecting individuals’ identity, works by thinkers like Thomas Kuhn and Lewis Mumford. He came out of that personal study surer than ever of his views on privacy, and it was soon after that he unleashed the article that would ignite an entire generation of crypto-focused anonymity advocates.

  It was titled “Security without Identification: Transaction Systems to Make Big Brother Obsolete.” And to a reader like May, it must have seemed like one brilliant gift to the world of ideas after another.

  It began with a prescient description of how the digital world would allow surveillance and manipulation of normal people on a terrifying scale. “New and . . . serious dangers derive from computerized pattern recognition techniques: even a small group using these and tapping into data gathered in everyday consumer transactions could secretly conduct mass surveillance, inferring individuals’ lifestyles, activities, and associations,” Chaum wrote. “The automation of payment and other consumer transactions is expanding these dangers to an unprecedented extent.” Big Brother was no longer a character in 1984. Data tracking and surveillance was an immense societal problem looming just over the newly formed Internet’s horizon.

  And then, over the next fifteen thousand relentlessly logical words, he offered a collection of semimagical solutions, what he intended to be a comprehensive system for both securing information against abuse and safeguarding civil liberties in a digital era.

  First, Chaum outlined a method of using “card computers”—tiny machines that resembled credit card–size calculators, as he described them. They would work as virtual wallets, holding a database of encrypted ID credentials and allowing users to spend and receive digital currencies, unique numbers cryptographically protected to prevent forgery or double-spending of the same dollar or deutschmark.

  Those crypto-card computers would enable encrypted transaction tricks that wouldn’t be possible with mere cash: One such mathematical feat was what Chaum called a “blind signature.” A typical cryptographic signature allows anyone to put their personal stamp on a message in a way that no one else can forge. Alice’s signature can prove to Bob that a message to him came from Alice and only Alice. That concept had been proposed in the same 1976 paper that first suggested public key cryptography. In his paper, Chaum took the idea a step further, showing a “blind” method of applying that unforgeable stamp—that is, now anyone could put their one-of-a-kind cryptographic signature on a chunk of encrypted data without ever decrypting its contents.

  Why does that blind stamp matter? Because, as Chaum described it, now a bank or store could put an unforgeable cryptographic signature on a piece of digital currency without being able to identify and trace each individual virtual coin. Consider the analogy of a money order in a carbon paper envelope. In Chaum’s system, someone could write a money order for ten dollars, put it in a sealed cryptographic envelope, and take it to a bank that would remove ten dollars from the user’s account and apply Chaum’s blind signature function to certify the money order without opening the carbon paper envelope, like a stamp that leaves its unforgeable signature on the paper inside.

  When the money order is spent, the user opens the envelope and hands it to the cashier, who checks the now-visible signature on the money order with a bank that verifies the money order is real and worth ten dollars. But because the order was sealed when the bank initially signed it, the financial institution can’t put together who withdrew the currency and who finally received it. The result of all that sealing, stamping, and unsealing of envelopes, made effortless by Chaum’s card computers, would be a usable digital currency that can’t be traced. Strike one against Big Brother.
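Chaum built his blind signature on RSA, and the carbon-paper-envelope trick can be sketched directly in that arithmetic (toy parameters again, purely for illustration): Alice multiplies her money order by a random blinding factor raised to the bank's public exponent, the bank signs the blinded value without ever seeing the order, and Alice divides the factor back out to reveal a perfectly ordinary signature.

```python
import math

# The bank's toy RSA key pair.
p, q = 61, 53
n, e = p * q, 17                       # public verification key
d = pow(e, -1, (p - 1) * (q - 1))      # private signing key

m = 1234                               # the money order, encoded as a number
r = 7                                  # Alice's secret blinding factor
assert math.gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n       # the sealed carbon-paper envelope
blind_sig = pow(blinded, d, n)         # the bank signs without seeing m
sig = (blind_sig * pow(r, -1, n)) % n  # Alice divides out the blinding factor

assert sig == pow(m, d, n)             # a valid signature on m itself
assert pow(sig, e, n) == m             # anyone can verify with the public key
```

The bank only ever saw `blinded`, which the random factor `r` makes statistically unrelated to `m` — so when the signed money order later surfaces at a shop, the bank cannot connect it back to the withdrawal.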

  Chaum intended his system of card computers and blind signatures to be used for more than money. A credentialing organization—say, the department of motor vehicles—could similarly put a blind stamp on the card computers’ digital equivalent of a driver’s license. The DMV wouldn’t ever see the user’s full identifying information, but a cop who pulls over a driver could be shown the signature to see that the driver was certified. The necessary credentials of daily life could be split from identification just as easily as financial transactions. Strike two for ubiquitous privacy.

  But Chaum wanted to go beyond merely hiding the path of transactions or the personal details on credentials. He aimed to hide the source of any communications from any snoop. And the third major idea in his paper would be the most elegant blow yet against any would-be surveillance society: a compact method for a group of people to communicate without ever exposing who was doing the talking at any time, a force shield around the identity of a message’s sender even more powerful than the one that public key encryption provided for the content of that message. A foolproof cloak of anonymity. Chaum called his privacy panacea the Dining Cryptographers Network, or DC-Net.

  Imagine that three cryptographers are having dinner at a restaurant. At the end of the meal, no bill arrives. The three diners want to know if the check has been paid, but out of discretion, none wants to directly ask a waiter or either of their fellow diners if some generous friend among the three secretly paid it.

  So instead, they play a game. Two of the cryptographers flip a coin behind a menu to prevent the third from seeing whether it lands heads or tails. Then they go around the table, repeating the secret coin flip between each pair of cryptographers, always keeping the coin toss behind the menu to hide the result from the third friend.

  When all that coin flipping is done, each cryptographer gives a thumbs-up or a thumbs-down: up if the results of the two coin tosses he or she saw were the same, and down if the results were different. But there’s one important exception: If one of the three paid the bill, that magnanimous cryptographer flips his or her thumb in the opposite direction.

  If the total number of thumbs up is even (zero included), everyone knows the bill has been paid, and no one’s secret generosity has been exposed. If it hasn’t been paid, the sum of thumbs up will be odd, and the three stingy cryptographers can start arguing about which cheapskate’s turn it is to pick up the check.

  Silly as that dining cryptographers parlor game sounds, it represented a groundbreaking new idea: that a group of people can communicate among themselves without ever identifying who’s doing the talking. In more academic-focused papers, Chaum would show that his DC-Net system was capable of much more than anonymously determining whether a bill had been paid among three friends. Just as it could communicate a single binary “yes” or “no” question in the bill-paying case, it could be expanded to any number of people
and any digital message—all computer communications are composed of ones and zeros, after all—whether it be a financial transaction or the launch codes for nuclear missiles.
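A single one-bit round of that parity game is small enough to simulate (an illustrative sketch, not Chaum's own notation): each adjacent pair of diners shares a secret coin flip, each diner announces whether the two flips they saw differed, and the payer, if any, inverts his or her announcement.

```python
import secrets

def dc_net_round(payer=None):
    # Coin i is flipped secretly between diner i and diner (i + 1) % 3.
    coins = [secrets.randbits(1) for _ in range(3)]
    announcements = []
    for diner in range(3):
        # Each diner announces whether the two coins they saw differed
        # (1 = "different", a thumbs-down at the dinner table).
        bit = coins[diner] ^ coins[(diner - 1) % 3]
        if diner == payer:
            bit ^= 1          # the payer inverts his or her announcement
        announcements.append(bit)
    # Every coin appears in exactly two announcements, so XORing all three
    # cancels the coins and leaves only the payer's inverted bit.
    return announcements[0] ^ announcements[1] ^ announcements[2]

assert dc_net_round() == 0         # nobody paid
assert dc_net_round(payer=2) == 1  # someone paid -- but the 1 can't say who
```

The result bit reveals *whether* a message was sent, never *who* sent it: the coins are uniformly random, so each diner's announcement looks identical whether or not that diner is the payer. Repeating the round once per bit turns the game into a channel for arbitrary messages.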

  For an interloper like the NSA who watches the network and tries to locate a message’s source, a DC-Net isn’t just hard to break. Chaum wrote that it was “unconditionally untraceable.” He could mathematically prove that when a DC-Net is properly implemented, there is no evidence whatsoever available to a snoop hoping to find the source of a payment, letter, or leak. In Chaum’s world, mathematically perfect anonymity was as real and achievable as a flipped coin behind a menu.

  Chaum’s paper set Tim May’s mind racing. He immediately saw the ideas’ darkest implications—ones that Chaum says he never intended to enable. (One cryptographer would tell me that it was as if the crypto-anarchist movement Chaum inspired came upon the advanced technology of an alien civilization and “chose to take only the weapons.”)

  If financial transactions could be rigorously anonymous, May realized, they could fund anything: illegal drugs, assassinations, everyday transactions shielded from all taxes. If communications could be wholly split from identification, state secrets could be traded like pie recipes. Protected data havens could be created to store and allow anonymous access to illegal or taboo information: massive troves of stolen financial data and intellectual property, incriminating credit reports supposedly erased under the Fair Credit Reporting Act, the purposefully forgotten scientific results from horrific Nazi medical experiments.

 
