Emerging as the grim reaper at the party of twentieth-century triumphalism, however, Gödel proved that Hilbert’s, Carnap’s, and von Neumann’s most cherished mathematical goals were impossible. Not only mathematics but all logical systems rich enough to express arithmetic, Gödel showed in his paper—even the canonical system enshrined in the Principia Mathematica of Alfred North Whitehead and Bertrand Russell, even the logical systems of Carnap and the set theory of von Neumann—were fated to incompleteness. If they were consistent, they necessarily harbored aporias: true propositions that they could not prove. And no such system could demonstrate its own consistency from within. Every logical system necessarily depends on propositions that cannot be proved within the system.
Gödel’s argument was iconoclastic. But his method of proving it was providential. He devised a scheme—now called Gödel numbering—by which all the symbols and statements of a formal system were encoded as numbers, so that the system could make statements about itself. Thus in refuting the determinist philosophy behind the mathematics of Newton and the imperial logic of Hilbert, he opened the way to a new mathematics, the mathematics of information.8 From this démarche emerged a new industry of computers and communications currently led by Google and informed by a new mathematics of creativity and surprise.
Gödel’s proof reads like a functional software program in which every axiom, every instruction, and every variable is couched in mathematical language suitable for computation. In proving the limits of logic, he articulated the lineaments of computing machines that would serve human masters.
No one in the audience showed any sign of recognizing the significance of Gödel’s proof except von Neumann, who might have been expected to resent this incisive attack on the mathematics he loved. But his reaction was fitting for the world’s leading mathematical intellect. He encouraged Gödel to speak and followed up afterwards.
Though Gödel’s proof frustrated many, von Neumann found it liberating. The limits of logic—the futility of Hilbert’s quest for a hermetically sealed universal theory—would emancipate human creators, the programmers of their machines. As the philosopher William Briggs observes, “Gödel proved that axiomatizing never stops, that induction-intuition must always be present, that not all things can be proved by reason alone.”9 This recognition would liberate von Neumann himself. Not only could men discover algorithms, they could compose them. The new vision ultimately led to a new information theory of biology, anticipated in principle by von Neumann and developed most fully by Hubert Yockey,10 in which human beings might eventually reprogram parts of their own DNA.
More immediately, Gödel’s proof prompted Alan Turing’s invention in 1936 of the Turing machine—the universal computing architecture with which he showed that computer programs, like other logical schemes, not only were incomplete but could not in general even be proved to reach any conclusion: no procedure can decide in advance whether an arbitrary program will ever halt. Any particular program might cause the machine to churn away forever. This was the “halting problem.” Computers required what Turing called “oracles” to give them instructions and judge their outputs.11
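A minimal sketch of Turing’s diagonal argument, rendered here in Python; the function `halts` and its signature are hypothetical, since the whole point of the proof is that no such decider can exist:

```python
# Suppose, for contradiction, that a general halting decider existed.
# `halts` is hypothetical: Turing proved it cannot be implemented.

def halts(program_source: str, input_data: str) -> bool:
    """Claimed oracle: True iff the program halts on the given input."""
    raise NotImplementedError("Turing: no such decider can exist")

def diagonal(program_source: str) -> None:
    """Do the opposite of whatever `halts` predicts about a program
    run on its own source code."""
    if halts(program_source, program_source):
        while True:   # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately
```

Feeding `diagonal` its own source code forces the contradiction: if the oracle says it halts, it loops; if the oracle says it loops, it halts. Either answer refutes the oracle, so no general halting decider can be written.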
Turing showed that just as the uncertainties of physics stem from using electrons and photons to measure themselves, the limitations of computers stem from recursive self-reference. Just as quantum theory fell into self-referential loops of uncertainty because it measured atoms and electrons using instruments composed of atoms and electrons, computer logic could not escape self-referential loops as its own logical structures informed its own algorithms.12
Gödel’s insights led directly to Claude Shannon’s information theory, which underlies all computers and networks today. Conceiving the bit as the basic unit of information, Shannon defined information as surprising bits—that is, bits not predetermined by the machine. Information became the contents of Turing-oracular messages—unexpected bits—not entailed by the hermetic logic of the machine itself.
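In Shannon’s standard notation (a textbook rendering, not a formula quoted in this book), the information carried by a message of probability $p$ is its surprisal:

```latex
I = -\log_2 p \quad \text{bits}
```

A message certain to arrive ($p = 1$) carries zero bits; a fair coin flip ($p = \tfrac{1}{2}$) carries exactly one. The less expected the message, the more information it bears.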
Shannon’s canonical equation translated Ludwig Boltzmann’s analog entropy into digital terms. Boltzmann’s equation, formulated in 1877, had broadened and deepened the meaning of entropy as “missing information.” Seventy years and two world wars later, Shannon was broadening and deepening it again. Boltzmann’s entropy is thermodynamic disorder; Shannon’s entropy is informational disorder, and the equations take the same logarithmic form.
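Set side by side in modern notation (again a standard rendering, not the author’s own formulas), the kinship is plain:

```latex
S = k_B \ln W               % Boltzmann (1877): W counts microstates
H = -\sum_i p_i \log_2 p_i  % Shannon (1948): p_i = probability of message i
```

For equally probable states the two expressions coincide up to the constant factor and the base of the logarithm.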
Using his entropy index of surprisal as the gauge of information, Shannon showed how to calculate the carrying capacity—the bandwidth—of any channel or conduit and how to gauge the degree of redundancy that would reduce errors to any arbitrarily low level. This tool made possible the development of dependable software for vast computer systems and networks such as the Internet. Thus computers could eventually fly airplanes and drive cars.
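The bandwidth claim is the Shannon–Hartley theorem. In its standard form (not a formula given in the text): for a channel of bandwidth $B$ hertz and signal-to-noise power ratio $S/N$, reliable communication is possible at any rate below the capacity

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second}
```

Below that rate, enough coded redundancy can push the probability of error arbitrarily close to zero; above it, no code can.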
Information as entropy also linked logic to the irreversible passage of time, itself assured by the one-way increase of thermodynamic entropy.
Gödel’s work, and Turing’s, led to Gregory Chaitin’s concept of algorithmic information theory. This important breakthrough measures the “complexity” of a message by the length of the shortest computer program needed to generate it. Chaitin argued that physical laws alone, for example, could not explain chemistry or biology, because the laws of physics contain drastically less information than do chemical or biological phenomena. The universe is a hierarchy of information tiers, a universal “stack,” governed from the top down.
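In the standard textbook formulation (not Chaitin’s exact notation), the algorithmic complexity of a string $x$ is the length of the shortest program $p$ that makes a fixed universal machine $U$ print $x$:

```latex
K(x) = \min \{\, |p| : U(p) = x \,\}
```

A million alternating bits are algorithmically simple, since a few-line loop prints them; a million random bits require a program nearly a million bits long. By this gauge, the data of chemistry and biology vastly exceed what can be compressed into the laws of physics.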
Chaitin believes that the problem of computer science reflects the very successes of the modern mathematics that began with Newton. Its determinism and rigor give it supreme power in describing predictable and repeatable phenomena such as machines and systems. But “life,” as he says, “is plastic, creative! How can we build this out of static, eternal, perfect mathematics? We shall use postmodern math, the mathematics that comes after Gödel, 1931, and Turing, 1936, open not closed math, the math of creativity. . . . ”13 That is the mathematics of information theory, of which Chaitin is the supreme living exponent.
Cleaving all information is the great divide between creativity and determinism, between information entropy of surprise and thermodynamic entropy of predictable decline, between stories that capture a particular truth and statistics that reveal a sterile generality, between cryptographic hashes that preserve information and mathematical blends that dissolve it, between the butterfly effect and the law of averages, between genetics and the law of large numbers, between singularities and big data—in a word, the impassable gulf between consciousness and machines.
Not only was a new science born but also a new economy, based on a new system of the world—the information theory articulated in 1948 by Shannon on foundations first laid in a room in Königsberg in September 1930.
This new system of the world was consummated by the company we know as Google. Google, though still second in the market-cap race, is by far the most important and paradigmatic company of our time. Yet I believe the Google system of the world will fail, indeed be swept away in our time (and I am seventy-eight!). It will fail because its every major premise will fail.
Having begun with the exalted Newton, how can we proceed to ascribe a “system of the world” to a couple of callow kids, who started a computer company in a college lab, invented a Web crawler and search engine, and dominated advertising on the Web?
A system of the world necessarily combines science and commerce, religion and philosophy, economics and epistemology. It cannot merely describe or study change; it also must embody and propel change. In its intellectual power, commercial genius, and strategic creativity, Google is a worthy contender to follow Newton, Gödel, and Shannon. It is the first company in history to develop and carry out a system of the world. Predecessors such as IBM and Intel were comparable in their technological drive and accomplishment, from Thomas Watson’s mainframes and semiconductor memories to Bob Noyce’s processors and Gordon Moore’s learning curves. But Moore’s Law and Big Blue do not provide a coherent system of the world.
Under the leadership of Larry Page and Sergey Brin, Google developed an integrated philosophy that aspires, with growing success, to shape our lives and fortunes. Google has proposed a theory of knowledge and a theory of mind to animate a vision for the dominant technology of the world; a new concept of money and therefore price signals; a new morality and a new idea of the meaning and process of progress.
The Google theory of knowledge, nicknamed “big data,” is as radical as Newton’s and as intimidating as Newton’s was liberating. Newton proposed a few relatively simple laws by which any new datum could be interpreted and the store of knowledge augmented and adjusted. In principle anyone can do physics and calculus or any of the studies and crafts it spawned, aided by tools that are readily affordable and available in any university, many high schools, and thousands of companies around the world. Hundreds of thousands of engineers at this moment are adding to the store of human knowledge, interpreting one datum at a time.
“Big data” takes just the opposite approach. The idea of big data is that the previous slow, clumsy, step-by-step search for knowledge by human brains can be replaced if two conditions are met: All the data in the world can be compiled in a single “place,” and algorithms sufficiently comprehensive to analyze them can be written.
Upholding this theory of knowledge is a theory of mind derived from the pursuit of artificial intelligence. In this view, the brain is also fundamentally algorithmic, iteratively processing data to reach conclusions. Belying this notion of the brain is the study of actual brains, which turn out to be much more like sensory processors than logic machines. Yet the direction of AI research is essentially unchanged. Like method actors, the AI industry has accepted that its job is to act “as if” the brain were a logic machine. Therefore, most efforts to duplicate human intelligence remain exercises in faster and faster processing of the sort computers handle well. Ultimately, the AI priesthood maintains that the human mind will be surpassed—not just in this or that specialized procedure but in all ways—by extremely fast logic machines processing unlimited data.
The Google theories of knowledge and mind are not mere abstract exercises. They dictate Google’s business model, which has progressed from “search” to “satisfy.” Google’s path to riches, for which it can show considerable evidence, is that with enough data and enough processors it can know better than we do what will satisfy our longings.
Even as the previous systems of the world were embodied and enabled in crucial technologies, so the Google system of the world is embodied and enabled in a technological vision called cloud computing. If the Google theory is that universal knowledge is attained through the iterative processing of enormous amounts of data, then the data have to be somewhere accessible to the processors. Accessibility in this case is defined by the speed of light. The speed-of-light limit—nine inches in a billionth of a second—requires the aggregation of processors and memory in some central place, with energy available to access and process the data.
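A back-of-the-envelope check of that nine-inch figure, sketched in Python (assuming, as is typical, that signals in fiber or copper propagate at roughly two-thirds the vacuum speed of light):

```python
# Distance a signal covers in one nanosecond (one billionth of a second).
C_VACUUM = 299_792_458          # speed of light in vacuum, meters/second
NANOSECOND = 1e-9               # seconds
METERS_PER_INCH = 0.0254

vacuum_in = C_VACUUM * NANOSECOND / METERS_PER_INCH   # ~11.8 inches
cable_in = 0.67 * vacuum_in                           # ~7.9 inches at ~2/3 c

print(f"per nanosecond: {vacuum_in:.1f} in (vacuum), {cable_in:.1f} in (cable)")
```

At a 3 GHz clock, a signal crosses only two or three inches of wire per cycle, which is why latency pins processors, memory, and data together in centralized data centers.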
The “cloud,” then, is an artful name for the great new heavy industry of our times: gargantuan data centers composed of immense systems of data storage and processors, linked together by millions of miles of fiber-optic lines, consuming electrical power and radiating heat on a scale that exceeds that of most industrial enterprises in history.
So dependent were the machines of the industrial revolution on sources of power that propinquity to a power source—first and foremost, water—was often a more important consideration in deciding where to build a factory than the supply of raw material or manpower. Today Google’s data centers face similar constraints.
Google’s idea of progress stems from its technological vision. Newton and his fellows, inspired by their Judeo-Christian world view, unleashed a theory of progress with human creativity and free will at its core. Google must demur. If the path to knowledge is the infinitely fast processing of all data, if the mind—that engine by which we pursue the truth of things—is simply a logic machine, then the combination of algorithm and data can produce one and only one result. Such a vision is not only deterministic but ultimately dictatorial. If there is a moral imperative to pursue the truth, and the truth can be found only by the centralized processing of all the data in the world, then all the data in the world must, by the moral order implied, be gathered into one fold with one shepherd. Google may talk a good game about privacy, but private data are the mortal enemy of its system of the world.
Finally, Google proposes, and must propose, an economic standard, a theory of money and value, of transactions and the information they convey, radically opposed to what Newton wrought by giving the world a reliable gold standard.
As with the gentle image of cloud computing, Google’s theory of money and prices seems at first utterly benign and even in some sense deeply Christian. For Google ordains that, at least within the realm under its direct control, there shall be no prices at all. With a few small (but significant) exceptions, everything Google offers to its “customers” is free. Internet searches are free. Email is free. The vast resources of the data centers, costing Google an estimated thirty billion dollars to build, are provided essentially for free.
Free is no accident. If your business plan is to have access to the data of the entire world, then free is an imperative. At least for your “products.” For your advertisers, it’s another matter. What your advertisers are paying for is the enormous trove of data and the insights gained by processing it, all of which is made possible by “free.”
So the cascades of “free” began: free maps of phenomenal coverage and resolution, making Google master of mobile and local services; free YouTube videos of luminous quality and stunning diversity that are becoming a preferred vessel for Internet music as well; free email of elegant simplicity, with uncanny spam filters, facile attachments, and hundreds of gigabytes of storage, with links to free calendars and contact lists; free Android apps, free games, and free search of consummate speed and effectiveness; free, free, free, free vacation slideshows, free naked ladies, free moral uplift (“Don’t be evil”), free classics of world literature, and then free answers, tailored to your every whim by Google Mind.
So what’s wrong with free? It is always a lie, because on this earth nothing, in the end, is free. You are exchanging incommensurable items. For glimpses of a short video that you may or may not want to see to the end, you agree to watch an ad long enough to click it closed. Instead of paying—and signaling—with the fungible precision of money, you pay in the slippery coin of information and distraction.
If you do not charge for your software services—if they are “open source”—you can avoid liability for buggy “betas.” You can happily escape the overreach of the patent bureau’s ridiculous twenty-year protection for minor software advances or “business processes” like one-click shopping. But don’t pretend that you have customers.
Of all Google’s foundational principles, the zero price is apparently its most benign. Yet it will prove to be not only its most pernicious principle but the fatal flaw that dooms Google itself. Google will likely be an important company ten years from now. Search is a valuable service, and search it will continue to provide. On search it may prosper, even at a price of zero. But Google’s insidious system of the world will be swept away.
CHAPTER 3
Google’s Roots and Religions
Under the leadership of Larry Page and Sergey Brin, Google developed the integrated philosophy that currently shapes our lives and fortunes, combining a theory of knowledge (nicknamed “Big Data”), a technological vision (centralized cloud computing), a cult of the commons (rooted in “open source” software), a concept of money and value (based on free goods and automated advertising), a theory of morality as “gifts” rather than profits, and a view of progress as evolutionary inevitability and an ever diminishing “carbon footprint.”
This philosophy rules our economic lives in America and, increasingly, around the globe. With its development of “deep learning” by machines and its hiring of the inventor-prophet Raymond Kurzweil in 2012, Google enlisted in a chiliastic campaign to blend human and machine cognition. Kurzweil calls it the “Singularity,” marked by the triumph of computation over human intelligence. Google networks, clouds, and server farms could be said to have already accomplished much of it.
Google was never just a computer or software company. From its beginning in the late 1990s, when its founders were students at Stanford, it was the favorite child of the Stanford Computer Science Department, married to Sand Hill Road finance across the street, and its ambitions far transcended mere business.
Born in the labs of the university’s newly opened (Bill) Gates Computer Science Building in 1996 and enjoying the patronage of John Hennessy, soon to be the university’s president, the company had access to the school’s vast computer resources. (In 2018 Hennessy would become chairman of Alphabet, the Google holding company.) In embryo, Google had at its disposal the full bandwidth of the university’s T-3 line, then a lordly forty-five megabits a second, and ties to such venture-capital titans as John Doerr, Vinod Khosla, Mike Moritz, and Don Valentine. The computer theorists Terry Winograd and Hector Garcia-Molina supervised the doctoral work of the founders.
Rollerblading down the corridors of Stanford’s computer science pantheon in the madcap spirit of Claude Shannon, the Google founders consorted with such academic giants as Donald Knuth, the conceptual king of software; Bill Dally, a trailblazer of parallel computation; and even John McCarthy, the founding father of artificial intelligence.
By 1998, Brin and Page were teaching the course CS 349, “Data Mining, Search, and the World Wide Web.” Sun cofounder Andy Bechtolsheim, Amazon founder Jeff Bezos, and Cisco networking guru Dave Cheriton had all blessed the Google project with substantial investments. Stanford itself earned 1.8 million shares in exchange for Google’s access to Page’s patents held by the university. (Stanford had cashed in those shares for $336 million by 2005.)