In the patronizing spirit common in the field, he writes: “The small fraction of humans who have opted to live in these zones, effectively exist in a lower and more limited plane of awareness from everyone else, and have limited understanding of what their more intelligent fellow minds are doing in the other zones. However, many of them are quite happy with their lives.”
The problem is not AI itself, which is an impressive technology with much promise for improving human life. What transforms “super-AI” from a technology into a religious cult is the assumption that the human mind is essentially a computer, a material machine. That assumption springs from a belief in evolution as a random process that has produced sub-optimal human brains, relatively crude computer “wetware,” which in time can be excelled in silicon.
This assumption leads to a preoccupation with the likelihood of extraterrestrial beings. Although Kurzweil and Tegmark are both smart enough or canny enough to dismiss the existence of extraterrestrial minds, most of the movement is intoxicated by the view that we are not alone. The usual conclusion is that intelligent life on other planets is so easy, so determined by material forces, that it is “inevitable.” Expressing this assurance is SETI, the “search for extraterrestrial intelligence,” a collective effort conducted on hundreds of thousands of computers around the globe searching through electromagnetic debris for a glint of mind elsewhere in the universe. Nothing has turned up in thirty-five years or so, but Yuri Milner, the great Russian physicist-investor, has pumped another $100 million into the cause in his “Breakthrough Listen” project.
All these pursuits reflect a breakdown of terrestrial intelligence. The intellectuals of this era are simply blind to the reality of consciousness. Consciousness is who we are, how we think, and how we know. It echoes with religious intuitions and psychological identity. It is the essence of mind as opposed to machine. It is the source of creativity and free will. If you don’t understand it, you may have a theory of computers but you do not have a notion of intelligence.
All the AI scenarios assume the premise of AI super-intelligence with anthropomorphic consciousness, will, feelings, imagination, creativity, and independence. But in presenting every cockamamie view they can imagine, Tegmark and the other AI champions never come close to demonstrating that voltages, transistor gates, memory capacitors, and flip-flops can somehow know or learn anything, let alone become willful and conscious or independent of their human programmers.
The debating-point response of the super-AI proponents is that the human mind consists of electrical and chemical components that are unintelligent in themselves. But here we encounter the Gödel-Turing difficulty of self-reference. By referring back to their own brains, which they don’t really understand, the AI scientists plunge directly into the self-referential Gödel perplex. By using their own minds and consciousness to deny the significance of consciousness in minds, they refute themselves.
As Turing concluded, they need an “oracle”—a source of intelligence outside the system itself—and all he could say about the oracle is that it “could not be a machine.” Turing saw that computers repeat the uncertainties of physics that stem from recursive self-reference. Just as physics founders when it tries to use instruments made of electrons and photons to measure electrons and photons, artificial intelligence founders when computers use computers to explain themselves.
Consciousness and free will are self-reference without determinism. The AI experts want to deny it, but until they come to terms with consciousness they cannot explain mind. Kurzweil seems to believe that consciousness can be put to the side. His book How to Create a Mind is the most systematic exposition of AI and, like his masterpiece, The Singularity Is Near, full of original insights. But on the issue of consciousness both books plunge into circularity, merely asserting that when a machine is fully intelligent it will be recognized as conscious. Gödel smiled.
A symbol machine does not know anything. Software symbols represent phenomena that have been perceived consciously—known—by the outside Turing oracle, the programmer. This “cannot be a machine” because it supplies the assumptions and axioms and procedures on which the computer’s logical machine depends.
The blind spot of AI is that consciousness does not emerge from thought; it is the source of it. As Leibniz, imagining a thinking machine blown up to the size of a building, observed in the early eighteenth century, inside the machine (the determinist scheme) you find cogs and gears but no cognition. The oracle programmer must be outside. How a software programmer can miss the essence of his own trade is a mystery, but Chesterton understood the myopia of the expert:
The . . . argument of the expert, that the man who is trained should be the man who is trusted, would be absolutely unanswerable if it were really true that the man who studied a thing and practiced it every day went on seeing more and more of its significance. But he does not. He goes on seeing less and less of its significance.11
The materialist superstition is a strange growth in an age of information. Writing from his home, which he named “Entropy House,” Shannon showed that information itself is measured by unexpected bits—by its surprisal. This is a form of disorder echoing the disorder of thermodynamic entropy. Information is surprise. A determinist machine is by definition free from surprise. The answers are always implicit in the questions. There is no entropy, nothing unexpected.
This point eludes many of the great minds of the era, who imagine that information is order, or, as they sometimes put it, revealing their incomprehension, negentropy. In both thermodynamics and information theory, entropy is disorder, not order. Order defines the expected bits, the redundancy. Entropy counts the unexpected ones and so gauges the information, which reflects the degrees of freedom in the message.
Gauged by the unexpected deformation of a regularity, information is neither fully determined nor fully random. As Shannon put it, information is stochastic, adapting a Greek word that means “to aim at.” It combines probabilities with skills, and randomness with structure. Information is maximized in a high-entropy message borne by a low-entropy carrier, such as the modulated code-bearing light in a fiber-optic line.
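Shannon's measure is concrete enough to compute in a few lines. Here is a minimal sketch in Python (my illustration, not Shannon's own notation; the function name is mine) of entropy as average surprisal: a perfectly predictable message scores zero bits per symbol, while varied, unexpected symbols raise the count.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average surprisal in bits per symbol: H = sum(p * log2(1/p))."""
    counts = Counter(message)
    total = len(message)
    return sum((n / total) * math.log2(total / n) for n in counts.values())

# A fully predictable message carries no surprise and hence no information.
print(shannon_entropy("aaaaaaaa"))   # 0.0 bits per symbol
# Four equally likely symbols: two bits of surprisal per symbol.
print(shannon_entropy("abcdabcd"))   # 2.0 bits per symbol
```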
After von Neumann, Shannon was the most important figure in the establishment of the system of the world that Google now embodies. I would like to say that he showed the way out. But Shannon himself ended up enmeshed in the same materialist superstition that afflicts the Google Age. “I think man is a machine of a very complex sort,” he wrote, “different from a computer, i.e., different in organization. But it could be easily reproduced—it has about ten billion nerve cells. . . . And if you model each one of these with electronic equipment it will act like a human brain. If you take [chess master Bobby] Fischer’s head and make a model of that, it would play like Fischer.”
Shannon here is expressing the materialist faith. The brain consists of ten billion neurons governed by electrical impulses and presumably chemical reactions. For a devotee of materialism, this view is apodictically true; after all, in the flat-universe theory there is nothing else present except the chemical and physical elements.
To a closer observer, as Shannon or Kurzweil understands, there is something else: the pattern, the design, the form, the configuration—altogether the information. But if you challenge the bottom-up assumption of the sufficiency of physics and chemistry to explain it all, the materialist might say, “More dimensions—I have no need for that hypothesis.” Eclipsing consciousness, freedom of choice, and surprise, this faith ultimately defies information theory itself. Information depends on the range of freedom of choice and the surprise that can be perceived only by a conscious being.
This materialist superstition keeps the entire Google generation from understanding mind and creation. Consciousness depends on faith—the ability to act without full knowledge and thus the ability to be surprised and to surprise. A machine by definition lacks consciousness. A machine is part of a determinist order. Lacking surprise or the ability to be surprised, it is self-contained and determined.
An unconscious body is simply a hermetically logical system, which as both Gödel and Turing proved is necessarily incomplete and in need of an “oracle.” Knowledge of this incompleteness is the human condition, felt intuitively and manifested in consciousness. The “I” emerges in the domain of faith beyond the machines of logic.
Real science shows that the universe is a singularity and thus a creation. Creation is an entropic product of a higher consciousness echoed by human consciousness. This higher consciousness, which throughout human history we have found it convenient to call God, endows human creators with the space to originate surprising things.
This is the mirrored room of cosmic thought, reflective intelligence. Consciousness precedes creation, the word precedes the flesh.
“The central mistake of recent digital culture,” writes Jaron Lanier, “is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only the people were ever meaningful.”12
AI cannot compete with the human intelligence that connects symbols and objects. AI cannot do without the human minds that provide it with symbol systems and languages; program it; structure the information it absorbs in training, whether word patterns or pixels; provide and formulate the big data in which it finds numerical correlations; and set up the goals and reward schemes and target sequences that allow it to iterate, optimize, and converge on a solution. Consisting of inputs cascading through complex sets of algorithms to produce outputs, AI cannot think at all.
Thinking is conscious, willful, imaginative, and creative. A computer running at gigahertz speeds and playing a deterministic game like chess or Go is only a machine. The idea that it is superhuman makes sense only if the abacus or calculator is superhuman. Artificial intelligence refers to the output of computer algorithms that consist of ingeniously arranged electronic elements—currents, voltages, inductances, and capacitances—that gain their meaning from Boolean logical schemes, tree structures, and “neural nets.” They achieve their utility from human languages and other symbol systems, including the computer languages and mathematical reasoning that program them.
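The point can be made concrete by looking at how little sits inside one unit of a “neural net.” The following toy (my own sketch, not any production system) shows a single neuron as a weighted sum and a threshold: a deterministic function whose output is fixed entirely by its inputs and its human-chosen parameters.

```python
# A single "neuron": deterministic arithmetic, nothing more.
def neuron(inputs, weights, bias):
    """Weighted sum passed through a threshold; same inputs, same output, always."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# These weights make the unit act as a logical OR gate; the "meaning"
# of the inputs and output exists only in the human interpreter's head.
print(neuron([1, 0], weights=[0.6, 0.6], bias=-0.5))  # 1
print(neuron([0, 0], weights=[0.6, 0.6], bias=-0.5))  # 0
```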
America’s greatest philosopher, Charles Sanders Peirce, expounded this underlying reality when he developed his theory of signs and symbols, objects and interpreters. Although he wrote some 150 years ago, Peirce’s insights remain relevant to the latest software package or machine learning claim. In words that Turing would echo in describing his “oracle,” Peirce showed that symbols and objects are sterile without “interpretants,” who open the symbols to the reaches of imagination. Peirce’s “sign relation” binds object, sign, and interpreter into an irreducible triad. It is fundamental to any coherent theory of information that every symbol be linked inexorably to its object by an interpreter, a human mind. An uninterpreted symbol is meaningless by definition, and any philosophy that deals in such vacuities is sure to succumb to hidden assumptions and interpretive judgments.13
In an industry based on substrate-independent information, materialism’s basic error of banishing the interpreter is deadly to the development of new technology. You cannot grasp the intricacies of computer science with a model of the world consisting of fluctuating particles any more than a model of fluctuating particles can illuminate the brain. Knowledge of every quark and electron in a computer tells you virtually nothing of what the computer is doing. To know that, you have to address the source code, and the source code is the ground state where human interpretation is imparted.
The 2017 Asilomar conference called to mind a conference held at the same place in February 1975, at which scientists warned about the future of technology—in that case, genetic engineering. They feared that experiments enabling molecular biologists to splice DNA from two different organisms, producing novel recombinant DNA molecules and chimeras, would threaten all human life. Within a decade, so the attendees prophesied, “scientists will be able to create new species and carry out the equivalent of 10 billion years of evolution in one year.”
More than four decades later, the hopes and fears of the 1975 Asilomar conference are nowhere near coming true. The roots of nearly a half-century of frustration reach back to the meeting in Königsberg in 1930, where von Neumann met Gödel, whose proof that determinist mathematics could not produce creative consciousness launched the computer age. Von Neumann stepped forward to become the oracle of the age we are now consummating.
Reflecting on the 1975 conference, the eminent chemist-biologist Michael Denton concludes, “The actual achievements of genetic engineering are rather more mundane . . . , a relatively trivial tinkering rather than genuine engineering, analogous to tuning a car engine rather than redesigning it, an exploitation of the already existing potential for variation which is built into all living systems. . . . ” Thousands of transgenic plants have been developed with results “far from the creation or radical reconstruction of a living organism.”14 All that the first Asilomar conference managed to achieve was triggering an obtuse paranoia about “genetically modified organisms” that hinders agricultural progress around the world.
That danger of paranoid politics is the chief peril that all the Deep Learners at the new Asilomar should have recognized.
Among the Deep Learners and Google brains at the AI Asilomar was Vitalik Buterin, a twenty-three-year-old college dropout with the same etiolated, wide-eared, boy-genius look that characterized Gödel and Turing. The assembled masters of the high-tech universe may have understood him about as well as the mathematicians in Königsberg understood the twenty-four-year-old Gödel in 1930, though the audience at Asilomar had advance notice of the significance of Buterin’s work.
Buterin succinctly described his company, Ethereum, launched in July 2015, as a “blockchain app platform.” The blockchain is an open, distributed, unhackable ledger devised in 2008 by the unknown person (or perhaps group) known as “Satoshi Nakamoto” to support his cryptocurrency, bitcoin. Buterin’s meteoric rise was such that soon after the Asilomar conference the central bank of Singapore announced that it was moving forward with an Ethereum-backed currency, and other central banks, including those of Canada and Russia, are investigating its potential as a new foundation for money transactions and smart contracts.
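The ledger’s tamper-evidence comes from hash-chaining. Here is a toy sketch in Python (mine, not Nakamoto’s or Buterin’s code; real systems add distributed consensus, proof-of-work, and digital signatures, none of which is shown): each block commits to the hash of its predecessor, so altering any past entry breaks every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents; any change in the block changes this digest."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to its predecessor's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """The chain is valid only if every stored prev_hash still matches."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
print(verify(ledger))                       # True
ledger[0]["data"] = "Alice pays Bob 500"    # tamper with history...
print(verify(ledger))                       # False: the altered block breaks the chain
```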
But Buterin’s vision for the blockchain has long been broader than cryptocurrency. Ethereum’s contribution, its co-founder Joe Lubin predicts, will be an Internet without “a single powerful entity that controls the system or controls gatekeeping into the system.”15 Wired magazine speculated in 2014 that smart contracts, such as Buterin designed Ethereum to facilitate, “could lead to the creation of autonomous corporations—entire companies run by bots instead of humans.”16 If you were convening a summit of futuristic technologists in 2017, it would have been hard to avoid inviting the prophetic protagonist of Ethereum.
Perhaps Buterin, who launched Bitcoin Magazine while working as research assistant to the cryptographer Ian Goldberg, is the truest legatee of Shannon’s vision. Like Shannon he can move seamlessly between the light and dark sides of information, between communication and cryptography. Shannon’s information theory, like Turing’s computational vision, began with an understanding of codes. His first major paper, “A Mathematical Theory of Cryptography” (1945), proved that a perfectly random one-time pad constitutes an unbreakable code, a singularity. The theory of information deals with a continuum between white noise (purely random) and perfect order (predictable and information-free). Shannon’s paper focused attention on the fertile domains of redundancy in between, which he dubbed “stochastic.” This region of controlled or bounded probability comprises the subject of communications, information codes, encryption, and decryption that is the heart of bitcoin, blockchain, and Ethereum.
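Shannon’s perfect-secrecy result is simple to demonstrate. A minimal sketch (my illustration, using the standard XOR construction rather than Shannon’s 1945 notation): combine the message with a truly random pad of equal length, used once. Without the pad, the ciphertext alone carries zero information about the message.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings; applying the same pad twice undoes it."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # truly random, as long as the message, used once

ciphertext = xor_bytes(message, pad)      # encryption: message XOR pad
recovered = xor_bytes(ciphertext, pad)    # decryption: the same operation again

assert recovered == message
# Without the pad, every plaintext of this length is equally likely,
# which is precisely Shannon's definition of perfect secrecy.
```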
At Asilomar, Buterin might have offered incisive recommendations for how to control the machine through the blockchain. But Tegmark does not mention him in Life 3.0. Larry Page, Elon Musk, and the paladins of Google’s DeepMind are the heroes. On page 236, though, Tegmark suggests that a ruling super-intelligent AI might well invent a new cosmic cryptocurrency “in the spirit of bitcoin”—as if the mysterious Satoshi might have been an AI program.
The strong implication is that Buterin and his colleagues will have to take a back seat in the AI bandwagon, which represents the climactic technology in the history of human invention. The idea of a new generation of transformational technologists does not fit the plot line of a new eschaton.
But Google and its world are looking in the wrong direction. They are actually in jeopardy, not from an all-powerful artificial intelligence, but from a distributed, peer-to-peer revolution supporting human intelligence—the blockchain and the new crypto-efflorescence. Buterin and his allies are dedicated to restoring data to its originators and incorporating it horizontally and interoperably across the cryptocosm. Google’s security foibles and AI fantasies are unlikely to survive the onslaught of this new generation of cryptocosmic technology.
CHAPTER 10
1517
Nothing helps you understand something like investing in it. To participate in this new generational movement in technology, in July 2015 I became a founding partner in the 1517 Fund, led by venture capitalist–hackers Danielle Strachman and Mike Gibson and partly financed by Peter Thiel.
With a combination of thoughtful authority and seemingly boundless energy reminiscent of Thiel himself, Strachman and Gibson ran the Thiel Fellowship for its first five years. Founded in 2011 to pluck young people from the cradle of credentialism, the Fellowship induces students in their early twenties or younger to “skip or stop out of college.” While they work on their own unique projects, “they receive a [two-year] $100,000 grant and support from the Thiel Foundation’s network of founders, investors, and scientists.”1