The following day, I met a neuroengineer named Ed Boyden. Boyden, a bearded and bespectacled and serenely exuberant American in his mid-thirties, led the Synthetic Neurobiology research group at MIT Media Lab. His work involved building tools for mapping and controlling and observing the brain, and using them to figure out how the thing actually works. He had gained considerable fame in recent years for his role in the creation of optogenetics, a neuromodulation technique whereby individual neurons in the brains of living animals could be switched on and off by the application of directed pulses of light.
Randal had mentioned his name on several occasions during our discussions—both as someone broadly supportive of whole brain emulation and whose work was of significant relevance to that project—and Boyden had been a speaker at the Global Future 2045 event in New York the previous year.
It was Boyden’s belief, he told me, that it would eventually be possible to build neuroprosthetic replacements for brain parts—which, if you take the Ship of Theseus view of things, is essentially the same as believing that whole brain emulation is possible.
“Our goal is to solve the brain,” he said. He was referring here to the ultimate goal of neuroscience, which was to understand how the brain did what it did, how its billions of neurons, and the trillions of connections between them, organized themselves in such a way as to produce specific phenomena of consciousness. I was struck by the mathematical implications of the term solve, as though the brain could, in the end, be worked out like an equation or a crossword puzzle.
“To solve the brain,” he said, “you have to be able to simulate it in a computer. We’re working very hard on ways to map the brain, using connectomics. But I would argue that connections are not enough. To understand how information is being processed, what you really need are all the molecules in the brain. And I think a reasonable goal at this point would be to simulate a small organism, but to do this you need a way to map a 3D object such as the brain with nanoscale precision.”
Boyden’s team at MIT, as it happened, had recently developed just such a radical tool. It was called expansion microscopy, and it involved physically inflating samples of brain tissue using a polymer most commonly found in baby diapers. The polymer allowed for a scale blowup of the tissue, which kept all the proportions and connections in place, and facilitated a radically increased level of detail in mapping.
Boyden took out his laptop and showed me some 3D images of brain tissue samples that had been made using the technique.
“So what’s the ultimate aim of this?” I asked him.
“Well, I think it’d be great if we could actually localize and identify all the key proteins and molecules in the brain circuit. And then you could potentially make a simulation, and model what’s going on in the brain.”
“When you say simulation, what are you talking about? Are you talking about a functioning, conscious mind?”
Boyden paused for a moment and, in a quietly rhetorical flourish, confessed to not really understanding what the term “consciousness” meant—at least not precisely enough to answer my question.
“The problem with consciousness as a word,” he said, “is we have no way of judging whether it’s there or not. There’s not like a test that you can run and if it scores ten or higher, then that’s consciousness. So it’s hard to know whether a simulation would be conscious per se.”
He gestured toward the laptop in front of him on the table, in the conference venue’s vast empty banquet room where we’d come to talk, and he said that in order to understand a computer, it was not enough to understand the wiring; you needed to understand the dynamics.
“There are five hundred million of these laptops on the planet,” he said, “and they all have the same static wiring, but right now, at this moment, they’re all doing different things dynamically. So you need to understand the dynamics, not just what’s in there in terms of wiring and microchips and so on.”
He clicked around for a few seconds on his trackpad, and brought up an image of a worm, animated with twinkling colored spots of light. This was the C. elegans nematode, a transparent roundworm about a millimeter in length, much favored by neuroscientists for its manageably tiny number of neurons (302). This worm was the first multicellular organism ever to have its genome sequenced, and is to date the only creature to have its connectome fully mapped.
“So this is the first attempt to image all the neural activity in a whole organism,” he said, “at a rate that’s fast enough to capture the activation of all those neurons. And so if we can capture the connectivity and molecules in a circuit, and if we can watch what’s happening in real time, then we can try to really see whether the simulated dynamics recapitulate the empirical observation.”
“At which point you’ll what? Be able to translate this worm’s neural activity into code? Into a computable form?”
“Yes,” said Boyden. “That’s the hope.”
I felt that he was holding back from telling me he believed that whole brain emulations would at some point become a reality, but it was clear that he felt the principle to be sound, in a way that Nicolelis did not. And what he was telling me, ultimately, was that whether or not his research led there in the end, and whether or not whole brain emulation was his own ultimate goal, the kind of research necessary for its achievement was precisely the kind of research he himself was doing at MIT.
This was all clearly a very long way from where Randal wanted to get to, a very long way from his mind, or mine, or yours, on a laptop screen, with its hundred billion firing neurons glimmering with the light of purified consciousness. But it was an illustration of a principle, a statement of a possibility: an indicator that this thing that Randal wanted to do was not entirely crazy, or not, at least, entirely outside of the bounds of the thinkable.
—
In my first couple of conversations with Randal, my questions tended to focus on the technical aspects of whole brain emulation—on the means by which it might be achieved, and on the overall feasibility of the project. This was useful insofar as it confirmed for me that Randal at least knew what he was talking about, and that he was not insane, but this should not be taken to imply that I myself understood these matters in anything but the most rudimentary fashion.
One evening, we were sitting outside a combination bar/laundromat/stand-up comedy venue on Folsom Street—a place with the fortuitous name of Brainwash—when I confessed to Randal that the idea of having my mind uploaded to some technological substrate was deeply unappealing to me, horrifying even. The effects of technology on my own life, even now, were something about which I was profoundly ambivalent; for all I had gained in convenience and “connectedness,” I was increasingly aware of the extent to which my movements in the world were mediated and circumscribed by corporations whose only real interest was in reducing the lives of human beings to data, as a means to further reducing us to profit. The “content” we consumed, the people with whom we had romantic encounters, the news we read about the outside world: all these movements were coming increasingly under the influence of unseen algorithms, the creations of these corporations—whose complicity with government, moreover, had come to seem like the great submerged narrative of our time. Given the world we were now living in, where the fragile liberal ideal of the autonomous self was already receding like a half-remembered dream into the doubtful haze of history, wouldn’t a radical fusion of ourselves with technology amount, in the end, to a final capitulation of the very idea of personhood?
Randal nodded again, and took a sip of his beer.
“Hearing you say that,” he said, “makes it clear that there’s a major hurdle there for people. I’m more comfortable than you are with the idea, but that’s because I’ve been exposed to it for so long that I’ve just got used to it.”
The most persistently troubling philosophical question raised by all of this is also the most basic: Would it be me? If the incalculable complexity of my neural pathways and processes could somehow be mapped and emulated and run on a platform other than the 3.3 lbs of gelatinous nervous tissue contained inside my skull, in what sense would that reproduction or simulation be “me”? Even if you allow that the upload is conscious, and that the way in which that consciousness presents itself is indistinguishable from the way I present myself, does that make it me? If the upload believes itself to be me, is that enough? (Is it enough that I believe myself to be me right now, and does that even mean anything at all?)
I had a very strong feeling—an instinctual burst of subcortical signals—that there was no distinction between “me” and my body, that I could never exist independently of the substrate on which I operated because the self was the substrate, and the substrate was the self.
The idea of whole brain emulation—which was, in effect, the liberation from matter, from the physical world—seemed to me an extreme example of the way in which science, or the belief in scientific progress, was replacing religion as the vector of deep cultural desires and delusions.
Beneath the talk of future technologies, I could hear the murmur of ancient ideas. We were talking about the transmigration of souls, eternal return, reincarnation. Nothing is ever new. Nothing ever truly dies, but is reborn in a new form, a new language, a new substrate.
We were talking about immortality: the extraction of the essence of the person from the decaying structure of the body, the same basic deal humanity had been dreaming of closing since at least as far back as Gilgamesh. Transhumanism is sometimes framed as a contemporary resurgence of the Gnostic heresies, as a quasi-scientific reimagining of a very ancient religious idea. (“At present,” as the political philosopher John Gray puts it, “Gnosticism is the faith of people who believe themselves to be machines.”) The adherents of this early Christian heretical sect held that the material world, and the material bodies with which human beings negotiated that world, were the creation not of God but of an evil second-order deity they called the demiurge. For the Gnostics, we humans were divine spirits trapped in a flesh that was the very material of evil. In his book Primitive Christianity, Rudolf Bultmann quotes a passage from a Gnostic text outlining what must be done in order to ascend to the realm of divine light:
First thou must rend the garment that now thou wearest, the attire of ignorance, the bulwark of evil, the bond of corruption, the dark prison, the living death, the sense-endowed corpse, the grave thou carriest around with thee, the thievish companion who hateth thee in loving thee, and envieth thee in hating thee…
It was only through the achievement of higher refinements of knowledge that an elect few—the Gnostics themselves, initiates of divine information—could escape from the evil of embodiment into the rarefied truth of pure spirit. The Jesus of the Gnostic Apocrypha is contemptuous of the body in a way that is much more explicit and unambiguous than anything in canonical Scripture. In the Gnostic Gospel of Thomas, He is quoted as saying, “If spirit came into being because of the body, it is a wonder of wonders. Indeed, I am amazed at how this great wealth has made its home in this poverty.”
These beliefs, as Elaine Pagels puts it in her book The Gnostic Gospels, “stood close to the Greek philosophic tradition (and, for that matter, to Hindu and Buddhist tradition) that regards the human spirit as residing ‘in’ a body—as if the actual person were some sort of disembodied being who uses the body as an instrument but does not identify with it.” For the Gnostics, the only redemption would come in the form of liberation from that body. And a technological version of this liberation seemed to me to be what whole brain emulation was ultimately all about.
This techno-dualistic account of ourselves, as software running on the hardware of our bodies, had grown out of an immemorial human propensity to identify ourselves with, and explain ourselves through, our most advanced machines. In a paper called “Brain Metaphor and Brain Theory,” the computer scientist John G. Daugman outlines the history of this tendency. Just as the water technologies of antiquity (pumps, fountains, water-based clocks) gave rise to the Greek and Roman languages of pneuma and the humors; and just as the presiding metaphor for human life during the Renaissance was clockwork; and just as in the wake of the Industrial Revolution, with its steam engines and pressurized energies, Freud brought these forces to bear on our conception of the unconscious, there was now a vision of the minds of humans as devices for the storing and processing of data, as neural code running on the wetware of the central nervous system.
If we are anything at all, in this view, what we are is information, and information has become an unbodied abstraction now, and so the material through which that information is transmitted is of secondary importance to its content, which can be endlessly transferred, duplicated, preserved. (“When information loses its body,” writes the literary critic N. Katherine Hayles, “equating humans and computers is especially easy, for the materiality in which the thinking mind is instantiated appears incidental to its essential nature.”)
A strange paradox lies at the heart of the idea of simulation: it arises out of an absolute materialism, out of a sense of the mind as an emergent property of the interactions between physical things, and yet it manifests as a conviction that mind and matter are separate, or separable. Which is to say that it manifests as a new form of dualism, even a kind of mysticism.
The more time I spent with Randal, the more preoccupied I became with finding out what exactly it was that he envisioned when he thought about the eventual achievement of his project. What would be the experience of an uploaded version of the self? What did he imagine it would feel like to be a digital ghost, a consciousness untethered to any physical thing?
His answers varied whenever I asked him about this, and he was open about the fact that he didn’t have any one clear picture. It would depend, he told me, on the substrate; it would depend on the material of being. Sometimes he told me that there would always be a material presence, some version of flesh and blood, and then sometimes he would invoke the idea of virtual selves, of embodied presences in virtual worlds.
“I often think,” he said, “that it might be like the experience of a person who is, say, really good at kayaking, who feels like the kayak is physically an extension of his lower body, and it just totally feels natural. So maybe it wouldn’t be that much of a shock to the system to be uploaded, because we already exist in this prosthetic relationship to the physical world anyway, where so many things are experienced as extensions of our bodies.”
I realized, at this point, that I was holding my phone in my hand; I placed it on the table, and we both smiled.
I mentioned to Randal some concerns I had about potential consequences of his project. I was already troubled enough, I said, about the extent to which modern lives had been converted into code, into highly transferable and marketable stockpiles of personal information. Our every engagement with technology created an increasingly detailed portrait of our consumer selves, which was the only version of the self that mattered to the makers of these technologies. How much worse would it be if we existed purely as information? Would consciousness itself become a form of cognitive clickbait? Even now, I said, I was already imagining some terrifying extrapolation of native advertising, whereby my inclination to order another Sierra Nevada arose not out of any self-contained nexus of desire and volition, but from some clever piece of code that had been frictionlessly insinuated into the direct-marketing platform of my consciousness.
What if the immortalization procedure, the emulation and the upload, wound up being so expensive that only the extremely wealthy could afford the ad-free premium subscription, and the rest of us losers had to put up with subsidizing our continued existence through periodic exposure to thoughts or emotions or desires imposed from above, by some external commercial source, in some hellish sponsored content partnership of the self?
Randal did not disagree that such a situation would be undesirable. None of it, though, was directly relevant to his immediate project here, which was to solve the basic problem of human embodiment, rather than to head off at the pass any unintended consequences thereof.
“Plus,” he said, “it’s not like that kind of influence is exactly unique to software. It can be done with biological brains. By means of advertising, say. Or by means of chemicals. It’s not like you wanting another beer doesn’t have anything to do with the alcohol you’ve already consumed. It’s not like your desires are entirely independent of outside influence.”
I took a long drink of my beer, resolving as I did so to forgo ordering a second. A stench of weed had settled heavily on the warm evening, like a dank fog drifting in from the bay, and the air itself seemed in the grip of a teeming, paranoid high. Just a few feet from where we were sitting, on the corner of Folsom and Langton Streets, a young homeless man lay curled in the fetal position by a lamppost; he’d been keeping up a low, muttering monologue the entire time we’d been there, and as I put down my beer and thought about the things Randal had been saying, the man, whose face I could not see, let out a high, hysterical series of rapid-fire titters. I found myself thinking of a line from Nietzsche’s The Gay Science, about how strange, how uncannily wrong-natured, we must seem to other animals: “I fear that the animals see the human being as a being like themselves who in a most dangerous manner has lost its animal common sense—as the insane animal, as the laughing animal, the weeping animal, the unhappy animal.”