Blogs thrive on leaping to conclusions. They accept all inputs credulously, the excuse being that they can always correct the post later. Before long, anyone who had ever known Kyle—relatives, playmates, schoolmates, business associates, and former lovers—was sharing Kyle Buskirk stories. So it was not long before they discovered Jared.
But while Jared was quite willing to argue with Kyle into the small hours of the night, he was disinclined to do so for the entertainment of strangers. When a well-known blog host asked him what he thought of Kyle’s project, Jared smiled and answered that “if anyone can do it, Kyle can.” A lot of people took this as a vote of confidence, but I noticed the conditional clause, and of course so did Kyle.
Attention-deficit disorder is the disease of the age. Whether this is a Pavlovian consequence of weaning children on the flickering, transient imagery of Sesame Street or the general replacement of reading by surfing, people seem to have misplaced the ability to concentrate. Which is to say that the proverbial fifteen minutes of fame now barely lasts five. That Kyle Buskirk proposed to create intelligent computers, let alone that Jared Holtzmann thought them impossible, flashed across the public awareness like a white-hot meteorite. The afterimage lingered for a few eyeblinks, then people were onto the latest antics of Stephanie Bloom or the possibility of mandated regime change in Algeria.
I cannot say that Jared was not relieved. He disliked the spotlight in any case. But I think Kyle was equally thankful. He knew, if the public did not, that real achievement comes incrementally from painstaking attentions paid to fine details, regardless of how sudden and unexpected the press releases might seem to an inattentive public. It was okay to paint goals in broad brush strokes, but getting there was another matter. He was less unwelcoming of attention than Jared, but he did find it distracting.
I did not see either man for a couple of years, though the occasional email showed up in my comm. box, and once Jared posted a comment on my math blog regarding a point on modal logic, and a vigorous debate ensued. We exchanged cards at appropriate times of year. Kyle hinted at some mysterious sort of breakthrough imminent at NM. Otherwise, matters remained quiet and polite. I supposed that, since they lived nearer each other, Jared and Kyle got together more often. Only later did I learn that I was mistaken.
Then one semester Jared came to the University of Chicago as a guest lecturer. He came without Gladdys, who was soloing with the Philadelphia Orchestra for a concert series, so he and I spent time together. After his final lecture, he came to see me in my office in the old Statistics and Mathematics building, a three-story brick residence across the street from Ryerson and covered with the traditional ivy. Most of the offices belonged to the Financial Mathematics group, but they put up with a few of us more exotic spillovers. Years ago, a fire had damaged the roof, but the only reminder of it now was a very faint smoky smell that emerged during heavy rainstorms.
My office was about what you might expect. Stacks of papers, shelves stuffed with textbooks, monographs, and the like. Two or three computers and ebooks even more stuffed than the shelves. I had seen Jared’s office in Princeton, and it was an altogether more orderly place. Of course, I had a couple of flatscreens, with lots of file storage, too. Very portable. But when I’m thinking through a problem, I like to spread papers over a table, or tape things to the wall. There’s a lot to be said for gestalt.
“Kyle texted me last week,” I told him. “He asked me why you keep trying to block his AI project.”
Jared had settled into a big soft chair with an old copy of Slattery balanced on the arm, and last semester’s term papers stacked on the end table beside him. He had picked up the textbook with the intent of placing it somewhere, and had seen with a blink of despair that he dared place it nowhere without the likelihood of triggering an avalanche. I took it off his hands to take it off his mind.
“I’m not trying to block him,” he said, as he settled back. “The Great Wall of China could not block him.” In those days he liked to affect a tweedy “English” look, complete with elbow patches on his jackets. “He keeps asking for my opinion, and I keep giving it. A couple months ago, I kirked his system.”
“Kirked?”
“You remember that TV character, Captain Kirk? He used to pose paradoxes to computers and they would get all tied up in logical knots and start smoking and sparking. That’s because computers are innocent. They believe what you tell them. What Kyle wants to accomplish is not just difficult; it’s impossible in principle. He doesn’t contact me as much anymore. I asked him why, and he said he didn’t need the negativity.” Jared bit his lip and shoved his hands into his jacket pockets. “He was wrong.”
“Wrong not to ask you?”
“Wrong about not needing the negativity. No one learns anything in an echo chamber.” He sighed. “And you, you’re his enabler.”
“Me!” Jared could be awfully blunt. “I proved he could approximate human intelligence arbitrarily closely for all practical purposes.”
“Maybe so; but not all human purposes are practical. An approximation is still an approximation, and the devil is in the details. If your Density Theorem is correct, Kyle may believe he has succeeded.”
That “if” nettled me. Mathematics, it seemed to me, dealt in absolutes in a way philosophy and science never could. “What’s the harm in that?”
“What’s the harm in any illusion? If we grow accustomed to sims indistinguishable from humans…”
“We’ll start treating sims as if they were real people. But how can that…?”
“No, Mac. The danger is that we’ll start treating people as if they were sims.”
I considered that for a moment. “Seems pretty far-fetched.”
Jared seemed on the brink of responding. Then he shrugged and settled back in the chair. “Maybe I worry too much. Never mind. We should head up to the Quadrangle Club for lunch.”
I checked my watch. “Wait a while. I called Beth Phillips—you remember her from the commune? She’s teaching history up at Northwestern now and I asked her to come down and meet us there.” Anything else, I had thought. Let’s talk about old commune days, about history, Beth’s new book, Jared’s new book—anything but the interminable subject of Kyle and his damned AI project.
So of course my computer chimed, and it was Kyle calling. Whatever other divine attributes God may have, a sense of humor is among them.
Kyle was using his own product, appearing on-screen as a Bill Gates imago, with the features morphed enough to look like Kyle disguised as Gates.
“Hey, Mac!” the image said. “Jared there yet? He is? Good. Turn the cam so I can see him.” I shifted the angle of the computer to take in the rest of the room, and Kyle/Gates waved. “Yo, Jared. Still thinking those long thoughts?”
Jared had leaned forward with his arms on his knees and peered intently at the screen. “Hi, Kyle,” he said. “Sorry to hear about Michele.”
Kyle made a brushing motion with his hand. “I don’t want to talk about that right now. I called to ask you out to Detroit next week, for the holidays. My expense.”
“Gladdys and I…”
“I’ll fly her out, too. We can all get together, like in Vienna. God, how long ago was that?”
“What’s such a big deal?” Jared asked, with a touch of suspicion in his voice.
The Kyle Buskirk grin split the Bill Gates face. “NM has made a breakthrough. We’ve created the artificial neuron, what we call a ‘neuristor.’ It’s a nano-scale self-powered integrated circuit that can be spliced in place of faulty axons. It works just like a prosthesis…”
“That’s wonderful,” I said. “That’s tremendous!”
Jared’s congratulations were also heartfelt. He had always believed that in reaching for the unattainable Kyle would grab hold of something remarkable. “That could revolutionize treatment not just for brain surgery, but for epilepsy, ALS, Alzheimer’s, even schizophrenia.”
It could and did, and the research team at NM—Boland, Singh, et al.—would later receive the Nobel Prize in medicine. NM was Kyle’s baby and he deserved part of the credit. But the medical possibilities took second place in his enthusiasms.
“I don’t want to talk about that right now. I want to talk about AI. We can record the neural patterns that pass through the neuristors. Now tell me,” he said earnestly, leaning toward the camera. “What’s the difference between replacing one damaged axon with a neuristor and replacing all of them? Just quantity. It’s the breakthrough we’ve always wanted. If we replace all the axons in the brain with neuristors, we can record the entire suite of neural patterns, and that means we can download the entire mind into a mainframe. The computer would have to be massive but there is no defeater blocking us. It’s all engineering now. Come out to the NM Lab in Detroit and I’ll show you something that’ll knock your socks off. You’re done with the lectures, right? How about you, Mac, when are you free?”
We agreed on a date, and as Jared returned his data pad to his jacket pocket, he said casually, “How’s the weather in Detroit?”
Kyle said, “I don’t want to talk about that right now.”
Jared smiled and pulled a phone from his pocket. “I didn’t think you would.”
He punched a number. “Hello,” he said, “put me through to Kyle Buskirk. Yes, it’s important, or I would not have called his private number.” He waited a few moments, then said in a cheery voice, “Kyle! Jared. That’s a great sim you have there.”
I could hear Kyle’s “damn!” from where I sat. The mask on the computer screen dissolved and an unaltered Kyle replaced it. He was sitting behind his desk in his office. “What tipped you off?”
Jared shrugged. “I don’t want to talk about that right now.”
Kyle drummed his fingers on his desk. “Damn. It was supposed to randomize those responses. We couldn’t give our net knowledge for every possible topic of conversation, so we set boundary responses.”
Jared said, “Your topic field was too constrained. There is always topic drift. And you need to work on your grammar engine. It sounded like you were reading a technical paper.”
“It was only a demo,” Kyle said. “Mac, what about you? Did it seem human to you?”
I was loath to appear more gullible than Jared, but I nodded. “I didn’t think to question it; but in hindsight, it’s obvious.”
Jared nodded. “Kyle…How is the weather in Detroit?”
Kyle smiled. “I don’t want to talk about that right now.”
“Make me a promise,” I told Jared as we bundled up for the walk to the Quadrangle Club. An early winter wind was blowing off the Lake, drilling the dry cold into the bone. Chicagoans have no idea what temperatures might be without the wind chill. “Promise me that we won’t talk to Beth about Kyle and his project.”
“That was a telling prank he pulled at the end there.”
I paused before tightening my muffler. “Yeah, you should’ve seen your face. Promise.”
“I promise. Mac, if Kyle wants to approximate an AI, there are two problems he has to overcome. The credulity problem…”
“I don’t want to talk about that right now.”
Jared gave me a sour look, and I could tell that that line would become a running gag among the three of us. “Look,” I pleaded with him. “Beth has an important new book out on the Elamite Tablets and their relationship to Harappa and the Dravidian language family. We are not going to burden her with this business with Kyle.”
Jared pulled the fur cap over his head. “She’ll ask about him. You know she will.”
“Besides,” I muttered, “humans can be credulous, too.”
Jared simply waited, and I finally succumbed. “Okay, okay. What’s the second obstacle?”
“Intuition,” he said. “Insight. Creativity. Whatever you want to call it. Look, Kyle is right about this much: a lot of human thought really is algorithmic. Habit, culture, genetics—eighty percent of life is on autopilot, and should be. But he hasn’t considered that not all thought is computational. Thinking is fluid, dynamic, tentative, spontaneous, sometimes creative. It’s not rule-bound, rigid, static, mechanical, and formalized. Computation and thinking are two distinct activities. Read Feser on Leibniz’ Mill. There’s no prospect of AI at all, as long as computers are machines.”
“Well,” I suggested, “isn’t that exactly what Kyle is trying to change?”
We walked a little while in silence, bundled against the biting wind and bitter sleet. When we came to the corner, Jared turned abruptly and put his back to the wind, facing me. “Speaking of Beth and old commune days,” he said, “you know that Kyle has been in contact with your sister.”
I did not deny knowing.
“He’ll have no joy of it.” Jared spun about and continued toward the club.
COMPUTER MOUSE
Jared and I booked first-class tickets on the 80-90 Bullet the following week, connecting with the local at Toledo, and so north toward Detroit. A limo met us at the maglev station in Romulus and whisked us in comfort to the labs in River Rouge. Kyle had made a policy of locating his facilities in regions that had been in economic decline. Cynics said this was because land there was cheap. Kyle wondered how many communities the cynics had helped revitalize.
The driver took us directly to the labs—a complex of tall, streamlined buildings connected by walkways on their upper stories. Chrome and glass were by then very retro and Kyle hated retro, so the façades had been faced in a white-to-cream ceramic. There were plenty of windows on three faces, but much of the fourth façade was unbroken wall decorated in geometric patterns and bas-reliefs. From the highway, the complex resembled nothing so much as the white-washed castles of the middle ages, and how was that for retro?
The limo dropped us off with a promise to deliver our luggage to our hotel rooms, which were in the nearby Turing Towers and Suites. The Lead Investigator for the Network Topology group, a Young Turk named Neill, met us at the door and took us around the building. I was stroked and praised by the staff for the Density Theorem that had made all their work possible.
That might have been painting with too broad a brush, but Kyle had always known how to lay egoboo on with a trowel. I noted a general consensus that I was “on their side” and not among the “denialists” and the “religious nuts.” I don’t know if they meant Jared and did not ask. So far as I know, Jared had never made a religious argument against AI, and those arguments I had heard him make were sound ones well-deserving of serious rebuttal. If anything, some of the “transhumanists” who had blogged on “the Great Project” came across as more overtly religious. They were certainly more faith-based.
“It’s an old dream,” Jared had said during the train ride. “Put away the old corrupted body, and put on the new body, renewed in the spirit of your mind.”
“That sounds like a quote,” I commented.
“Paraphrase,” said Jared. “It’s from the Bible.”
***
Kyle was perfectly capable of “duding up” in coat and tie. He often had to do so in Japan and China, and sometimes even in Europe. But here in his own environment, he was content in tan slacks and forest green polo shirt. He hopped to his feet when Jared and I entered the demonstration lab and pumped our hands, introducing us to others, most of whose names I have now long forgotten.
He allowed the Lead Investigator for Sims to explain the computer mouse. This was a holographic image of a mouse in a virtual environment. “Cheaper than building a robot,” Kyle interjected. I made a private bet with myself that the staff called the thing “Mickey,” and might have grown wealthy had the odds against it not been so poor.
Then the Lead Investigator for Neural Recordings—her name I do remember as Danielle—introduced us to a real mouse, called “Algernon” for some reason. Algernon, it seemed, could learn mazes with astonishing rapidity. For this demonstration, they brought in a fairly simple maze-box, and Kyle invited Jared to assemble the walls into whatever configuration pleased him.
Afterward, Algernon mastered it with the breezy competence of a professional.
“Now here,” Danielle said, handing me a half-wall. “Snap this hurdle into the post holes anywhere along the main run.”
I did so, with the comment that Algernon could easily leap over it and continue his quest.
“Of course,” she replied. “We’re counting on it.”
Within a few trials, Algernon was hopping over the barrier with ease. Then Danielle removed the barrier and let Algernon run it once more.
When the mouse came to the place where the hurdle had once been, he leaped.
“Even though it’s not there anymore?” I said with some wonderment. Jared blinked, startled, but said nothing.
“Even though it’s not there anymore,” the Lead Investigator agreed. “Mice are blind. Their eyes see only vague shapes and shadows. They navigate by smell, touch, and memory. Algernon remembers there was a barrier at this point in the run and leaps over it by habit.”
Then it was time to record the mouse’s brain. This involved, of all things, a little cap that fit over Algernon’s head that would scan the neuristors installed earlier by nanomachines. Danielle adjusted the fit and settings and explained what the cap would do. That was the first time I heard the “flashbulb in the head” metaphor.
“We’re going to record only the cognitive parts of the brain,” Danielle explained. “We’ve modeled the somatic aspects with the computer. Those are pretty much the same from mouse to mouse.”
I was still mulling over “the cognitive parts” of a mouse’s brain, when Algernon began to flail about.
Kyle, who had been sitting in a chair in the corner, leaning forward with his hands clasped between his knees, spoke up. “Don’t worry. The mouse doesn’t feel anything, and we haven’t killed it, since we’ve only flashed the cortex and the hippocampus. The somatic part will keep the body alive and kicking. It will be a zombie for a while, but…”