Valley of the Gods


by Alexandra Wolfe


  11

  Is This Really Right?

  While the baby companies of Thiel fellows such as James Proud and Paul Gu had their growing pains—with Proud still trying to get the tracker made at scale, and Gu rejiggering his business model over and over again—the two founders were comfortably navigating Silicon Valley, if not as stars, then at least as survivors. John Burnham wasn’t finding the same fate.

  By fall 2014, the twenty-one-year-old wasn’t sure he liked what he saw there. As he struggled to launch Urbit, a personal server platform, he was often frustrated by how little the valley’s hierarchy valued the actual people who built the products or used them. Perhaps it was due in part to the convoluted nature of his company, which neither he nor his cofounder, Curtis Yarvin, could seem to explain in plain English; either way, Burnham was feeling more and more detached. He felt as if his reason for being out there was to build a company that would get a mythical high valuation, which he didn’t think would come from anything like his own merit or character. And what did Silicon Valley think character was, anyway? Were people there even aware of it? If Gu’s company was reducing it to a statistic on an Excel spreadsheet, what did that mean?

  Burnham felt that the valley’s attitude toward what it meant to be human wasn’t that different from what it meant to function as a machine. Philosophically, the idea didn’t sit well with him. It didn’t fit his basic sense of right and wrong. No matter that it was also a convenient explanation for why none of his companies was really working out. Maybe he was just too moral, too principled, and too ethically correct. And now he wasn’t getting along with his cofounder.

  The qualities that had drawn him to Yarvin, otherwise known as his blogosphere hero Mencius Moldbug, proved odious in person. The man was stubborn and intractable. He didn’t really turn his ideas into reality, and they certainly didn’t translate into financial success.

  Yarvin-Moldbug was to have given a presentation at the annual Strange Loop programming conference in 2015 but was booted months beforehand because of his libertarian, non–politically correct viewpoints, such as his argument for doing away with democracy in favor of monarchy or dictatorship. Yarvin was spending more time riding that wave of fame than working on Urbit. Oddly for Silicon Valley, he and Burnham each owned 50 percent of the stock, which meant that if a disagreement arose, they were stuck: neither could move the company forward, since neither had more power than the other.

  Reading Yarvin’s contrarian posts and following his philosophy online was one thing; listening to it all day was another. Plus, Burnham seemed to have assumed that his youth, his Thiel Fellowship, and his experience attempting three previous start-ups—a badge of honor in Silicon Valley—would make the engineering expertise easier to pick up. But Urbit was centered on hard computer science.

  Burnham, though, much preferred reading books and writing. Plus, Urbit was having trouble raising money—it would eventually raise $200,000, but well after Burnham had left. The company hadn’t gone anywhere since its initial burst of blogosphere attention while Burnham was still at Dartmouth. John missed the East Coast. He missed his parents—and having means. He wasn’t wild about the rut he was falling into (along with camping out at the office), and he was feeling increased pressure to have something to show for his long stay out west.

  So Burnham gave up and decided to head back to Dartmouth. It was an odd return. He felt funny having to explain himself again to the other students. “It was a little bit embarrassing,” he remembered. It was as if he’d come back a failure, having left campus the first time after only two weeks to pursue his dream. Plus, at twenty-two, he was older than everyone else.

  In New England, especially at Dartmouth, sports and fraternities were king, not hackathons and start-up clubs. He’d become so used to the forward, game-to-meet-anyone attitude of Silicon Valley that being back on a college campus, where there were already cliques and crews with their own habits and social calendars, left him unsure of what to do with himself. John stuck it out at Dartmouth for a term and a half but didn’t feel comfortable there. He didn’t find the classes challenging enough, either. So at the end of spring term, he went on leave to attend a tiny Catholic liberal arts college in New Hampshire called Thomas More College. It was the only place where he felt he fit in. There, bright, hardworking kids who were interested in the humanities but didn’t really have a place in normal university life came to burrow away on campus and study philosophy.

  Classes were much harder there than at Dartmouth too. On break from Thomas More, he recalled a course at Dartmouth on William Shakespeare’s play Hamlet. He said the professor adored him because he had done the reading. “That was the whole course,” he said. “The only requirement was to read Hamlet and write three papers, and show up to class once in a while.” Burnham didn’t see the point.

  At Thomas More, he read The Iliad and The Odyssey, Plato’s The Republic, and Aristophanes’s The Clouds. He took Greek literature. He saw it as a needed retreat from the madness he’d just experienced out west. It was the anti–Silicon Valley.

  What was really different, he thought, was the spiritual focus of the school. Burnham had never been religious, but here he found solace in religion. “Everyone is very much on the same page in terms of values and the goal of education,” he said. “I think that’s an essential basis, that you have to have shared values and shared ways of looking at the world.” Thomas More was so small that it had only two dorms: a male dorm and a female dorm. Although they weren’t nearly as nice as the dorms at Dartmouth, Burnham liked them better. “I think that roughing it a little bit is really excellent for building character and community,” he said. “Although I miss the elevator at Dartmouth.”

  It was the opposite of the Thiel Fellowship in more ways than one. First of all, everyone did the same thing together. They all went to Rome, all took the same classes and tutorials, and all did the same reading—often about the lives of the saints. Those ancient thinkers, he said, answered more questions for him than the people in Silicon Valley ever did. Out there, he had often wondered what the purpose was of the start-ups that employed him. Were the tech companies actually making the world a better place? “Is this going to be a net gain for the human experience or a net harm?” he asked himself. John realized later that such questions were answerable only if you had some kind of basis for assessing what “the good life” really was, or “what the purpose of man’s existence is.”

  Burnham didn’t think Silicon Valley particularly cared about those questions. Its perspectives varied, he thought, from the utilitarian to the commercial, but the only real criterion was whether a company was profitable. “It’s not a complete picture of people and what people are,” he said. “So I guess for me, what formed my decision to go along a different path, is that I had a lot of questions, and I wanted to answer them, and I wanted to devote some time and serious study to figuring out why certain things occur; why do they happen, and what can we do about it?”

  In Silicon Valley, said Burnham, “It’s just a really interesting phenomenon that if you’re running the company that does nothing, you can feel like king of the world.” People felt they were accomplishing things anyway. “If it’s smoke, then there are people who go from smoke to smoke to smoke, and they have a successful track record.”

  He saw a lot of companies selling things they didn’t really have—just marketing products for hype. “In a way, that’s a kind of manipulative thing,” he said. “It’s one thing if you’ve really got something and want to bring it to the world, but I think it’s another thing if you don’t have anything, and you’re trying to raise hype and thereby sort of growing nothing really big.”

  Looking back, though, John said he wouldn’t change his course. “I don’t really know if I can second-guess the decisions I made. I was a really different person when I was eighteen, so if I had to make those decisions over again, I might, but I wouldn’t have known if I hadn’t made them.” At Thomas More, Burnham was starting over as a freshman. He’d be about twenty-five when he graduated. “It’s a very strange thing,” he said, especially since the whole point of the Thiel Fellowship was to start out in the real world earlier than usual, not later.

  Silicon Valley, he said, had given him a new perspective on the world. A different breed of person fueled technological innovation, he discovered, marked by a single-mindedness he wasn’t so sure he had. And much of that acute focus pointed in directions he didn’t want to go. “How the world of technology sees itself and the rest of the world helps you understand a lot of what’s changing,” he reflected. He found it a little sad that technological tools had replaced tasks people used to do by hand, such as writing letters, sending Christmas cards, or putting photographs in a physical album.

  All that had been digitized was now stored in the mysterious ether. “Having been in that world, I know how the sausage is made, and parts of it are not really all that pretty,” he said. It bothered him that the data people gave to these new tools and toys were no longer theirs. “Their control over it is vastly smaller than what they’re used to, but if you have photo albums, no one can look unless they get a warrant from a judge,” said Burnham. “If you’ve got photos on Facebook, or make some post, that’s gone into some enormously complex system that no one really understands.” It reflected a world, he thought, that was going from intelligible to complex and unintelligible. He quoted the science fiction writer Arthur C. Clarke, author of 2001: A Space Odyssey: “Any sufficiently advanced technology is indistinguishable from magic.”

  Sometimes the magic went to people’s heads. “I think there’s a real attitude in the tech industry that they know better than the rest of the world, or the government, or, let’s say, other industries,” he said. Those in the start-up world thought no one else did anything as efficiently as they did. Sometimes they were right. But Burnham often found that the attitude was backed by theory rather than reality. Take his own experience: his dreams of asteroid mining attracted a lot of attention and sounded good, but he could never attract investors or think of a practical way to actually travel to the asteroids and mine them for minerals. He saw a lot of companies with the same problem, promising some sort of magic but having a difficult time delivering it, such as Theranos with its revolutionary blood-testing device that wasn’t.

  12

  We Will Be God

  In Silicon Valley, much of this “magic” centered on artificial intelligence. AI was spoken of like a futuristic Merlin: a wizard who would someday descend upon the human race and turn everyone into high-functioning robots. That was one school of thought, at least. There were those who believed humans would “evolve” into machine-like creatures—cyborgs—enhanced by technology, with software programming certain aspects of their biological and cognitive functions. Then there were those who took the humanistic view, in which humans used technology to make themselves better humans, even more human. The two sounded similar, but they were different mentalities, each with its own camp of believers.

  In some respects, belief in artificial intelligence’s capabilities was divided into evolutionary AI and humanistic AI. Evolutionary AI aficionados believed that machines would take over from humans: that men and women were innately weak and full of imperfections, and that eventually a more intelligent machine would replace our meek human capabilities and guilt-ridden consciences (burdened by such crimes as environmental degradation, violence, and sexism) and make our species more effective and enlightened.

  Humanists who studied AI weren’t often found in San Francisco, but those who were believed in its ability to empower humans. In their view, humans could use their superior brainpower to master and harness the machines, never giving up their own qualia, or their command of the humanities, emotions, and uniquely human attributes. They would use technology only to function at a higher level; to become more efficient, such as by making superior software and completing more tasks. But in contrast to the evolutionary camp’s point of view, computers would never replace, enhance, or otherwise affect human feelings.

  The AI humanists left intention with people. They left faith and higher purpose to God, more often than not. Theirs was a capitalist view of artificial intelligence, one that sought to make what was already good—human intelligence—better with additional machine learning; evolutionary AI proponents, by contrast, thought feelings and emotions all boiled down to neurons firing, much as electric currents do in a computer anyway.

  Reducing humans to machines, and then allowing machines to take them over, was a way to make everyone equal again: a socialist worldview in which we were all just bunches of cells and neurons, no one was better than the next person, and one person merely had another, possibly luckier, configuration of cells than someone else. To them, it was madness to think there were innate differences between people, even in terms of intelligence, since the first intelligent machine would have the power to be more intelligent than the most intelligent human.

  The kingpins of these two movements lived on opposite coasts. Ray Kurzweil, the futurist and author of The Singularity Is Near: When Humans Transcend Biology, represented the West Coast evolutionary AI figures (though they would never call themselves that), while David Gelernter, based at Yale University, was one of the few, albeit influential, humanist artificial intelligence experts. Humanists were more attuned to the danger that artificial intelligence posed, not in terms of “evil AI” or computers gone rogue in satanic opposition to their positive AI God, but in terms of what the rise of machines would mean for the fall of humanity. Some truly believed AI was dangerous, threatening to eradicate the values that make us human. They considered the ingredients of life, such as art, family, and culture, to be individual urges and intentions that made life worth living. They didn’t think computers could replicate that idea of human progress, human reason, or our noble cause.

  In Silicon Valley, the phrase “changing the world” had become a cliché. But in recent years, the ambition had grown into changing the species, though no one could say that line often enough to turn it into a cliché. For what they were doing was turning the idea of changing what it meant to be human into something measurable.

  That wasn’t so in New York. There, those in high places clung to what had made humans human for centuries. Hedge fund managers who had made it turned Fifth Avenue townhouses into ancient palazzos and bought the grand pianos they had pictured in their fantastical visions of robber barons’ high-flying lives. The late Salomon Brothers head John Gutfreund, for instance, once had a twenty-two-foot-tall Norwegian Christmas tree hoisted into his apartment through a balcony window. They went on long weekends to grouse hunts in England, even if they had grown up spending Saturdays at the Short Hills Mall in suburban New Jersey. It was retro wealth, a harking back to what it was to be human in the last century.

  Across the country, even this century was passé, let alone the last one. Who cared about the birds? How backward to fly across an ocean to go shoot them in funny-looking, uncomfortable outfits. In the Bay Area, the focus was on human evolution, and the next step seemed to come through an increasingly realistic, literal marriage of man and machine. The greater ambition was a different kind of goal: not to buy the most expensive houses, cars, and boats, or to be invited to the most exclusive parties, but to change the species into its highest iteration.

  Of course, they still wanted the party invites too. But by 2016, they were finally media darlings, with Sean Parker and Elon Musk and Larry Page not only invited but entreated to come to high-end events such as the Vanity Fair Oscar party and the Costume Institute Ball hosted by Vogue editor Anna Wintour at New York’s Metropolitan Museum of Art. (Oh, and could you make a donation too?) So that was all set. They were now free to focus on higher matters.

  Onward! Time to disrupt, transgress, and reengineer—themselves and humanity as a whole. While some of the Silicon Valley gods considered species improvement an underlying goal, they couldn’t say it out loud. Instead, their minions would have to work on it through secret projects or through safe organizations where it was okay to talk about such things. At Singularity University, the singularity wasn’t just near; it was imminent. Ray Kurzweil was their king. He had a cult following.

  Kurzweil believed that people would eventually stay young, ideally around age thirty, for hundreds of years. He conceded that living that way could get boring, so he explained away that problem by saying that along with inevitable radical life extension would come radical life expansion, with new experiences, knowledge, music, and literature. He thought artificial intelligence would enable us to take our lives into our own hands, with endless choices that would go on forever. If you were killed in a car accident, you would already have backed up your mind and body, so you could be re-created. He wasn’t joking! This way, we would have more options at all times, he thought. Instead of death giving life meaning, he believed that culture, creativity, music, and science gave it meaning. “Death interrupts science,” he said.

  To prepare for this upcoming event, he treated his body as a complex machine: downing 250 pills a day, sitting in a lab getting hormone drips one day a week, and drinking gallons of green tea daily. In 2013 the sixty-five-year-old inventor moved closer to the hotbed of technological activity. He left his Newton mansion outside Boston, lined with Marc Chagall paintings and holograms of Cheshire cats, and went to Silicon Valley to work at Google. What Kurzweil did there wasn’t secret, but how he planned on building humanity a new neocortex was. He hoped to upload human brains and expand them through technology, eventually allowing humans and computers to merge as one—in the year 2045. That was when human intelligence would be enhanced a billionfold thanks to high-tech brain extensions.

 
