1960s night scene in downtown Las Vegas, Nevada
While neon is technically considered one of the “rare gases,” it is actually ubiquitous in the earth’s atmosphere, just in very small quantities. Each time you take a breath, you are inhaling a tiny amount of neon, mixed in with all the nitrogen and oxygen that saturate breathable air. In the first years of the twentieth century, a French scientist named Georges Claude created a system for liquefying air, which enabled the production of large quantities of liquid nitrogen and oxygen. Processing these elements at industrial scale created an intriguing waste product: neon. Even though neon appears as only one part per 66,000 in ordinary air, Claude’s apparatus could produce one hundred liters of neon in a day’s work.
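To get a sense of the scale this required (a back-of-the-envelope estimate, not a figure from Claude’s records): at one part per 66,000, extracting a hundred liters of neon meant processing several million liters of air each day:

\[
100\ \text{L of neon} \times 66{,}000 \approx 6.6 \times 10^{6}\ \text{L of air} \approx 6{,}600\ \text{m}^3\ \text{per day}
\]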
With so much neon lying around, Claude decided to see if it was good for anything, and so in proper mad-scientist fashion, he isolated the gas and passed an electrical current through it. Exposed to an electric charge, the gas glowed a vivid shade of red. (The technical term for this process is ionization.) Further experiments revealed that other gases, such as argon and mercury vapor, produced different colors when electrified, and that the resulting tubes were more than five times brighter than conventional incandescent light. Claude quickly patented his neon lights and set up a display showcasing the invention in front of the Grand Palais in Paris. When demand surged for his product, he established a franchise business for his innovation, not unlike the model employed by McDonald’s and Kentucky Fried Chicken years later, and neon lights began to spread across the urban landscapes of Europe and the United States.
In the early 1920s, the electric glow of neon found its way to Tom Young, a British immigrant living in Utah who had started a small business hand-lettering signs. Young recognized that neon could be used for more than just colored light; with the gas enclosed in glass tubes, neon signs could spell out words much more easily than collections of lightbulbs. Licensing Claude’s invention, he set up a new business covering the American Southwest. Young realized that the soon-to-be-completed Hoover Dam would bring a vast new source of electricity to the desert, providing a current that could ionize an entire city of neon lights. He formed a new venture, the Young Electric Sign Company, or YESCO. Before long, he found himself building a sign for a new casino and hotel, The Boulders, that was opening in an obscure Nevada town named Las Vegas.
It was a chance collision—a new technology from France finding its way to a sign letterer in Utah—that would create one of the most iconic of twentieth-century urban experiences. Neon advertisements would become a defining feature of big-city centers around the world—think Times Square or Tokyo’s Shibuya Crossing. But no city embraced neon with the same unchecked enthusiasm that Las Vegas did, and most of those neon extravaganzas were designed, erected, and maintained by YESCO. “Las Vegas is the only city in the world whose skyline is made not of buildings … but signs,” Tom Wolfe wrote in the middle of the 1960s. “One can look at Las Vegas from a mile away on Route 91 and see no buildings, no trees, only signs. But such signs! They tower. They revolve, they oscillate, they soar in shapes before which the existing vocabulary of art history is helpless.”
It was precisely that helplessness that brought Venturi and Brown to Vegas with their retinue of architecture students in the fall of 1968. Brown and Venturi had sensed that there was a new visual language emerging in that glittering desert oasis, one that didn’t fit well with the existing languages of modernist design. To begin with, Vegas had oriented itself around the vantage point of the automobile driver, cruising down Fremont Street or the Strip: shop windows and sidewalk displays had given way to sixty-foot neon cowboys. The geometric seriousness of the Seagram Building or Brasília had given way to a playful anarchy: the Wild West of the gold rush thrust up against Olde English feudal designs, sitting next to cartoon arabesques, fronted by an endless stream of wedding chapels. “Allusion and comment, on the past or present or on our great commonplaces or old clichés, and inclusion of the everyday in the environment, sacred and profane—these are what are lacking in present-day Modern architecture,” Brown and Venturi wrote. “We can learn about them from Las Vegas as have other artists from their own profane and stylistic sources.”
That language of allusion and comment and cliché was written in neon. Brown and Venturi went so far as to map every single illuminated word visible on Fremont Street. “In the seventeenth century,” they wrote, “Rubens created a painting ‘factory’ wherein different workers specialized in drapery, foliage, or nudes. In Las Vegas, there is just such a sign ‘factory,’ the Young Electric Sign Company.” Until then, the symbolic frenzy of Vegas had belonged purely to the world of lowbrow commerce: garish signs pointing the way to gambling dens, or worse. But Brown and Venturi had seen something more interesting in all that detritus. As Georges Claude had experienced more than sixty years before, one person’s waste is another one’s treasure.
Think about these different strands: the atoms of a rare gas, unnoticed until 1898; a scientist and engineer tinkering with the waste product from his “liquid air”; an enterprising sign designer; and a city blooming implausibly in the desert. All these strands somehow converged to make Learning from Las Vegas—a book that architects and urban planners would study and debate for decades—even imaginable as an argument. No other book had as much influence on the postmodern style that would dominate art and architecture over the next two decades.
Learning from Las Vegas gives us a clear case study in how the long-zoom approach reveals elements that are ignored by history’s traditional explanatory frameworks: economic or art history, or the “lone genius” model of innovation. When you ask the question of why postmodernism came about as a movement, on some fundamental level the answer has to include Georges Claude and his hundred liters of neon. Claude’s innovation wasn’t the only cause, by any means, but, in an alternate universe somehow stripped of neon lights, the emergence of postmodern architecture would have in all likelihood followed a different path. The strange interaction between neon gas and electricity, the franchise model of licensing new technology—each served as part of the support structure that made it even possible to conceive of Learning from Las Vegas.
This might seem like yet another game of Six Degrees of Kevin Bacon: follow enough chains of causality and you can link postmodernism back to the building of the Great Wall of China or the extinction of the dinosaurs. But the neon-to-postmodernism connections are direct links: Claude creates neon light; Young brings it to Vegas, where Venturi and Brown decide to take its “revolving and oscillating” glow seriously for the first time. Yes, Venturi and Brown needed electricity, too, but just about everything needed electricity in the 1960s: the moon landing, the Velvet Underground, the “I Have a Dream” speech. By the same token, Venturi and Brown required the other gases in the air, too; the odds are pretty good that they needed oxygen to write Learning from Las Vegas. But it was the rare gas of neon that made their story unique.
—
IDEAS TRICKLE OUT OF SCIENCE, into the flow of commerce, where they drift into the less predictable eddies of art and philosophy. But sometimes they venture upstream: from aesthetic speculation into hard science. When H. G. Wells published his groundbreaking novel The War of the Worlds in 1898, he helped invent the genre of science fiction that would play such a prominent role in the popular imagination during the century that followed. But that book introduced a more specific item to the fledgling sci-fi canon: the “heat ray,” used by the invading Martians to destroy entire towns. “In some way,” Wells wrote of his technologically savvy aliens, “they are able to generate an intense heat in a chamber of practically absolute non-conductivity. This intense heat they project in a parallel beam against any object they choose, by means of a polished parabolic mirror of unknown composition, much as the parabolic mirror of a lighthouse projects a beam of light.”
The heat ray was one of those imagined concoctions that somehow get locked into the popular psyche. From Flash Gordon to Star Trek to Star Wars, weapons using concentrated beams of light became almost de rigueur in any sufficiently advanced future civilization. And yet actual laser beams did not exist until 1960, and they didn’t become part of everyday life for another two decades after that. Not for the last time, the science-fiction authors were a step or two ahead of the scientists.
But the sci-fi crowd got one thing wrong, at least in the short term. There are no death rays, and the closest thing we have to Flash Gordon’s arsenal is laser tag. When lasers did finally enter our lives, they turned out to be lousy for weapons, but brilliant for something the sci-fi authors never imagined: figuring out the cost of a stick of chewing gum.
Like the lightbulb, the laser was not a single invention; instead, as the technology historian Jon Gertner puts it, “it was the result of a storm of inventions during the 1960s.” Its roots lie in research at Bell Labs and Hughes Aircraft and, most entertainingly, in the independent tinkering of physicist Gordon Gould, who memorably notarized his original design for the laser in a Manhattan candy store, and who went on to wage a thirty-year legal battle over the laser patent (a battle he eventually won). A laser is a prodigiously concentrated beam, light’s normal chaos reduced to a single, ordered frequency. “The laser is to ordinary light,” Bell Labs’ John Pierce once remarked, “as a broadcast signal is to static.”
Unlike the lightbulb, however, the early interest in the laser was not motivated by a clear vision of a consumer product. Researchers knew that the concentrated signal of the laser could be used to embed information more efficiently than could existing electrical wiring, but exactly how that bandwidth would be put to use was less evident. “When something as closely related to signaling and communication as this comes along,” Pierce explained at the time, “and something is new and little understood, and you have the people who can do something about it, you’d just better do it, and worry later just about the details of why you went into it.” Eventually, as we have already seen, laser technology would prove crucial to digital communications, thanks to its role in fiber optics. But the laser’s first critical application would appear at the checkout counter, with the emergence of bar-code scanners in the mid-1970s.
The idea of creating some kind of machine-readable code to identify products and prices had been floating around for nearly half a century. Inspired by the dashes and dots of Morse code, an inventor named Norman Joseph Woodland designed a visual code that resembled a bull’s-eye in the 1950s, but it required a five-hundred-watt bulb—almost ten times brighter than your average lightbulb—to read the code, and even then it wasn’t very accurate. Scanning a series of black-and-white symbols turned out to be the kind of job that lasers immediately excelled at, even in their infancy. By the early 1970s, just a few years after the first working lasers debuted, the modern system of bar codes—known as the Universal Product Code—emerged as the dominant standard. On June 26, 1974, a stick of chewing gum in a supermarket in Ohio became the first product in history to have its bar code scanned by a laser. The technology spread slowly: only one percent of stores had bar-code scanners as late as 1978. But today, almost everything you can buy has a bar code on it.
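The book stays at the level of history, but the Universal Product Code itself is simple enough to sketch: eleven digits identify the manufacturer and product, and a twelfth check digit, computed from a fixed weighting rule, lets the scanner detect misreads. Here is a minimal sketch in Python; the weighting rule is the standard UPC-A one, while the function name and example digits are illustrative assumptions:

```python
def upc_check_digit(digits: str) -> int:
    """Compute the 12th (check) digit for an 11-digit UPC-A prefix."""
    if len(digits) != 11 or not digits.isdigit():
        raise ValueError("expected exactly 11 numeric digits")
    # Odd positions (1st, 3rd, ..., 11th) are weighted 3x, even positions 1x;
    # the check digit tops the total up to the next multiple of 10.
    total = sum(int(d) * (3 if i % 2 == 0 else 1) for i, d in enumerate(digits))
    return (10 - total % 10) % 10

# A hypothetical 11-digit prefix; a scanner recomputes this sum on every read,
# which is how it notices when a smudged bar yields a garbled digit.
print(upc_check_digit("03600029145"))  # -> 2, so the full code reads 036000291452
```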
In 2012, an economics professor named Emek Basker published a paper that assessed the impact of bar-code scanning on the economy, documenting the spread of the technology through both mom-and-pop stores and big chains. Basker’s data confirmed the classic trade-offs of early adoption: most stores that integrated bar-code scanners in the early years didn’t see much benefit from them, since employees had to be trained to use the new technology, and many products didn’t have bar codes yet. Over time, however, the productivity gains became substantial, as bar codes became ubiquitous. But the most striking discovery in Basker’s research was this: The productivity gains from bar-code scanners were not evenly distributed. Big stores did much better than small stores.
There have always been inherent advantages to maintaining a large inventory of items in a store: the customer has more options to choose from, and items can be purchased in bulk from wholesalers for less money. But in the days before bar codes and other forms of computerized inventory-management tools, the benefits of housing a vast inventory were largely offset by the costs of keeping track of everything. If you kept a thousand items in stock instead of a hundred, you needed more people and time to figure out which sought-after items needed restocking and which were just sitting on the shelves taking up space. But bar codes and scanners greatly reduced the costs of maintaining a large inventory. The decades after the introduction of the bar-code scanner in the United States witnessed an explosion in the size of retail stores; with automated inventory management, chains were free to balloon into the epic big-box stores that now dominate retail shopping. Without bar-code scanning, the modern shopping landscape of Target and Best Buy and supermarkets the size of airport terminals would have had a much harder time coming into being. If there was a death ray in the history of the laser, it was the metaphoric one directed at the mom-and-pop, indie stores demolished by the big-box revolution.
—
WHILE THE EARLY SCI-FI FANS of The War of the Worlds and Flash Gordon would be disappointed to see the mighty laser scanning packets of chewing gum—its brilliantly concentrated light harnessed for inventory management—their spirits would likely improve contemplating the National Ignition Facility, at Lawrence Livermore National Laboratory in Northern California, where scientists have built the world’s largest and highest-energy laser system. Artificial light began as simple illumination, helping us read and entertain ourselves after dark; before long it had been transformed into advertising and art and information. But at NIF, they are taking light full circle, using lasers to create a new source of energy based on nuclear fusion, re-creating the process that occurs naturally in the dense core of the sun, our original source of natural light.
Deep inside the NIF, near the “target chamber,” where the fusion takes place, a long hallway is decorated with what appears, at first glance, to be a series of identical Rothko paintings, each displaying eight large red squares the size of a dinner plate. There are 192 of them in total, each representing one of the lasers that simultaneously fire on a tiny bead of hydrogen in the ignition chamber. We are used to seeing lasers as a pinpoint of concentrated light, but at NIF, the lasers are more like cannonballs, almost two hundred of them summed together to create a beam of energy that would have made H. G. Wells proud.
The multibillion-dollar complex has all been engineered to execute discrete, fleeting events: firing the lasers at the hydrogen fuel while hundreds of sensors and high-speed cameras observe the activity. Inside the NIF, they refer to these events as “shots.” Each shot requires the meticulous orchestration of more than six hundred thousand controls. Each laser beam travels 1.5 kilometers, guided by a series of lenses and mirrors, and together the beams build in power until they deliver 1.8 million joules of energy at a peak power of 500 trillion watts, all converging on a fuel source the size of a peppercorn. The lasers have to be positioned with breathtaking accuracy, the equivalent of standing on the pitcher’s mound at AT&T Park in San Francisco and throwing a strike at Dodger Stadium in Los Angeles, some 350 miles away. Each pulse of light, just a few nanoseconds long, carries for its brief existence roughly a thousand times the power flowing through America’s entire national grid.
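Those two numbers also imply just how brief each pulse must be. Dividing energy by peak power (my own arithmetic, not an official NIF specification) gives a duration of a few billionths of a second:

\[
t = \frac{E}{P} = \frac{1.8 \times 10^{6}\ \text{J}}{5 \times 10^{14}\ \text{W}} = 3.6 \times 10^{-9}\ \text{s} \approx 4\ \text{nanoseconds}
\]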
When all of NIF’s energy slams into its millimeter-sized targets, unprecedented conditions are generated in the target materials—temperatures of more than a hundred million degrees, densities up to a hundred times the density of lead, and pressures more than a hundred billion times Earth’s atmospheric pressure. These conditions are similar to those inside stars, the cores of giant planets, and nuclear weapons—allowing NIF to create, in essence, a miniature star on Earth, fusing hydrogen atoms together and releasing a staggering amount of energy. For that fleeting moment, as the lasers compress the hydrogen, that fuel pellet is the hottest place in the solar system—hotter, even, than the core of the sun.
The goal of the NIF is not to create a death ray—or the ultimate bar-code scanner. The goal is to create a sustainable source of clean energy. In 2013, NIF announced that the device had for the first time generated net positive energy during several of its shots: by a slender margin, the fusion reactions released more energy than the fuel had absorbed. That margin is still far too small to reproduce efficiently on a mass scale, but NIF scientists believe that with enough experimentation, they will eventually be able to use their lasers to compress the fuel pellet with almost perfect symmetry. At that point, we would have a potentially limitless source of energy to power all the lightbulbs and neon signs and bar-code scanners—not to mention computers and air-conditioners and electric cars—that modern life depends on.
Vaughn Draggoo inspects a huge target chamber at the National Ignition Facility in California, a future test site for light-induced nuclear fusion. Beams from 192 lasers will be aimed at a pellet of fusion fuel to produce a controlled thermonuclear blast (2001).
Those 192 lasers converging on the hydrogen pellet are a telling reminder of how far we have come in a remarkably short amount of time. Just two hundred years ago, the most advanced form of artificial light involved cutting up a whale on the deck of a boat in the middle of the ocean. Today we can use light to create an artificial sun on Earth, if only for a split second. No one knows if the NIF scientists will reach their goal of a clean, sustainable fusion-based energy source. Some might even see it as a fool’s errand, a glorified laser show that will never return more energy than it takes in. But setting off for a three-year voyage into the middle of the Pacific Ocean in search of eighty-foot sea mammals was every bit as crazy, and somehow that quest fueled our appetite for light for a century. Perhaps the visionaries at NIF—or another team of muckers somewhere in the world—will eventually do the same. One way or another, we are still chasing new light.