The One Device


by Brian Merchant


  “The very first development was done in 1972 for use in the SPS accelerator and the principle was published in a CERN publication in 1973,” he told me. “Already this screen was a true capacitive transparent multitouch screen.”

  So it came to pass that Stumpe picked me up from an Airbnb in Geneva one autumn morning. He’s a spry seventy-eight; he has short white hair, and his expression defaults to a mischievous smile. His eyes broadcast a curious glint (Frank Canova had it too; let’s call it the unrequited-inventor’s spark). As we drove to CERN, he made amiable small talk and pointed out the landmarks.

  There was a giant brutalist-looking dome, the Globe of Science and Innovation, and a fifteen-ton steel ribbon sculpture called Wandering the Immeasurable, which is also a pretty good way to describe the rest of the day.

  Before we get to Stumpe’s touchscreen, we stop by a site that was instrumental to the age of mobile computing, and modern computing, period—the birthplace of the World Wide Web. There would be no great desire for an “internet communicator” without it, after all.

  Ground zero for the web is, well, a pretty unremarkable office space. Apart from a commemorative plaque, it looks exactly the way you’d expect an office at a research center to look: functional, kind of drab. The future isn’t made in crystal palaces, folks. But it was developed here, in the 1980s, when Tim Berners-Lee built what he’d taken to calling the World Wide Web. While trying to streamline the sharing of data between CERN’s myriad physicists, he devised a system that linked pages of information together with hypertext.

  That story is firmly planted in the annals of technology. Bent Stumpe’s much-lesser-known step in the evolution of modern computing unfolded a stone’s throw away, in a wooden hut within shouting distance of Berners-Lee’s nook. Yes, one of the earliest multitouch-capable devices was developed in the same environment—same institution, same setting—that the World Wide Web was born into, albeit a decade earlier. One of the iPhone’s major leaps was using multitouch to let us interact with the web’s bounty in a smooth, satisfying way. Yet there’s no plaque for a touchscreen—it’s just as invisible here as everywhere else. Stumpe’s screen is a footnote that even technology historians have to squint to see.

  Then again, most touchscreen innovators remain footnotes. It’s a vital, underappreciated field, as ideas from remarkably disparate industries and disciplines had to flow together to bring multitouch to life. Some of the earliest touch-technology pioneers were musicians looking for ways to translate creative ideas into sounds. Others were technicians seeking more efficient ways to navigate data streams. An early tech “visionary” felt touch was the key to digital education. A later one felt it’d be healthier for people’s hands than keyboards. Over the course of half a century, impassioned efforts to improve creativity, efficiency, education, and ergonomics combined to push touch and, eventually, multitouch into the iPhone, and into the mainstream.

  In the wake of Steve Jobs’s 2007 keynote, in which he claimed that Apple had invented multitouch, Bill Buxton’s in-box started filling up. “Can that be right?” “Didn’t you do something like that years ago?”

  If there’s a generally recognized godfather of multitouch, it’s probably Buxton, whose research helped put him at the forefront of interaction design. Buxton worked at the famed Xerox PARC in Silicon Valley and experimented with music technology with Bob Moog, and in 1984, his team developed a tablet-style device that allowed for continuous, multitouch sensing. “A Multi-Touch Three Dimensional Touch-Sensitive Tablet,” a paper he co-authored at the University of Toronto in 1985, contains one of the first uses of the term.

  Instead of answering each query that showed up in his email individually, Buxton compiled the answers to all of them into a single document and put it online. “Multitouch technologies have a long history,” Buxton explains. “To put it in perspective, my group at the University of Toronto was working on multitouch in 1984, the same year that the first Macintosh computer was released, and we were not the first.”

  Who was, then? “Bob Boie, at Bell Labs, probably came up with the first working multitouch system that I ever saw,” he tells me, “and almost nobody knows it. He never patented it.” As with so many inventions, its parent company couldn’t quite decide what to do with it.

  Before we get to multitouch prototypes, though, Buxton says, if we really want to understand the root of touch technology, we need to look at electronic music.

  “Musicians have a longer history of expressing powerful creative ideas through a technological intermediary than perhaps any other profession that ever has existed,” Buxton says. “Some people would argue weapons, but they are perhaps less creative.” Remember Elisha Gray, one of Graham Bell’s prime telephone competitors? He’s seen as a father of the synth. That was at the dawn of the twentieth century. “The history of the synthesizer goes way back,” Buxton says, “and it goes way back in all different directions and it’s really hard to say who invented what.” There were different techniques used, he says, varying volume, pressure, or capacitance. “This is equally true in touchscreens,” he adds.

  “It is certainly true that a touch from the sense of a human perspective—like what humans are doing with their fingers—was always part of a musical instrument. Like how you hit a note, how you do the vibrato with a violin string and so on,” Buxton says. “People started to work on circuits that were capable of capturing that kind of nuance. It wasn’t just, ‘Did I touch it or not?’ but ‘How hard did I touch it?’ and ‘If I move my fingers and so on, could it start to get louder?’”

  One of the first to experiment with electronic, gesture-based music was Léon Theremin. The Russian émigré’s instrument—the theremin, naturally—was patented in 1928 and consisted of two antennas; one controlled pitch, the other loudness. It’s a difficult instrument to play, and you probably know it best as the generator of retro-spooky sound effects in old sci-fi films and psychedelic rock tunes. But in its day, it was taken quite seriously, at least when it was in the hands of its star player, the virtuosa Clara Rockmore, who recorded duets with world-class musicians like Sergey Rachmaninoff.

  The theremin inspired Robert Moog, who would go on to create pop music’s most famous synthesizer. In addition to establishing a benchmark for how machines could interpret nuance when touched by human hands, he laid out a form for touchpads. “At the same time, Bob also started making touch-sensitive touchpads to drive synthesizers,” Buxton says. Of course, he wasn’t necessarily the first—one of his peers, the Canadian academic Hugh Le Caine, made capacitive-touch sensors. (Recall, that’s the more complex kind of touchscreen that works by sensing when a human finger creates a change in capacitance.) Then there was Don Buchla, the Berkeley techno-hippie who wired Ken Kesey’s bus for the Merry Prankster expeditions and who was also a synth innovator, but he’d make an instrument only for those he deemed worthy. They all pioneered capacitive-touch technology, as did Buxton, in their aural experiments.

  The first device that we would recognize as a touchscreen today is believed to have been invented by Eric Arthur Johnson, an engineer at England’s Royal Radar Establishment, in 1965. And it was created to improve air traffic control.

  In Johnson’s day, whenever a pilot called in an update to his or her flight plan, an air traffic controller had to type a five- to seven-character call sign into a teleprinter in order to enter it on an electronic data display. That extra step was time-consuming and allowed for user error.

  A touch-based air traffic control system, he reckoned, would allow controllers to make changes to aircraft flight plans more efficiently.

  Johnson’s initial touchscreen proposal was to run copper wires across the surface of a cathode-ray tube, basically creating a touchable TV. The system could register only one touch at a time, but the groundwork for modern touchscreens was there—and the system was capacitive right from the start.
  The touchscreen was linked to a database that contained all of the call signs of all the aircraft in a particular sector. The screen would display the call signs, “one against each touch wire.” When an aircraft called to identify itself, the controller would simply touch the wire against its call sign. The system would then offer the option to input only changes to the flight plan that were allowable. It was a smart way to reduce response times in a field where every detail counted—and where a couple of incorrect characters could result in a crash.
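  The selection flow Johnson described can be sketched in a few lines. This is a hypothetical illustration, not his circuitry: the class name, call signs, and the fixed menu of "allowable" changes are all invented for the example; only the ideas—one call sign per touch wire, and constraining input to permitted changes—come from the text.

```python
# Hypothetical sketch of Johnson's touch-wire scheme: each copper wire on the
# CRT face maps to one displayed call sign; touching a wire selects that
# aircraft, and the system then offers only allowable flight-plan changes.
ALLOWED_CHANGES = {"altitude", "route", "arrival_time"}  # illustrative menu

class TouchWireDisplay:
    def __init__(self, call_signs):
        # One call sign displayed "against each touch wire".
        self.wires = {i: cs for i, cs in enumerate(call_signs)}

    def touch(self, wire_index):
        """Return the call sign selected by touching a wire, or None."""
        return self.wires.get(wire_index)

    def options_for(self, call_sign):
        # The real system constrained input to allowable changes only; here
        # every aircraft gets the same fixed menu for simplicity.
        return sorted(ALLOWED_CHANGES)

display = TouchWireDisplay(["BA112", "AF021", "KL604"])
selected = display.touch(1)
print(selected)                       # AF021
print(display.options_for(selected))  # ['altitude', 'arrival_time', 'route']
```

  The point of the constraint in `options_for` is the one the chapter makes: in a field where a couple of wrong characters could cause a crash, the interface simply refuses to accept them.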

  “Of course other possible applications exist,” Johnson wrote. For instance, if someone wanted to open an app on a home screen. Or had a particle accelerator to control.

  For a man who made such an important contribution to technology, little is on the record about E. A. Johnson. So it’s a matter of speculation as to what made him take the leap into touchscreens. We do know what Johnson cited as prior art in his patent, at least: two Otis Elevator patents, one for capacitance-based proximity sensing (the technology that keeps the doors from closing when passengers are in the way) and one for touch-responsive elevator controls. He also named patents from General Electric, IBM, the U.S. military, and American Machine and Foundry. All six were filed in the early to mid-1960s; the idea for touch control was “in the air” even if it wasn’t being used to control computer systems.

  Finally, he cites a 1918 patent for a “type-writing telegraph system.” Invented by Frederick Ghio, a young Italian immigrant who lived in Connecticut, it’s basically a typewriter that’s been flattened into a tablet-size grid so each key can be wired into a touch system. It’s like the analog version of your smartphone’s keyboard. It would have allowed for the automatic transmission of messages based on letters, numbers, and inputs—the touch-typing telegraph was basically a pre-proto–Instant Messenger. Which means touchscreens have been tightly intertwined with telecommunications from the beginning—and they probably wouldn’t have been conceived without elevators either.

  E. A. Johnson’s touchscreen was indeed adopted by Britain’s air traffic controllers, and his system remained in use until the 1990s. But his capacitive-touch system was soon overtaken by resistive-touch systems, invented by a team under the American atomic scientist G. Samuel Hurst as a way to keep track of his research. Pressure-based resistive touch was cheaper, but it was inexact, inaccurate, and often frustrating—it would give touch tech a bad name for a couple of decades.

  Back at CERN, I’m led through a crowded open hall—there’s some kind of conference in progress, and there are scientists everywhere—into a stark meeting room. Stumpe takes out a massive folder, then another, and then an actual touchscreen prototype from the 1970s.

  The mood suddenly grows a little tense as I begin to realize that while Stumpe is here to make the case that his technology wound up in the iPhone, Mazur is here to make sure I don’t take that to be CERN’s official position. They spar—politely—over details as Stumpe begins to tell me the story of how he arrived at multitouch.

  Stumpe was born in Copenhagen in 1938. After high school, he joined the Danish air force, where he studied radio and radar engineering. After the service, he worked in a TV factory’s development lab, tinkering with new display technologies and prototypes for future products. In 1961, he landed a job at CERN. When it came time for CERN to upgrade its first particle accelerator, the PS (Proton Synchrotron), to the Super PS, it needed a way to control the massive new machine. The PS had been small enough that each piece of equipment that was used to set the controls could be manipulated individually. But the PS measured a third of a mile in circumference—the SPS was slated to run 4.3 miles.

  “It was economically impossible to use the old methods of direct connections from the equipment to the control room by hardwire,” Stumpe says. His colleague Frank Beck had been tasked with creating a control system for the new accelerator. Beck was aware of the nascent field of touchscreen technology and thought it might work for the SPS, so he went to Stumpe and asked him if he could think of anything.

  “I remembered an experiment I did in 1960 when I worked in the TV lab,” Stumpe says. “When observing the time it took for the ladies to make the tiny coils needed for the TV, which was later put on the printed circuit board for the TV set, I had the idea that there might be a possibility to print these coils directly on the printed circuit board, with considerable cost savings as a result.” He figured the concept could work again. “I thought if you could print a coil, you could also print a capacitor with very tiny lines, now on a transparent substrate”—like glass—“and then incorporate the capacitor to be a part of an electronic circuit, allowing it to detect a change in capacity when the glass screen was touched by a finger.… With some truth you can say that the iPhone touch technology goes back to 1960.”

  In March 1972, in a handwritten note, he outlined his proposal for a capacitive-touch screen with a fixed number of programmable buttons. Together, Beck and Stumpe drafted a proposal to give to the larger group at CERN. At the end of 1972, they announced the design of the new system, centered on the touchscreen and minicomputers. “By presenting successive choices that depend on previous decisions, the touch screen would make it possible for a single operator to access a large look-up table of controls using only a few buttons,” Stumpe wrote. The screens would be built on cathode-ray tubes, just like TVs.

  CERN accepted the proposal. The SPS hadn’t been built yet, but work had to start, so its administrators set him up with what was known as a “Norwegian barrack”—a makeshift workshop erected on the open grass. The whole thing was about twenty square meters. Concept in hand, Stumpe tapped CERN’s considerable resources to build a prototype. Another colleague had mastered a new technique known as ion sputtering, which allowed him to deposit a layer of copper on a clear and flexible Mylar sheet. “We worked together to create the first basic materials,” Stumpe says. “That experiment resulted in the first transparent touch capacitor being embedded on a transparent surface.”

  His sixteen-button touchscreen controls became operational in 1976, when the SPS went online. And he didn’t stop working on touch tech there—eventually, he devised an updated version of his screen that would register touches much more precisely along wires arranged in an x- and y-axis, making it capable of something closer to the modern multitouch we know today. The SPS control, he says, was capable of multitouch—it could register up to sixteen simultaneous impressions—but programmers never made use of the potential. There simply wasn’t a need to. Which is why his next-generation touchscreen didn’t get built either.
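  The principle behind an x/y wire grid registering several fingers at once can be sketched simply. This is a loose illustration under assumed numbers, not Stumpe’s actual circuit: the baseline and threshold values, the grid size, and the function name are all invented; the idea it demonstrates is just that scanning every row/column crossing for a capacitance shift yields all simultaneous touch points in one pass.

```python
# Hedged sketch of multitouch sensing on an x/y wire grid: each crossing of a
# row and column wire acts as a tiny capacitor, and a nearby finger shifts its
# capacitance. Scanning all crossings and thresholding the shift reports every
# simultaneous touch. Values are arbitrary illustration units.
BASELINE = 10.0   # nominal capacitance per crossing
THRESHOLD = 2.0   # minimum shift treated as a touch

def scan_touches(readings):
    """readings: 2D list of measured capacitances, readings[row][col].
    Returns every (row, col) whose reading deviates from the baseline."""
    return [(r, c)
            for r, row in enumerate(readings)
            for c, value in enumerate(row)
            if abs(value - BASELINE) >= THRESHOLD]

# Two fingers down at once: both crossings show up in a single scan.
readings = [[10.0] * 4 for _ in range(4)]
readings[0][1] = 13.5
readings[2][3] = 13.1
print(scan_touches(readings))  # [(0, 1), (2, 3)]
```

  A single-touch design can stop at the first hit; the multitouch potential Stumpe describes comes from finishing the scan and reporting every deviating crossing.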

  “The present iPhones are using a touch technology which was proposed in this report here in 1977,” Stumpe says, pointing to a stapled document.

  He built working prototypes but couldn’t gin up institutional support to fund them. “CERN told me kindly that the first screens worked fine, and why should we pay for research for the other ones? I didn’t pursue the thing.” However, he says, decades after, “when businesses needed to put touchscreens on mobile phones, of course people dipped into the old technology and thought, Is this a possibility? Industry built on the previous experience and built today what is iPhone technology.”

  So touch tech had been developed to manipulate music, air traffic, and particle accelerators. But the first “touch”-based computers to see wide-scale use didn’t deploy proper touchscreens at all—yet they’d be crucial in promoting the concept of hands-on computing. And William Norris, the CEO of the supercomputer firm Control Data Corporation (CDC), embraced them because he believed touching screens was the key to a digital education.

  Bill Buxton calls Norris “this amazing visionary you would never expect from the seventies when you think about how computers were at the time”—i.e., terminals used for research and business. “At CDC, he saw the potential of touchscreens.” Norris had experienced something of an awakening after the 1967 Detroit riots, and he vowed to use his company—and its technology—as an engine for social equality. That meant building manufacturing plants in economically depressed areas, offering day care for workers’ children, providing counseling, and offering jobs to the chronically unemployed. It also meant finding ways to give more people access to computers, and finding ways to use technology to bolster education. PLATO fit the bill.

  Programmed Logic for Automatic Teaching Operations was an education and training system first developed in 1960. The terminal monitors had the distinctive orange glow of the first plasma-display panels. By 1972, the PLATO IV had a “touch” screen and an elaborate, programmable interface designed to provide digital education courses. PLATO IV’s screen itself didn’t register touch; rather, it had light sensors mounted along each of its four sides, so the beams covered the entire surface. Thus, when you touched a certain point, you interrupted the light beams on the grid, which would tell the computer where your finger was. Norris thought the system was the future. The easy, touch-based interaction and simple, interactive navigation meant that a lesson could be beamed in to anyone with access to a terminal.
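  The beam-interruption scheme reduces to finding which beam is broken on each axis. A minimal sketch, with an assumed 16×16 grid geometry and an invented function name rather than PLATO IV’s actual electronics:

```python
# How a light-beam grid localizes a finger: beams cross the screen from two
# sides, and the pair of interrupted beams gives the (x, y) cell of the touch.
def locate_touch(x_beams, y_beams):
    """x_beams / y_beams: lists of booleans, True = that beam is interrupted.
    Returns the (x, y) grid cell of the touch, or None if no beam is broken."""
    try:
        x = x_beams.index(True)
        y = y_beams.index(True)
    except ValueError:
        return None  # nothing is blocking the grid
    return (x, y)

# A finger at column 5, row 2 breaks exactly one beam on each axis.
x_beams = [i == 5 for i in range(16)]
y_beams = [i == 2 for i in range(16)]
print(locate_touch(x_beams, y_beams))  # (5, 2)
```

  Note what this buys and what it costs: the screen itself needs no sensing layer at all, but the resolution is only as fine as the beam grid—hence the scare quotes around PLATO’s “touch.”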

  Norris “commercialized PLATO, but he deployed these things in classrooms from K through twelve throughout the state. Not every school, but he was putting computers in the classroom—more than fifteen years before the Macintosh came out—with touchscreens,” Buxton says, comparing Norris’s vision to Jobs’s. “More than that, this guy wrote these manifestos about how computers are going to revolutionize education.… It’s absolutely inconceivable! He actually puts money where his mouth is in a way that almost no major corporation has in the past.”

  Norris reportedly sank nine hundred million dollars into PLATO, and it took nearly two decades before the program showed any signs of turning even a small profit. But the PLATO system had ushered in a vibrant early online community that in many ways resembled the WWW that was yet to come. It boasted message boards, multimedia, and a digital newspaper, all of which could be navigated by “touch” on a plasma display—and it promulgated the concept of a touchable computer. Norris continued to market, push, and praise PLATO until 1984, when CDC’s financial fortunes began to flag and its board urged him to step down. But with Norris behind it, PLATO spread to universities and classrooms across the country (especially in the Midwest) and even abroad. Though PLATO didn’t have a true touchscreen, the idea that hands-on computing should be easy and intuitive was spread along with it.

 
