Moravec would modify and hack the system for a decade so that ultimately it would be able to make it across a room, correctly navigating an obstacle course about half the time. The Cart failed in many ways. In attempting to simultaneously map its surroundings and locate itself using only single-camera data, Moravec had undertaken one of the hardest problems in AI. His goal was to build an accurate three-dimensional model of the world as a key step toward understanding it.
At the time the only feedback came from seeing how far the Cart had moved. It didn’t have true stereoscopic vision, so the Cart lacked depth perception. As a cost-saving measure, he would move the camera back and forth along a bar at right angles to the field of view, making it possible for the software to calculate a stereo view from a single camera. It was an early predecessor of the software approach taken decades later by the Israeli computer vision company Mobileye.
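The geometry behind this trick is ordinary stereo triangulation: sliding the camera a known distance along the bar gives two views separated by a baseline, and the apparent shift (disparity) of a feature between the two exposures reveals its depth. Below is a minimal sketch of that calculation, assuming an idealized pinhole camera; the function name and the focal length and baseline values are illustrative, not Moravec's actual parameters.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Triangulate depth for features matched between two views.

    x_left, x_right: horizontal pixel coordinates of the same feature
    seen from the two ends of the camera's travel along the bar.
    focal_px: focal length in pixels.
    baseline_m: distance (in meters) the camera slid between exposures.
    """
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    # Features with (near-)zero disparity are effectively at infinity;
    # guard against division by zero.
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)
    return focal_px * baseline_m / disparity

# Example: a feature that shifts by 12 pixels when the camera slides
# half a meter, with a 700-pixel focal length, is roughly 29 meters away.
print(depth_from_disparity([512.0], [500.0], focal_px=700.0, baseline_m=0.5))
```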
Watching the Cart drive itself was rather slow and boring, and so with its remote link and video camera connection Moravec enjoyed controlling the Cart remotely from his computer workstation. It all seemed very futuristic to pretend that he was at the controls of a lunar rover, wandering around SAIL, which was housed in a circular building in the hills west of Stanford. Before long the driveway leading up to the lab was sporting a yellow traffic sign that read CAUTION ROBOTIC VEHICLE. The Cart did venture outside, but not very successfully. Indeed it seemed to have a propensity to find trouble. One setback occurred in October of 1973 when the Cart, being driven manually, ran off an exit ramp, tipped over, leaked acid from a battery, and in the process destroyed precious electronic circuitry.35 It took almost a year to rebuild.
Moravec would often try to drive the Cart around the building, but toward the rear of the lab the road dipped, causing the radio signal to weaken and making it hard to see exactly where the Cart was. Once, while the Cart was circling the building, he misjudged its location and made a wrong turn. Rather than returning in a circle, the robot headed down the driveway to busy Arastradero Road, which runs through the Palo Alto foothills. Moravec kept waiting for the signal from the robot to improve, but it stayed hazy. The television image was filled with static. Then to his surprise, he saw a car drive by the robot. That seemed odd. Finally he got up from his computer terminal and went outside to track the robot down physically. He walked to where he assumed the robot would be, but found nothing. He decided that someone was playing a joke on him. Finally, as he continued to hunt for the errant machine it came rolling back up the driveway with a technician sitting on it. The Stanford Cart had managed to make its way far down Arastradero Road and was driving away from the lab by the time it was captured. Since those baby steps, engineers have made remarkable progress in designing self-driving cars. Moravec’s original premise that it is only necessary to wait for computing to fall in cost and grow more powerful has largely proven true.
He has quietly continued to pursue machine vision technology, but there have been setbacks. In October of 2014, the factory vision start-up he had founded declared bankruptcy and underwent a court-ordered restructuring. Despite the disappointments, only the timeline in his agenda has changed. To the question of whether the current wave of artificial intelligence and robotics will replace human labor, he responds with a twinkle that what he is about is replacing humanity—“labor is such a minimalist goal.”
He originally sketched out his vision of the near future in his second book, Robot: Mere Machine to Transcendent Mind. Here, Moravec concludes that there is no need to replace capitalism, because it is worthwhile for evolving machines to compete against each other. “The suggestion,” he said, “is in fact that we engineer a rather modest retirement for ourselves.” The specter of the “end of labor,” which today is viewed with growing alarm by many technologists, is a relatively minor annoyance in Moravec’s worldview; humans, after all, are adept at entertaining each other. Like many of his Singularitarian brethren, he instead wonders what we will do with a superabundance of all of society’s goods and services. Democracy, he suggests, provides a path to sharing the vast accumulation of capital that will increasingly come from superproductive corporations. It would be possible, for example, to increase Social Security payments and lower the retirement age until it eventually reaches birth.
In Moravec’s worldview, augmentation is an interim stage of technology development, only necessary during the brief period when humans can still do things that the machines can’t. Like Licklider he assumes that machines will continue to improve at a faster and faster rate, while humans will evolve only incrementally. Not by 2020—and at one point he believed 2010—but sometime reasonably soon thereafter so-called universal robots will arrive that will be capable of a wide set of basic applications. It was an idea he first proposed in 1991, and only the timeline has been altered. At some point these machines will improve to the point where they will be able to learn from experience and gradually adapt to their surroundings. He still retains his faith in Asimov’s three laws of robotics. The market will ensure that robots behave humanely—robots that cause too many deaths simply won’t sell very well. And at some point, in Moravec’s view, machine consciousness will emerge as well.
Also in Robot: Mere Machine to Transcendent Mind, he argues that strict laws be passed and applied to fully automated corporations. The laws would limit the growth of these corporations—and the robots they control—and prohibit them from taking too much power. If they grow too large, an automatic antitrust mechanism will go into effect, forcing them to divide. In Moravec’s future world, rogue corporations will be held in check by a society of AI-based corporations, working to protect the common good. There is nothing romantic in his worldview: “We can’t get too sentimental about the robots, because unlike human beings the robots don’t have this evolutionary history where their own survival is really the most important thing,” he said. He still holds to the basic premise—the arrival of the AI-based corporation and the universal robot will mark a utopia that will satisfy every human desire.
His worldview isn’t entirely utopian, however. There is also a darker framing to his AI/robot future. Robots will be expanding into the solar system, mining the asteroids and reproducing and building copies of themselves. This is where his ideas begin to resemble Blade Runner—the dystopian Ridley Scott movie in which androids have begun to colonize the solar system. “Something can go wrong, you will have rogue robots out there,” he said. “After a while you will end up with an asteroid belt and beyond that is full of wildlife that won’t have the mind-numbing restrictions that the tame robots on Earth have.” Will we still need a planetary defense system to protect us from our progeny? Probably not, he reasons. This new technological life-form will be more interested in expanding into the universe—hopefully.
From his cozy and solitary command center in suburban Pittsburgh, a room full of computer screens, it is easy to buy into Moravec’s science-fiction vision. So far, however, there is precious little solid evidence of the rapid technological acceleration that would bring about the AI promised land in his lifetime. Despite the reality that we don’t yet have self-driving cars and the fact that he has been forced to revise the timing of his estimates, he displays the curves on his giant computer screens and remains firm in his basic belief that society is still on track to create its successor species.
Will humans join in this grand adventure? Although he proposed the idea of uploading a human mind into a computer in Mind Children, Moravec is not committed to Ray Kurzweil’s goal of “living long enough to live forever.” Kurzweil is undergoing extraordinary and questionable medical procedures to extend his life. Moravec confines his efforts to survive until 2050 to eating well and walking frequently, and at the age of sixty-four, he does perhaps stand a plausible chance of surviving that long.
During the 1970s and 1980s the allure of artificial intelligence would draw a generation of brilliant engineers, but it would also disappoint. When AI failed to deliver on its promise they would frequently turn to the contrasting ideal of intelligence augmentation.
Sheldon Breiner grew up in a middle-class Jewish family in St. Louis and, from an early age, was extraordinarily curious about virtually everything he came in contact with. He chose college at Stanford during the 1950s, in part to put as much distance as possible between himself and the family bakery. He wanted to see the world, and even in high school realized that if he stayed in St. Louis his father would likely compel him to take over the family business.
After graduating he traveled in Europe, spent some time in the army reserve, and then came back to Stanford to become a geophysicist. Early on he had become obsessed with the idea that magnetic forces might play a role in either causing or perhaps predicting earthquakes. In 1962 he had taken a job at Varian Associates, an early Silicon Valley firm making a range of magnetometers. His assignment was to find new applications for these instruments that could detect minute variations in the Earth’s magnetic field. Varian was the perfect match for Breiner’s 360-degree intelligence. For the first time highly sensitive magnetometers were becoming portable, and there was a willing market for clever new applications that would range from finding oil to airport security. Years later Breiner would become something of a high-tech Indiana Jones, using the technology to explore archaeological settings. In Breiner’s expert hands, Varian magnetometers would find avalanche victims, buried treasure, missing nuclear submarines, and even buried cities. Early on he conducted a field experiment from a site behind Stanford, where he measured the electromagnetic pulse (EMP) generated by a 1.4-megaton nuclear detonation 250 miles above the Earth. The classified test, known as Starfish Prime, led to new understanding about the impact of nuclear explosions on Earth-based electronics.
For his 1967 doctoral thesis he set out to explore the question of whether minute changes in the huge magnetic forces deep in the Earth could play a role in predicting earthquakes. He set up an array of magnetometers in individual trailers along a 120-mile stretch of the San Andreas Fault and used phone lines to send the data back to a laboratory in an old shack on the Stanford campus. There he installed a pen plotter that would record signals from the various magnetometers. It was an ungainly device that pushed rather than pulled a roll of chart paper under five colored ink pens. He hired a teenager from a local high school to change the paper and time-stamp the charts, but the device was so flawed that it caused the paper to ball up in huge piles almost every other day. He redesigned the system around a new digital printer from Hewlett-Packard, and the high school student, who had been changing the paper for a dollar a day, became an early automation casualty.
Later, Breiner was hired by Hughes Corp. to work on the design of a deep ocean magnetometer to be towed by the Glomar Explorer. The cover story was to hunt for minerals such as manganese nodules on the seabed at depths of ten thousand to twelve thousand feet. A decade later the story leaked that the actual mission was a Central Intelligence Agency operation to find and raise a sunken Soviet submarine from the bottom of the Pacific Ocean. In 1968, after the assassination of Robert Kennedy, Breiner was asked by the White House science advisor to demonstrate technology to detect hidden weapons. He went to the Executive Office Building and demonstrated a relatively simple scheme employing four magnetometers that would become the basis for the modern metal detectors still widely used in airports and public buildings.36
Ultimately Breiner was able to demonstrate evidence of magnetic variation along the fault associated with earthquakes, but the data was clouded by geomagnetic activity and his hypothesis did not gain wide acceptance. He didn’t let the lack of scientific certainty hold him back. At Varian he had been paid to think up a wide range of commercial applications for magnetometers and in 1969 he and five Varian colleagues founded Geometrics, a company that used airborne magnetometers to prospect for oil deposits.
He would run his oil prospecting company for seven years before selling to Edgerton, Germeshausen, and Grier (EG&G), and then work for seven more years at their subsidiary before leaving in 1983. By then, the artificial intelligence technology that had been pioneered in John McCarthy’s SAIL and in the work that Feigenbaum and Lederberg were doing to capture and bottle human expertise was beginning to leak out into the surrounding environment in Silicon Valley. A Businessweek cover story in July of 1984 enthused, “Artificial Intelligence—It’s Here!” Two months later, on the CBS Evening News, Dan Rather gave glowing coverage to the SRI work in developing expert systems to hunt for mineral deposits. Bathed in the enthusiasm, Breiner would become part of a wave of technology-oriented entrepreneurs who came to believe that the time was ripe to commercialize the field.
The earlier work on Dendral, begun in the 1960s, had led to a cascade of similar systems. Mycin, also produced at Stanford, was based on an “inference engine” that did if/then–style logic and a “knowledge base” of roughly six hundred rules to reason about blood infections. At the University of Pittsburgh during the 1970s a program called Internist-I was another early effort to tackle the challenge of disease diagnosis and therapy. In 1977 at SRI, Peter Hart, who began his career in artificial intelligence working on Shakey the robot, and Richard Duda, another pioneering artificial intelligence researcher, built Prospector to aid in the discovery of mineral deposits. That work would eventually get CBS’s overheated attention. In the midst of all of this, in 1982, Japan announced its Fifth Generation Computer program. Heavily focused on artificial intelligence, it added an air of competition and inevitability to the AI boom that would lead to a market in which newly minted Ph.D.s could command unheard-of $30,000 annual salaries right out of school.
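The architecture these systems shared is simple to caricature: a “knowledge base” of if/then rules and an “inference engine” that fires them against the known facts until nothing new can be concluded. The sketch below shows that forward-chaining loop in miniature; the rules are toy examples invented for illustration, not Mycin’s actual medical knowledge, which also attached certainty factors to each conclusion.

```python
def forward_chain(rules, initial_facts):
    """Minimal forward-chaining inference engine.

    Repeatedly fires any rule whose conditions are all present in the
    current set of facts, adding its conclusion, until no rule can add
    anything new.
    """
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy knowledge base (illustrative only).
rules = [
    (["gram_negative", "rod_shaped", "anaerobic"], "suspect_bacteroides"),
    (["suspect_bacteroides"], "recommend_culture"),
]

print(forward_chain(rules, ["gram_negative", "rod_shaped", "anaerobic"]))
# The result includes 'suspect_bacteroides' and 'recommend_culture'.
```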
The genie was definitely out of the bottle. Developing expert systems was becoming a discipline called “knowledge engineering”—the idea was that you could package the expertise of a scientist, an engineer, or a manager and apply it to the data of an enterprise. The computer would effectively become an oracle. In principle that technology could be used to augment a human, but software enterprises in the 1980s would sell it into corporations based on the promise of cost savings. As a productivity tool its purpose was as often as not to displace workers.
Breiner looked around for industries where it might be easy to package the knowledge of human experts and quickly settled on commercial lending and insurance underwriting. At the time there was no widespread alarm about automation and he didn’t see the problem framed in those terms. The computing world was broken down into increasingly inexpensive personal computers and more costly “workstations,” generally souped-up machines for computer-aided design applications. Two companies, Symbolics and Lisp Machines, Inc., spun directly out of the MIT AI Lab to focus on specialized computers running the Lisp programming language, designed for building AI applications.
Breiner founded his own start-up, Syntelligence. Along with Teknowledge and Intellicorp, it would become one of the three high-profile artificial intelligence companies in Silicon Valley in the 1980s. He went shopping for artificial intelligence talent and hired Hart and Duda from SRI. The company created its own programming language, Syntel, which ran on an advanced workstation used by the company’s software engineers. It also built two programs, Underwriting Advisor and Lending Advisor, which were intended for use on IBM PCs. He positioned the company as an information utility rather than as an artificial intelligence software publisher. “In every organization there is usually one person who is really good, who everybody calls for advice,” he told a New York Times reporter writing about the emergence of commercial expert systems. “He is usually promoted, so that he does not use his expertise anymore. We are trying to protect that expertise if that person quits, dies or retires and to disseminate it to a lot of other people.” The article, about the ability to codify human reasoning, ran on the paper’s front page in 1984.37
When marketing his loan expert and insurance expert software packages, Breiner demonstrated dramatic, continuing cost savings for customers. The idea of automating human expertise was compelling enough that he was able to secure preorders from banks and insurance companies and investments from venture capital firms. AIG, St. Paul, and Fireman’s Fund, as well as Wells Fargo and Wachovia, advanced $6 million for the software. Breiner stuck with the project for almost half a decade, ultimately growing the company to more than a hundred employees and pushing revenues to $10 million annually. The problem was that this wasn’t fast enough for his investors: the five-year projections drawn up in 1983 had called for $50 million in annual revenue. When the commercial market for artificial intelligence software failed to materialize quickly enough, he struggled inside the company, most bitterly with board member Pierre Lamond, a venture capitalist who was a veteran of the semiconductor industry with no software experience. Ultimately Breiner lost his battle and Lamond brought in an outside corporate manager who moved the company headquarters to Texas, where the manager lived.
Syntelligence itself would confront directly what would become known as the “AI Winter.” One by one the artificial intelligence firms of the early 1980s were eclipsed, either because they failed financially or because they returned to their roots as experimental efforts or consulting companies. The market failure became an enduring narrative that came to define artificial intelligence: a repeated cycle of hype and failure, in which overly ambitious scientific claims are inevitably followed by performance and market disappointments. A generation of true believers, steeped in the technocratic and optimistic artificial intelligence literature of the 1960s, clearly played an early part in the collapse. Since then the same boom-and-bust cycle has continued for decades, even as AI has advanced.38 Today the cycle is likely to repeat itself again as a new wave of artificial intelligence technologies is heralded by some as being on the cusp of offering “thinking machines.”