The first era, he says, is the “Tabulating Era,” which lasted from the early 1900s to the 1940s and was built on single-purpose, mechanical systems that counted things and used punch cards to calculate, sort, collate, and interpret data. That was followed by the “Programming Era”—the 1950s to the present. “As populations grew, and economic and societal systems got more complex, [the] manual, mechanical-based systems just couldn’t keep up. We turned to software programmed by humans that applied if/then logic and iteration to calculate answers to prescribed scenarios. This technology rode the wave of Moore’s law and gave us personal computers, the Internet, and smartphones. [The] problem is, as powerful and transformational as these breakthroughs have been—and for a very long time—programmable technology is inherently limited by our ability to design it.”
And so, from 2007 onward, we have seen the birth of the “Cognitive Era” of computing. It could happen only after Moore’s law entered the second half of the chessboard and gave us sufficient power to digitize almost everything imaginable—words, photos, data, spreadsheets, voice, video, and music—as well as the capacity to load it all into computers and the supernova, the networking ability to move it all around at high speed, and the software capacity to write multiple algorithms that could teach a computer to make sense of unstructured data, just as a human brain might, and thereby enhance every aspect of human decision making.
When IBM designed Watson to play Jeopardy!, Kelly explained to me, it knew from studying the show and the human contestants exactly how long it could take to digest the question and buzz in to answer it. Watson would have about a second to understand the question, half a second to decide the answer, and a second to buzz in to answer first. It meant that “every ten milliseconds was gold,” said Kelly. But what made Watson so fast, and eventually so accurate, was not actual “learning” per se, but rather its ability to self-improve by using all its big data capacities and networking to make faster and faster statistical correlations over more and more raw material.
“Watson’s achievement is a sign of how much progress has been made in machine learning, the process by which computer algorithms self-improve at tasks involving analysis and prediction,” noted John Lanchester in the London Review of Books on March 5, 2015. “The techniques involved are primarily statistical: through trial and error the machine learns which answer has the highest probability of being correct. That sounds rough and ready, but because, as per Moore’s law, computers have become so astonishingly powerful, the loops of trial and error can take place at great speed, and the machine can quickly improve out of all recognition.”
That is the difference between a cognitive computer and a programmable computer. Programmable computers, Kelly explained in a 2015 essay for IBM Research entitled “Computing, Cognition and the Future of Knowing,” “are based on rules that shepherd data through a series of predetermined processes to arrive at outcomes. While they are powerful and complex, they are deterministic, thriving on structured data, but incapable of processing qualitative or unpredictable input. This rigidity limits their usefulness in addressing many aspects of a complex, emergent world in which ambiguity and uncertainty abound.”
Cognitive systems, on the other hand, he explained, are “probabilistic, meaning they are designed to adapt and make sense of the complexity and unpredictability of unstructured information. They can ‘read’ text, ‘see’ images and ‘hear’ natural speech. And they interpret that information, organize it and offer explanations of what it means, along with the rationale for their conclusions. They do not offer definitive answers. In fact, they do not ‘know’ the answer. Rather they are designed to weigh information and ideas from multiple sources, to reason, and then offer hypotheses for consideration.” These systems then assign a confidence level to each potential insight or answer. They even learn from their own mistakes.
So in building the Watson that won on Jeopardy!, Kelly noted, they first created a whole set of algorithms that enabled the computer to parse the question—much the way your reading teacher taught you to diagram a sentence. “The algorithm breaks down the language and tries to figure out what is being asked: Is it a name, a date, an animal—what am I looking for?” said Kelly. A second set of algorithms was designed to do a sweep of all the literature that had been uploaded into Watson—everything from Wikipedia to the Bible—to try to find everything that might be relevant to a given subject area, person, or date. “The computer would look for many pieces of evidence and form a preliminary list of what might be the possible answers, and look for supporting evidence for each possible answer—[such as if] they are asking for a person who works at IBM and I know that Tom works there.”
Then, with another algorithm, Watson would rank what it thought were the right answers, assigning degrees of confidence to all of them. If it had a high enough degree of confidence, it would buzz in and answer.
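To make that flow concrete, here is a minimal sketch in Python of the same idea: parse the clue into keywords, generate candidate answers from a small corpus, score the supporting evidence, and buzz in only if the top answer's confidence clears a threshold. The toy corpus, the keyword-overlap scoring, and the 50 percent buzz threshold are illustrative assumptions for this example, not IBM's actual DeepQA algorithms.

# Illustrative sketch of a Watson-style answer pipeline: parse the clue,
# generate candidate answers, score supporting evidence, and buzz in only
# when confidence clears a threshold. The corpus, scoring rule, and
# threshold are toy assumptions, not IBM's DeepQA.

TOY_CORPUS = {
    "Chicago": "Chicago O'Hare airport named for World War II hero Butch O'Hare; "
               "Midway airport named for the World War II Battle of Midway.",
    "Toronto": "Toronto is a city in Ontario, Canada; its main airport is Pearson.",
}

def parse_clue(clue: str) -> set[str]:
    """Crude 'question analysis': reduce the clue to a bag of keywords."""
    stop = {"its", "was", "for", "a", "the", "of", "named"}
    return {w.strip(".,;").lower() for w in clue.split() if w.lower() not in stop}

def generate_candidates(keywords: set[str]) -> list[str]:
    """Candidate generation: here, every corpus entry is a potential answer."""
    return list(TOY_CORPUS)

def evidence_score(candidate: str, keywords: set[str]) -> float:
    """Evidence scoring: fraction of clue keywords found in the candidate's text."""
    text = TOY_CORPUS[candidate].lower()
    hits = sum(1 for k in keywords if k in text)
    return hits / max(len(keywords), 1)

def answer(clue: str, buzz_threshold: float = 0.5) -> str:
    keywords = parse_clue(clue)
    ranked = sorted(
        ((evidence_score(c, keywords), c) for c in generate_candidates(keywords)),
        reverse=True,
    )
    confidence, best = ranked[0]
    if confidence >= buzz_threshold:
        return f"What is {best}? (confidence {confidence:.0%})"
    return f"Pass - top guess {best} at only {confidence:.0%} confidence"

clue = ("Its largest airport was named for a World War II hero; "
        "its second largest, for a World War II battle.")
print(answer(clue))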
The best way to understand the difference between programmable and cognitive computers is with two examples offered to me by Dario Gil, IBM’s vice president of science and solutions. When IBM first started to develop translation software, he explained, it created a team to develop an algorithm that could translate from English to Spanish. “We thought the best way to do that was to hire all kinds of linguists who would teach us grammar, and once we understood the nature of language we would figure out how to write a translation program,” said Gil. It didn’t work. After going through a lot of linguists, IBM got rid of them all and tried a different approach.
“This time, we said, ‘What if we took a statistical approach and just took two texts translated by humans and compared them to see which one is most accurate?’” And since computing and storage power had exploded in 2007, the capacity to do so was suddenly there. It led IBM to a fundamental insight: “Every time we got rid of a linguist, our accuracy went up,” said Gil. “So now all we use are statistical algorithms” that can compare massive amounts of texts for repeatable patterns. “We have no problem now translating Urdu into Chinese even if no one on our team knows Urdu or Chinese. Now you train through examples.” If you give the computer enough examples of what is right and what is wrong—and in the age of the supernova you can do that to an almost limitless degree—the computer will figure out how to properly weight answers, and learn by doing. And it never has to really learn grammar or Urdu or Chinese—only statistics!
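As a rough illustration of training through examples, here is a small sketch of statistical word translation in the spirit of the classic IBM alignment models: given a handful of sentence pairs translated by humans, a few rounds of expectation-maximization learn which Spanish word most probably translates each English word, with no grammar rules anywhere. The miniature corpus and the twenty-iteration setting are assumptions made for the example; production systems train on millions of sentence pairs.

# Toy sketch of IBM-Model-1-style statistical word translation: learn
# word-translation probabilities from human-translated sentence pairs
# with a few rounds of expectation-maximization. The miniature corpus
# is an illustrative assumption; real systems use vastly more data.

from collections import defaultdict

PARALLEL = [
    ("the house", "la casa"),
    ("a house", "una casa"),
    ("the dog", "el perro"),
    ("a dog", "un perro"),
    ("the green house", "la casa verde"),
    ("the green dog", "el perro verde"),
]

en_vocab = {w for en, _ in PARALLEL for w in en.split()}
es_vocab = {w for _, es in PARALLEL for w in es.split()}

# Start with uniform translation probabilities t(spanish | english).
t = {e: {s: 1.0 / len(es_vocab) for s in es_vocab} for e in en_vocab}

for _ in range(20):  # EM iterations
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    for en, es in PARALLEL:
        for s in es.split():
            norm = sum(t[e][s] for e in en.split())
            for e in en.split():
                frac = t[e][s] / norm  # expected alignment of s to e
                count[e][s] += frac
                total[e] += frac
    # Re-estimate t(s | e) from the expected counts.
    t = {e: {s: count[e][s] / total[e] for s in es_vocab} for e in en_vocab}

for e in ["green", "dog", "house"]:
    best = max(t[e], key=t[e].get)
    print(f"{e} -> {best}  (p = {t[e][best]:.2f})")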
That is how Watson won on Jeopardy! “The programmable systems that had revolutionized life over the previous six decades could never have made sense of the messy, unstructured data required to play Jeopardy!,” wrote Kelly. “Watson’s ability to answer subtle, complex, pun-laden questions with precision made clear that a new era of computing was at hand.”
That is best illustrated by the one question Watson answered incorrectly at the end of the first day’s competition, when the contestants were all given the same clue for “Final Jeopardy!” The category was “U.S. Cities,” and the clue was: “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.” The answer was Chicago (O’Hare and Midway). But Watson guessed, “What is Toronto?????” With all those question marks included.
“There are many reasons why Watson was confused by this question, including its grammatical structure, the presence of a city in Illinois named Toronto, and the Toronto Blue Jays playing baseball in the American League,” said Kelly. “But the mistake illuminated an important truth about how Watson works. The system does not answer our questions because it ‘knows.’ Rather, it is designed to evaluate and weigh information from multiple sources, and then offer suggestions for consideration. And it assigns a confidence level to each response. In the case of ‘Final Jeopardy!,’ Watson’s confidence level was quite low: 14 percent, Watson’s way of saying: ‘Don’t trust this answer.’ In a sense, it knew what it didn’t know.”
Because it is so new, a lot of scary stuff has been written about the Cognitive Era of computing—that cognitive computers are going to take over the world from humans. That is not IBM’s view. “The popular perception of artificial intelligence and cognitive computing is far from reality—this whole idea of sentient computer systems that become conscious and aware and take their own direction by what they learn,” said Arvind Krishna, senior vice president and director of IBM Research. What we can do is teach computers about narrow domains—such as oncology, geology, geography—by writing algorithms that enable them to “learn” about each of these disciplines through multiple and overlapping systems of pattern recognition. “But if a computer is built to understand oncology, that is the only thing it can do—and it can keep learning as new literature comes out in the narrow domain that it was designed for. But the idea that it would then suddenly start designing cars is zero.”
By June 2016, Watson was already being used by fifteen of the world’s leading cancer institutes and had ingested more than twelve million pages of medical articles, three hundred medical journals, two hundred textbooks, and tens of millions of patient records, and those numbers are increasing every day. The idea is not to prove that Watson could ever replace doctors, said Kelly, but to prove what an incredible aid it can be to doctors, who have long been challenged to keep current with medical literature and new findings. The supernova simply heightens the challenge: estimates suggest that a primary care physician would need more than 630 hours a month to keep up with the flood of new literature being unleashed related to his or her practice.
The bridge to the future is a Watson that can make massive amounts of diagnostic complexity free. In the past, when it was determined that you had cancer, oncologists decided among three different forms of known treatment based on the dozen latest medical articles they might have read. Today, the IBM team notes, you can get genetic sequencing of your tumor with a lab test in an hour, and the doctor, using Watson, can pinpoint the drugs to which that particular tumor is known to respond best—also in an hour. Today, IBM will feed a medical Watson 3,000 images, 200 of which are melanomas and 2,800 of which are not, and Watson then uses its algorithms to start to learn the colors, topographies, and edges that melanomas tend to have. And after looking at tens of thousands of images and understanding the features they have in common, it can identify the cancerous ones much more quickly than a human can. That capability frees up doctors to focus where they are most needed—with the patient.
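A rough sketch of that kind of training through labeled examples, using the scikit-learn library: a classifier is fit on labeled feature vectors standing in for the 3,000 labeled images, and for each new case it returns a probability, that is, a confidence level, rather than a verdict. The synthetic data, the three hypothetical features, and the model choice are assumptions for illustration only, not a description of IBM's medical imaging systems.

# Illustrative sketch of learning to flag melanomas from labeled examples:
# a classifier is trained on feature vectors extracted from images (here
# replaced by synthetic numbers) and reports a confidence score for each
# new case. Data, features, and model are toy assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for 3,000 labeled images: 200 melanomas, 2,800 benign lesions,
# each summarized by three hypothetical features (color irregularity,
# border roughness, diameter in mm).
melanoma = rng.normal(loc=[0.8, 0.7, 7.0], scale=0.2, size=(200, 3))
benign = rng.normal(loc=[0.3, 0.2, 4.0], scale=0.2, size=(2800, 3))

X = np.vstack([melanoma, benign])
y = np.array([1] * 200 + [0] * 2800)  # 1 = melanoma, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")

# For a new lesion, the model returns a confidence level rather than a
# definitive answer - high scores get flagged for the doctor's attention.
new_lesion = np.array([[0.75, 0.65, 6.5]])
confidence = model.predict_proba(new_lesion)[0, 1]
print(f"probability of melanoma: {confidence:.0%}")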
In other words, the magic of Watson happens when it is combined with the unique capabilities of a human doctor—such as intuition, empathy, and judgment. The synthesis of the two can lead to the creation and application of knowledge that is far superior to anything either could do on its own. The Jeopardy! game, said Kelly, pitted two human champions against a machine; the future will be all about Watson and doctors—man and machine—solving problems together. Computer science, he added, will “evolve rapidly, and medicine will evolve with it. This is coevolution. We’ll help each other. I envision situations where myself, the patient, the computer, my nurse, and my graduate fellow are all in the examination room interacting with one another.”
In time, all of this will reshape medicine and change how we think about being smart, argues Kelly: “In the twenty-first century, knowing all the answers won’t distinguish someone’s intelligence—rather, the ability to ask all the right questions will be the mark of true genius.”
Indeed, every day we read about how artificial intelligence is being inserted into more and more machines, making them more supple, intuitive, human-like, and accessible with one touch, one gesture, or one voice command. Soon everyone who wants will have a personal intelligent assistant, their own little Watson or Siri or Alexa that learns more about their preferences and interests each time they engage with it so its assistance becomes more targeted and valuable every day. This is not science fiction. This is happening today.
That is why it was no surprise to me that Kelly, at the end of our interview at Watson’s home at IBM, mused: “You know how the mirror on your car says ‘Objects in your rearview mirror are closer than they appear’?” Well, he said, “that now applies to what’s in your front windshield, because now it’s the future that is much closer than you think.”
The Designers
It is fun to be around really, really creative makers in the second half of the chessboard, to see what they can do, as individuals, with all of the empowering tools that have been enabled by the supernova. I met Tom Wujec in San Francisco at an event at the Exploratorium. We thought we had a lot in common and agreed to follow up on a Skype call. Wujec is a fellow at Autodesk and a global leader in 3-D design, engineering, and entertainment software. While his title makes him sound like a guy designing hubcaps for an auto parts company, the truth is that Autodesk is another of those really important companies few people know about—it builds the software that architects, auto and game designers, and film studios use to imagine and design buildings, cars, and movies on their computers. It is the Microsoft of design. Autodesk offers roughly 180 software tools used by some twenty million professional designers as well as more than two hundred million amateur designers, and each year those tools reduce more and more complexity to one touch. Wujec is an expert in business visualization—using design thinking to help groups solve wicked problems. When we first talked on the phone, he illustrated our conversation in real time on a shared digital whiteboard. I was awed.
During our conversation, Wujec told me his favorite story of just how much the power of technology has transformed his work as a designer-maker. Back in 1995, he recalled,
I was a creative director of the Royal Ontario Museum, Canada’s largest museum, and my last big project there before joining the private sector was to bring to life a dinosaur called a Maiasaura. The process was complicated. It began by transporting a two-ton slab of rock, double the size of a table, from the field to the museum. Over the course of many months, several paleontologists carefully chiseled out the fossils of two specimens, an adult and an infant. It was thought the dinosaurs were a parent and child: Maiasaura means “mother lizard.” As the fossilized bones emerged, it was our job to scan them. We used hand-digitizing tools to precisely measure the three-dimensional coordinates of hundreds of thousands of points on the fossil surfaces. This took forever and strained our modest technology. We realized that we needed high-end tools.
So, we upgraded. We got a grant for two hundred thousand dollars for software and three hundred forty thousand dollars for hardware. After the fossils were fully exposed, we hired an artist to create a three-foot-long scale physical model of the adult, first from clay, then from bronze. This sculpture became an additional reference for our digital model. But creating the digital model wasn’t easy. We spent more months painstakingly measuring tiny features and hand-entering the data into our computers. The software was unstable, forcing us to do the work over and over each time the system crashed. Eventually, we ended up with decent digital models. With the help of more experts, we rigged, textured, lit, animated, and rendered [these models] into a series of high-resolution movies. The effort was worth it: museum visitors would be able to press buttons on an exhibit panel and watch life-sized dinosaurs—the size of big SUVs—move in ways our paleontologists thought they would behave. “Here’s how they would walk, here’s how they would feed, here’s how they might stand on their hind legs.” After the exhibit opened, I thought, “Oh, my God, that was a lot of work.”
From start to finish, it was a two-year project, costing more than $500,000.
Now fast-forward. In May 2015, roughly twenty years later, Wujec found himself at a cocktail party at the same museum, where he had not worked for many years, and saw that they had put out on display the original bronze cast of the scale model of the Maiasaura dinosaur that he had built. He recalled:
I was surprised to find the sculpture there. I wondered what the digitizing process might be like using modern tools. So, on a Friday night, with a glass of wine in my hand, I took out my iPhone and walked around the model, took twenty or so photographs over maybe ninety seconds, and uploaded them to a free cloud app our company produces called 123D Catch. The app converts photos of just about anything into a 3-D digital model. Four minutes later, it returned this amazing, accurate, animatable, photorealistic digital 3-D model—better than the one we produced twenty years ago. That night, I saw how a half-million dollars of hardware and software and months and months of hard, very technical, specialized work could be largely replaced by an app at a cocktail party with a glass of wine in one hand and a smartphone in the other. In a few minutes I reproduced the digital model for free—except it was better!
And that is the point, concluded Wujec, with the advances in sensing, digitization, computation, storage, networking, and software: all “industries are becoming computable. When an industry becomes computable, it goes through a series of predictable changes: It moves from being digitized to being disrupted to being democratized.” With Uber, the very analog process of hailing a cab in a strange city got digitized. Then the whole industry got disrupted. And now the whole industry has been democratized—anyone can be a cab driver for anyone else anywhere, and anyone can now pretty easily start a cab company. With design, the analog process of rendering a dinosaur got digitized, then thanks to the supernova it got disrupted, and now it is being democratized, so anyone with a smartphone can do it, vastly enhancing the power of one. You can conceive an idea, get it funded, bring it to life, and scale it at an ease, speed, and cost that make the whole process accessible to so many more people.