Solomon's Code


by Olaf Groth


  The ethics center, for example, will help address some of the stickier problems of AI development, including what Lord David Puttnam calls “society’s retreat from complexity.” “The net effect of reliance on artificial intelligence could find us looking for oversimplified answers and solutions to complex problems,” says Puttnam, a member of the House of Lords and the Oscar-winning producer of Chariots of Fire. Having dedicated his career to the exploration of societal issues in film and TV, he is part of the committee of the House of Lords that crafted the recent report on the UK’s readiness for AI.§§ His key takeaway from the exercise was that our discourse increasingly shirks complexity, he says, looking for quick, simple data-endorsed fixes instead. “But many decisions in human life require longer reflection and awareness-building and should be deliberated and debated on an ongoing basis. Not all of those debates can or should be resolved quickly.” Thinking machines can return clear-cut answers, but often without drawing attention to the many issues where “there isn’t a winner, but a dialog and a synthesis and a negotiation,” he says.

  Ultimately, all these initiatives are designed to keep the United Kingdom at the forefront of the AI field, an effort that becomes increasingly important as the country finalizes Brexit and leaves the European Union, Puttnam says. Leaving would limit the United Kingdom’s access to the large-scale market and data availability that drive most commercial AI advancements. For the moment, the British government has passed legislation, in spring 2018, that adopts much of the EU’s data protection framework and expands the powers of the Information Commissioner’s Office to enforce those provisions. “In terms of our desire to strike a careful balance between economic growth and ethical safeguards we are in fact currently well aligned with Germany, France and Canada,” he says. “Together, we carry much more weight in the emerging world of thinking machines. But what weight will the UK’s voice carry in the global governance of it once we leave the EU?”

  French government officials have pushed forward assertively on country-level programs, especially after the election of President Emmanuel Macron. In the spring of 2018, Macron said the government alone would pledge $1.85 billion over five years to support AI research, start-ups, and the collection of shared data sets. Macron told Wired magazine that the clear leaders in the field lean in two different directions—the United States toward the private sector, China toward government principles—so France and Europe have an opportunity to find a middle ground.¶¶ The French are a notoriously techno-critical society, but Macron hopes to create an interdisciplinary effort that would provide a new perspective on which AI is built. “If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution,” he says in the Q&A. “That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.”

  In this sense, Macron has forged ahead of his counterparts in continental Europe, asserting a national strategy on AI that includes government leadership on behalf of the individual citizen and aspirations of making Paris the primary research hub for AI development in Europe. The German government has taken a more conservative, bottom-up approach, seeking first to solicit the views of its industrial and scientific establishments through an AI summit at the Chancellery, the power hub around Chancellor Angela Merkel. The government now includes two high-ranking officials to lead the charge on digitization—State Minister Dorothee Bär and a department head, Eva Christiansen—along with a handful of digitization initiatives across its science, economic, and labor ministries. Meanwhile, these and most other Western European governments have noted the critical need for cooperation, even beyond EU initiatives, and several have started discussing potential partnerships. Some of those burgeoning efforts, such as a new French-German Center for AI, have been delayed because of other pressing matters, such as Brexit and immigration. But across the board, government officials in both France and Germany tend to believe the humanity of each individual citizen is an indispensable component of a vibrant democracy. So, in their view, the global AI competition becomes a race to simultaneously preserve the pre-eminence of human intelligence and European sovereignty.

  THE KNIGHTS OF THE COGNITIVE ERA

  The Israel Defense Forces know they can’t keep every nefarious actor out of the country. So they make it as hard as possible to sneak through, and then deploy a sophisticated internal net to quickly identify and catch as many threats as they can. Yossi Naar took the same mindset from his work for the Israeli military and applied it to a cybersecurity start-up. Rather than build a higher digital wall—something plenty of other firms already try to do—Cybereason analyzes everything attackers might do after they find their way in, identifying them and rooting them out of the environment. That requires both advanced technology and a deeper knowledge of how hackers operate, the cofounders say. “There was this old point of view, where you could clearly define right and wrong, and then figure out how you build a higher wall,” Naar says. “But in our nation-state background, we’ve known a lot of things to be simple and true: a. you can always get in; and b. the biggest and most difficult question for the attacker is what they do after they get in.”

  As Naar attests, the Israeli military has a deep influence on the development of a wide range of AI applications in the country. Every citizen must join the military, which developed an extremely effective system to identify top talent and track them into its sophisticated high-tech training programs. So, the research and development conducted there serves as a de facto training center and incubator for AI talent, he explains. Many of the country’s leading high-tech start-ups were launched by partners who met while serving their country. It’s an intense educational program, a full-time job in the classroom, and then graduates go to work on some of the most advanced technology platforms in the world. Through the reserve forces, others with extensive AI or digital experience cycle back to mentor in a “reinforcing system that brings knowledge in and takes knowledge out,” Naar says. “That gives smart young kids a lot of resources to work with, which you don’t get as a 21-year-old in college.”

  The defensive military perspective, especially in a country such as Israel, naturally generates nonmilitary companies that think in terms of defense, such as Cybereason. Yet, it also spawns an array of different ideas and talents born from its education and research. Of the 2,500 or so start-ups in the country, Naar estimates, about 500 to 700 are security related. For example, the intelligence community’s focus on information analysis has helped power a great deal of research into big data of all types. And there’s an array of AI-powered health-care applications emerging from Israeli entrepreneurs as well.

  While the United States doesn’t have the same breadth of high-tech education within the military, it has developed powerful links between national defense and advanced research through the Defense Advanced Research Projects Agency (DARPA). The United States, Israel, and China stand out among the Knights of the Cognitive Era—countries in which defense-based innovation radiates into academic and private spheres, driving a range of peaceful and commercial applications. The United States doesn’t conceive of every citizen as a soldier like China and Israel do, but it maintains an inextricable link between the military and civilian sectors. The US defense agencies shop in Silicon Valley, but they don’t expect Silicon Valley to carry out their battles. Likewise, DARPA remains one of the world’s premier facilitators of cutting-edge high-tech research, funding researchers who push the state of the art on everything from autonomous vehicles to neural microchip implants and sophisticated systems analysis (e.g., climate change) to cybersecurity.

  After her tenure as a program manager at the agency, Kathleen Fisher, head of the computer science department at Tufts University, observed DARPA’s 2016 Cyber Grand Challenge, an open-competition tournament in Las Vegas. The competition pitted teams against one another, with each trying to defend a set of programs on its own systems while hacking the programs running on the others’. So, one team might write code to automatically patch its programs, and then figure out how to exploit those findings against its competitors. But in one intriguing twist on this capture-the-flag scenario, seven teams participated in a play-in tournament designed solely for fully automated systems. They had to design programs that could automatically protect and attack systems without human intervention during the game. One team’s system “found a vulnerability, found the patch, patched itself, and launched the exploit against another team,” Fisher says. “And while that was happening, yet another system identified that attack, reverse engineered a patch from that intercept, and patched itself. That all happened within 20 minutes.”

  The winning AI team out of Carnegie Mellon University competed the next day against the human teams, and it started out well because it could work faster. Over the course of the whole tournament, though, it fell to last place because humans could generalize and process a variety of different hacking concepts and strategies. “This will change,” Fisher says. “Computers will beat everyone eventually. People are still better at exploiting software than computers are right now.”

  Like the Israel Defense Forces, though, DARPA works well beyond cybersecurity and digital attacks. In fact, one of its key AI-related initiatives hopes to crack a problem plaguing just about anyone working in the field: developing an AI system that can explain how and why it comes to its decisions. The concept of explainable AI has baffled experts as these systems have become more complex. While thinking machines can learn on the fly and process massive and complex sets of data, developers still don’t know exactly why the machine decides that one picture depicts a wolf and the other a husky. In one infamous example, researchers tried to infer an image-recognition system’s reasoning by tweaking the input and seeing how it affected the output. They discovered that the neural network identified some huskies as wolves because they were sitting in snow.

  While explainable AI has definite defense implications, DARPA’s funding of work in the field has a broad range of ripple effects on how AI systems interact with humans, says Wade Shen, program manager of DARPA’s Information Innovation Office. Plenty of machines can generate accurate decisions, but they’re not put into use because people can’t trust them. “Explainability” is plausible for certain types of AI models, but we’re not close to understanding newer, increasingly complex technologies, Shen explains. So, while humans do quite well understanding cause and effect models, they’re far more limited when those relationships depend on a massive number of variables, as they do in climate models, for example. “Machines might be able to build models of very complex processes to take into account thousands of variables and make decisions that humans just can’t comprehend cognitively,” Shen says.

  Ultimately, we might need machines that understand and can interpret other machines for us in ways that simplify their inner workings for humans. Even many of the most elite and well-trained human minds struggle to understand how or why an AI system predicted stock prices to rise or fall. We still put our faith in many of these applications. But as these systems gain an ever-more pervasive role in our lives, we’ll have to ask whether we want modeling capabilities we can never understand or predict, and how much control we’re willing to give them. If self-consciousness is a higher form of consciousness because it reflects upon itself, as the philosopher David Chalmers suggests, machine consciousness is the equivalent of a toddler we’re proposing to task with, say, genetic engineering or other analyses of monumental consequence.

  THE IMPROV ARTISTS

  AI-powered object recognition has become a popular application for e-commerce and related companies around the world. Take a photo and then click on the object of interest, and the app identifies the product and lets you know how or where to buy it. The big Digital Barons have been able to do this for years, but newer companies in developing markets are taking it in new directions. Like Grabango and others in the United States, the Chinese start-up Malong has put it to use in the supply chain to help track and inspect shipments. They envision a time when a shopper could push a full cart of groceries out of the store, and its systems would identify all the items and automatically charge the customer as he or she walks out the door.

  In Nigeria, Gabriel Eze is hoping a similar machine learning application can help open the web to fellow citizens who can’t read or write in English, or at all. He and his colleagues at Touchabl currently focus on e-commerce sales—someone sees a purse they like and clicks on a photo to find out what brand it is and where to buy it. They make money by getting retailers and brands to pay for placements. “Maybe you have a broken part in your car, but you don’t know what it is,” he explains. “You can use Touchabl to find out what it is.” If Touchabl hasn’t already labeled it, it will search the web for a comparable image—a clear step beyond a random search. Eze also hopes developers will build on the platform, with designs to help informal merchants offer wares online via image or, if combined with language processing, help blind or illiterate residents access online information about the objects and environments around them.

  He even imagines a time when similar systems could use photos people upload with their smartphones to diagnose health problems, such as cataracts. It turns out that’s exactly what CekMata is doing in rural parts of Indonesia. The archipelago suffers from high rates of blindness due to cataracts, with an average of one person losing their sight a day, says CMO Ivan Sinarso, citing World Health Organization statistics. Doctors can diagnose and treat cataracts long before blindness sets in, but cataract patients in rural parts of Indonesia rarely seek medical intervention, thinking it too expensive or ineffective. So, CekMata targets the younger generations of Indonesians, many of whom carry smartphones, enlisting them to take photos of their parents and grandparents and upload them via an app or website.

  CekMata’s systems can identify likely cataracts, and then recommend doctors who can confirm the diagnosis and prescribe a course of action. (Clinics pay for placement on the list of recommendations, Sinarso says.) In its first eight months online, the company helped about 100 rural patients identify and treat their cataracts, but the system can scale up to serve as many people as can upload selfies of their eyes, Sinarso says. And as it expands, the company will be able to track patterns, finding areas where people display problems at a higher incidence rate and alerting health authorities who could intervene.

  Curtis and Mechelle Gittens also hope to address a critical health issue in their island country, but they’re developing an entirely new AI model to do so—something that caught the eye of the IBM Watson AI XPRIZE judges, who advanced their team, called Driven, into the second round of the competition. The husband and wife duo are creating “psychologically-realistic virtual agents” to help model the thought patterns and behaviors of diabetics in Barbados. “By questioning the agent, you could actually identify an extroversion or introversion personality trait, for example,” explains Curtis Gittens. “So, by simulating things like emotion and emotionally driven responses to stimuli, you’ll be able to take this psychologically-realistic agent and almost query it as if you’re a psychologist, and it would present traits as a human would.”

  Driven would take the patient information it derives from a survey of personality traits and behaviors, and then encode that to create a psychological representation of that person in a sort of virtual mind. Clinicians can run various what-if scenarios on the virtual patient to help identify ways to nudge real patients and keep them on their course of treatment. “We believe we’ll be able to identify ‘trigger’ memories that are the root causes of behavior, so a doctor can work on the real factors that affect behavioral change,” Curtis says.
  Hope often springs from unlikely places, as illustrated by the popular William Gibson quote: “The future is already here, it’s just not very evenly distributed.” These Improv Artists of artificial intelligence—countries such as Indonesia, Nigeria, Barbados, and especially India—are developing new AI technologies or, more often, leveraging existing models to solve longstanding health, infrastructure, and other problems common in developing countries. Most of the world’s largest high-tech players already have significant operations in India and see it as a massive digital opportunity. According to a report by Capgemini, 58 percent of the companies using AI technologies in India have installed them across a wide range of operations, putting it in third place behind the United States and China in terms of the scale of deployment.## The country has seen a bloom in health care applications that integrate machine learning and other AI models, in many cases to address some of the most basic barriers to medical care.

  India is home to more than a quarter of the 10.5 million people who suffer from tuberculosis, says Prashant Warier, the CEO of a medical imaging start-up called Qure.ai (pronounced “cure”). Many of those cases go undiagnosed, and even more of them get diagnosed very late, leading to the further spread of the disease. The problem, he says, is one of time. Rural patients will suffer for weeks with symptoms before traveling hours to a clinic to get tested. The doctors will order chest X-rays to search for signs of the disease, but because of the scarcity of radiologists, it might take a couple of days before the physician gets back the radiologist’s read of the scan. By then, the patients have returned home, making it difficult to contact them and perform a microbiological test to confirm the presence of TB.

 
