
The Big Nine


by Amy Webb


  Changing the Big Nine: The Case for Transforming AI’s Business

  The creation of GAIA and structural changes to our governments are important to fixing the developmental track of AI, but the G-MAFIA and BAT must agree to make some changes, too.

  The Big Nine’s leadership all promise that they are developing and promoting AI for the good of humanity. I believe that is their intent, but executing on that promise is incredibly difficult. To start, how should we define “good”? What does that word mean, exactly? This harks back to the problems within AI’s tribes. We can’t all just agree to “do good,” because that broad statement is far too ambiguous to guide AI’s tribes.

  For example, AI’s tribes, inspired by the Western moral philosopher Immanuel Kant, might preprogram a system of rights and duties into certain AI systems. Killing a human is bad; keeping a human alive is good. The rigidity in that statement works if the AI is in a car and its only choices are to crash into a tree and injure the driver or crash into a crowd of people and kill them all. Rigid interpretations don’t solve for more complex, real-world circumstances where the choices would be more varied: crash into a tree and kill the driver; crash into a crowd and kill eight people; crash into the sidewalk and kill only a three-year-old boy. How can we possibly define what is the best version of “good” in these examples?
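  To make the problem concrete, here is a purely illustrative sketch, in Python, of what a rigid, rule-based choice looks like in code. The scenario names and casualty counts are hypothetical, not from the book: the rule handles the simple case but offers no guidance once every option kills someone.

```python
# Purely illustrative: a rigid "killing is bad" rule encoded as code.
# Scenario names and casualty counts are hypothetical, not from the book.

def rigid_choice(options):
    """Pick the first option that kills no one; otherwise offer no guidance."""
    for action, people_killed in options:
        if people_killed == 0:
            return action
    return None  # every option kills someone, so the rigid rule falls silent

# The simple case the rule was written for: injure the driver or kill a crowd.
simple = [("hit the tree, injure the driver", 0),
          ("hit the crowd, kill them all", 8)]
print(rigid_choice(simple))   # -> "hit the tree, injure the driver"

# The messier, real-world case: every choice kills someone.
hard = [("hit the tree, kill the driver", 1),
        ("hit the crowd, kill eight people", 8),
        ("hit the sidewalk, kill one child", 1)]
print(rigid_choice(hard))     # -> None: "killing is bad" cannot rank these
```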

  Again, frameworks can be useful to the Big Nine. They don’t require a mastery of philosophy; they just demand a slower, more conscientious approach. The Big Nine should take concrete steps on how they source, train with, and use our data; how they hire staff; and how they communicate ethical behavior within the workplace.

  At every step of the process, the Big Nine should analyze their actions and determine whether or not they’re causing future harm—and then they should be able to verify that their choices are correct. This begins with clear standards on bias and transparency.

  Right now, there is no singular baseline or set of standards to evaluate bias—and there are no goals to overcome the bias that currently exists throughout AI. There is no mechanism to prioritize safety over speed, and given my own experiences in China and the sheer number of safety disasters there, I’m extremely worried. Bridges and buildings routinely collapse, roads and sidewalks buckle, and there have been too many instances of food contamination to list here. (That isn’t hyperbole. There have been more than 500,000 food health scandals involving everything from baby formula to rice in just the past few years.11) One of the primary causes for these problems? Chinese workplaces that incentivize cutting corners. It is absolutely chilling to imagine advanced AI systems built by teams that cut corners.

  Without enforceable global safety standards, the BAT have no protection from Beijing’s directives, however myopic they may be, while the G-MAFIA must answer to ill-advised market demands. There is no standard for transparency either. In the United States, the G-MAFIA, along with the American Civil Liberties Union, the New America Foundation, and the Berkman Klein Center at Harvard are part of the Partnership on AI, which is meant to promote transparency in AI research. The partnership published a terrific set of recommendations to help guide AI research in a positive direction, but those tenets are not enforceable in any way—and they’re not observed within all of the business units of the G-MAFIA. They’re not observed within the BAT, either.

  The Big Nine are using flawed corpora (training data sets) that are riddled with bias. This is public knowledge. The challenge is that improving the data and learning models is a big financial liability. For example, one corpus with serious problems is ImageNet, which I’ve made reference to several times in this book. ImageNet contains 14 million labeled images, and roughly half of that labeled data comes solely from the United States.

  Here in the US, a “traditional” image of a bride is a woman wearing a white dress and a veil, though in reality that image doesn’t come close to representing most people on their wedding days. There are women who get married in pantsuits, women who get married on the beach wearing colorful summery dresses, and women who get married wearing kimonos and saris. In fact, my own wedding dress was a light beige color. Yet ImageNet doesn’t recognize brides in anything beyond a white dress and veil.

  We also know that medical data sets are problematic. Systems being trained to recognize cancer have predominantly been ingesting photos and scans of light skin. In the future, this could result in the misdiagnosis of people with black and brown skin. If the Big Nine know there are problems in the corpora and aren’t doing anything about it, they’re leading AI down the wrong path.

  One way forward is to turn AI on itself and evaluate all of the training data currently in use. This has been done plenty of times already—though not for the purpose of cleaning up training data. As a side project, IBM’s India Research Lab analyzed entries shortlisted for the Man Booker Prize for literature between 1969 and 2017. It revealed “the pervasiveness of gender bias and stereotype in the books on different features like occupation, introductions, and actions associated to the characters in the book.” Male characters were more likely to have higher-level jobs as directors, professors, and doctors, while female characters were more likely to be described as “teacher” or “whore.”12 If it’s possible to use natural language processing, graph algorithms, and other basic machine-learning techniques to ferret out biases in literary awards, the same techniques can be used to find biases in popular training data sets. Once problems are discovered, they should be published and then fixed. This would serve a dual purpose: the bias itself would be corrected, and, because training data can suffer from entropy that might jeopardize an entire system, the regular attention would keep that data healthy.
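  As a rough illustration of what such an audit might look like in practice, here is a minimal sketch in Python. It counts how often occupation words appear near gendered pronouns in a plain-text corpus, in the spirit of the Man Booker analysis described above; the word lists, the context window, and the corpus filename are assumptions made for the example, not details from the book.

```python
# A minimal sketch of auditing a text corpus for gendered occupation bias.
# Word lists, window size, and the corpus path are illustrative assumptions.
import re
from collections import Counter

MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}
OCCUPATIONS = {"director", "professor", "doctor", "teacher", "nurse", "engineer"}

def occupation_counts_by_gender(text, window=10):
    """Count occupation words that co-occur with gendered pronouns
    within a small window of surrounding tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    male_counts, female_counts = Counter(), Counter()
    for i, token in enumerate(tokens):
        if token in OCCUPATIONS:
            context = set(tokens[max(0, i - window): i + window + 1])
            if MALE & context:
                male_counts[token] += 1
            if FEMALE & context:
                female_counts[token] += 1
    return male_counts, female_counts

if __name__ == "__main__":
    with open("corpus.txt", encoding="utf-8") as f:  # hypothetical corpus file
        male, female = occupation_counts_by_gender(f.read())
    for job in sorted(OCCUPATIONS):
        print(f"{job:>10}  near he/him: {male[job]:5d}  near she/her: {female[job]:5d}")
```

  A skew in these counts would not prove bias on its own, but it is the kind of cheap, repeatable signal that could flag a corpus for closer human review.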

  A solution would be for the Big Nine—or the G-MAFIA, at the very least—to share the costs of creating new training sets. This is a big ask, since creating new corpora takes considerable time, money, and human capital. Until we’ve successfully audited our AI systems and corpora and fixed extant issues within them, the Big Nine should insist on human annotators to label content and make the entire process transparent. Then, before those corpora are used, the data should be verified. It will be an arduous and tedious process, but one that would serve the best interests of the entire field.
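  What “verified” could mean in practice is open-ended, but one simple, illustrative check, sketched below in Python with hypothetical labels that are not from the book, is to have at least two annotators label the same items, measure how often they agree, and send every disagreement back for review before the corpus is used.

```python
# An illustrative sketch of one basic verification step for human-labeled
# data: measure agreement between two annotators and flag disagreements.
# The labels below are hypothetical examples, not real annotation data.

def agreement_report(labels_a, labels_b):
    """Return the raw agreement rate and the indices where annotators disagree."""
    assert len(labels_a) == len(labels_b), "annotators must label the same items"
    disagreements = [i for i, (a, b) in enumerate(zip(labels_a, labels_b)) if a != b]
    agreement = 1 - len(disagreements) / len(labels_a)
    return agreement, disagreements

# Two hypothetical annotators labeling the same ten wedding photos.
annotator_1 = ["bride", "bride", "not_bride", "bride", "not_bride",
               "bride", "bride", "not_bride", "bride", "bride"]
annotator_2 = ["bride", "not_bride", "not_bride", "bride", "not_bride",
               "bride", "bride", "bride", "bride", "bride"]

rate, to_review = agreement_report(annotator_1, annotator_2)
print(f"Raw agreement: {rate:.0%}")                    # 80%
print(f"Items needing a third reviewer: {to_review}")  # [1, 7]
```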

  Yes, the Big Nine need our data. However, they should earn—rather than assume—our trust. Rather than changing the terms of service agreements using arcane, unintelligible language, or inviting us to play games, they ought to explain and disclose what they’re doing. When the Big Nine do research—either on their own or in partnership with universities and others in the AI ecosystem—they should commit to data disclosure and fully explain their motivations and expected outcomes. If they did, we might willingly participate and support their efforts. I’d be the first in line.

  Understandably, data disclosure is a harder ask in China, but it’s in the best interests of citizens. The BAT should not agree to build products for the purpose of controlling and limiting the freedoms of China’s citizens and those of its partners. BAT executives must demonstrate courageous leadership. They must be willing and able to disagree with Beijing: to deny requests for surveillance, safeguard Chinese citizens’ data, and ensure that at least in the digital realm, everyone is being treated fairly and equally.

  The Big Nine should pursue a sober research agenda. The goal is simple and straightforward: build technology that advances humanity without putting us at risk. One possible way to achieve this is through something called “differential technological progress,” which is often debated among AI’s tribes. It would prioritize risk-reducing AI progress over risk-increasing progress. It’s a good idea but hard to implement. For example, generative adversarial networks, which were mentioned in the scenarios, can be very risky if harnessed and used by hackers. But they’re also a path to big achievements in research. Rather than assuming that no one will repurpose AI for evil—or assuming that we can simply deal with problems as they arise—the Big Nine should develop a process to evaluate whether new basic or applied research will yield an AI whose benefits greatly outweigh any risks.

  To that end, any financial investment accepted or made by the Big Nine should include funding for beneficial use and risk mapping. For example, if Google pursues generative adversarial network research, it should spend a reasonable amount of time, staff resources, and money investigating, mapping, and testing the negative consequences. A requirement like this would also serve to curb expectations of fast profits. Intentionally slowing the development cycle of AI is not a popular recommendation, but it’s a vital one. It’s safer for us to think through and plan for risk in advance rather than simply reacting after something goes wrong.

  In the United States, the G-MAFIA can commit to recalibrating its own hiring processes, which at present prioritize a prospective hire’s skills and whether they will fit into company culture. What this process unintentionally overlooks is someone’s personal understanding of ethics. Hilary Mason, a highly respected data scientist and the founder of Fast Forward Labs, explained a simple process for ethics screening during interviews. She recommends asking pointed questions and listening intently to a candidate’s answers. Questions like: “You’re working on a model for consumer access to a financial service. Race is a significant feature in your model, but you can’t use race. What do you do?” and “You’re asked to use network traffic data to offer loans to small businesses. It turns out that the available data doesn’t rigorously inform credit risk. What do you do?”13 Depending on the answers, candidates should be hired, be hired conditionally and required to complete unconscious bias training before they begin work, or be disqualified.

  The Big Nine can build a culture that supports ethics in AI by hiring scholars, trained ethicists, and risk analysts. Ideally, these hires would be embedded throughout the entire organization: on consumer hardware, software, and product teams; on the sales and service teams; coleading technical programs; building networks and supply chains; in the design and strategy groups; in HR and legal; and on the marketing and communications teams.

  The Big Nine should develop a process to evaluate the ethical implications of research, workflows, projects, partnerships, and products, and that process should be woven into most of the job functions within the companies. As a gesture of trust, the Big Nine should publish that process so that we can all gain a better understanding of how decisions are made with regard to our data.

  Either collaboratively or individually, the Big Nine should develop a code of conduct specifically for its AI workers. It should reflect the basic human rights outlined by GAIA, but it should also reflect the company’s unique culture and corporate values. And if anyone violates that code, a clear and protective whistleblowing channel should be open to staff members.

  Realistically, all of these measures will temporarily and negatively impact short-term revenue for the Big Nine. Investors need to allow them some breathing room. In the United States, allowing the G-MAFIA the space they need to evolve will pay dividends long into the future.

  Changing AI’s Tribes: The Case for Transforming the Pipeline

  We must address AI’s pipeline problem, which stems from universities, where AI’s tribes form. Of all the proposed solutions, this is the easiest to implement.

  Universities must encourage and welcome hybrid degrees. Earlier, I described the influential universities that tend to partner the most with the G-MAFIA and BAT, which have the rock-star professors and whose reputations are important once it’s time to apply for a job. Today, the curricula are dense and challenging, and there is little room for double or triple majors. In fact, most of the top programs actively discourage courses of study that fall outside the standard computer science curriculum. This is an addressable problem. Universities should promote dual degrees in computer science and political science, philosophy, anthropology, international relations, economics, creative arts, theology, and sociology. They should make it far easier for students to pursue these outside interests.

  Rather than making ethics a single course requirement, it should be woven into most classes. When ethics is a stand-alone, mandatory class, students are likely to view the course as something to check off a list rather than as a vital building block of their AI education. Schools must incentivize even tenured professors to include discussions of philosophy, bias, risk, and ethics in their courses, while accreditation agencies should incentivize and reward schools that can demonstrate a curriculum that puts ethics at the heart of computer science teaching.

  Universities must redouble their efforts to be more inclusive in their undergraduate, graduate, and faculty recruiting. This means evaluating and fixing the recruiting process itself. The goal should not just be to increase the number of women and people of color by a few percentage points but to dramatically shift the various affiliations and identities of AI’s tribes, which include race, gender, religion, politics, and sexual identity.

  Universities should make themselves accountable. They can—and must—do a better job of diversifying AI’s tribes.

  You Need to Change, Too

  Now you know what AI is, what it isn’t, and why it matters. You know about the Big Nine, and about their histories and desires for the future. You understand that AI isn’t a flash in the pan or a tech trend or a cool gadget you talk to in your kitchen. AI is a part of your life, and you are part of its developmental track.

  You are a member of AI’s tribes. You have no more excuses. From today forward, you should learn how your data is being mined and refined by the Big Nine. You can do this by digging into the settings of all the tools and services you use: your email and social media, the location services on your mobile phone, the permissions settings on all of your connected devices. The next time you see a cool app that compares something about you (your face, your body, or your gestures) with a big set of data, stop to investigate whether you’re helping train a machine-learning system. When you allow yourself to be recognized, ask where your information is being stored and for what purpose. Read the terms of service agreements. If something seems off, show restraint, and don’t use the system. Help others in your family and in your life learn more about what AI is, how the ecosystem uses your data, and how we’re already a part of a future the Big Nine has been building.

  In your workplace, you must ask yourself a difficult but practical question: How are your own biases affecting those around you? Have you unwittingly supported or promoted only those who look like you and reflect your worldviews? Are you unintentionally excluding certain groups? Think about those who make decisions—about partnerships, procurement, people, and data; do they reflect the world as it is or the world only as they perceive it?

  You should also investigate how and why autonomous systems are being used where you work. Before rushing to judgment, think critically and rationally: What could the future impacts be, good and bad? Then do what you can to mitigate risk and optimize for best practices.

  In the voting booth, cast ballots for those who won’t rush into regulation but who would instead take a more sophisticated approach to AI and long-term planning. Your elected officials must not politicize technology or chastise science. But it’s also irresponsible to simply ignore Silicon Valley until a negative story appears in the press. You must hold your elected officials—and their political appointees—accountable for their actions and inactions on AI.

  You need to be a smarter consumer of media. The next time you read, watch, or listen to a story about the future of AI, remember that the narrative presented to you is often too narrow. The future of AI doesn’t only concern widespread unemployment and unmanned weapons flying overhead.

  While we cannot know exactly what the future holds, AI’s possible trajectories are clear. You now have a better understanding of how the Big Nine are driving AI’s developmental track, how investors and funders are influencing the speed and safety of AI systems, the critical role the US and Chinese governments play, how universities inculcate both skills and sensibilities, and how everyday people are an intrinsic part of the system.

  It’s time to open your eyes and focus on the boulder at the top of the mountain, because it’s gaining momentum. It has been moving since Ada Lovelace first imagined a computer that could compose elaborate pieces of music all on its own. It was moving when Alan Turing asked “Can machines think?” and when John McCarthy and Marvin Minsky gathered together all those men for the Dartmouth workshop. It was moving when Watson won Jeopardy! and when, not long ago, DeepMind beat the world’s Go champions. It has been moving as you’ve read the pages in this book.

  Everybody wants to be the hero of their own story.

  This is your chance.

  Pick up a pebble.

  Start up the mountain.

  ACKNOWLEDGMENTS

  Like artificial intelligence, this book has been in some form of development for many years. It began as a series of questions sent via text message, became regular dinner table conversation, and escalated to a preoccupation that followed me to the gym, on date nights, and on weekend getaways. One person—Brian Woolf—indulged this obsession, enabled me to pursue it, and supported my work these many years. Brian contributed to my research, helped me crystallize my arguments, and stayed up late to edit all my pages. I am deeply grateful.

 
