The Big Nine


by Amy Webb


  A THOUSAND PAPER CUTS: AI’S UNINTENDED CONSEQUENCES

  “We first make our habits, and then our habits make us.”

  —JOHN DRYDEN

  “You are my creator, but I am your master.”

  —FRANKENSTEIN’S MONSTER (BY MARY SHELLEY)

  Contrary to all those catastrophic stories you’ve seen and read in which AI suddenly wakes up and decides to destroy humanity, there won’t be a singular event when the technology blows up and goes bad. What we’re all about to experience is more like a gradual series of paper cuts. Get just one on your finger and it’s annoying, but you can still go about your day. If your entire body is covered with thousands of tiny paper cuts, you won’t die, but living will be agonizing. The everyday parts of your life—putting on your shoes and socks, eating tacos, dancing at a cousin’s wedding—would no longer be options. You would need to learn how to live a different life. One with restrictions. One with painful consequences.

  We already know that learning ethics and prioritizing inclusivity are not mandated in universities, where AI’s tribes form, or in the Big Nine, where AI’s tribes later work together. We know that consumerism drives the acceleration of AI projects and research within the G-MAFIA and that the BAT are focused on a centralized Chinese government plan. It’s becoming clear that perhaps no one—not a global regulatory agency (something akin to the International Atomic Energy Agency), not a cluster of schools, not even a group of researchers—is asking hard questions about the gap that’s being created, pitting our human values against the considerable economic value of China’s plan for AI dominance and Silicon Valley’s commercial goals. Striking a balance between the two hasn’t been a priority in the past because all of the Big Nine have been great drivers of wealth, they make cool services and products that we all enjoy using, and they let us feel like masters of our own digital domains. We haven’t been demanding answers to questions about values because, at the moment, our lives feel better with the Big Nine in them.

  But we already have paper cuts caused by the beliefs and motivations of AI’s creators. The Big Nine aren’t just building hardware and code. They are building thinking machines that reflect humanity’s values. The gap that currently exists between AI’s tribes and everyday people is already causing worrying outcomes.

  The Values Algorithm

  Ever wondered why the AI system isn’t more transparent? Have you thought about what data sets are being used—including your own personal data—to help AI learn? In what circumstances is AI being taught to make exceptions? How do the creators balance the commercialization of AI with basic human desires like privacy, security, a sense of belonging, self-esteem, and self-actualization? What are the AI tribe’s moral imperatives? What is their sense of right and wrong? Are they teaching AI empathy? (For that matter, is trying to teach AI human empathy even a useful or worthy ambition?)

  Each of the Big Nine has a formally adopted set of values, but these value statements fail to answer the questions above. Instead, the stated values are deeply held beliefs that unify, inspire, and enliven employees and shareholders. A company’s values act as an algorithm—a set of rules and instructions that shapes office culture and leadership style and plays a big role in all of the decisions that are made, from the boardroom to individual lines of code. The absence of certain values is notable, too: what goes unstated stays out of the spotlight, where it becomes hard to see and easy to forget.

  Originally, Google operated under a simple core value: “Don’t be evil.”1 In the 2004 IPO letter, cofounder Larry Page wrote: “Eric [Schmidt], Sergey and I intend to operate Google differently, applying the values it has developed as a private company to its future as a public company.… We will optimize for the long term rather than trying to produce smooth earnings for each quarter. We will support selected high-risk, high-reward projects and manage our portfolio of projects.… We will live up to our ‘don’t be evil’ principle by keeping user trust.”2

  Amazon’s “leadership principles” are entrenched within its management structure, and the core of those values centers on trust, metrics, speed, frugality, and results. Its published principles include the following:

  • “Leaders start with the customer and work backwards. They work vigorously to earn and keep customer trust.”

  • “Leaders have relentlessly high standards” which outsiders may think “are unreasonably high.”

  • “Many decisions and actions are reversible and do not need extensive study. We value calculated risk taking.”

  • “Accomplish more with less. There are no extra points for growing headcount, budget size, or fixed expense.”3

  Facebook lists five core values, which include “being bold,” “focusing on impact,” “moving fast,” “being open” about what the company is doing, and “building value” for users.4 Meanwhile, Tencent’s “management philosophy” prioritizes “coaching and encouraging employees to achieve success” based on “an attitude of trust and respect” and making decisions based on a formula it calls “Integrity+Proactive+Collaboration+Innovation.”5 At Alibaba, “an unwavering focus on meeting the needs of our customers” is paramount, as are teamwork and integrity.6

  If I drew a Venn diagram of all the values and operating principles of the Big Nine, we would see a few key areas of overlap. They all expect employees and teams to seek continual professional improvement, to build products and services customers can’t live without, and to deliver shareholder results. Most importantly, they value trust. The values aren’t exceptional—in fact, they sound like the values of most American companies.

  Because AI stands to make a great impact on all of humanity, the Big Nine’s values should be detailed explicitly—and we ought to hold them to a higher standard than other companies.

  What’s missing is a strongly worded declaration that humanity should be at the center of AI’s development and that all future efforts should focus on bettering the human condition. This should be stated explicitly—and those words should reverberate in other company documents, in leadership meetings, within AI teams, and during sales and marketing calls. Examples include technological values that extend beyond innovation and efficiency, like accessibility—millions of people are differently abled or have trouble speaking, hearing, seeing, typing, grasping, and thinking. Or economic values, which would include the power of platforms to grow and distribute material well-being without disenfranchising individuals or groups. Or social values, like integrity, inclusivity, tolerance, and curiosity.

  As I was writing this book, Google’s CEO Sundar Pichai announced that Google had written a new set of core principles to govern the company’s work on AI. However, those principles didn’t go nearly far enough to define humanity as the core of Google’s future AI work. The announcement wasn’t part of a strategic realignment on core values within the company; it was a reactive measure, owing to internal blowback concerning the Project Maven debacle—and to a private incident that happened earlier in the year. A group of senior software engineers discovered that a project they’d been working on—an air gap security feature for its cloud services—was intended to help Google win military contracts. Amazon and Microsoft both earned “High” certificates for a physically separate government cloud, and that authorized them to hold classified data. Google wanted to compete for lucrative Department of Defense contracts, and when the engineers found out, they rebelled. It’s that rebellion that led to 5% of Google’s workforce publicly denouncing Maven.7

  This was the beginning of a wave of protests in 2018, when some of AI’s tribe realized that their work was being repurposed for a cause they didn’t support, so they demanded a change. They had assumed that their personal values were reflected within their company—and when that turned out not to be the case, they protested. This illustrates the thorny challenges that arise when the G-MAFIA doesn’t hold itself to higher standards than we’d expect of other companies making less monumental products.
  It’s not surprising, therefore, that a sizable portion of Google’s AI principles specifically addressed weapons and military work: Google won’t create weaponized technologies whose principal purpose is to hurt people, it won’t create AI that contravenes widely accepted principles of international law, and the like. “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military,” the document reads.8

  To its credit, Google says the principles are intended to be concrete standards rather than theoretical concepts—and the document specifically addresses the problem of unfair bias in data sets. But nothing in it mentions transparency about how AI makes its decisions or which data sets are being used. Nothing addresses the problem of Google’s homogenous tribes working on AI. None of the concrete standards directly put the interests of humanity ahead of the interests of Wall Street.

  The issue is transparency. If the US government isn’t capable of building the systems we need to protect our national security, we should expect that it will hire a company that can do that job—and that has been the case since World War I. We’ve too easily forgotten that peace is something we must work toward constantly and that a well-prepared military is what guarantees our safety and national security. The DoD isn’t bloodthirsty, and it doesn’t want AI-powered superweapons so it can wipe out entire remote villages overseas. The US military has mandates that go well beyond killing bad people and blowing things up. If this isn’t well understood by the people working within the G-MAFIA, that’s because too few people have bridged the divide between DC and the Valley.

  It should give us all pause that the Big Nine are building systems that fundamentally rely on people, yet the values articulating our aspirations for an improved quality of human life are not explicitly codified. If technological, economic, and social values aren’t part of a company’s statement of values, it is unlikely that the best interests of all of humanity will be prioritized during the research, design, and deployment process. This values gap isn’t always apparent within an organization, and that means significant risk for the G-MAFIA and BAT alike, because it distances employees from the plausible negative outcomes of their work. When individuals and teams aren’t aware of their values gap in advance, they won’t address vitally important issues during the strategic development process or during execution, when products are built, tested for quality assurance, promoted, launched, and marketed. It doesn’t mean that people working on AI aren’t themselves compassionate—but it does mean that they aren’t prioritizing our basic humanistic values.

  This is how we wind up with paper cuts.

  Conway’s Law

  Computing, like every other field, reflects the worldviews and experiences of the people doing the work. We see the same dynamic outside of technology. Let me diverge from AI for a moment and offer two seemingly unconnected examples of how a small tribe of individuals can wield tremendous power over an entire population.

  If you’re someone with straight hair—thick, coarse, fine, long, short, thin (or even thinning)—your experience at a hair salon is radically different from mine. Whether you go to your local barbershop or a Sport Clips in the mall or to a higher-end salon, you’ve had your hair washed at a little sink, where someone effortlessly ran their fingers around your scalp. Then, your barber or stylist used a fine-toothed comb to pull your hair taut and snip across in straight, even lines. If you’re someone with a lot of hair, the stylist might use a brush and a hair drier, again pulling each strand until it forms the desired shape—full and bouncy, or flat and sleek. If you’re someone with a shorter cut, you’d get a smaller brush and less drying time, but the process would essentially be the same.

  My hair is extremely curly, the texture is fine, and I have a lot of it. It tangles easily, and it responds to environmental factors unpredictably. Depending on the humidity, how hydrated I am, and which products I last used, my hair could be coiled tightly, or it could be a frizzy mess. At a typical salon, even one where you’ve never experienced any problems, the sink causes complications for me. The person washing my hair will usually need a lot more space than what’s allowed by the bowl—and occasionally, my curls will wind up accidentally wrapped around the hose attachment, which is painful to separate. The only way to get a regular comb through my hair is when it’s wet and covered in something slippery, like a thick conditioner. (You can forget about a brush.) The force of a regular hair drier would leave my curls in knots. Some salons have a special attachment that diffuses the air—it looks like a plastic bowl with jalapeno-sized protrusions sticking out—but in order to use it effectively, I have to bend over and let my hair hang into it, and the stylist has to crouch down to position the drier correctly.

  About 15% of Caucasians have curly hair. Combine us with America’s Black/African American population, and that’s 79 million people, or about a quarter of the US population, who have a difficult time getting a haircut because, we can infer, the tools and built environment were designed by people with straight hair who didn’t prioritize social values, like empathy and inclusiveness, within their companies.9

  That’s a fairly innocuous example. Now consider a situation where the stakes were quite a bit higher than me getting my hair cut. In April 2017, gate agents for an overbooked United Airlines flight departing Chicago’s O’Hare International Airport came over the loudspeaker and asked passengers to give up their seats to airline employees in exchange for $400 and a complimentary room at a local hotel. No one took the offer. They upped the compensation to $800 plus the hotel room, but again, there were no takers. Meanwhile, priority passengers had already started boarding, including those who had reserved seats in first class.

  An algorithm and an automated system chose four people to bump, including Dr. David Dao and his wife, who is also a physician. He called the airline from his seat, explaining that he had patients to see the following day. While the other passengers complied, Dao refused to leave. Chicago Department of Aviation officials threatened Dao with jail time if he didn’t move. You are undoubtedly familiar with what happened next, because video of the incident went viral on Facebook, YouTube, and Twitter and was then rebroadcast for days on news networks around the world. The officials grabbed Dao by his arms and forcibly removed him from his seat, during which they knocked him into the armrest, breaking his glasses and cutting his mouth. His face covered in blood, Dao suddenly stopped screaming as the officials dragged him down the aisle of the United plane. The incident traumatized both Dao and the other passengers, and it created a public relations nightmare for United, which ultimately resulted in a Congressional hearing. What everyone wanted to know: How could something like this happen in the United States?

  For the majority of airlines worldwide, including United, the boarding procedure is automated. On Southwest Airlines, which doesn’t create seat assignments but instead gives passengers a group (A, B, or C) and a number and has them board in order, all of that sorting is done algorithmically. The line is prioritized based on the price paid for the ticket, frequent flier status, and when the ticket was purchased. Other airlines that use preassigned seats board in priority groups, which are also assigned via algorithm. When it’s time to get on the plane, gate agents follow a set of instructions shown to them on a screen—it’s a process designed to be followed strictly and without deviation.
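
  To make the idea concrete, here is a minimal sketch of what that kind of priority sorting might look like in code. It is purely illustrative: the field names, weighting, and Python implementation are my own assumptions, not Southwest’s or any other airline’s actual system.

```python
# A purely illustrative sketch of priority-based boarding order, loosely based on
# the factors described above (fare paid, frequent-flier status, purchase time).
# The field names and sort order are assumptions, not any airline's real system.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Passenger:
    name: str
    fare_paid: float        # higher fares board earlier
    elite_tier: int         # 0 = no status; higher tiers board earlier
    purchased_at: datetime  # earlier purchases board earlier

def boarding_order(passengers: list[Passenger]) -> list[Passenger]:
    # Sort by status, then fare, then purchase time; the gate agent simply
    # follows the resulting list as it appears on the screen.
    return sorted(
        passengers,
        key=lambda p: (-p.elite_tier, -p.fare_paid, p.purchased_at),
    )
```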

  I was at a travel industry meeting in Houston a few weeks after the United incident, and I asked senior technology executives what role AI might have played. My hypothesis: the algorithmic decision-making dictated a set of predetermined steps to resolve the situation without using any context. The system decided that there weren’t enough seats, calculated the amount of compensation to offer initially, and, when no resolution was achieved, recalibrated the compensation offer. When a passenger didn’t comply, the system recommended calling airport security. The staff involved were mindlessly following what was on their screens, automatically obeying an AI system that wasn’t programmed for flexibility, circumstance, or empathy. The tech executives, who weren’t United employees, didn’t deny the real problem: on the day that Dao was dragged off the plane, human staff had ceded authority to an AI system that was designed by relatively few individuals who probably hadn’t thought enough about the future scenarios in which it would be used.
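
  Here is an equally hypothetical sketch of the rigid escalation logic I’m describing: fixed compensation steps, an automated choice of whom to bump, and a single scripted response to refusal. The function names, numbers, and structure are assumptions for illustration only, not United’s actual software.

```python
# A hypothetical sketch of a context-free overbooking workflow like the one
# described above: escalate through fixed compensation offers, then select
# passengers to bump by priority and hand staff a single scripted next step.
# All names, numbers, and logic here are illustrative assumptions.

COMPENSATION_STEPS = [400, 800]  # fixed offers; no room for human judgment

def resolve_overbooking(seats_short, count_volunteers_at, passengers_by_priority):
    """passengers_by_priority is ordered lowest-priority first."""
    for offer in COMPENSATION_STEPS:
        if count_volunteers_at(offer) >= seats_short:
            return {"action": "volunteers", "offer": offer}
    # No volunteers at any scripted price: the system picks whom to remove,
    # without any sense of context or circumstance.
    return {"action": "involuntary_bump",
            "passengers": passengers_by_priority[:seats_short]}

def handle_refusal(seat):
    # The script leaves the gate agent exactly one next step; empathy isn't a branch.
    return f"Notify airport security about the passenger in seat {seat}"
```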

  The tools and built environments of hair salons and the platforms powering the airline industry are examples of something called Conway’s law, which says that in the absence of stated rules and instructions, the choices teams make tend to reflect the implicit values of their tribe.

  In 1968, Melvin Conway, a computer programmer and high school math and physics teacher, observed that systems tend to reflect the values of the people who designed them. Conway was specifically looking at how organizations communicate internally, but later studies at Harvard and MIT supported his idea more broadly. Harvard Business School analyzed different codebases, looking at software that was built for the same purpose but by different kinds of teams: those that were tightly controlled and those that were more ad hoc and open source.10 One of their key findings: design choices stem from how teams are organized, and within those teams, bias and influence tend to go overlooked. As a result, a small supernetwork of individuals on a team wields tremendous power once their work—whether that’s a comb, a sink, or an algorithm—is used by or on the public.

  Conway’s law applies to AI. From the very beginning, when the early philosophers, mathematicians, and automata inventors debated mind and machine, there has been no singular set of instructions and rules—no values algorithm describing humanity’s motivation and purpose for thinking machines. There has been divergence in the approach to research, frameworks, and applications, and today there’s a divide between the developmental track for AI in China and the West. Therefore, Conway’s law prevails, because the tribe’s values—their beliefs, attitudes, and behaviors as well as their hidden cognitive biases—are so strongly entrenched.

 
