The Big Nine

by Amy Webb

A tribe’s strong bonds are formed when people working closely together suffer setbacks and celebrate successes together. They wind up developing a set of shared experiences, which translate into a common lexicon, which in turn produces a common set of ideas, behaviors, and goals. This is why so many startup stories, political movements, and cultural juggernauts begin the same way: a few friends share a dorm room, home, or garage and work intensely on closely related projects.

  While the business epicenters for modern AI might be Silicon Valley, Beijing, Hangzhou, and Shenzhen, colleges are the lifeblood of AI’s tribes. There are just a few hubs. In North America, they include Carnegie Mellon, Georgia Institute of Technology, Stanford, UC Berkeley, University of Washington, Harvard, Cornell, Duke, MIT, Boston University, McGill University, and the Université de Montréal. These universities are home to active academic research groups with strong industry ties.

  Tribes typically observe rules and rituals, so let’s explore the rites of initiation for AI’s tribes. It begins with a rigorous university education.

  In North America, the emphasis within universities has centered on hard skills—like mastery of the R and Python programming languages, competency in natural language processing and applied statistics, and exposure to computer vision, computational biology, and game theory. It’s frowned upon to take classes outside the tribe, such as a course on the philosophy of mind, Muslim women in literature, or colonialism. If we’re trying to build machines capable of thinking the way humans do, it seems counterintuitive to exclude learning about the human condition. Right now, courses like these are intentionally left off the curriculum, and it’s difficult to make room for them as electives outside the major.

  The tribe demands skills, and there’s a lot to cram in during four years of undergraduate study. For example, at Stanford, students must take 50 credit hours of intense math, science, and engineering classes, in addition to 15 hours of core computer science courses. While there is an ethics course offered as part of the major, it’s only one of five electives that can be taken to fulfill the requirement.16 Carnegie Mellon launched a brand-new AI major in 2018, which gave the school a fresh start and the opportunity to design the curriculum from scratch. But the rules and rituals of the tribe prevailed, and hard skills are what matter. While the degree does require one ethics class and some courses in the humanities and arts, those courses focus mostly on neuroscience (e.g., cognitive psychology, human memory, and visual cognition), which makes sense given the link between AI and the human mind. There are no required courses that teach students how to detect bias in data sets, how to apply philosophy to decision-making, or the ethics of inclusivity. There is no formal acknowledgment throughout the courses that social and socioeconomic diversity are just as important to a community as biodiversity.

  Skills are taught experientially—meaning that students studying AI don’t have their heads buried in books. In order to learn, they need lexical databases, image libraries, and neural nets. For a time, one of the more popular neural nets at universities was called Word2vec, and it was built by the Google Brain team. It was a two-layer system that processed text, turning words into numbers that AI could understand.17 For example, it learned that “man is to king as woman is to queen.” But the system also decided that “father is to doctor as mother is to nurse” and “man is to computer programmer as woman is to homemaker.”18 The very system students were exposed to was itself biased. If someone wanted to analyze the further-reaching implications of sexist code, there weren’t any classes where that learning could take place.
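  To make that mechanism concrete, here is a minimal sketch of the word-analogy arithmetic described above, written in Python with the gensim library. It assumes the publicly available pretrained Google News vectors can be fetched through gensim’s downloader; the printed results are illustrative and not drawn from the coursework discussed in this chapter.

    # Word2vec analogies as vector arithmetic (a sketch; assumes the pretrained
    # "word2vec-google-news-300" vectors are available via gensim's downloader).
    import gensim.downloader as api

    vectors = api.load("word2vec-google-news-300")  # each word maps to a 300-dimensional vector

    # "man is to king as woman is to ?"  ->  vector(king) - vector(man) + vector(woman)
    print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
    # typically returns something close to [('queen', 0.71)]

    # The same arithmetic surfaces the biased associations quoted above.
    print(vectors.most_similar(positive=["doctor", "mother"], negative=["father"], topn=1))
    print(vectors.most_similar(positive=["computer_programmer", "woman"], negative=["man"], topn=1))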

  In 2017 and 2018, some of these universities developed a few new ethics courses in response to the challenges already posed by AI. The Berkman Klein Center at Harvard and the MIT Media Lab jointly offered a new course on ethics and the regulation of AI.19 The program and lectures were terrific,20 but the course was hosted outside of each university’s standard computer science tracks—meaning that what was being taught and discussed didn’t have the opportunity to percolate up into other parts of the curriculum.

  To be sure, ethics is a requirement of all universities teaching AI—it’s written into the accreditation standards. In order to be accredited by the Accreditation Board for Engineering and Technology, computer science programs are required to show that students have an “understanding of professional, ethical, legal, security, and social issues and responsibilities” and an “ability to analyze the local and global impact of computing on individuals, organizations, and society.” However, I can tell you from experience that benchmarking and measuring this kind of requirement is subjective at best, and incredibly hard to do with any accuracy, especially without required courses that all students must take. I’m a member of the Accrediting Council on Education in Journalism and Mass Communications. The curricula for journalism and mass communications programs tend to focus on the humanities and what you might call softer skills, like reporting, writing, and media production. And yet our academic units regularly struggle to meet our own standards for social issues and responsibilities, including diversity. Schools can still qualify for accreditation without meeting compliance standards for diversity—that isn’t unique to the accreditation board on which I serve. Without enforcing the standards more stringently and without serious effort within universities, how could a hard-skills curriculum like AI’s possibly make a dent in the problem?

  College is tough enough, and the new-hire incentives offered by the Big Nine are competitive. While elective courses on African literature or the ethics of public service would undoubtedly broaden the worldviews of those working in AI, there’s intense pressure to keep the ecosystem growing. The tribe instead wants to see proof of skills so that when graduates enter the workforce, they hit the ground running and are productive members of the team. In fact, the very elective courses that could help AI researchers think more intentionally about all of humanity would likely hurt them during the recruiting process. That’s because the Big Nine use AI-powered software to sift through resumes, and it’s trained to look for specific keywords describing hard skills. A portfolio of coursework outside the standard subjects would either register as an anomaly or render the applicant invisible.

  The AI scanning through resumes proves that bias isn’t just about race and gender. There’s even a bias against philosophy, literature, theoretical physics, and behavioral economics, since candidates with lots of elective courses outside the traditional scope of AI tend to get deprioritized. The tribe’s hiring system, designed to automate the cumbersome task of doing a first pass through thousands of resumes, would potentially leave these candidates, who have a more diverse and desirable academic background, out of consideration.
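  As a rough illustration of how a first pass like this can sideline such candidates, consider the hypothetical Python sketch below; the keyword list, scoring, and cutoff are invented for illustration and do not describe the Big Nine’s actual screening software.

    # Hypothetical keyword-based resume screen (illustrative assumptions only).
    HARD_SKILL_KEYWORDS = {
        "python", "tensorflow", "natural language processing",
        "computer vision", "applied statistics", "deep learning",
    }

    def score_resume(text: str) -> int:
        """Count how many hard-skill keywords appear in the resume text."""
        lowered = text.lower()
        return sum(1 for keyword in HARD_SKILL_KEYWORDS if keyword in lowered)

    def first_pass(resumes: list[str], cutoff: int = 3) -> list[str]:
        """Keep only resumes that mention enough hard skills. A candidate whose
        transcript leans toward philosophy or literature scores low and is
        silently dropped, which is the deprioritization described above."""
        return [resume for resume in resumes if score_resume(resume) >= cutoff]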

  Academic leaders will be quick to argue that they would be open to a mandatory ethics class if the tribe demanded a broader curriculum. (Which it does not.) Adding equally rigorous humanities courses, like comparative literature and world religions, would force needed skills-based classes off the schedule. Students would bristle at being forced to take what appear to be superfluous courses, while industry partners want graduates primed with top-tier skills. With intense competition for the best and brightest students, why would any of these prestigious programs, like those at Carnegie Mellon and Stanford, mess with success?

  Technology is moving way faster than the levers of academia. A single, required ethics course—specifically built for and tailored to students studying AI—won’t do the trick if the material isn’t current, and especially if what’s being taught doesn’t reverberate throughout other areas of the curriculum. If the curriculum can’t change, then what about individual professors? Maybe they could be empowered to address the problem? That’s unlikely to happen at scale. Professors have little incentive to modify their syllabi to relate what they’re teaching back to questions about technological, economic, and social values. That would take up precious time, and it could make their syllabi less attractive to students. Universities want to show a strong record of employed graduates, and employers want graduates with hard skills. The Big Nine are partners with these universities, which rely on their funding and resources. Yet it seems that difficult questions—who owns your face?—are best asked and debated in the safe confines of a classroom, before students become members of teams that are regularly sidelined by product deadlines and revenue targets.

  If universities are where AI’s tribes form, it’s easy to see why there’s so little diversity in the field relative to other professions. In fact, industry executives are quick to point the finger at universities, blaming poor workforce diversity on what they say is AI’s “pipeline problem.” This isn’t entirely untrue. AI’s tribes form as professors train students in their classrooms and labs, and as students collaborate on research projects and assignments. Those professors, their labs, and the leadership within AI’s academic units are again overwhelmingly male and lacking in diversity.

  In universities, PhD candidates serve three functions: to collaborate on research, to teach undergraduate students, and to lead future work in their fields. Women receive only 23% of PhDs awarded in computer science, and only 28% awarded in mathematics and statistics, according to recent data from the National Center for Education Statistics.21 The academic pipeline is leaky: female PhDs do not advance to tenured positions or leadership roles at the same rate as men. So it should not come as a surprise that women received only 18% of undergraduate computer science degrees in recent years—and that’s actually down from 37% in 1985.22 Black and Hispanic PhD candidates are woefully underrepresented—just 3% and 1% respectively.23

  As the tribe scales, it’s expanding within a bubble and bringing out some terrible behaviors. Female AI researchers within universities have had to deal with sexual harassment, inappropriate jokes, and generally crappy behavior by their male counterparts. As that behavior is normalized, it follows the tribe from college into the workforce. So it isn’t a pipeline problem as much as a people problem. AI’s tribes are inculcating a culture in which women and certain minorities—like Black and Hispanic people—are excluded, plain and simple.

  In 2017, a Google engineer sent around a now-infamous memo arguing that women are biologically less suited to programming. Google’s CEO Sundar Pichai eventually responded by firing the guy who wrote the memo, but he also said, “Much of what was in that memo is fair to debate.”24 Cultures that are hostile to nontribe members have a compounding effect, resulting in an even less diverse workforce. As work in AI advances toward systems capable of thinking for and alongside humanity, entire populations are being left out of the developmental track.

  This isn’t to say that there are no women or people of color working in universities. The director of MIT’s famed Computer Science and Artificial Intelligence Laboratory (CSAIL) is Daniela Rus, a woman who counts a MacArthur Fellowship among her many professional and academic achievements. Kate Crawford is a Distinguished Research Professor at New York University and heads a new institute there focused on the social implications of AI. There are women and people of color doing tremendous work in AI—but they’re dramatically underrepresented.

  If the tribe’s goal is to imbue AI with more “humanistic” thinking, it’s leaving a lot of the humans out of the process. Fei-Fei Li, who runs Stanford’s Artificial Intelligence Lab and is Google Cloud’s chief scientist of artificial intelligence and machine learning, said,

  As an educator, as a woman, as a woman of color, as a mother, I’m increasingly worried. AI is about to make the biggest changes to humanity, and we’re missing a whole generation of diverse technologists and leaders.… If we don’t get women and people of color at the table—real technologists doing the real work—we will bias systems. Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible.25

  China’s Tribes: The BAT

  Baidu, Alibaba, and Tencent, collectively known as the BAT, are China’s side of the Big Nine. The AI tribe in the People’s Republic of China operates under different rules and rituals, which include significant government funding, oversight, and industrial policies designed to propel the BAT forward. Together, they are part of a well-capitalized, highly organized, state-level AI plan for the future, one in which the government wields tremendous control. This is China’s space race, and we are the Sputnik to their Apollo mission. We might have gotten to orbit first, but China has put its sovereign wealth fund, education system, citizens, and national pride on the line in its pursuit of AI.

  China’s AI tribes begin at universities, too, where there is even more focus on skills and commercial applications. Because China is interested in ramping up the country’s skilled workforce as quickly as possible, its diversity problems aren’t exactly analogous to the West, though they do exist. Gender isn’t as much of a consideration, so women are better represented. That said, classes are taught in Chinese, which is a tough language for foreigners to learn. This excludes non-Chinese speakers from the classroom and also creates a unique competitive advantage, since Chinese university students tend to have studied English and could attend a wider pool of universities.

  In China, AI training begins before students enter university. In 2017, China’s State Council called for AI fundamentals and coursework to be included in school curricula, which means that Chinese kids begin learning AI skills in elementary school. There is now an official, government-ordered textbook detailing the history and fundamentals of AI. By 2018, 40 high schools had piloted a compulsory AI course,26 and more schools will be included once additional teachers become available. That should be soon: China’s Ministry of Education launched a five-year AI training program that intends to train at least 500 teachers and 5,000 students at China’s top universities.27

  The BAT is part of China’s education revolution, providing the tools used in schools and universities, making the products consumers use as teens and adults, hiring graduates into the workforce, and sharing research with the government. Unless you’ve lived in or traveled to China in the past decade, you may not be familiar with Baidu, Alibaba, and Tencent. All three were founded around the same time, using existing tech companies as their templates.

  Baidu got started at a 1998 summer picnic in Silicon Valley—one of those insider gatherings bringing together AI tribe members over beer and lawn darts. Three men, all in their 30s, were bemoaning how little search engines were advancing. John Wu, who at the time was the head of Yahoo’s search engine team, and Robin Li, who was an engineer at Infoseek, believed that search engines had a bright future. They’d already seen a promising new startup—Google—and thought they could build something similar for China. Together with the third man, Eric Xu, a biochemist, they formed Baidu.28

  The company recruited from AI’s university hubs in North America and China. It was especially good at poaching talented researchers working on deep learning. In 2012, Baidu approached Andrew Ng, a prominent researcher at Google’s Brain division. He’d grown up in Hong Kong and Singapore and had done a tour of the AI tribe’s university hubs: a computer science undergraduate degree at Carnegie Mellon, a master’s at MIT, and a PhD from UC Berkeley. At the time, he was on leave from Stanford, where he was a professor. Ng was attractive to Baidu because of a startling new deep neural net project he’d been working on at Google.

  Ng’s team had built a cluster of 1,000 computers that had trained itself to recognize cats in YouTube videos. It was a dazzling system. Without ever being told explicitly what a cat was, the AI ingested millions of hours of random videos, learned to recognize objects, figured out that some of those objects were cats, and then learned what a cat was. All on its own, without human intervention. Shortly after, Ng was at Baidu, which had recruited him to be the company’s chief scientist. (Necessarily, this means that the DNA of Baidu includes nucleotides from the AI courses taught at Carnegie Mellon, MIT, and UC Berkeley.)

  Today, Baidu is hardly just a search engine. Ng went on to help get Baidu’s conversational AI platform (called DuerOS), digital assistant, and self-driving programs, as well as other AI frameworks, off the ground—and that positioned Baidu to begin talking about AI in its earnings calls well ahead of Google. Baidu now has a market cap of $88 billion and is the second most used search engine in the world, behind Google—quite an accomplishment, considering Baidu isn’t used outside of China. Like Google, Baidu is building a suite of smart home devices, such as a robot intended for the home that combines voice recognition and facial recognition. The company announced an open platform for autonomous driving called Apollo, and the hope is that making its source code publicly available will cause the ecosystem around it to blossom. It already has 100 partners, including automakers Ford and Daimler, chipmakers NVIDIA and Intel, and mapping-services providers like TomTom. Baidu partnered with California-based Access Services to launch self-driving vehicles for people with mobility issues and disabilities. And it partnered with Microsoft’s Azure Cloud to allow Apollo’s non-Chinese partners to process vast amounts of vehicle data.29 You should also know that in recent years, Baidu opened a new AI research lab in cooperation with the Chinese government—and the lab’s leaders are Communist Party elites who’d previously worked on state military programs.30

  The A in China’s BAT is Alibaba Group, which acts as a middleman between buyers and sellers through a massive network of websites rather than a single platform. It was founded in 1999 by Jack Ma, a former professor living about 100 miles southwest of Shanghai who wanted to create a hybrid of Amazon and eBay for China. Ma himself didn’t know how to code, so he started the company with a university colleague who did. Just 20 years later, Alibaba has a market cap of more than $511 billion.

  Among its sites is Taobao, on which neither buyers nor sellers are assessed a fee for their transactions. Instead, Taobao uses a pay-to-play model, charging sellers to rank them higher in the site’s search results. (This mimics part of Google’s core business model.) Alibaba also built secure payment systems, including Alipay, which resembles PayPal in functionality and features. It launched an AI-powered “smile to pay” system, debuting a facial recognition kiosk in 2017 that allows consumers to pay by smiling briefly into a camera.

 
