The Big Nine


by Amy Webb


  Two-thirds of American adults now use Facebook,72 and most of those people use the social network at least once a day, which means that even if you don’t use it, someone close to you most likely does. There are at most one or two degrees of separation between you and Facebook, even if you’ve never “liked” someone’s post—and even if you’ve deleted your account. Nearly half of all American households are Amazon Prime subscribers, so there are between one and three degrees of separation between you and Amazon.73 If you’ve visited a doctor’s office in the past decade, there is just one degree of separation between you and Microsoft or IBM. Fully 95% of Americans own smartphones,74 leaving only one degree of separation between you and Google or Apple.

  By virtue of being alive sometime in the past two decades, you have been generating data for the G-MAFIA, even if you don’t use their services and products. That’s because we’ve acquired a tremendous number of gadgets and smart devices that generate data—our mobile phones, GPS devices, smart speakers, connected TVs and DVRs, security cameras, fitness trackers, wireless garden monitors, and connected gym equipment—and because so much of our communications, shopping, work, and daily living happens on the G-MAFIA’s platforms.

  In the United States, third parties can get access to all of that data for commercial purposes or to make the various systems we rely on more useful. You can now shop on lots of websites using the credit card and address you’ve stored at Amazon. You can log into lots of different websites using your Facebook credentials. The ability to use the G-MAFIA for other services is linked to all the data we generate—in the form of photos, audio files, videos, biometric information, digital usage, and the like. All of our data is stored in “the cloud,” a buzzword that refers to the software and services that run on the internet rather than on your personal device. And—perhaps unsurprisingly—there are four primary cloud providers: Google, Amazon, Microsoft, and IBM.

  You’ve accessed the cloud directly (for example, creating shared Google docs and spreadsheets) and indirectly (when your mobile phone automatically syncs and backs up the photos you’ve taken). If you own an iPhone or iPad, you’re using Apple’s private cloud. If you accessed Healthcare.gov in the US, you were on Amazon’s cloud. If your kid had a Build-A-Bear birthday party at the mall, it was coordinated using Microsoft’s cloud. In the past decade, the cloud became a big deal—so much so that we don’t really think of it as particularly interesting, or noteworthy, or technologically exciting. It just exists, like electricity and running water. We only really think about it when our access is cut off.

  We’re all generating data and using the cloud with a blind faith in the AI tribes and the commercial systems they’ve created. In the US, our data is far more revealing than the social security number we’ve been taught to guard so carefully. With our social security numbers, someone can open a bank account or apply for a car loan. With the data you’re generating in the cloud, the G-MAFIA could theoretically tell if you’re secretly pregnant, if your employees think you’re incompetent, or if you’re grappling with a terminal illness—and the G-MAFIA’s AI would probably know all of that well before you do. The godlike view the G-MAFIA have into our lives is not necessarily bad. In fact, there are numerous ways that mining our personal data for insights could result in all of us living healthier, happier lives.

  As powerful as the G-MAFIA’s cloud and AI sound, they are still hampered by one limitation: hardware. The current AI architecture has been good enough to build products with artificial narrow intelligence, like the spam filter in Gmail or Apple’s “visual voicemail” transcription service. But the G-MAFIA must also pursue artificial general intelligence (AGI), a longer-term play that is now visible on the horizon. And that requires customized AI hardware.

  The reason AGI requires customized hardware goes back to John von Neumann, the computer scientist mentioned earlier who developed the theory behind the architecture of modern computers. Remember, before von Neumann’s design, computers were fed their programs and data separately; in his architecture, programs and data are both held in the memory of the machine. That architecture still underpins our modern laptops and desktop computers, with data constantly moving between the processor and memory. If you don’t have enough of either, the machine starts running hot, throws an error message, or simply shuts down. This is the “von Neumann bottleneck”: just about all of our current computers are based on the von Neumann architecture, and no matter how fast the processor is capable of working, it cannot execute programs any faster than it can retrieve instructions and data from that shared memory.
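The bottleneck is easier to see in a toy model. The sketch below is a hypothetical miniature von Neumann machine in Python; the instruction set and names are invented for illustration, not taken from any real CPU. Program and data live in one shared memory, so every step of work costs at least one trip across the same memory bus, no matter how fast the processor itself runs.

```python
# A toy von Neumann machine: program and data share one memory, and every
# instruction fetch or data access crosses the same memory "bus".
# The instruction set (LOAD/ADD/STORE/HALT) is invented for illustration.

def run(memory):
    acc = 0          # accumulator register
    pc = 0           # program counter
    bus_trips = 0    # count every trip to shared memory

    while True:
        instr = memory[pc]        # instruction fetch: one bus trip
        bus_trips += 1
        pc += 1
        op = instr[0]
        if op == "HALT":
            break
        addr = instr[1]
        if op == "LOAD":
            acc = memory[addr]    # data read: another bus trip
            bus_trips += 1
        elif op == "ADD":
            acc += memory[addr]   # data read: another bus trip
            bus_trips += 1
        elif op == "STORE":
            memory[addr] = acc    # data write: another bus trip
            bus_trips += 1
    return memory, bus_trips

# Program (cells 0-3) and data (cells 5-7) occupy the same memory.
mem = [("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT",), None, 2, 3, 0]
mem, trips = run(mem)
print(mem[7], trips)  # → 5 7  (2 + 3 stored at cell 7; seven bus trips)
```

Even this four-instruction program needs seven trips over the bus, and a faster processor would not reduce that number by one; only moving or restructuring memory would.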

  The bottleneck is a big problem for AI. Right now, when you talk to your Alexa or Google Home, your voice is being recorded, parsed, and then transmitted to the cloud for a response—given the physical distance between you and the various data centers involved, it’s mind-blowing that Alexa can talk back within a second or two. As AI permeates more of our devices—in the form of smartphones with biometric sensors, security cameras that can lock onto our faces, cars that drive themselves, or precision robots capable of delivering medicine—a one- or two-second processing delay could lead to a catastrophic outcome. A self-driving car can’t ping up to the cloud for every single action because there are far too many sensors that would need to continually feed data up for processing.
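A back-of-the-envelope calculation makes the stakes concrete. Using assumed, illustrative latencies (not measured values), this sketch compares how far a car travels while waiting on a round trip to a distant data center versus an on-board chip.

```python
# Latency budget for a self-driving car, with illustrative numbers:
# how far does the car travel before it can react to what it "saw"?

speed_mps = 30.0           # ~67 mph / 108 km/h
cloud_round_trip_s = 0.2   # assumed 200 ms round trip to a data center
onboard_latency_s = 0.01   # assumed 10 ms for on-device inference

cloud_drift = speed_mps * cloud_round_trip_s
onboard_drift = speed_mps * onboard_latency_s
print(round(cloud_drift, 1), round(onboard_drift, 1))  # → 6.0 0.3 (meters)
```

Six meters of blind travel versus a third of a meter: at highway speed, that gap is the difference between braking in time and not braking at all, which is why the computation has to move into the car itself.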

  The only solution is to move the computing closer to the source of the data, which will reduce latency while also saving on bandwidth. This new kind of architecture is called “edge computing,” and it is the inevitable evolution of AI hardware and systems architecture. In order for AI to advance to the next stages of development, the hardware has to catch up. Rather than meeting the G-MAFIA in the cloud, where we still have some ability to set permissions and settings, we’ll soon need to invite them into all of the machines we use. What this means is that sometime in the next decade, the rest of the AI ecosystem will converge around just a few G-MAFIA systems. All the startups and players on the periphery—not to mention you and me—will have to accept a new order and pledge our allegiance to a few commercial providers who will act as the operating systems for everyday life. Once your data, gadgets, appliances, cars, and services are entangled, you’ll be locked in, and each new purchase, whether a mobile phone, a connected refrigerator, or smart earbuds, will bind you more tightly. Humanity is being made an offer that we just can’t refuse.

  Deep-learning computations need specialized hardware because they require a lot of power. Since they favor optimization over precision and consist mostly of dense linear algebra operations, it makes sense that hardware designed around neural networks would lead to greater efficiency and, more importantly, speed in the design and deployment process. The faster research teams can build and test real-world models, the closer they can get to practical-use cases for AI. For example, training a complicated computer vision model currently takes weeks or months—and the end result might only prove that further adjustments need to be made, which means starting over again. Better hardware means training models in a matter of hours, or even minutes, which could lead to weekly—or even daily—breakthroughs.
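The claim that deep learning boils down to dense linear algebra can be made concrete with a toy example. The sketch below, in plain Python with made-up numbers, shows that a single neural-network layer is just an input vector multiplied by a weight matrix, plus a bias, passed through a simple nonlinearity; chips like TPUs are built to do exactly this arithmetic in bulk.

```python
# One neural-network layer as dense linear algebra. The weights, bias,
# and input below are invented for illustration.

def dense_layer(x, weights, bias):
    # y_j = max(0, sum_i x_i * W[i][j] + b_j)   (ReLU activation)
    out = []
    for j in range(len(bias)):
        total = sum(x[i] * weights[i][j] for i in range(len(x))) + bias[j]
        out.append(max(0.0, total))
    return out

x = [1.0, 2.0]                    # input vector
W = [[0.5, -1.0], [0.25, 0.75]]   # 2x2 weight matrix
b = [0.1, -0.2]                   # bias vector
print(dense_layer(x, W, b))       # ≈ [1.1, 0.3]
```

A real model stacks thousands of these layers with matrices millions of entries wide, so a chip that multiplies and accumulates many numbers at once, even at reduced precision, beats a general-purpose processor doing them one at a time.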

  That’s why Google created its own custom silicon, called Tensor Processing Units (TPUs), chips designed to run its deep-learning AI framework, TensorFlow. As of June 2018, TensorFlow was the number one machine-learning platform on GitHub, the world’s largest online platform where software developers store their computer code. It had been downloaded more than 10 million times by developers in 180 countries, and at the time of this writing there were 24,500 active repositories.75 Adding to the framework, Google released additional products, like TensorFlow-GAN (a library for generative adversarial network modules) and the TensorFlow Object Detection API (which helps developers create more accurate machine-learning models for computer vision). TPUs are already in use at Google’s data centers, where they power deep-learning models on every Google Search query.

  Not for nothing, Google tried to acquire GitHub, which is used by 28 million developers worldwide and is an important platform for the Big Nine. But in June 2018, Google lost the bid to—wait for it—Microsoft.76

  Facebook partnered with Intel to develop an AI chip for the purpose of internal R&D, which the company needed to boost efficiency for faster experimentation. Apple developed its own “neural engine” chip to use inside its iPhone X, while Microsoft developed AI chips for its HoloLens mixed-reality headset and for its Azure cloud-computing platform. The BAT are also designing their own chips: In 2017, Alibaba began recruiting heavily in Silicon Valley for “AI chip architects,”77 and in 2018 it launched its own custom chips—the Ali-NPU—that are available for anyone to use on its public cloud.

  Anticipating a near-future need for better performance, IBM developed its TrueNorth neuromorphic chip several years ago, and it’s already pushing ahead on a new kind of hardware that could make neural nets 100 times more efficient. For context, this would be like comparing an abacus made out of sticks and stones to the transporter on Star Trek. The new kind of chip uses two kinds of synapses, one for long-term memory and the other for short-term computation.

  What we’re talking about is our modern-day equivalent of “Are you a PC or a Mac person?” jacked up on steroids. Most of these chips operate on frameworks that the Big Nine classify as “open source”—meaning that developers can access and use and enhance the frameworks for free. But the hardware itself is proprietary, and services come with subscription fees. In practice, this means that once an application is built for one framework, it will be incredibly hard to migrate it elsewhere. In this way, AI’s tribes are signing up new members—and a rite of initiation is kissing the ring of a G-MAFIA framework.

  In a drive to commercialize AI, the G-MAFIA is recruiting developers in creative ways. In May 2018, Google and the Coursera online learning platform launched a new machine-learning specialization. But you have to use TensorFlow. The five-part course, which includes a certificate for graduates, is described as a way for anyone to learn about machine learning and neural networks. Students need real-world data and frameworks, so they learn on Google’s framework.

  Hardware is part of the G-MAFIA’s AI strategy, which is also linked to the government in ways that are different from what we’ve seen in China but that should be equally concerning, even if you are not a US citizen. That’s because in the United States, AI serves three masters: Capitol Hill, Wall Street, and Silicon Valley. The people who actually write policy and debate regulation are in Congress or are career federal workers who tend to stay in their jobs for decades. But those who set the agenda for that policy, namely our president and the heads of big government agencies (e.g., the Federal Communications Commission, the Justice Department, and the like), rotate in and out of office every few years. There has been no clear national purpose or direction for AI.

  Only recently has there been a sharper focus on China and its plans for AI—and that’s primarily because President Xi published a long-term strategic plan focused on AI and the use of data. In the US, we have something called the Committee on Foreign Investment in the United States, or CFIUS. It’s a bipartisan group led by the Treasury secretary and made up of members of the Treasury, Justice, Energy, Defense, Commerce, State, and Homeland Security Departments. Their task is to review and investigate business deals that could put national security at risk. It was CFIUS that blocked Singapore’s Broadcom from acquiring Qualcomm, a San Diego–based chipmaker. CFIUS also rejected a takeover bid of Dallas-based MoneyGram by electronic payments company Ant Financial, whose parent company is Alibaba. At the time of this book’s writing, CFIUS wasn’t focused on AI, even though there were proposals to expand its reach to curb more of China’s investments in US companies.

  Meanwhile, in Silicon Valley, it’s common for employees to hop around, while AI’s tribal leaders tend to stay more fixed in their positions, splitting time between the G-MAFIA and universities. Therefore, AI keeps moving along its developmental track as the tribe’s mantra—build it first, and ask for forgiveness later—grows ever stronger. For years Google scanned and indexed copyrighted books without first seeking permission, and the company ended up in a class action lawsuit brought by publishers and authors. Google captured images of our homes and neighborhoods and made them searchable in Google Maps without asking us first. (People are avoided when possible, and their faces are blurred out.) Apple slowed down its older iPhones as its new models hit the shelves, and apologized afterward. Post–Cambridge Analytica, Facebook CEO Mark Zuckerberg published a general apology on his Facebook wall, writing, “For those I hurt this year, I ask for forgiveness and I will try to be better. For the ways my work was used to divide people rather than bring us together, I ask forgiveness.”

  Therefore, the G-MAFIA tend to move swiftly in developmental spurts until something bad happens, and then the government gets involved. Facebook’s data policies only attracted the attention of DC once a former Cambridge Analytica employee blew the whistle, explaining how easily our data had been scraped and shared. In 2016, in the wake of a shooting in San Bernardino, California, the federal government tried to order Apple to create a back door into an iPhone belonging to the terrorist. Government agencies and law enforcement argued that breaking the phone’s encryption and handing over private data was in the public’s interest, while privacy advocates said that doing so would violate civil liberties. Law enforcement managed to unlock the phone without Apple’s help, so we never found out which side was correct. In the United States, we may value our privacy, but we do not have clear laws that address our data in the 21st century.

  In the summer of 2018, staff from the office of Senator Mark Warner (D-VA) circulated a policy paper outlining various proposals to rein in our tech giants. They ranged from creating sweeping new legislation to mirror Europe’s aggressive GDPR rules, to a proposal that would designate web platforms as information fiduciaries that would have to follow a prescribed code of conduct, not unlike law firms.78 Just a few months later, Apple CEO Tim Cook went on Twitter to post a screed about the future of privacy, the big tech giants, and America. On October 24, he wrote that companies should make the protection of user privacy paramount. “Companies should recognize that data belongs to users and we should make it easy for people to get a copy of their personal data, as well as correct and delete it,” he wrote, continuing with, “Everyone has a right to the security of their data.”79 Sensing that regulation is becoming a real possibility in the US, Apple has been promoting its data protection services and the privacy protections embedded in its mobile and computer operating systems.

  We agree to constant surveillance in exchange for services. This allows the G-MAFIA to generate revenue so that it can improve and expand its offerings to us, whether we are individual consumers or enterprise customers like companies, universities, nonprofits, or government agencies. It’s a business model predicated on surveillance capitalism. Which, if we’re being completely honest, is a system we’re OK with here in the US—otherwise we’d have long stopped using services like Gmail, Microsoft Outlook, and Facebook. In order to work properly, they must gain access to our data trails, which are mined, refined, and packaged. I’m assuming that you use at least one of the products and services offered by the G-MAFIA. I use dozens of them with the full knowledge of the price I’m really paying.

  What’s implied here is that soon we won’t just be trusting the G-MAFIA with our data. As we transition from narrow AI to more general AI capable of making complex decisions, we will be inviting them directly into our medicine cabinets and refrigerators, our cars and our closets, and into the connected glasses, wristbands, and earbuds we’ll soon be wearing. This will allow the G-MAFIA to automate repetitive tasks for us, help us make decisions, and let us spend less of our mental energy thinking slowly. We will have zero degrees of separation between ourselves and the G-MAFIA. It will be impossible for lawmakers to assert any real authority once the whole of our existence is intertwined with these companies. But in exchange, what might we be giving up?

  The Big Nine—China’s BAT (Baidu, Alibaba, and Tencent) and America’s G-MAFIA (Google, Microsoft, Amazon, Facebook, IBM, and Apple)—are developing the tools and built environment that will power the future of artificial intelligence. They are members of the AI tribe, formed in universities where they inculcate shared ideas and goals, which become even more entrenched once graduates enter the workforce. The field of AI isn’t static. As artificial narrow intelligence evolves into artificial general intelligence, the Big Nine are developing new kinds of hardware systems and recruiting developers who get locked into their frameworks.

  AI’s consumerism model in the United States isn’t inherently evil. Neither is China’s government-centralized model. AI itself isn’t necessarily harmful to society. However, the G-MAFIA are profit-driven, publicly traded companies that must answer to Wall Street, regardless of the altruistic intentions of their leaders and employees. In China, the BAT are beholden to the Chinese government, which has already decided what’s best for the Chinese. What I want to know—and what you should demand an answer to—is what’s best for all of humanity? As AI matures, how will the decisions we make today be reflected in the decisions machines make for us in the future?

  CHAPTER THREE

 
