Voices from the Valley

by Ben Tarnoff


  How would you characterize the politics of people within Google, and within the tech industry more broadly?

  Tech has an eclectic mix of political beliefs.

  I would say that most rank-and-file people in tech tend to be on the liberal or socialist side of the spectrum. They believe in democratic institutions and government and things like that. But then you also have very libertarian people. For them, governments are bad at understanding technology, so any attempt to regulate it will be unhelpful, misguided, or even straight-up malicious.

  Then you have the actual executives of these companies, who are often socially liberal but very fiscally conservative. They’re multimillionaires or billionaires, so they would rather not pay taxes. They do everything that they can to reduce the taxes that the corporation pays and the taxes that they personally pay, because it’s a huge chunk of their net worth.

  The politics of tech mostly falls into this tripartite division.

  It’s hard to separate the Damore controversy from the political context of the 2016 election and its aftermath, which really energized the alt-right and the other right-wing elements that you described. But in that same period, you also start to see a more critical mood about tech and a sharper tone toward Silicon Valley from mainstream journalists and politicians—a cultural shift that is sometimes called the “techlash” today. How did that shift manifest within Google?

  As long as I can remember, there was always a basic recognition within Google that big tech companies have real power—that their decisions can affect the geopolitics of the whole world. In 2010, very early in my tenure at Google, the company pulled out of China because the Chinese Communist Party was hacking into Gmail accounts belonging to dissidents and reporters.8 Up to that point, Google had been offering censored search results on Google.cn. In response to the hacking, the company said they would start providing uncensored search results or nothing at all—which quickly became nothing at all.

  So it was always clear that what we did mattered. And that recognition was what motivated the rank-and-file campaign around the Google Plus real-names policy: people saw that there were downsides to the policy that would negatively affect certain groups.

  But I would say that 2016 and the aftermath brought these issues into much sharper focus. Algorithmic news feeds, fake news, content that’s misleading or scammy or worse—Cambridge Analytica is one famous example.9

  Overall, there was more and more of an understanding within Google and within the tech industry more generally of the consequences of what our companies were building. And it felt like a real departure from the old techno-utopian idea that if you just provide access to information, everything will turn out great. People on the internet are jerks. You have to design your systems with the assumption that hostile actors are going to try to use them to do bad things in various ways. And those actors aren’t always just individual assholes. They’re often part of large, well-coordinated groups. We’re in the middle of a planetary information war.

  Do the Right Thing

  How did that greater understanding of the consequences of what the tech industry was building feed into the rank-and-file campaigns within Google against Project Maven and Dragonfly?10

  This returns to our discussion earlier about the reorganization of the company that started after Larry Page became CEO in 2011, and which continued when Sundar Pichai took over in 2015.

  The way the company was restructured into different divisions with distinct product areas changed the incentives when it came to pursuing controversial projects like reopening Search in China or working with the U.S. Department of Defense.

  Take the Department of Defense. One of the divisions is Google Cloud. They want to be number one in cloud computing. They want to beat Amazon and Microsoft and the other competitors in the market. So for the senior vice president in charge of that division, it’s a no-brainer to take military contracts. At the end of the day, what matters is increasing revenue for that division.

  The early Google was different. Back then, it was clear that 90 percent of Google was Search, and everything else was free fun stuff that would eventually redirect people to Search. So you could make the argument that if Google engages in projects that compromise its credibility, people will trust Google less, and Search revenue will go down. Now that the company is split up into these separate fiefdoms, it’s harder to make that case. Cloud doesn’t really care if they take a controversial contract that undermines trust in Search.

  By the same token, I’d imagine there’s less room for projects like Google Books in the new structure.

  Yeah. It’s more hierarchical and has less of that academic feel. The number of engineers and product managers and designers that you can have working on your project is driven by the business case for that project. It’s far less of the freewheeling atmosphere of, “Sure, we can have ten or fifty people working on this experimental thing without knowing whether there’s revenue there or not.”

  So there are fewer organic projects growing out of the curiosity of small teams. The direction is coming from the top and reflects specific business objectives, such as the need to break into this market or beat this competitor.

  In the earlier situation with Google Plus, you said that the feedback mechanism was working. Employees raised their concerns and were able to make a change. By contrast, the rank-and-file campaigns around Dragonfly and Project Maven looked a lot different. They were bigger, more combative, and even spilled into the media. What changed?

  In the Google Plus situation, there was an escalation path and a dialogue between rank-and-file workers and upper management. It was mediated by a senior engineer on the project who served as a kind of liaison. He would answer the questions about the real-names policy at TGIF, Google’s weekly all-hands meeting, with a level of candor and humanness that the other execs did not really exude.

  It was clear that he understood the reason people had problems. He was willing to compromise—even if there were challenges, even if it was going to take a while. He also had credibility on both sides: as one of the project’s technical leaders, he was trusted by the rank-and-file engineers, but he was also trusted by upper management. Management was used to respecting his technical decisions, so they respected his arguments about other aspects of the project as well.

  He left Google a couple of years ago. When he did, we lost a good liaison between the two sides. But as Google has gotten larger, I also think there’s a growing feeling among the executives that this kind of back-and-forth isn’t worth it. They feel impatient. They don’t have time.

  Presumably the reorganization you’ve been describing amplifies this tendency. In a more hierarchical structure, executives can rely more on directives than dialogue.

  Sundar has said on more than one occasion that Google doesn’t run the company by referendum. Which is not something that anybody has actually asked for! It’s a very strange response to employee concerns.

  The point is not necessarily to make every decision democratically but to at least help employees understand the reasons why a decision has been made. Then they’re free to disagree, and can refuse to work on the project, or even leave the company. But these days, the answers from management just come across as business-speaky and vague. They try to placate people without actually showing that they’ve understood the substance of the concerns that have been raised. That makes it hard to feel heard, or even to know your own feelings about a specific project.

  Do you think that your feelings about certain projects would have been different if the executives had done a better job of explaining the reasoning behind them?

  Dragonfly is one where I could see an ethical gray area. We were building a search engine that gave the Chinese government the ability to censor certain topics and pages, and to surveil specific citizens and their searches.

  On the other hand, people in China currently use Baidu, which is not very good.11 It returns all kinds of wrong answers about medical information that they search for. We know that’s a problem. We know they’re not going to get effective treatment. Baidu is bad for their health. So you could argue that if Google provided better search results with better medical knowledge, the Chinese people using our search engine would be healthier and live longer lives.

  I could see plausible arguments on either side. I could even line up on the side of Dragonfly being a net good if Google leadership had showed signs that they had understood and thought about these ethical issues ahead of time instead of after the fact—only after people raised concerns. After you’ve already built the prototype is not really the time to start thinking about the ethical ramifications. And the arguments that were actually presented by the executives were very bad. Like, as a college freshman I would’ve been able to tell that they weren’t valid arguments.

  It seems like the gap between rank-and-file Googlers and upper management is growing pretty dramatically in this period.

  A lot of what was missing was the mediation aspect. With Google Plus, we had somebody who could act as a go-between. We had an escalation path for concerns. You could send an email and get a response.

  I have never once received a response to an email that I wrote to a Google executive who is on the board now. It just doesn’t happen. They’re busy people. Maybe they read it, maybe they don’t. Either way, it’s not a useful mechanism for feedback. And as the company and the number of controversies have grown so much larger, the all-hands meeting has become much less useful. You can’t have a dialogue if all you get to do is ask one question every week or two.

  It’s also become harder to know who to even ask. When Dragonfly first became widely known internally, it wasn’t clear who was running the project. This felt intentional: the execs went into panic mode when Dragonfly was discovered, so they stonewalled. It wasn’t clear who you could ask questions of other than Sundar, and that remained the case for the first month or so that we knew about it. It is extremely weird to have no escalation path other than going up the org chart to your CEO.

  But the worker-led campaigns did produce real changes. Google appears to have pulled back from Dragonfly, saying they have no plans to launch a search engine in China. In the summer of 2018, Google announced that it would not renew its contract with the Pentagon for Project Maven. And later that year, Google dropped out of the bidding war for Joint Enterprise Defense Infrastructure [JEDI], a major cloud computing contract with the Pentagon.

  I have a friend whose opinion is that Google strongly believes in doing the right thing—so long as it doesn’t cost Google money.

  Honestly, I don’t know what the right level of cynicism is. With the JEDI contract, Google probably wouldn’t have won anyway because Amazon is so heavily favored.12 So, when the employee advocacy started, the execs might have figured they could placate the workers by not competing for something that they weren’t going to win anyway.

  Do you think Google executives are still looking for ways to placate? It seems like the tone has grown more hostile than that.13

  There’s definitely been a major loss of trust on both sides. One way this manifests is through leaking: information that would have previously remained confidential keeps getting leaked to media outlets.

  This creates a vicious cycle. Execs feel like they can’t say anything useful because anything they say might end up on Twitter. And workers don’t feel listened to because the execs aren’t saying anything useful—which then makes them more likely to try methods of pressure that don’t involve keeping the conversation inside the company.

  If I were Google leadership I don’t know how I would break this cycle. It’s probably mathematically impossible at this point.

  Presumably leaking can be impulsive: someone gets mad, and they talk to a reporter. But you’re saying that it can also be strategic. What’s the strategy?

  Media pressure is currently among the most useful forms of pressure that workers can exert on Google. They try to inflict a PR hit on the company for doing controversial things.

  This can also affect hiring and retention. If Google is seen by engineers who have many job prospects as a place that’s doing uncool or unethical work, people will simply take another job elsewhere. It’ll be harder for Google to get talent and in some cases to retain the existing talent because people object to these projects.

  A Deep Bench

  Do you think that Google has been particularly fertile ground for white-collar worker organizing compared with other big tech companies? It seems like management’s sensitivity to bad PR on the one hand and the relatively open internal culture on the other have played important enabling roles in these campaigns. I’m not sure the environment would be quite as favorable at a place like Amazon, for example.

  Google certainly is its own separate world in terms of company culture. I get the feeling from folks at Amazon or Microsoft or other places that they have fewer company-wide forums in which rank-and-file employees can express their displeasure about something.

  To be clear, these forums aren’t just about social or political or product issues. There are many mailing lists that anybody can join. There are mailing lists for people who like skiing and people who like video games and people who like music. There are mailing lists for people who are trying to go walk their dogs together every Thursday or whatever.

  So Google’s culture does seem somewhat unique in that way. The mailing lists make it easy to quickly organize a couple hundred to a couple thousand people around an issue. You saw that with all of the worker campaigns, going back to Google Plus. The feeling that I get from workers at other companies is that this sort of culture doesn’t exist elsewhere.

  Ultimately, you decided to leave the company. Can you tell us why?

  At some point, it felt like the controversies were stacking up faster than we could handle them. I could have made the decision to ignore them and just go heads-down on my engineering work. For a while, I tried.

  Over the years, even as my feelings about the company grew more complicated, I had felt an ethical duty to stay and to continue doing what I could to push for changes in the direction of certain projects. I knew that I could apply more pressure from within the company than from outside. But eventually it felt like there was no way that I could usefully participate in that process. I lost faith that my opinions would be reflected in product decisions anymore. So I decided to leave.

  Other people made different choices. Some people resigned much sooner. Some people are still around. One reason I felt all right about leaving, in fact, was because we’ve got a deep bench now. It’s far from over. There are a lot of people inside who are going to keep pushing.

  5

  The Data Scientist

  Ever since the region first emerged as an industrial zone after World War II, Silicon Valley has been reinventing itself. It was once known for microchips and mainframes. Then came personal computers and the web. These days, artificial intelligence looms large. Companies are investing heavily in AI and snapping up all the AI experts they can find. The next incarnation of Silicon Valley, it seems fair to say, will revolve around AI—if it doesn’t already.

  But what even is AI? This should be a simple question, but honest answers are surprisingly hard to find. Mystification and misinformation abound, amplified by a media that’s typically far too deferential to industry hype. We’re told that AI is about to revolutionize everything—among other things, by throwing millions of people out of work by automating away their jobs.

  This didn’t sound quite right to us, so we sat down with a veteran data scientist to learn more. The data scientist helped us sort the fact from the fiction, and obtain a clearer view of Silicon Valley’s next next big thing. When you strip away all the nonsense, what’s actually going on?

  All right, let’s get started with the basics. What is a data scientist? Do you self-identify as one?

  I would say the people who are the most confident about self-identifying as data scientists are almost universally frauds. They are not people you would voluntarily spend a lot of time with.

  There are a lot of people in this category who have only been exposed to a little bit of real stuff—they’re sort of peripheral. You actually see a lot of this with these strong AI companies: companies that claim to be able to build human intelligence using some inventive “Neural Pathway Connector Machine System,” or something.1 You can look at the profiles of every single one of these companies. There are always people who have strong technical credentials, and they are in a field that is just slightly adjacent to AI, like physics or electrical engineering.

  And that’s close, but the issue is that no person with a Ph.D. in AI starts one of these companies, because if you get a Ph.D. in AI, you’ve spent years building a bunch of really shitty models, or you see robots fall over again and again and again. You become so acutely aware of the limitations of what you’re doing that the interest just gets beaten out of you. You would never go and say, “Oh, yeah, I know the secret to building human-level AI.”

  In a way it’s sort of like my dad, who has a Ph.D. in biology and is a researcher back east, and I told him a little bit about the Theranos story.2 I told him their shtick: “Okay, you remove this small amount of blood, and run these tests…” He asked me what the credentials were of the person starting it, and I was like, “She dropped out of Stanford undergrad.” And he was like, “Yeah, I was wondering, since the science is just not there.” Only somebody who never actually killed hundreds of mice and looked at their blood—like my dad did—would ever be crazy enough to think that was a viable idea.
