
Voices from the Valley


by Ben Tarnoff


  Do you think your daughter noticed?

  No. But I felt so mortified for her. I felt ashamed. I’d clearly made some sort of mistake, some kind of social faux pas.

  Later, I got angry. I didn’t want my daughter to be exposed to any of that. That’s fine for me, not for my kids. I didn’t want my child to be touched by this hierarchical crap.

  Does she still want to work in tech?

  She does not.

  Closing Time

  Tell us about how you came to stop working in tech.

  At some point, I began to realize that the company wasn’t doing so well. I started to pick up on it with one of my favorites. He always treated me like a regular person. Not putting me on a pedestal, not treating me like a servant—he just took me as I was. And that was actually sort of an exception.

  He was a little older than the average engineer, mid to late thirties. Just the sweetest teddy bear of a guy. The company had purchased a food delivery app, and it required a lot of integration, or something. They assigned him to the project, and it was a bumpy ride. Things didn’t seem to be pulling together. He was working insane hours. His body was a wreck. The stress was killing him. His back felt like Sheetrock.

  Did he ever talk about what exactly was going wrong?

  As I said before, I’m sure they were told not to discuss business, because almost nobody did. And I didn’t want to know. I would never want to carry trade secrets around in my head. I had too much to do. But I knew that whatever was happening, it wasn’t good.

  Things were failing left and right. This was around the time that people started to talk about the company going public. Then it just stopped. The fizzling of the IPO created a sense of upheaval. People started worrying about the potential for layoffs. The stress level definitely picked up. It drove a lot of traffic to our wellness center.

  That’s also when our wellness startup started to get mixed signals about its relationship with the tech company.

  How did that go down?

  After the failed IPO, the tech company brought in someone new to run HR. And that person terminated the contract with the wellness startup.

  Do you know why?

  The startup’s founders talked to me a lot. They said they routinely got feedback that we were so good at our jobs that the engineers were coming back to work too relaxed. So maybe that played a role in the demise of the program. Maybe stress and tension really do make you a better worker.

  7

  The Storyteller

  Among Silicon Valley’s chief exports are the stories it tells about itself. Every company needs a founding myth—extra points if it involves a garage. But as critical scrutiny of the industry has increased, storytelling has acquired a new urgency. More than ever, companies need to justify themselves and their actions to politicians and regulators and journalists and the general public, as well as to their own employees.

  The people who perform this work fall into a range of roles: communications, marketing, and public policy, among others. But their function is broadly similar. They help the company speak, and ensure that it speaks in a single voice.

  We talked to someone who, until recently, told these kinds of stories for a living. They helped us understand how Silicon Valley speaks, both to itself and to the world.

  * * *

  What was your most recent role in the tech industry?

  My job was to tell the right story for bringing a product to market. Let’s say there’s a new product, or a new feature of an existing product, that the company wants to launch. I helped craft a strategy for midwifing it into the world. My goal was to come up with a way to talk about the product so that it would not be misunderstood.

  What does it mean for a product to be misunderstood?

  My job was at a high-profile company that is closely scrutinized by the public and by the press. Its products can easily be misunderstood by external parties: by users, policymakers, journalists. So it was important to tell the story in a way that accurately conveyed the benefits but also factored in the risks.

  If there are aspects of a new user experience that could be perceived as self-serving for the company, or as benefiting only a subsection of its users, those misperceptions need to be anticipated and mitigated with thoughtful messaging.

  Could you give an example?

  Imagine a new feature that’s designed to make messaging more private. If it’s good for privacy, it might not be so good for safety. So certain privacy and human rights advocates might like it but parents and safety groups and law enforcement might not. You have to start with the stakeholders. Who is going to care about this? Who is going to be most affected? Whose stakes need to be understood and analyzed and managed?

  When it comes to telling the story, it’s almost like architecting a screenplay. You need to know who your protagonist is. This might be the person who is going to benefit from your feature—or the entire community at large. Or it might be the person who led the development of the feature within the company.

  But you don’t get the length of a screenplay, obviously. You have to be concise. You have a press release, or really just the headline of a press release, because that’s what positions the whole launch. Or you have the text that pops up in the app when someone tries to use the feature. That language is heavily labored over. It reflects a lot of care. It might be just a few words, but it’s been refined and reviewed by a lot of different people.

  And that kind of concision can be challenging, because a lot of tech products are highly technical. Sometimes their underlying dynamics are actually quite confusing and complicated. It’s hard to fit it all into a single sentence. Moreover, people don’t trust corporate motives. When a company speaks, it generally starts from a low level of credibility. Getting the message through can be difficult.

  Part of the reason for that mistrust is that it’s rarely in a corporation’s interest to be completely honest. In recent years, we’ve seen a lot of examples of tech companies being less than truthful with the public. Have you ever felt like you needed to tell a story that wasn’t completely true?

  Well, you always start with the truth. It’s the only way. If the truth is that you’re launching a new feature to improve advertisers’ ability to run ads on your platform, then you start with that. You can’t change the purpose of the product, but you can modify where the emphasis is in the explanation.

  I never felt pressured to tell a partial truth. But it’s not always a question of simple truth or falsity. It’s a question of completeness. Every company has a communications component because the message needs to be managed. Why does the message need to be managed? Because if it weren’t, in a company with a hundred thousand employees, you would have a hundred thousand different messages. It’s in the interest of the enterprise to have an organized voice.

  As for whether that organized voice ever tells an incomplete story—well, that’s kind of inherent in the job. As I mentioned earlier, you don’t have the airtime to tell a complete story. But I should say, that cuts both ways. Journalists also tell stories about our products, and their stories aren’t complete, either. They start with some truths, and inevitably tell their own kind of compelling story.

  But isn’t there a difference between journalists who may not have all the information but are trying to ascertain the truth as best they can, and companies who are deliberately spinning a story in a particular way?

  Look, I would never diminish the value of journalism. Obviously, we need to have journalists out there holding corporations and governments accountable. But every institution has its incentives. And the news media is incentivized for engagement. They need people to click.

  One could argue that the tech industry itself is responsible for making the business model of news so focused on engagement. I would say that the dynamic predated the rise of the big platforms, and that the platforms simply accelerated a trend that was already taking place.

  Who is Howard Dean? I think about this a lot. Howard Dean is a person who did a lot of stuff. But the only thing that anyone remembers about that guy is that he screamed once. Like, a mic was too loud and it sounded like he was yelling, and his campaign was completely derailed.

  Now we live in a media environment where we have a new Dean scream every few minutes. It’s an attention economy. The news is designed to attract people’s attention and keep it. And what does that well is scandal, controversy, conflict, House of Cards–style intrigue. Those are more exciting reads than unadorned facts stitched together to form a much less dramatic reality.

  What’s the reality?

  First off, when things go well, you don’t hear about it. When a company launches a feature that’s good for users, that nobody finds controversial, it’s not covered very widely. That happens all the time.

  And when things go wrong, the media narrative doesn’t typically capture the complexity of what’s happening internally, or the amount of work that’s happening behind the scenes. I think journalists sometimes assume that these companies are full of young people who don’t understand the seriousness of what they’re doing. They’re playing Ping-Pong, they’re bringing their dogs to work. It’s all fun and games.

  But in my experience, the challenges that these companies face are not taken lightly at all. The people who work at them, and the executives who run them, are deeply interested in getting it right. The mood is somber, serious.

  I’m not sure that the media thinks that the main problem in Silicon Valley is neglect. More often, it seems like the issue is greed—that tech companies are just trying to squeeze out as much profit as they can, no matter what the consequences are.

  Perhaps that is how things work at young companies. But at a larger enterprise, nobody wants a short-term gain at the expense of damaging user trust. It’s just not worth it.

  As humans, we love a good story. And a good story needs heroes and villains. So I understand why the media, and the public more broadly, can be quick to assign malicious motives when a problem arises. But that simply doesn’t accord with what I’ve seen from the inside.

  Balancing Acts

  You said that the people who work at these companies and the executives who run them understand the seriousness of the issues and what’s at stake. Was this always the case? Or has that understanding emerged more recently, since the various scandals around the 2016 election?

  I take umbrage at the idea that Silicon Valley didn’t face scrutiny before 2016. When I joined this particular company in 2013, there was already a ton of public attention. And the stakes and the significance of what we were doing were quite clear. That struck me very early on.

  To my mind, the techlash has been more a difference of degree than of kind. The scrutiny isn’t new. It just became more intense. The stakes became even higher. The year 2016 was clearly an inflection point because tech got bound up with big political issues, with the integrity of a national election, which is a subject that people are very justifiably passionate about.

  Let’s talk more about that. What’s your perspective on the controversies around Russian influence operations during the 2016 election? How do you think the platforms handled it?

  People made mistakes. But things that seem obvious now in retrospect just weren’t obvious at the time. Or maybe there weren’t the right incentives for people to be thinking about the right problems.

  Fundamentally, the products were misused. A sovereign nation wanted to interfere with another sovereign nation’s elections, and used these platforms to do it. It was cyberwar, basically. These platforms were the weapons, but they were hijacked. Let’s say you’re an arms manufacturer and your shipment gets hijacked by a hostile country and used to start a war. That’s what happened.

  But is that really a fair comparison? These platforms make their money from ads, which means they need to maximize engagement. So the business model means they have to prioritize whatever content users find engaging, whether that’s Russian disinformation or alt-right propaganda.

  With perfect foresight, you could have stopped the Russia stuff from happening—but not without controversy. Those campaigns adopted the idiom of our politically divided country and exaggerated it. So any effort to combat them would have been interpreted as ideological. Let’s imagine that the platforms took down a bunch of pro-Trump memes, claiming the Russians put them there. How would Republicans or the Trump campaign have reacted? Critics focused on misinformation forget that conservatives were accusing tech companies of censoring their views even before 2016.

  It’s all about trade-offs. These are the kinds of conversations that happen internally. They’re complicated, they take time, they involve a lot of people. There are many considerations: integrity, access to information, community safety, free speech.

  Free speech is a big one. The U.S. is where these platforms are based. So they have inherited a lot of norms about free speech and the First Amendment. We pay a price for our love of free speech in the U.S., and that price includes some of the world’s most lax defamation laws. There are plenty of countries where it’s a lot easier to sue somebody for defaming you, where satire isn’t as well protected, where hate speech laws are stricter—in Europe, for instance, they take a different approach to these questions. But these are American companies, so they tend to take a more maximalist view of free speech. How do you fulfill that expectation at scale while balancing other considerations around safety and accountability?

  What if the things that people say cause harm?

  Then it has to be dealt with. But what I’ve seen is that everyone wants it both ways. They want the platforms to be responsible for everything that’s said on them, but they also want the platforms to stand down when it’s their own speech that might be interfered with. It’s just an incredibly challenging thing to get right.

  It sounds like you’re saying that platforms of a certain scale and significance always face difficult choices that involve trade-offs. But if such trade-offs are inevitable, one could argue that those choices shouldn’t be left in the hands of private companies, particularly when the social and political consequences can be so severe. If there are trade-offs that must be made, they should be made through an open, democratic process, using a political forum or a regulatory mechanism of some kind.

  If what you’re saying is that these kinds of debates should take place in the public arena, then yes, absolutely. There needs to be transparency and accountability. And there’s certainly room for smart regulation by smart governments with the right motives. But personally, based on what I’ve seen and observed, regulation can be slow-moving. It can be inefficient. It can be counterproductive. And it can cause a whole new set of controversies.

  Returning to the Russia question: Fundamentally, the government didn’t do its job. Preventing Russian interference in the election should’ve been the business of the intelligence community from the start. I never personally felt like they were involved. But if they had been, that in itself would have been a different scandal. Imagine people’s reaction to finding out the FBI and the CIA or whoever were collaborating closely with the big platforms to combat Russian disinformation. People would freak out about government censorship and surveillance.

  Even smart regulation will raise those kinds of concerns. Then, globally, it’s a very different picture. Liberals in the U.S. have a tendency to assume that government regulation has our interests at heart. That’s not true in many of the more explicitly authoritarian countries where these companies operate.

  Another perspective is that, so long as these platforms are private entities trying to maximize profit and shareholder value, they will always be incentivized to put their bottom line over the well-being and interests of their users, and society more broadly.

  Well, companies certainly act in their own interests. They have competition. They have to make money to stay in business.

  But when I think about how new consumer features come about, it’s generally not driven by people thinking about growth or market share in a systematic way. It’s certainly not driven by people deliberately thinking about how to take advantage of users. On the contrary: it’s usually about people trying to create value for users.

  There are different ways to approach that. One is the more extreme, visionary version: You think you know what the world needs. You say, this is how people should interact with each other on the internet, or this is how businesses should do payroll, or whatever.

  We might call that the Steve Jobs approach.

  Yeah. The other approach is more evidence-driven, which is what I’m more familiar with. It involves gathering data about what people want, what people are frustrated by. At this particular company, it might look like doing research into the pain points of a particular user experience. Then you look for ways to address those concerns.

  That incremental development is first and foremost about creating value for users. And sure, if it takes off, business imperatives come into play—how is this going to affect growth, market share, that sort of thing. But that’s usually not where it starts. It starts from more of a problem-solving mindset. I would say it’s an engineer’s approach to the world. It works by identifying friction in existing systems and trying to make them more efficient.

 
