An Agile Approach to the Unpredictable
Aneesh Chopra, appointed in 2009 as the first US national chief technology officer, watched the HealthCare.gov site launch, crash, and burn in 2013.
“When the law was passed, the overriding assumption was that states would implement the exchanges” where people sign up for one of the available plans, he told me. The federal government’s role would be to create the standards by which the state health care insurance sites could communicate with a central hub. But, motivated in part by rancorous partisanship, some states left it to the federal government to create their sites for them. With the short deadline stipulated in the Affordable Care Act, the government didn’t have time to go through the usual procurement process and instead used a provision that allows them to ask for bids from the official list of prequalified providers—“exclusively beltway bandits,” says Chopra.14
These were the old names in project development, and they behaved like it. They used traditional software development techniques and came up with a cumbersome, slow, underfeatured, and utterly unreliable site that almost sank the entire health care program.15 “On Healthcare.gov’s first day, six people successfully used it to sign up for health insurance,” reported NBC News.16
HealthCare.gov’s turnaround began when White House digital strategist Macon Phillips stumbled on a mockup of a possible health care site on Twitter, created by Edward Mullen, a designer in Jersey City, New Jersey. Phillips was so impressed with its ease of use that he invited Mullen to come to the White House to help make his design real.17 The White House then hired a group of Silicon Valley developers in what became known as the Tech Surge. A team of young coders moved into a McMansion in Georgetown and started replacing the software produced by the original contractor with code that worked, at one-fiftieth the cost.18 After it rescued HealthCare.gov, this approach was given institutional prominence in the new US Digital Service and the federal digital agency 18F.
The project management technique that saved HealthCare.gov relied upon agile development. It’s another way in which we’re developing products while minimizing the need to anticipate.
The traditional process of software development carefully divides a project into phases, each with its own timetable and milestones. This is called the “waterfall” process because in some project diagrams, the tasks are connected by curved arrows, resembling cascading water.19 More to the point, as with a waterfall, once you complete a phase, there’s no way of getting the water to go back uphill. That one-way flow seemed acceptable because, as one history of programming explains, “it was taken as gospel … that the more time you spent planning, the less time you would spend writing code, and the better that code would be.”20
That makes sense when you’re putting atoms together to create a Model T, but it fails to take advantage of what bits and networks allow. Software creation can be spread across a network of developers working simultaneously and cooperatively, freed from overengineered plans that try to predict every feature and every step. But doing so requires restructuring the code, breaking it into small, functional units—modules—each of which takes in data, operates on it, and outputs the results. One module might take in a username and password, and output whether that user is registered with the system. Another might be responsible for taking in a user’s age and profile and outputting an actuarial prediction. The other developers don’t need to know if the developer of a module has modified its algorithms, so long as the inputs and the outputs continue to work—just as a customer in a diner doesn’t have to worry if the cook is using a new fryer, so long as the input (“Onion rings, please!”) results in the same delicious output.
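The modular pattern just described can be sketched in a few lines of Python. Everything here (the user store, the scoring formula) is invented purely for illustration:

```python
# Two modules with fixed inputs and outputs, as in the passage above.
# The user store and the scoring formula are toy stand-ins.

REGISTERED = {"alice": "hunter2"}  # stand-in for a real user database

def is_registered(username: str, password: str) -> bool:
    """Takes in a username and password; outputs whether that user is known."""
    return REGISTERED.get(username) == password

def risk_score(age: int, smoker: bool) -> float:
    """Takes in profile data; outputs a (made-up) actuarial prediction."""
    base = 0.01 if age < 40 else 0.05
    return base * (2.0 if smoker else 1.0)

# Other developers depend only on these signatures. Either function's
# internals can be rewritten without callers noticing, so long as the
# inputs and outputs keep behaving the same.
print(is_registered("alice", "hunter2"))  # prints True
print(risk_score(50, True))               # prints 0.1
```

So long as those signatures hold, the diner analogy applies: a developer can swap a new algorithm into risk_score without any caller changing a line.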
One developer explained why 95 percent of companies are doing at least some agile development: “Waterfall assumes that one can model the process in one’s mind, sufficiently enough to plan a project start to finish.”21 Agile development knows better: if someone comes up with a new idea for a feature, it can be implemented quickly and cleanly by relying on the already-existing modules. It works because it minimizes anticipating and planning.
Agile development can be traced back to the 1990s with roots that go decades further back, but as geek culture has spread far beyond the engineering cubicles, its radical lesson is now sinking in: even projects as large as a national health insurance program can succeed by routing around overly rigorous planning.
Platforms of Unanticipation
Unanticipation is showing up not only in the product development process—prerelease (agile development) and postrelease (MVP)—but also in an architecture of technology designed for use outside the bounds of expectation.
For example, Sheryl Sandberg, Facebook’s chief operating officer, told Chopra in 2011 that back in 2008 she had come across a job board that listed thirty thousand Facebook developers. Since Facebook at that time employed only about 2,600 people, she was puzzled.22
Then Sandberg realized what was going on.
Although the early versions of Facebook went the traditional route of anticipating and meeting its initial users’ needs, Mark Zuckerberg had come up with a secret plan. As the app started to reach beyond the Harvard campus, Facebook launched a new photo-sharing feature that lagged far behind what the dedicated photo services were offering. Yet users were swarming to it. Zuckerberg realized it wasn’t because the feature was particularly good but because the Facebook application understood its users’ social networks, making it far easier for users to share their photos.
Facebook calls the integrated data about its users and their networks its “social graph,” and Zuckerberg knew it was immensely valuable not just because of the uses to which Facebook would put it but also because of all the uses of it that Facebook could never imagine. No company or set of developers, no matter how smart, could. So why not let everyone try?
In fact, Zuckerberg understood that developers were likely to take advantage of its social graph whether or not Facebook let them. After all, one of his earliest projects—an app that showed students who else had signed up for a course so they could decide if they wanted to take it—used data Zuckerberg had gathered without asking Harvard for permission.23 Then came Zuckerberg’s Facemash, an unfortunate “hot or not” app that let students compare photos of Harvard women. It got Zuckerberg into deserved trouble with the school’s administration not just for its crass sexism but also because the photos came from the official “facebooks” of nine of Harvard’s twelve residences, again without permission, in one case by hacking the residence’s computer over the network. The Harvard Crimson charitably referred to this as “guerrilla computing.”24
So while Facebook’s 2007 launch of its open development platform—introduced as F8—may have surprised the world, it was consistent with Zuckerberg’s vision. The platform provided an online interface that enabled software engineers anywhere in the world to use Facebook software services and social graph data to create their own apps. Facebook of course did not give untrammeled access to all of its users’ private data or to all of the site’s internal functionality, but it provided enough that if you had an idea for an app that needed some of what Facebook knows about its network of users—carefully but inadequately vetted, as time would show—you very likely could create it. Not only didn’t you have to work for Facebook, you didn’t even have to ask Facebook’s permission.
The success of the Facebook platform accounted for the number of Facebook developers that had puzzled Sandberg. The vast majority of those thirty thousand developers, Sandberg realized, were not working at Facebook, even though they were deeply engaged in creating new applications based on the social graph. Within six months of the open platform launch, twenty-five thousand new applications had been created, and half of Facebook’s users were using at least one of them.25
When Zuckerberg first surveyed what had been submitted, most of the apps seemed trivial. But he quickly realized that even an app that was just a silly game could be helping Facebook achieve its avowed (but not always followed) mission of bringing people together. And simply opening the platform to developers had created financial value too: two years after the platform’s launch, the aggregate value of the companies building apps on top of Facebook was roughly equal to Facebook’s own value.
Open platforms were not a new idea in 2007 when Facebook launched its own. But an open platform created by one of the most important and information-rich sites on the web was a big deal. As Fortune’s main tech writer put it, this brought about a “groundbreaking transformation” that “began to change how the world perceived Facebook.”26
It also was a significant step forward in weaning our culture from tens of thousands of years of relying on anticipation, validating for companies and organizations—for-profits and nonprofits—that making a subset of one’s resources openly available could generate unanticipated financial and cultural value.
* * *
The benefits of open platforms are varied and often remarkable:
Increasing presence
Like most newspapers in the mid-2000s, the Guardian was struggling to make the transition to the digital era. So when Matt McAlister came to the paper in 2007, he found its management ready to listen to the case he laid out: to increase its web presence, the paper ought to launch an open platform where external developers could easily find relevant content from the Guardian and incorporate it into their own sites, without jumping through administrative hoops. McAlister told me that he argued that “media organizations needed to extend beyond their domain to grow, and to be present in all the places that readers are.”27
Such a platform is known technically as an application programming interface (API): software that translates a program’s request for information into a language that the back-end servers understand, and vice versa. The same strategic use of an API has been crucial to making Wikipedia one of the top ten most visited sites on the web. Its API provides access to all of Wikipedia’s content, as well as to the categories, links, “information boxes,” and more that enrich its content. For example, a music site might use the Wikipedia API to get the first paragraph of the biography of any musician and run it on its own site without asking permission. This is one reason Wikipedia is a preferred source on more sites than it can count.
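As a concrete sketch of the kind of call just described, Wikipedia’s REST API serves an article’s lead section as JSON. The code below only builds the request URL and parses a trimmed-down sample response (no live network request is made), and the music-site scenario is the example from the text:

```python
import json
from urllib.parse import quote

# Wikipedia's REST API serves an article summary as JSON from
# /api/rest_v1/page/summary/<title>; the "extract" field holds the
# introductory text.
def summary_url(title: str) -> str:
    """Build the summary-endpoint URL for an article title."""
    return "https://en.wikipedia.org/api/rest_v1/page/summary/" + quote(title)

def first_paragraph(response_body: str) -> str:
    """Pull the introductory paragraph out of the JSON the API returns."""
    return json.loads(response_body)["extract"]

# A trimmed-down stand-in for a real response, invented for illustration:
sample = ('{"title": "Miles Davis", '
          '"extract": "Miles Dewey Davis III was an American jazz trumpeter."}')
print(summary_url("Miles Davis"))
print(first_paragraph(sample))
```

The music site in the example would fetch that URL over HTTP and hand the response body to first_paragraph, with no permission required from Wikipedia.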
Resilience
As with many organizations that adopt them, the Guardian had a second motivation for its open platform: a technology infrastructure—its software and processes—built on this approach is far more resilient. For example, when you search for content at the Guardian, your search request goes to the API, which puts it into a form that the Guardian’s back-end software understands. The API then takes the results from the database and translates them into a form that the website understands. Likewise, when you sign in to your account at the Guardian, the API sends your name and password to the module that authenticates users. The Guardian and many other sites use an API for this internal purpose because it means that if, for example, they change the processes by which the site validates logins, none of the internal services that rely on that function have to be updated. This enables the site to develop new services and support new devices far more easily.
For instance, when Apple gave National Public Radio only a few weeks to create an app for the initial launch of the iPad, the fact that NPR had an API meant its developers didn’t have to write new code to search the NPR content library, authenticate users, and handle all the rest. The new iPad app’s user interface could just ask the NPR API to perform those services.
NPR made the deadline and was featured at the launch.
Adding value to products
In 1981, the game Castle Wolfenstein was released for the Apple II, and then for MS-DOS, the Atari, and the Commodore 64. Its graphics were state of the art, which meant they were incredibly primitive by today’s standards: you navigated your little blocky character through a top-down map of corridors and rooms, encountering little blocky Nazi soldiers who fired tiny pixel-bullets at you.
Then, in 1983, some users decided that while they enjoyed the gameplay, they weren’t crazy about the Nazi theme. So they altered the game’s image files on their own computers, replacing German soldiers with Smurfs. They altered the audio files so that instead of your enemies sounding German, they sounded Smurfy.28 Castle Smurfenstein was Wolfenstein with a new coat of Smurf-blue paint.
This sort of hacking was simpler back then. In fact, even with my primitive technical skills, in the early 1990s I turned the then-current version of Wolfenstein into a mockup of “document management software of the future” that you could visually run through to find your files; it was a hit at our annual users conference because it was so ridiculous.
In the early 1990s, “modding,” as the practice was called, flipped from hack to feature. Game companies started supporting user creation of new maps or levels for games, new functionality, and even new rules. For example, in 1996 id Software released a version of its hit game Doom that included levels designed by users. These days, some game companies provide access to the very same tools the in-house developers used. For customers, knowing that there would be endless mods to play made buying a game a better investment.
Enabling users to build what the company developers might never have thought of is now part of the PC gaming mainstream: Grand Theft Auto V has earned $2.3 billion since its launch, in part because mods keep it fresh, enhancing the game’s value.29 Beyond that, by treating their users as cocreators, game makers strengthen the emotional bond between them.
Other industries are going down the same path. For example, the open development environment provided by Pebble, one of the first smartwatch companies, resulted in users creating not only new watch faces but also apps, games, and the occasional art project. Fitbit eventually bought Pebble, in part for its open development environment.30
It’s always been the case that play teaches us our first lessons about how the world works. A generation of gamers is learning a new set of rules about rules.
Integrating into workflows
At the Slack app store, you’ll find hundreds of contributed apps in eighteen categories, including analytics, customer support, health and medical, human resources, marketing, office management, project management, sales, and travel—all free. Many of the most important apps integrate Slack into existing workflows. For example, the Tact app integrates Slack into the major sales force management systems, and Airtable integrates Slack into a database management system. These sorts of apps stitch Slack more tightly into existing business ecosystems.
This is so important to Slack that the company created an $80 million fund to help developers and small companies build apps that Slack could not anticipate. “We expect our portfolio to feature a diverse array of entrepreneurs working on solving problems for teams in every industry, function, and corner of the world,” said the announcement.31 Each problem solved will make Slack more indispensable.
Data.gov, a site established at the beginning of the Obama administration, provides open access to over two hundred thousand government data sets. Jennifer Pahlka, the founder of Code for America and a deputy federal chief technology officer in the Obama White House, told me, “Some [government] data sets that no one would have thought would be popular have been highly used, such as the location of fire hydrants, storm drains, [and] tsunami alarms.”32 For example, Code for America wrote an Adopt-a-Siren app that lets local Hawaiians sign up to make sure that the islands’ hundreds of tsunami warning sirens are in good working order—a helpful service since there’s a 5–10 percent failure rate each month.33
Tim O’Reilly, the head of a major tech media company, thinks our vision of government itself ought to be based on this open model. He sums up the idea in the phrase “government as a platform.”34 Like an API, the government should be a set of services that can be used and extended by citizens so we can create what we need without always having to petition the government to provide it for us.
The aim is to let a government accomplish its mission of serving its citizens without having to anticipate and provide every service citizens may decide they need. O’Reilly’s idea had a strong constituency in the Obama White House.
Moving unanticipation upstream
A manufacturer of playing cards can never know whether a customer is going to use a deck to play Go Fish or to prop up a wobbly table. The Bee Gees couldn’t know whether someone had bought a copy of “Stayin’ Alive” to dance to the disco beat or to train people on the right tempo for performing CPR. Manufacturers can’t anticipate all of the uses of the products they make, but they should recognize that unanticipated uses represent customers getting unanticipated value from the product. Open platforms can deliberately push that moment further upstream: sometimes users can use the pieces before they’ve been combined into a product, as if Henry Ford let people take parts off the assembly line and build new cars, windmills, and pasta makers out of them.
The New York Times did this sort of upstream unanticipation with the data behind a 2014 article.35 After thousands of people took to the streets in Ferguson, Missouri, to protest police violence against African Americans, the Times posted the raw data that documented the article’s claims about the transfer of equipment from the military to local police. It did this so people could analyze the data, check it, look for information about their own local police, or try to find correlations between the availability of military equipment and police abuses. Then, to the surprise of the Times, people started to improve the data, reporting errors and compiling it into more usable forms.36 The Times now has a site—The Upshot—dedicated to hosting the data behind its reportage because you can never tell what people will find in it or do with it. The Upshot is a platform built to take advantage of upstream unanticipation.
Everyday Chaos Page 10