No entity has flourished quite like The Weather Channel. Incorporated in 1980 and launched in 1982 as a cable network, it set out to disrupt the model of accessing weather data that had prevailed since the 1950s: through your local television station’s news broadcast. The Weather Channel’s strategy was built on a simple understanding. As current CEO David Kenny put it, “Weather is personal, weather is very local.” Most people wanted to know what the weather was and would be where they lived, not necessarily anywhere else. They also wanted to know it consistently and frequently, without waiting for the local news at six or 11. That required taking the available government data, sorting it by ZIP code, and delivering it via the company’s innovative Satellite Transmission and Receiving System (STARS) to homes with cable systems throughout America. It also meant doing it on a steady clock. “There wouldn’t be a Weather Channel without localization every 10 minutes,” Kenny said. While there have been many STARS upgrades and generations since, local updates have remained a staple, known since 1996 as Local on the 8s.
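The underlying pattern Kenny describes is simple: pull the national feed, bucket it by ZIP code, and refresh on a fixed clock. The Python sketch below illustrates only that loop; the feed, field names, and fetch function are hypothetical stand-ins, not the actual STARS pipeline.

import time
from collections import defaultdict

def fetch_national_observations():
    # Stand-in for pulling the latest government weather feed; a real system
    # would read NOAA/NWS data rather than return canned records.
    return [
        {"zip": "30339", "temp_f": 72, "conditions": "Partly Cloudy"},
        {"zip": "10001", "temp_f": 65, "conditions": "Rain"},
    ]

def localize_by_zip(observations):
    # Group national observations so each ZIP code gets its own local report.
    local = defaultdict(list)
    for obs in observations:
        local[obs["zip"]].append(obs)
    return local

if __name__ == "__main__":
    while True:
        for zip_code, report in localize_by_zip(fetch_national_observations()).items():
            print(zip_code, report)
        time.sleep(600)  # refresh on a 10-minute clock, as with Local on the 8s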
“We tell the story better, we make it more useful, we make it more relevant, and we add value to the science,” Kenny noted.
That’s added value to the company. According to a Magid study that was quoted in AdWeek, The Weather Channel had “150 million unduplicated consumers across TV, online and mobile” as of the fourth quarter of 2011, and it is still the undisputed leader in a $1.5 billion industry. Kenny acknowledges that this success never could have occurred without the government’s open data policy. If the company had been responsible for all of the infrastructure costs, notably the necessary recording equipment, rather than merely leveraging the available government data, the channel’s launch would have been long delayed, and most certainly cost-prohibitive. It might have never gotten off the ground. But there’s more to it: once The Weather Channel took off on cable, the continued access to that data, at a nominal cost, left the company with sufficient resources for additional innovation in the years to come. One innovation was Weather.com, which grants users reports and forecasts, in real time, for some 100,000 locations. Then, as newer technologies provided opportunities, the company was able to create a mobile application, which, in 2013, Apple ranked as the second-most downloaded application on its iPad and the seventh-most on the iPhone.14 Now Weather.com even produces weather sections for local broadcasters’ mobile apps. And, in the winter of 2013, The Weather Channel took Local on the 8s to the next level, launching a continuous, real-time scroll of local conditions, making even the television set appear more like a mobile app.
“All of this comes from open data,” Kenny said.
The Weather Channel doesn’t just take. It gives back. Every year it voluntarily communicates more than 100,000 National Weather Service alerts to its audiences, to assist the agency in its public safety mission. Its team of scientists puts many models and findings back in open source for meteorologists at the NWS (and throughout NOAA) to study, apply, and send back for further conversation, adaptation, and application.
“There’s a whole meteorological community,” Kenny said. “It’s always been this sort of mutual mission of science that’s kept us going. So I think this is a living thing. It’s not just that they post the data, and we repurpose it. We think it’s important that there be continuing collaboration.”
Both sides—public and private—share knowledge and opinions in times of major weather disturbances, or even in the aftermath, in order to identify areas for improvement. Such reviews occurred after the Eyjafjallajökull volcano in Iceland paralyzed European air traffic in 2010. A Weather Channel subsidiary met with the Met Office in the United Kingdom as well as the London Volcanic Ash Advisory Center to exchange, blend, and align forecast techniques and practices. The result was that, when another Icelandic volcano erupted a year later, the impact on air traffic was considerably less.
As technology advances, more knowledge can be gained, even about which way the wind will blow. The Weather Channel’s latest initiatives speak to what’s possible at the intersection of technology, open energy, and weather data. It has long been known that the clean energy sector, and wind farms more specifically, could dramatically improve productivity and profitability by adjusting the angle of turbine blades based on real-time changes in weather. Historically, however, this has not been possible, because changes in wind patterns simply occur too quickly. But now, by using new technologies such as cloud computing, The Weather Channel is able to fully exploit the data, creating algorithms that better predict those patterns. Late in 2012, Kenny said The Weather Channel was already selling that product inside and outside the United States as a pilot and was planning to release version 2.0 in 2014, in the hopes of aligning data, decisions, and energy markets so they could move as fast as the wind.
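Conceptually, the product couples a short-horizon wind forecast to turbine control: predict speed and direction a few minutes ahead, then set the machine against the prediction rather than the current reading. The sketch below uses a naive persistence-plus-trend forecast and made-up observations purely to illustrate that coupling; it is not The Weather Channel’s actual model.

import numpy as np

def forecast_wind(speeds, directions, minutes_ahead=10):
    # Naive forecast: extrapolate the recent speed trend and assume the
    # direction persists over the short horizon.
    trend = np.polyfit(range(len(speeds)), speeds, 1)[0]
    predicted_speed = speeds[-1] + trend * minutes_ahead
    predicted_direction = directions[-1]
    return predicted_speed, predicted_direction

def choose_yaw(predicted_direction):
    # Point the rotor into the predicted wind direction.
    return predicted_direction % 360

speeds = [7.2, 7.5, 7.9, 8.4]       # m/s, one made-up observation per minute
directions = [268, 270, 271, 273]   # degrees, made-up observations

speed, direction = forecast_wind(speeds, directions)
print(f"Expected wind: {speed:.1f} m/s from {direction} deg; "
      f"set yaw to {choose_yaw(direction)} deg")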
“We’re crossing new thresholds in terms of data and the ability to manage ‘Big Data,’” Kenny said. “It may not have been useful to release it in the past, but it is incredibly useful today.”
More than useful. Necessary. That’s why other governments and businesses worldwide call upon The Weather Channel to share its data assimilation and computing models; the general mission is the same for all of those entities: real-time data, wherever and whenever, even if the immediate plans for that data are different.
“What’s clear to me is the nations that figure out how to use their data and share their data and put it in the grid give their businesses and citizens a leg up versus nations that don’t,” Kenny said. “If I look at the disparity in weather information that’s given to African farmers and farmers in Kansas, it’s huge. It makes a difference. But that gap will change and close in time as data becomes available. And you compete on the basis of data and information as a nation.”
Kenny deems it much too early to declare a winner on wind, even as some countries—such as those in Scandinavia—have integrated it heavily into their policy. He is certain, however, that data will be critical to making any such policy work.
“People pay us a price for our interpretation of free data, because we interpret it in a better way,” Kenny said. “But at the core of it, data collection and data availability and speed in the use of modern technology will increasingly create competitive nations. And nations that don’t necessarily have natural resources to compete upon can change their competitiveness by the way they use information and provide it.”
When we tell the open data story, we’re not just talking about the weather.
Even prior to our nation’s founding, governments have collected data on the population here through surveys. The British government did so to count the number of people in the colonies in the early seventeenth century. And, under the direction of then-Secretary of State Thomas Jefferson, the U.S. federal government took its first official census in 1790, with another occurring every 10 years since.
This has become an extensive and expensive enterprise: the 23rd census, conducted in 2010, came in under budget and still cost roughly $13.1 billion.15 Through all of those decades, government agencies have collected additional data for a host of purposes—holding regulated entities accountable, conducting research on key social and economic trends, processing individual benefits, and so forth. The methods of data analysis have evolved over that time as well. In 1886, an employee of the U.S. Census Office named Herman Hollerith invented an electrical punch-card reader that could be used to process information; a decade later, Hollerith formed the Tabulating Machine Company, which in 1924 became International Business Machines (IBM). One of his colleagues, James Powers, developed his own card-punching technology and founded his own company, the Powers Tabulating Machine Company, which merged with Remington Rand in 1927. For decades IBM and Remington Rand, tracing their ancestry in part to innovative government employees, dominated the developing computer industry.16
As the government collected all of this data, the public developed a greater desire to access it. The enactment of freedom of information laws has allowed the public to make specific requests of a federal agency, with those requests subject to a number of exclusions and exemptions. We might not have these laws at all if not for the long-standing advocacy of the newspaper industry, as well as the yeoman efforts of John Moss.17 The California Congressman championed transparency measures in the 1950s and 1960s in response to a series of secrecy proposals during the Cold War. Moss encountered sustained, stubborn resistance from both parties, but eventually persuaded enough members, including Republican Congressman (and future Defense Secretary) Donald Rumsfeld, to become allies. In 1966, they got a bill to the desk of a long-time opponent, President Lyndon B. Johnson. Johnson did sign it, along with a statement that he had “a deep sense of pride that the United States is an open society,” even as the statement also focused on all of the exemptions for national security. Over the next two decades, and in response to events such as the Watergate scandal, Congress would amend and strengthen the law, and it remains the law of the land.
Even so, the Freedom of Information Act (FOIA) has had its limitations.18 It has frequently resulted in needless delay and work, the latter often because information is released in inaccessible formats. The agencies responsible for collecting data have designed their systems with their own needs in mind, so that they could use that data for assorted internal government functions. They haven’t given as much thought to getting that data back out in forms the public could readily reuse. That wasn’t an ill-intentioned stab at secrecy; it simply wasn’t seen as a requirement or priority of government.
Open innovators see data quite differently. They see it as something that should be available not by request but by default in computer-friendly, easily-understandable form. They see it as the igniter of a twenty-first-century economy that can expand industries and better lives.
They see it the way Todd Park does.
I became aware of Park’s unique perspective and ability while serving on the Obama transition team in 2008. Park, the cofounder of athenahealth—a managed web-based service to help doctors collect more of their billings—served as an invaluable informal adviser for what would later become the HITECH Act, an element of the Recovery Act that offered doctors and hospitals more than $26 billion in incentive payments for the adoption and “meaningful use” of health IT. He had no designs on joining the government when he took an interview with Bill Corr, the Deputy Secretary of Health and Human Services (HHS), for the agency’s new CTO position. He intended to steer Corr toward more appropriate candidates, those with plentiful—heck, some—public sector experience. Corr told Park, however, that he had enough people who knew government well, and that his preference was to “cross-pollinate” their DNA with Park’s, so “the DNA of the entrepreneur embeds itself in HHS through this role.”
Corr referenced the President’s call for a more transparent, participatory, and collaborative government. To demonstrate how it would manifest itself through this position, Corr touted his department’s access to vaults upon vaults of incredibly valuable data that could, in the hands of a more innovative and engaged public, better advance the mission of the agency. Then, with Park’s interest piqued, the brainstorming began. “And the notion of actually working on how to leverage the HHS data for maximum public benefit was the thing that really made the role of tech entrepreneur concrete to me,” Park said. “And that’s what convinced me to talk my poor wife into agreeing to move across the country and jump into workaholic mode again and do this job.”
After his appointment, Park explored his own sprawling agency, one with an $80 billion annual budget and 11 distinct operating divisions, including the National Institutes of Health, the R&D engine of the biotech industry; the Centers for Medicare & Medicaid Services, which provides health insurance for more than 50 million Americans; and the Food and Drug Administration, which protects the public safety. And in that research, he uncovered not only the data sets Corr had highlighted but also champions within the civil service, looking for a leader in the cause. This informal activity got a more formal boost with the White House’s delivery of the Open Government Directive. As related in previous chapters, that directive provided explicit instructions and deadlines for culture changes within departments and agencies. Within 45 days, Park published four high-value data sets, one more than required by the White House directive. None had been available online or in a downloadable format.19 One of them, the Medicare Part B National Summary data file (representing payments to doctors) had previously been available only on CD-ROM and for a $100 charge per year of data. Now, that was available for free and without intellectual property restraints, and the same was true for the other three sets Park published: the FDA’s animal drug product directory; the compendium of Medicare hearings and appeals; and the list of NIH-funded grants, research, and products. By the 60th day, Park had launched an open government web page, inviting the public to comment on which data sets should be made more accessible and to offer input about each agency’s overall open government plan.
Yet it wasn’t just what Park was doing. It was the way he was thinking, a way that would later lead me, while grading agency performance, to point to his HHS team as a model for other agencies to replicate.20 He understood, better than anyone, that data alone wouldn’t close the gap between the American people and their government. Rather, true change would come from the improved use of that data in the furtherance of a personal goal, such as finding the right doctor; understanding the latest research on a patient’s condition; or learning of the most recent recall of a food or medical product that could jeopardize a loved one’s health.
Initially, Park did what other officials were doing in their own agencies, methodically inventorying and publishing additional data sets. Then he turned his attention to simplifying public access to that data and encouraging its use. After some investigation, he determined that, while there was little harm in the government creating some of those tools, the “real play” came in engaging outside entrepreneurs and innovators. The trick was not in dictating the next step, but in allowing “everyone else in the universe to actually tap into the data to build all kinds of tools and services and applications and features that we couldn’t even dream up ourselves, let alone execute and grow to scale.” He believed the subsequent development of simple, engaging, impactful tools would result in improving the health care delivery system.
To test and prove that thesis, Park partnered with the Institute of Medicine, a wing of the National Academy of Sciences, to host the Health Data Initiative in March 2010. It was a collaboration to spur participation. Together, they convened a contingent of accomplished entrepreneurs and innovators, drawn equally from the worlds of technology and health care, based on a philosophy that Park had drawn from someone we both considered our Obi-Wan Kenobi, the technology thought leader Tim O’Reilly. “If you are going to actually catalyze innovation with data,” Park said, “if you want to build an ecosystem of innovation that leverages the data, you need to engage, from the beginning, the people who are actually going to innovate on the data. Ask them: ‘What would be valuable, how should we use the data, how should we improve the data?’ So we brought a group of 45 folks together, and put a pile of data in front of them and said ‘What do you think? What can you use this for?’”
What Park didn’t fully anticipate was that one plus one would equal three. O’Reilly was a legendary figure in technology, as the leading forecaster of the economic boom that would come from social networking, even coining the term Web 2.0. Don Berwick was a legendary figure in health care, thanks to work with the Institute for Healthcare Improvement. They knew nothing about each other, let alone the other’s importance to an entire community. Now, through this Health Data Initiative, they were in the same room, with the same goals. “Because of the fragmentation of society, you don’t necessarily have a lot of broad connectivity of experts,” Park said. “The intersection of O’Reilly and Berwick, and of their followers, was really magical. A tremendous source of energy and productivity in the Health Data Initiative and all these initiatives is bringing together the best innovators in health care and the best innovators in tech to do things together with data that neither side alone could have done.”
The data sets available through HHS and other sources were voluminous and varied, rendering the permutations endless. By the end of the full-day session, the group had conceived roughly 20 categories of applications and services that the data could potentially power. Further, all left with a challenge: If they could make their conception a reality, within 90 days and without any government funding, their creation would be showcased at the first-ever Health Datapalooza, hosted by HHS and the Institute of Medicine. On June 2, 2010, they would exhibit more than 20 new or upgraded applications and services that would help patients find the right hospital or improve their health literacy, help doctors provide better care, and help policymakers make better decisions related to public health.
The ideas came from a range of sources. Some came from upstart firms, including MeYou Health. Its lightweight Community Clash card game, aimed at creating awareness of health factors in a user’s community as compared to others, drew some of the longest lines.
Others originated from established powerhouses such as Google, which spotted value in one set of HHS data, quality measures for every hospital in America, posted since 2005 on a website (hospitalcompare.hhs.gov). Google’s Chief Health Strategist, Dr. Roni Zeiger, saw the opportunity to bring this information to life through a more journalistic, provocative approach, one that would attract more eyeballs and influence more decisions.21 For instance, he asked: Where in New York City should a patient with chest pain seek care? The city has an abundance of world-renowned medical centers, including one that President Bill Clinton had chosen for his heart operation. While most of us rely on anecdotal advice from our doctors, friends, and neighbors when selecting a hospital for life-saving treatment, Dr. Zeiger demonstrated the potential of relying more on empirical evidence. He did it quickly and at little marginal cost, downloading the national HHS file, freely available in computer-friendly form, and uploading it to a Google cloud-based tool called Fusion Tables, a free service that simplifies a user’s ability to visualize, manipulate, or share data. He then selected roughly half a dozen measures, from clinical statistics such as heart failure mortality rates (within 30 days) to patient-satisfaction survey results, such as whether a patient got a quiet room, and zoomed in on results in the New York City area. Then he published a screen shot of a map on his blog, with hospitals clearly marked, next to their corresponding “heart-friendly” and “patient-friendly” scores that he had derived from the data.
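For a sense of how little code that kind of analysis requires today, here is a rough sketch in the same spirit, assuming a local CSV export of the Hospital Compare measures with hypothetical column names; Dr. Zeiger’s own workflow ran through Fusion Tables rather than a script.

import pandas as pd

# Hypothetical local copy of the downloadable HHS quality-measures file.
df = pd.read_csv("hospital_compare_measures.csv")

# Narrow the national file to New York City hospitals.
nyc = df[(df["state"] == "NY") &
         (df["city"].isin(["New York", "Bronx", "Brooklyn", "Queens", "Staten Island"]))]

# Rank on a "heart-friendly" measure: 30-day heart failure mortality.
ranked = nyc.sort_values("heart_failure_30_day_mortality_rate")

print(ranked[["hospital_name",
              "heart_failure_30_day_mortality_rate",
              "quiet_room_percent"]].head(10))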