A History of the World in 12 Maps


by Jerry Brotton


  As TerraServer grew, SGI made a vital breakthrough. When one of their engineers, Chris Tanner, invented a way to do clip-mapping in software for PCs, some of the group founded a new software development company in 2001 called Keyhole, Inc. Keyhole’s intention was to take the new technology and find applications for it, and to answer the question that many of the team, including Mark Aubin, kept asking, and which could equally have been asked of Claude Shannon’s theory of communication: ‘What was it really good for?’38 In Shannon’s theory, the content of his units of information was irrelevant; all that mattered was how to store and communicate them. At this stage the fact that SGI’s developments took geographical data as their focus seemed almost incidental. Aubin understood that the capability to rapidly render graphic information on a globe was something that people found mesmerizing, something that went beyond mere technical wizardry. The company attracted interest in the new application, which was obviously an innovative tool even if it still lacked what one of its creators would later call an ‘actionable application platform’.39 The data could be quantified and counted, but to what use should it be put? Rather like late fifteenth-century printers, computer scientists at companies like SGI and Microsoft responded to the technical challenge of rendering geographical information in a new medium, but with little foresight as to how the new form would also change the content of maps.

  These computer engineers were beginning to realize that they were tapping into one of the most enduring and iconic graphic images in the human imagination: the earth as seen from above, and the ability to swoop down on it from a seemingly omniscient, divine location beyond terrestrial time and space. The technological ability to offer yet another perspective on this transcendent view of the globe was given an enormous boost by two specific political interventions made by the Clinton administration in the final years of the twentieth century. In January 1998 Vice President Al Gore delivered a talk at the California Science Center in Los Angeles entitled, ‘The Digital Earth: Understanding our Planet in the 21st Century’. Gore began by arguing that a ‘new wave of technological innovation is allowing us to capture, store, process and display an unprecedented amount of information about our planet and a wide variety of environmental and cultural phenomena. Much of this information will be “georeferenced” – that is, it will refer to some specific place on the Earth’s surface.’ Gore’s aim was to harness this information within an application he called ‘Digital Earth’: a ‘multi-resolution, three-dimensional representation of the planet, into which we can embed vast quantities of geo-referenced data’.

  Gore asked his audience to imagine a young child entering a museum and using his Digital Earth program.

  After donning a head-mounted display, she sees Earth as it appears from space. Using a data glove, she zooms in, using higher and higher levels of resolution, to see continents, then regions, countries, cities, and finally individual houses, trees, and other natural and man-made objects. Having found an area of the planet she is interested in exploring, she takes the equivalent of a ‘magic carpet ride’ through a 3-D visualization of the terrain. Of course, terrain is only one of the many kinds of data with which she can interact.

  Gore admitted that ‘this scenario may seem like science fiction’, and that ‘no one organization in government, industry or academia could undertake such a project’. Such an initiative, if it could be realized, would have progressive global ramifications. It could facilitate virtual diplomacy, fight crime, preserve biodiversity, predict climate change and increase agricultural productivity. In pointing the way forward, Gore acknowledged the challenges of integrating and freely disseminating such a vast body of knowledge, ‘especially in areas such as automatic interpretation of imagery, the fusion of data from multiple sources, and intelligent agents that could find and link information on the Web about a particular spot on the planet’. Nevertheless, he believed that ‘enough of the pieces are in place right now to warrant proceeding with this exciting initiative’. He then proposed: ‘we should endeavour to develop a digital map of the world at one meter resolution.’40

  The Clinton administration’s grasp of the need to open up online information did not end there. Since its development in the 1960s, the Global Positioning System (GPS) had been controlled by the US Air Force through dozens of satellites orbiting the earth. The GPS signal allowed US military receivers to pinpoint any location in the world to an accuracy of less than 10 metres. Any member of the public prepared to spend thousands of dollars on a GPS receiver could pick up this signal. But for what were deemed reasons of national security, the government degraded the signal for public consumption, using a programme called Selective Availability (SA). This degraded signal could only locate a position to within a few hundred metres, making it virtually useless for practical purposes. The Clinton administration faced increasingly vociferous lobbying from various business interests, including the automobile industry, which wanted SA switched off so that the improved signal could support a range of commercial spin-offs such as in-car navigation systems.

  As a result, and primarily because of Al Gore’s advocacy, the Clinton administration turned off Selective Availability at midnight on 1 May 2000. The result was a far more accurate and consistent GPS signal for civilian users. Commercial businesses immediately grasped the potential of the decision and began putting online maps into the public domain. Simon Greenman, co-founder of the online map service MapQuest.com (launched in 1996), argues that this was a significant moment ‘when many of us from the GIS industry saw the power of the Internet to bring mapping to the masses for free’.41 Other companies like Multimap (launched in 1995) began selling digital maps, while still others marketed a proliferation of GPS navigational devices, including relatively cheap personal satellite navigation systems. Avi Bar-Zeev is in no doubt about the significance of Gore’s Digital Earth and SA initiatives:

  Without the open Internet, Google Earth (and this blog and a bunch of other things we like) would not exist. And for that, we owe some thanks to Al Gore. So regardless of what you think of his politics, one of the clear motivations behind Google Earth was a shared desire to give people a vision of the Earth as a seamless whole and give them the tools to do something with that vision.42

  Both these developments added extra impetus to the rise of geospatial applications in the first years of the twenty-first century. But in the febrile dot-com world of 2000–2001, it was the scramble for commercial survival that soon became paramount. In March 2000, the dot-com bubble suddenly burst, wiping trillions of dollars off the value of IT companies across the globe. At Keyhole, work had begun on an application called Earthviewer, which the team saw as following Al Gore’s idea of ‘Digital Earth’, and which people like Mark Aubin thought could be marketed ‘as a consumer product, giving it away to the world’ and raising revenue through advertising. But then ‘the dot-bomb hit and the company never received funding to support that model, so the company changed gears and focused on commercial applications’.43 Sony Broadband was already investing, but Keyhole wanted a broader portfolio of investors, and initially targeted the real-estate market. Although data for North America was easily available, the new tool was still limited in its global reach, so its use as an application to zoom in on a property and search the surrounding area seemed attractive.

  In June 2001 Keyhole launched Earthviewer 1.0 to a fanfare of critical praise across the industry. The program cost $69.95, with a limited promotional version released for free. Buyers could fly through a 3D digital model of the earth at unprecedented levels of resolution and speed, although the early versions still had their limitations, as they could only draw on a database of five to six terabytes of information. The full-earth coverage was disappointingly low-resolution, and many major cities outside the United States were poorly represented, some not visible at all. Keyhole simply could not afford to license enough data from commercial satellite companies to cover the whole earth, so even the UK was only visible at a resolution of 1 kilometre, making it impossible to make out streets. The elevation data was often misaligned, the imagery blurry, and the application’s perceptible ‘flatness’ made its 3D claim questionable to many reviewers.

  Nevertheless, its usefulness soon became clear to those well beyond the real-estate market. When American and coalition forces invaded Iraq in March 2003, the US news networks repeatedly used Earthviewer to visualize bombing targets across Baghdad. Newspapers reported that the coverage was ‘making a surprise star of a tiny tech company and its super-sophisticated 3D maps’. As users overwhelmed the website and crashed it, CEO John Hanke is reported to have said, ‘[t]here are worse problems to have.’44 The CIA was already taking an interest in Keyhole, and just weeks earlier had invested in the company through In-Q-Tel, a private non-profit company funded by the agency. The investment was In-Q-Tel’s first in a private company on behalf of the National Imagery and Mapping Agency (NIMA). NIMA, formed in 1996 and run out of the Department of Defense, had as its mission the provision of accurate geospatial information in support of military combat and intelligence. Announcing its investment in Keyhole, In-Q-Tel revealed that in ‘demonstrating the value of Keyhole’s technology to the national security community, NIMA used the technology to support United States troops in Iraq’.45 What exactly Keyhole did for the CIA remains unclear, but the injection of capital meant that the company’s short-term success was assured. By late 2004 it had launched six versions of Earthviewer.

  Then along came Google. In October 2004 the Internet search engine announced it had acquired Keyhole for an undisclosed sum. Jonathan Rosenberg, Google’s vice-president of Product Management, expressed his delight. ‘This acquisition gives Google users a powerful new search tool, enabling users to view 3D images of any place on earth as well as tap a rich database of roads, businesses and many other points of interest. Keyhole is a valuable addition to Google’s efforts to organize the world’s information and make it universally accessible and useful.’46 Looking back, Avi Bar-Zeev saw the acquisition of Keyhole as providing Google with the technology to design an application ‘to work like a physical globe on steroids’.47 But at the time nobody seemed aware of just how important the purchase would prove to be for Google’s wider business model.

  The story of Google’s rise to global pre-eminence has been told elsewhere,48 but a brief account of its emergence as one of the key players in the online world provides some explanation of why Keyhole was such an important addition to the company. Google’s founders, Sergey Brin and Larry Page, met at Stanford University in 1995, where both were working as Ph.D. students in computer science. The World Wide Web was still in its infancy, and both Brin and Page grasped the massive potential of developing a search engine that could navigate users around its myriad sites and links. Search engines like AltaVista lacked the ability to conduct ‘intelligent’ searches that could organize information in terms of reliability and relevance, and weed out the Web’s more unsavoury elements (including pornography).

  For Page and Brin, looking at the situation in the late 1990s, the challenge was obvious. ‘The biggest problem facing users of web search engines today’, they said in April 1998, ‘is the quality of the results they get back. While the results are often amusing and expand users’ horizons, they are often frustrating and consume precious time.’ Their solution was PageRank (a punning reference to Page’s name), which attempted to measure the importance of a particular webpage by assessing the number and quality of hyperlinks to it. The cartographic language used by Brin and Page from the outset in describing PageRank is striking. ‘The citation (link) graph of the web is an important resource that has largely gone unused in existing web search engines,’ they wrote in 1998. ‘We have created maps containing as many as 518 million of these hyperlinks, a significant sample of the total. These maps allow rapid calculation of a web page’s “PageRank”, an objective measure of its citation importance that corresponds well with people’s subjective idea of importance.’49 The result was a system that still drives each Google search – estimated at over 34,000 each second (2 million searches per minute or 3 billion per day) in 2011.50
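  The calculation Brin and Page describe can be illustrated with a short sketch. The link graph, page names and iteration count below are invented for illustration; only the damping factor of 0.85 comes from their 1998 paper, and this toy power-iteration loop stands in for Google's far larger production system:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: a page's importance is the sum of the importance
    of the pages linking to it, each divided by how many outbound
    links that page casts. `links` maps each page to its outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        # every page keeps a small baseline rank (the 'random surfer')
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # split rank across outlinks
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# A hypothetical four-page web: most pages link to A, so A ranks highest.
web = {
    "A": ["B"],
    "B": ["A"],
    "C": ["A"],
    "D": ["A", "B"],
}
ranks = pagerank(web)
```

The key property, as in Brin and Page's description, is that a link from a highly ranked page (here A's link to B) counts for more than a link from an obscure one, which is why B outranks C and D despite C and D also casting links.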

  In September 1997 Brin and Page registered ‘Google’ as a domain name (the intention was to use the name ‘googol’, the mathematical term for a one followed by a hundred zeros, but it was misspelt during the online registration). Within a year they had indexed 30 million pages online, and by July 2000 the figure stood at a billion. In August 2004 Google went public at $85 a share, raising $1.67 billion in one of the largest technology flotations ever. Between 2001 and 2009 its profits soared from an estimated $6 million to over $6 billion, with revenue of over $23 billion, 97 per cent of which came from advertising. With assets currently estimated at over $40 billion, Google processes 20 petabytes of information a day, and all with a global workforce of just 20,000, including an estimated 400 working on their geospatial applications.51 With such an extraordinary rise came an equally innovative business philosophy. As well as wanting to organize the world’s information and make it universally accessible, Google is driven by a series of beliefs laid out in its mission statement: ‘democracy on the web works’; ‘the need for information crosses all borders’, and, most contentiously of all, ‘you can make money without doing evil’.52

  By 2004 Google had fulfilled Claude Shannon’s theory of quantifying information digitally: the question was how such information could be commodified and translated into financial profit. Google’s motives for acquiring Keyhole were inextricably linked to answering this question, and also showed its ability to grasp how the Internet was changing. Rather than just passively viewing information, the online community was looking for greater interaction with the production of content and an increased capacity to manipulate it – a shift known as Web 2.0, which is characterized by blogging, networking and uploading a variety of different media. Google knew that if its ambition was to ‘organise the world’s information’, it needed some way of depicting its geographical distribution and enticing commercial and personal users to buy and then interact with it. What it in fact needed was the largest virtual GIS application available, and Keyhole’s Earthviewer provided the answer. Google’s first move following the acquisition was to slash the price of Earthviewer from $69.95 to $29.95. They then got ‘under the hood’, in Jonathan Rosenberg’s words, as they planned to rebrand it. In June 2005, eight months after first acquiring Keyhole, the company announced the launch of its new, free downloadable program: Google Earth.

  The first reviews were ecstatic. Harry McCracken, editor in chief of PC World magazine and its website, tested the application days before its official launch and called it ‘spellbinding’. It ranked, he wrote, ‘among the best free downloads in the history of free downloads’. He went on to outline the application’s benefits. It did not need a super-powerful PC to run, it enabled users to swoop and circle across the world, and cities and landscapes featured amazing 3D renderings that ‘are, indeed, wondrous’. Moving on to the drawbacks, McCracken admitted, ‘Google Earth is so spectacular, particularly for a free programme, that my first impulse was to feel guilty about criticising it.’ But the image resolution varied enormously, and some places were still not locatable (McCracken had a lot of trouble finding Hong Kong and Parisian restaurants). Data for the rest of the world was way behind that for the United States, and McCracken complained of difficulties in establishing what the application ‘does and doesn’t know’. He wondered how MSN’s forthcoming Virtual Earth software would compare to Google Earth, but, considering that the latter had been released in a beta (or trial) version, McCracken assumed, rightly, that it would quickly evolve.53

  McCracken understood that Google Earth was effectively an updated version of Keyhole’s Earthviewer (they both used the same code base), and that very little separated the two. What was new was the sheer amount of information that stood behind Google Earth. Google had poured hundreds of millions of dollars into buying and uploading commercial satellite and aerial imagery, an investment virtually no other company had the resources or foresight to make. When Earthviewer was first demonstrated to Sergey Brin, he thought at one level it was simply ‘cool’.54 But Google’s activities prior to its acquisition of the company suggest other factors were already at work. As early as 2002, long before it acquired Keyhole, Google had started buying high-resolution satellite imagery from companies like DigitalGlobe, whose two orbital satellites now capture imagery of up to a million square kilometres of the earth’s surface every day, at a resolution of less than half a metre. Google takes this data and scans it at 1,800 dots per inch, or 14 microns. The imagery is then colour-balanced and ‘warped’ to account for the curvature of the earth’s surface, after which it is ready to be accessed by users. But Google does not rely on satellite imagery alone. It also uses aerial photography taken at an elevation of between 4,500 and 9,000 metres, using aeroplanes, hot-air balloons – even kites.55 The need to diversify its sources of photographic data stems from its inability to prevent the data it receives from being blurred. Press stories from early 2009 claiming that the company censored sensitive locations by blurring places such as the US Vice President’s residence proved to be inaccurate: the censorship apparently lay with the initial data obtained direct from the US military, not with Google.56

  The company’s diversification was also driving another initiative. Just weeks before its acquisition of Keyhole in October 2004, Google had also acquired Where2, a small Australian-based digital mapping company, which began work on a new Google-branded map application. In February 2005, four months before Google Earth hit the market, Google announced the launch of Google Maps.57 Ultimately, the synergy between the two applications would enable the viewer to see a graphic, virtual map overlaid on a photo-real image of the earth’s surface, and today users are able to move between the two, depending on what kind of information they wish to access.
