
A History of the World in 12 Maps


by Jerry Brotton


  Wiener was convinced that ‘the brain and the computing machine have much in common’,21 and in a paper also published in 1948 called ‘A Mathematical Theory of Communication’ Shannon took this idea a step further. He argued that there were two connected problems in any act of communication: the act of defining the message, and what he called the ‘noise’ or interference that affected its transmission from one source to another. For Shannon, a message’s content was irrelevant: to maximize the effectiveness of its transmission he envisaged communication as a conduit. The message originates from a source, enters a transmission device, and is then transmitted across a specific medium, where it encounters a variety of irrelevant ‘noise’, before reaching its intended destination where it is interpreted by a receiver. This metaphor drew on a functional account of human language, but it could also be applied to mechanical messages like the telegraph, television, telephone or radio. Shannon showed that all these messages (including speech) could be digitally transmitted and measured, encoded as signals composed of ones and zeros.22 ‘If the base 2 is used,’ argued Shannon, ‘the resulting units may be called binary digits, or more briefly bits’ – thus introducing the term as a unit of countable information.23 Shannon went on to develop a theory of probability using complex algorithms that showed how to maximize the performance of the signal (or units of information) and minimize the transmission of unwanted errors or ‘noise’.24
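
  Shannon’s choice of base 2 is easy to make concrete. The short sketch below is a modern illustration rather than anything from Shannon’s paper: a message chosen from N equally likely alternatives carries log₂ N bits, and Shannon’s entropy formula extends the same count to alternatives of unequal probability.

    import math

    # A message drawn from N equally likely possibilities carries log2(N) bits.
    def bits_for_choices(n):
        return math.log2(n)

    print(bits_for_choices(2))     # a coin toss: 1.0 bit
    print(bits_for_choices(26))    # one letter from a 26-letter alphabet: ~4.7 bits

    # Shannon's entropy weights each possible message by its probability,
    # so a predictable source carries less information per message.
    def entropy_bits(probabilities):
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy_bits([0.5, 0.5]))   # 1.0 bit: maximum uncertainty for two outcomes
    print(entropy_bits([0.9, 0.1]))   # ~0.47 bits: a predictable source says less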

  Today, Shannon’s paper is widely regarded by computer engineers as the Magna Carta of the information age. He provided a theory of how to store and communicate digital information quickly and reliably, and how to convert data into different formats, allowing it to be quantified and counted. Information was now fungible, a commodity capable of quantification and mutual substitution with other commodities. The impact of such a theory on the field of computing hardware would be enormous, and it would also affect other disciplines – including cartography. Over the next two decades cartographers began to adopt Shannon’s theory to develop a new way of understanding maps, based on the so-called ‘map communication model’ (MCM). In 1977 Arno Peters’s great adversary Arthur Robinson proposed a radical reassessment of the function of maps to reflect what he described as ‘an increased concern for the map as a medium of communication’.25 Traditionally, any theory of maps had ended with their completion: the interest was purely in the cartographer’s struggle to impose some kind of order on a disparate, contradictory (or ‘noisy’) body of information that was incorporated into the map according to the cartographer’s subjective decisions. Drawing on Shannon’s theory of communication, Robinson now proposed that the map was simply the conduit across which a message travels from mapmaker to its user, or what he called the percipient.

  The effect upon the study of mapmaking was decisive. Instead of analysing the subjective and aesthetic elements of map design, Robinson’s map communication model demanded a new account of the functional and cognitive aspects of maps. The result was an examination of mapping as a process, which explained how mapmakers collected, stored and communicated geographical information, and then studied the percipient’s understanding and consumption of maps. Used alongside Shannon’s theories for maximizing communicative performance and minimizing noise, Robinson’s map communication model addressed a conundrum at least as old as Herodotus and Ptolemy: how to accommodate a mass of noisy and disparate geographical information (or hearsay) into an effective and meaningful map. Adapting Shannon’s theories of ‘noisy’ interference in transmitting information, Robinson aimed to minimize obstacles in what he designated as a map’s effective transmission. This meant avoiding inconsistent map design (for instance in the use of colour or lettering), poor viewing conditions (focusing again on the percipient) and ideological ‘interference’ (an enduring problem that took on greater resonance as Robinson continued his attack on Peters throughout the 1970s). With both Shannon’s theory of communication and Robinson’s map communication model directly incorporated into subsequent computer technology, digital geospatial applications like Google Earth appear to fulfil the dream of producing maps where form and function are perfectly united, and geographical information about the world is communicated instantaneously to the percipient at any time or place in the world.

  Claude Shannon’s theories changed the perception of the nature of information and its electronic communication, and would provide the foundation for the development of subsequent computerized technology. The spectacular growth of information technology (IT) and graphic computer applications like Google Earth is indebted to Shannon’s mathematical and philosophical propositions. To put Shannon’s theory of communication into practice in the 1940s required a degree of computing power that only began to emerge in subsequent years with vital breakthroughs in electronic technology. The invention of the transistor (a semiconductor device, the building block of what we now call ‘chips’) at the Bell Laboratories in New Jersey in 1947 narrowly predated Shannon’s paper, and in theory enabled the processing of electrical impulses between machines at a hitherto unimaginable speed. But it needed to be made from a suitable material to optimize its usage. In the 1950s a new process for manufacturing transistors from silicon was developed, and was perfected in 1959 by a company based in what became known as Silicon Valley, in northern California. In 1958–9 integrated circuits (ICs, known more commonly as ‘microchips’) were invented independently by Jack Kilby and Robert Noyce, enabling lighter, cheaper integration of transistors. These developments culminated in 1971 with the invention of the microprocessor – a computer on a chip – by the Intel engineer Ted Hoff (also working in Silicon Valley).26 The electronic vehicles required to test Shannon’s theories were now a reality.

  Due to their exorbitant cost at the time, the initial impact of these technological developments was limited outside government military and defence applications, but some geographers were already beginning to use Shannon’s ideas in developing new ways of representing data. The most important practical innovation for subsequent geospatial applications was the emergence of geographical information systems (GIS) in the early 1960s. GIS are systems that use computer hardware and software to manage, analyse and display geographical data to solve problems in the planning and management of resources. To ensure standardization, the results are referenced to a map on an established earth-coordinate system which treats the earth as an oblate spheroid.
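
  What ‘referencing to an earth-coordinate system’ means in practice can be sketched in a few lines. The example below is purely illustrative: it uses the modern WGS84 ellipsoid rather than the spheroids of the early systems, and the record shown is an invented one.

    # An oblate spheroid is defined by an equatorial radius and a flattening.
    # WGS84 values are used here only as an illustration.
    WGS84_A = 6_378_137.0              # semi-major (equatorial) axis, metres
    WGS84_F = 1 / 298.257223563        # flattening
    WGS84_B = WGS84_A * (1 - WGS84_F)  # semi-minor (polar) axis, metres

    # A GIS record ties attribute data to coordinates on that spheroid.
    parcel = {
        "latitude": 45.4215,           # degrees north (roughly Ottawa)
        "longitude": -75.6972,         # degrees east
        "land_use": "forestry",
        "capability_class": 3,
    }

    print(f"Polar axis is {WGS84_A - WGS84_B:,.0f} m shorter than the equatorial axis")
    print(f"Parcel at ({parcel['latitude']}, {parcel['longitude']}): {parcel['land_use']}")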

  In 1960 the English geographer Roger Tomlinson was working with an aerial survey company in Ottawa, Canada, on a government-sponsored inventory to assess the current use and future capability of land for agriculture, forestry and wildlife. In a country the size of Canada, to cover agricultural and forest areas alone would require over 3,000 maps on a scale of 1:50,000, even before the information could be collated and its results analysed. The government estimated that it would take 500 trained staff three years to produce the mapped data. But Tomlinson had an idea: he knew that the introduction of transistors into computers allowed for greater speed and larger memory. ‘Computers’, Tomlinson recalled, ‘could become information storage devices as well as calculating machines. The technical challenge was to put maps into these computers, to convert shape and images into numbers.’ The problem was that the largest machine then available was an IBM computer with just 16,000 bytes of memory, costing $600,000 (more than £4 million today), and weighing more than 3,600 kilograms.27
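
  Tomlinson’s phrase ‘to convert shape and images into numbers’ can be made concrete with a small sketch. The encoding below is purely illustrative (the CGIS used its own, more sophisticated scheme): a map sheet is divided into grid cells and each cell is stored as a one-byte land-use code, which makes plain how quickly even a coarse grid exhausts 16,000 bytes of memory.

    # Illustrative only: each grid cell of a map sheet becomes one numeric code.
    LAND_USE_CODES = {"agriculture": 1, "forestry": 2, "wildlife": 3, "water": 4}

    def rasterize(rows, cols, classify):
        """Flatten a map into one byte per cell, row by row.

        `classify(r, c)` is any rule that assigns a land-use class to the
        cell in row r, column c (here a toy rule; in practice it would come
        from digitizing the drawn map).
        """
        return bytes(LAND_USE_CODES[classify(r, c)]
                     for r in range(rows) for c in range(cols))

    # A coarse 126 x 126 grid already needs 15,876 bytes, almost the entire
    # 16,000-byte memory of the IBM machine described above.
    grid = rasterize(126, 126, lambda r, c: "forestry" if r < 63 else "agriculture")
    print(len(grid), "bytes")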

  In 1962 Tomlinson put his plan forward to the Canada Land Inventory. Showing the demonstrable influence of Shannon and Robinson’s theories of communication, he called it a geographic information system in which ‘maps could be put into numerical form and linked together to form a complete picture of the natural resources of a region, a nation or a continent. The computer could then be used to analyse the characteristics of those resources . . . It could thus help to devise strategies for rational natural resource management.’28 His proposal was accepted, and the Canada Geographic Information System (CGIS) became the first of its kind in the world. The ability of the resulting maps to represent colour, shape, contour and relief was still limited by printing technology (usually dot matrix printers), but at this stage it was their capacity to collate huge amounts of data that really mattered.

  The CGIS was still active in the early 1980s, using enhanced technology to generate more than 7,000 maps with a partially interactive capability. It inspired the creation of hundreds of other geographical information systems throughout North America, as well as substantial US government investment in the foundation of the National Center for Geographic Information and Analysis (NCGIA) in 1988. These developments in GIS marked a noticeable change in the nature and use of maps: not only were they entering a whole new world of computerized reproduction, but they promised to fulfil Shannon’s model of noise-free communication, facilitating new and exciting ways of organizing and presenting geographical information.29

  In the early days of implementing the CGIS, Tomlinson allowed himself a brief flight of fantasy: would it not be wonderful if there were a GIS database available to everyone that covered the whole world in minute detail? Even in the 1970s, the idea was still the preserve of science fiction, as computing power was simply unable to match Tomlinson’s aspiration. It was at this point that computer science began to take over from the geographers. Shannon had provided a theory of communicating countable information; the development of integrated circuits and microprocessors had led to a profound change in the capacity of computerized data; one of the challenges now was to develop hardware and software capable of drawing high-resolution graphics composed of millions of Shannon’s binary ‘bits’ of information, which could then be distributed across a global electronic network to a host of international users – in other words, an Internet.

  The Internet as we know it today was developed in the late 1960s by the US Defense Department’s Advanced Research Projects Agency in response to the threat of a nuclear attack from the Soviet Union. The department needed a self-sustaining communication network invulnerable to a nuclear strike, even if parts of the system were destroyed. The network would operate independently of a controlling centre, allowing data to be instantly rerouted across multiple channels from source to destination. The first computerized network went online on 1 September 1969, linking four computers in California and Utah, and was named ARPANET.30 In its first years, its interactivity was limited: public connection to ARPANET was expensive (between $50,000 and $100,000), and using its code was difficult. But gradually, technological developments throughout the 1970s began to open up the network’s possibilities. In 1971 the American computer programmer Ray Tomlinson sent the first email via ARPANET, using the @ sign for the first time to distinguish between an individual and their computer. The invention of the modem in 1978 allowed personal computers to transfer files without using ARPANET. In the 1980s a common communication protocol was developed that could be used by most computerized networks, paving the way for the development of the World Wide Web at CERN (the European Council for Nuclear Research) in Geneva in 1990. A team of researchers led by Tim Berners-Lee and Robert Cailliau designed an application that was capable of organizing Internet sites by information rather than location, using a hypertext transfer protocol (HTTP, a method of accessing or sending information found on web pages), and a uniform resource locator (URL, a method of establishing a unique address for a document or resource on the Internet).31
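
  The division of labour between HTTP and the URL is visible in the anatomy of any web address. The sketch below uses an invented example.org address and Python’s standard urlparse function simply to pull one apart.

    from urllib.parse import urlparse

    # An invented address, split into the parts a URL standardizes.
    parts = urlparse("http://example.org/maps/world?zoom=3")

    print(parts.scheme)   # 'http': the protocol used to fetch the resource
    print(parts.netloc)   # 'example.org': the host that serves it
    print(parts.path)     # '/maps/world': the document's address on that host
    print(parts.query)    # 'zoom=3': extra parameters passed to the server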

  These developments in information technology went hand in hand with the profound restructuring of Western capitalist economies that took place between 1970 and 1990. The worldwide economic crisis of the 1970s described in the last chapter led governments in the 1980s to reform economic relations through deregulation, privatization and the erosion of both the welfare state and the social contract between capital and labour organizations. The aim was to enhance productivity and globalize economic production, based on technological innovation. As Castells argues, the relations between a reinvigorated capitalism and electronic technology were mutually self-reinforcing, characterized by ‘the old society’s attempt to retool itself by using the power of technology to serve the technology of power’.32 In contrast to Arno Peters’s 1973 projection, which was a direct response to the economic crisis and political inequalities of the 1970s, the next generation of geospatial applications emerging in the early 1980s were born out of the economic policies of Reaganism and Thatcherism.

  The results of this economic change can be seen in the rise of computer graphics companies in California’s Silicon Valley from the 1980s onwards, which began developing the user-friendly graphics that would characterize the future of online user experience. Prominent among them was Silicon Graphics (SGI), founded in 1981, which specialized in 3D graphics display systems; in the late 1990s a group of its veterans, including Michael T. Jones, Chris Tanner, Brian McClendon, Rémi Arnaud and Richard Webb, went on to found Intrinsic Graphics to design applications that could render graphics at a previously unimaginable speed and resolution. SGI understood that the most compelling way to demonstrate its new technology was by visualizing it geographically.

  One of SGI’s inspirations was a nine-minute documentary film, Powers of Ten, made in 1977 by Charles and Ray Eames. The film opens with a couple picnicking in a park in Chicago, filmed from just 1 metre away. It then zooms out by successive powers of ten, to 10²⁵ metres, or roughly a billion light years away, to imagine the perspective from the very edge of the known universe. The film then tracks back to the couple in the park, into the man’s hand, right down through his body and molecular structure, finally ending with a view of the subatomic particles of a carbon atom at 10⁻¹⁷ metres.33 For its producers, the film’s message was one of universal connectedness, derived from the graphic visualization of mathematical scale. It quickly attained cult status within and beyond the scientific community. SGI’s challenge was to take the principle explored in Powers of Ten and unify satellite imagery and computerized graphics to zoom seamlessly between the earth and space very quickly – without being locked into the power of ten (or any other particular multiplier). They needed to mask the obvious intervention of technology in an attempt to simulate perfectly the experience of flight above the earth and deep into the cosmos.

  By the mid-1990s SGI was starting to demonstrate its new capabilities. It began working on hardware called ‘InfiniteReality’, which used an innovative component called a ‘clip-map’ texture unit.34 A clip-map is a clever way of pre-processing an image so that it can quickly be rendered on the screen at different resolutions. It is a technological refinement of a MIP map (from the Latin ‘multum in parvo’, ‘many things in a small space’). Imagine a large digital image – like a map of the United States – at a resolution of 10 metres per pixel. The dimensions of the image would be in the region of 420,000 × 300,000 pixels. If the user zooms out to view the whole image on a 1,024 × 768 monitor, each pixel on the screen has to stand for roughly 160,000 pixels of the underlying map. Like a MIP map, a clip-map therefore starts from a slightly larger source image that includes pre-processed copies of the image at successively lower resolutions, arranged rather like an inverted pyramid. When the computer renders a lower-resolution view it avoids the need to interpolate each pixel from the full-size image and instead uses the appropriate pre-processed level. Using an innovative algorithm, all the clip-map needs to know is where you are in the world; it will then extract the specific data required from the larger virtual ‘texture’ – all the information which represents the world – ‘clipping’ off the bits you don’t need. So as you zoom down towards the earth from space, the system supplies the screen with the information central to the user’s view, and everything else is discarded. This makes the application extremely economical with memory, allowing it to run quickly and efficiently on home computers. As Avi Bar-Zeev, one of the early employees at Intrinsic Graphics, puts it, the application is ‘like feeding an entire planet piecewise through a straw’.35 In Claude Shannon’s terms, clip-mapping uploads as little data as possible onto the graphics processing unit, to maximize speed and enable the animation in real time of complex realities – like physical geography.
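
  The logic can be sketched in a few lines. The halving of resolution between pyramid levels and the 2,048-pixel clip window below are illustrative assumptions rather than SGI’s actual parameters; the point is simply that, for any view, only a small window of one pyramid level ever needs to be held in memory.

    import math

    FULL_WIDTH, FULL_HEIGHT = 420_000, 300_000  # whole-US image at 10 m per pixel
    METRES_PER_PIXEL_L0 = 10.0                  # resolution of the finest level
    CLIP_SIZE = 2_048                           # square window kept in memory per level

    def choose_level(metres_per_screen_pixel):
        """Pick the pyramid level whose resolution best matches the view.

        Level 0 is the full 10 m image; each level above it halves the
        resolution (20 m, 40 m, ...), exactly like a MIP chain.
        """
        ratio = max(metres_per_screen_pixel / METRES_PER_PIXEL_L0, 1.0)
        return int(math.log2(ratio))

    def clip_region(centre_x_m, centre_y_m, metres_per_screen_pixel):
        """Return (level, x0, y0, x1, y1): the only pixels that must be loaded."""
        level = choose_level(metres_per_screen_pixel)
        mpp = METRES_PER_PIXEL_L0 * (2 ** level)      # metres per pixel at this level
        width, height = FULL_WIDTH >> level, FULL_HEIGHT >> level
        cx, cy = int(centre_x_m / mpp), int(centre_y_m / mpp)
        half = CLIP_SIZE // 2
        return (level,
                max(cx - half, 0), max(cy - half, 0),
                min(cx + half, width), min(cy + half, height))

    # Zoomed right out, the view uses a coarse level whose whole extent fits in
    # the window; zoomed right in, it uses level 0 but only a 2,048-pixel slice
    # of it, never all 420,000 x 300,000 pixels.
    print(clip_region(2_100_000, 1_500_000, 4_100))   # roughly the whole country on screen
    print(clip_region(2_100_000, 1_500_000, 10))      # street-level view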

  Mark Aubin, one of SGI’s engineers, recalls that ‘our goal was to produce a killer demo to show off the new texturing capabilities’ that drew on commercially available satellite and aerial data of the earth. The result was ‘Space-to-your-face’, a demonstration model that Aubin reveals was inspired more by computer gaming than by geography. After looking at a flipbook of Powers of Ten, Aubin remembers, ‘we decided that we would start in outer space with a view of the whole Earth, and then zoom in closer and closer’. From there, the demo focused in on Europe,

  and then, when Lake Geneva came into view, we’d zero in on the Matterhorn in the Swiss Alps. Dipping down lower and lower, we’d eventually arrive at a 3-D model of a Nintendo 64 [video game console], since SGI designed the graphics chip it uses. Zooming through the Nintendo case, we’d come to rest at the chip with our logo on it. Then we’d zoom a little further and warp back into space until we were looking at the Earth again.36

  SGI’s ‘killer demo’ was impressive and enthusiastically received by those who saw it, but more work was needed on both the software and the data. They needed to move quickly, because the bigger corporations were already beginning to see the potential of developing such applications. In June 1998 Microsoft launched TerraServer (the forerunner of Microsoft Research Maps, or MSR). In collaboration with the United States Geological Survey (USGS) and the Russian space agency Sovinformsputnik, TerraServer used their aerial photographic imagery to produce virtual maps of the United States. But even Microsoft did not fully grasp the significance of the application. Initially it was developed to test how much data its SQL Server could store without crashing. The content was secondary to the sheer size of its data, which in less than two years grew to over 2 terabytes.37

 
