A Brief History of Science with Levity


by Mike Bennett


  In late 1966 MIT went on to develop the computer network concept, and quickly put together a comprehensive plan, publishing it in 1967. The word “packet” was adopted from the work at the UK’s National Physical Laboratory (NPL), and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps. The network topology and economics were designed and optimised by an engineering team at UCLA (University of California, Los Angeles).

  Soon after this the first host-to-host message was sent. Two more nodes were added at UC Santa Barbara and at the University of Utah. These last two nodes incorporated application visualisation projects.

  UCSB investigated methods for the display of mathematical functions, using storage displays to deal with the problem of refresh over the net, while the University of Utah investigated methods of 3D representation over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilise the network. This tradition continues to this day.

  Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functioning host-to-host protocol and other network software. In December 1970 the Network Working Group (NWG) finished the initial ARPANET host-to-host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971–72, the network users could finally begin to develop applications.

  In October 1972 a large and very successful demonstration of the ARPANET was given at the International Computer Communication Conference (ICCC). This was the first demonstration of this new network technology to the public. It was also in 1972 that the initial “hot” application, electronic mail, was introduced.

  In July 1973, engineers further expanded email’s capabilities by writing the first email utility program that could list, selectively read, file, forward and respond to messages. From there email took off as the largest network application for over a decade. This was the embryo of the kind of activity we see on the World Wide Web today, namely, the enormous growth in all kinds of “people-to-people” traffic.

  The original ARPANET grew into the Internet. The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network. However, it was soon to include packet satellite networks, ground-based packet radio networks and other networks.

  The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider. It could be made to work with the other networks through a multi-level “internetworking architecture”.

  Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level. This was achieved by passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations.

  While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service. In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

  Key to making the packet radio system work was a reliable end-to-end protocol that could maintain effective communication in the face of jamming and other radio interference, and withstand intermittent blackout such as would be caused by being in a tunnel or blocked by the local terrain. It was first contemplated that a protocol local only to the packet radio network could be developed, since that would avoid having to deal with the multitude of different operating systems, while continuing to use NCP.

  However, NCP did not have the ability to address networks (and machines) further downstream than a destination server on the ARPANET, and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard.) NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt.

  In this model NCP had no end-to-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts. Thus it was decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.
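
  The difference is easiest to see with a toy example. Below is a minimal stop-and-wait sketch in Python (purely illustrative, not a description of TCP’s real machinery) of the end-to-end reliability idea the new protocol introduced: the sending host itself retransmits anything that does not get through, rather than trusting the network to be lossless.

```python
import random

# Deliberately tiny stop-and-wait sketch: the sender retransmits each packet
# until it gets through, so a lossy network no longer halts the application.
# Conceptual illustration only, not TCP's actual mechanisms.

def unreliable_send(packet, loss_rate=0.3):
    """Pretend network that randomly drops packets."""
    return None if random.random() < loss_rate else packet

def reliable_transfer(packets, max_retries=20):
    delivered = []
    for seq, data in enumerate(packets):
        for _ in range(max_retries):
            received = unreliable_send((seq, data))
            if received is not None:        # the receiver got it and would ACK
                delivered.append(received[1])
                break
        else:
            raise RuntimeError(f"packet {seq} lost after {max_retries} attempts")
    return delivered

print(reliable_transfer(["hello", "world"]))
```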

  Commercialisation of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking. Unfortunately they lacked real information both about how the technology was supposed to work and about how customers planned to use this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions. The US DoD had mandated the use of TCP/IP in many of its purchases, but gave little help to the vendors regarding how to build useful TCP/IP products.

  In 1985, recognising this lack of information availability and appropriate training, a three-day workshop was arranged for all vendors to learn how TCP/IP worked, and what it still could not do well. The speakers came mostly from the research community, whose members had both developed these protocols and used them in day-to-day work. About 250 vendor personnel came to listen to fifty inventors and experimenters. The results were surprising for both sides. The vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work), and the inventors were pleased to hear about new problems they had not considered but that the vendors were discovering in the field. Thus began a two-way discussion that lasted for over a decade.

  After two years of conferences, tutorials, design meetings and workshops, a special event was organised, inviting those vendors whose products ran TCP/IP well to come together in one room for three days to show off how well they all worked together over the Internet. In September 1988 the first Interop trade show was born. Fifty companies made the cut, and 5,000 engineers from potential customer organisations came to see if it all worked as promised. It did, because the vendors had worked extremely hard to ensure that everyone’s products interoperated with all of the other products, even those of their competitors. The Interop trade show has grown immensely since then, and today it is held in seven locations around the world each year. An audience of over 250,000 people comes to learn which products work with each other seamlessly, to learn about the latest products, and to discuss the latest technology.

  In parallel with the commercialisation efforts, the vendors began to attend meetings that were held three or four times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees, mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. The reason these meetings are so useful is that they bring together all of the stakeholders: researchers, end users and vendors.

  Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation. As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would no longer work. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults.

  In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols for this purpose were proposed, including Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community).

  A series of meetings led to the decision that HEMS would be withdrawn as a candidate for standardisation, but that work on both SNMP and CMIP would go forward, with the idea that SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.
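
  To give a flavour of what “managed in a uniform way” means in practice, here is a toy sketch of my own (not real SNMP code): every value a device exposes lives under a numeric object identifier (OID), and a manager simply asks the device’s agent for the value at a given OID. The two OIDs shown are standard MIB-2 system identifiers; the values are invented.

```python
# Toy sketch of the idea behind an SNMP-style GET: every managed value sits
# under a numeric object identifier (OID), and a manager queries an agent for
# it in a uniform way. Purely conceptual - a real agent speaks the SNMP wire
# protocol over UDP rather than exposing a Python dictionary.

class ToyAgent:
    def __init__(self):
        # Two standard MIB-2 "system" OIDs, with made-up values.
        self.mib = {
            "1.3.6.1.2.1.1.1.0": "Example router, release 1.0",  # sysDescr.0
            "1.3.6.1.2.1.1.3.0": 123456,                         # sysUpTime.0 (ticks)
        }

    def get(self, oid):
        return self.mib.get(oid, "noSuchObject")

agent = ToyAgent()
print(agent.get("1.3.6.1.2.1.1.1.0"))   # a manager reading the device description
print(agent.get("1.3.6.1.2.1.1.3.0"))   # a manager reading the uptime counter
```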

  The Internet has changed enormously since it came into existence. It was conceived in the era of time-sharing, but has progressed into the era of personal computers, client-server and peer-to-peer computing, the network computer, smartphones and tablets. It was designed before LANs existed, but has accommodated that network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and the World Wide Web. But most importantly, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars’ worth of annual investment.

  One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, and indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide new services such as real time transport, in order to support, for example, audio and video streams.

  The availability of pervasive networking (i.e. the Internet) along with powerful, affordable computing and communications in portable form (i.e. laptop computers, smartphones and tablets), is making possible a new paradigm of nomadic computing and communications. This evolution will bring us new applications in the future. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world.

  New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself. The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. With the success of the Internet has come a proliferation of stakeholders – stakeholders now with an economic as well as an intellectual investment in the network.

  We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for the future growth, for example to upgrade residential access to a more suitable ultra-fast technology. If the Internet stumbles, it will not be because we lack for technology, vision or motivation. It will be because we cannot set a direction and march collectively into the future.

  Now we will turn our attention to the Global Positioning System (GPS). For centuries, navigators and explorers have searched the heavens for a system that would enable them to locate their position on the globe with the accuracy necessary to avoid tragedy and to reach their intended destinations. On 26th June 1993, however, this quest became a reality. On that date, the US Air Force launched the twenty-fourth Navstar satellite into orbit, completing a network of twenty-four satellites known as the Global Positioning System. With a GPS receiver that costs a few hundred dollars, you can instantly find your location on the planet. Your latitude, longitude and even altitude will be known to within a few metres.

  This incredible new technology was made possible by a combination of scientific and engineering advances, particularly the development of the world’s most accurate timepieces: atomic clocks, which are precise to within a billionth of a second. The clocks were created by physicists seeking answers to questions about the nature of the universe, with no conception that their technology would some day lead to a global system of navigation. Today, GPS is saving lives, helping society in countless other ways, and generating thousands of jobs in a multi-billion dollar industry.
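
  The link between clock precision and positioning accuracy is straightforward: GPS positions are derived from signal travel times, so a timing error translates directly into a distance error at the speed of light. A quick sanity check (my own arithmetic, not a figure from the text):

```python
# A timing error of delta_t seconds shifts a measured range by roughly
# c * delta_t metres, since GPS ranges are derived from signal travel times.

c = 299_792_458.0     # speed of light in m/s
clock_error = 1e-9    # one billionth of a second

print(f"Range error per nanosecond: about {c * clock_error:.2f} m")  # ~0.30 m
```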

  In addition, atomic clocks have been used to confirm Einstein’s prediction that time passes more slowly for a clock in motion relative to an observer than for one at rest. I remember many years ago that British Airways were loaned two atomic clocks, which they placed at Heathrow airport for a week. This was to confirm that the two clocks kept exactly the same time over this period.

  One clock was then placed on a flight from London to Sydney. When the aircraft returned, the times on the two clocks were compared. It was found that the clock that had been aboard the aircraft had recorded slightly less elapsed time than the clock that had remained stationary in London. Although the difference registered was only billionths of a second, this confirmed an important pillar of Einstein’s predictions about space-time.
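
  For a sense of scale, here is a rough back-of-the-envelope estimate of the special-relativistic effect alone. The cruise speed and total flying time are my own illustrative assumptions rather than figures from the experiment, and the gravitational (general-relativistic) effect of flying at altitude is ignored:

```python
# First-order special-relativity estimate of how much less time a clock on a
# long-haul flight records compared with one left on the ground.
# The speed and duration below are illustrative assumptions, and the
# gravitational effect of flying at altitude is deliberately ignored.

c = 299_792_458.0            # speed of light, m/s
v = 900 * 1000 / 3600        # assumed cruise speed: 900 km/h expressed in m/s
flight_time = 40 * 3600      # assumed total flying time: 40 hours in seconds

time_lost = flight_time * v**2 / (2 * c**2)   # first-order time-dilation term

print(f"Flying clock records roughly {time_lost * 1e9:.0f} ns less")   # ~50 ns
```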

  When the Global Positioning System (GPS) was initially introduced, although it had the technical ability to be fantastically accurate, the American military deliberately corrupted the system using what was known at the time as “selective availability”. This meant that some erroneous information was transmitted in order to degrade the accuracy of which the system was capable. US military-specification GPS receivers filtered out the selective availability errors.

  I remember that when I purchased my first boat, I fitted a GPS receiver so that we would have a navigational backup to the conventional systems. Our position jumped around regularly; even when we were tied up at the quayside, the GPS would shift the reported position of our boat by up to 100 metres at regular intervals.

  Selective availability was supposedly introduced in order that foreign powers could not use this incredibly accurate system to target their weapons. It is difficult to imagine who thought up this ridiculous scenario. The designers of the system must have thought of including secondary and tertiary systems that they could switch to at the press of a button, which would encrypt the positioning information, making it only available to the US military. No foreign power would be stupid enough to use a guidance system for their weapons over which the Americans had absolute control.

  Selective availability was finally switched off when the penny dropped on this point, and also after GPS tracking was incorporated into all mobile smartphones. This feature enabled the authorities to pinpoint the location of any mobile phone, which had huge benefits for the rescue and law enforcement services. We are now able to locate any mobile phone to within a matter of metres, even if it is switched off.

  Advances in technology and new demands on the existing system have now led to efforts to modernise the GPS system and implement the next generation of GPS III satellites, and the next generation Operational Control System (OCS). Announcements from the White House initiated these changes, and then the US Congress authorised the modernisation effort to implement GPS III.

  In addition to GPS, other systems are in use or under development. The Russian Global Navigation Satellite System (GLONASS) was developed during the same period as GPS, but suffered from incomplete coverage of the globe until the mid-2000s. There are also the planned European Union Galileo positioning system, the Indian Regional Navigational Satellite System and the Chinese Compass Navigation System.

  Restrictions have been placed on the civilian use of GPS. The US government controls the export of some civilian receivers. All GPS receivers capable of functioning at above 18 kilometres (11 miles) in altitude and 515 metres per second, or designed or modified for use with unmanned air vehicles such as ballistic or cruise missile systems and drones, are classified as munitions (weapons) and therefore require State Department export licences.

  Previously this rule applied even to otherwise purely civilian units that only received the L1 frequency and the C/A (Coarse/Acquisition) code, and could not correct for selective availability. The US government discontinued SA on 1st May 2000, resulting in a much improved autonomous GPS accuracy.

  Disabling operation above these limits exempts the receiver from classification as a munition, but vendor interpretations of the rule differ. The rule refers to operation above both the altitude limit and the speed limit at the same time, yet some manufacturers block output when either limit is exceeded, so their receivers stop working even when travelling slowly at high altitude. This has caused problems with some amateur radio balloon launches that regularly reach altitudes of 30 kilometres (19 miles).
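
  As an illustration of the two interpretations (my own sketch, not any vendor’s actual firmware logic):

```python
# Two ways the altitude/speed export limits can be applied in receiver
# firmware. Purely illustrative; thresholds as described above.

ALT_LIMIT_M = 18_000       # 18 km altitude limit
SPEED_LIMIT_MPS = 515      # 515 m/s speed limit

def blocked_as_written(alt_m, speed_mps):
    """Block output only when BOTH limits are exceeded (what the rule describes)."""
    return alt_m > ALT_LIMIT_M and speed_mps > SPEED_LIMIT_MPS

def blocked_by_some_vendors(alt_m, speed_mps):
    """Block output when EITHER limit is exceeded (a stricter interpretation)."""
    return alt_m > ALT_LIMIT_M or speed_mps > SPEED_LIMIT_MPS

# A slow-drifting amateur balloon at 30 km altitude:
print(blocked_as_written(30_000, 5))        # False - a fix would still be provided
print(blocked_by_some_vendors(30_000, 5))   # True  - the receiver stops reporting
```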

  These limitations only apply to units exported from (or which have components exported from) the USA. There is a growing trade in various components, including GPS units, supplied by other countries, which are expressly sold as ITAR free.

  CHAPTER 23

  Now we will move on to discuss mobile phones, the most familiar type of wireless communication today. The system is often called “cellular” because it uses many base stations to divide a service area into multiple cells. Calls are transferred from base station to base station as a user travels from cell to cell.
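
  A minimal sketch of the handover idea (a simplified illustration of the concept, not any real cellular standard’s algorithm): the phone stays with its serving base station until a neighbouring cell’s signal is clearly stronger, at which point the call is handed over.

```python
# Simplified handover decision: stay on the serving base station until a
# neighbour's signal beats it by a hysteresis margin (to avoid "ping-pong"
# handovers), then transfer the call. Conceptual sketch only.

HYSTERESIS_DB = 3.0   # illustrative margin

def choose_station(serving, signal_dbm):
    """signal_dbm maps base-station name -> measured signal strength in dBm."""
    best = max(signal_dbm, key=signal_dbm.get)
    if best != serving and signal_dbm[best] > signal_dbm[serving] + HYSTERESIS_DB:
        return best       # hand the call over to the stronger cell
    return serving        # otherwise stay on the current cell

# A user driving away from cell A towards cell B:
print(choose_station("A", {"A": -70.0, "B": -80.0}))   # 'A' - still the best choice
print(choose_station("A", {"A": -95.0, "B": -85.0}))   # 'B' - handover occurs
```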

 
