A Brief History of Science with Levity


by Mike Bennett


  The basic concept of cellular phones began in 1947, when researchers looked at crude mobile car phones and realised that by using small cells they could substantially increase the range and traffic capacity of mobile phones. However, at that time the technology to do so did not exist.

  Anything to do with broadcasting, that is sending a radio or television signal out over the airwaves, comes under government regulation, and a cell phone is a type of two-way radio. In the 1950s, various companies proposed that the authorities should allocate a large number of radio frequencies so that a widespread mobile telephone service would become feasible; the companies would then also have an incentive to research the new technology required. We can partly blame government departments for the gap between the initial concept of a cellular service and its availability to the public.

  Initially very few frequency bands were approved. The government reconsidered its position in 1968, stating, “If the technology to build a better mobile service works, we will increase the frequency allocation, freeing the airwaves for more mobile phones.” Prospective system operators then proposed a cellular network of many small, low-powered broadcast towers, each covering a cell of a few miles in radius but collectively covering a much larger area. Each tower would use only a few of the total frequencies allocated to the system. As the phones travelled across the area, calls would be passed from tower to tower.
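  To make the idea concrete, here is a minimal sketch in Python (purely illustrative; the tower positions, cell radius and channel numbers are invented for the example rather than taken from any real system). Each tower serves one small cell using only a few of the system’s channels, distant towers reuse the same channels, and a moving call is handed from tower to tower:

    from dataclasses import dataclass

    @dataclass
    class Tower:
        name: str
        centre_mile: float   # position of the tower along a straight road
        channels: set        # the few frequencies this tower may use

    RADIUS = 2.0             # each cell covers a few miles in radius

    # Adjacent towers use disjoint channel sets to avoid interference;
    # non-adjacent towers can safely reuse the same frequencies.
    towers = [
        Tower("A", 0.0, {1, 2, 3}),
        Tower("B", 4.0, {4, 5, 6}),
        Tower("C", 8.0, {1, 2, 3}),  # reuses tower A's channels
    ]

    def serving_tower(position_mile):
        """Return the nearest tower whose cell covers the phone, if any."""
        nearest = min(towers, key=lambda t: abs(t.centre_mile - position_mile))
        return nearest if abs(nearest.centre_mile - position_mile) <= RADIUS else None

    # Drive along the road: the call is handed off wherever the serving tower changes.
    current = None
    for mile in range(9):
        tower = serving_tower(float(mile))
        if tower is not None and tower is not current:
            print(f"mile {mile}: handed off to tower {tower.name}, channels {tower.channels}")
            current = tower

With only six channels, this little system can cover an arbitrarily long road, which is exactly the capacity gain the researchers of 1947 foresaw.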

  The first fully automated mobile phone system for vehicles was launched in Sweden in 1956. Named MTA (Mobiltelefonisystem A), it allowed calls to be made and received in the car using a rotary dial, and the car phone could also be paged. Calls from the car were direct-dial, whereas incoming calls required an operator to determine which base station the phone was currently at. It was developed by Sture Laurén and other engineers at the network operator Televerket.

  Ericsson provided the switchboard while Svenska Radioaktiebolaget (SRA) and Marconi provided the telephones and base station equipment. MTA phones consisted of vacuum tubes and relays, and weighed 40 kg. In 1962, an upgraded version called Mobile Telephone B (MTB) was introduced. This was a push-button telephone, and used transistors and DTMF signalling to improve its operational reliability. In 1971 the MTD version was launched, and by that time several different brands of equipment were gaining commercial success. The network remained open until 1983, and had 600 customers when it closed.

  ANALOGUE CELLULAR NETWORKS (1G)

  The first automatic analogue cellular system deployed was NTT’s, launched in Tokyo in 1979. This system later spread to cover the whole of Japan, while in the Nordic countries the NMT (Nordic Mobile Telephone) system entered service in 1981.

  The first analogue cellular system widely deployed in North America was the Advanced Mobile Phone System (AMPS). It was commercially introduced in the Americas in October 1983, in Israel in 1986 and in Australia in 1987. AMPS was a pioneering technology that helped drive the mass-market usage of cellular technology, but it had several serious issues by modern standards. Firstly, it was unencrypted, making it vulnerable to eavesdropping with a simple scanner. It was also susceptible to cell phone cloning, and its Frequency Division Multiple Access (FDMA) scheme required significant amounts of the available wireless spectrum to support it.

  On 6th March 1983, the Motorola DynaTAC mobile phone was launched on the first US 1G network. It had cost $100 million to develop, and had taken over a decade to reach the market. The phone had a talk time of just half an hour, and took ten hours to recharge. Despite the weight, short talk time and poor battery life, consumer demand was strong and waiting lists ran into the thousands.

  Analogue AMPS, used by iconic early handsets such as the Motorola DynaTAC, was eventually superseded by Digital AMPS (D-AMPS) in 1990, and the AMPS service was shut down by most North American carriers by 2008.

  DIGITAL CELLULAR NETWORKS (2G)

  In the 1990s, second-generation mobile phone systems emerged. Two systems competed for supremacy in the global market: the European-developed GSM standard and the US-developed CDMA standard. These differed from the previous generation by using digital rather than analogue transmission, and by introducing fast out-of-band phone-to-network signalling. The rise in mobile phone usage as a result of 2G was explosive, and this era also saw the advent of prepaid mobile phones.

  In 1991 the first GSM network (Radiolinja) was launched in Finland. In general the frequencies used by 2G systems in Europe were higher than those in America, though with some overlap. For example, the 900 MHz frequency range was used for both 1G and 2G systems in Europe, so the 1G systems were rapidly closed down to make space for the 2G systems. In America the IS-54 standard was deployed in the same band as AMPS, and displaced some of the existing analogue channels.

  In 1993, the IBM Simon was introduced. This was possibly the world’s first smartphone: a mobile phone, pager, fax machine and PDA all rolled into one. It included a calendar, address book, clock, calculator, notepad, email and a touchscreen with a QWERTY keyboard. The Simon came with a stylus for tapping the touchscreen, and featured predictive typing that would guess the next characters as you tapped. It even had applications, or at least a way to deliver more features, by plugging a 1.8 MB PCMCIA memory card into the phone.

  Coinciding with the introduction of 2G systems was a trend away from the larger brick-sized phones toward tiny 100–200 gram handheld devices. This change was possible not only through technological improvements such as more advanced batteries and more energy-efficient electronics, but also because of the higher density of cell sites needed to accommodate increasing usage. The latter meant that the average transmission distance from phone to base station was shortened, leading to increased battery life whilst on the move.

  The second generation introduced a new variant of communication called SMS or text messaging. It was initially available only on GSM networks but spread eventually to all digital networks. The first machine-generated SMS message was sent in the UK on 3rd December 1992, followed in 1993 by the first person-to-person SMS sent in Finland. The advent of prepaid services in the late 1990s soon made SMS the communication method of choice amongst the young, a trend which then spread across all age groups.

  2G also introduced the ability to access media content on mobile phones. In 1998 the first downloadable content sold to mobile phones was the ringtone, launched by Finland’s Radiolinja (now Elisa). Advertising on the mobile phone first appeared in Finland when a free daily SMS news headline service was launched in 2000, sponsored by advertising.

  Mobile payments were trialled in 1998 in Finland and Sweden, where a mobile phone was used to pay for a Coca-Cola vending machine and car parking. Commercial launches followed in 1999 in Norway. The first commercial payment system to mimic banks and credit cards was launched in the Philippines in 1999, simultaneously by mobile operators Globe and Smart.

  The first full Internet service on mobile phones was introduced by NTT DoCoMo in Japan in 1999.

  MOBILE BROADBAND DATA (3G)

  As the use of 2G phones became more widespread and people began to utilise mobile phones in their daily lives, it became clear that demand for data (such as access to browse the Internet) was growing. Experience with fixed broadband services also pointed to an ever-increasing demand for greater data speeds. The 2G technology was nowhere near up to the job, so the industry began to work on the next generation of technology, known as 3G. The main technological difference that distinguishes 3G from 2G is the use of packet switching rather than circuit switching for data transmission. In addition, the standardisation process focused on requirements more than technology (a maximum data rate of 2 Mbit/s indoors and 384 kbit/s outdoors, for example).
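  A minimal sketch in Python (illustrative only, not drawn from any actual 3G protocol) shows the essential difference: instead of holding a dedicated circuit open for the whole session, packet switching splits the data into independently delivered, numbered packets that the receiver reassembles:

    def packetise(message: bytes, size: int = 8):
        """Split a byte stream into (sequence number, payload) packets."""
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def reassemble(packets):
        """Rebuild the stream even if packets arrive out of order."""
        return b"".join(payload for _, payload in sorted(packets))

    packets = packetise(b"384 kbit/s outdoors, 2 Mbit/s indoors")
    packets.reverse()  # simulate out-of-order arrival
    assert reassemble(packets) == b"384 kbit/s outdoors, 2 Mbit/s indoors"

Because the radio link is only occupied while packets are actually in flight, many users can share the same capacity, which is what made always-on mobile Internet access economical.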

  Inevitably this led to many competing standards with different contenders pushing their own technologies, and the vision of a single unified worldwide standard looked far from reality. The standard 2G CDMA networks became 3G compliant with the adoption of Revision A to EV-DO, which made several additions to the protocol while retaining backwards compatibility.

  The first network with 3G was launched by NTT DoCoMo in the Tokyo region of Japan in May 2001. European launches of 3G were made soon after in Italy and the UK by the Three/Hutchison group. The high connection speeds of 3G technology enabled a transformation in the industry. For the first time, media streaming of radio and television content to 3G handsets became possible, with companies such as RealNetworks and Disney among the early pioneers in this type of offering.

  In the mid-2000s, an evolution of 3G technology began to be implemented, namely High Speed Downlink Packet Access (HSDPA). This is an enhanced 3G mobile telephony communications protocol in the High Speed Packet Access (HSPA) family, also dubbed 3.5G, 3G+ or Turbo 3G. It allows networks based on the Universal Mobile Telecommunications System (UMTS) to have higher data transfer speeds and capacity. Current HSDPA deployments support downlink speeds of 1.8, 3.6, 7.2 and 14.0 Mbit/s.

  By the end of 2013, there were 1.28 billion subscribers on 3G networks worldwide. The 3G telecoms services generated over $220 billion of revenue during 2013, and in many markets the majority of new phones activated were 3G phones. In Japan and South Korea, the market no longer supplies second-generation phones.

  Although mobile phones had long had the ability to access data networks such as the Internet, it was not until the widespread availability of good quality 3G coverage in the mid-2000s that specialised devices appeared to access the mobile Internet. The first such devices, known as “dongles”, plugged directly into a computer through the USB port. Another new class of device appeared subsequently, the so-called “compact wireless router” such as the Novatel MiFi. These made the existing 3G Internet connectivity available to multiple computers simultaneously over Wi-Fi, rather than just to a single computer via a USB plug-in.

  Such devices became especially popular for use with laptop computers due to the added portability they bestow. Consequently, some computer manufacturers started to embed the mobile data function directly into the laptop so a dongle or MiFi wasn’t needed. Instead, the SIM card could be inserted directly into the device itself to access the mobile data services. Such 3G-capable laptops became commonly known as “netbooks”. Other types of data-aware devices followed in the netbook’s footsteps. By the beginning of 2010, e-readers, such as the Amazon Kindle and the Nook from Barnes & Noble, had already become available with embedded wireless Internet, and Apple had announced plans for embedded wireless Internet on its iPad tablet devices beginning in late 2010.

  NATIVE IP NETWORKS (4G)

  By 2009, it had become clear that, at some point, 3G networks would be overwhelmed by the growth of bandwidth-intensive applications like streaming media. Consequently, the industry began looking to data-optimised fourth-generation technologies, with the promise of speed improvements up to tenfold over existing 3G technologies. The first two commercially available technologies billed as 4G were the WiMAX standard (offered in the US by Sprint) and the LTE standard, first offered in Scandinavia by TeliaSonera.

  One of the main ways in which 4G differed technologically from 3G was in its elimination of circuit switching, instead employing an all-IP network. Thus, 4G ushered in a treatment of voice calls just like any other type of streaming audio media, utilising packet switching over Internet, LAN or WAN networks. As we head towards 2020, the second half of this decade will certainly see the widespread introduction of 4G infrastructure, and further technical advances will be in the pipeline.

  LASERS

  The final major scientific and technological advance of this era that has had a profound effect on everyone’s life was the development of the laser. Lasers can burn movies onto DVDs, mark diamonds, precision-cut metals, destroy missiles, perform eye, cancer and cosmetic surgery, and even whiten teeth.

  Natural light, or sunlight, contains an entire spectrum of frequencies with which we are all familiar. In the early days of science, monochromatic light could be produced using sodium lamps and other devices. However, although the light was monochromatic (that is, of a single specific frequency), it was not coherent: the photons emitted were not in phase with one another. The unique feature of laser light is that it is not only monochromatic but also coherent, which gives it some special and very useful properties.
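  The practical difference can be shown with a standard textbook calculation (general physics, not specific to any particular device). If N emitters each radiate a field of amplitude E_0, coherent fields add as amplitudes while incoherent fields add only as intensities:

\[
  I_{\mathrm{coherent}} \propto \Bigl|\sum_{k=1}^{N} E_0 e^{i\phi}\Bigr|^{2} = N^{2}E_0^{2},
  \qquad
  I_{\mathrm{incoherent}} \propto \sum_{k=1}^{N} \bigl|E_0 e^{i\phi_k}\bigr|^{2} = N E_0^{2}.
\]

A coherent source therefore concentrates its energy far more effectively, which is one reason a milliwatt laser beam appears so much more intense than a milliwatt lamp.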

  I became interested in lasers when I was still at school. I remember saving the money that I earned from my paper round and eventually buying my first laser. It was manufactured by a company called Melles Griot and was a helium neon laser. It had a power of 2 mW and produced a beam of red laser light at a wavelength of 632.8 nm.

  Unlike in James Bond movies, you cannot see the path of a “visible wavelength” laser beam in clean air. The beam is only visible if the air contains dust particles or smoke which will reflect some of the light, thereby revealing its path. In addition, lasers do not make silly noises when they operate.

  The device itself was around 20 cm long, but it also required a power pack of about the same size. At that time, solid-state lasers had not been developed. The early lasers such as mine consisted of an evacuated glass plasma tube containing the necessary components to enable the device to operate. Subsequently, glass tube lasers were replaced by solid-state lasers, in exactly the same way that transistors and microchips replaced valves (tubes). Today solid-state lasers are produced that operate both within and outside the visible light spectrum.

  The word laser is an acronym for Light Amplification by Stimulated Emission of Radiation. Albert Einstein first explained the theory of stimulated emission in 1917, and this became the basis of the laser. He postulated that, when a population inversion exists between the upper and lower energy levels of an atomic system, amplified stimulated emission can be realised, and that the stimulated emission would have the same frequency and phase as the incident radiation. However, it was in the late 1940s and 1950s that scientists and engineers did extensive work to realise a practical device based on this principle. Notable scientists who pioneered the work include Charles Townes, Joseph Weber, Alexander Prokhorov and Nikolai G Basov.
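  In textbook notation (a standard statement of the physics rather than a quotation from Einstein), thermal equilibrium keeps the upper level underpopulated according to the Boltzmann distribution,

\[
  \frac{N_2}{N_1} = e^{-(E_2 - E_1)/k_B T} < 1,
\]

while the net amplification of light at the transition frequency is proportional to \(N_2 - N_1\). Gain is therefore only possible once a pump drives the system into population inversion, \(N_2 > N_1\), which is precisely the condition the laser pioneers had to engineer.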

  Initially, the scientists and engineers were working towards the realisation of a MASER (Microwave Amplification by Stimulated Emission of Radiation), a device that amplified microwaves for immediate application in microwave communication systems. Townes and the other engineers believed it possible to create an optical maser, a device for creating powerful beams of light by using higher frequency energy to stimulate what was to be termed the lasing medium. Despite the pioneering work of Townes and Prokhorov, it was left to Theodore Maiman in 1960 to build the first laser, using ruby as the lasing medium, stimulated by high-energy flashes of intense light.

  The development of lasers has been a turning point in the history of science and engineering. It has produced completely new types of systems with the potential for applications in a wide variety of fields. During the 1960s, a great deal of work was carried out on the basic development of almost all the major lasers, including high-power gas dynamic and chemical lasers. Almost all the practical applications of these lasers, in defence as well as in industry, were also identified during this period. The prospect of using high-power lasers in strategic scenarios was a great driving force behind their rapid development. In the early 1970s, megawatt-class carbon dioxide gas dynamic lasers were successfully developed and tested against typical military targets. The development of chemical, free-electron and X-ray lasers took slightly longer because of the multidisciplinary approach involved.

  Chemical lasers are powered by a chemical reaction that releases a large amount of energy quickly. Such very high-power lasers are of particular interest to the military. However, continuous-wave chemical lasers at very high power levels, fed by streams of gases, have been developed and have some industrial applications. For example, in the hydrogen fluoride laser (2,700–2,900 nm) and the deuterium fluoride laser (3,800 nm), the reaction is the combination of hydrogen or deuterium gas with the combustion products of ethylene in nitrogen trifluoride.

  Excimer lasers are a special sort of gas laser powered by an electric discharge, in which the lasing medium is an excimer, or more precisely an exciplex in existing designs. These are molecules which can only exist with one atom in an excited electronic state. Once the molecule transfers its excitation energy to a photon, its atoms are no longer bound to each other and the molecule disintegrates. This drastically reduces the population of the lower-energy states, thus greatly facilitating a population inversion.

  Excimer lasers currently use noble gas compounds; noble gases are chemically inert and can only form compounds while in an excited state. Excimer lasers typically operate at ultraviolet wavelengths, with major applications including semiconductor photolithography and LASIK eye surgery. Commonly used excimer molecules include ArF (emission at 193 nm), KrCl (222 nm), KrF (248 nm), XeCl (308 nm) and XeF (351 nm). The molecular fluorine laser, emitting at 157 nm in the vacuum ultraviolet, is sometimes referred to as an excimer laser; however, this appears to be a misnomer, inasmuch as F2 is a stable compound.
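  These short wavelengths matter because photon energy rises as wavelength falls. Using the standard relation (a textbook formula, not one given above),

\[
  E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{\lambda},
  \qquad
  E_{193\,\mathrm{nm}} \approx \frac{1240}{193} \approx 6.4\ \mathrm{eV},
\]

an ArF photon carries enough energy to break chemical bonds directly rather than simply heating the target, which is why excimer lasers are so well suited to photolithography and delicate eye surgery.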

  Solid-state lasers use a crystalline or glass rod which is “doped” with ions that provide the required energy states. For example, the first working laser was made from a ruby crystal (chromium-doped corundum). The population inversion is actually maintained in the “dopant”, such as chromium or neodymium. These materials are pumped optically using a shorter wavelength than the lasing wavelength, often from a flashtube or from another laser.

  It should be noted that “solid state” in this sense refers to a crystal or glass; this usage is distinct from “solid-state electronics”, which refers to semiconductors. Semiconductor lasers (laser diodes) are pumped electrically and are thus not referred to as solid-state lasers. The class of solid-state lasers would properly include fibre lasers, in which dopants in the glass lase under optical pumping; in practice, however, these are simply referred to as “fibre lasers”, with “solid-state” reserved for lasers using a solid rod of such a material.

 
