
The Science Book


by Clifford A. Pickover


  The first Internet-like interconnection of computers occurred in 1969, when four computers were linked under the name ARPANET (Advanced Research Projects Agency Network), using concepts and ideas developed by American engineer Paul Baran, Welsh scientist Donald Davies, and Lawrence Roberts of MIT’s Lincoln Laboratory. Then this tiny network started growing: by 1984 the number of host computers hit 1,000; by 1987 it was 10,000.

  Two key technologies important to the early Internet were NCP (Network Control Program) and IMPs (Interface Message Processors). Together these technologies created what is known as a packet-switched network between all the computers. When a host computer wanted to send something to another computer, it would break the information into small data packets and hand them, along with a destination address, to its IMP. The IMP, working in conjunction with other IMPs, would deliver the packets to the desired recipient computer. The two computers had no idea how the packets would travel over the network, nor did they care. Once the packets arrived, the receiving computer would reassemble them. NCP would eventually be replaced by TCP/IP (Transmission Control Protocol/Internet Protocol), and IMPs would be replaced by routers. At that point, engineers had created the Internet as we know it today.
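
  The principle is easy to sketch in code. The following Python fragment is a toy illustration of packet switching, not the actual NCP or IMP protocol; the message text and host name are invented. It breaks a message into numbered packets that carry a destination address and reassembles them no matter what order the network delivers them in.

```python
import random

def to_packets(message: str, dest: str, size: int = 8):
    """Break a message into small packets, each carrying the destination
    address and a sequence number so the receiver can reassemble them."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"dest": dest, "seq": i, "data": c} for i, c in enumerate(chunks)]

def reassemble(packets):
    """Sort by sequence number and rejoin the data, regardless of the
    order in which the network delivered the packets."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("LOGIN REQUEST FROM UCLA TO SRI", dest="host-sri")
random.shuffle(packets)      # the sender never controls route or order
print(reassemble(packets))   # -> LOGIN REQUEST FROM UCLA TO SRI
```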

  SEE ALSO Telephone (1876), ENIAC (1946), Transistor (1947), World Wide Web (1990).

  This is a photo of the front panel of the very first Interface Message Processor (IMP). This one was used at the UCLA Boelter 3420 lab to transmit the first message on the Internet.

  1969

  First on the Moon • Jim Bell

  Neil A. Armstrong (1930–2012), Edwin G. “Buzz” Aldrin (b. 1930), Michael A. Collins (b. 1930)

  After Yuri Gagarin became the first human in space, the race between the United States and the Soviet Union quickly became focused on the next big milestone: landing astronauts on the Moon and bringing them safely back to the Earth. The Soviet Vostok program was reoriented toward the larger rockets and landing systems needed for a lunar landing and return. In America, the challenge was to beat the Russians and to meet slain president John F. Kennedy’s 1961 goal of doing it “before this decade is out.”

  A series of incrementally more advanced US missions was conducted between 1961 and 1969, starting with the Mercury single-astronaut flights, continuing with the Gemini two-person Earth-orbital docking and rendezvous flights, and culminating in the three-person Apollo missions to the Moon. Apollo 8 achieved an important first in 1968 by sending the first humans to orbit the Moon and view the whole Earth and the lunar far side firsthand; the feat was repeated in early 1969 with the flight of Apollo 10, a full dress rehearsal of a lunar landing that sent astronauts to within 10 miles (16 kilometers) of the lunar surface before returning home. Meanwhile, the Russians continued to make progress in their own secret lunar cosmonaut program. Several catastrophic unmanned launch failures in 1969 set their program back significantly, however, opening the door for an American victory.

  That victory came on July 20, 1969, with the entire world watching as astronauts Neil A. Armstrong and Edwin G. “Buzz” Aldrin became the first humans to land, walk, and work on the Moon. Armstrong and Aldrin landed on the ancient volcanic lava flows of the Mare Tranquillitatis (Sea of Tranquillity) impact basin (age dating of the samples showed them to be 3.6–3.9 billion years old), and spent about two and a half hours collecting samples and exploring the terrain. After less than a day, they took off and rejoined command module pilot Michael A. Collins back in lunar orbit for the three-day trip home—as world heroes.

  SEE ALSO Wright Brothers’ Airplane (1903), First Humans in Space (1961), Saturn V Rocket (1967).

  ABOVE: Buzz Aldrin’s bootprint in the fine, powdery lunar soil. BELOW: Apollo 11 astronaut Aldrin unloads scientific equipment from the lunar module Eagle at the landing site in Mare Tranquillitatis (photo taken by Neil Armstrong).

  1972

  Genetic Engineering • Marshall Brain

  Paul Berg (b. 1926)

  When we think about engineering, we generally think about creating a new object: a new building, a new device, a new mechanism. Genetic engineering is a different type of endeavor. Here we are taking an existing system that is quite complicated and that we do not completely understand—a genome—and we are tinkering with it. Genetic engineers add new genes to a genome to create new behaviors.

  The predecessor to genetic engineering is selective breeding, in which breeders select for desired traits over many generations. With selective breeding we have created all the various breeds of dogs.

  But genetic engineering, which appeared in 1972 when American biochemist Paul Berg created the first recombinant DNA molecules, is something altogether different: engineers inject new genes into genomes in ways nature could never accomplish. Berg, for example, combined the DNA of two viruses. More recent applications include adding a jellyfish gene that produces a green fluorescent protein to a fish or a mouse, creating fluorescent animals, or adding a gene that makes a plant immune to an herbicide to soybean plants so that the herbicide won’t kill them.

  In one of the most bizarre examples, genetic engineers took genes for producing spider silk and added them to goats, so that spider-silk proteins appear in the milk of the female goats. The goal was to extract the proteins to create super-strong, highly elastic materials.

  There are different ways to inject the genes of one organism into another, and the gene gun is one popular tool. The technique is so simple, it is amazing that it works. The gene to be injected is added, in liquid form, to tiny particles of tungsten or gold. The particles are shot out of a gun, shotgun style, at a petri dish full of target cells. Some of the cells get punctured, but not killed, and they pick up the new gene.

  By injecting genes from one organism into another, genetic engineers are able to create new organisms. One of the most beneficial examples is human insulin produced by E. coli bacteria. Developed in the 1980s, genetically engineered insulin is used by millions of people today.

  SEE ALSO Mendel’s Genetics (1865), Chromosomal Theory of Inheritance (1902), HeLa cells (1951), DNA Structure (1953), Epigenetics (1983).

  GloFish®, the first genetically modified organism (GMO) designed as a pet (a fluorescent fish), was first sold in the United States in December 2003.

  1975

  Feigenbaum Constant • Clifford A. Pickover

  Mitchell Jay Feigenbaum (b. 1944)

  Simple formulas can produce amazingly diverse and chaotic behaviors while characterizing phenomena ranging from the rise and fall of animal populations to the behavior of certain electronic circuits. One formula of special interest is the logistic map, which models population growth; it was popularized by biologist Robert May in 1976 and builds on the earlier work of Belgian mathematician Pierre François Verhulst (1804–1849), who researched models of population change. The formula may be written as x_{n+1} = r x_n (1 − x_n), where x_n represents the population at time n. The population is defined relative to the maximum population size of the ecosystem and therefore takes values between 0 and 1. Depending on the value of r, which controls the rate of growth and starvation, the population may undergo many behaviors. For example, as r is increased, the population may converge to a single value, or bifurcate so that it oscillates between two values, then between four values, then eight values, and finally become chaotic, such that slight changes in the initial population yield very different, unpredictable outcomes.
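
  A few lines of Python make these regimes easy to see. This is a minimal sketch of the logistic map itself; the particular r values below are chosen only to illustrate a fixed point, a 2-cycle, a 4-cycle, and chaos.

```python
def logistic_orbit(r, x0=0.5, warmup=500, keep=8):
    """Iterate x_{n+1} = r * x_n * (1 - x_n), discard the transient,
    and return the values the population settles into."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, logistic_orbit(r))
# r = 2.8: a single steady value; r = 3.2: oscillation between 2 values;
# r = 3.5: a 4-cycle; r = 3.9: chaos, with no repeating pattern
```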

  The ratio of the distances between two successive bifurcation intervals approaches the Feigenbaum constant, 4.6692016091..., a number discovered by American mathematical physicist Mitchell Feigenbaum in 1975. Interestingly, although Feigenbaum initially considered this constant for a map similar to the logistic map, he also showed that it applied to all one-dimensional maps of this kind. This means that multitudes of chaotic systems will bifurcate at the same rate, and thus his constant can be used to predict when chaos will be exhibited in systems. This kind of bifurcation behavior has been discovered in many physical systems before they enter the chaotic regime.
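
  The convergence can be checked with a short calculation. The sketch below assumes the standard published period-doubling points of the logistic map and prints the ratios of successive bifurcation intervals.

```python
# Period-doubling points r_n of the logistic map, where the stable cycle
# length doubles (standard values from the literature, to 6 decimals).
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

# Ratios of the widths of successive bifurcation intervals.
for n in range(len(r) - 2):
    delta = (r[n + 1] - r[n]) / (r[n + 2] - r[n + 1])
    print(f"delta_{n + 1} = {delta:.4f}")
# -> 4.7515, 4.6562, 4.6684, tending toward 4.6692016091...
```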

  Feigenbaum quickly realized that his “universal constant” was important, remarking that “I called my parents that evening and told them that I had discovered something truly remarkable, that, when I had understood it, would make me a famous man.”

  SEE ALSO Cellular Automata (1952), Chaos and the Butterfly Effect (1963), Fractals (1975).

  Bifurcation diagram (rotated clockwise by 90º), by Steven Whitney. This figure reveals the incredibly rich behavior of a simple formula as a parameter r is varied. Bifurcation “pitchforks” can be seen as small, thin, light branching curves amidst the chaos.

  1975

  Fractals • Clifford A. Pickover

  Benoît B. Mandelbrot (1924–2010)

  Today, computer-generated fractal patterns are everywhere. From squiggly designs on computer art posters to illustrations in the most serious of physics journals, interest continues to grow among scientists and, rather surprisingly, artists and designers. The word fractal was coined in 1975 by mathematician Benoît Mandelbrot to describe an intricate-looking set of curves, many of which were never seen before the advent of computers with their ability to quickly perform massive calculations. Fractals often exhibit self-similarity, which suggests that various exact or inexact copies of an object can be found in the original object at smaller size scales. The detail continues for many magnifications—like an endless nesting of Russian dolls within dolls. Some of these shapes exist only in abstract geometric space, but others can be used as models for complex natural objects such as coastlines and blood vessel branching. The dazzling computer-generated images can be intoxicating, motivating students’ interest in math more than any other mathematical discovery in the last century.

  Physicists are interested in fractals because they can sometimes describe the chaotic behavior of real-world phenomena such as planetary motion, fluid flow, the diffusion of drugs, the behavior of inter-industry relationships, and the vibration of airplane wings. (Chaotic behavior often produces fractal patterns.) Traditionally, when physicists or mathematicians saw complicated results, they often looked for complicated causes. In contrast, many fractal shapes reveal the fantastically complicated behavior of the simplest formulas.
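
  The most celebrated example is the set Mandelbrot studied, produced by iterating the simple formula z → z² + c in the complex plane. The following minimal Python sketch, with grid bounds and iteration limit chosen only for a readable text rendering, prints a coarse ASCII image of it.

```python
MAX_ITER = 40

def in_mandelbrot(c: complex) -> bool:
    """Iterate z -> z*z + c from z = 0; the point c counts as inside
    the set if |z| never exceeds 2 within MAX_ITER steps."""
    z = 0
    for _ in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Coarse ASCII rendering: rows scan the imaginary axis, columns the real axis.
for row in range(21):
    im = 1.2 - row * 0.12
    print("".join("#" if in_mandelbrot(complex(-2.1 + col * 0.05, im)) else " "
                  for col in range(64)))
```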

  Early explorers of fractal objects include Karl Weierstrass, who in 1872 considered functions that were everywhere continuous but nowhere differentiable, and Helge von Koch, who in 1904 discussed geometric shapes such as the Koch snowflake. In the nineteenth and early twentieth centuries, several mathematicians explored fractals in the complex plane; however, they could not fully appreciate or visualize these objects without the aid of the computer.

  SEE ALSO Descartes’ La Géométrie (1637), Pascal’s Triangle (1654), Chaos and the Butterfly Effect (1963).

  Fractal structure by Jos Leys. Fractals often exhibit self-similarity, which suggests that various structural themes are repeated at different size scales.

  1977

  Public-Key Cryptography • Clifford A. Pickover

  Ronald Lorin Rivest (b. 1947), Adi Shamir (b. 1952), Leonard Max Adleman (b. 1945), Bailey Whitfield Diffie (b. 1944), Martin Edward Hellman (b. 1945), Ralph C. Merkle (b. 1952)

  Throughout history, cryptologists have sought to invent a means for sending secret messages without the use of cumbersome code books that contained encryption and decryption keys that could easily fall into enemy hands. For example, the Germans, between 1914 and 1918, lost four code books that were recovered by British intelligence services. The British code-breaking unit, known as Room Forty, deciphered German communications, giving Allied forces a crucial strategic advantage in World War I.

  In order to solve the key management problem, in 1976, Whitfield Diffie, Martin Hellman, and Ralph Merkle at Stanford University, California, worked on public-key cryptography, a mathematical method for distributing coded messages through the use of a pair of cryptographic keys: a public key and a private key. The private key is kept secret, while, remarkably, the public key may be widely distributed without any loss of security. The keys are related mathematically, but the private key cannot be derived from the public key by any practical means. A message encrypted with the public key can be decrypted only with the corresponding private key.

  To better understand public-key encryption, imagine a mail slot in the front door to a home. Anyone on the street can stuff something into the mail slot; the public key is akin to the house address. However, only the person who possesses the key to the house door can retrieve the mail and read it.
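
  The key-exchange idea published by Diffie, Hellman, and Merkle can be shown with a toy numeric example. The numbers below are deliberately tiny and purely illustrative; real systems use moduli hundreds of digits long.

```python
# Toy Diffie-Hellman key exchange (illustrative numbers only).
p, g = 23, 5              # public: a small prime modulus and a generator
a, b = 6, 15              # private: Alice's and Bob's secret exponents

A = pow(g, a, p)          # Alice announces A = g^a mod p  (here 8)
B = pow(g, b, p)          # Bob announces B = g^b mod p    (here 19)

shared_by_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_by_bob   = pow(A, b, p)   # Bob computes (g^a)^b mod p
assert shared_by_alice == shared_by_bob == 2   # same secret, never transmitted
```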

  In 1977, MIT scientists Ronald Rivest, Adi Shamir, and Leonard Adleman suggested that large prime numbers could be used to guard the messages. Multiplication of two large prime numbers is easy for a computer, but the reverse process of finding the two original prime numbers given their product can be very difficult. It should be noted that computer scientists had also developed public-key encryption for British intelligence at an earlier date; however, this work was kept secret for reasons of national security.
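
  A toy version of the Rivest–Shamir–Adleman (RSA) scheme shows how prime numbers guard a message. The primes below are classic textbook toy values; real keys use primes so large that factoring their product is infeasible.

```python
# Toy RSA (textbook numbers; Python 3.8+ for the modular inverse).
p, q = 61, 53                  # the two secret primes
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: 2753

message = 65
cipher = pow(message, e, n)    # anyone may encrypt with the public key (e, n)
plain = pow(cipher, d, n)      # only the private key (d, n) can decrypt
assert plain == message
```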

  SEE ALSO Sieve of Eratosthenes (c. 240 BCE), Proof of the Prime Number Theorem (1896), ENIAC (1946).

  Enigma machine, used to code and decode messages before the age of modern cryptography. The Nazis used Enigma-produced ciphers, which had several weaknesses, such as the fact that the messages could be decoded if a code book was captured.

  1978

  Theory of Mind • Wade E. Pickren

  David Premack (1925–2015)

  The ability to imagine what other people feel or think and to respond accordingly is one of the most important accomplishments of social development. Modern developmental science has been intensively studying this ability in infants, children, chimpanzees, and even rodents for about thirty years.

  Developmental psychologists call this ability theory of mind (ToM). It is a principle that can be found in several of the world’s major religions, but in psychology, David Premack and Guy Woodruff offered one of the first full expressions of ToM in 1978. Formally, theory of mind refers to children’s understanding that others also have thoughts, beliefs, objectives, and emotions. Without ToM, a child cannot pick up on the social cues or intentions of others, as is often the case for children with autism.

  Theory of mind is a developmental process that, in normally developing children, is usually fully in place by about age four or five. Scientists have found that the critical precursors of ToM occur as early as seven to nine months as the infant learns that the attention of others can be directed by simple tasks like pointing or reaching. By the end of the first year, infants are beginning to understand that people have intentions. But it is not until about age four or five that children truly understand that there is a link between how others feel or think and what they do.

  Neuroscientists using brain-imaging techniques have shown that this is exactly the age when the prefrontal cortex of the brain is rapidly maturing. For children with autism, this is not the case, although there are interventions that can help improve the brain’s responses in children with autism.

  Theory of mind is crucially important for displaying empathy and caring for others. It makes it possible for us to be socially competent. Research on ToM has greatly facilitated our understanding of children’s social development, with implications for emotional and cognitive development. It is also a principle that facilitated the scientific reception of the discovery of mirror neurons.

  SEE ALSO The Principles of Psychology (1890), Psychoanalysis (1899), Classical Conditioning (1903), Placebo Effect (1955), Cognitive Behavior Therapy (1963).

  By age four or five, normal children learn that a person’s actions link to what they feel and think—a crucial step in the development of empathy and social competence.

  1979

  Gravitational Lensing • Jim Bell

  One of the fundamental features of physicist Albert Einstein’s early-twentieth-century theory of general relativity is that space and time are curved near extremely massive objects. The curvature of space-time led Einstein and others to predict that light from distant objects would be bent by the gravitational field of massive foreground objects. The prediction was verified in 1919 by the British astrophysicist Arthur Stanley Eddington, who noticed that stars observed near the Sun during a solar eclipse were slightly out of position. Einstein continued to study this effect in the 1930s, and he and others, including the Swiss-American astronomer Fritz Zwicky, speculated that more massive objects, such as galaxies and clusters of galaxies, could bend and amplify light from distant objects much as a lens bends and magnifies light.
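
  The size of the effect Eddington measured follows from Einstein’s deflection formula θ = 4GM/(c²b), where b is how closely the light ray passes the mass. A quick calculation with rounded constants recovers the famous 1.75 arcseconds for starlight grazing the Sun’s limb.

```python
import math

# Deflection of light grazing a mass: theta = 4 * G * M / (c**2 * b).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.96e8      # solar radius, m (the impact parameter b)

theta = 4 * G * M_sun / (c**2 * R_sun)    # radians
print(theta * 180 / math.pi * 3600)       # ~1.75 arcseconds
```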

  It took many decades for astronomers to find observational evidence of such gravitational lensing, however. The first example was discovered in 1979 by astronomers at the Kitt Peak National Observatory in Arizona, who found an example of what appeared to be twin quasars—two active galactic nuclei very close to each other in the sky. The two quasars were shown to actually be a single object whose light was bent and split into two parts by the strong gravitational field of a foreground galaxy.

 
