Studies conducted at the California Institute of Technology during the 1960s provided greater insight into brain lateralization (functional specialization). The left and right cerebral hemispheres (sides) of the brain are almost identical in appearance, yet they carry out very different functions. The two hemispheres normally communicate with each other through a thick band of nerve fibers called the corpus callosum. Beginning in the 1940s, surgeons severed large portions of this band to treat severe epilepsy, producing so-called split-brain patients; these operations are now rare. The psychobiologist Roger Sperry and his graduate student Michael Gazzaniga tested the functioning of each hemisphere independently of the other in split-brain humans and monkeys. In about 1964, they found that while each hemisphere was able to learn, one hemisphere had no perception of what the other had learned or experienced.
The results of these studies led to the conclusion that the left and right hemispheres are specialized for different functions. The left hemisphere is primarily concerned with analytical, verbal, and language-processing tasks, while the right handles the senses, creativity, feelings, and facial recognition. Sperry was awarded the 1981 Nobel Prize in Physiology or Medicine for his split-brain discoveries.
Individuals are often characterized as being left-brain or right-brain thinkers. Left-brain persons are said to be more logical, fact-oriented, linear thinkers, and concerned with structure and reasoning, while those labeled as being right-brained are said to be feelings-oriented, intuitive, creative, and musical. Although this makes for interesting conversation at cocktail parties, there is no compelling anatomical or physiological evidence to support these labels, and most scientists regard this characterization as a myth.
SEE ALSO Cerebral Localization (1861), Neuron Doctrine (1891), Psychoanalysis (1899).
The left brain is said to control analytical, structured thinking, while the right brain is believed to influence creativity. This left-brain, right-brain distinction is popular but has generally been discounted by neuroscientists.
1964
Quarks • Clifford A. Pickover
Murray Gell-Mann (b. 1929), George Zweig (b. 1937)
Welcome to the particle zoo. In the 1960s, theorists realized that patterns in the relationships between various elementary particles, such as protons and neutrons, could be understood if these particles were not actually elementary but rather were composed of smaller particles called quarks.
Six types, or flavors, of quarks exist and are referred to as up, down, charm, strange, top, and bottom. Only the up and down quarks are stable, and they are the most common in the universe. The other, heavier quarks are produced in high-energy collisions. (Note that another class of particles called leptons, which includes electrons, is not composed of quarks.)
Quarks were independently proposed by physicists Murray Gell-Mann and George Zweig in 1964, and, by 1995, particle-accelerator experiments had yielded evidence for all six quarks. Quarks have fractional electric charge; for example, the up quark has a charge of +2/3, and the down quark has a charge of −1/3. The neutron (which has no charge) is formed from two down quarks and one up quark, while the proton (which is positively charged) is composed of two up quarks and one down quark. Quarks are tightly bound together by a powerful short-range force called the color force, which is mediated by force-carrying particles called gluons. The theory that describes these strong interactions is called quantum chromodynamics. Gell-Mann coined the word quark for these particles after the whimsical line in James Joyce's Finnegans Wake, “Three quarks for Muster Mark!”
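The charge arithmetic is easy to check. Here is a minimal Python sketch; the flavor names and fractional charges come straight from the text above, and everything else is illustrative:

```python
from fractions import Fraction

# Electric charge of each quark flavor, in units of the elementary charge e.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def hadron_charge(quarks):
    """Sum the fractional quark charges to get the hadron's total charge."""
    return sum(CHARGE[q] for q in quarks)

# A proton is two ups and one down; a neutron is one up and two downs.
print("proton :", hadron_charge(["up", "up", "down"]))    # -> 1 (positive)
print("neutron:", hadron_charge(["up", "down", "down"]))  # -> 0 (neutral)
```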
Right after the Big Bang, the universe was filled with a quark-gluon plasma, because the temperature was too high for hadrons (particles such as protons and neutrons) to form. Authors Judy Jones and William Wilson write, “Quarks pack a mean intellectual wallop. They imply that nature is three-sided. . . . Specks of infinity on the one hand, building blocks of the universe on the other, quarks represent science at its most ambitious—also its coyest.”
SEE ALSO Electron (1897), Neutron (1932), Quantum Electrodynamics (1948), Standard Model (1961).
Scientists used the photograph (left) of particle trails in a Brookhaven National Laboratory bubble chamber as evidence for the existence of a charmed baryon (a three-quark particle). A neutrino enters the picture from below (dashed line in right figure) and collides with a proton to produce additional particles that leave behind trails.
1965
Cosmic Microwave Background • Clifford A. Pickover
Arno Allan Penzias (b. 1933), Robert Woodrow Wilson (b. 1936)
The cosmic microwave background (CMB) is electromagnetic radiation filling the universe, a remnant of the dazzling “explosion” from which our universe evolved during the Big Bang 13.7 billion years ago. As the universe cooled and expanded, the wavelengths of its high-energy photons (originally in the gamma-ray and X-ray portions of the electromagnetic spectrum) were stretched, shifting the radiation toward lower-energy microwaves.
Around 1948, cosmologist George Gamow and colleagues suggested that this microwave background radiation might be detectable, and in 1965 physicists Arno Penzias and Robert Wilson of Bell Telephone Laboratories in New Jersey measured a mysterious excess microwave noise associated with a thermal radiation field at a temperature of about −454 °F (3 K). After checking for various possible causes of this background “noise,” including pigeon droppings in their large outdoor detector, they determined that they were really observing the most ancient radiation in the universe, providing evidence for the Big Bang model. Note that because photons take time to reach the Earth from distant parts of the universe, whenever we look outward in space, we are also looking back in time.
More precise measurements were made by the COBE (Cosmic Background Explorer) satellite, launched in 1989, which determined a temperature of −454.75 °F (2.735 K). COBE also allowed researchers to measure small fluctuations in the intensity of the background radiation, which correspond to the seeds of structures, such as galaxies, in the universe.
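Why does a radiation field at just a few kelvins show up as microwaves? Wien's displacement law gives the peak wavelength of thermal (blackbody) radiation. Here is a quick sketch using the temperatures quoted above; the constant is standard physics, not from the text:

```python
# Wien's displacement law: lambda_peak = b / T,
# where b ~ 2.898e-3 m*K is Wien's displacement constant.
WIEN_B = 2.898e-3  # m*K

for label, temp_k in [("Penzias & Wilson (~3 K)", 3.0), ("COBE (2.735 K)", 2.735)]:
    peak_mm = WIEN_B / temp_k * 1000  # meters -> millimeters
    print(f"{label}: peak wavelength ~ {peak_mm:.2f} mm")  # ~1 mm: microwaves
```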
Luck matters for scientific discoveries. Author Bill Bryson writes, “Although Penzias and Wilson had not been looking for the cosmic background radiation, didn’t know what it was when they had found it, and hadn’t described or interpreted its character in any paper, they received the 1978 Nobel Prize in physics.” Connect an antenna to an analog TV and tune it to a channel with no broadcast: “about 1 percent of the dancing static you see is accounted for by this ancient remnant of the Big Bang. The next time you complain that there is nothing on, remember you can always watch the birth of the universe.”
SEE ALSO Telescope (1608), Electromagnetic Spectrum (1864), X-rays (1895), Hubble’s Law of Cosmic Expansion (1929), Cosmic Inflation (1980).
The Horn reflector antenna at Bell Telephone Laboratories in Holmdel, New Jersey, was built in 1959 for pioneering work related to communication satellites. Penzias and Wilson discovered the cosmic microwave background using this instrument.
1966
Dynamic RAM • Marshall Brain
Robert Dennard (b. 1932)
Every computer needs RAM, or random access memory. The computer’s central processing unit (CPU) needs a place to store its programs and data so it can access them quickly—at the same pace that its clock is operating. For each instruction the CPU executes, it must fetch that instruction from RAM. The CPU also moves data to and from RAM.
Imagine you are an engineer looking at computer memory options in the late 1960s. There are two possibilities. The first is core memory, which is made by weaving tiny ferrite donuts into a wire mesh. The problems with core memory are many: it is expensive, heavy, and enormous. The second possibility is static RAM made from standard transistor circuits. It takes several transistors for each memory bit, and given the state of integrated circuits at the time, it is not possible to put much memory on a chip.
But in 1966, American electrical engineer Robert Dennard, working for IBM, tried something different in the interest of reducing the number of transistors and fitting more memory cells on a chip. He explored the idea of dynamic RAM, using a capacitor to store each bit of data. When the capacitor is charged, it represents a 1; when discharged, it represents a 0. On the surface this seems ridiculous, because capacitors leak. If you store a 1 in memory made of capacitors and do nothing, the capacitor will leak and forget the 1 in less than a tenth of a second.
But the advantage is that this approach greatly reduces the number of transistors, and therefore increases the number of memory cells on a chip. To solve the leakage problem, all of the capacitors are read periodically (for example, every few milliseconds) and rewritten, refilling every leaking capacitor that holds a 1 with a full charge. Because the memory must be dynamically refreshed to keep the capacitors charged, this approach is known as dynamic RAM (DRAM); the first commercial DRAM chips appeared in 1970.
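To make the refresh idea concrete, here is a toy Python simulation; the leak rate, sense threshold, and refresh interval are made-up illustrative numbers, not real device parameters:

```python
# Toy model of a DRAM row: each "cell" is a capacitor holding a charge level
# between 0.0 (empty) and 1.0 (full). A cell reads as a 1 while its charge
# stays above the sense threshold, and every cell leaks a little each tick.
LEAK_PER_TICK = 0.02     # charge lost per time step (illustrative)
THRESHOLD = 0.5          # sense amplifier's 1-vs-0 cutoff (illustrative)
REFRESH_INTERVAL = 20    # refresh every N ticks (illustrative)

def read_bit(charge):
    return 1 if charge >= THRESHOLD else 0

cells = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]  # stored pattern: 1 0 1 1 0 1

for tick in range(1, 101):
    cells = [max(0.0, c - LEAK_PER_TICK) for c in cells]  # capacitors leak
    if tick % REFRESH_INTERVAL == 0:
        # Refresh: read every cell and rewrite it at full strength.
        cells = [1.0 if read_bit(c) else 0.0 for c in cells]

print([read_bit(c) for c in cells])  # -> [1, 0, 1, 1, 0, 1], data preserved
```

Without the periodic rewrite, every stored 1 in this model would fade below the threshold after about 25 ticks; with it, the pattern survives indefinitely.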
The dynamic RAM approach yields memory cells that are so much smaller, and therefore less expensive, than static RAM that every desktop, laptop, tablet, and smartphone today uses DRAM. It is a great example of the way engineers can reduce costs by embracing ideas that may seem ridiculous at first.
SEE ALSO Slide Rule (1621), Babbage Mechanical Computer (1822), ENIAC (1946), Transistor (1947).
Pictured: Dynamic SDRAM (synchronous dynamic random-access memory) for a computer.
1967
Endosymbiont Theory • Michael C. Gerald with Gloria E. Gerald
Konstantin Mereschkowski (1855–1921), Lynn Margulis (1938–2011)
The endosymbiont theory helps us understand evolution because it explains the origin of organelles in eukaryotic cells—those in plants, animals, fungi, and protists. Symbiosis, which occurs at all levels of biological organization, involves two organisms that cooperate for their mutual benefit to gain a competitive advantage—for example, insect pollination of flowers or the digestion of food by gut bacteria. In eukaryotic cells, mitochondria and chloroplasts are organelles involved in the generation of energy required to carry out cell functions. Mitochondria, the site of cellular respiration, use oxygen to break down organic molecules to form ATP (adenosine triphosphate), while chloroplasts in plants—the sites of photosynthesis—use energy derived from the sun to synthesize glucose from carbon dioxide and water.
ADDING ONE ORGANELLE AT A TIME. According to the endosymbiont theory, small bacteria (alphaproteobacteria), the forerunners of mitochondria, were engulfed by primitive eukaryotic cells (protists). In the ensuing symbiotic relationship, the engulfed bacterium (now called the endosymbiont) evolved into the mitochondrion, the generator of energy, while the host cell offered protection and nutrients. By an analogous process, a eukaryotic cell engulfed a photosynthetic cyanobacterium that, in time, evolved into a chloroplast. In this description of primary endosymbiosis, one living organism has been engulfed by another. When the product of this primary endosymbiosis is itself engulfed by another eukaryote, secondary endosymbiosis is said to have occurred. This provides the basis for incorporating additional organelles and expands the number of environments in which eukaryotes can survive.
The endosymbiotic theory was first proposed in 1905 for chloroplasts by the Russian botanist Konstantin Mereschkowski (who rejected Darwin’s theory of evolution and actively promoted eugenics); the idea was expanded to include mitochondria in 1920. Endosymbiotic theory gained no scientific traction until 1967, when it was reintroduced by Lynn Margulis, a biology professor at the University of Massachusetts, Amherst (and former wife of the late astronomer Carl Sagan). Her paper was rejected by fifteen journals before being accepted, and is now considered a milestone in endosymbiont theory.
SEE ALSO Natural Selection (1859), Cellular Respiration (1937), Photosynthesis (1947).
This image depicts the symbiosis between a fly agaric mushroom (Amanita muscaria) and a birch tree. The mushroom receives sugar (C6H12O6) and oxygen from the tree in exchange for minerals and carbon dioxide.
1967
Heart Transplant • Clifford A. Pickover
James D. Hardy (1918–2003), Christiaan Neethling Barnard (1922–2001), Robert Koffler Jarvik (b. 1946)
Journalist Laura Fitzpatrick writes, “For much of recorded history, many doctors saw the human heart as the inscrutable, throbbing seat of the soul, an agent too delicate to meddle with.” However, heart transplantation—in which the damaged heart of a recipient is replaced with the healthy heart of a deceased donor—became a possibility after the 1953 invention of the heart-lung machine, a device that could temporarily bypass the heart and lungs during surgery and ensure adequate oxygenation of the blood.
In 1964, American surgeon James Hardy performed the first heart transplant when he transplanted the heart of a chimpanzee into the chest of a dying man (no human heart was available). The animal heart beat inside the patient but was too small to keep him alive, and he died after 90 minutes. The world’s first successful human-to-human heart transplant took place in 1967, when South African surgeon Christiaan Barnard removed the heart of a young woman who was killed in a car accident. The recipient was Louis Washkansky, a 54-year-old man who suffered from heart disease. A day later, he was awake and talking. He lived for 18 days and then succumbed to pneumonia caused by the immunosuppressive drugs he was taking to combat rejection of the foreign organ tissue.
Organ transplants became much more successful after the 1972 discovery of cyclosporine, a compound derived from a fungus that suppresses organ rejection while allowing a significant portion of the body’s immune system to function normally and fight general infection. The prognosis for heart transplant patients was no longer so bleak. For example, although an extreme case, American Tony Huesman survived for 31 years with a transplanted heart. Today, organs that can be transplanted include the heart, kidneys, liver, lungs, pancreas, and intestines. In 1982, the first permanent artificial heart, designed by American researcher Robert Jarvik, was implanted in a patient.
SEE ALSO Paré’s “Rational Surgery” (1545), Circulatory System (1628), Blood Transfusion (1829), Artificial Heart (1982).
Artwork titled “Transplants, Resurrection, and Modern Medicine.” A creative artist depicts the human heart growing from a tree, symbolizing the rejuvenation of life provided to recipients of donor hearts—as well as the “miracle” of modern transplant surgery.
1967
Saturn V Rocket • Marshall Brain
The Saturn V rocket screams “engineering!” From its ridiculous size to its brute-force power to the mission it helped accomplish, it is the most amazing rocket ever created. It holds a number of records, including its 260,000-pound (117,934-kilogram) payload capacity to low Earth orbit.
How could engineers create something this stupendous given the technology available at the time? Most engineers were still using slide rules as this rocket was conceptualized. And how did they create it so quickly? In 1957, the United States had never successfully launched anything into orbit. Yet in 1967 this colossus headed into orbit with ease.
One key was the F-1 engine. Engineers started its development for an Air Force project several years before NASA even existed—a fortunate coincidence. It meant the engine was tested and running smoothly before it was actually needed. The F-1 is the largest single engine ever created, with a thrust rating of 1.5 million pounds (6.8 meganewtons). Putting five F-1s together on the first stage created 7.6 million pounds of thrust—a good thing, because, fully fueled and loaded with payload, the whole rocket weighed about 6.5 million pounds (3,000,000 kg). To launch the rocket, each engine burned almost one million pounds (450,000 kg) of kerosene and liquid oxygen in less than 3 minutes.
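Those two numbers explain why liftoff was possible but stately: thrust exceeded weight by only about 17 percent. Here is a quick back-of-the-envelope check in Python, using just the figures quoted above:

```python
G0 = 9.81  # standard gravity, m/s^2

thrust_lbf = 7.6e6  # five F-1 engines, pounds of thrust
weight_lb = 6.5e6   # fully fueled rocket plus payload, pounds

ratio = thrust_lbf / weight_lb  # thrust-to-weight ratio at ignition
accel = G0 * (ratio - 1)        # net upward acceleration at liftoff

print(f"thrust/weight = {ratio:.2f}")                            # -> 1.17
print(f"liftoff accel ~ {accel:.1f} m/s^2 ({ratio - 1:.2f} g)")  # -> ~1.7 m/s^2
```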
Once the first stage fell away, the remainder of the Saturn V was 5 million pounds (2.3 million kg) lighter, and the second and third stages took the payload the rest of the way to orbit burning liquid hydrogen and liquid oxygen.
At the top of the third stage rested an essential component—a ring called the Instrument Unit. At almost 22 feet (7 meters) in diameter, 3 feet (1 meter) high, and weighing 2 tons, this ring contained the computers, radios, monitoring equipment, radar, batteries, and other systems needed to control the three stages during flight and to communicate with ground control. The microprocessor did not exist yet, so the brain here was a custom-made, triple-redundant IBM minicomputer.
Engineers created this disposable behemoth to send men to the moon, and then used it to loft Skylab as well. A true engineering wonder.
SEE ALSO Wright Brothers’ Airplane (1903), First Humans in Space (1961), First on the Moon (1969).
The Apollo 4 (Spacecraft 017/Saturn 501) space mission was launched from the Kennedy Space Center, Florida.
1969
ARPANET • Marshall Brain
Donald Davies (1924–2000), Paul Baran (1926–2011), Lawrence Roberts (b. 1937)
In the 1950s, there were only a few hundred computers in the world, but by the 1960s companies were selling thousands of them. The minicomputer was born in 1965 with the PDP-8, produced by the Digital Equipment Corporation (DEC).
What if you wanted to use a computer? You needed a terminal and a dedicated communication line. To use two computers, you needed two terminals and two lines. People therefore started to consider connecting computers together in networks, so that one connection could provide access to many machines. Electrical engineers created hardware that allowed voice signals to be digitized and then sent as digital data. The T1 line, invented in 1961, could carry 1.5 million bits per second—enough bandwidth for 24 phone calls. Once phone lines could carry digital data, two things could happen: computers could connect to one another, and new services could be implemented that took advantage of the computers and the connections between them. By organizing all of this, engineers gave birth to the Internet.
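The arithmetic behind “enough bandwidth for 24 phone calls” is worth spelling out. Here is a minimal sketch; the 8 kHz, 8-bit voice sampling and the per-frame framing bit are standard T1 parameters, not stated in the text:

```python
# A digitized phone call (one DS0 channel) samples the voice signal
# 8,000 times per second, at 8 bits per sample.
SAMPLES_PER_SEC = 8_000
BITS_PER_SAMPLE = 8
CHANNELS = 24            # calls multiplexed onto one T1 line
FRAMING_BPS = 8_000      # one framing bit per 193-bit frame, 8,000 frames/s

per_call_bps = SAMPLES_PER_SEC * BITS_PER_SAMPLE   # 64,000 bit/s per call
payload_bps = per_call_bps * CHANNELS              # 1,536,000 bit/s
total_bps = payload_bps + FRAMING_BPS              # 1,544,000 bit/s

print(f"one call : {per_call_bps:,} bit/s")
print(f"24 calls : {payload_bps:,} bit/s")
print(f"T1 total : {total_bps:,} bit/s")  # the ~1.5 Mbit/s quoted above
```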