The Science Book


by Clifford A. Pickover


  SEE ALSO Kinetic Theory (1859), Electron (1897), Atomic Nucleus (1911), Quarks (1964).

  LEFT: Engraving of John Dalton, by William Henry Worthington (c. 1795–c. 1839). RIGHT: According to atomic theory, all matter is composed of atoms. Pictured here is a hemoglobin molecule with atoms represented as spheres. This protein is found in red blood cells.

  1812

  Laplace’s Théorie Analytique des Probabilités • Clifford A. Pickover

  Pierre-Simon, Marquis de Laplace (1749–1827)

  The first major treatise on probability that combines probability theory and calculus was French mathematician and astronomer Pierre-Simon Laplace’s Théorie Analytique des Probabilités (Analytical Theory of Probabilities). Probability theorists focus on random phenomena. Although a single roll of the dice may be considered a random event, after numerous repetitions, certain statistical patterns become apparent, and these patterns can be studied and used to make predictions.

  The first edition of Laplace’s Théorie Analytique was dedicated to Napoleon Bonaparte and discusses methods of finding probabilities of compound events from component probabilities. The book also discusses the method of least squares and Buffon’s Needle and considers many practical applications.

  Stephen Hawking calls Théorie Analytique a “masterpiece” and writes, “Laplace held that because the world is determined, there can be no probabilities in things. Probability results from our lack of knowledge.” According to Laplace, nothing would be “uncertain” for a sufficiently advanced being—a conceptual model that remained strong until the rise of quantum mechanics and chaos theory in the twentieth century.

  To explain how probabilistic processes can yield predictable results, Laplace asks readers to imagine several urns arranged in a circle. One urn contains only black balls, while another contains only white balls. The other urns have various ball mixtures. If we withdraw a ball, place it in the adjacent urn, and continue around the circle, eventually the ratio of black to white balls will be approximately the same in all of the urns. Here, Laplace shows how random “natural forces” can create results that display predictability and order. Laplace writes, “It is remarkable that this science, which originated in the consideration of games of chance, should become the most important object of human knowledge. . . . The most important questions in life are, for the most part, really only problems of probability.” Other famous probabilists include Gerolamo Cardano (1501–1576), Pierre de Fermat (1601–1665), Blaise Pascal (1623–1662), and Andrey Nikolaevich Kolmogorov (1903–1987).
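
  A short simulation makes the urn argument concrete. What follows is a minimal sketch in Python; the number of urns, their starting contents, and the step count are made-up illustrative choices, and nothing of the sort appears in Laplace's treatise.

```python
import random

# A minimal sketch of Laplace's circular-urn thought experiment.
# Urn contents and step count are illustrative, not Laplace's own.
urns = [
    {"black": 100, "white": 0},   # one urn all black
    {"black": 0, "white": 100},   # one urn all white
    {"black": 70, "white": 30},   # the rest are mixtures
    {"black": 30, "white": 70},
    {"black": 50, "white": 50},
]

def draw_and_pass(i):
    """Withdraw one random ball from urn i and place it in the next urn."""
    urn = urns[i]
    total = urn["black"] + urn["white"]
    color = "black" if random.random() < urn["black"] / total else "white"
    urn[color] -= 1
    urns[(i + 1) % len(urns)][color] += 1

for step in range(200_000):           # go around the circle many times
    draw_and_pass(step % len(urns))

for i, urn in enumerate(urns):
    frac = urn["black"] / (urn["black"] + urn["white"])
    print(f"urn {i}: fraction black = {frac:.3f}")  # all settle near 0.5
```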

  SEE ALSO Development of Modern Calculus (1665), Law of Large Numbers (1713), Normal Distribution Curve (1733).

  Laplace felt it was remarkable that probability, which originated in analysis of games of chance, should become “the most important object of human knowledge . . .”

  1822

  Babbage Mechanical Computer • Clifford A. Pickover

  Charles Babbage (1792–1871), Augusta Ada King, Countess of Lovelace (1815–1852)

  Charles Babbage was an English analyst, statistician, and inventor who was also interested in the topic of religious miracles. He once wrote, “Miracles are not a breach of established laws, but . . . indicate the existence of far higher laws.” Babbage argued that miracles could occur in a mechanistic world. Just as Babbage could imagine programming strange behaviors on his calculating machines, God could program similar irregularities in nature. While investigating biblical miracles, he suggested that the chance of a man rising from the dead is one in 10¹².

  Babbage is often considered the most important mathematician-engineer involved in the prehistory of computers. In particular, he is famous for conceiving an enormous hand-cranked mechanical calculator, an early progenitor of our modern computers. Babbage thought the device would be most useful in producing mathematical tables, but he worried about mistakes that would be made by humans who transcribed the results from its 31 metal output wheels. Today, we realize that Babbage was around a century ahead of his time and that the politics and technology of his era were inadequate for his lofty dreams.

  Babbage’s Difference Engine, begun in 1822 but never completed, was designed to compute values of polynomial functions, using about 25,000 mechanical parts. He also had plans to create a more general-purpose computer, the Analytical Engine, which could be programmed using punch cards and had separate areas for number storage and computation. Estimates suggest that an Analytical Engine capable of storing 1,000 50-digit numbers would be more than 100 feet (about 30 meters) in length. Ada Lovelace, the daughter of the English poet Lord Byron, gave specifications for a program for the Analytical Engine. Although Babbage provided assistance to Ada, many consider Ada to be the first computer programmer.
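
  The principle that made such a machine feasible is the method of finite differences: the n-th differences of a degree-n polynomial are constant, so after a short setup every further table entry can be produced by additions alone, one addition per register per turn of the crank. The sketch below mimics that process in Python; the example polynomial is an arbitrary illustration, not one of Babbage's tables.

```python
# A minimal sketch of the method of finite differences, the
# additions-only scheme the Difference Engine mechanized.

def tabulate(f, start, count, degree):
    """Tabulate f at start, start+1, ... using only additions after setup."""
    # Seed: the first degree+1 values, computed directly.
    row = [f(start + i) for i in range(degree + 1)]
    # Leading entry of each forward-difference column.
    diffs = []
    for _ in range(degree + 1):
        diffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    values = []
    for _ in range(count):
        values.append(diffs[0])
        # One "crank turn": each register adds the register below it.
        for k in range(degree):
            diffs[k] += diffs[k + 1]
    return values

# f(x) = 2x^2 + 3x + 5, a degree-2 polynomial:
print(tabulate(lambda x: 2 * x * x + 3 * x + 5, 0, 6, 2))
# [5, 10, 19, 32, 49, 70] -- matches direct evaluation, additions only
```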

  In 1990, novelists William Gibson and Bruce Sterling wrote The Difference Engine, which asked readers to imagine the consequences of Babbage’s mechanical computers becoming available to Victorian society.

  SEE ALSO Slide Rule (1621), ENIAC (1946), ARPANET (1969).

  Working model of a portion of Charles Babbage’s Difference Engine, currently located at the London Science Museum.

  1824

  Carnot Engine • Clifford A. Pickover

  Nicolas Léonard Sadi Carnot (1796–1832)

  Much of the initial work in thermodynamics—the study of the conversion of energy between work and heat—focused on the operation of engines and how fuel, such as coal, could be efficiently converted to useful work by an engine. Sadi Carnot is often considered the “father” of thermodynamics, thanks to his 1824 work Réflexions sur la puissance motrice du feu (Reflections on the Motive Power of Fire).

  Carnot worked tirelessly to understand heat flow in machines partly because he was disturbed that British steam engines seemed to be more efficient than French engines. During his day, steam engines usually burned wood or coal in order to convert water into steam. The high-pressure steam moved the pistons of the engine. When the steam was released through an exhaust port, the pistons returned to their original positions. A cool radiator converted the exhaust steam to water, so it could be heated again to steam in order to drive the pistons.

  Carnot imagined an ideal engine, known today as the Carnot engine, that would theoretically have a work output equal to its heat input, losing not even a small amount of energy during the conversion. After experiments, Carnot realized that no device could perform in this ideal manner—some energy had to be lost to the environment. Energy in the form of heat could not be converted completely into mechanical energy. However, Carnot did help engine designers improve their engines so that the engines could work close to their peak efficiencies.
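
  In modern terms, Carnot's conclusion is usually stated as an efficiency bound: an ideal engine running between a hot reservoir at absolute temperature Th and a cold one at Tc converts at most a fraction 1 − Tc/Th of its heat input into work. The sketch below simply evaluates that bound; the formula is the later, standard statement of Carnot's result rather than his original wording, and the temperatures are made-up examples.

```python
# A minimal sketch of the modern statement of Carnot's result: no engine
# operating between a hot and a cold reservoir can beat the efficiency
# 1 - T_cold / T_hot (temperatures in kelvins).

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on the fraction of heat input convertible to work."""
    return 1.0 - t_cold_k / t_hot_k

# A boiler at roughly 450 K exhausting to 300 K surroundings:
print(f"{carnot_efficiency(450.0, 300.0):.0%}")  # 33% at the very best
```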

  Carnot was interested in “cyclical devices,” which absorb or reject heat at various parts of their cycles; it is impossible to make such an engine 100 percent efficient. This impossibility is yet another way of stating the Second Law of Thermodynamics. Sadly, in 1832 Carnot contracted cholera and, by order of the health office, nearly all his books, papers, and other personal belongings had to be burned!

  SEE ALSO Second Law of Thermodynamics (1850), Steam Turbine (1890), Internal Combustion Engine (1908).

  LEFT: An 1813 portrait of Sadi Carnot. RIGHT: Locomotive steam engine. Carnot worked to understand heat flow in machines, and his theories have relevance to this day. During his time, steam engines usually burned wood or coal.

  1824

  Greenhouse Effect • Clifford A. Pickover

  Joseph Fourier (1768–1830), John Tyndall (1820–1893), Svante August Arrhenius (1859–1927)

  “Despite all its bad press,” write authors Joseph Gonzalez and Thomas Sherer, “the process known as the greenhouse effect is a very natural and necessary phenomenon. . . . The atmosphere contains gases that enable sunlight to pass through to the earth’s surface but hinder the escape of reradiated heat energy. Without this natural greenhouse effect, the earth would be much too cold to sustain life.” Or, as Carl Sagan once wrote, “A little greenhouse effect is a good thing.”

  Generally speaking, the greenhouse effect is the heating of the surface of a planet as a result of atmospheric gases that absorb and emit infrared radiation, or heat energy. Some of the energy reradiated by the gases escapes into outer space; another portion is reradiated back toward the planet. Around 1824, mathematician Joseph Fourier wondered how the Earth stays sufficiently warm to support life. He proposed that although some heat does escape into space, the atmosphere acts a little like a translucent dome—a glass lid of a pot, perhaps—that absorbs some of the heat of the Sun and reradiates it downward to the Earth.
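
  A standard back-of-the-envelope energy balance makes Fourier's puzzle quantitative, though it relies on the Stefan-Boltzmann law, which came decades after Fourier. Balancing absorbed sunlight against blackbody radiation gives a greenhouse-free Earth a temperature of only about 255 K; the roughly 33-kelvin gap to the observed mean surface temperature is the natural greenhouse effect. The sketch below uses commonly quoted round figures for the solar constant and albedo.

```python
# A minimal sketch of the zero-dimensional energy-balance estimate
# (a later calculation using the Stefan-Boltzmann law, not Fourier's own).

SOLAR_CONSTANT = 1361.0  # sunlight at Earth's orbit, W/m^2
ALBEDO = 0.3             # fraction of sunlight reflected back to space
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight, spread over the whole sphere, must balance the
# heat the planet radiates as a blackbody at temperature T:
#     SOLAR_CONSTANT * (1 - ALBEDO) / 4 = SIGMA * T**4
T = (SOLAR_CONSTANT * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"{T:.0f} K")  # ~255 K (about -18 C); the observed mean surface
                     # temperature is ~288 K, and the ~33 K gap is
                     # the natural greenhouse effect
```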

  In 1863, British physicist and mountaineer John Tyndall reported on experiments that demonstrated that water vapor and carbon dioxide absorbed substantial amounts of heat. He concluded that water vapor and carbon dioxide must therefore play an important role in regulating the temperature at the Earth’s surface. In 1896, Swedish chemist Svante Arrhenius showed that carbon dioxide acts as a very strong “heat trap” and that halving the amount in the atmosphere might trigger an ice age. Today we use the term anthropogenic global warming to denote an enhanced greenhouse effect due to human contributions to greenhouse gases, such as the burning of fossil fuels.

  Aside from water vapor and carbon dioxide, methane from cattle belching can also contribute to the greenhouse effect. “Cattle belching?” Thomas Friedman writes. “That’s right—the striking thing about greenhouse gases is the diversity of sources that emit them. A herd of cattle belching can be worse than a highway full of Hummers.”

  SEE ALSO Conservation of Energy (1843), Internal Combustion Engine (1908), Photosynthesis (1947).

  LEFT: “Coalbrookdale by Night” (1801), by Philip James de Loutherbourg (1740–1812), showing the Madeley Wood Furnaces, a common symbol of the early Industrial Revolution. RIGHT: Large changes in manufacturing, mining, and other activities since the Industrial Revolution have increased the amount of greenhouse gases in the air. For example, steam engines, fueled primarily by coal, helped to drive the Industrial Revolution.

  1825

  Ampère’s Law of Electromagnetism • Clifford A. Pickover

  André-Marie Ampère (1775–1836), Hans Christian Ørsted (1777–1851)

  By 1825, French physicist André-Marie Ampère had established the foundation of electromagnetic theory. The connection between electricity and magnetism was largely unknown until 1820, when Danish physicist Hans Christian Ørsted discovered that a compass needle moves when an electric current is switched on or off in a nearby wire. Although not fully understood at the time, this simple demonstration suggested that electricity and magnetism were related phenomena, a finding that led to various applications of electromagnetism and eventually culminated in telegraphs, radios, TVs, and computers.

  Subsequent experiments during a period from 1820 to 1825 by Ampère and others showed that any conductor that carries an electric current I produces a magnetic field around it. This basic finding, and its various consequences for conducting wires, is sometimes referred to as Ampère’s Law of Electromagnetism. For example, a current-carrying wire produces a magnetic field B that circles the wire. (The use of bold signifies a vector quantity.) B has a magnitude that is proportional to I, and points along the circumference of an imaginary circle of radius r centered on the axis of the long, straight wire. Ampère and others showed that electric currents attract small bits of iron, and Ampère proposed a theory that electric currents are the source of magnetism.
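
  For the long, straight wire, this proportionality takes a simple closed form, B = μ0I/(2πr), where μ0 is the permeability of free space. The sketch below evaluates it for arbitrary illustrative values of current and distance, just to show the scale of the field.

```python
import math

# A minimal sketch of the field magnitude Ampere's Law gives for a long
# straight wire: B = mu_0 * I / (2 * pi * r).

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def field_around_wire(current_a, radius_m):
    """Magnitude of B on a circle of radius r centered on the wire."""
    return MU_0 * current_a / (2 * math.pi * radius_m)

# 10 A of current, measured 2 cm from the wire:
print(f"{field_around_wire(10.0, 0.02):.1e} T")  # 1.0e-04 T
```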

  Readers who have experimented with electromagnets, which can be created by wrapping an insulated wire around a nail and connecting the ends of the wire to a battery, have experienced Ampère’s Law first hand. In short, Ampère’s Law expresses the relationship between the magnetic field and the electric current that produces it.

  Additional connections between magnetism and electricity were demonstrated by the experiments of American scientist Joseph Henry (1797–1878), British scientist Michael Faraday (1791–1867), and James Clerk Maxwell. French physicists Jean-Baptiste Biot (1774–1862) and Félix Savart (1791–1841) also studied the relationship between electrical current in wires and magnetism. A religious man, Ampère believed that he had proven the existence of the soul and of God.

  SEE ALSO Coulomb’s Law of Electrostatics (1785), Faraday’s Laws of Induction (1831), Maxwell’s Equations (1861).

  LEFT: Engraving of André-Marie Ampère by A. Tardieu (1788–1841). RIGHT: Electric motor with exposed rotor and coil. Electromagnets are widely used in motors, generators, loudspeakers, particle accelerators, and industrial lifting magnets.

  1827

  Brownian Motion • Clifford A. Pickover

  Robert Brown (1773–1858), Jean-Baptiste Perrin (1870–1942), Albert Einstein (1879–1955)

  In 1827, Scottish botanist Robert Brown was using a microscope to study pollen grains suspended in water. Particles within the vacuoles of the pollen grains seemed to dance about in a random fashion. In 1905, Albert Einstein explained the movement of such small particles by suggesting that they were constantly being buffeted by water molecules. At any instant in time, just by chance, more molecules would strike one side of the particle than another, thereby causing the particle to momentarily move slightly in a particular direction. Using statistical rules, Einstein demonstrated that this Brownian motion could be explained by random fluctuations in such collisions. Moreover, from this motion, one could determine the dimensions of the hypothetical molecules that were bombarding the macroscopic particles.
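
  The statistical picture is easy to reproduce in a toy model: let each particle take many small random kicks and watch the mean squared displacement grow in proportion to elapsed time, the signature of diffusion. The sketch below is a one-dimensional illustration with arbitrary step sizes and particle counts, not Einstein's derivation.

```python
import random

# A minimal sketch of the statistical picture behind Einstein's analysis:
# a particle taking many tiny random kicks.

def mean_squared_displacement(num_steps, num_particles=5_000):
    total = 0.0
    for _ in range(num_particles):
        x = 0.0
        for _ in range(num_steps):
            x += random.choice((-1.0, 1.0))  # one random molecular kick
        total += x * x
    return total / num_particles

for steps in (100, 200, 400):
    print(steps, round(mean_squared_displacement(steps)))
# <x^2> grows in proportion to the number of steps: doubling the
# "time" doubles the mean squared displacement -- the diffusion law.
```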

  In 1908, French physicist Jean-Baptiste Perrin confirmed Einstein’s explanation of Brownian motion. As a result of Einstein and Perrin’s work, physicists were finally compelled to accept the reality of atoms and molecules, a subject still ripe for debate even at the beginning of the twentieth century. In concluding his 1909 treatise on this subject, Perrin wrote, “I think that it will henceforth be difficult to defend by rational arguments a hostile attitude to molecular hypotheses.”

  Brownian motion gives rise to diffusion of particles in various media and is so general a concept that it has wide applications in many fields, ranging from the dispersal of pollutants to the understanding of the relative sweetness of syrups on the surface of the tongue. Diffusion concepts help us understand the effect of pheromones on ants or the spread of muskrats in Europe following their accidental release in 1905. Diffusion laws have been used to model the concentration of smokestack contaminants and to simulate the displacement of hunter-gatherers by farmers in Neolithic times. Researchers have also used diffusion laws to study diffusion of radon in the open air and in soils contaminated with petroleum hydrocarbons.

  SEE ALSO Atomic Theory (1808), Kinetic Theory (1859), Boltzmann’s Entropy Equation (1875).

  Scientists used Brownian motion and diffusion concepts to model muskrat propagation. In 1905, five muskrats were introduced to Prague from the U.S. By 1914, their descendants had spread 90 miles in all directions. In 1927, they numbered over 100 million.

  1828

  Germ-Layer Theory of Development • Michael C. Gerald with Gloria E. Gerald

  Karl Ernst von Baer (1792–1876), Christian Heinrich Pander (1794–1865), Robert Remak (1815–1865), Hans Spemann (1869–1941)

  Caspar Friedrich Wolff provided evidence supporting the epigenetic theory of generation—namely that, after conception, each individual begins as an undifferentiated mass in the egg and gradually differentiates and grows. Wolff’s theory (1759) was largely disregarded by the scientific community; however, during the following century, it was revisited and served as the foundation for the germ-layer theory.

  In 1815, the Estonian-born Karl Ernst von Baer attended the University of Würzburg, where he was introduced to the new field of embryology. His anatomy professor encouraged him to pursue research on chick embryo development, but, unable to pay for the eggs or to hire an attendant to watch the incubators, he turned the project over to his more affluent friend Christian Heinrich Pander, who identified three distinct regions in the chick embryo.

  Von Baer extended Pander’s findings in 1828 to show that all vertebrate embryos have three concentric germ layers. In 1842, the Polish-German embryologist Robert Remak provided microscopic evidence for the existence of these layers and designated them by names still in use. The ectoderm, or outermost layer, develops into the skin and nerves; the endoderm, the innermost layer, gives rise to the digestive system and lungs; and the mesoderm, lying between these layers, produces the blood, heart, kidneys, gonads, bones, and connective tissues. It was subsequently determined that while all vertebrates exhibit bilateral symmetry and have three germ layers, animals that display radial symmetry (such as hydra and sea anemones) have two layers, while only the sponge has a single germ layer.

  Von Baer proposed other principles in embryology: General features of a large group of animals appear earlier in development than the specialized features seen in a smaller group. For example, all vertebrates begin development with skin that differentiates into scales in fish and reptiles, feathers in birds, and hair and fur in mammals. In 1924, Hans Spemann’s discovery of embryonic induction explained how groups of cells form particular tissues and organs.

  SEE ALSO Discovery of Sperm (1678), Cell Division (1855), Epigenetics (1983).

 
