Genetics
At the dawn of the 20th century, the life sciences made explosive progress. For one, Mendel’s work in genetics was rediscovered in 1900, and by 1910 biologists had (correctly) deduced that the postulated genes were located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA).
In the 1940s, American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. From these experiments, it grew increasingly clear that DNA is the chemical that makes up genes and thus the key to heredity.
After American biologist James Watson and British biophysicist Francis Crick finally established the structure of DNA in 1953, geneticists were at last able to map heredity in chemical terms. Since then, progress in this field has been nothing short of astounding. Not only have scientists identified the complete genome, or genetic catalog, of the human body, but they now also know how individual genes are activated and what effects they have in the body.
In the bio-labs, genes can now be transferred from one species to another, side-stepping the normal processes of heredity and so creating hybrid organisms unknown in the natural world. This, of course, has sparked, and continues to spark, wide controversy, since there is no way of telling which guns are loaded and which are not.
Medicine
As we entered the 20th century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but also by a dietary deficiency of certain substances we now call vitamins, a very healthy discovery that has also spawned its own industry.
In 1909 German bacteriologist Paul Ehrlich introduced the world’s first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient’s cells as well.
Then, following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine’s chemical armory, making the fight against bacterial infection almost a routine matter.
While these antibiotics cannot act against viruses, ever since Pasteur developed his vaccines (a weakened strain of the pathogen injected to stimulate the body’s own production of effective antibodies) in the 1880s, vaccines have been used to great effect to prevent some of the deadliest viral diseases.
Smallpox, once a worldwide killer, was pronounced completely eradicated by the late 1970s, and the United States saw the number of polio cases drop from 38,000 a year in the 1950s to fewer than 10 a year by the 21st century thanks to effective vaccines.
By the middle of the 20th century, scientists believed they were well on the way to treating, preventing, or eradicating most of the deadly infectious diseases that had plagued humankind for centuries. By the 1980s, however, the medical community’s confidence in its ability to control infectious diseases was shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacterial strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause hemorrhagic fevers and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS)—a disease for which no definitive cure has yet been found.
In other fields of medicine, new imaging techniques, including magnetic resonance imaging and computed tomography, revolutionized the diagnosis of disease. Scientists also tried to cure some diseases by gene therapy, in which normal or genetically altered genes are inserted into a patient’s cells to replace nonfunctional or missing genes. The jury is still out on this controversial approach.
Improved drugs and new tools have also turned surgical operations once considered impossible into routine procedures.
For instance, drugs that suppress the immune system now enable the transplant of organs or tissues with a reduced risk of rejection. Endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery.
Advances in high-speed fiber-optic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as telemedicine, this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.
Social Sciences
During the 20th century, the social sciences emerged from relative obscurity to become prominent fields of research.
Early in the century, the Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the “Copernicus of the mind.”
In 1948 the American biologist Alfred Kinsey published Sexual Behavior in the Human Male, which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.
The 20th century also saw dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution.
A new and surprising source of anthropological information arrived with studies of the DNA in mitochondria—cell structures that provide energy to fuel the cell’s activities. Mitochondrial DNA has since been used not only to track certain genetic diseases but also to trace the ancestry of a variety of organisms, including humans.
Technology
Some claim, and I don’t think incorrectly, that the modern technological era started in 1901 when the Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean, a feat for which he received the Nobel Prize in 1909.
Hard on those heels, in 1906, the American inventor Lee De Forest invented what is properly called the triode, but which everyone calls the vacuum tube. The vacuum tube was to become one of the key components, if not the key component, in nearly all early radio, radar, television, and computer systems.
Some two decades later, in the mid-1920s, the Scottish engineer John Logie Baird developed the Baird Televisor, a primitive television that provided the first transmission of a recognizable moving image. Building on such early demonstrations, during the next decade or so, the American electronics engineer Vladimir Kosma Zworykin improved the television’s picture and reception to the point where it began to resemble the television we are now used to, and so laid the foundation for a news and entertainment industry that has only grown since then.
Shortly before the Second World War, in 1935, the British physicist Sir Robert Watson-Watt used radio waves to reflect from (and so locate) aircraft in flight. Radar signals, as they were soon called, have since been reflected from the moon, planets, and stars to learn their distance from Earth and to track their movements.
Shortly after the Second World War, in the now famous Bell Laboratories in New Jersey, the American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current—pretty much what the triode did. However, the transistor was much smaller and far less expensive to manufacture than the triode. It also required less power to operate and was considerably more reliable than the triode, which by now could see the writing on the wall, for since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.
Mid-century, the transistor found another home: computers were now built using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC)—first introduced in 1946 by the American physicist John W. Mauchly and the American electrical engineer John Presper Eckert, Jr.—used as many as 18,000 triodes and filled a large room.
The transistor changed all that by sparking microminiaturization, in which individual electronic circuits are reduced to microscopic, if not atomic, size. This trend drastically reduced not only the computer’s size and cost, but also its power requirements, which eventually led to electronic circuits with processing speeds measured in billions of computations per second.
By the early 1970s, continued miniaturization led to the first microprocessor, which in essence is a computer on a chip. Combined with other specialized chips, the microprocessor became the central arithmetic and logic unit of a computer smaller in size than a portable typewriter.
By the 1990s, with their small size and a price less than that of a used car, these personal computers were many times more powerful than the physically huge, multimillion-dollar computers of the 1950s.
By now, these computers—faithfully adhering to Moore’s law, which states that the number of transistors on an integrated circuit will double roughly every two years while the cost per transistor falls—have shrunk to the size of a small wallet, and could be made a lot smaller were that a practical way to go. Rather than reducing the size further, however, the gains have gone into power: our current handheld computers, such as the Mortimer, match the computing power of, say, one hundred mainframe computers of the 1960s.
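For the mathematically inclined, the law amounts to simple exponential growth. Taking a two-year doubling period as the baseline (the exact figure quoted varies from source to source), the transistor count N after t years is roughly

N(t) = N_0 \cdot 2^{t/2}

so that over twenty years the count grows by a factor of 2^{10}, roughly a thousandfold.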
Today, computers are, of course, used by virtually everyone on our planet, not least to interface with worldwide communications networks, such as the Internet and the World Wide Web, to send and receive e-mail, to shop, or to find information on just about any subject.
Space Exploration
The early 1950s saw increasing public interest in space exploration. Some say that the seminal event, the event that sparked the space age, was the International Geophysical Year from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the Earth’s near-space environment. During, and as part of, this study, both the United States and the then Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary, exploratory, purposes.
The Soviet Union, much to the embarrassment of the United States, beat its American rivals to the punch: the launch of the first Sputnik satellite in 1957 spurred the United States to intensify its own space exploration efforts, and so, in 1958, the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight.
NASA then went on to design, manufacture, test, and eventually use the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969, and during the 1960s and 1970s, also designed and built the first robotic space probes to explore the planets Mercury, Venus, and Mars.
NASA then focused its efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 this space shuttle, along with Soyuz, its Russian counterpart, enabled the construction of the International Space Station.
Quantum Physics
It was in the year 1900 that the German physicist Max Planck proposed the (at the time) sensational idea that energy is always given off in set amounts, or quanta. Five years later, Albert Einstein successfully used quanta to explain the photoelectric effect, which is the release of electrons when metals are bombarded by light.
This, together with Einstein’s special and general theories of relativity, challenged some of the most fundamental assumptions of the Newtonian era.
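The quantum idea itself can be put in two compact formulas, given here in their usual modern notation rather than in Planck’s or Einstein’s own: the energy of a single quantum of light of frequency \nu is

E = h\nu

and the maximum kinetic energy of an electron ejected from a metal in the photoelectric effect is

E_{max} = h\nu - \phi

where h is Planck’s constant and \phi is the energy needed to free an electron from that particular metal; it is the light’s frequency, not its intensity, that decides whether any electrons are released at all.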
Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales, explaining how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds.
In fact, quantum theory deals with a world where the attributes of any single particle can never be fully known, an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927.
But while the subatomic level appears a sea of uncertainty, quantum physics does successfully predict the overall outcome of subatomic events, a fact that definitely relates it to the macroscopic world—that is, the one in which we live.
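Heisenberg’s principle, too, has a precise modern form:

\Delta x \, \Delta p \ge \hbar / 2

where \Delta x and \Delta p are the uncertainties in a particle’s position and momentum, and \hbar is Planck’s constant divided by 2\pi; the more precisely the one is pinned down, the less precisely the other can be known.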
In 1934, the Italian physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as trans-uranium elements.
In 1939, however, some of his fellow physicists demonstrated that in these experiments Fermi had not formed heavier elements, but had instead managed to split the uranium atom’s nucleus, a feat that eventually led to the harnessing of nuclear fission both as an energy source and as a weapon.
These experiments and studies, along with the development of particle accelerators in the 1950s, initiated a long expedition into the nature of subatomic particles, a journey that continues today.
Scientists now know that, far from being indivisible, atoms are made up of at least 12 fundamental particles known as quarks and leptons, which combine in different ways to constitute all matter currently known.
Cosmology
The advances in particle physics are closely linked to similar advances in cosmology. From the 1920s onward, when the American astronomer Edwin Hubble showed that the universe is indeed expanding, cosmologists have sought to rewind the clock and so determine how the universe began.
Today, most scientists hold that our universe started with a cosmic explosion (the Big Bang) sometime between 10 and 20 billion years ago (most subscribe to approximately 13 billion years), but the scientific jury is still out as to the exact sequence of events surrounding the birth of the universe.
The Path of Science
I believe the Greeks, more than the Mesopotamians, the Egyptians, or the Chinese, took the right approach when they treated Truth as Truth, and let their curiosity take them wherever it would.
However—and as this whirlwind journey through science history shows—when Science finally gained traction in the West, she was pragmatism and profit personified, and had morphed from being curious about the Earth to a “What have you done for us lately?” mentality which today has spiraled, and continues to spiral, out of control.
That said, let us take a look at the next path.
:: Philosophy ::
I believe a good way to approach philosophy—by now a vast subject—is to break it down into its three main wellsprings: that of the West (Europe, and, later on, America), that of China, and that of India.
Western Philosophy
Philosophy (again, from the Greek philosophia, “love of wisdom”), is defined as the rational and critical inquiry into basic principles, and is often divided into four main branches: metaphysics, the investigation of ultimate reality; epistemology, the study of the origins, validity, and limits of knowledge; ethics, the study of the nature of morality and judgment; and aesthetics, the study of the nature of beauty in the fine arts.
However, as practiced by the ancient Greeks, the term philosophy meant the pursuit of knowledge of whatever kind, for its own sake. At that time, philosophy comprised all areas of speculative thought and included not only the arts, but sciences and religion as well.
Later, as special methods and principles were developed in the various areas of knowledge, each acquired its own philosophical aspect, giving rise to the separate cognitive disciplines of art, of science, and of religion.
Greek Philosophy
As I briefly touched upon earlier, western philosophy is generally considered to have begun in ancient Greece as speculation about the underlying nature of the physical world. In its earliest form it was indistinguishable from natural science. Unfortunately, the writings of the earliest philosophers are no longer available to us, except for a few fragments cited by Aristotle (who did have access to them) in the 4th century BCE and by other writers of later times.
The Ionian School
The first Western philosopher of any historical record was Thales, who lived in the 6th century BCE in Miletus, a city on the Ionian coast of Asia Minor. Thales, who was revered by later generations as one of the Seven Wise Men of Greece, was curious about most things, especially astronomical, physical, and meteorological phenomena.
His scientific investigations and speculations led to the postulate that all natural phenomena are but different forms of one fundamental substance. This fundamental substance he believed to be water. Why water? He had observed water’s evaporation and condensation and assumed this to be a universal process, involving all forms and substances.
Anaximander was a disciple of Thales, and a bright one at that. He concluded that the first principle from which all things evolve is an intangible, invisible, infinite substance that he called apeiron, “the boundless.” This substance, he maintained, is eternal and indestructible. Out of its ceaseless motion continuously evolve the more familiar substances, such as warmth, cold, earth, air, and fire, generating in turn the various objects and organisms that make up the recognizable world.
Anaximander also taught that life as we know it began in water, and that humans originated from fish, predating Darwin by a good two thousand years.
The third great Ionian philosopher of the 6th century BCE, Anaximenes, returned to Thales’ assumption that the primary substance must be something familiar and material, but he held this substance to be air rather than water.
He also held that the observable changes things undergo could be explained by rarefaction (thinning) and condensation (solidification) of air. Thus Anaximenes was, in fact, the first philosopher to explain observable differences in quality in terms of differences in size or quantity, a method that later became fundamental to physical science.
As a whole, the Ionian school took the initial, and radical, step from mythological to scientific explanation of natural phenomena. It also laid the groundwork for the important scientific principles of the permanence of substance, the natural evolution of the world, and—rightly or wrongly—the reduction of quality to quantity.