A Brief History of Science with Levity


by Mike Bennett


  This energy includes a contribution from the Casimir effect, namely from quantum fluctuations in the string. The size of this contribution depends on the number of dimensions, since for a larger number of dimensions there are more possible fluctuations in the string position. Therefore, the photon in flat spacetime will be massless – and the theory consistent – only for a particular number of dimensions. When the calculation is done, the critical dimensionality turns out not to be four, as one might expect (three axes of space and one of time).

  The subset of X is equal to the relation of photon fluctuations in a linear dimension. Flat space string theories are twenty-six-dimensional in the bosonic case, while superstring and M-theories turn out to involve ten or eleven dimensions for flat solutions. In bosonic string theories, the twenty-six dimensions come from the Polyakov equation. Starting from any dimension greater than four, it is necessary to consider how the extra dimensions are reduced to the four we observe.
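
  For the curious, the counting behind the twenty-six can be sketched in a few lines. This is the standard textbook light-cone argument with zeta-function regularisation, quoted here as a hedged aside rather than anything derived in this book: the D − 2 transverse oscillation directions (D being the number of spacetime dimensions) each contribute half a quantum per mode, the divergent sum over modes is assigned the value ζ(−1) = −1/12, and the first excited, photon-like state is massless only when the resulting constant equals one.

    % Regularised zero point energy of the D-2 transverse string oscillators:
    E_0 \;=\; \frac{D-2}{2}\sum_{n=1}^{\infty} n
        \;\longrightarrow\; \frac{D-2}{2}\,\zeta(-1)
        \;=\; -\,\frac{D-2}{24}
    % The first excited ("photon-like") state is massless only if:
    \frac{D-2}{24} \;=\; 1
        \quad\Longrightarrow\quad
        D \;=\; 26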

  Gravitons are postulated because of the great success of quantum field theory (in particular, the Standard Model) at modelling the behaviour of all other known forces of nature as being mediated by elementary particles: electromagnetism by the photon, the strong interaction by the gluons and the weak interaction by the W and Z bosons. The hypothesis is that the gravitational interaction is likewise mediated by another elementary particle, dubbed the graviton. In the classical limit, the theory would reduce to general relativity and conform to Newton’s law of gravitation in the weak field limit.

  However, attempts to extend the Standard Model with gravitons have run into serious theoretical difficulties at high energies (processes with energies close to or above the Planck scale) because of infinities arising due to quantum effects. Since classical general relativity and quantum mechanics are incompatible at such energies, from a theoretical point of view the present situation is not tenable. Some proposed models of quantum gravity attempt to address these issues, but these are speculative theories.

  As recently as the 19th century, many people thought that it would be impossible to determine the chemical composition of the stars. Since then, physicists have proved them wrong by using spectroscopy.

  The word spectrum is used today to mean a display of electromagnetic radiation as a function of wavelength. A spectrum originally meant a phantom or apparition, but Isaac Newton introduced a new meaning in 1671, when he reported his experiment of decomposing white sunlight into colours using a prism. Several related words, such as spectroscopy (the study of spectra) and spectrograph, have since been introduced into the English language.

  You can be a spectroscopist (a person who studies spectra) as well. When you see a rainbow, or use a prism on a beam of sunlight to project a band of colours onto a screen or a wall, it will probably appear as if the change of colours is gradual, and the change in intensity of the light of different colours is also gradual. We use the word continuum to describe spectra that change gradually like this.

  There are also discrete features, called emission lines or absorption lines depending on whether they are brighter or fainter than the neighbouring continuum. You can use a prism on candlelight or some special light bulbs to observe these effects.

  Most bright astronomical objects shine because they are hot. In such cases, the continuum they emit tells us what the temperature is. Here is a very rough guide; a short numerical sketch follows the table.

  Temperature (K)    Predominant Radiation    Astronomical examples
  600                Infrared                 Planets and warm dust
  6,000              Optical                  Photosphere of stars
  60,000             UV                       Photosphere of hot stars
  600,000            Soft X-rays              Corona of the sun
  6,000,000          X-rays                   Coronae of active stars
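
  The rough mapping in the table follows from Wien's displacement law: a black body at temperature T radiates most strongly near a wavelength of about 2.9 × 10⁻³ metre-kelvin divided by T. The short sketch below simply evaluates this for the temperatures in the table; the band boundaries it uses are my own rough, illustrative choices and are not part of the original discussion.

    # Illustrative sketch: Wien's displacement law, lambda_peak = b / T.
    WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvin

    def peak_wavelength_m(temperature_k):
        """Wavelength (metres) at which a black body of this temperature peaks."""
        return WIEN_B / temperature_k

    def rough_band(wavelength_m):
        """Very rough band labels; the boundaries are arbitrary illustrative choices."""
        nm = wavelength_m * 1e9
        if nm > 700:
            return "infrared"
        if nm > 400:
            return "optical"
        if nm > 10:
            return "ultraviolet"
        if nm > 1:
            return "soft X-rays"
        return "X-rays"

    for t in (600, 6_000, 60_000, 600_000, 6_000_000):
        lam = peak_wavelength_m(t)
        print(f"{t:>9} K -> peak near {lam * 1e9:10.2f} nm ({rough_band(lam)})")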

  We can learn a lot more from the spectral lines than from the continuum, and can actually determine the chemical composition of stars.

  During the first half of the 19th century, scientists such as John Herschel, Fox Talbot and William Swan studied the spectra of different chemical elements in flames. Gradually, the idea that each element produces a set of characteristic emission lines was established. Each element has several prominent, and many lesser, emission lines in a characteristic pattern. Sodium, for example, has two prominent yellow lines (the so-called D lines) at 589.0 and 589.6 nm – any sample that contains sodium (such as table salt) can be easily recognised using this pair of lines. All of the elements have this type of unique “bar code”.
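
  As a toy illustration of the "bar code" idea, line identification amounts to comparing measured wavelengths against a reference list for each element. In the sketch below, only the sodium D wavelengths come from the text; the other reference lines, the tolerance and the function names are illustrative stand-ins rather than a real spectroscopic database.

    # Toy "bar code" matching: which reference element best fits a set of
    # measured emission-line wavelengths (in nm)?
    REFERENCE_LINES = {
        "sodium":   [589.0, 589.6],          # the D lines quoted in the text
        "hydrogen": [656.3, 486.1, 434.0],   # Balmer lines (approximate values)
        "helium":   [587.6, 667.8, 501.6],   # approximate values
    }

    def count_matches(measured_nm, reference_nm, tolerance_nm=0.3):
        """How many measured lines fall within the tolerance of a reference line."""
        return sum(
            any(abs(m - ref) <= tolerance_nm for ref in reference_nm)
            for m in measured_nm
        )

    def identify(measured_nm):
        """Return the reference element with the most matched lines, plus the count."""
        scores = {el: count_matches(measured_nm, lines)
                  for el, lines in REFERENCE_LINES.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]

    print(identify([589.0, 589.6]))   # ('sodium', 2) - e.g. table salt in a flame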

  Joseph Fraunhofer was the most famous, and probably also the most important, contributor to studies in this field; he revealed absorption lines (dark lines against the brighter continuum). The precise origin of these “Fraunhofer lines”, as we call them today, remained in doubt for many years until discoveries made by Gustav Kirchhoff. He announced that the same substance can either produce emission lines (when a hot gas is emitting its own light) or absorption lines (when light from a brighter, and usually hotter, source is shone through it). Now scientists had the means to determine the chemical composition of stars through spectroscopy.

  One of the most dramatic triumphs of early spectroscopy during the 19th century was the discovery of helium. An emission line at 587.6 nm was first observed in the spectrum of solar prominences during the eclipse of 18th August 1868, although the precise wavelength was difficult to establish at the time because the observation was brief and made with temporary setups of instruments transported to Asia.

  Later, Norman Lockyer used a new technique and managed to observe solar prominences without waiting for an eclipse. He noted a line at the same wavelength (587.6 nm) which matched no known element, and concluded that it must belong to a new one, which he named helium. Today, from the data collected by our advanced space telescopes, we know that helium is the second most abundant element in the universe. We also now know that the most abundant element is hydrogen.

  However, this fact was not obvious at first. Many years of both observational and theoretical work culminated when Cecilia Payne published her PhD thesis entitled Stellar Atmospheres. In this early work, she utilised many excellent spectra taken by Harvard observers, and measured the intensities of 134 different lines from eighteen different elements. She applied the then up-to-date theory of spectral line formation and found that the chemical compositions of stars were probably all similar, with temperature being the important factor in creating their diverse appearances. She was then able to estimate the abundances of seventeen of the elements relative to the eighteenth, silicon. Hydrogen appeared to be more than a million times more abundant than silicon, a conclusion so unexpected that it took many years to become widely accepted.

  In such an analysis of chemical abundances, the wavelength of each line is treated as fixed. However, this is not true when the star is moving toward us (the lines are observed at shorter wavelengths, or “blue-shifted”, compared to those measured in the laboratory) or moving away from us (observed at longer wavelengths, or “red-shifted”). This is the phenomenon of “Doppler shift”.

  If the spectrum of a star is red- or blue-shifted, then you can use that to infer its velocity along the line of sight. Such radial velocity studies have had at least three important applications in astrophysics.
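
  For shifts that are small compared with the speed of light, the radial velocity follows directly from the fractional wavelength shift, v ≈ c (λ_observed − λ_rest) / λ_rest. A minimal sketch of that calculation, with made-up example numbers:

    # Radial velocity from the Doppler shift of one spectral line,
    # valid only for speeds much smaller than the speed of light.
    C_KM_S = 299_792.458  # speed of light, km/s

    def radial_velocity_km_s(observed_nm, rest_nm):
        """Positive means receding (red-shifted), negative means approaching."""
        return C_KM_S * (observed_nm - rest_nm) / rest_nm

    # Hypothetical measurement of the sodium D1 line in a stellar spectrum:
    print(radial_velocity_km_s(observed_nm=589.12, rest_nm=589.0))  # about +61 km/s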

  The first is the study of binary star systems. The component stars in a binary revolve around each other. If you measure the radial velocities over one cycle (or more) of the binary, you can relate them back to the gravitational pull using Newton’s equations of motion (or their astrophysical application, Kepler’s laws).

  If you have additional information, such as from observations of eclipses, then you can sometimes measure the masses of the stars accurately. Eclipsing binaries, in which you can see the spectral lines of both stars, have played a crucial role in establishing the masses and the radii of different types of stars.
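
  One standard way this is put into practice is the “mass function” of a spectroscopic binary, a textbook combination of the orbital period P, the radial-velocity semi-amplitude K of one star and the orbital eccentricity e, which constrains the companion’s mass and the orbital inclination. The sketch below simply evaluates that relation for invented numbers; it is not a method taken from this book.

    # Mass function of a single-lined spectroscopic binary (textbook relation):
    #   f(M) = P * K^3 * (1 - e^2)^1.5 / (2 * pi * G)
    #        = M2^3 * sin(i)^3 / (M1 + M2)^2
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # solar mass, kg
    DAY = 86_400.0       # seconds in a day

    def mass_function_solar(period_days, k_km_s, eccentricity=0.0):
        """Mass function in solar masses from period and velocity semi-amplitude."""
        p_s = period_days * DAY
        k_ms = k_km_s * 1e3
        f_kg = p_s * k_ms**3 * (1.0 - eccentricity**2) ** 1.5 / (2.0 * math.pi * G)
        return f_kg / M_SUN

    # Hypothetical binary: 10-day period, 60 km/s semi-amplitude, circular orbit.
    print(f"f(M) = {mass_function_solar(10.0, 60.0):.2f} solar masses")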

  The second is the study of the structure of our galaxy. Stars in the galaxy revolve around its centre, just like planets revolve around the sun. However, it is more complicated, because in this case the gravity is due to all the stars in the galaxy combined. In the solar system, the sun is such a dominant source that you can virtually ignore the pull of the planets. So, radial velocity studies of stars (binary or single) have played a major role in establishing the shape of the galaxy. It is still an active field today. For example, one piece of evidence for dark matter comes from the study of the distribution of velocities at different distances from the centre of the galaxy. Another recent development is the radial velocity study of stars very near the galactic centre, which strongly suggests that our galaxy contains a massive black hole.
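
  The dark matter argument rests on a simple relation: for a roughly circular orbit of speed v at radius r, the mass enclosed within that radius must be about v²r/G, so a rotation speed that stays flat well beyond the visible disc implies mass that keeps growing with radius. A minimal sketch, using an illustrative flat rotation speed of 220 km/s rather than any measured curve:

    # Mass enclosed within radius r for a circular orbital speed v:  M = v^2 * r / G.
    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30  # solar mass, kg
    KPC = 3.086e19    # metres in one kiloparsec

    def enclosed_mass_solar(v_km_s, r_kpc):
        """Implied enclosed mass, in solar masses."""
        v = v_km_s * 1e3
        r = r_kpc * KPC
        return v**2 * r / G / M_SUN

    # A flat 220 km/s rotation curve sampled at increasing radii:
    for r_kpc in (5, 10, 20, 40):
        print(f"r = {r_kpc:2d} kpc -> M(<r) ~ {enclosed_mass_solar(220.0, r_kpc):.1e} solar masses")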

  The third is the expansion of the universe. Edwin Hubble established that more distant galaxies tended to have more red-shifted spectra. Although not predicted even by Einstein, such an expanding universe is a natural solution of his theory of general relativity. Today, for more distant galaxies, the red-shift is used as a primary indicator of their distances. The ratio of the recession velocity to the distance is called the Hubble constant, and the precise measurement of its value has been one of the major accomplishments of modern astrophysics, using such tools as the Hubble Space Telescope.
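
  For nearby galaxies, where the red-shift z is much less than one, the recession velocity is roughly c times z, and the Hubble relation v = H₀d then gives the distance directly. The sketch below assumes a commonly quoted round value of about 70 km/s per megaparsec for H₀, purely for illustration.

    # Distance estimate from red-shift using the low-red-shift Hubble law: d = c * z / H0.
    C_KM_S = 299_792.458   # speed of light, km/s
    H0 = 70.0              # Hubble constant, km/s per Mpc (illustrative round value)

    def distance_mpc(redshift):
        """Approximate distance in megaparsecs; valid only for small red-shifts."""
        return C_KM_S * redshift / H0

    for z in (0.001, 0.01, 0.05):
        print(f"z = {z:5.3f} -> roughly {distance_mpc(z):7.1f} Mpc")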

  Quantum mechanics predicts the existence of what are usually called “zero point” energies for the strong, the weak and the electromagnetic interactions, where the “zero point” refers to the energy of the system at temperature T=0, or the lowest quantised energy level of a quantum mechanical system. Although the term zero point energy applies to all three of these interactions in nature, customarily (and hereafter in this section) it is used in reference only to the electromagnetic case.

  In conventional quantum physics, the origin of zero point energy is the Heisenberg uncertainty principle, which states that, for a moving particle such as an electron, the more precisely one measures the position, the less exact the best possible measurement of its momentum (mass times velocity), and vice versa. The least possible uncertainty of position multiplied by momentum is specified by Planck’s constant, h.

  A parallel uncertainty exists between measurements involving time and energy (and other so-called conjugate variables in quantum mechanics). This minimum uncertainty is not due to any correctable flaws in measurement, but rather reflects an intrinsic quantum fuzziness in the very nature of energy and matter springing from the wave nature of the various quantum fields. This leads to the concept of zero point energy.
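
  In symbols, the two uncertainty relations mentioned above are conventionally written as below, with ħ = h/2π; the factor of one half is the usual textbook lower bound, a slightly sharper statement than the rough “Planck’s constant” wording used above.

    \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2},
    \qquad
    \Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2},
    \qquad
    \hbar \;=\; \frac{h}{2\pi}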

  Zero point energy is the energy that remains when all other energy is removed from a system. This behaviour is demonstrated by, for example, liquid helium. As the temperature is lowered to absolute zero, helium remains a liquid, rather than freezing to a solid, owing to the irremovable zero point energy of its atomic motions. (Increasing the pressure to 25 atmospheres will cause helium to freeze.)

  A harmonic oscillator is a useful conceptual tool in physics. Classically a harmonic oscillator, such as a mass on a spring, can always be brought to rest. However a quantum harmonic oscillator does not permit this. A residual motion will always remain due to the requirements of the Heisenberg uncertainty principle, resulting in a zero point energy, equal to 1/2 hf, where f is the oscillation frequency.
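
  The quantised energy levels of a harmonic oscillator of frequency f make this explicit: even the lowest level, n = 0, retains half a quantum of energy.

    E_n \;=\; \left(n + \tfrac{1}{2}\right) h f,
    \qquad n = 0, 1, 2, \ldots
    \qquad\Longrightarrow\qquad
    E_0 \;=\; \tfrac{1}{2}\, h f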

  Electromagnetic radiation can be pictured as waves flowing through space at the speed of light. The waves are not waves of anything substantive, but are ripples in a state of a theoretically defined field. However these waves do carry energy (and momentum), and each wave has a specific direction, frequency and polarisation state. Each wave represents a “propagating mode of the electromagnetic field”.

  Each mode is equivalent to a harmonic oscillator and is thus subject to the Heisenberg uncertainty principle. From this analogy, every mode of the field must have 1/2 hf as its average minimum energy. That is a tiny amount of energy in each mode, but the number of modes is enormous, and indeed increases per unit frequency interval as the square of the frequency. The spectral energy density per unit volume is the density of modes times the energy per mode, and thus increases as the cube of the frequency. The product of the tiny energy per mode times the huge spatial density of modes yields a very high theoretical zero point energy density per cubic centimetre.

  From this line of reasoning, quantum physics predicts that all of space must be filled with electromagnetic zero point fluctuations (also called the zero point field) creating a universal sea of zero point energy. The density of this energy depends critically on where in frequency the zero point fluctuations cease. Since space itself is thought to break up into a kind of quantum foam at a tiny distance scale called the Planck length (10⁻³³ cm), it is argued that the zero point fluctuations must cease at a corresponding Planck frequency (10⁴³ Hz). If that is the case, the zero point energy density would be 110 orders of magnitude greater than the radiant energy at the centre of the sun.
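
  Putting the pieces together: the electromagnetic field has 8πf²/c³ modes per unit volume and unit frequency, each mode carries hf/2, and integrating the resulting f³ spectrum up to a cutoff gives the energy density. The sketch below does this numerically with the roughly 10⁴³ Hz Planck-frequency cutoff quoted above; the cutoff, and hence the enormous answer, are assumptions of the argument rather than measured facts.

    # Zero point energy density of the electromagnetic field, integrating the
    # spectral density rho(f) = (8 * pi * f^2 / c^3) * (h * f / 2) up to a cutoff:
    #     u = integral from 0 to fc of 4*pi*h*f^3/c^3 df = pi * h * fc^4 / c^3
    import math

    H = 6.626e-34   # Planck's constant, J s
    C = 2.998e8     # speed of light, m/s

    def zero_point_energy_density(cutoff_hz):
        """Energy density in joules per cubic metre for a given cutoff frequency."""
        return math.pi * H * cutoff_hz**4 / C**3

    planck_frequency = 1e43   # Hz, the rough cutoff quoted in the text
    print(f"~ {zero_point_energy_density(planck_frequency):.1e} J per cubic metre")
    # Of order 1e114 J/m^3 - the "enormous" zero point energy density in question.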

  How could such an enormous amount of energy not be wildly evident? There is one major difference between zero point electromagnetic radiation and ordinary electromagnetic radiation. Turning again to the Heisenberg uncertainty principle one finds that the lifetime of a given zero point photon, viewed as a wave, corresponds to an average distance travelled of only a fraction of its wavelength. Such a wave fragment is somewhat different to an ordinary plane wave, and it is difficult to know how to interpret this.

  On the other hand, zero point energy appears to have been directly measured as current noise in a resistively shunted Josephson junction by Koch, van Harlingen and Clarke up to a frequency of about 600 GHz.

  CHAPTER 22

  In this section dealing with scientific advances from 1970 onwards, we will discuss those that have had a profound effect on everyone’s life. There have been so many huge advances over the last few decades that they would warrant a book in themselves. In this section, therefore, we will be discussing primarily the Internet, GPS, mobile phones and lasers. For the younger readers, I’m sure that you could not imagine what life was like before these technologies were developed.

  I remember my first mobile phone. It was a bulky item which needed to be fitted to a car. The handset alone was far bigger than a modern iPhone, and needed to be connected to a transceiver also fitted within the car. I had this equipment fitted as soon as it was available. Until then, I was often on the way to a meeting with an oil company when their secretary would phone my secretary, saying that the meeting had to be cancelled due to a more pressing problem that had occurred. My secretary was unable to communicate this information to me, so I would spend maybe an hour driving across Aberdeen in heavy traffic before I discovered that the meeting had been cancelled, only to return to my office having wasted two hours of the day.

  The Internet has revolutionised the computer and communications world like nothing before. The invention of the telegraph, telephone, radio and computer set the stage for this unprecedented integration of capabilities. The Internet now has a worldwide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to scientific research and development of information infrastructure. Beginning with the early research into packet switching, governments, industry and academia have been partners in evolving and deploying this exciting new technology.

  This section of the book is intended to be a brief and incomplete history. Much material currently exists about the Internet, covering history, technology and usage. The history revolves around several distinct aspects.

  There is the technological evolution that began with early research on packet switching and the ARPANET (Advanced Research Projects Agency Network), where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance and higher-level functionality.

  There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of “Internauts” working together to create and evolve the technology. Finally there is the commercialisation aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

  The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many technological and organisational aspects. Its influence reaches not only to the technical fields of computer communications but throughout society as we move towards the increasing use of online tools to accomplish electronic commerce, information acquisition and community operations.

  The first recorded description of the social interactions that could be enabled through networking was in a series of memos written at MIT (Massachusetts Institute of Technology) in August 1962 discussing this “Galactic Network” concept. Their author envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. The computer research programme into this concept began at DARPA (Defense Advanced Research Projects Agency), starting in October 1962.

  MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. MIT understood the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 MIT connected their TX-2 computer to a Q-32 computer in California with a low-speed dial-up telephone line, creating the first (however small) wide-area computer network ever built. The result of this experiment was the realisation that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job.

 
