The Math Book


by DK


  Galton was a rigorous scientist, determined to analyze data to show mathematically how probable outcomes are. In his innovative 1888 book Natural Inheritance, Galton showed how two sets of data can be compared to show if there is a significant relationship between them. His approach involved establishing two related concepts that are now at the heart of statistical analysis: correlation and regression.

  Correlation measures the degree to which two random variables, such as height and weight, correspond. It often looks for a linear relationship—that is, a relationship that gives a simple line on a graph, with one variable changing in step with the other. Correlation does not imply a causal relationship between the two variables; it simply means they vary together. Regression, on the other hand, looks for the best equation for the graph line for two variables, so that changes in one variable can be predicted from changes to the other.
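  The two ideas can be illustrated with a short calculation. The sketch below (with made-up height and weight figures, not data from Galton) computes the correlation coefficient and the least-squares regression line for one pair of variables:

```python
# Illustrative sketch with invented data: Pearson correlation and
# least-squares regression for two variables, height and weight.
heights = [1.60, 1.65, 1.70, 1.75, 1.80]   # hypothetical heights (m)
weights = [58.0, 61.0, 66.0, 70.0, 75.0]   # hypothetical weights (kg)

n = len(heights)
mean_x = sum(heights) / n
mean_y = sum(weights) / n

# Covariance and variances (population form)
cov_xy = sum((x - mean_x) * (y - mean_y)
             for x, y in zip(heights, weights)) / n
var_x = sum((x - mean_x) ** 2 for x in heights) / n
var_y = sum((y - mean_y) ** 2 for y in weights) / n

# Correlation coefficient r: how closely the points follow a line (-1 to 1)
r = cov_xy / (var_x ** 0.5 * var_y ** 0.5)

# Regression line y = a + b*x: the "best equation for the graph line"
b = cov_xy / var_x          # slope
a = mean_y - b * mean_x     # intercept

print(round(r, 3), round(b, 1), round(a, 1))
```

  An r near 1 or −1 indicates a strong linear relationship; the slope b and intercept a give the equation used to predict one variable from the other.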

  Galton built an “anthropometric laboratory” to collect information on human characteristics, including head size and quality of vision. It generated huge amounts of data that he had to analyze statistically.

  Galton noticed that very tall parents tend to have children who are shorter than their parents, while very short parents tend to have children who are slightly taller than their parents. The second generation will be closer in height than the first, an example of regression to the mean.

  Standard deviation

  Although Galton’s main interest was human heredity, he created a broad range of data sets. Famously, he measured the size of seeds produced by sweet pea plants grown from seven sets of seeds. Galton found that the smallest pea seeds had larger offspring and the largest seeds produced smaller offspring. He had discovered the phenomenon of “regression to the mean,” a tendency for measurements to even out, always drifting toward the mean over time.

  Inspired by Galton’s work, Pearson set out to develop the mathematical framework for correlation and regression. After exhaustive tests that involved tossing coins and drawing lottery tickets, Pearson came up with the key idea of “standard deviation,” which shows how much on average observed values differ from expected. To arrive at this figure, he found the mean, which is the sum of all the values divided by how many values there are. Pearson then found the variance—the average of the squared differences from the mean. The differences are squared in order to avoid problems with negative numbers, and the standard deviation is the square root of the variance. Pearson realized that by uniting the mean and the standard deviation, he could calculate Galton’s regression precisely.
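  Pearson's recipe as described above — mean, then variance, then standard deviation — can be sketched in a few lines (the sample values are arbitrary, chosen only for illustration):

```python
# Arbitrary sample of observed values, for illustration only
values = [4.0, 8.0, 6.0, 5.0, 7.0]

# Mean: the sum of all the values divided by how many there are
mean = sum(values) / len(values)

# Variance: the average of the squared differences from the mean
# (squaring avoids problems with negative differences)
variance = sum((v - mean) ** 2 for v in values) / len(values)

# Standard deviation: the square root of the variance
std_dev = variance ** 0.5

print(mean, variance, round(std_dev, 4))
```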

  No observational problem will not be solved by more data.

  Vera Rubin

  American astronomer

  Chi-squared test

  In 1900, after an extensive study of betting data from the gaming tables of Monte Carlo, Pearson described the chi-squared test, now one of the cornerstones of statistics. Pearson’s aim was to determine whether the difference between observed values and expected values is significant, or simply the result of chance.

  Using his data on gambling, Pearson calculated a table of probability values, called chi-squared (χ²), in which 0 shows no significant difference from the expected values (the “null hypothesis”), whereas larger values show a significant difference. Pearson painstakingly worked out his table by hand, but chi-squared tables are now produced using computer software. For each set of data, a chi-squared value can be found by summing, over all categories, the squared differences between observed and expected values, each divided by the expected value. The chi-squared value is then checked against the table, for the appropriate number of independent categories — known as “degrees of freedom” — to judge whether the variation in the data is significant at the level set by the researcher.
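  As a rough illustration (the dice counts below are invented, not Pearson's Monte Carlo data), the chi-squared statistic compares observed category counts with the counts expected under the null hypothesis:

```python
# Hypothetical example: 60 rolls of a die, compared with the 10 rolls
# per face expected if the die is fair (the null hypothesis).
observed = [8, 12, 9, 11, 6, 14]
expected = [10, 10, 10, 10, 10, 10]

# Chi-squared: sum over all categories of (observed - expected)^2 / expected
chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degrees of freedom: number of categories minus one
df = len(observed) - 1

print(chi_squared, df)
```

  With 5 degrees of freedom, a value of 4.2 falls well below the usual 5 percent significance threshold of about 11.07, so these invented rolls show no significant departure from fairness.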

  The combination of Galton’s correlation and regression, and Pearson’s standard deviation and chi-squared test, formed the foundations of modern statistics. These ideas have since been refined and developed, but they remain at the heart of data analysis. This is crucial in many aspects of modern life, from comprehending economic behavior to planning new transportation links and improving public health services.

  KARL PEARSON

  Karl Pearson was born in London in 1857. An atheist, freethinker, and socialist, he became one of the greatest statisticians of the 1900s, but he was also a champion of the discredited science of eugenics.

  After graduating with a degree in mathematics from Cambridge University, Pearson became a teacher before making his mark in statistics. In 1901, he founded the statistical journal Biometrika with Francis Galton and evolutionary biologist Walter F. R. Weldon, followed by the world’s first university department of statistics at University College, London, in 1911. His views often led him into disputes. He died in 1936.

  Key works

  1892 The Grammar of Science

  1896 Mathematical Contributions to the Theory of Evolution

  1900 On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling

  See also: Negative numbers • Probability • Normal distribution • The fundamental theorem of algebra • Laplace’s demon • The Poisson distribution

  IN CONTEXT

  KEY FIGURE

  Bertrand Russell (1872–1970)

  FIELD

  Logic

  BEFORE

  c. 300 BCE Euclid’s Elements contains an axiomatic approach to geometry.

  1820s French mathematician Augustin Cauchy clarifies the rules for calculus, inaugurating a new rigor in mathematics.

  AFTER

  1936 Alan Turing studies the computability of mathematical functions, with a view to analyzing which problems in mathematics can be decided and which cannot.

  1975 American logician Harvey Friedman develops the “reverse mathematics” program, which starts with theorems and works backward to axioms.

  The common perception that mathematics is logical, with fixed rules, evolved over millennia, dating back to ancient Greece with the works of Plato, Aristotle, and Euclid. A rigorous definition of the laws of arithmetic and geometry had emerged by the 1800s, with the work of George Boole, Gottlob Frege, Georg Cantor, Giuseppe Peano, and, in 1899, David Hilbert’s Foundations of Geometry. However, in 1903, Bertrand Russell published The Principles of Mathematics, which revealed a flaw in the logic of one area of mathematics. In the book, he explored a paradox, known as Russell’s paradox (or the Russell–Zermelo paradox, after German mathematician Ernst Zermelo, who made a similar discovery in 1899).

  The paradox implied that set theory, which deals with the properties of sets of numbers or functions, and was fast becoming the bedrock of mathematics, contained a contradiction. To explain the problem, Russell used an analogy known as the barber paradox in which a barber shaves every man in town aside from those who shave themselves, creating two sets of people: those who shave themselves and those shaved by the barber. However, this begs the question: if the barber shaves himself, to which of the two sets does the barber belong?
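  In modern set-theoretic notation (a standard formulation, not the book's own), the paradox defines the set of all sets that are not members of themselves, and then asks whether that set belongs to itself:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```

  Either answer contradicts the other, just as either answer to the barber question does.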

  Russell’s barber paradox contradicted Frege’s Basic Laws of Arithmetic concerning the logic of mathematics, which Russell had pointed out in a letter to Frege in 1902. Frege declared that he was “thunderstruck,” and he never found an adequate solution to the paradox.

  A theory of types

  Russell went on to produce his own response to his paradox, developing a “theory of types,” which placed restrictions on the established model of set theory (known as “naive set theory”) by creating a hierarchy so that “the set of all sets” would be treated differently from its constituent smaller sets. In so doing, Russell managed to circumvent the paradox completely. He utilized this new set of logical principles in the momentous Principia Mathematica, written with Alfred North Whitehead and published in three volumes from 1910 to 1913.

  Every good mathematician is at least half a philosopher, and every good philosopher is at least half a mathematician.

  Gottlob Frege

  Logical gaps

  In 1931, Kurt Gödel, an Austrian mathematician and philosopher, published his incompleteness theorem (following on from his completeness theorem of a few years earlier). The 1931 theorem concluded that there will always exist some statements regarding numbers that may be true, but can never be proved. Furthermore, expanding mathematics by simply adding more axioms will lead to further “incompleteness.” This meant that the efforts of Russell, Hilbert, Frege, and Peano to develop complete logical frameworks for mathematics were destined to have logical gaps, however watertight they tried to make them.

  Gödel’s theorem also implied that some as-yet unproven theorems in mathematics, such as the Goldbach conjecture, may never be proved. This has not, however, deterred mathematicians in their resolute efforts to prove Gödel wrong.

  BERTRAND RUSSELL

  The son of a lord, Bertrand Russell was born in Monmouthshire, Wales, in 1872. He studied mathematics and philosophy at Cambridge University, but was dismissed from an academic post there in 1916 for anti-war activities. A prominent pacifist and social critic, in 1918 he was jailed for six months, during which he wrote his Introduction to Mathematical Philosophy.

  Russell taught in the US in the 1930s, although his appointment at a college in New York was revoked due to a judicial declaration that his opinions rendered him morally unfit. He was awarded the Nobel Prize in Literature in 1950, and in 1955 he and Albert Einstein released a joint manifesto calling for a ban on nuclear weapons. He later opposed the Vietnam War. Russell died in 1970.

  Key works

  1903 The Principles of Mathematics

  1908 Mathematical Logic as Based on the Theory of Types

  1910–13 Principia Mathematica (with Alfred North Whitehead)

  See also: The Platonic solids • Syllogistic logic • Euclid’s Elements • The Goldbach conjecture • The Turing machine

  IN CONTEXT

  KEY FIGURE

  Hermann Minkowski (1864–1909)

  FIELD

  Geometry

  BEFORE

  c. 300 BCE In his book Elements, Euclid establishes the geometry of 3-D space.

  1904 In his book The Fourth Dimension, British mathematician Charles Hinton coins the term “tesseract” for a four-dimensional cube.

  1905 French mathematician Henri Poincaré has the idea of making time the fourth dimension in space.

  1905 Albert Einstein states his theory of special relativity.

  AFTER

  1916 Einstein writes the key paper outlining his theory of general relativity, in which he explains gravity as a curvature of spacetime.

  There are three dimensions in our familiar view of the world—length, width, and height—and they can largely be described mathematically by the geometry of Euclid. But in 1907, German mathematician Hermann Minkowski delivered a lecture in which he added time, an invisible fourth dimension, to create the concept of spacetime. This has played a key part in understanding the nature of the Universe. It has provided a mathematical framework for Einstein's theory of relativity, allowing scientists to develop and expand this theory.

  It was in the 1700s that scientists first began questioning whether three-dimensional Euclidean geometry could describe the entire Universe. Mathematicians started to develop non-Euclidean geometric frameworks, while some considered time as a potential dimension. Light provided the mathematical prompt. In the 1860s, Scottish scientist James Clerk Maxwell found that the speed of light is the same whatever the speed of its source. Mathematicians then developed his equations to try to understand how the finite speed of light fit into the coordinate system of space and time.

  A black hole occurs when spacetime warps so much that its curvature becomes infinite at the hole’s center. Even light is not fast enough to escape the hole’s immense gravitational pull.

  Mathematics of relativity

  In 1904, Dutch physicist Hendrik Lorentz developed a set of equations, called “transformations,” to show how mass, length, and time change as an object approaches the speed of light. A year later, Albert Einstein produced his theory of special relativity, which established that the speed of light is the same for all observers throughout the Universe. Time is a relative, not an absolute, quantity—running at different speeds in different places and woven together with space.
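  In modern notation (the equations themselves are not quoted above), Lorentz's transformations for motion at speed v along the x-axis can be written as:

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

  As v approaches the speed of light c, the factor γ grows without bound, which is why lengths contract and time runs more slowly for fast-moving objects.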

  Minkowski turned Einstein’s theory into mathematics. He showed how space and time are parts of a four-dimensional spacetime, where each point in space and time has a position. He represented movement between positions as a theoretical line, a “worldline,” which could be plotted on a graph, with space and time as the axes. A static object produces a vertical worldline, and the worldline of a moving object is at an angle (see below). The worldline angle of an object moving at the speed of light is 45°. According to Minkowski, no worldline can exceed this angle, but in reality, there are three axes of space, plus the axis of time, so the 45° worldline is really a “hypercone,” a 4-dimensional figure. All physical reality is held within it, as nothing can travel faster than light.
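  The 45° light cone described above reflects Minkowski's invariant spacetime interval, given here in standard modern form rather than as it appears in his lecture:

```latex
s^{2} = x^{2} + y^{2} + z^{2} - (ct)^{2}
```

  Light rays satisfy s² = 0, tracing the cone itself; the worldlines of slower-than-light objects have s² < 0 and stay inside it.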

  Henceforth, space by itself, and time by itself shall fade to mere shadows, and only some union of the two will preserve independent reality.

  Hermann Minkowski

  HERMANN MINKOWSKI

  Born in Aleksotas (now in Lithuania) in 1864, Minkowski moved with his family to Königsberg in Prussia in 1872. As a boy, he showed an aptitude for math and began his studies at the University of Königsberg aged 15. By 19, he had won the Paris Grand Prix for mathematics, and at 23, he became a professor at the University of Bonn. In 1897 he taught the young Albert Einstein in Zurich.

  Following a move to Göttingen in 1902, Minkowski became fascinated by the mathematics of physics, especially the interaction of light and matter. When Einstein unveiled his theory of special relativity in 1905, Minkowski was spurred on to develop his own theory, in which space and time form part of a four-dimensional reality. This concept inspired Einstein’s theory of general relativity in 1915, but by then, Minkowski was dead—killed at 44 years old by a ruptured appendix.

  Key work

  1907 Raum und Zeit (Space and Time)

  See also: Euclid’s Elements • Newton’s laws of motion • Laplace’s demon • Topology • Proving the Poincaré conjecture

  IN CONTEXT

  KEY FIGURE

  Srinivasa Ramanujan (1887–1920)

  FIELD

  Number theory

  BEFORE

  1657 In France, mathematician Bernard Frénicle de Bessy cites the properties of 1,729, the original “taxicab” number.

  1700s Swiss mathematician Leonhard Euler calculates that 635,318,657 is the smallest number that can be expressed as the sum of two fourth powers (numbers to the power of 4) in two ways.

  AFTER

  1978 Belgian mathematician Pierre Deligne receives the Fields Medal for his work on number theory, including the proof of a conjecture in the theory of modular forms that was first made by Ramanujan.

  A “taxicab” number, Ta(n), is the smallest number that can be expressed as the sum of two positive cubed integers (whole numbers) in n (number of) different ways. They owe their name to an anecdote from 1919, when British mathematician G. H. Hardy went to Putney, London, to visit his protégé Srinivasa Ramanujan, who was unwell. Arriving in a cab with the number 1,729, Hardy remarked, “Rather a dull number, don’t you think?” Ramanujan disagreed, then explained that 1,729 is the smallest number that is the sum of two positive cubes in two different ways. Hardy’s frequent retelling of this story ensured that 1,729 would become one of the best-known numbers in mathematics. Ramanujan was not the first to make note of this number’s unique properties; French mathematician Bernard Frénicle de Bessy had also written about them in the 1600s.
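  The property Ramanujan pointed out is easy to verify by brute force. A minimal sketch (my own illustration, with an arbitrary search bound):

```python
from itertools import combinations

# Find the smallest number that is a sum of two positive cubes in two
# different ways, i.e. Ta(2) = 1,729.
LIMIT = 20  # arbitrary bound on cube bases; large enough to reach 1,729

# Map each sum of two cubes to the list of (a, b) pairs producing it
sums = {}
for a, b in combinations(range(1, LIMIT + 1), 2):
    sums.setdefault(a**3 + b**3, []).append((a, b))

# Keep totals expressible in at least two ways, and take the smallest
taxicab_2 = min(total for total, pairs in sums.items() if len(pairs) >= 2)
print(taxicab_2, sums[taxicab_2])
```

  The search finds 1,729 = 1³ + 12³ = 9³ + 10³, the two representations Ramanujan had in mind.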

  Extending the concept

  The taxicab story inspired later mathematicians to examine the property that Ramanujan had recognized and to expand its application. The hunt was on for the smallest number that could be expressed as the sum of two positive cubes in three, four, or more different ways. A further question was whether Ta(n) exists for all values of n; in 1938, Hardy and British mathematician Edward Wright proved that it does (an existence proof), but developing a method of finding Ta(n) in each case has proved elusive.

  Extending the concept further, the expression Ta(j, k, n) seeks the smallest positive number that is the sum of j different positive integers, each raised to the power k, in n distinct ways. For example, Ta(2, 4, 2) requires the smallest number that is the sum of two fourth powers in two different ways: 635,318,657.

  The existence of Ta(n) was proved theoretically in 1938 for all values of n, but the search is still on for larger taxicab numbers. Even with the benefits of computer calculations, mathematicians have not yet moved beyond Uwe Hollerbach’s discovery of Ta(6).

  Continuing relevance

  Taxicab numbers were only one area of Hardy and Ramanujan’s work. Their main focus was prime numbers. Hardy was excited by Ramanujan’s claim that he had found a function of x that exactly represented the number of prime numbers less than x; Ramanujan was unable, however, to offer a rigorous proof.

  Taxicab numbers have little practical use, but they still inspire scholars as curiosities. Mathematicians now also seek “cabtaxi” numbers: based on the taxicab formula, these allow calculations using both positive and negative cubes.

  An equation means nothing to me unless it expresses a thought of God.

  Srinivasa Ramanujan

 
