
The Math Book


by DK


  The bell curve is a visual illustration of normal distribution. The highest point of the curve (b) represents the mean, which the values cluster around. Values become less frequent the further they are from the mean, so are least frequent at points a and c.

  Finding the mean

  In 1721, Scottish baronet Alexander Cuming gave de Moivre a problem concerning the expected winnings in a game of chance. De Moivre concluded that it came down to finding the mean deviation (the average difference between the overall mean and each value in a set of figures) of binomial distribution. He wrote up his results in Miscellanea Analytica.

  De Moivre had realized that binomial outcomes cluster around their mean—on a graph, they plot an uneven curve that gets closer to the shape of a bell (normal distribution) the more data is collected. In 1733, de Moivre was satisfied that he had found a simple way of approximating binomial probabilities using normal distribution, thus creating a bell curve for binomial distribution on a graph. He wrote up his findings as a short paper, then included it in the 1738 edition of his Doctrine of Chances.
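De Moivre’s approximation can be seen numerically: near the mean, the exact binomial probability of k heads in n coin tosses is close to the height of a bell curve with matching mean and standard deviation. A minimal Python sketch, with illustrative values not taken from de Moivre’s text:

```python
import math

def binomial_pmf(k, n, p):
    """Exact probability of k successes in n independent trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x, mean, sd):
    """Height of the bell curve with the given mean and standard deviation."""
    return math.exp(-((x - mean) ** 2) / (2 * sd**2)) / (sd * math.sqrt(2 * math.pi))

# For n fair coin tosses (p = 0.5), the binomial mean is n*p and the
# standard deviation is sqrt(n*p*(1-p)).
n, p = 100, 0.5
mean, sd = n * p, math.sqrt(n * p * (1 - p))

# Near the mean, the exact binomial probability and the bell-curve
# height agree closely, as de Moivre observed.
for k in (45, 50, 55):
    print(k, round(binomial_pmf(k, n, p), 5), round(normal_pdf(k, mean, sd), 5))
```

For 100 tosses, the two values agree to about three decimal places around the mean of 50.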

  Using normal distribution

  From the mid-1700s, the bell curve cropped up as a model for all kinds of data. In 1809, Carl Friedrich Gauss pioneered normal distribution as a useful statistical tool in its own right. French mathematician Pierre-Simon Laplace used normal distribution to model curves for random errors, such as measurement errors, in one of the first applications of a normal curve.

  In the 1800s, many statisticians studied variation in experimental results. British statistician Francis Galton used a device called the quincunx (or Galton board) to study random variation. The board consisted of a triangular array of pegs through which beads dropped from top to bottom, where they collected in a series of vertical tubes. Galton measured how many beads were in each tube and described the resulting distribution as “normal.” His work—along with that of Karl Pearson—popularized the use of the term “normal” to describe what was also known as a “Gaussian” curve.

  Today, normal distribution is widely used to model statistical data, with applications ranging from population studies to investment analysis.

  ABRAHAM DE MOIVRE

  Born in 1667, Abraham de Moivre was raised as a Protestant in Catholic France, and lived there until 1685, when Louis XIV expelled the Huguenots. Briefly imprisoned for his religious beliefs, de Moivre emigrated to England upon his release. He became a private mathematics tutor in London. He had hoped for a university teaching position, but he still faced some discrimination as a Frenchman in England. Nevertheless, de Moivre impressed and befriended many eminent scientists of the time, including Isaac Newton, and was elected as a fellow of the Royal Society in 1697. As well as his work on distribution, de Moivre was best known for his work on complex numbers. He died in London in 1754.

  Key works

  1711 De Mensura Sortis (On the Measurement of Chance)

  1721–30 Miscellanea Analytica (Miscellany of Analysis)

  1738 The Doctrine of Chances (1st edition)

  1756 The Doctrine of Chances (3rd edition)

  See also: Probability • The law of large numbers • The fundamental theorem of algebra • Laplace’s demon • The Poisson distribution • The birth of modern statistics

  IN CONTEXT

  KEY FIGURE

  Leonhard Euler (1707–83)

  FIELDS

  Number theory, topology

  BEFORE

  1727 Euler develops the constant e, which is used in describing exponential growth and decay.

  AFTER

  1858 August Möbius extends Euler’s graph theory formula to surfaces that are joined to form a single surface.

  1895 Henri Poincaré publishes his paper Analysis situs, in which graph theory is generalized to create a new area of mathematics known as topology (the study of properties of geometrical figures that are not affected by continuous deformation).

  Graph theory and topology began with Leonhard Euler’s attempt to find a solution to a mathematical puzzle—whether it was possible to make a circuit of the seven bridges in Königsberg (now Kaliningrad, Russia) without crossing any bridge twice. The river flowed around an island and then forked. Realizing that the problem related to the geometry of position, Euler developed a new type of geometry to show that it was impossible to devise such a route. Distances between points were not relevant: the only thing that counted was the connections between points.

  Euler modeled the Königsberg bridges problem by making each of the four land areas a point (node or vertex) and making the bridges arcs (curves or edges) that joined the various points. This gave him a “graph” that represented the relationships between the land and the bridges.

  First graph theorem

  Euler began from the premise that each bridge could be crossed only once and each time a land area was entered it also needed to be exited, which required two bridges in order to avoid crossing any bridge twice. Each land area therefore needed to connect to an even number of bridges, with the possible exception of the start and finish (if they were different locations). However, in the graph representing Königsberg, A is the endpoint of five bridges and B, C, and D are each the endpoint of three. A successful route needs land areas (nodes or vertices) to have an even number of bridges (arcs) to enter and exit by. Only the start and end points can have an odd number. If more than two nodes have an odd number of arcs, then a route using each bridge only once is impossible. By showing this, Euler provided the first theorem in graph theory.

  The word “graph” is most often used to describe a Cartesian system of coordinates with points plotted using x and y axes. More generally, a graph consists of a discrete set of nodes (or vertices) connected by arcs (or edges). The number of arcs meeting at a node is called its degree. For the Königsberg graph, A has degree 5 and B, C, and D each have degree 3. A path that travels each arc once and only once is called an Eulerian path (or a semi-Eulerian path if the start and end are at different nodes).

  The Königsberg bridges problem can be expressed as the question: “Is there an Eulerian or a semi-Eulerian path for the graph of Königsberg?” Euler’s answer is that such a graph must have at most two nodes of odd degree, but the Königsberg graph has four odd degree nodes.
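Euler’s degree criterion translates into a short procedure: count the nodes of odd degree (this sketch assumes the graph is connected, as Königsberg’s is). In Python:

```python
def eulerian_path_type(edges):
    """Classify a connected multigraph by Euler's degree criterion.

    Returns "Eulerian" (a closed circuit exists), "semi-Eulerian"
    (an open path exists), or "impossible".
    """
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    if odd == 0:
        return "Eulerian"
    if odd == 2:
        return "semi-Eulerian"
    return "impossible"

# The seven bridges of Königsberg: land areas A (island), B, C, and D.
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]
print(eulerian_path_type(konigsberg))  # all four nodes have odd degree
```

With four odd-degree nodes, the function reports the route as impossible, exactly as Euler concluded.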

  Read Euler, read Euler. He is our master in everything.

  Pierre-Simon Laplace

  Network theory

  Arcs on a graph may be “weighted” (given degrees of significance) by assigning numerical values to them—for example, to represent the different lengths of roads on a map. A weighted graph is also called a network. Networks are used to model relationships between objects in many disciplines—including computer science, particle physics, economics, cryptography, sociology, biology, and climatology—usually with a view to optimizing a particular property, such as the shortest distance between two points.

  One application of networks is to address the so-called “traveling salesperson problem.” This involves finding the shortest route for a salesperson to travel from their home to a series of cities and back again. The puzzle was allegedly first set as a challenge on the back of a cereal box. In spite of advances in computing, no known method is guaranteed to find the best solution quickly: exhaustive search always finds it, but the time this takes grows exponentially as the number of cities increases.
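A brute-force search makes that exponential growth concrete: for n cities there are (n − 1)! orderings to try. A Python sketch using a small, hypothetical distance table:

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force the traveling salesperson problem.

    `dist` is a symmetric matrix of distances between n cities; the tour
    starts and ends at city 0. Trying all (n-1)! orderings guarantees the
    optimum but becomes infeasible as n grows.
    """
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0, *perm, 0)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Hypothetical distances between four cities.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(shortest_tour(dist))  # → ((0, 1, 3, 2, 0), 80)
```

Four cities need only 6 orderings; 20 cities would already need over 10^17, which is why practical solvers rely on heuristics instead.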

  The city of Königsberg had seven bridges linking two parts of the city to its two islands. Euler’s graph shows that it is impossible to construct a route that visits each island and crosses each bridge only once.

  See also: Coordinates • Euler’s number • The complex plane • The Möbius strip • Topology • The butterfly effect • The four-color theorem

  IN CONTEXT

  KEY FIGURE

  Christian Goldbach (1690–1764)

  FIELD

  Number theory

  BEFORE

  c. 200 CE Diophantus of Alexandria writes his Arithmetica in which he lays out key issues about numbers.

  1202 Fibonacci identifies what becomes known as the Fibonacci sequence of numbers.

  1643 Pierre de Fermat pioneers number theory.

  AFTER

  1742 Leonhard Euler refines the Goldbach conjecture.

  1937 Soviet mathematician Ivan Vinogradov proves the ternary Goldbach problem, a version of the conjecture.

  In 1742, German-born mathematician Christian Goldbach, then working in Russia, wrote to Leonhard Euler, the leading mathematician of the time. Goldbach believed he had observed something remarkable—that every even integer greater than 2 can be split into two prime numbers, such as 6 (3 + 3) or 8 (3 + 5). Euler was convinced that Goldbach was right, but he could not prove it. Goldbach also proposed that every odd integer above 5 is the sum of three primes, and concluded that every integer from 2 upward can be created by adding together primes; these additional proposals are dubbed “weak” versions of the original “strong” conjecture, as they would follow naturally if the strong conjecture were true.

  Manual and electronic methods have, as yet, failed to find any even number that does not conform to the original strong conjecture. In 2013, a computer tested every even number up to 4 × 10^18 without finding one. The bigger the number, the more pairs of primes can create it, so it seems highly likely that the conjecture is valid and no exception will be found. Mathematicians, however, require a definitive proof.
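The conjecture is easy to test by machine for small numbers. A Python sketch that lists the prime pairs for each even number (the simple trial-division primality test is adequate only at this small scale):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small numbers."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pairs(even):
    """All ways to write an even number > 2 as a sum of two primes."""
    return [(p, even - p) for p in range(2, even // 2 + 1)
            if is_prime(p) and is_prime(even - p)]

# Every even number in this range has at least one decomposition,
# and larger numbers tend to have more of them.
for even in range(4, 21, 2):
    print(even, goldbach_pairs(even))
```

For example, 20 decomposes as both 3 + 17 and 7 + 13; such checks verify individual cases but, as the text notes, can never amount to a proof.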

  Over centuries, different “weak” versions of the conjecture have been proved, but no one to date has proved the strong conjecture, which seems destined to defeat even the brightest minds.

  UCLA’s Terence Tao, winner of the Fields Medal in 2006 and the Breakthrough Prize in mathematics in 2015, published a rigorous proof of a weak Goldbach conjecture in 2012.

  See also: Mersenne primes • The law of large numbers • The Riemann hypothesis • The prime number theorem

  IN CONTEXT

  KEY FIGURE

  Leonhard Euler (1707–83)

  FIELD

  Number theory

  BEFORE

  1714 Roger Cotes, the English mathematician who proofread Newton’s Principia, creates an early formula similar to Euler’s, but using imaginary numbers and a complex logarithm (a type of logarithm used when the base is a complex number).

  AFTER

  1749 Abraham de Moivre uses Euler’s formula to prove his theorem, which links complex numbers and trigonometry.

  1934 Soviet mathematician Alexander Gelfond shows that e^π is transcendental, that is, it is not a root of any polynomial equation with integer coefficients.

  Formulated by Leonhard Euler in 1747, the equation known as Euler’s identity, e^(iπ) + 1 = 0, encompasses the five most important numbers in mathematics: 0 (zero), which is neutral for addition and subtraction; 1, which is neutral for multiplication and division; e (2.718..., the number at the heart of exponential growth and decay); i (√-1, the fundamental imaginary number); and π (3.142..., the ratio of a circle’s circumference to its diameter, which occurs in many equations in mathematics and physics). Two of these numbers, e and i, were introduced by Euler himself. His genius lay in combining all five milestone numbers with three simple operations: raising a number to a power (for example, 5^4, or 5 × 5 × 5 × 5), multiplication, and addition.

  Complex powers

  Mathematicians such as Euler asked themselves if it would be meaningful to raise a number to a complex power—a complex number being a number that combines a real number with an imaginary one, such as a + bi, where a and b are any real numbers. When Euler raised the constant e to the power of the imaginary number i multiplied by π, he discovered that it equals –1. Adding 1 to both sides of the equation produces Euler’s identity, e^(iπ) + 1 = 0. The equation’s simplicity has led mathematicians to describe it as “elegant,” a description reserved for proofs that are profound yet also unusually succinct.
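The identity can be checked numerically with complex arithmetic; floating-point rounding leaves only a remainder on the order of 10^-16. For example, in Python:

```python
import cmath
import math

# e^(iπ) computed with the complex exponential; the result is -1 up to
# floating-point rounding, so e^(iπ) + 1 is (almost exactly) 0.
value = cmath.exp(1j * math.pi) + 1
print(value)
print(abs(value) < 1e-12)  # True: the remainder is pure rounding error
```

The tiny nonzero imaginary part is an artifact of representing π in finite precision, not a flaw in the identity itself.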

  It is simple… yet incredibly profound; it comprises the five most important mathematical constants.

  David Percy

  British mathematician

  See also: Calculating pi • Trigonometry • Imaginary and complex numbers • Logarithms • Euler’s number

  IN CONTEXT

  KEY FIGURE

  Thomas Bayes (1702–61)

  FIELD

  Probability

  BEFORE

  1713 Jacob Bernoulli’s Ars Conjectandi (The Art of Conjecturing), published after his death, sets out his new mathematical theory of probability.

  1718 Abraham de Moivre defines the statistical independence of events in his book The Doctrine of Chances.

  AFTER

  1774 In his Memoir on the Probability of the Causes of Events, Pierre-Simon Laplace introduces the principle of inverse probability.

  1992 The International Society for Bayesian Analysis (ISBA) is founded to promote the application and development of Bayes’ theorem.

  In 1763, Richard Price, a Welsh minister and mathematician, published a paper called “An Essay Towards Solving a Problem in the Doctrine of Chances.” Its author, the Reverend Thomas Bayes, had died two years earlier, leaving the paper to Price in his will. It was a breakthrough in the modeling of probability and is still used today in areas as diverse as locating lost aircraft and testing for disease.

  Jacob Bernoulli’s book Ars Conjectandi (1713) showed that as the number of identically distributed, randomly generated variables increases, so their observed average gets closer to their theoretical average. For example, if you toss a coin for long enough, the number of times it comes up heads will get closer and closer to half the total of tosses—a probability of 0.5.
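Bernoulli’s observation is easy to reproduce by simulation; the toss counts and seed below are illustrative. In Python:

```python
import random

def running_average(tosses, seed=1):
    """Fraction of heads observed in a given number of fair coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(tosses))
    return heads / tosses

# The observed frequency of heads drifts toward the theoretical 0.5
# as the number of tosses grows.
for n in (10, 100, 10_000, 1_000_000):
    print(n, running_average(n))
```

With only ten tosses the observed frequency can be far from 0.5; by a million tosses it is typically within a fraction of a percent.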

  In 1718, Abraham de Moivre grappled with the mathematics underpinning probability. He demonstrated that, provided the sample size was large enough, the distribution of a continuous random variable—people’s heights, for example—averaged out into a bell-shaped curve, later known as the “normal distribution” and also, after German mathematician Carl Gauss, as the “Gaussian” distribution.

  If a disease affects 5 percent of the population (event A) and is diagnosed using a test with 90 percent accuracy (a positive result being event B), you might assume that the probability (P) of having the disease if you test positive—P(A|B)—is 90 percent. However, Bayes’ theorem factors in the false positives produced by the test’s 10 percent inaccuracy, which inflate the overall probability of a positive result, P(B).

  Working out probabilities

  Most real-world events, however, are more complicated than the toss of a coin. For probability to be useful, mathematicians needed to determine how an event’s outcome could be used to draw conclusions about the probabilities that led to it. This reasoning based on the causes of observed events—rather than using direct probabilities, such as the 50 percent chance of a heads coin toss—became known as inverse probability. Problems that deal with the probabilities of causes are called inverse probability problems and might involve, for example, observing a bent coin landing on heads 13 times out of 20 and then trying to determine whether the probability of that coin landing on heads lies somewhere between 0.4 and 0.6.

  To show how to calculate inverse probabilities, Bayes considered two interdependent events—“event A” and “event B”. Each has a probability of occurring—P(A) and P(B)— with P for each being a number between 0 and 1. If event A occurs, it alters the probability of event B happening, and vice versa. To denote this, Bayes introduced “conditional probabilities.” These are given as P(A|B), the probability of A given B, and P(B|A), the probability of B given A. Bayes managed to solve the problem of how all four probabilities related to one another with the equation: P(A|B) = P(A) × P(B|A)/P(B).
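Plugging the disease-test figures from earlier into Bayes’ equation shows how far the intuitive 90 percent overshoots. A Python sketch (the 5 percent prevalence and 90 percent accuracy are the example’s assumed values):

```python
def bayes(p_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) via Bayes' theorem, expanding P(B) over both cases of A."""
    p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
    return p_a * p_b_given_a / p_b

# Disease prevalence 5%, test accuracy 90% (so a 10% false-positive rate):
# the chance of actually having the disease after a positive test.
posterior = bayes(p_a=0.05, p_b_given_a=0.90, p_b_given_not_a=0.10)
print(round(posterior, 3))  # prints 0.321
```

Because healthy people vastly outnumber sick ones, false positives dominate: a positive result implies only about a 32 percent chance of disease, not 90 percent.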

  THOMAS BAYES

  The son of a Nonconformist minister, Thomas Bayes was born in 1702 and grew up in London. He studied logic and theology at the University of Edinburgh and followed his father into the ministry, spending much of his life leading a Presbyterian chapel in Tunbridge Wells, Kent.

  Although little is known of Bayes’ life as a mathematician, in 1736 he anonymously published An Introduction to the Doctrine of Fluxions, and a Defence of the Mathematicians Against the Objections of the Author of the Analyst, in which he defended Isaac Newton’s calculus foundations against the criticisms of the philosopher Bishop George Berkeley. Bayes was made a fellow of the Royal Society in 1742 and died in 1761.

  Key work

  1736 An Introduction to the Doctrine of Fluxions, and a Defence of the Mathematicians Against the Objections of the Author of the Analyst

  See also: Probability • The law of large numbers • Normal distribution • Laplace’s demon • The Poisson distribution • The birth of modern statistics • The Turing machine • Cryptography

  IN CONTEXT

  KEY FIGURE

  Joseph-Louis Lagrange (1736–1813)

  FIELD

  Algebra

  BEFORE

  628 Brahmagupta publishes a formula for solving many quadratic equations.

  1545 Gerolamo Cardano creates formulae for resolving cubic and quartic equations.

  1749 Leonhard Euler proves that polynomial equations of degree n have exactly n complex roots (where n = 2, 3, 4, 5, or 6).

  AFTER

  1799 Carl Gauss publishes the first proof of the fundamental theorem of algebra.

  1824 In Norway, Niels Henrik Abel completes Paolo Ruffini’s 1799 proof that there is no general formula for the quintic equation.

 
