The Math Book


by DK



  SRINIVASA RAMANUJAN

  Born in Erode, southern India, in 1887, Ramanujan displayed an extraordinary aptitude for mathematics at an early age. Finding it hard to get full recognition locally, he took the bold step of sending some of his results to G. H. Hardy, then a professor at Trinity College, Cambridge. Hardy declared that they had to be the work of a mathematician “of the highest class,” and had to be true, because no one could invent them. In 1913, Hardy invited Ramanujan to work with him in Cambridge. The collaboration was hugely productive: in addition to the taxicab numbers, Ramanujan also developed a formula for obtaining the value of pi to a high level of accuracy.

  However, Ramanujan suffered from poor health. He returned to India in 1919 and died a year later—probably as a result of amoebic dysentery contracted years earlier. He left behind several notebooks, which mathematicians are still studying today.

  Key work

  1927 Collected papers of Srinivasa Ramanujan

  See also: Cubic equations • Elliptic functions • Catalan’s conjecture • The prime number theorem

  IN CONTEXT

  KEY FIGURE

  Émile Borel (1871–1956)

  FIELD

  Probability

  BEFORE

  45 BCE The Roman philosopher Cicero argues that a random combination of atoms forming Earth is highly improbable.

  1843 Antoine Augustin Cournot makes a distinction between physical and practical certainty.

  AFTER

  1928 British physicist Arthur Eddington develops the idea that the sufficiently improbable is, in practice, impossible.

  2003 Scientists at Plymouth University in the UK test Borel’s theory with real monkeys and a computer keyboard.

  2011 American programmer Jesse Anderson’s software, simulating millions of virtual monkeys, generates the complete works of Shakespeare.

  In the early 1900s, French mathematician Émile Borel explored improbability—the behavior of events with only a vanishingly small chance of ever occurring. Borel concluded that events with a sufficiently small probability will never occur. He was not the first to study the probability of unlikely events. In the 4th century BCE, the ancient Greek philosopher Aristotle suggested in Metaphysics that Earth was created by atoms coming together entirely by chance. Three centuries later, the Roman philosopher Cicero argued that this was so unlikely that it was essentially impossible.

  Defining impossibility

  Over the past two millennia, various thinkers have probed the balance between the improbable and the impossible. In the 1760s, French mathematician Jean d’Alembert questioned whether it was possible to have a very long string of occurrences in a sequence in which occurrence and non-occurrence are equally likely—for example, whether a person flipping a coin might get “heads” two million times in a row. In 1843, French mathematician Antoine Augustin Cournot questioned the possibility of balancing a cone on its tip. He argued that it is possible but highly unlikely, and made the distinction between a physical certainty—an event that can happen physically, like the balancing cone—and a practical certainty, which is so unlikely that in practical terms it is considered impossible. In what is sometimes known as Cournot’s principle, Cournot suggested that an event with a very small probability will not happen.
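
D’Alembert’s two-million-heads example can be made concrete with a short calculation. The sketch below (in Python, chosen here purely for illustration) computes the order of magnitude of the probability, which is far too small to store as an ordinary floating-point number, so it works with the base-10 logarithm instead.

```python
import math

# Probability of flipping "heads" n times in a row with a fair coin is (1/2)^n.
# For d'Alembert's example of two million consecutive heads, the probability
# is too small to represent directly as a float, so we compute its
# base-10 logarithm instead.
n = 2_000_000
log10_p = n * math.log10(0.5)  # log10 of (1/2)^n

# The event has probability of roughly 10^(-602060): physically conceivable,
# but a "practical certainty" of non-occurrence in Cournot's sense.
print(f"P(2,000,000 heads) = 10^{log10_p:.0f}")
```

An event of this magnitude sits far beyond any threshold of practical impossibility, which is precisely the tension Cournot’s principle addresses.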

  The physically impossible event is therefore the one that has infinitely small probability, and only this remark gives substance… to the theory of mathematical probability.

  Antoine Augustin Cournot

  Infinite monkeys

  Borel’s law, which he called the law of single chance, gave a scale to practical certainty. For events on a human scale, Borel considered events with a probability of less than 10⁻⁶ (or 0.000001) to be impossible. He also came up with a famous example to illustrate impossibility: monkeys hitting typewriter keys at random will eventually type the complete works of Shakespeare. This outcome is highly improbable, but mathematically, over an infinite time (or with an infinite number of monkeys), it must happen. Borel noted that, while it cannot be mathematically proven that it is impossible for monkeys to type Shakespeare, it is so unlikely that mathematicians should consider it impossible. This idea of monkeys typing the works of Shakespeare captured people’s imagination and Borel’s law came to be known as the infinite monkey theorem.
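
A back-of-the-envelope sketch shows both halves of the argument (the six-letter target word and the 26-key typewriter are illustrative assumptions, not Borel’s own figures): a single attempt falls far below Borel’s human-scale threshold of 10⁻⁶, yet enough independent attempts make success all but certain.

```python
# Probability that a monkey types a given 6-letter word (e.g. "hamlet")
# in 6 random keystrokes on a 26-key typewriter. The word length and
# keyboard size are illustrative assumptions.
keys = 26
word_length = 6
p = (1 / keys) ** word_length

# A single attempt falls far below Borel's human-scale threshold of 10^-6,
# so on his scale it "never" succeeds...
assert p < 1e-6

# ...yet over many independent attempts the chance of at least one success
# approaches 1, which is the point of the infinite-monkey argument.
attempts = 10**10
p_at_least_once = 1 - (1 - p) ** attempts
print(p, p_at_least_once)
```

The same arithmetic, extended to the complete works of Shakespeare rather than one word, produces a probability so small that even Borel’s infinite resources become the only way to rescue the outcome.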

  Borel’s theory is often applied to stock markets, where the level of chaos means that in some cases random selection performs better than selection based on traditional economic theories.

  ÉMILE BOREL

  Born in 1871 in Saint-Affrique, France, Émile Borel was a mathematics prodigy and graduated top of his class from the École Normale Supérieure in 1893. After lecturing in Lille for four years, he returned to the École, where he dazzled fellow mathematicians with a series of brilliant papers.

  Borel is best known for his infinite monkey theorem, but his lasting achievement was in laying the foundations for the modern understanding of complex functions—functions whose variables and values are complex numbers. During World War I, Borel worked for the War Office and later became minister of the navy. Imprisoned when the Germans invaded France in World War II, he was released and fought for the Resistance, earning himself the Croix de Guerre. He died in 1956 in Paris.

  Key works

  1913 Le Hasard (Chance)

  1914 Principes et formules classiques du calcul des probabilités (Principles and classic formulas of probability)

  See also: Probability • The law of large numbers • Normal distribution • Laplace’s demon • Transfinite numbers

  IN CONTEXT

  KEY FIGURE

  Emmy Noether (1882–1935)

  FIELD

  Algebra

  BEFORE

  1843 German mathematician Ernst Kummer develops the concept of ideal numbers—ideals in the ring of integers.

  1871 Richard Dedekind builds on Kummer’s idea to formulate definitions of rings and ideals more generally.

  1890 David Hilbert refines the concept of the ring.

  AFTER

  1930 Dutch mathematician Bartel Leendert van der Waerden writes the first comprehensive treatment of abstract algebra.

  1958 British mathematician Alfred Goldie proves that Noetherian rings can be understood and analyzed in terms of simpler ring types.

  In the 1800s, analysis and geometry were the leading fields of mathematics, while algebra was considerably less popular. Throughout the Industrial Revolution, applied mathematics was prioritized over areas of study that were more theoretical. This all changed in the early 1900s with the rise of “abstract” algebra, which became one of the key fields of mathematics, largely thanks to the innovations of German mathematician Emmy Noether.

  Noether was not the first to focus on abstract algebra. Work on algebra theory had been developed by mathematicians such as Joseph-Louis Lagrange, Carl Friedrich Gauss, and British mathematician Arthur Cayley, but it gained traction when German mathematician Richard Dedekind began to study algebraic structures. He conceptualized the ring—a set of elements with two operations, such as addition and multiplication. Within a ring, certain subsets called “ideals” can be singled out. For example, the set of even integers is an ideal in the ring of integers: the sum of two even numbers is even, and the product of any integer with an even number is even.
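
The two defining conditions for an ideal, closure under addition and absorption under multiplication by any ring element, can be spot-checked on a finite sample of integers. The sketch below (Python, a finite check rather than a proof; the function name is illustrative) shows the even integers passing and the odd integers failing.

```python
# A finite spot-check (not a proof) of the two ideal conditions for a
# subset S of the ring of integers: S must be closed under addition, and
# multiplying any element of S by any integer must land back in S.
# (The full definition also requires 0 and additive inverses, which the
# even integers contain.)
def looks_like_ideal(in_subset, sample):
    closed_under_addition = all(
        in_subset(a + b)
        for a in sample for b in sample
        if in_subset(a) and in_subset(b)
    )
    absorbs_multiplication = all(
        in_subset(r * a)
        for r in sample for a in sample
        if in_subset(a)
    )
    return closed_under_addition and absorbs_multiplication

sample = range(-20, 21)
evens = lambda n: n % 2 == 0
odds = lambda n: n % 2 == 1

print(looks_like_ideal(evens, sample))  # even integers pass both tests
print(looks_like_ideal(odds, sample))   # odd integers fail: 1 + 1 = 2 is even
```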

  My methods are really methods of working and thinking; this is why they have crept in everywhere anonymously.

  Emmy Noether

  Significant works

  Noether began her work on abstract algebra shortly before World War I with her exploration of invariant theory, which explained how some algebraic expressions stay the same while other quantities change. In 1915, this work led her to make a major contribution to physics; she proved that each conservation law, such as the conservation of energy or of momentum, corresponds to a symmetry of nature. The conservation of angular momentum, for example, is related to rotational symmetry. Now called Noether’s theorem, it was praised by Einstein for the way it addressed his theory of general relativity.

  In the early 1920s, Noether’s work focused on rings and ideals. In a key paper in 1921, Idealtheorie in Ringbereichen (Ideal Theory in Rings), she studied ideals in a particular set of “commutative rings,” in which the numbers can be swapped around when they are multiplied without affecting the result. In a 1924 paper, she proved that in these commutative rings, every ideal is the unique product of prime ideals. One of the most brilliant mathematicians of her time, Noether laid the foundations for the development of the entire field of abstract algebra with her contributions to ring theory.
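
The ring of integers gives a concrete instance of this unique factorization: every ideal there is generated by a single number n, and factoring the ideal (n) into prime ideals mirrors the ordinary prime factorization of n. A minimal Python sketch of the numerical side (trial division, written here for illustration):

```python
# In the ring of integers, every ideal is generated by a single number n,
# and unique factorization of the ideal (n) into prime ideals mirrors
# ordinary prime factorization of n. A minimal trial-division sketch:
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

# The ideal (12) factors as (2)(2)(3), matching 12 = 2 * 2 * 3.
print(prime_factors(12))  # [2, 2, 3]
```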

  EMMY NOETHER

  Born in 1882, Emmy Noether struggled to find education, recognition, and even basic employment in early 20th-century academia as a Jewish woman in Germany. Although her mathematical skill won her a position at the University of Erlangen—where her father also taught mathematics—from 1908 to 1923 she received no pay. She later faced similar discrimination in Göttingen, where her colleagues had to fight to have her officially included in the faculty. In 1933, the rise of the Nazis led to her dismissal, and she moved to the US, working at Bryn Mawr College and at the Institute for Advanced Study until her death in 1935.

  Key works

  1921 Idealtheorie in Ringbereichen (Ideal Theory in Rings)

  1924 Abstrakter Aufbau der Idealtheorie im algebraischen Zahlkörper (Abstract Construction of Ideal Theory in Algebraic Fields)

  See also: Algebra • The binomial theorem • The algebraic resolution of equations • The fundamental theorem of algebra • Group theory • Matrices • Topology

  IN CONTEXT

  KEY FIGURES

  André Weil (1906–1998), Henri Cartan (1904–2008)

  FIELDS

  Number theory, algebra

  BEFORE

  1637 René Descartes creates coordinate geometry, allowing points on a flat surface to be described.

  1874 Georg Cantor creates set theory, describing how sets and their subsets interrelate.

  1895 Henri Poincaré lays the foundations of algebraic topology in Analysis Situs (Analysis of Position).

  AFTER

  1960s The New Mathematics movement, which focuses on set theory, becomes popular in American and European schools.

  1995 Andrew Wiles publishes his final proof of Fermat’s last theorem.

  Russian mathematical genius Nicolas Bourbaki was one of the most prolific and influential mathematicians of the 1900s. His monumental work Éléments de Mathématique (Elements of Mathematics, 1960) occupies a key place in university libraries, and countless students of mathematics have learned the tools of their trade from it.

  Bourbaki, however, never existed. He was a fiction created in the 1930s by young French mathematicians who were striving to fill the vacuum left by the devastation of World War I. While other countries had kept academics at home, French mathematicians had joined their countrymen in the trenches and a generation of teachers had been killed. French mathematics was stuck with antiquated textbooks and teachers.

  Renewing mathematics

  Some young teachers believed that French mathematics had fallen victim to a lack of rigor and precision. They were distrustful of the creative guesswork, as they saw it, of older mathematicians such as Henri Poincaré in developing chaos theory and mathematics for physics.

  In 1934, two young lecturers at the University of Strasbourg, André Weil and Henri Cartan, took matters into their own hands. They invited six fellow former students from the École Normale Supérieure to lunch in Paris, hoping to persuade them to take part in an ambitious project to write a new treatise that would revolutionize mathematics.

  The group—which included Claude Chevalley, Jean Delsarte, Jean Dieudonné, and René de Possel—agreed to create a new body of work that covered all fields of mathematics. Meeting regularly and marshaled by Dieudonné, the group produced book after book, led by Éléments de Mathématique. Their work was likely to be controversial, so they adopted the pseudonym Nicolas Bourbaki.

  The group aimed to strip mathematics back to basics and provide a foundation from which it could go forward. While their work sparked a brief fad in the 1960s, it proved too radical for teachers and pupils alike. The group was often at odds with cutting-edge mathematics and physics, and was so focused on pure math that applied math was of little interest to them. Topics containing uncertainty, such as probability, had no place in Bourbaki’s work.

  Even so, the group made important contributions across a wide range of mathematical topics, particularly in set theory and algebraic geometry. The group, which acts in secrecy and whose members must resign at age 50, still exists, although Bourbaki now publishes infrequently. The most recent two volumes were published in 1998 and 2012.

  The Bourbaki group poses for a photo at the first Bourbaki congress in July 1935. Among them are Henri Cartan (standing far left) and André Weil (standing fourth from left).

  Bourbaki’s legacy

  Topology and set theory—the meeting between numbers and shapes—were for Bourbaki at the very root of mathematics and lay at the heart of the group’s work. René Descartes had first made the link between shapes and numbers in the 1600s with coordinate geometry, turning geometry into algebra. Bourbaki helped make the link the other way, turning algebra into geometry to create algebraic geometry, which is perhaps their lasting legacy. It was at least partly Bourbaki’s work on algebraic geometry that led British mathematician Andrew Wiles to finally prove Fermat’s last theorem; he published his proof in 1995.

  Some mathematicians believe algebraic geometry has great untapped potential for the future. It already has real-world applications such as in programming codes in cell phones and smart cards.

  See also: Coordinates • Topology • The butterfly effect • Proving Fermat’s last theorem • Proving the Poincaré conjecture

  IN CONTEXT

  KEY FIGURE

  Alan Turing (1912–1954)

  FIELD

  Computer science

  BEFORE

  1837 In the UK, Charles Babbage designs the Analytical Engine, a mechanical computer using the decimal system. If it had been constructed, it would have been the first “Turing-complete” device.

  AFTER

  1937 Claude Shannon designs electrical switching circuits that use Boolean algebra to make digital circuits that follow rules of logic.

  1971 American mathematician Stephen Cook poses the P versus NP problem, which asks why the solutions to some mathematical problems can be verified quickly but would take billions of years to find, despite computers’ immense calculating power.

  If a machine is expected to be infallible, it cannot also be intelligent.

  Alan Turing

  Alan Turing is often cited as the “father of digital computing,” yet the Turing machine that earned him that accolade was not a physical device but a hypothetical one. Instead of constructing a prototype computer, Turing used a thought experiment in order to solve the Entscheidungsproblem (decision problem) that had been posed by German mathematician David Hilbert in 1928. Hilbert was interested in whether logic could be made more rigorous by being simplified into a set of rules, or axioms, just as arithmetic, geometry, and other fields of mathematics were then thought to be reducible to axioms. Hilbert wanted to know if there was a way to predetermine whether an algorithm—a method for solving a specific mathematical problem using a given set of instructions in a given order—would arrive at a solution to the problem.

  In 1931, Austrian mathematician Kurt Gödel demonstrated that mathematics based on formal axioms could not prove everything that was true according to those axioms. This result, now known as the incompleteness theorem, revealed a mismatch between mathematical truth and mathematical proof.

  Ancient roots

  Algorithms have ancient origins. One of the earliest examples is the method used by the Greek geometer Euclid to calculate the greatest common divisor of two numbers—the largest number that divides both of them without leaving a remainder. Another early example is Eratosthenes’ sieve, attributed to the 3rd-century BCE Greek mathematician. It is an algorithm for sorting primes from composite (not prime) numbers. The algorithms of Eratosthenes and Euclid work perfectly and can be proven always to do so, but in Turing’s time there was no formal definition of what an algorithm was. It was the need for such a definition that led Turing to create his “virtual machine.”
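
Both ancient algorithms are easily stated in modern code. The sketch below (Python, with illustrative function names) implements Euclid’s repeated-remainder method and Eratosthenes’ crossing-out procedure.

```python
# Euclid's algorithm: repeatedly replace the pair (a, b) with (b, a mod b)
# until the remainder is zero; the surviving number is the greatest
# common divisor.
def euclid_gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

# Sieve of Eratosthenes: cross out every multiple of each prime in turn;
# the numbers never crossed out are exactly the primes up to the limit.
def eratosthenes(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(euclid_gcd(48, 18))  # 6
print(eratosthenes(30))    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Both procedures terminate on every input and can be proven correct, which is exactly the sense in which the text says they "work perfectly."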

  In 1937, Turing published his first paper as a fellow of King’s College, Cambridge, “On Computable Numbers, with an Application to the Entscheidungsproblem.” It showed that there is no solution to Hilbert’s decision problem: some algorithms are not computable, but there is no universal mechanism for identifying them before trying them.

  Turing reached this conclusion using his hypothetical machine, which came in two parts. First there was a tape, as long as it needed to be, divided into sections, each section carrying a coded character. This character could be anything, but the simplest version used 1s and 0s. The second part was the machine itself, which read the data from each section of the tape (either by the head or tape moving). The machine would be equipped with a set of instructions (an algorithm) that controlled the behavior of the machine. The machine (or tape) could move left, right, or stay where it was, and it could rewrite the data on the tape, switching a 0 to 1 or vice versa. Such a machine could carry out any conceivable algorithm.
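
A machine of this kind is straightforward to simulate. The sketch below (Python; the rule table, which simply inverts each bit and halts at the first blank, is an illustrative assumption rather than one of Turing’s own machines) shows the tape, the head, and the instruction table working together.

```python
# Minimal simulator of the machine described above: a tape of symbols, a
# read/write head, and a rule table mapping (state, symbol) to
# (symbol to write, head move, next state).
def run_turing(tape, rules, state="start", blank=" "):
    cells = dict(enumerate(tape))  # sparse tape, extendable in both directions
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip()

# Illustrative rule table: flip every bit, moving right, and halt at the
# first blank cell.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "N", "halt"),
}

print(run_turing("1011", invert))  # 0100
```

Because the rule table is just data, any conceivable algorithm can be encoded this way, which is what gives the hypothetical machine its generality.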

  Turing was interested in whether any algorithm put into the machine would cause the machine to halt. Halting would signify that the algorithm had arrived at a solution. The question was whether there was a way of knowing which algorithms (or virtual machines) would halt and which would not; if Turing could find out, he would answer the decision problem.

 
