The Math Book


by DK


  Since quaternions can model and control the motion of objects in three dimensions, they are particularly useful in virtual reality games.

  Four dimensions

  Hamilton’s solution was to add a fourth nonreal unit, k. This created a quaternion, with a basic structure of a + bi + cj + dk, where a, b, c, and d are real numbers. The two additional quaternion units, j and k, are imaginary and share properties similar to those of i. A quaternion can define a vector, or a line in three-dimensional space, and can describe an angle and direction of rotation around that vector. Just as complex numbers capture movements in the plane, simple quaternion mathematics, combined with basic trigonometry, offers a way to describe all kinds of movements within three-dimensional space.
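  As an illustration (a minimal Python sketch, not part of the original text), this arithmetic can be made concrete. Quaternions multiply according to Hamilton’s rules i² = j² = k² = ijk = −1, and a unit quaternion q rotates a point p, treated as the quaternion 0 + xi + yj + zk, via the product qpq⁻¹. The function names and sample point below are illustrative choices.

    import math

    def qmul(q, r):
        # Quaternion product, using Hamilton's rules i^2 = j^2 = k^2 = ijk = -1.
        a1, b1, c1, d1 = q
        a2, b2, c2, d2 = r
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    def rotate(point, axis, angle):
        # The unit quaternion q = cos(angle/2) + sin(angle/2)(xi + yj + zk)
        # rotates the point, written as 0 + xi + yj + zk, via q * p * q^-1.
        s, c = math.sin(angle / 2), math.cos(angle / 2)
        n = math.sqrt(sum(u * u for u in axis))
        q = (c, *(s * u / n for u in axis))
        q_inv = (q[0], -q[1], -q[2], -q[3])   # conjugate = inverse for a unit quaternion
        return qmul(qmul(q, (0, *point)), q_inv)[1:]

    # A quarter turn about the z-axis sends (1, 0, 0) to approximately (0, 1, 0).
    print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))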

  An undercurrent of thought was going on in my mind which gave at last a result… An electric circuit seemed to close; and a spark flashed forth, the herald of many long years.

  William Rowan Hamilton

  WILLIAM ROWAN HAMILTON

  Born in Dublin in 1805, Hamilton became interested in mathematics from the age of eight after meeting Zerah Colburn, a touring American mathematical child prodigy. At the age of 22, while still studying at Trinity College, Dublin, he was appointed both professor of astronomy at the university and Royal Astronomer of Ireland.

  Hamilton’s expertise in Newtonian mechanics enabled him to calculate the paths of heavenly bodies. He later updated Newtonian mechanics into a system that enabled further advances to be made in electromagnetism and quantum mechanics. In 1856, he tried to capitalize on his skills by launching the icosian game, in which players search for a path connecting the points of a dodecahedron without returning to the same point twice. Hamilton sold the rights to the game for £25. He died in 1865, following a severe attack of gout.

  Key works

  1853 Lectures on Quaternions

  1866 Elements of Quaternions

  See also: Imaginary and complex numbers • Coordinates • Newton’s laws of motion • The complex plane

  IN CONTEXT

  KEY FIGURE

  Eugène Catalan (1814–94)

  FIELD

  Number theory

  BEFORE

  c. 1320 French philosopher and mathematician Levi ben Gershon (Gersonides) shows that the only powers of 2 and 3 that differ by 1 are 8 = 2³ and 9 = 3².

  1738 Leonhard Euler proves that 8 and 9 are the only consecutive square or cube numbers.

  AFTER

  1976 Dutch number theorist Robert Tijdeman proves that, if more consecutive powers exist, there are only a finite number of them.

  2002 Preda Mihăilescu proves Catalan’s conjecture, 158 years after it was formulated in 1844.

  Many problems in number theory are easy to pose, but extremely difficult to prove. Fermat’s last theorem, for example, remained a conjecture (unproven claim) for 357 years. Like Fermat’s conjecture, Catalan’s conjecture is a deceptively simple claim about powers of positive integers that was proved long after its initial statement.

  In 1844, Eugène Catalan claimed that there is only one solution to the equation xᵐ − yⁿ = 1, where x, y, m, and n are natural numbers (positive integers) and m and n are greater than 1. The solution is x = 3, m = 2, y = 2, and n = 3, since 3² − 2³ = 1. In other words, squares, cubes, and higher powers of natural numbers are almost never consecutive. Five hundred years before, Gersonides had proved a special case of the claim. He used only powers of 2 and 3, solving the equations 3ⁿ − 2ᵐ = 1 and 2ᵐ − 3ⁿ = 1. In 1738, Leonhard Euler similarly proved a case in which the only powers allowed were squares and cubes. Euler did this by solving the equation x² − y³ = 1. This was closer to Catalan’s conjecture, but did not allow for the possibility that larger powers or exponents could result in consecutive numbers.
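  The claim is easy to probe numerically. The short Python sketch below (an illustration, with arbitrary search bounds) lists every pair of perfect powers below one million that differ by exactly 1, and finds only 3² and 2³.

    # Collect all perfect powers x^m below one million, with x, m >= 2.
    powers = {x**m: (x, m) for x in range(2, 1000) for m in range(2, 21) if x**m < 10**6}
    for p in sorted(powers):
        if p + 1 in powers:
            (x, m), (y, n) = powers[p + 1], powers[p]
            print(f"{x}^{m} - {y}^{n} = 1")   # prints only: 3^2 - 2^3 = 1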

  Becoming a theorem

  Catalan himself said that he could not prove his conjecture completely. Other mathematicians tackled the problem, but it was only in 2002 that Romanian mathematician Preda Mihăilescu solved the outstanding issues and turned conjecture into theorem.

  It might seem that Catalan’s conjecture must be false, since simple calculations quickly yield examples of powers that are almost consecutive. For example, 3³ − 5² = 2, and 2⁷ − 5³ = 3. On the other hand, even these near-solutions are rare. One approach to proving the conjecture appeared to involve making many calculations: in 1976, Robert Tijdeman found an upper bound (maximum size) for x, y, m, and n. This proved that there is only a finite number of powers that can be consecutive. The truth of Catalan’s conjecture could now be tested by checking each of these powers. Unfortunately, Tijdeman’s upper bound is astronomically large, making such computation practically unfeasible even for modern computers.

  Mihăilescu’s proof of Catalan’s conjecture does not involve any such computation. Mihăilescu built on 20th-century advances (by Ke Zhao, J. W. S. Cassels, and others) that had proved m and n must be odd primes for any further solutions of xᵐ − yⁿ = 1. His proof is not as formidable as Andrew Wiles’s proof of Fermat’s last theorem, but it is still highly technical.

  If squared and cubed numbers are lined up in order of their values, the difference between each value becomes clear. The difference between 2³ and 3² is 1, and Catalan’s conjecture states that this is the only pair of squares, cubes, or higher powers that differ by 1.

  EUGÈNE CATALAN

  Born in Bruges, Belgium, in 1814, Eugène Catalan studied under French mathematician Joseph Liouville at the École Polytechnique in Paris. Catalan was a republican from an early age and a participant in the 1848 revolution. His political beliefs led to his expulsion from a number of academic posts.

  Catalan was particularly interested in geometry and combinatorics (counting and arranging), and his name is associated with the Catalan numbers. This sequence (1, 2, 5, 14, 42…) counts, among other things, the ways that polygons can be divided into triangles.

  Although he considered himself French, Catalan won recognition in Belgium, where he lived from his appointment as professor of analysis at the University of Liège in 1865 until his death in 1894.

  Key works

  1860 Traité élémentaire des séries (Elementary Treatise on Series)

  1890 Intégrales eulériennes ou elliptiques (Eulerian or Elliptic Integrals)

  See also: Pythagoras • Diophantine equations • The Goldbach conjecture • Taxicab numbers • Proving Fermat’s last theorem

  IN CONTEXT

  KEY FIGURE

  James Joseph Sylvester (1814–97)

  FIELDS

  Algebra, number theory

  BEFORE

  200 BCE The ancient Chinese text The Nine Chapters on the Mathematical Art presents a method for solving equations using matrices.

  1545 Gerolamo Cardano publishes techniques using determinants.

  1801 Carl Friedrich Gauss uses a matrix of six simultaneous equations to compute the orbit of the asteroid Pallas.

  AFTER

  1858 Arthur Cayley formally defines matrix algebra, and proves results for 2 × 2 and 3 × 3 matrices.

  Matrices are rectangular arrays (grids) of elements (numbers or algebraic expressions), arranged in rows and columns enclosed by square brackets. The rows and columns can be extended indefinitely, which enables matrices to store vast amounts of data in an elegant and compact manner. Although a matrix contains many elements, it is treated as one unit. Matrices have applications in mathematics, physics, and computer science, from generating computer graphics to describing the flow of a fluid.

  The earliest known evidence for such arrays comes from the ancient Mayan civilization of Central America, c. 2600 BCE. Some historians believe the Maya people manipulated numbers in rows and columns to solve equations, and cite gridlike decorations on their monuments and priestly robes as evidence. Others, however, doubt these patterns represent actual matrices.

  The first verified instance of the use of matrices comes from ancient China. In the second century BCE, the textbook The Nine Chapters on the Mathematical Art described how to set out a counting board and use a matrixlike method to solve linear simultaneous equations with several unknown values. This method was similar to the elimination system introduced by German mathematician Carl Friedrich Gauss in the 1800s, which is still used today for solving simultaneous equations.

  The dimensions of a matrix are important, as operations such as addition and subtraction require the matrices involved to have the same dimensions. A 2 × 2 matrix is an example of a square matrix, meaning that it has the same number of rows as it has columns. Two matrices of the same dimensions are added together by adding the elements in corresponding positions.

  Matrix arithmetic

  In 1850, British mathematician James Joseph Sylvester first used the term “matrix” to describe an array of numbers. Shortly after Sylvester introduced the term, his friend and colleague Arthur Cayley formalized the rules for manipulating matrices. Cayley showed that the rules of matrix algebra differ from those of standard algebra. Two matrices of the same size (with the same number of elements in their respective rows and columns) are added by simply adding corresponding elements; matrices with different dimensions cannot be added. Matrix multiplication, however, is quite different from multiplication of numbers. Not all matrices can be multiplied together: the product AB can only be calculated if the row count of B is the same as the column count of A. Matrix multiplication is also noncommutative, meaning that even where both A and B are square matrices, AB is generally not equal to BA.
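  These rules are easy to state in code. The minimal Python sketch below (an illustration with arbitrary 2 × 2 matrices) adds two matrices element by element, multiplies them row by column, and shows that reversing the order of multiplication changes the answer.

    A = [[1, 2],
         [3, 4]]
    B = [[0, 1],
         [5, 2]]

    # Addition: add the elements in corresponding positions.
    print([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])   # [[1, 3], [8, 6]]

    # Multiplication: each row of the first matrix times each column of the second.
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    print(matmul(A, B))   # [[10, 5], [20, 11]]
    print(matmul(B, A))   # [[3, 4], [11, 18]]; a different answer, so AB != BA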

  The arrays found in Mayan relics suggest to some historians that the Maya used matrices to solve linear equations. However, others believe they were merely replicating patterns in nature, such as on a turtle’s shell.

  Square matrices

  Because they have as many rows as columns, square matrices have particular properties. For example, a square matrix can be repeatedly multiplied by itself. A square matrix of size n × n with the value 1 along the main diagonal, starting top left, and the value 0 everywhere else is called the identity matrix (Iₙ).
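  As a quick illustration (again a sketch, with an arbitrary example matrix), multiplying by the identity leaves a matrix unchanged:

    I2 = [[1, 0],
          [0, 1]]
    A = [[1, 2],
         [3, 4]]
    # Row-by-column product of A with the 2 x 2 identity matrix.
    AI = [[sum(A[i][k] * I2[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    print(AI == A)   # True: multiplying by the identity changes nothing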

  Every square matrix has an associated value called its determinant, which encodes many of the matrix’s properties and can be computed by arithmetic operations on the matrix’s elements. Square matrices of a given size whose elements are complex numbers, and whose determinants are not zero, form under multiplication an algebraic structure called a group. Theorems that are true for groups are therefore also true for such matrices, and advances in group theory can be applied to matrices. Groups can also be represented as matrices, enabling difficult problems in group theory to be expressed in terms of matrix algebra, which is often easier to work with. Representation theory, as this field is known, is applied in number theory and analysis, and in physics.

  Two matrices are multiplied by multiplying the entries in each row of the first matrix by the entries in each column of the second and adding the results. In matrix algebra, switching the order in which the two matrices are multiplied generally produces a different result, even for two square matrices (A and B).

  Determinants

  The determinant of a matrix was named by Gauss, because it determines whether the system of equations represented by the matrix has a solution. As long as the determinant is not zero, the system will have a unique solution. If the determinant is zero, the system has either no solution or infinitely many.
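  For a 2 × 2 system the determinant test can be written out directly. The sketch below is illustrative; its solution formula is the 2 × 2 case of Cramer’s rule, described next. It solves ax + by = e, cx + dy = f whenever the determinant ad − bc is nonzero.

    def solve2(a, b, c, d, e, f):
        # Solve ax + by = e, cx + dy = f using the determinant ad - bc.
        det = a * d - b * c
        if det == 0:
            return None                    # no solution, or infinitely many
        return ((e * d - b * f) / det, (a * f - e * c) / det)

    print(solve2(1, 1, 1, -1, 3, 1))   # x + y = 3, x - y = 1  ->  (2.0, 1.0)
    print(solve2(1, 1, 2, 2, 3, 7))    # parallel lines: det = 0  ->  None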

  In the 1600s, Japanese mathematician Seki Takakazu had shown how to calculate the determinants of matrices up to size 5 × 5. Over the following century, mathematicians uncovered the rules for finding determinants of larger and larger arrays. In 1750, Swiss mathematician Gabriel Cramer stated a general rule (now called Cramer’s rule) for using determinants to solve a system of n linear equations in n unknowns, but he failed to give a proof of this rule.

  In 1812, French mathematicians Augustin-Louis Cauchy and Jacques Binet proved that when two square matrices of the same size are multiplied, the determinant of this product is, in fact, the same as the product of their individual determinants: det(AB) = (det A)(det B). This rule simplifies the process of finding the determinant of a matrix that can be written as a product, by breaking the calculation down into the determinants of the two factors.
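  A numerical spot-check (an illustration using arbitrary 2 × 2 matrices) confirms the product rule:

    def det2(M):
        # Determinant of a 2 x 2 matrix.
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]

    A = [[1, 2], [3, 4]]
    B = [[0, 1], [5, 2]]
    AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    print(det2(AB), det2(A) * det2(B))   # 10 10: det(AB) = det(A)det(B)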

  A linear transformation in two dimensions maps lines through the origin to other lines through the origin, and parallel lines to parallel lines. Linear transformations include rotations, reflections, enlargements, stretches, and shears (in which points slide parallel to a fixed line, in proportion to their distance from that line). The image of any point (x, y) is found by multiplying the matrix by the column vector representing the point (x, y). For example, a transformation matrix might map a square with vertices (0, 0), (2, 0), (2, 2), and (0, 2) to a new quadrilateral.

  Transformation matrices

  Matrices can be used to represent linear geometric transformations such as reflections, rotations, stretches, and scalings; translations are not linear, but can be handled by matrices with one extra row and column (homogeneous coordinates). Transformations in two dimensions are encoded by 2 × 2 matrices, while 3-D transformations involve 3 × 3 matrices. The determinant of a transformation matrix contains information about the area or volume of the transformed figure. Today, computer-aided design (CAD) software makes extensive use of matrices for this purpose.
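  A minimal Python sketch (with illustrative values; a quarter-turn rotation is used here) shows a transformation matrix acting on the square’s vertices, and the determinant reporting the area scale factor:

    import math

    theta = math.pi / 2                       # rotate 90 degrees counterclockwise
    R = [[math.cos(theta), -math.sin(theta)],
         [math.sin(theta),  math.cos(theta)]]

    def apply(M, p):
        # Multiply the matrix M by the column vector for the point p.
        x, y = p
        return (M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y)

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print([apply(R, p) for p in square])           # the rotated square (up to rounding)
    print(R[0][0] * R[1][1] - R[0][1] * R[1][0])   # determinant 1.0: area is unchanged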

  Modern applications

  Matrices can store vast amounts of data compactly, making them essential across math, physics, and computing. Graph theory uses matrices to encode how a set of vertices (points) is connected by edges (lines). One formulation of quantum physics, called matrix mechanics, makes extensive use of matrix algebra, and particle physicists and cosmologists use transformation matrices and group theory to study the symmetries of the Universe.
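  For instance, here is a sketch (with an illustrative three-vertex graph) of how a matrix encodes connections: entry [i][j] of the adjacency matrix is 1 when an edge joins vertices i and j, and powers of the matrix count the walks between them.

    # Adjacency matrix of a triangle graph on vertices 0, 1, 2.
    A = [[0, 1, 1],
         [1, 0, 1],
         [1, 1, 0]]
    # Entry [i][j] of A squared counts the two-step walks from i to j.
    A2 = [[sum(A[i][k] * A[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    print(A2)   # [[2, 1, 1], [1, 2, 1], [1, 1, 2]]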

  Matrices are used to represent electrical circuits for solving problems about voltage and current. They are also important in computer science and cryptography. Stochastic matrices, whose elements represent probabilities, are used by search engine algorithms for ranking web pages. Programmers use matrices as keys when encrypting messages; letters are assigned individual numerical values, which are then multiplied by the numbers in the matrix. The larger the matrix used, the more secure the encryption is.
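  The letter-encryption scheme described above is essentially the Hill cipher. A minimal sketch (with an illustrative 2 × 2 key and the assignment A = 0 through Z = 25) multiplies pairs of letter values by the key matrix and reduces the results mod 26:

    KEY = [[3, 3],
           [2, 5]]   # illustrative key: its determinant, 9, is coprime to 26,
                     # so the matrix is invertible mod 26 and messages can be decrypted

    def encrypt(text):
        # The message length must be even, so that the letters pair up.
        nums = [ord(ch) - ord('A') for ch in text]
        out = []
        for i in range(0, len(nums), 2):
            x, y = nums[i], nums[i + 1]
            out.append((KEY[0][0] * x + KEY[0][1] * y) % 26)
            out.append((KEY[1][0] * x + KEY[1][1] * y) % 26)
        return ''.join(chr(n + ord('A')) for n in out)

    print(encrypt("HELP"))   # 'HELP' -> 'HIAT'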

  I have not thought it necessary to undertake the labor of a formal proof of the theorem in the general case of a matrix of any degree.

  Arthur Cayley

  JAMES JOSEPH SYLVESTER

  Born in 1814, James Joseph Sylvester began his studies at University College London, but left when he was accused by another student of wielding a knife. He then went to Cambridge and came second in the university examinations, but was not allowed to graduate because, as a Jew, he would not swear allegiance to the Church of England.

  Sylvester taught briefly in the US, but faced similar difficulties there. Returning to London, he studied law and was admitted to the bar in 1850. He also began to work on matrices with fellow British mathematician Arthur Cayley. In 1876, Sylvester returned to the US as a math professor at Johns Hopkins University, Maryland, where he founded the American Journal of Mathematics. Sylvester died in London in 1897.

  Key works

  1850 On a New Class of Theorems

  1852 On the principle of the calculus of forms

  1876 Treatise on elliptic functions

  See also: Algebra • Coordinates • Probability • Graph theory • Group theory • Cryptography

  IN CONTEXT

  KEY FIGURE

  George Boole (1815–64)

  FIELD

  Logic

  BEFORE

  350 BCE Aristotle’s philosophy discusses syllogisms.

  1697 Gottfried Leibniz tries, unsuccessfully, to use algebra to formalize logic.

  AFTER

  1881 John Venn introduces Venn diagrams to explain Boolean logic.

  1893 Charles Sanders Peirce uses truth tables to show outcomes of Boolean algebra.

  1937 Claude Shannon uses Boolean logic as the basis for computer design in his A Symbolic Analysis of Relay and Switching Circuits.

  Mathematics had never more than a secondary interest for him, and even logic he cared for chiefly as a means of clearing the ground.

  Mary Everest Boole

  British mathematician and wife of George Boole

  Logic is the bedrock of mathematics. It provides us with the rules of reasoning and gives us a basis for deciding on the validity of an argument or proposition. A mathematical argument uses the rules of logic to ensure that if a basic proposition is true, then any and all statements constructed from that proposition will also be true.

  The earliest attempt to set out the principles of logic was carried out by the Greek philosopher Aristotle around 350 BCE. His analysis of the various forms of arguments marked the beginning of logic as a subject for study in its own right. In particular, Aristotle looked at a type of argument known as a syllogism, consisting of three propositions. The first two propositions, called the premises, logically entail the third proposition, the conclusion. Aristotle’s ideas about logic were unrivaled and unchallenged in Western thought for more than 2,000 years.

  Aristotle approached logic as a branch of philosophy, but in the 1800s, scholars began to study logic as a mathematical discipline. This involved moving from arguments expressed in words to a symbolic logic where arguments could be expressed using abstract symbols. One of the pioneers of this shift to mathematical logic was British mathematician George Boole, who sought to apply methods from the emerging field of symbolic algebra to logic.
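  In symbolic form, Boole’s two truth values and their combinations can be tabulated directly. The Python sketch below (an illustration using the language’s built-in and, or, and not operators) prints the kind of truth table that Peirce later used to show the outcomes of Boolean algebra:

    import itertools

    print("p      q      | p AND q  p OR q  NOT p")
    for p, q in itertools.product([False, True], repeat=2):
        print(f"{str(p):6} {str(q):6} | {str(p and q):8} {str(p or q):7} {not p}")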

 
