

  Any combination of shapes in a plane, however complex the pattern, can be colored in using just four colors so that no two adjacent shapes have the same color.

  New hope

  The introduction of supercomputers, computers capable of handling huge amounts of data, in the 1970s revived interest in solving the four-color theorem. Although German mathematician Heinrich Heesch suggested a method for doing this, he did not have sufficient access to a supercomputer to test it. Wolfgang Haken, a former student of Heesch’s, became interested in the problem, and began to make progress after meeting computer programmer Kenneth Appel at the University of Illinois. The pair finally cracked the problem in 1977. Relying completely on computing power—the first proof in the history of mathematics to do so—they examined around 2,000 cases, involving billions of calculations and using 1,200 hours of computer time.

  Computer proofs

  The IBM System/370 computer c. 1970 was one of the first computers to use virtual memory, a working memory system that allowed it to process large amounts of data.

  When Appel and Haken proved the four-color theorem in 1977, it was the first time that a computer had been used to prove a mathematical theorem. This was controversial among mathematicians, who were used to solving problems through logic that could be checked by their peers. Appel and Haken had used the computer to carry out a proof by exhaustion—all possibilities were meticulously checked one by one, a feat that would have been impossible to do manually. The question was whether a long calculation that could not be checked by humans, followed by a simple verdict of “yes, the theorem has been proved,” could be accepted. Many mathematicians argued that it could not. Proof by computers remains controversial, but advances in technology have increased confidence in their reliability.
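
  The idea of proof by exhaustion can be illustrated on a much smaller scale than Appel and Haken’s calculation. The sketch below is a toy illustration (not their actual procedure): it checks every possible assignment of four colors to a small, hypothetical map and confirms that valid colorings exist.

```python
# Toy illustration of proof by exhaustion (not the Appel-Haken procedure):
# try every assignment of four colors to a small hypothetical map and
# confirm that at least one gives no two adjacent regions the same color.
from itertools import product

adjacent = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]  # shared borders
regions = 5

valid = [
    coloring
    for coloring in product(range(4), repeat=regions)  # 4^5 = 1,024 cases to check
    if all(coloring[a] != coloring[b] for a, b in adjacent)
]
print(f"{len(valid)} of 1,024 colorings work, e.g. {valid[0]}")
```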

  See also: Euler’s number • Graph theory • The complex plane • Proving Fermat’s last theorem

  IN CONTEXT

  KEY FIGURES

  Ron Rivest (1947–), Adi Shamir (1952–), Leonard Adleman (1945–)

  FIELD

  Computer science

  BEFORE

  9th century CE Al-Kindi develops frequency analysis.

  1640 Pierre de Fermat states his “little theorem” (on primality), which is still used as a test when searching for primes to use in public key encryption.

  AFTER

  2004 Elliptic curves, first proposed for cryptography in the 1980s, come into widespread use; they require smaller keys but offer the same security as the RSA algorithm.

  2009 An anonymous computer scientist mines the first Bitcoin, a cryptocurrency without a central bank. All transactions are secured by cryptography and publicly recorded.

  Cryptography is the development of means of secret communication. It has become a ubiquitous feature of modern life, with almost every connection between one digital device and another starting with a “handshake,” in which the devices agree on a way of securing their connection. That handshake is often the result of the work of three mathematicians: Ron Rivest, Adi Shamir, and Leonard Adleman. In 1977, they developed the RSA algorithm (named for their initials), an encryption procedure that won them the Turing Award in 2002. The RSA algorithm is special because it ensures that any third party monitoring the connection will be completely unable to figure out any private details.

  One main reason people have needed to encrypt communications is to ensure financial transactions can happen without banking information falling into the wrong hands. However, encryption is used against all kinds of third-party “adversary”—a rival company, an enemy power, or a security service. Cryptography is an ancient practice. Mesopotamian clay tablets from c. 1500 BCE were often encrypted to protect recipes for pottery glazes and other such commercially valuable information.

  The work did not really need mathematics, but mathematicians tended to be good at it.

  Joan Clarke

  British cryptanalyst

  Cipher and key

  The term “cryptography” comes from the Greek for “hidden writing study.” For much of history it was used to secure written messages. The unencrypted message is known as the plaintext, while the encrypted version is called the ciphertext. For example, “HELLO” might become “IFMMP.” Going from plaintext to this ciphertext requires a cipher and a key. A cipher is an algorithm (a systematic, repeatable method)—in this case, to substitute each letter with one in another position in the alphabet. The key is +1, because each of the letters in plaintext is substituted with the letter +1 along in the alphabet. If the key were −6, then the cipher would turn the same plaintext “HELLO” into “BYFFI.” This simple substitution system is known as the Caesar cipher (or Caesar shift) after the Roman dictator Julius Caesar, who used it in the 1st century BCE. The Caesar cipher is an example of symmetric encryption, as the same cipher and key are used (in reverse) to decipher the message.
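
  A shift cipher like this is simple to express in code. The short sketch below is an illustrative Python version (not from any historical source); it applies a given key to a plaintext and reproduces the examples above.

```python
# A minimal sketch of the Caesar cipher: shift each letter by the key,
# wrapping around the alphabet; a negative key shifts backward.
def caesar(text, key):
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return "".join(result)

print(caesar("HELLO", 1))    # IFMMP
print(caesar("HELLO", -6))   # BYFFI
print(caesar("IFMMP", -1))   # HELLO -- applying the key in reverse decrypts
```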

  Cipher wheels, such as this British example from 1802, sped up the decryption of Caesar ciphers. Once the key was uncovered, the two individual wheels could be set accordingly.

  Deciphering processes

  Given enough paper and time, it is relatively easy to figure out a Caesar cipher by trying out every possible substitution. In modern terms this is known as a “brute force” technique. More complex ciphers and keys make brute force more time-consuming—and, before computers, effectively unworkable for messages long enough to hold large amounts of information.
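
  Because there are only 25 possible non-trivial shifts, a Caesar cipher falls quickly to brute force. The sketch below (with a made-up intercepted message) simply prints every candidate decryption so a reader can spot the one that makes sense.

```python
# Brute-force attack on a Caesar cipher: try all 26 shifts and look for
# readable plaintext among the candidates.
def shift(text, key):
    return "".join(
        chr((ord(c) - ord('A') + key) % 26 + ord('A')) if c.isalpha() else c
        for c in text.upper()
    )

ciphertext = "IFMMP XPSME"               # hypothetical intercepted message
for key in range(26):
    print(key, shift(ciphertext, -key))  # key 1 reveals "HELLO WORLD"
```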

  Longer messages were vulnerable to another decryption technique called frequency analysis. Initially developed by the Arab mathematician al-Kindi in the 9th century, this technique made use of the frequency of each letter of the alphabet in a particular language. The most common letter in the English language is “e,” so a cryptanalyst would find the most common letter in the ciphertext and designate that as e. The next most common letter is “t,” then “a,” and so on. Common groupings of letters, such as “th” and “ion,” could also provide a way into revealing the cipher. Given a large enough ciphertext, this system worked on any single-alphabet substitution cipher, no matter how elaborate the encryption.
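
  Frequency analysis is equally easy to automate. The sketch below (a toy example with a made-up ciphertext) tallies the letters in a message so their ranking can be compared with the usual order of English letters: e, t, a, and so on.

```python
# Count letter frequencies in a ciphertext and list them from most to
# least common, ready to compare against typical English frequencies.
from collections import Counter

def letter_frequencies(ciphertext):
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return Counter(letters).most_common()   # most common letter first

sample = "WKLV LV D VHFUHW PHVVDJH"   # "THIS IS A SECRET MESSAGE" shifted by +3
print(letter_frequencies(sample))
```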

  There are two ways of combatting frequency analysis. The first is to obscure the plaintext by using a “code.” Cryptography uses a specific definition of this term. A code changes an entire word or phrase in the plaintext before it is encrypted. An encoded plaintext might read “buy lemons on Thursday,” where “buy” is code for “kill” and “lemons” is code for a particular target on a hit list—perhaps with all targets encoded as fruits. Without the list of code words, deciphering the message’s full meaning is impossible.

  The Enigma machine was used in German espionage between 1923 and 1945. The three rotor wheels sit behind the lampboard, and the plugboard is at the front.

  The Enigma code

  Another method of increasing the security of encryption is to use a polyalphabetic cipher, where a letter in plaintext can be substituted for several different letters in ciphertext, thus removing the possibility of frequency analysis. Such ciphers were first developed in the 1500s, but the most famous one was the encryption produced by the Enigma machines used by the Axis forces in World War II.

  The Enigma machine was a formidable encryption device. In essence, it was a battery connected to 26 lightbulbs, or lamps—one for each letter of the alphabet. When a signaler pressed a letter on the keyboard, a corresponding letter lit up on the lampboard. Pressing the same key a second time always lit a different lamp (never the same letter as the key) because the connections between battery and lampboard were altered by three rotors that clicked around with every key press. Added complexity was introduced by the plugboard, which swapped 10 pairs of letters, thus scrambling the message further. To encrypt and decrypt an Enigma message, both machines needed to be set up in the correct way. This involved the correct three rotors being inserted and set to the right starting positions, and the 10 plugs being connected correctly on the board. The settings became the encryption key. A three-rotor Enigma had over 158,962,555,217 billion possible settings, which were changed daily.
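
  The figure quoted above can be reproduced with a short calculation. The sketch below assumes a common breakdown for a three-rotor machine: three rotors chosen and ordered from a set of five, 26 starting positions per rotor, and ten plugboard pairs (other breakdowns, such as including ring settings, give different totals).

```python
# Reproducing the roughly 1.6 x 10^20 Enigma settings quoted above,
# under the assumptions stated in the lead-in.
from math import factorial, perm

rotor_orders = perm(5, 3)      # pick and order 3 of 5 rotors: 60 ways
start_positions = 26 ** 3      # each rotor can start at any of 26 positions: 17,576
plugboard_pairs = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)  # 10 swapped pairs

print(rotor_orders * start_positions * plugboard_pairs)  # 158,962,555,217,826,360,000
```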

  Enigma’s flaw was that it could not encrypt a letter as itself. This allowed Allied codebreakers to try frequently used phrases, such as “Heil Hitler” and “Weather Report,” in an attempt to figure out that day’s key: a stretch of ciphertext in which no letter matched such a phrase at the same position was a possible encryption of it. Allied codebreakers used the Turing Bombe, an electromechanical device that mimicked Enigma machines to break the encryption by brute force, using shortcuts developed by British mathematician Alan Turing and others. The British encryption device, the Typex, was a modified version of Enigma that could encode a letter as itself. The Nazis gave up trying to crack it.

  Computer technology is on the verge of providing the ability for [people] to communicate and interact with each other in a totally anonymous manner.

  Peter Ludlow

  American philosopher

  Asymmetric encryption

  With symmetric encryption, messages are only as secure as the key. This must be exchanged by physical means—written in a military code book or whispered in the ear of a spy at a secluded rendezvous. If a key falls into the wrong hands, the encryption fails.

  The rise of computer networks has allowed people to communicate easily over great distances without ever meeting. However, the most commonly used network, the internet, is public, so any symmetric key shared over a connection would be available to unintended parties, making it useless. The RSA algorithm was an early development in building asymmetric encryption, where a sender and receiver use two keys: one private and the other public. If two people, Alice and Bob, wish to communicate in secret, Alice can send Bob her public key. It is made up of two numbers, n and a. She keeps a private key, z, to herself. Bob uses n and a to encrypt a plaintext message (M), which is a string of numbers (or letters ciphered into numbers). Each plaintext number is raised to the power of a, and then divided by n. The division is a modulo operation (abbreviated to mod n), meaning the answer is just the remainder. So, for example, if n were 10 and Mᵃ were 12, that would give the answer 2. If Mᵃ were 2, it would also give an answer of 2, because 10 goes into 2 zero times with a remainder of 2. The answer to Mᵃ mod n is the ciphertext (C), and in this example it is 2. Someone spying could know the public key, n and a, but would have no idea whether Mᵃ is 2, 12, or 1,002 (all divisible by 10 with a remainder of 2). Only Alice can find out using her private key, z, because Cᶻ mod n = M.
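
  The loss of information in the modulo step is easy to see in code. The snippet below uses the toy value n = 10 from the example above: several very different values of Mᵃ all leave the same remainder, which is exactly why an eavesdropper who sees only the ciphertext cannot work backward.

```python
# With n = 10, many different values of M^a collapse to the same remainder,
# so the ciphertext alone does not reveal the plaintext.
n = 10
for m_to_the_a in (2, 12, 1002):
    print(m_to_the_a, "->", m_to_the_a % n)   # all three give 2
```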

  The crucial number in this algorithm is n, which is formed by multiplying two prime numbers: p and q. Then a and z are calculated from p and q using a formula which ensures that the modulo calculations work. The only way to crack the code is to figure out what p and q are and then calculate z. To do that, a codebreaker must figure out the prime factors of n, but today’s RSA algorithms use values for n with 600 digits or more. It would take a supercomputer thousands of years to work out p and q by trial and error, making RSA and similar protocols practically unbreakable.
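
  A complete, if miniature, run of the scheme can be written in a few lines. The sketch below uses tiny primes purely for illustration (real systems use primes hundreds of digits long); the names p, q, n, a, and z follow the description above, and the private exponent is derived with the standard textbook formula.

```python
# Toy RSA with tiny primes, following the description in the text.
from math import gcd

p, q = 11, 13
n = p * q                      # public modulus, 143
phi = (p - 1) * (q - 1)        # 120, used to derive the exponents

a = 7                          # public exponent, chosen coprime to phi
assert gcd(a, phi) == 1
z = pow(a, -1, phi)            # private exponent: a * z leaves remainder 1 mod phi

M = 42                         # plaintext number (must be smaller than n)
C = pow(M, a, n)               # encryption: M^a mod n
assert pow(C, z, n) == M       # decryption: C^z mod n recovers M
print("public key:", (n, a), "private key:", z, "ciphertext:", C)
```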

  Public key encryption scrambles data with an encryption key available to anyone. The data can only be unscrambled with a private key, known only to its owner. This method is effective for small amounts of data, but is too time-consuming for large amounts.

  Finding primes in random ways

  Lava lamps can be hooked up to computers in order to generate a selection of random numbers based on their movements.

  The RSA algorithm and other public key encryption systems require a large collection of primes to act as p and q. If the system relies heavily on too few primes, then it is possible for attackers to figure out some of the values for p and q being used in everyday encryption. The solution is to have a supply of fresh primes. These are found by generating random numbers and testing their primality with Pierre de Fermat’s “little theorem”: if a number (p) is prime, when another number (n) is raised to the power of p, and n is subtracted from the result, the answer is a multiple of p.
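
  Fermat’s little theorem turns directly into a quick, though not infallible, primality check. The sketch below tests a candidate p against several random bases n, following the statement above; real systems typically rely on stronger variants such as the Miller-Rabin test.

```python
# Fermat primality check: if p is prime then n^p - n is a multiple of p,
# i.e. n^p mod p equals n mod p. Failing once proves p is composite;
# passing many rounds makes p very likely (but not certainly) prime.
import random

def fermat_test(p, rounds=20):
    if p < 2:
        return False
    for _ in range(rounds):
        n = random.randrange(2, p) if p > 3 else 2
        if pow(n, p, p) != n % p:
            return False        # definitely composite
    return True                 # probably prime

candidate = random.getrandbits(32) | 1   # a random odd number to test
print(candidate, fermat_test(candidate))
```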

  Computers are not easily programmed to create truly random sequences of numbers, so companies use physical phenomena to generate them. Computers are programmed to follow the movements of lava lamps, measure radioactive decay, or listen to white noise made by radio transmissions, turning that input into random numbers to use for encryption.

  See also: Group theory • The Riemann hypothesis • The Turing machine • Information theory • Proving Fermat’s last theorem

  IN CONTEXT

  KEY FIGURE

  Daniel Gorenstein (1923–92)

  FIELD

  Number theory

  BEFORE

  1832 Évariste Galois defines the concept of a simple group.

  1869–89 Camille Jordan, a French mathematician, and Otto Hölder, a German, prove that all finite groups can be built from finite simple groups.

  1976 Croatian mathematician Zvonimir Janko introduces the sporadic simple group Janko Group 4, the last finite simple group to be discovered.

  AFTER

  2004 American mathematicians Michael Aschbacher and Stephen D. Smith complete the classification of finite simple groups begun by Daniel Gorenstein.

  Simple groups have been described as algebra’s atoms. The Jordan-Hölder theorem, proven around 1889, asserts that, just as all positive integers can be constructed from prime numbers, so all finite groups can be built from finite simple groups. In mathematics, a group is not simply a collection of things, but a set together with an operation, such as addition or multiplication, that combines any two of its members to produce another member. In the early 1960s, American mathematician Daniel Gorenstein began to pioneer the classification of groups, and in 1979 he set out a program for classifying all the finite simple groups.

  There are similarities between simple groups and symmetry in geometry. Just as a cube rotated through 90 degrees looks the same as it did before it was rotated, the transformations (rotations and reflections) associated with a regular 2-D or 3-D shape can be arranged into a type of group known as a symmetry group.

  Infinite and finite groups

  Some groups are infinite, such as the group of all integers under addition, which has infinitely many members. However, the numbers 1 and –1 with the multiplication operation form a finite group; multiplying any members of the group produces only 1 or –1. All the members of a group and the rules for generating it can be visualized using a Cayley graph.
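
  The closure of a small group can be checked exhaustively. The short sketch below tabulates every product in the two-element group described above, confirming that the results never leave the set.

```python
# The multiplication (Cayley) table of the finite group {1, -1}:
# every product of two members is again a member of the set.
elements = [1, -1]

for a in elements:
    print([a * b for b in elements])
# prints [1, -1] and [-1, 1]; 1 is the identity and each element is its own inverse
```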

  A group is simple if it cannot be broken down into smaller groups. While the number of simple groups is infinite, the number of types of simple group is not—at least, not when simple groups of finite size are considered. In 1963, American mathematicians Walter Feit and John G. Thompson proved that, apart from the cyclic groups with a prime number of elements, every finite simple group has an even number of elements. This led Daniel Gorenstein to propose a more difficult task: the classification of every finite simple group.

  This Cayley graph shows all 60 elements (different orientations) of the group A₅ (the group of rotational symmetries of a regular icosahedron, a three-dimensional shape with 20 faces), and how they relate to each other. Since A₅ has a finite number of elements, it is a finite group. A₅ is also a simple group. It has two generators (elements that can be combined to give any other element of the group).

  The Monster

  There are precise descriptions of 18 families of finite simple groups, with each family related to symmetries of certain types of geometrical structure. There are also 26 individual groups called sporadic groups, the largest of which, called the Monster, has 196,883 dimensions and approximately 8 × 10⁵³ elements. Every finite simple group either belongs to one of the 18 families or is one of the 26 sporadic groups.

  DANIEL GORENSTEIN

  Born in Boston, Massachusetts, in 1923, Daniel Gorenstein had taught himself calculus by the age of 12 and later attended Harvard University. There, he became acquainted with finite groups, which would become his life’s work. After graduating in 1943, he stayed at Harvard for several years, first to teach mathematics to military personnel during World War II, then to earn his PhD under mathematician Oscar Zariski.

  In 1960–61, Gorenstein attended a nine-month program in group theory at the University of Chicago, which inspired him to propose a classification of finite simple groups. He continued to work on this project until his death in 1992.

  Key works

  1968 Finite Groups

  1979 “The Classification of Finite Simple Groups”

  1982 Finite Simple Groups

  1986 “Classifying the Finite Simple Groups”

  See also: The Platonic solids • Algebra • Projective geometry • Group theory • Cryptography • Proving Fermat’s last theorem

  IN CONTEXT

  KEY FIGURE

  Andrew Wiles (1953–)

  FIELD

  Number theory

  BEFORE

  1637 Pierre de Fermat states that there are no sets of positive whole numbers x, y, and z that satisfy the equation xⁿ + yⁿ = zⁿ, where n is greater than 2. However, he does not provide the proof.

  1770 Swiss mathematician Leonhard Euler shows that Fermat’s last theorem is true when n = 3.

  1955 In Japan, Yutaka Taniyama and Goro Shimura propose that every elliptic curve has a modular form.

  AFTER

  2001 The Taniyama–Shimura conjecture is established. It becomes known as the modularity theorem.

 
