by E. T. Bell
“Towards constructing such an application it is natural, or rather necessary, to employ the method introduced by Descartes for the application of Algebra to Geometry. That great and philosophical mathematician conceived the possibility, and employed the plan, of representing or expressing algebraically the position of any point in space by three co-ordinate numbers which answer respectively how far the point is in three rectangular directions (such as north, east, and west), from some fixed point or origin selected or assumed for the purpose; the three dimensions of space thus receiving their three algebraical equivalents, their appropriate conceptions and symbols in the general science of progression [order]. A plane or curved surface became thus algebraically defined by assigning as its equation the relation connecting the three co-ordinates of any point upon it, and common to all those points: and a line, straight or curved, was expressed according to the same method, by assigning two such relations, correspondent to two surfaces of which the line might be regarded as the intersection. In this manner it became possible to conduct general investigations respecting surfaces and curves, and to discover properties common to all, through the medium of general investigations respecting equations between three variable numbers: every geometrical problem could be at least algebraically expressed, if not at once resolved, and every improvement or discovery in Algebra became susceptible of application or interpretation in Geometry. The sciences of Space and Time (to adopt here a view of Algebra which I have elsewhere ventured to propose) became intimately intertwined and indissolubly connected with each other. Henceforth it was almost impossible to improve either science without improving the other also. The problem of drawing tangents to curves led to the discovery of Fluxions or Differentials: those of rectification and quadrature to the inversion of Fluents or Integrals: the investigation of curvatures of surfaces required the Calculus of Partial Differentials: the isoperimetrical problems resulted in the formation of the Calculus of Variations. And reciprocally, all these great steps in Algebraic Science had immediately their applications to Geometry, and led to the discovery of new relations between points or lines or surfaces. But even if the applications of the method had not been so manifold and important, there would still have been derivable a high intellectual pleasure from the contemplation of it as a method.
“The first important application of this algebraical method of coordinates to the study of optical systems was made by Malus, a French officer of engineers in Napoleon’s army in Egypt, and who has acquired celebrity in the history of Physical Optics as the discoverer of polarization of light by reflexion. Malus presented to the Institute of France, in 1807, a profound mathematical work which is of the kind above alluded to, and is entitled Traité d’Optique. The method employed in that treatise may be thus described:—The direction of a straight ray of any final optical system being considered as dependent on the position of some assigned point on the ray, according to some law which characterizes the particular system and distinguishes it from others; this law may be algebraically expressed by assigning three expressions for the three co-ordinates of some other point of the ray, as functions of the three co-ordinates of the point proposed. Malus accordingly introduces general symbols denoting three such functions (or at least three functions equivalent to these), and proceeds to draw several important general conclusions, by very complicated yet symmetric calculations; many of which conclusions, along with many others, were also obtained afterwards by myself, when, by a method nearly similar, without knowing what Malus had done, I began my own attempt to apply Algebra to Optics. But my researches soon conducted me to substitute, for this method of Malus, a very different, and (as I conceive that I have proved) a much more appropriate one, for the study of optical systems; by which, instead of employing the three functions above mentioned, or at least their two ratios, it becomes sufficient to employ one function, which I call characteristic or principal. And thus, whereas he made his deductions by setting out with the two equations of a ray, I on the other hand establish and employ the one equation of a system.
“The function which I have introduced for this purpose, and made the basis of my method of deduction in mathematical Optics, had, in another connexion, presented itself to former writers as expressing the result of a very high and extensive induction in that science. This known result is usually called the law of least action, but sometimes also the principle of least time [see chapter on Fermat], and includes all that has hitherto been discovered respecting the rules which determine the forms and positions of the lines along which light is propagated, and the changes of direction of those lines produced by reflexion or refraction, ordinary or extraordinary [the latter as in a doubly refracting crystal, say Iceland spar, in which a single ray is split into two, both refracted, on entering the crystal]. A certain quantity which in one physical theory is the action, and in another the time, expended by light in going from any first to any second point, is found to be less than if the light had gone in any other than its actual path, or at least to have what is technically called its variation null, the extremities of the path being unvaried. The mathematical novelty of my method consists in considering this quantity as a function of the co-ordinates of these extremities, which varies when they vary, according to a law which I have called the law of varying action; and in reducing all researches respecting optical systems of rays to the study of this single function: a reduction which presents mathematical Optics under an entirely novel view, and one analogous (as it appears to me) to the aspect under which Descartes presented the application of Algebra to Geometry.”
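Restated in later notation (a sketch supplied here for orientation; the symbols below are not Hamilton's or Malus's own), Malus described a system of rays by three functions carrying an assigned point (x, y, z) of a ray to a second point (X, Y, Z) of the same ray. Hamilton's characteristic function is instead a single function of the two end points: for an isotropic medium of refractive index n it is the optical length of the actual ray joining them, and the law of varying action says that its partial derivatives with respect to the coordinates of an end point give the direction of the ray there, (α, β, γ) being the direction cosines of the ray at the final point.

\[
(X, Y, Z) = \bigl(F_1(x, y, z),\ F_2(x, y, z),\ F_3(x, y, z)\bigr)
\qquad \text{(Malus: three functions, equivalent to the two equations of a ray)}
\]
\[
V(x, y, z;\ x', y', z') = \int_{\mathrm{ray}} n\, ds, \qquad
\frac{\partial V}{\partial x} = n\alpha, \quad
\frac{\partial V}{\partial y} = n\beta, \quad
\frac{\partial V}{\partial z} = n\gamma
\qquad \text{(Hamilton: the one function of the system)}
\]

On this view all the rays of the system are recovered from the single function V, which is what Hamilton means by replacing the two equations of a ray with the one equation of a system.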
Nothing need be added to this account of Hamilton’s, except possibly the remark that no science, no matter how ably expounded, is understood as readily as any novel, no matter how badly written. The whole extract will repay a second reading.
In this great work on systems of rays Hamilton had builded better than even he knew. Almost exactly one hundred years after the above abstract was written, the methods which Hamilton introduced into optics were found to be just what was required in the wave mechanics associated with the modern quantum theory and the theory of atomic structure. It may be recalled that Newton had favored an emission, or corpuscular, theory of light, while Huygens and his successors up to almost our own time sought to explain the phenomena of light wholly by means of a wave theory. Both points of view were united and, in a purely mathematical sense, reconciled in the modern quantum theory, which came into being in 1925-6. In 1834, when he was twenty-eight, Hamilton realized his ambition of extending the principles which he had introduced into optics to the whole of dynamics.
Hamilton’s theory of rays, shortly after its publication when its author was but twenty-seven, had one of the promptest and most spectacular successes of any of the classics of mathematics. The theory purported to deal with phenomena of the actual physical universe as it is observed in everyday life and in scientific laboratories. Unless such a mathematical theory is capable of predictions which experiments later verify, it is no better than a concise dictionary of the subject it systematizes, and it is almost certain to be superseded shortly by a more imaginative picture which does not reveal its whole meaning at the first glance. Of the famous predictions which have certified the value of truly mathematical theories in physical science, we may recall three: the mathematical discovery by John Couch Adams (1819-1892) and Urbain-Jean-Joseph Leverrier (1811-1877) of the planet Neptune, independently and almost simultaneously in 1845, from an analysis of the perturbations of the planet Uranus according to the Newtonian theory of gravitation; the mathematical prediction of wireless waves by James Clerk Maxwell (1831-1879) in 1864, as a consequence of his own electromagnetic theory of light; and finally, Einstein’s prediction in 1915-16, from his theory of general relativity, of the deflection of a ray of light in a gravitational field, first confirmed by observations of the solar eclipse on the historic May 29, 1919, and his prediction, also from his theory, that the spectral lines in light issuing from a massive body would be shifted by an amount which Einstein stated, toward the red end of the spectrum—also confirmed. The last two of these instances—Maxwell’s and Einstein’s—are of a different order from the first: in both, totally unknown and unforeseen phenomena were predicted mathematically; that is, these predictions were qualitative. Both Maxwell and Einstein amplified their qualitative foresight by precise quantitative predictions which precluded any charge of mere guessing when their prophecies were finally verified experimentally.
Hamilton’s prediction of what is called conical refraction in optics was of this same qualitative plus quantitative order. From his theory of systems of rays he predicted mathematically that a wholly unexpected phenomenon would be found in connection with the refraction of light in biaxal crystals. While polishing the Third Supplement to his memoir on rays he surprised himself by a discovery which he thus describes:
“The law of the reflexion of light at ordinary mirrors appears to have been known to Euclid; that of ordinary refraction at a surface of water, glass, or other uncrystallized medium, was discovered at a much later date by Snellius; Huygens discovered, and Malus confirmed, the law of extraordinary refraction produced by uniaxal crystals, such as Iceland spar; and finally the law of the extraordinary double refraction at the faces of biaxal crystals, such as topaz or arragonite, was found in our own time by Fresnel. But even in these cases of extraordinary or crystalline refraction, no more than two refracted rays had ever been observed or even suspected to exist, if we except a theory of Cauchy, that there might possibly be a third ray, though probably imperceptible to our senses. Professor Hamilton, however, in investigating by his general method the consequences of the law of Fresnel, was led to conclude that there ought to be in certain cases, which he assigned, not merely two, nor three, nor any finite number, but an infinite number, or a cone of refracted rays within a biaxal crystal, corresponding to and resulting from a single incident ray; and that in certain other cases, a single ray within such a crystal should give rise to an infinite number of emergent rays, arranged in a certain other cone. He was led, therefore, to anticipate from theory two new laws of light, to which he gave the names of Internal and External Conical Refraction.”
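For the mathematical setting (the formula below is supplied here for orientation and is not part of the quoted statement), Fresnel's wave surface for a biaxal crystal with principal wave velocities a, b, c may be written in the standard quartic form

\[
(x^2 + y^2 + z^2)(a^2 x^2 + b^2 y^2 + c^2 z^2)
- a^2(b^2 + c^2)x^2 - b^2(c^2 + a^2)y^2 - c^2(a^2 + b^2)z^2 + a^2 b^2 c^2 = 0 .
\]

When a, b, c are all unequal this surface has four conical double points, and four tangent planes touch it along circles rather than at single points. It is at these singularities that one ray corresponds not to two refracted rays but to a whole cone of them, and it was from them that Hamilton read off his two new laws of internal and external conical refraction.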
The prediction and its experimental verification by Humphrey Lloyd evoked unbounded admiration for young Hamilton from those who could appreciate what he had done. Airy, his former rival for the professorship of astronomy, estimated Hamilton’s achievement thus: “Perhaps the most remarkable prediction that has ever been made is that lately made by Professor Hamilton.” Hamilton himself considered this, like any similar prediction, “a subordinate and secondary result” compared to the grand object which he had in view, “to introduce harmony and unity into the contemplations and reasonings of optics, regarded as a branch of pure science.”
* * *
According to some this spectacular success was the high-water mark in Hamilton’s career; after the great work on optics and dynamics his tide ebbed. Others, particularly members of what has been styled the High Church of Quaternions, hold that Hamilton’s greatest work was still to come—the creation of what Hamilton himself considered his masterpiece and his title to immortality, his theory of quaternions. Leaving quaternions out of the indictment for the moment, we may simply state that, from his twenty-seventh year till his death at sixty, two disasters raised havoc with Hamilton’s scientific career: marriage and alcohol. The second was partly, but not wholly, a consequence of the unfortunate first.
After a second unhappy love affair, which ended with a thoughtless remark that meant nothing but which the hypersensitive suitor took to heart, Hamilton married his third fancy, Helen Maria Bayley, in the spring of 1833. He was then in his twenty-eighth year. The bride was the daughter of a country parson’s widow. Helen was “of pleasing ladylike appearance, and early made a favourable impression upon him [Hamilton] by her truthful nature and by the religious principles which he knew her to possess, although to these recommendations was not added any striking beauty of face or force of intellect.” Now, any fool can tell the truth, and if truthfulness is all a fool has to recommend her, whoever commits matrimony with her will get the short end of the indiscretion. In the summer of 1832 Miss Bayley “passed through a dangerous illness . . . and this event doubtless drew his [the lovelorn Hamilton’s] thoughts especially toward her, in the form of anxiety for her recovery, and, coming at a time [when he had just broken with the girl he really wanted] when he felt obliged to suppress his former passion, prepared the way for tenderer and warmer feelings.” Hamilton in short was properly hooked by an ailing female who was to become a semi-invalid for the rest of her life and who, either through incompetence or ill-health, let her husband’s slovenly servants run his house as they chose, which at least in some quarters—especially his study—came to resemble a pigsty. Hamilton needed a sympathetic woman with backbone to keep him and his domestic affairs in some semblance of order; instead he got a weakling.
Ten years after his marriage Hamilton tried to pull himself up short on the slippery trail he realized with a brutal shock he was treading. As a young man, feted and toasted at dinners, he had rather let himself go, especially as his great gifts for eloquence and conviviality were naturally enough heightened by a drink or two. After his marriage, irregular meals or no meals at all, and his habit of working twelve or fourteen hours at a stretch, were compensated for by taking nourishment from a bottle.
It is a moot question whether mathematical inventiveness is accelerated or retarded by moderate indulgence in alcohol, and until an exhaustive set of controlled experiments is carried out to settle the matter, the doubt must remain a doubt, precisely as in any other biological research. If, as some maintain, poetic and mathematical inventiveness are akin, it is by no means obvious that reasonable alcoholic indulgence (if there is such a thing) is destructive of mathematical inventiveness; in fact numerous well-attested instances would seem to indicate the contrary. In the case of poets, of course, “wine and song” have often gone together, and in at least one instance—Swinburne—without the first the second dried up almost completely. Mathematicians have frequently remarked on the terrific strain induced by prolonged concentration on a difficulty, and some have found the let-down occasioned by a drink a decided relief. But poor Hamilton quickly passed beyond this stage and became careless, not only in the untidy privacy of his study, but also in the glaring publicity of a banquet hall. He got drunk at a scientific dinner. Realizing what had overtaken him, he resolved never to touch alcohol again, and for two years he kept his resolution. Then, during a scientific meeting at the estate of Lord Rosse (owner of the largest and most useless telescope then in existence), his old rival, Airy, jeered at him for drinking nothing but water. Hamilton gave in, and thereafter took all he wanted—which was more than enough. Still, even this handicap could not put him out of the race, although without it he probably would have gone farther and have reached a greater height than he did. However, he got high enough, and moralizing may be left to moralists.
* * *
Before considering what Hamilton regarded as his masterpiece, we may briefly summarize the principal honors which came his way. At thirty he held an influential office in the British Association for the Advancement of Science at its Dublin meeting, and at the same time the Lord Lieutenant bade him to “Kneel down, Professor Hamilton,” and then, having dubbed him on both shoulders with the sword of State, to “Rise up, Sir William Rowan Hamilton.” This was one of the few occasions in his life on which Hamilton had nothing whatever to say. At thirty-two he became President of the Royal Irish Academy, and at thirty-eight was awarded a Civil List life pension of two hundred pounds a year from the British Government, Sir Robert Peel, Ireland’s reluctant friend, being then Premier. Shortly before this Hamilton had made his capital invention—quaternions.
An honor which pleased him more than any he had ever received was the last, as he lay on his deathbed: he was elected the first foreign member of the National Academy of Sciences of the United States, which was founded during the Civil War. This honor was in recognition of his work in quaternions, principally, which for some unfathomable reason stirred American mathematicians of the time (there were only one or two in existence, Benjamin Peirce of Harvard being the chief) more profoundly than had any other British mathematics since Newton’s Principia. The early popularity of quaternions in the United States is somewhat of a mystery. Possibly the turgid eloquence of the Lectures on Quaternions captivated the taste of a young and vigorous nation which had yet to outgrow its morbid addiction to senatorial oratory and Fourth of July verbal fireworks.
* * *
Quaternions has too long a history for the whole story to be told here. Even Gauss with his anticipation of 1817 was not the first in the field; Euler preceded him with an isolated result which is most simply interpreted in terms of quaternions. The origin of quaternions may go back even farther than this, for Augustus de Morgan once half-jokingly offered to trace their history for Hamilton from the ancient Hindus to Queen Victoria. However, we need glance here only at the lion’s share in the invention and consider briefly what inspired Hamilton.
The British school of algebraists, as will be seen in the chapter on Boole, put common algebra on its own feet during the first half of the nineteenth century. Anticipating the currently accepted procedure in developing any branch of mathematics carefully and rigorously, they founded algebra postulationally. Before this, the various kinds of “numbers”—fractions, negatives, irrationals—which enter mathematics when it is assumed that all algebraic equations have roots, had been allowed to function on precisely the same footing as the common positive integers which were so staled by custom that all mathematicians believed them to be “natural” and in some vague sense completely understood—they are not, even today, as will be seen when the work of Georg Cantor is discussed. This naïve faith in the self-consistency of a system founded on the blind, formal juggling of mathematical symbols may have been sublime but it was also slightly idiotic. The climax of this credulity was reached in the notorious principle of permanence of form, which stated in effect that a set of rules which yield consistent results for one kind of numbers—say the positive integers—will continue to yield consistency when applied to any other kind—say the imaginaries—even when no interpretation of the results is evident. It does not seem surprising that this faith in the integrity of meaningless symbols frequently led to absurdity.
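A stock illustration of the kind of absurdity meant here (the manipulation is a textbook example, not one cited in the text): the rule √a · √b = √(ab), perfectly consistent for positive numbers, is carried over blindly to imaginaries, whereupon

\[
-1 = \sqrt{-1}\cdot\sqrt{-1} \;=\; \sqrt{(-1)(-1)} \;=\; \sqrt{1} \;=\; 1 .
\]

The trouble is precisely the one the postulational algebraists attacked: a rule verified for one kind of number had simply been assumed to keep its meaning and consistency for another.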