The Most Powerful Idea in the World
In the 1970s, Eric Kandel, a neuroscientist then working at New York University, embarked on a series of experiments that conclusively proved that cognition could be plotted by following a series of chemical reactions that changed the electrical potential of neurons. Kandel and his colleagues demonstrated that experiences literally change the chemistry of neurons by producing a molecule called cyclic adenosine monophosphate, or cAMP. cAMP, in turn, produces a cascade of chemical changes that either promote or inhibit the synaptic response between neurons; every time the brain calculates the area of a rectangle, or sight-reads a piece of music, or tests an experimental hypothesis, the neurons involved are chemically changed to make it easier to travel the same path again. Kandel’s research seems to have identified that repetition forms the chains that Polanyi called tacit knowing, and that James Watt called “the correct modes of reasoning.”
Kandel’s discovery of the mechanism by which memory is formed and preserved at the cellular level, for which he received the Nobel Prize in Physiology or Medicine in 2000, was provocative. But because the experiments in question were performed on the fairly simple nervous system of Aplysia californica, a giant marine snail, and documented the speed with which the snails could “learn” to eject ink in response to predators, it may be overreaching to say that science knows that the more one practices the violin, or extracts cube roots, the more cAMP is produced. It’s even more of a stretch to explain how one learns to sight-read a Chopin etude. Or invent a separate condenser for a steam engine.
Which is why, a decade before Kandel was sticking needles into Aplysia, a Caltech neurobiologist named Roger Sperry was working at the other end of the evolutionary scale, performing a series of experiments on a man whose corpus callosum—the bundle of nerve fibers connecting the brain’s two hemispheres—had been surgically severed, dividing his brain into right and left halves.* The 1962 demonstration of the existence of a two-sided brain, which would win Sperry the Nobel Prize twenty years later, remains a fixture in the world of pop psychology, as anyone who has ever been complimented (or criticized) for right-brained behavior can testify. The notion that creativity is localized in the right hemisphere of the brain and analytic, linguistic rationality in the left has proved enduringly popular with the general public long after it lost the allegiance of scientists.
However simple the right brain/left brain model, the idea that ideas must originate somewhere in the brain’s structure continued to attract scientists for years: neurologists, psychologists, Artificial Intelligence researchers. Neuroscientists have even applied the equations of chaos theory to explain how neurons “fire together.” John Beggs of Indiana University has shown that the same math used to analyze how sandpiles spontaneously collapse, or a stable snowpack turns into an avalanche—the term is “self-organized criticality”—also describes how sudden thoughts, especially insights, appear in the brain. When a single neuron fires, discharging its electrical potential and causing its neighbors to do the same, the random electrical activity that is always present in the human brain can result in a “neuronal avalanche.”
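The mechanics of self-organized criticality can be made concrete with the classic Bak–Tang–Wiesenfeld “sandpile” model—a minimal sketch of the general idea, not Beggs’s actual neural model; the grid size and number of drops below are arbitrary choices for illustration. Grains are dropped one at a time; any site holding four or more grains “topples,” passing one grain to each neighbor, so a single added grain can trigger a cascade—the analogue of a neuronal avalanche:

```python
import random

SIZE = 20  # arbitrary grid dimension for this sketch

def drop_grain(grid):
    """Drop one grain at a random site, relax the pile, return avalanche size."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    topples = 0
    unstable = [(r, c)] if grid[r][c] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue  # already relaxed by an earlier toppling
        grid[i][j] -= 4
        topples += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:  # edge grains fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topples

random.seed(0)
grid = [[0] * SIZE for _ in range(SIZE)]
sizes = [drop_grain(grid) for _ in range(20000)]
print("largest avalanche:", max(sizes), "topplings")
print("drops causing no avalanche:", sizes.count(0) / len(sizes))
```

Run long enough, the pile organizes itself to the critical state: most drops do nothing, while a few set off avalanches of every size—the same signature Beggs found in the statistics of neural firing.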
Where those avalanches ended up, however, remained little more than speculation until there was some way to see what was actually going on inside the brain; so long as the pathways leading to creative insights remained invisible, theories about them could be proposed, but not tested.
Those pathways aren’t invisible anymore. A cognitive scientist at Northwestern named Mark Jung-Beeman and one at Drexel named John Kounios have performed a series of experiments very nicely calibrated to measure heightened activity in portions of the brain when those “eureka” moments strike. In the experiments, subjects were asked to solve a series of puzzles and to report when they solved them by using a systematic strategy versus when the solution came to them by way of a sudden insight. By wiring those subjects up like Christmas trees, the researchers discovered two critical things:
First, when subjects reported solving a puzzle via a sudden flash of insight, an electroencephalograph, which picks up different frequencies of electrical activity, recorded that their brains burst out with the highest of its frequencies: the one that cycles thirty times each second, or 30 Hz. This was expected, since this is the frequency band that earlier researchers had associated with similar activities such as recognizing the definition of a word or the outline of a car. What wasn’t expected was that the EEG picked up the burst of 30 Hz activity three-tenths of a second before a correct “insightful” answer—and did nothing before a wrong one. Second, and even better, simultaneous with the burst of electricity, another machine, the newer-than-new fMRI (functional Magnetic Resonance Imaging), showed blood rushing to several sections of the brain’s right, “emotional” hemisphere, with the heaviest flow to the same spot—the anterior Superior Temporal Gyrus, or aSTG.
But the discovery that resonates most strongly with James Watt’s flash of insight about separating the condensing chamber from the piston is this: Most “normal” brain activity serves to inhibit the blood flow to the aSTG. The more active the brain, the more inhibitory, probably for evolutionary reasons: early Homo sapiens who spent an inordinate amount of time daydreaming about new ways to start fire were, by definition, spending less time alert to danger, which would have given an overactive aSTG a distinctly negative reproductive value. The brain is evolutionarily hard-wired to do its best daydreaming only when it senses that it is safe to do so—when, in short, it is relaxed. In Kounios’s words, “The relaxation phase is crucial. That’s why so many insights happen during warm showers.” Or during Sunday afternoon walks on Glasgow Green, when the idea of a separate condenser seems to have excited the aSTG in the skull of James Watt. Eureka indeed.
IN 1930, JOSEPH ROSSMAN, who had served for decades as an examiner in the U.S. Patent Office, polled more than seven hundred patentees, producing a remarkable picture of the mind of the inventor. Some of the results were predictable; the three biggest motivators were “love of inventing,” “desire to improve,” and “financial gain,” whose rankings were statistically identical, and each at least twice as important as those appearing down the list, such as “desire to achieve,” “prestige,” or “altruism” (and certainly not the old saw, “laziness,” which was named roughly one-thirtieth as frequently as “financial gain”). A century after Rocket, the world of technology had changed immensely: electric power, automobiles, telephones. But the motivations of individual inventors were indistinguishable from those inaugurated by the Industrial Revolution.
Less predictably, Rossman’s results demonstrated that the motivation to invent is not typically limited to one invention or industry. Though the most famous inventors are associated in the popular imagination with a single invention—Watt and the separate condenser, Stephenson and Rocket—Watt was just as proud of the portable copying machine he invented in 1780 as he was of his steam engine; Stephenson was, in some circles, just as famous for the safety lamp he invented to prevent explosions in coal mines as for his locomotive. Inventors, in Rossman’s words, are “recidivists.”
In the same vein, Rossman’s survey revealed that the greatest obstacle perceived by his patentee universe was not lack of knowledge, legal difficulties, lack of time, or even prejudice against the innovation under consideration. Overwhelmingly, the largest obstacle faced by early twentieth-century inventors (and, almost certainly, their ancestors in the eighteenth century) was “lack of capital.” Inventors need investors.
Investors don’t always need inventors. Rational investment decisions, as the English economist John Maynard Keynes demonstrated just a few years after Rossman completed his survey, are made by calculating the marginal efficiency of the investment, that is, how much more profit one can expect from putting money into one investment rather than another. When the internal rate of return—Keynes’s term—for a given investment is higher than the rate that could be earned somewhere else, it is a smart one; when it is lower, it isn’t.
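Keynes’s test can be sketched numerically: the internal rate of return is the discount rate at which an investment’s net present value falls to zero, and the investment is “smart” only if that rate beats what the money could earn elsewhere. The cash flows below are hypothetical, chosen purely for illustration:

```python
# Sketch of the marginal-efficiency calculation (hypothetical figures).

def npv(rate, cash_flows):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """The rate at which NPV crosses zero, found by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # still profitable at this rate; the IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

# A hypothetical invention: 1,000 invested now, returning 300 a year for five years.
flows = [-1000, 300, 300, 300, 300, 300]
rate = irr(flows)
print(f"IRR: {rate:.1%}")  # roughly 15.2%
# Keynes's criterion: invest only if the IRR beats the return available elsewhere.
print("worth it against a 10% alternative:", rate > 0.10)
```

The same arithmetic, applied to a lifetime of inventing rather than a single invention, is what produces the dismal figure in the next paragraph.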
Unfortunately, while any given invention can have a positive IRR, the return on the decision to spend one’s life inventing is overwhelmingly negative. Inventors typically forgo more than one-third of their lifetime earnings. Thus, the characteristic stubbornness of inventors throughout history turns out to be fundamentally irrational. Their optimism is by any measure far greater than that found in the general population, with the result that their decision making is, to be charitable, flawed, whether as a result of the classic confirmation bias—the tendency to overvalue data that confirm one’s original ideas—or the “sunk-cost” bias, which is another name for throwing good money after bad. Even after reliable colleagues urge them to quit, a third of inventors will continue to invest money, and more than half will continue to invest their time.
A favorite explanation for the seeming contradiction is the work of the Moravian-born émigré economist Joseph Schumpeter,* who drew a famous, though not perfectly clear, boundary between invention and innovation, with the former an economically irrelevant version of the latter. The heroes of Schumpeter’s economic analysis were, in consequence, entrepreneurs, who “may be inventors just as they may be capitalists … they are inventors not by nature of their function, but by coincidence….” To Schumpeter, invention preceded innovation—he characterized the process as embracing three stages: invention, commercialization, and imitation—but was otherwise insignificant. However, his concession that (a) the chances of successful commercialization were improved dramatically when the inventor was involved throughout the process, and (b) the imitation stage looks a lot like invention all over again, since all inventions are to some extent imitative, makes his dichotomy look a little like a chicken-and-egg paradox.
Another study, this one conducted in 1962, compared the results of psychometric tests given to inventors and noninventors (the former defined by behaviors such as application for or receipt of a patent) in similar professions: engineers, chemists, architects, psychologists, and science teachers. Some of the results were about what one might expect: inventors are significantly more thing-oriented than people-oriented, more detail-oriented than holistic. They are also likely to come from poorer families than noninventors in the same professions. No surprise there; the eighteenth-century Swiss mathematician Daniel Bernoulli, who coined the term “human capital,” explained why innovation has always been a more attractive occupation to have-nots than to haves: not only do small successes seem larger to have-nots, but they also have considerably less to lose.
More interesting, the 1962 study also revealed that independent inventors scored far lower on general intelligence tests than did research scientists, architects, or even graduate students. There’s less to this than meets the eye: the intelligence test given to the subjects subtracted wrong answers from right answers, and though the inventors consistently got as many answers correct as did the research scientists, they answered far more questions, thereby incurring a ton of deductions. The study’s sample was too small to prove that inventors fear wrong answers less than noninventors do, but it suggested just that. In the words of the study’s authors, “The more inventive an independent inventor is, the more disposed he will be—and this indeed to a marked degree—to try anything that might work.”
WATT’S FLASH OF INSIGHT, like those of Newcomen and Savery before him (and thousands more after), was the result of complicated neural activity, operating on a fund of tacit knowledge, in response to both a love of inventing and a love of financial gain. But what gave him the ability to recognize and test that insight was a trained aptitude for mathematics.
The history of mechanical invention in Britain began in a distinctively British manner: with a first generation of craftsmen whose knowledge of machinery was exclusively practical and who were seldom if ever trained in the theory or science behind the levers, escapements, gears, and wheels that they manipulated. These men, however, were followed (not merely paralleled) by another generation of instrument makers, millwrights, and so on, who were.
Beginning in 1704, for example, John Harris, the Vicar of Icklesham in Sussex, published, via subscription, the first volume of the Lexicon Technicum, or an Universal Dictionary of Arts and Sciences, the prototype for Enlightenment dictionaries and encyclopedias. Unlike many of the encyclopedias that followed, Harris’s work had a decidedly pragmatic bent, containing the most thorough, and most widely read, account of the air pump or Thomas Savery’s steam engine. In 1713, a former surveyor and engineer named Henry Beighton, the “first scientific man to study the Newcomen engine,” replaced his friend John Tipper as the editor of a journal of calendars, recipes, and medicinal advice called The Ladies’ Diary. His decision to differentiate it from its competitors in a fairly crowded market by including mathematical games and recreations, riddles, and geographical puzzles made it an eighteenth-century version of Scientific American and, soon enough, Britain’s first and most important mathematical journal. More important, it inaugurated an even more significant expansion of what might be called Britain’s mathematically literate population.
Teaching more Britons the intricacies of mathematics would be a giant long-term asset to building an inventive society. Even though uneducated craftsmen had been producing remarkable efficiencies using only rule of thumb—when the great Swiss mathematician Leonhard Euler applied his own considerable talents to calculating the best possible orientation and size for the sails on a Dutch windmill (a brutally complicated bit of engineering, what with the sail pivoting in one plane while rotating in another), he found that carpenters and millwrights had gotten to the same point by trial and error—it took them decades, sometimes centuries, to do so. Giving them the gift of mathematics to do the same work was functionally equivalent to choosing to travel by stagecoach rather than oxcart; you got to the same place, but you got there a lot faster.
Adding experimental rigor to mathematical sophistication accelerated things still more, from stagecoach to—perhaps—Rocket. The two in combination, well documented in the work of James Watt, were formidably powerful. But the archetype of mathematical invention in the eighteenth century was not Watt, but John Smeaton, by consensus the most brilliant engineer of his era—a bit like being the most talented painter in sixteenth-century Florence.
SMEATON, UNLIKE MOST OF his generation’s innovators, came from a secure middle-class family: his father was an attorney in Leeds, who invited his then sixteen-year-old son into the family firm in 1740. Luckily for the history of engineering, young John found the law less interesting than tinkering, and by 1748 he had moved to London and set up shop as a maker of scientific instruments; five years later, when James Watt arrived in the city seeking to be trained in exactly the same trade, Smeaton was a Fellow of the Royal Society, and had already built his first water mill.
In 1756, he was hired to rebuild the Eddystone Lighthouse, which had burned down the year before; the specification for the sixty-foot-tall structure* required that it be constructed on the Eddystone rocks off the Devonshire coast between high and low tide, and so demanded the invention of a cement—hydraulic lime—that would set even if submerged in water.
The Eddystone Lighthouse was completed in October 1759. That same year, evidently lacking enough occupation to keep himself interested, Smeaton published a paper entitled An Experimental Enquiry Concerning the Natural Powers of Water and Wind to Turn Mills. The Enquiry, which was rewarded with the Royal Society’s oldest and most prestigious prize—the Copley Medal for “outstanding research in any branch of science”—documented Smeaton’s nearly seven years’ worth of research into the efficiency of different types of waterwheels, a subject that despite several millennia of practical experience with the technology was still largely a matter of anecdote or, worse, bad theory. In 1704, for example, a French scientist named Antoine Parent had calculated the theoretical benefits of a wheel operated by water flowing past its blades at the lowest point—an “undershot” wheel—against one in which the water fell into buckets just offset from the top of the “overshot” wheel—and got it wrong. Smeaton was a skilled mathematician, but the engineer in him knew that experimental comparison was the only way to answer the question, and, by the way, to demonstrate the best way to generate what was then producing nearly 70 percent of Britain’s measured power. His method remains one of the most meticulous experiments of the entire eighteenth century.
Fig. 4: One of the best-designed experiments of the eighteenth century, Smeaton’s waterwheel was able to measure the work produced by water flowing over, under, and past a mill. Science Museum / Science & Society Picture Library
He constructed a model waterwheel twenty inches in diameter, running in a model “river” fed by a cistern four feet above the base of the wheel. He then ran a rope through a pulley fifteen feet above the model, with one end attached to the waterwheel’s axle and the other to a weight. He was so extraordinarily careful to avoid error that he set the wheel in motion with a counterweight timed so that it would rotate at precisely the same velocity as the flow of water, thus avoiding splashing as well as the confounding element of friction. With this model, Smeaton was able to measure the height to which a constant weight could be lifted by an overshot, an undershot, and even a “breastshot” wheel; and he measured more than just height. His published table of results recorded thirteen categories of data, including cistern height, “virtual head” (the distance water fell into buckets in an overshot wheel), weight of water, and maximum load. The resulting experiment not only disproved Parent’s argument for the undershot wheel, but also showed that the overshot wheel was up to two times more “efficient” (though he never used the term in its modern sense).
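The quantity Smeaton was measuring reduces to a simple ratio: the work the wheel does lifting its load (weight times height lifted) divided by the work the falling water could in principle do (weight of water times head). A back-of-the-envelope sketch, with all figures hypothetical rather than Smeaton’s own:

```python
# Waterwheel "efficiency" as Smeaton measured it: useful work out over
# potential work in. The numbers below are invented for illustration.

def wheel_efficiency(water_weight, head, load, lift_height):
    """Work out (load x height lifted) over work in (water weight x head)."""
    return (load * lift_height) / (water_weight * head)

# The same water budget run through two hypothetical wheels:
water, head = 100.0, 4.0  # 100 units of water falling through a 4-foot head
undershot = wheel_efficiency(water, head, load=30.0, lift_height=4.0)
overshot = wheel_efficiency(water, head, load=60.0, lift_height=4.0)

print(f"undershot: {undershot:.0%}, overshot: {overshot:.0%}")
print("overshot/undershot ratio:", overshot / undershot)
```

The illustrative figures are chosen so the ratio comes out at two, the shape of Smeaton’s actual finding: for the same water, the overshot wheel delivered up to twice the useful work of the undershot.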