by Tom Phillips
Awkward.
It’s not like there hadn’t been plenty of skeptical voices—numerous scientists felt the discovery sounded implausible; one even announced that if polywater turned out to be real he would quit chemistry entirely. But it’s often hard to disprove something, especially when there’s the lurking fear that the reason your polywater isn’t doing what polywater is supposed to do is that you simply didn’t make it properly in the first place. The difficulty of making more than trace amounts of polywater, added to the febrile atmosphere of Cold War–era scientific research, allowed scientists spread over several continents to simply see what they’d been told to expect, and to dramatically overinterpret vague or contradictory results. The whole affair was science by wishful thinking.
Even after the first papers pushing back at the existence of polywater were published (also in Science, in 1970), it was years before everybody finally admitted that the whole thing had been a mistake. Ellison Taylor, one of the skeptics who had been involved in finally disproving polywater, wrote in the Oak Ridge National Laboratory’s in-house magazine in 1971: “[We] knew they were wrong from the beginning, and I suppose lots of people who never got involved knew it, too, but none of the chief protagonists has given any sign of admitting it.” Popular Science even ran an article entitled “How You Can Grow Your Own Polywater” in June of 1973 (subtitle: “Some experts claim this rare substance doesn’t exist. Yet here’s how you can harvest enough of it for your own experiments”).
It’s far from the only time something like this has happened. Of course, the early centuries of science (even before the term science was invented) were full of popular theories that turned out to be completely wrong—in the eighteenth century it was phlogiston, the mysterious substance that lurked inside all combustible things and was released when they burned; in the nineteenth, luminiferous ether, an invisible substance permeating the universe through which light was transmitted. But those have the distinction of at least being attempts to explain something that couldn’t be explained with the science of the time. Which, more or less, is kind of how science is supposed to work.
The reason science has a fairly decent track record is that (in theory, at least) it starts from the sensible, self-deprecating assumption that most of our guesses about how the world works will be wrong. Science tries to edge its way in the general direction of being right, but it does that through a slow process of becoming progressively a bit less wrong. The way it’s supposed to work is this: you have an idea about how the world might work, and in order to see if there’s a chance it might be right, you try very hard to prove yourself wrong. If you fail to prove yourself wrong, you try to prove yourself wrong again, or prove yourself wrong another way. After a while you decide to tell the world that you’ve failed to prove yourself wrong, at which point everybody else tries to prove you wrong, as well. If they all fail to prove you wrong, then slowly people begin to accept that you might possibly be right, or at least less wrong than the alternatives.
Of course, that’s not how it actually works. Scientists are no less susceptible than any other humans to the perils of just assuming that their view of the world is right, and ignoring anything to the contrary. That’s why all the structures of science—peer review and replication and the like—are put in place to try and stop that happening. But it’s extremely far from foolproof, because groupthink and bandwagon-jumping and political pressure and ideological blinders are all things in science, as well.
That’s how you can get a load of scientists at different institutions in different countries all convincing themselves they can see the same imaginary substance. The saga of polywater isn’t alone there: six decades earlier, the scientific world had been gripped by the discovery of a whole new type of radiation. These remarkable new rays (which it would eventually turn out were entirely imaginary) were called N-rays.
René Prosper Blondlot (1849–1930)
N-rays were “discovered” in France, and they took their name from the town of Nancy, where the scientist who first identified them worked—René Prosper Blondlot, an award-winning researcher who was widely acclaimed as an excellent, diligent experimental physicist. This was 1903, less than a decade after the discovery of X-rays had sent waves through the field, so people were primed to expect that new forms of radiation could be discovered here, there and everywhere. What’s more, just as with polywater, there was more than a little international rivalry at play—X-rays had been discovered in Germany, so the French were eager for a piece of the action.
Blondlot first uncovered N-rays by accident—in fact, it was while he was conducting research on X-rays. His experimental equipment involved a small spark that would grow brighter when the rays passed by, and his attention was caught when he saw the spark flare up at a time when no X-rays could possibly be affecting it. He dug deeper, gathered more evidence and in spring 1903 announced his discovery to the world in the Proceedings of the French Academy. Fairly quickly, a large part of the science world went N-ray crazy.
Over the next few years, more than 300 papers would be published about the remarkable properties of N-rays by over 120 scientists (Blondlot himself published 26 of them). The qualities that N-rays demonstrated were certainly...intriguing. They were produced by certain types of flame, a heated sheet of iron and the sun. They were also produced by living things, Blondlot’s colleague Augustin Charpentier found: by frogs and rabbits, by bicep muscles and by the human brain. N-rays could pass through metal and wood, and could be transmitted along a copper wire, but were blocked by water and rock salt. They could be stored in bricks.
Unfortunately, not everybody was having quite as much success in producing and observing N-rays. Many other reputable scientists couldn’t seem to summon them into existence at all, despite Blondlot being very helpful in describing his methods. Possibly this was because they were hard to detect: by this point Blondlot had moved on from detecting them with a glowing spark, instead using a phosphorescent sheet that would glow faintly when exposed to the rays. The trouble was that the change in the sheet’s glow was so faint that it was best seen in an entirely darkened room, and only then after the experimenter had allowed their eyes to adjust to the darkness for about 30 minutes. Oh, and it worked best if you didn’t look at the sheet directly, but instead out of the corner of your eye.
Because of course there’s no way that sitting in a dark room for half an hour then looking at a very faint glow in your peripheral vision would possibly make your eyes play tricks on you.
The N-ray skeptics, of whom there were many, couldn’t help but notice one rather telling feature of the N-ray mania: virtually all the scientists who’d been able to produce the rays were French. There were a couple of exceptions in England and Ireland; nobody in Germany or the USA had managed to see them at all. This was starting to cause not just skepticism, but outright irritation: while the French Academy awarded Blondlot one of the top prizes in French science for his work, one leading German radiation specialist, Heinrich Rubens, was summoned by the kaiser and forced to waste two weeks trying to re-create Blondlot’s work before giving up in humiliation.
All of this prompted one American physicist, Robert Wood, to pay a visit to Blondlot’s lab in Nancy while he was in Europe for a conference. Blondlot was happy to welcome Wood and demonstrate his latest breakthroughs; Wood had a slightly different plan in mind. One of the strangest properties of the mystery rays was that, just as light is refracted through a glass prism, N-rays could apparently be refracted through an aluminum prism, producing a spectrum of ray patterns on the sheet. Blondlot eagerly demonstrated this to Wood, reading out to him the measurements of where the spectrum patterns fell. Wood then asked him if he’d mind repeating the experiment, and Blondlot readily agreed, whereupon Wood introduced a proper scientific control—or to put it another way, played a pretty funny trick on Blondlot.
In the darkness, without Blondlot noticing, he reached out and simply pocketed the prism. Unaware that his equipment was now missing its vital component, Blondlot continued to read out wavelength results for a spectrum that shouldn’t have been there anymore.
Wood summarized his findings in a politely brutal letter to Nature in the autumn of 1904: “After spending three hours or more in witnessing various experiments, I am not only unable to report a single observation which appeared to indicate the existence of the rays, but left with a very firm conviction that the few experimenters who have obtained positive results have been in some way deluded.” After that, interest in N-rays collapsed, although Blondlot and a few other true believers kept on plugging away, determined to prove that they hadn’t just been studying a mirage all this time.
The stories of both polywater and N-rays are cautionary tales about how even scientists can fall prey to the same biases that affect us all, but they’re also tales of science...well, working. While the hype around them both was, in retrospect, more than a little embarrassing for quite a lot of highly qualified professionals, neither mania lasted for more than a few years before skepticism and the need for hard evidence won out. Go, team.
But if these examples are relatively harmless, there have been plenty of instances where dodgy science has done a lot more than merely leave some people with bruised reputations. Like, for example, the legacy of Francis Galton.
Francis Galton was undoubtedly a genius and a polymath, but also a creepy weirdo who had terrible ideas that led to dreadful consequences. A half cousin of Charles Darwin, he achieved breakthroughs in multiple disciplines—he was a pioneer of scientific statistics, including inventing the concept of correlation, and his creations in fields as diverse as meteorology and forensics are still with us today, in the form of the weather map and the use of fingerprints to identify people.
He was obsessed with measuring things and applying scientific principles to just about everything he came across—his letters published in Nature include one estimating the total number of brushstrokes in a painting (after he got bored during lengthy sittings for a portrait), and another in 1906 entitled “Cutting a Round Cake on Scientific Principles” (in short: don’t cut wedges, cut straight slices through the middle, so you can push the remaining halves together to stop them drying out).
But this obsession went further than coming up with extremely British teatime life hacks. In one of his more infamous investigations, Galton toured the towns and cities of Britain in an attempt to create a map of where the women were most attractive. He would sit in a public space and use a device concealed in his pocket called a “pricker”—a thimble with a needle in it that could puncture holes in a piece of marked paper—to record his opinion of the sexual desirability of every woman who walked past. The end product of this was a “beauty map” of the country, much like his weather maps, which revealed that the women in London were the most attractive, while the women in Aberdeen were the least attractive. At least, according to the tastes of a pervy statistician furtively making notes on women’s fuckability with a needle hidden in his pocket, which perhaps isn’t the most objective of measures.
It was that same combination of qualities—a compulsion to measure human traits and a complete lack of respect for the actual humanity of the people being measured—that led Galton to his most infamous contribution to the world of science: his advocacy of, and indeed coining of the term, eugenics. He believed firmly that genius was entirely inherited, and that a person’s success came from their inner nature alone, rather than fortune or circumstance. And so he believed that marriages between people deemed suitable for breeding should be encouraged, possibly with monetary rewards, in order to improve the stock of the human race; and that those who were undesirable, such as the feebleminded or paupers, should be strongly discouraged from breeding.
In the early part of the twentieth century, there was worldwide uptake of the eugenics movement, with Galton (now near the end of his life) seen as its hero. Thirty-one US states passed compulsory sterilization laws—by the time the last had finally been repealed in the sixties, over 60,000 people in mental institutions in the United States had been forcibly sterilized, the majority of them women. A similar number were sterilized in Sweden’s efforts to promote “ethnic hygiene,” where the law wasn’t repealed until 1976. And of course in Nazi Germany...well, you know what happened. Galton would no doubt have been horrified if he’d lived long enough to see what was being done in the name of the “science” he created, but that doesn’t make his original ideas any less wrong.
Or there’s Trofim Lysenko, the Soviet agricultural scientist whose profoundly bad ideas contributed to famines in both the USSR and (as mentioned way back in Chapter 3) China. Unlike Galton, Lysenko doesn’t have any legitimate scientific advances to balance out his legacy. He was just inordinately wrong.
Lysenko came from a poor family, but quickly rose through the ranks of Soviet agronomy thanks to some early successes in treating seeds with cold and moisture so that winter crops could be sown in spring rather than having to sit in the ground over the brutal winter (a technique known as vernalization). He eventually became a favorite of Stalin, which gave him enough power to start imposing his ideas on the rest of the Soviet scientific community.
Those ideas weren’t right—they weren’t even close to being right—but they did have the advantage of appealing to the ideological biases of Lysenko’s communist overlords. Despite the fact that genetics was a pretty well-established discipline by the 1930s, Lysenko rejected it entirely, even denying that genes existed, on the grounds that genetics promoted an individualistic view of the world. Genetics suggested that organisms’ traits were fixed and unchanging, while Lysenko believed that changing the environment could improve the organism and pass those improvements down to its offspring. One species of crop could even turn into another, given the right environment. Rows of crops should be planted more closely together, he instructed farmers, because plants of the same “class” would never compete with each other for resources.
None of these things was true, and what’s more they were very obviously not true, as evidenced by the fact that attempts to put them into practice just ended up with a lot of dead crops. That didn’t stop Lysenko maintaining his political power and shutting down any criticism—to the point of having thousands of other Soviet biologists sacked, imprisoned or even killed if they refused to abandon genetics and embrace Lysenkoism. It wasn’t until Khrushchev was forced out of power in 1964 that other scientists finally managed to persuade the party that Lysenko was a charlatan, and he was quietly shuffled out. His legacy was to contribute to millions of deaths, and to set the field of biology in the Soviet world back decades.
But if Lysenko’s mistakes in biology were entirely enabled by communism, the next case was pure capitalism—the tale of a man who managed to make not one, but two of the most disastrous mistakes in the history of science, all within the space of one decade.
Lead Astray
In 1944, the genius engineer, chemist and inventor Thomas Midgley Jr., a man whose discoveries had helped shape the modern world to a remarkable degree, died at home in bed at the age of 55.
Dying at home in bed sounds quite peaceful, you’d think. Not in this case. Paralyzed below the waist due to a bout of polio some years earlier, Midgley disliked the indignity of being lifted in and out of bed, and had put his talent for innovation to good use, building himself an elaborate system of pulleys so he could do it himself. Which was all going terribly well until that day in November, when something went a bit wrong and he was found strangled to death by the ropes of his own device.
Thomas Midgley Jr. (1889–1944)
The manner of his death is grimly ironic enough—but that’s not the reason Tom Midgley is in this book. He’s in this book because, incredibly, being killed in bed by his own invention doesn’t even make it into the top two biggest mistakes of his life.
In fact, by pretty much any standard, he has to rank as one of the most catastrophic individuals who ever lived.
Midgley was a quiet, clever man who spent most of his life in Columbus, Ohio. From a family of inventors, he had barely any training as a chemist, but showed a knack for problem-solving across a range of disciplines—through a combination of systematic examination of the issues on one hand, and on the other a tendency to haphazardly but doggedly throw solutions at a problem until something stuck.
In the 1910s and 1920s, he was working on the problem of car engines “knocking”—a persistent problem where engines would stutter and jerk, especially when put under strain. This didn’t just make early automobiles kind of suck, it also reduced fuel efficiency, a major concern at a time when there were early worries that the world’s oil supplies were due to run low sooner rather than later.
Midgley and his boss, Charles Kettering, suspected that knocking was down to the fuel used burning unevenly, rather than a fundamental flaw in the design of engines. So they set about trying to find an additive that would reduce this effect. Initially, for reasons that make astonishingly little sense, they settled on the idea that the solution was “the color red.” Midgley went out to get some red dye, but the lab didn’t have any. He was told, however, that iodine was kind of reddish and dissolved well in oil, so he basically went, “Ah, what the heck,” stuck a load of iodine in some gasoline and whacked it into an engine.
It worked.
It was complete dumb luck, but they’d hit on proof that they were on the right track. Iodine itself wasn’t a workable solution: it was too expensive and too difficult to produce in the quantities they’d need. But it was enough to convince them to carry on their work. Over the following years, they tried—depending on which corporate statement you believe—somewhere between 144 and 33,000 different compounds. If that seems like quite an imprecise range, well, there’s a reason why the companies behind their work have been kinda vague about the research process.