
The Half-Life of Facts


by Samuel Arbesman


  Even the field of neuroscience is able to move forward at a pace similar to Moore’s Law: The technological advances related to recording individual neurons have been growing at an exponential pace. Specifically, the number of neurons that can be recorded simultaneously has been growing exponentially, with a doubling time of about seven and a half years.
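
  To make the arithmetic concrete, here is a minimal sketch in Python of how a seven-and-a-half-year doubling time compounds. The starting count of one hundred neurons is an illustrative assumption, not a figure from the text:

    # Exponential growth with a fixed doubling time: N(t) = N0 * 2**(t / T).
    T_DOUBLE = 7.5  # doubling time in years, from the text

    def neurons_recordable(n0, years):
        """Simultaneously recordable neurons after `years`, starting from n0."""
        return n0 * 2 ** (years / T_DOUBLE)

    for years in (0, 7.5, 15, 30):
        print(years, round(neurons_recordable(100, years)))
    # 0 -> 100, 7.5 -> 200, 15 -> 400, 30 -> 1600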

  Given the intermingling of science and technology, how do we disentangle scientific knowledge from technological innovation? Well, sometimes, as we’ll see, we can’t. This is not to say that there aren’t differences, though. As Jonathan Cole, a sociologist of science, argues:

  Science and technology are closely related, but they are not the same thing. Science involves a body of knowledge that has accumulated over time through the process of scientific inquiry, as it generates new knowledge about the natural world—including knowledge in the physical and biological sciences as well as in the social and behavioral sciences. Technology, in its broadest sense, is the process by which we modify nature to meet our needs and wants. Some people think of technology in terms of gadgets and a variety of artifacts, but it also involves the process by which individuals or companies start with a set of criteria and constraints and work toward a solution of a problem that meets those conditions.

  Henry Petroski, a professor of engineering and history at Duke University, puts it even more succinctly: “Science is about understanding the origins, nature, and behavior of the universe and all it contains; engineering is about solving problems by rearranging the stuff of the world to make new things.” Science modifies the facts of what we know about the world, while technology modifies the facts of what we can do in the world.

  Sometimes, though, instead of basic scientific insight leading to new technologies, engineering actually precedes science. For example, the steam engine was invented more than a hundred years before a clear understanding of thermodynamics—the physics of energy—was developed.

  But not only is it unclear which one comes first; it is just as often difficult to distinguish between scientific and technological knowledge at all. Iron’s magnetic properties demonstrate this well.

  Iron is magnetic, as anyone who has spent any amount of time playing with paper clips and magnets knows. And iron is much more magnetic than aluminum, which you can quickly ascertain by holding up foil to a magnet. These differences in magnetism can be measured, and the amount that a material is magnetic (or not) is known as its magnetic permeability.

  It turns out that the magnetic permeability of iron has changed over time. Specifically, iron has gotten twice as magnetic every five years. This sounds wrong. Shouldn’t the magnetic property of iron be unchanging? Iron is a chemical element, so any amount of this material should be the same, and pure as snow. Why should it instead increase over time?

  In truth, the iron that people have used throughout history has actually been far from pure. It has had numerous impurities of all sorts; what could be obtained years ago was far from a perfectly pristine elemental substance. In 1928, the engineer Trygve Dewey Yensen set out to determine the magnetic properties of iron over the previous several decades. By scouring records as far back as 1870, Yensen discovered that iron had steadily, and in a rather exponential fashion, increased its magnetic permeability. And this was entirely due to technology.
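
  A back-of-the-envelope sketch of the trend Yensen documented, assuming an idealized five-year doubling; the baseline value is an arbitrary unit I have chosen, since only the relative growth matters here:

    # Permeability doubling every five years: mu(t) = mu0 * 2**((t - 1870) / 5).
    MU_1870 = 1.0  # arbitrary baseline value (illustrative assumption)

    def relative_permeability(year):
        """Iron's permeability in `year`, relative to the 1870 baseline."""
        return MU_1870 * 2 ** ((year - 1870) / 5)

    print(round(relative_permeability(1928)))
    # ~3104: over Yensen's 1870-1928 window, about eleven and a half doublings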

  As our technological methods for making pure iron have improved, so have the magnetic properties of iron. Something that seems to be safely in the category of scientific fact is actually intimately intertwined with our technological abilities. We have seen a steady and regular shift in these scientific facts as we improved these technologies. But just as technological advances change the scientific facts we already have, new technologies also allow for new discoveries, reflecting the tightly coupled nature of scientific and technological knowledge.

  Take the periodic table. The number of known chemical elements has steadily increased over time. However, while in the aggregate the number has grown relatively smoothly, if you zoom in to the data closely, a different picture emerges. As Derek de Solla Price found, the periodic table has grown by a series of logistic curves. He argued that each of these was due to a successive technological advance or approach. For example, from the beginnings of the scientific revolution in the late seventeenth century until the late nineteenth century, more than sixty elements were discovered, using various chemical techniques, including electric currents, to separate compounds into their constituent parts. In fact, many of these techniques were pioneered by a single man, Sir Humphry Davy, who himself discovered calcium, sodium, and boron, among many other elements.
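
  Price’s point is about the shape of the curve: a logistic grows exponentially at first and then saturates. Here is a minimal sketch of that shape; all parameter values are invented for illustration, not taken from his fits:

    import math

    def logistic(t, ceiling, rate, midpoint):
        """Logistic curve: early exponential growth that levels off at `ceiling`."""
        return ceiling / (1 + math.exp(-rate * (t - midpoint)))

    # One hypothetical technological "wave" that saturates near sixty elements.
    for year in (1700, 1750, 1800, 1850, 1900):
        print(year, round(logistic(year, ceiling=60, rate=0.05, midpoint=1810)))
    # 0, 3, 23, 53, 59 -- rapid growth in the middle, then a plateau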

  However, soon the limits of these approaches became evident, and the discoveries slowed. But, following a Moore’s Law–like trajectory, a new technology arose. The particle accelerator was created, and its atom-smashing ability enabled further discoveries. As particle accelerators of increasing energies have been developed, we have discovered heavier and larger chemical elements. In a very real way, these advances have allowed for new facts.

  Technological growth facilitates changes in facts, sometimes rapidly, in many areas: sequencing new genomes (nearly two hundred distinct species were sequenced as of late 2011); finding new asteroids (often done using sophisticated computer algorithms that can detect objects moving in space); even proving new mathematical theorems through increasing computer power.

  There are even new facts that combine technology with human performance. Athletes break records as their tools—for example, swimsuits, sneakers, and training facilities—become more sophisticated due to technological advances. Even the world of board games has been revolutionized. As noted earlier, over the past several decades, game after game has become a domain where computers dominate, changing the facts around us. Checkers was one of the first games in which computers were able to beat humans consistently—the computer had its first victory in 1990. Chess and Othello fell next, both in 1997, and since 2011 even Jeopardy! has become the domain of computer mastery.

  Computers can now checkmate better than people, and phrase a correct answer in the form of a question, provinces long thought to be exclusively those of the human mind.

  Technology has had a large impact on many other realms of knowledge as well. One that jumps immediately to mind is medicine. Just as our medical knowledge undergoes wholesale changes, so does what is medically possible. For example, John Wilkins, who in the seventeenth century created what he thought would be the first universal language to help organize our facts and ideas, was himself felled by what is now outdated medical knowledge.

  Wilkins likely died due to complications surrounding kidney stones. At the time, the medical options for kidney stones too large to be passed were either (a) a terrible surgery (it involved cutting near the scrotum up into the bladder while the patient was conscious) or (b) a painful death. Wilkins opted out of the surgery, which often killed those who chose it anyway, and died. But medical advances since the Scientific Revolution have progressed such that kidney stones now can be broken up by sound waves, dissolved, or treated otherwise—with high survival rates.

  Similarly, medical advances have progressed so rapidly that travelers from previous centuries, if not decades, would scarcely recognize what we have available to us. Not only does a vaccine exist for smallpox, but the disease has been entirely eradicated from the planet. Childbirth has gone from life threatening to a routine procedure. Bubonic plague, far from capable of generating a modern wave of the Black Death, is easily treatable with antibiotics. In fact, when I spent a summer in Santa Fe, we were told that bubonic plague exists in that region not because we should be scared, but just to make our doctors aware of this possibility when we went back to our homes, so they could administer the readily available drugs to treat this scourge of the Middle Ages.

  Polio has gone from a menace of childhood summers to a distant memory. A few years ago I was fortunate to attend an exhibit on polio at the Smithsonian National Museum of American History. The disease was presented as something from the history books, and was certainly nothing I had ever experienced, and yet I had a great uncle who walked with a limp due to the disease, and my wife’s aunt had it as a child. Reading of people’s experiences with the disease, the fear, and the iron lungs was astounding. But through medical advances, polio is now generally regarded in the developed world as a curious artifact of the past.

  Technology can even affect economic facts. Computer chips, in addition to becoming more powerful, have gone from prohibitively expensive to disposable. Similarly, while aluminum used to be the most valuable metal on Earth, it plummeted in price due to technological advances that allowed it to be extracted cheaply. We now wrap our leftovers in it.

  But occasionally medical or technological advances don’t just alter our lifestyles dramatically, as the advent of the Internet did. Sometimes they have the potential to fundamentally change the very nature of humanity. We can see the true extremes of how technological facts can change by focusing on our life spans.

  There has been a rapid increase in the average life span of an individual in the developed world over the past hundred years. This has occurred through a combination of lowered infant mortality and better hygiene, among other beneficial medical and public health practices. These advances have added about 0.4 years to Americans’ total expected life spans in each year since 1960. But this increase in life span is itself increasing; it is accelerating.

  If this acceleration continues, something curious will happen at a certain point. When we begin adding more than one year to the expected life span—a simple shift from less than one to greater than one—we get what is called actuarial escape velocity. What this means is that when we are adding more than one year per year, we can effectively live forever. Let me stress this again: A slight change of the underlying state of affairs in our technological and medical abilities—facts about the world around us—can allow people to be essentially immortal. The phrase actuarial escape velocity was popularized by Aubrey de Grey, a magnificently bearded scientist obsessed with immortality, who has made its realization his life’s work.
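
  A toy simulation makes the threshold vivid. It uses the 0.4-year-per-year gain mentioned above against a hypothetical rate above one year per year; the starting age and life expectancy are invented for illustration:

    def years_until_death(age, life_expectancy, gain_per_year, horizon=500):
        """Simulate aging against a life expectancy that rises each year."""
        years = 0
        while age < life_expectancy:
            age += 1                          # you grow one year older...
            life_expectancy += gain_per_year  # ...as the frontier advances
            years += 1
            if years > horizon:
                return None  # never caught up: an effectively unbounded life span
        return years

    print(years_until_death(40, 80, 0.4))  # 67: with gains below one, death still arrives
    print(years_until_death(40, 80, 1.1))  # None: actuarial escape velocity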

  We’re at least several decades from this, according to even the most optimistic and starry-eyed of estimates. And it might very well never happen. But this sort of simple back-of-the-envelope calculation can teach us something: Not only can knowledge change rapidly based on technology, but it can happen so rapidly that it can produce other drastically rapid changes in knowledge. In this case, life spans go from short to long to very long to effectively infinite. Discontinuous jumps in knowledge, and how they occur, are discussed in more detail in chapter 7. But the message is clear: Technological change can affect many other facts, sometimes with the potential for profound change around us.

  But what about the opposite direction? Rather than being overly optimistic and assuming massive positive changes in the world based on technology, what about a quantified pessimism? Will we ever reach the end of technology? And are there mathematical regularities here, too?

  Just as with science, where naysayers have prognosticated the end of scientific progress, others have done the same with innovation more generally. There is the well-known story of the head of the United States Patent and Trademark Office who said there was nothing more to invent, and a similar story about a patent clerk who even resigned because he felt this to be true.

  But there is actually no truth to these stories. In the first case, U.S. Patent Office commissioner Henry Ellsworth, in a report to Congress in 1843, wrote the following: “The advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end.” But Ellsworth wrote this to contrast it with the fact of continuous growth. Essentially, he was arguing that the fact that things continue to grow exponentially, despite the constant feeling that we have reached some sort of plateau, is something startling and worth marveling at. As for the other case, the patent clerk who supposedly resigned because new inventions were things of the past: that story simply never happened.

  However, these stories, and how we use them to laugh at our own ignorance, are indicative of a viewpoint in our society: Not only will innovation continue, but anyone who foresees an end to the growth in technological knowledge is bound to be proven wrong. Technological development, and the changes in facts that go along with it, doesn’t seem to be ending anytime soon. Of course, these things must end eventually. The physicist Tom Murphy has shown, in a reductio ad absurdum style of argument, that if energy use keeps growing as it has, we would exhaust all the energy in our entire galaxy in less than three millennia. So a logistic curve, with its slow saturation to some sort of upper limit, might be more useful in the long term than a simple exponential with never-ending growth.
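
  Murphy’s reductio is simple compounding. Here is a sketch using the roughly 2.3 percent annual growth rate he works with in his own analysis; that rate and the power figures below are rough assumptions, not numbers from this text:

    import math

    P_NOW = 1.8e13    # rough current world power use, in watts
    P_GALAXY = 4e37   # ~1e11 Sun-like stars at ~3.8e26 watts each (rough)
    GROWTH = 0.023    # 2.3 percent per year, the rate Murphy assumes

    years = math.log(P_GALAXY / P_NOW) / math.log(1 + GROWTH)
    print(round(years))  # ~2465 years: "less than three millennia"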

  In the meantime, technology and science are growing incredibly rapidly and systematically. But there are still questions that need to be addressed: Why do these fields continue to grow? And why do they grow in such a regular manner, with mathematical shapes that are so often exponential curves?

  • • •

  THERE are those who, when confronted with regularities such as Moore’s Law, feel that these are simply self-fulfilling propositions. Once Moore quantified the doubling rate of the number of components of integrated circuits, and predicted what would happen in the coming decade, it was simply a matter of working hard to make it come to pass. And once the prediction of 1975 came true, the industry had a continued stake in trying to reach the next milestone predicted by Moore’s Law, because if any company ever fell behind this curve, it would be out of business. Since it was presumed to be possible, these companies had to make it possible; otherwise, they were out of the game.

  This is similar to the well-known Hawthorne effect, when subjects behave differently if they know they are being studied. The effect was named after what happened in a factory called Hawthorne Works outside Chicago in the 1920s and 1930s. Scientists wished to measure the effects of environmental changes, such as lighting, on the productivity of the workers. They discovered that whatever change they made to the workers’ environment—whether they increased the lighting or altered any other aspect of it—resulted in increased productivity. However, as soon as the study was completed, the productivity dropped.

  The researchers concluded that the observations themselves were affecting productivity and not the experimental changes. The Hawthorne effect was defined as “an increase in worker productivity produced by the psychological stimulus of being singled out and made to feel important.” While it has been expanded to mean any change in response to being observed and studied, the focus here on productivity is important for us: If the members of an industry know that they’re being observed and measured, especially in relationship to a predicted metric, perhaps they have an added incentive to increase productivity and meet the metric’s expectations.

  But this doesn’t quite ring true, and in fact it isn’t even possible. These doublings have been occurring in many areas of technology well before Moore formulated his law. As noted earlier, this regularity in the realm of computing power alone has held true as far back as the late nineteenth and early twentieth centuries, before Gordon Moore was even born. So while Moore gave a name to something that had already been happening, his naming it did not create the phenomenon.

  Why else might everything be adhering to these exponential curves and growing so rapidly? A likely answer is related to the idea of cumulative knowledge. Anything new—an idea, discovery, or technological breakthrough—must be built upon what is known already. This is generally how the world works. Scientific ideas build upon one another to allow for new scientific knowledge and technologies, and are the basis for new breakthroughs. When it comes to technological and scientific growth, we can bootstrap what we have learned before toward the creation of new facts. We must gain a certain amount of knowledge in order to learn something new.

  Koh and Magee argue that we should imagine that the magnitude of technological growth is proportional to the amount of knowledge that has come before it. The more preexisting methods, ideas, or anything else that is essential for making a certain technology just a little bit better, the more potential for that technology to grow.

  What I have just stated can actually be described mathematically. An equation in which something grows by an amount proportional to its current size yields exactly what we hoped for: exponential growth. What this means is that if technology is essentially bootstrapping itself, much as science does, and its growth is based on how much has come before it, then we can easily get these doublings and exponential growth rates. Numerous researchers have proposed a whole variety of mathematical models to explain this, using the core idea of cumulative knowledge.
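
  In symbols: if the stock of knowledge K grows at a rate proportional to its current size, dK/dt = rK, then K(t) = K(0)e^(rt), an exponential whose doubling time is ln(2)/r. A minimal numerical sketch, with the rate chosen to match the seven-and-a-half-year doubling mentioned earlier:

    import math

    r = math.log(2) / 7.5  # growth rate giving a 7.5-year doubling time
    K = 1.0                # initial stock of knowledge (arbitrary units)
    dt = 0.01              # small time step for Euler integration

    for _ in range(int(7.5 / dt)):
        K += r * K * dt    # growth proportional to current size

    print(round(K, 3))     # ~1.999: the stock doubles in one doubling time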

  So while exponential growth is not a self-fulfilling proposition, there is feedback, which leads to a sort of technological imperative: As there is more technological or scientific knowledge on which to grow, new technologies increase the speed at which they grow.

  But why does this continue to happen? Technological or scientific change doesn’t happen automatically; people are needed to create new ideas and concepts. Therefore, in addition to knowledge accumulation, we need to understand another piece that’s important to the growth of knowledge: population growth.

  • • •

  SOMEWHERE between ten thousand and twelve thousand years ago, the land bridge between Australia and Tasmania was submerged by rising seas. Up until that point individuals could easily walk between Australia and what became this small island off the southern coast of the mainland. Soon after the land bridge vanished, something happened: The tiny population of Tasmania became one of the least technologically advanced societies on the planet.

 
