
The Half-Life of Facts


by Samuel Arbesman


  Mathematics—from Ising models to probability—can help us to understand how rapid changes in the facts we know can occur around us. But are these phase transitions the rule or the exception? What should we expect more of: slow and steady changes in knowledge or extremely rapid shifts in the facts around us?

  While there will no doubt always be slow change in knowledge, many of us have an intuitive sense that facts are changing around us faster and faster, with rapid transitions occurring more often every day. There is scientific evidence to buttress this intuition. To understand this we have to understand how cities produce innovation.

  • • •

  RECENTLY, physicists have begun to take mathematical tools from their own field and apply them to understanding the relationship between the populations of cities and how they use energy and produce new ideas. Specifically, Luís Bettencourt and his colleagues,10 who are affiliated with the Santa Fe Institute, found that there are economies of scale for certain properties of cities. For example, the larger the population of a city, the smaller the number of gas stations that are necessary per capita; gas stations might be indicative of energy usage of the city as a whole, and it seems that larger cities are more efficient consumers of energy. This is similar to how larger organisms are more energy efficient than smaller ones.

  However, when looking at productivity and innovation, cities obey mathematical relationships that operate like increasing returns. For example, the yearly number of patents produced in a city per person is higher for bigger cities, and in a mathematically precise way. This sort of scaling is called superlinear, because things grow faster than they would at linear speeds, faster than a straight line. Double the population of a city, and it doesn’t simply double its productivity; it yields more than double the productivity and innovation. These relationships have been found in patents, a city’s gross metropolitan product, research and development budgets, and even the presence of so-called supercreative individuals, such as artists and academics.
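  To make the arithmetic concrete, here is a minimal sketch of a superlinear scaling law of the form Y = Y0 · N^β with β greater than one. The exponent of 1.15 below is an illustrative value of the sort reported in the urban-scaling literature, not a number quoted in this chapter.

```python
# A minimal sketch of superlinear scaling: output Y = Y0 * N**beta with beta > 1.
# The exponent 1.15 is illustrative, not a value taken from this chapter.

def urban_output(population, y0=1.0, beta=1.15):
    """Yearly output (say, patents) under a superlinear scaling law."""
    return y0 * population ** beta

small = urban_output(1_000_000)
doubled = urban_output(2_000_000)

# Doubling the population more than doubles the output:
print(doubled / small)  # 2**1.15, roughly 2.22
```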

  When resource availability and consumption adhere to sublinear scaling and drive growth, the way they do for living things,11 a system develops according to a simple progression: start small and grow rapidly, but eventually slow until a mature adult size is reached. However, if a system’s growth is dependent upon superlinear phenomena, as in the case of cities and innovation, the mathematics require the system to grow faster and faster, until it approaches an infinite growth rate.
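  A rough numerical sketch makes the contrast visible. Assume, purely for illustration, that growth is driven by resources that scale as N^β minus a maintenance cost proportional to N; this is not the researchers' exact model, and the parameter values below are arbitrary.

```python
# A rough sketch (not the researchers' exact model): dN/dt = a*N**beta - b*N.
# With beta < 1 growth levels off at a finite size; with beta > 1 the size
# races toward infinity in finite time. Parameter values are arbitrary.

def simulate(beta, n0=10.0, a=1.0, b=0.1, dt=0.01, steps=20000, cap=1e9):
    n, trajectory = n0, [n0]
    for _ in range(steps):
        n += dt * (a * n ** beta - b * n)   # simple Euler step
        if n > cap:                         # runaway growth: stop once it explodes
            break
        trajectory.append(n)
    return trajectory

sublinear = simulate(beta=0.75)    # creeps up toward (a/b)**(1/(1-beta)) = 10,000
superlinear = simulate(beta=1.25)  # blows past the cap long before the time horizon

print(len(sublinear), round(sublinear[-1]))
print(len(superlinear), round(superlinear[-1]))
```

In runs of this sketch the first case creeps toward a stable plateau, while the second races past any finite bound well before the simulated time runs out, which is the approach to an infinite growth rate described above.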

  But infinite growth can’t happen, either for organisms (cancer eventually overwhelms its own resources) or for cities. For cities, then, the only way to avoid being overwhelmed by this runaway growth is to undergo what the researchers deemed paradigmatic innovations, essentially to reset the parameters of growth. These innovations can encompass such changes as modern sewage systems—abundant waste limits the sizes of cities—or the birth of the skyscraper, which allows for increased urban population and density through the use of the third dimension. Whatever they are, these innovations allow the city to avoid becoming overwhelmed by its own growth. In the past these innovative resets occurred only every few hundred years, and for any single person these large-scale changes in facts were manageable. But no longer; we are living during the first time in history when multiple rapid changes can occur within a single human lifetime.

  As knowledge changes more and more rapidly, the resulting change in society can be drastic. Rather than changes in degree, we have changes in kind. For someone living in a small English village at any point during the early Middle Ages, aside from certain details, there would be little difference12 in one’s lifestyle between any two years. There might be alterations in fashion, but the rest of one’s life—manner of occupation, means of cooking and doing household chores—would remain unchanged. In fact, even fashion during the Middle Ages13 changed only about every fifty years, far more slowly than the decade-or-so turnover that industrialized countries have seen since the nineteenth century. Even if you lived through a rare innovative reset during the later medieval period (such as the introduction of gunpowder), it wasn’t so difficult to adapt, as such resets occurred only once every several generations.

  This has been true for most of human history. But with exponential increases in technology and innovation, these changes are coming much more rapidly. When changes are able to occur very quickly, we are in a special situation: The world around us seems to be ever poised on the edge of some rapid shift in facts and knowledge. A small change can cause a large shift in our knowledge at any moment.

  Of course, the world of facts is not the only system that can be in this sort of state; there are many other systems that can have this property. Imagine a complex ecological system in which the slightest change, such as the removal of a single species or the introduction of a pathogen, has the potential to upset the entire system. Or a party, where one person leaving suddenly kills the entire gathering. Or, at an even more basic level, imagine a pile of sand. Take a pile of sand and add a few grains, and the pile gets bigger. But as sand is added a bit more at a time, eventually adding just a few more grains triggers a rapid shift, a sort of avalanche of sand. Why the transition? What about that single grain yields a system that is right on the edge of a rapid shift?

  This question was examined in great detail by three physicists: Per Bak, Chao Tang, and Kurt Wiesenfeld. In 1987, they published a simple mathematical model that aimed to understand why small changes can yield a system that is always on the verge of large shifts.

  This model uses a grid, just like the Ising model. But here each location is a spot where a grain of sand can be added. And, according to mathematical rules, the model dictates what should happen if the pile of grains at a single location gets too high. Essentially, if the height goes above a specified value, the grains begin moving to neighboring points, similar to how water will run down a cone if poured onto its tip. When this simulated sandpile reaches what is known as a critical state, it exists in a situation exactly as described above: With each additional grain, there is absolutely no telling what will happen. There could be a tiny shifting of the sand, or a massive avalanche could be unleashed. Like Jenga or Ker Plunk, but with more mathematics, the system is constantly hovering at the brink of the unknown.
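  For readers who want to see the mechanism, here is a compact sketch in the spirit of the Bak-Tang-Wiesenfeld rules. The toppling rule (a site holding four or more grains spills one grain to each of its four neighbors, with grains falling off the edge lost) is the standard two-dimensional version; the grid size and the number of grains dropped are arbitrary choices made only for illustration.

```python
import random

# A compact sketch in the spirit of the Bak-Tang-Wiesenfeld sandpile on an
# L x L grid. A site holding four or more grains topples, sending one grain
# to each of its four neighbors; grains pushed off the edge are lost.

L = 20
grid = [[0] * L for _ in range(L)]

def relax(grid):
    """Topple until every site is stable; return the avalanche size."""
    avalanche = 0
    unstable = [(i, j) for i in range(L) for j in range(L) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue                      # already settled by an earlier topple
        grid[i][j] -= 4
        avalanche += 1
        if grid[i][j] >= 4:
            unstable.append((i, j))       # still over threshold; topple again
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return avalanche

sizes = []
for _ in range(20000):
    i, j = random.randrange(L), random.randrange(L)
    grid[i][j] += 1                       # drop a single grain at random
    sizes.append(relax(grid))             # usually nothing; occasionally a cascade

print(max(sizes), sizes.count(0))         # a few huge avalanches amid many quiet drops
```

In runs of this sketch, most single grains cause nothing at all, while a small number trigger avalanches that sweep across much of the grid, which is exactly the unpredictability described above.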

  It turns out that this sort of system, one that organizes itself to always be at the edge of a total avalanche, is the hallmark of actual systems we find in the real world, perhaps even including the world of knowledge. Though the comparison is a bit overly metaphorical, a world with the constant potential for rapid knowledge change would look just like ours.

  • • •

  WE are alive during an amazing time, one in which the potential for rapid changes in knowledge is all around us. Every day that we read the news we have the possibility of being confronted with a fact about our world that is wildly different from what we thought we knew. From Pluto no longer being a planet to humans walking on the moon—to limit our examples to outer space—that is exactly what has become the norm in our modern world.

  But it turns out that these rapid changes, while true phase transitions in our knowledge, are not unexpected or random. We understand how they behave in the aggregate, through the use of probability, but we can also predict these changes by searching for the slower, regular changes in our knowledge that underlie them.

  Fast changes in facts, just like everything else we’ve seen, have an order to them. One that is measurable and predictable.

  But what about measurement itself? We’ve explored one fact after another, but they can only really exist if we are able to quantify them. How measurement affects what we know is the subject of the next chapter.

  CHAPTER 8

  Mount Everest and the Discovery of Error

  IN 1800, the British Empire conceived of the Great Trigonometrical Survey, also known as the Survey of India. After the British took control of the Indian subcontinent, they were in need of accurate and detailed maps. How could you properly rule a subcontinent if you didn’t really know what it looked like? So Colonel William Lambton, the surveyor general of India, began this massive project of determining the precise locations of places throughout the colony.

  Starting at the southernmost tip of India in 1808, Lambton employed the triangulation technique. Using trigonometry, chains, metallic bars, monuments, theodolites—a type of optical measuring device—and observations of the stars, one can in general measure the distances and locations of nearly anything. Far from being sorcery, though the varied tools employed could lead one to that conclusion, this was a well-established means of surveying a land.
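  The core calculation behind triangulation is simple enough to sketch: measure one baseline directly, sight the same distant point from both of its ends, and the law of sines gives the remaining distances; chaining such triangles together can cover a subcontinent. The numbers below are invented purely for illustration.

```python
import math

# A toy version of the basic triangulation step. From a baseline of known
# length and the two angles measured at its ends toward a distant point,
# the law of sines gives the distances to that point.

baseline = 10.0                        # km, measured directly on the ground
angle_a = math.radians(62.0)           # angle at station A toward the peak
angle_b = math.radians(71.0)           # angle at station B toward the peak
angle_c = math.pi - angle_a - angle_b  # the triangle's third angle, at the peak

# Law of sines: each side divided by the sine of its opposite angle is equal.
dist_from_a = baseline * math.sin(angle_b) / math.sin(angle_c)
dist_from_b = baseline * math.sin(angle_a) / math.sin(angle_c)

print(round(dist_from_a, 2), round(dist_from_b, 2))  # about 12.93 and 12.07 km
```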

  The empire gradually came to understand India more and more precisely over the course of many decades. This massive project was a multigenerational one, a sort of scientific equivalent to the construction of the pyramids. When Lambton died the project was placed into the capable hands of a scientist named George Everest. Everest was in turn replaced by Andrew Scott Waugh in 1843. And under Waugh, the advanced survey techniques of the Great Arc of India, another name for this massive project, finally reached the Himalayas.

  Waugh knew that the mountains in that region were some of the tallest in the world, but he did not know their height. Measurements were made over several years, and the heights were then computed from them. It is unknown which individual was responsible for the calculation that a peak known as XV was the tallest in the world. But in 1856, Waugh announced that they had found the world’s tallest mountain, and he named it after his predecessor, Everest.

  • • •

  WE know that Mount Everest was determined to be the tallest mountain in the world, and that fact doesn’t seem to be changing, but what exactly is its height? By 1954, the observations that had been made varied by seventeen feet. Since then, though, the variation in calculations has decreased greatly. This has been due to innovations in measurement techniques. Now, rather than having to compare measurements between multiple survey stations perched atop surrounding peaks, surveyors minimize differences by using the Global Positioning System.

  In fact, the increase in precision has actually allowed us to know a new type of fact: The height of the mountain actually changes a bit every year. Mount Everest’s height seems to be subject to two competing forces. On the one hand, the collision of two continental plates (Asia and India) causes a certain amount of uplift each year, perhaps about a centimeter or so, although there seems to be some disagreement. On the other hand, other forces, such as erosion and melting glaciers, can cause a decrease in height. While it’s unclear how much it changes each year, we now know for certain1 that the height is never exactly constant. We also know that Mount Everest is moving laterally at quite a nice clip: six centimeters per year, making its location also a mesofact, one of those slowly changing pieces of knowledge. But only due to improvements in how we measure the world could we have learned these things.

  Consider the world’s tallest tree. By far the tallest trees in the world are the redwoods of California, coming in at nearly four hundred feet tall. The next tallest species is a type of eucalyptus in southern Australia, which tops out at around three hundred feet.

  While it’s possible to measure the height of any tree to within a few feet using a laser range finder, when it comes to crowning a world record holder a laser unfortunately isn’t precise enough. Instead, the tree actually has to be climbed in order to be properly measured.

  Through a combination of measurement, discovery, and lots of tree climbing on the part of botanists and surveyors, as of 2006 the world record for the tallest tree2 has hopped from the Rockefeller Tree (356 feet) to the Libbey Tree (367.8 feet) to the Stratosphere Giant (368.6 feet) to Hyperion (379.1 feet). While the overall record holder likely won’t change appreciably in height from year to year, many trees, just like Mount Everest, also undergo changes in their height. It’s a different type of change in this case, though often just as predictable: growth. And only once we know the true height can we measure how it grows.

  Ultimately, what facts are and how they change often comes down to measurement. Just as there have been systematic and quantifiable rules that govern many of the facts in our lives, measurement itself also obeys mathematics. Our increases in measurement precision follow certain well-defined regularities. In chapter 4, I showed how the description of our world using scientific prefixes has proceeded according to exponential growth. While we generally think of this in terms of technological growth—we start saying gigabyte instead of megabyte when our computers become more powerful—it can also be used to define our measurements, and our uncertainty. We now think of things in terms of billionths of centimeters instead of tenths of centimeters, because we have better rulers and measuring devices.

  Inscribed on the University of Chicago’s Social Science Research Building is a saying by Lord Kelvin: “When you cannot measure, your knowledge is meager and unsatisfactory.” While this can be viewed within the context of exploring the merits of quantitative analysis as compared to qualitative examination, it also can help us think about how we measure more precisely.

  Sinan Aral, a professor at the New York University Stern School of Business, has stated: “Revolutions in science have often been preceded3 by revolutions in measurement.” From the incorrect number of chromosomes to the misclassification of species, our increased preoccupation with measuring our surroundings allows us to both increase our knowledge and find opportunities in which large amounts of our knowledge will be overturned. A corollary to Lord Kelvin’s adage: If you can measure it, it can also be measured incorrectly. Measurement affects nearly everything we know, and this chapter is devoted to the many ways that measurement is intertwined with facts, beginning with how we have improved our measurement and analysis over time, and in turn our understanding of the world. Let’s start with the Scientific Revolution, when many minds were preoccupied with measurement.

  • • •

  CHRISTOPHER Wren, known today primarily for his architecture and work in the rebuilding of London after the Great Fire in 1666, was involved in many aspects of the creation of the modern scientific endeavor. Among these were his innovations in measurement. Along with John Wilkins, another important figure of the Scientific Revolution mentioned earlier, Wren was involved in the creation of the concept of the meter.

  In April 1668 Wilkins proposed at a meeting of the Royal Society that it was time for measurement to be standardized. Among the measurement standards he proposed was that of length. He argued that the base unit of length, from which all others would be derived, should be known as the standard and should be defined as follows: the length of a pendulum whose swing from one side to the other—known as a single half-period—takes exactly one second. This definition used the insight gained by Galileo decades earlier that all pendulums of equal length—regardless of the weights at their ends—swing at the same rate. Furthermore, no matter at what height you let go of the pendulum, it takes the same amount of time to go from one side to the other.

  This suggestion, which was made to Wilkins by Christopher Wren, yields the definition of a standard as thirty-nine and a quarter inches, remarkably similar to the current measure of a meter. Wilkins went on to define a regular system of lengths4 derived from the standard, such as a tenth of a standard being denoted a foot, and ten standards equaling a pearch. A cube with sides of a standard was proposed to be equal to a bushel.
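  The arithmetic behind that figure is easy to check, assuming the familiar small-angle formula for a pendulum’s period and a gravitational acceleration of about 9.81 meters per second squared (a value that in fact varies slightly from place to place, which becomes important below); a one-second half-period means a two-second period:

```latex
% Length of a "seconds pendulum" (half-period of one second, so T = 2 s),
% assuming the small-angle period formula and g of roughly 9.81 m/s^2.
T = 2\pi\sqrt{\tfrac{L}{g}}
\quad\Longrightarrow\quad
L = \frac{g T^{2}}{4\pi^{2}} = \frac{g}{\pi^{2}}
  \approx \frac{9.81\ \mathrm{m/s^{2}}}{9.87}
  \approx 0.994\ \mathrm{m}
  \approx 39.1\ \mathrm{inches}
```

The small gap between this 39.1 inches and the thirty-nine and a quarter inches quoted above is within what local variations in gravity and seventeenth-century instruments could plausibly account for.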

  It’s probably clear that these derived measurements didn’t stick. In fact, I would be hard-pressed to find anyone who knows the term pearch (in case you’re wondering, it’s similar to our decameter, which seems to be similarly unused). However, the standard, which was transmuted into the French metre, or meter, continues to exist today.

  But in the eighteenth century another approach to defining our unit of length eventually won out. Rather than using time to calculate a meter—which Wilkins argued was uniform throughout the universe and would therefore be hard to beat for constructing a unit of measurement—the other approach derived the meter from the distance between the equator and the North Pole. A meter then becomes one ten-millionth of this distance. Due to the variation in gravity over the surface of the Earth, which would affect a pendulum’s swing, the French Academy of Sciences chose the distance-based measurement in 1791.

  But there was a hitch. In addition to no one having actually visited the North Pole by 1791, the methods used to calculate the distance from there to the equator were of varying quality. Unlike in the previous cases discussed, not only were the properties of the Earth not completely known, but these unknown properties had a curious effect on the very measurements themselves. Since the imprecision of measuring the world affects the units that are defined from those measurements in the first place, there is a certain circularity when it comes to measurement. This creates a feedback loop in which the better we know how to measure various quantities, the more we improve the very nature of measurement itself.

  The story of the meter has been one of ever-changing definition. As technologies have advanced and different techniques have been proposed, the definition has evolved, and with each change its precision has increased, which ultimately is the point of any effective definition.

  While knowing the approximate length of a meter is helpful for many tasks, such as cutting a carpet or measuring one’s own height, it will not do when it comes to finer and more precise tasks, such as designing a circuit board. As the world’s complexity has progressed alongside technological and scientific development, more detailed and more exact measurements have become necessary. While I don’t particularly care if my height is off by a half centimeter or so, when it comes to measuring the size of microscopic organisms, I’m going to be a bit more punctilious.

 
