The Half-Life of Facts

by Samuel Arbesman


  So too with species and many other discoveries. As mentioned earlier, when a field is young the discoveries come easily, and they are often the ones that explain a lot of what is going on—or, in the case of species, are the really big ones. Many things that we know about are incredibly common or relatively easy to know; they’re in the main portion of this distribution. But there are uncountably more discoveries, although far rarer, in the tail of this distribution of discovery. As we delve deeper, whether it’s into discovering the diversity of life in the oceans or the shape of the Earth, we begin to truly understand the world around us.

  So what we’re really dealing with is the long tail of discovery. Our search for what’s way out at the end of that tail, while it might not be as important or as Earth-shattering as the blockbuster discoveries, can be just as exciting and surprising. Each new little piece can teach us something about what we thought was possible in the world and help us to asymptotically approach a more complete understanding of our surroundings. Whether it’s finding multicellular creatures that can live without oxygen or a shrimp that had been thought dead for sixty million years, both of which were found by participants in the Census of Marine Life project, each new discovery adds to all that we know about the universe, in its rich complexity and diversity.

  • • •

  THERE is an order to how science accumulates and explains everything around us that allows us to construct an intricate and ever-improving theory of our world. But just as the science of science can explain how facts are both created and overturned, it can also lead us to understand how other sorts of facts change. And many of these facts are related to the world of technology.

  CHAPTER 4

  Moore’s Law of Everything

  I had my first experience with the Internet in the early 1990s. I activated our 300-baud modem, allowed it to begin its R2-D2–like hissing and whistling, and began to telnet. A window on our Macintosh’s screen began filling with text and announced our connection to the computers of the local university through this now antiquated protocol. After exploring a series of text menus, I commenced my first download: a text document containing Plato’s Republic, via Project Gutenberg. Once I completed this task (no doubt after a significant fraction of an hour), I was ecstatic. I can distinctly remember jumping up and down, celebrating that I had gotten this entire book onto our computer using nothing but the phone lines and a lot of atonal beeping.

  It took me almost a decade after this incident to actually get around to reading The Republic. By the time I did, the notion that we ever expressed wonder at such a mundane activity as downloading a text document seemed quaint. In 2012, people stream movies onto their computers nightly without praising the modem gods. We have gone from the days of early Web pages, with their garish backgrounds and blinking text, to slick interactive sites using cascading style sheets, JavaScript, and so many other bells and whistles that make the entire experience smooth and multimedia-based. No one thinks any longer about modems or the details of bandwidth speeds. And certainly no one uses the word baud anymore.

  To understand how much has changed, and how rapidly, during the 1990s, we can look to the Today show. At one point in January 1994, Bryant Gumbel was asked to read an e-mail address out loud.

  He was at an utter loss, especially when it came to the “a, and then the ring around it.” This symbol, @, is second nature for us now, but Gumbel found it baffling. Gumbel and Katie Couric then went into a discussion about what the Internet is. They even asked those off camera, “What is ‘Internet’ anyway?”

  The @ symbol has been on keyboards1 since at least 1885, when it appeared on an early Underwood typewriter. However, it languished in relative obscurity until people began using it as a separator in e-mail addresses, beginning in 1971. Even then, its usage didn’t enter the popular consciousness until decades later. Gumbel’s confusion, and our amusement at this situation, is a testament to the rapid change that the Internet has wrought.

  But, of course, these changes aren’t limited to the Internet. When I think of a 386 processor I think of playing SimCity 2000 on my friend’s desktop computer, software and hardware that have both long since been superseded. In digital storage media, I have personally used 5¼-inch floppy disks, 3½-inch diskettes, Zip disks, rewritable CDs, flash drives, burnable DVDs, even the Commodore Datasette, and in 2012 I save many of my documents to the storage that’s available anytime I have access to the Internet: the cloud. This is over a span of less than thirty years.

  Clearly our technological knowledge changes rapidly, and this shouldn’t surprise us. But in addition to our rapid adaptation to all of the change around us—which I address in chapter 9—what should surprise us is that there are regularities in these changes in technological knowledge. It’s not random and it’s not erratic. There is a pattern, and it affects many of the facts that surround us, even ones that don’t necessarily seem to deal with technology. The first example of this? Moore’s Law.

  • • •

  WE all at least have heard of Moore’s Law. It deals with the rapid doubling of computer processing power. But what exactly is it and how did it come about? Gordon Moore, of the eponymous law, is a retired chemist and physicist as well as a cofounder of the Intel Corporation. He started Intel in 1968 with Robert Noyce, who helped invent the integrated circuit, the core of every modern computer. But Moore wasn’t famous or fabulously wealthy when he developed his law. In fact, he hadn’t even founded Intel yet. Three years earlier, Moore had written a short paper in the journal2 Electronics entitled “Cramming More Components Onto Integrated Circuits.”

  In this paper Moore predicted the number of components that it would be possible to place on a single circuit in the years 1970 and 1975. He argued that this number would continue to grow at the same rate. Essentially, Moore’s Law states that the processing power of a single chip or circuit will double every year. He didn’t arrive at this conclusion through exhaustive amounts of data gathering and analysis; in fact, he based his law on only four data points.

  The incredible thing is that he was right. This law has held roughly true since 1965, even as more and more data have been added to the simple picture he examined. While with more data we now know that the period for doubling is closer to eighteen months than a year, the principle stands. It has weathered the personal computer revolution, the march from 286 to 486 to Pentium, and the many advances since then. Just as in science, we have experienced an exponential rise in technological advances over time: Processing power grows every year at a constant rate rather than by a constant amount. And according to the original formulation, each year’s processing power is about 200 percent of the previous year’s.
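
  To see what a fixed doubling time implies, here is a minimal sketch in Python (my own illustration, assuming a constant eighteen-month doubling time rather than the history of any particular chip):

    # A minimal sketch of how a fixed doubling time implies exponential,
    # not linear, growth. The eighteen-month figure is an assumption for
    # illustration, not a measurement.
    def capacity(start, years, doubling_time_years):
        """Capacity after `years`, given a constant doubling time."""
        return start * 2 ** (years / doubling_time_years)

    # With an eighteen-month (1.5-year) doubling time, capability grows
    # roughly a hundredfold in a decade:
    print(capacity(1, 10, 1.5))   # about 101.6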

  Moore’s Law hasn’t simply affected our ability to make more and more calculations more easily. Many other developments occur as an outgrowth of this pattern. When processing power doubles rapidly it allows much more to be possible. For example, the number of pixels that digital cameras can process3 has increased directly due to the regularity of Moore’s Law.

  But it gets even more interesting. If you generalize Moore’s Law from chips to simply thinking about information technology and processing power in general, Moore’s Law becomes the latest in a long line of technical rules of thumb that explain extremely regular change in technology.

  What does this mean? Let’s first take the example of processing power. Rather than simply focusing on the number of components on an integrated circuit, we can think more broadly. What do these components do? They enable calculations to occur. So if we measure calculations per second, or calculations per second at a given cost (which is the kind of thing that might be useful when looking at affordable personal computers), we can ignore the specific underlying technologies that enable these things to happen and instead focus on what they are designed to do.

  Chris Magee set out to do exactly that. Magee is a professor at MIT in the Engineering Systems Division, an interdisciplinary department that defies any sort of simple description. It draws people from lots of different areas—physics, computer science, engineering, even aerospace science. But the common denominator is that all of these people think about complex systems—from traffic to health care—from the perspectives of engineering, management science, and the quantitative social sciences.

  Magee, along with a postdoctoral fellow, Heebyung Koh,4 decided to examine the progress we’ve made in our ability to calculate, or what they termed information transformation. They compiled a vast data set of all the different instances of information transformation that have occurred throughout history. Their data set, which goes back to the nineteenth century, is close to exhaustive: It begins with calculations done by hand in 1892 that clocked in at a little under one calculation a minute. Following that came: an IBM Hollerith Tabulator in 1919 that was only about four times faster; the ENIAC, which is often thought of as the world’s first computer, that used vacuum tubes to complete about four thousand calculations per second in 1946; the Apple II, which could perform twenty thousand calculations every second, in 1977; and, of course, many more modern and extremely fast machines.

  By lining up one technology after another, one thing becomes clear: Despite the differences among all of these technologies—human brains, punch cards, vacuum tubes, integrated circuits—the overall increase in humanity’s ability to perform calculations has progressed quite smoothly and extremely quickly. Put together, there has been a roughly exponential increase in our information transformation abilities over time.
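
  A quick back-of-the-envelope check, using only the figures quoted above (a sketch of my own, not Magee and Koh’s actual analysis), gives a sense of the pace of that climb:

    import math

    # Figures quoted in the text: roughly one calculation per minute by
    # hand in 1892, and twenty thousand calculations per second on the
    # Apple II in 1977.
    hand_rate = 1 / 60          # calculations per second, 1892
    apple_ii_rate = 20_000      # calculations per second, 1977
    years = 1977 - 1892

    doublings = math.log2(apple_ii_rate / hand_rate)
    print(f"about {doublings:.0f} doublings, one every {years / doublings:.1f} years")
    # about 20 doublings, one every 4.2 years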

  But how does this happen? Isn’t it true that when a new technology or innovation is developed it is often far ahead of what is currently in use? And if a new technology’s not that much better, shouldn’t it simply not be adopted? How can all of these combined technologies yield such a smooth and regular curve? Actually, the truth is far messier but much more exciting.

  In fact, when someone develops a new innovation, it is often largely untested. It might be better than what is currently in use, but it is clearly a work in progress. This means that the new technology is initially only a little bit better. As its developers improve and refine it (this is the part that often distinguishes engineering and practical application from basic science), they begin to realize the potential of this new innovation. Its capabilities begin to grow exponentially.

  But then a limit is reached. And when that limit is reached there is the opportunity to bring in a new technology, even if it’s still tentative, untested, and buggy. This progression of refinement and plateau for each successive innovation is in fact described in the mathematical world as a series of steadily rising logistic curves.

  This is a variation on the theme of the exponential curve. Imagine bacteria growing in a petri dish. At first, as they gobble the nutrients in the dish, they obey the doubling and rapid growth of the exponential curve. One bacterium divides into two bacteria, two bacteria become four, and eventually, one million becomes two million. But soon enough these bacteria bump up against certain limits. They begin to run out of space,5 literally bumping up against each other, since the size of the petri dish, though very large in the eyes of each individual bacterium, is far from infinite relative to the entire colony.

  Soon the growth slows, and eventually it approaches a certain steady number of bacteria, the number that can be safely held in the petri dish over a long period of time. This amount is known as the carrying capacity. The mathematical function that explains how something can quickly begin to grow exponentially, only to slow down until it reaches a carrying capacity, is known as a logistic curve.
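
  In its standard textbook form (a sketch of the usual equation, with invented numbers, not figures from any particular experiment), the logistic curve can be written and computed like this:

    import math

    def logistic(t, carrying_capacity, growth_rate, midpoint):
        """Population at time t: nearly exponential at first, then
        leveling off as it approaches the carrying capacity."""
        return carrying_capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

    # Early on, each time step multiplies the population by roughly
    # e**growth_rate; later, growth flattens near the carrying capacity
    # (here, an assumed one million bacteria):
    for t in (0, 5, 10, 15, 20):
        print(t, round(logistic(t, 1_000_000, 0.8, 10)))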

  Of course, the logistic curve describes lots more than bacteria. It can explain everything from how deer populate a forest to how the fraction of the world population with access to the Internet changes over time. It can also explain how people adopt something new.

  When a tech gadget is new, its potential for growth is huge. No one has it yet, so its usage can only grow. As people begin to buy the newest Apple device, for example, each additional user is gained faster and faster, obeying an exponential curve. But of course this growth can’t go on forever. Eventually the entire population that might possibly choose to adopt the gadget is reached. The growth slows down as it reaches this carrying capacity, obeying its logistic shape.

  These curves are also often referred to as S-curves, due to their stretched S-like shapes. This is the term that’s commonly used when discussing innovation adoption. Clayton Christensen, a professor at Harvard Business School,6 argues that a series of tightly coupled and successive S-curves—each describing the progression and lifetime of a single technology—can be combined sequentially when looking at what each consecutive technology is actually doing (such as transforming information) and together yield a steady and smooth exponential curve, exactly as Magee and Koh found. This is known as linked S-curve theory, and it describes how successive technologies combine to produce the shapes of change we see over time.

  Figure 4. Linked S-curves (or linked logistic curves). When combined, they can yield a smooth curve over time.
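
  A toy version of this idea (my own sketch, with invented numbers, not Magee and Koh’s model) stacks a few logistic curves, each of which levels off roughly where the next one takes over; the best available capability at any moment then keeps climbing:

    import math

    def logistic(t, ceiling, rate, midpoint):
        return ceiling / (1 + math.exp(-rate * (t - midpoint)))

    # Three successive technologies, each leveling off about a hundred
    # times higher than the last (illustrative numbers only).
    technologies = [
        dict(ceiling=1e2, rate=1.0, midpoint=5),
        dict(ceiling=1e4, rate=1.0, midpoint=15),
        dict(ceiling=1e6, rate=1.0, midpoint=25),
    ]

    # The capability in use at any time is the best of the available
    # curves; on a logarithmic scale its envelope roughly tracks one
    # long exponential rise.
    for t in range(0, 31, 5):
        best = max(logistic(t, **tech) for tech in technologies)
        print(t, round(best, 1))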

  But Magee and Koh didn’t simply expand Moore’s Law and examine information transformation. They looked at a whole host of technological functions to see how they have changed over the years. From information storage and information transportation to how we deal with energy, in each case they found mathematical regularities.7

  This ongoing doubling of technological capabilities has even been found in robots.8 Rodney Brooks is a professor at MIT who has lived through much of the current growth in robotics and is himself a pioneer in the field. He even cofounded the company that created the Roomba. Brooks looked at how robots have improved over the years and found that their movement abilities—how far and how fast a robot can move—have gone through about thirteen doublings in twenty-six years. That means that we have had a doubling about every two years: right on schedule and similar to Moore’s Law.

  Kevin Kelly, in his book What Technology Wants,9 has cataloged a wide collection of technological growth rates that fit an exponential curve. The doubling time of each kind of technology, as shown in the following table, acts as a sort of half-life for it and is indicative of exponential growth: It’s the amount of time before what you have is out-of-date and you’re itching to upgrade.

  Technology                                    Doubling Time (in months)
  Wireless, bits per second                     10
  Digital cameras, pixels per dollar            12
  Pixels, per array                             19
  Hard-drive storage, gigabytes per dollar      20
  DNA sequencing, dollars per base pair         22
  Bandwidth, kilobits per second per dollar     30

  Notably, this table bears a striking similarity to the chart seen in chapter 2, from Price’s research. Technological knowledge exhibits rapid growth just like scientific knowledge.
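
  Those doubling times compound quickly. A small back-of-the-envelope calculation (my own, reusing a few of the table’s figures) shows how much each capability multiplies over five years:

    # Doubling times in months, taken from the table above.
    doubling_times = {
        "Wireless, bits per second": 10,
        "Digital cameras, pixels per dollar": 12,
        "Hard-drive storage, gigabytes per dollar": 20,
        "Bandwidth, kilobits per second per dollar": 30,
    }

    for tech, months in doubling_times.items():
        factor = 2 ** (60 / months)   # 60 months = five years
        print(f"{tech}: about {factor:.0f}x in five years")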

  But the relationship between the progression of technological facts and that of science is even more tightly intertwined. One of the simplest ways to begin seeing this is by looking at scientific prefixes.

  • • •

  IN chapter 8, I explore how advances in measurement enable the creation of new facts and new knowledge. But one fundamental way that measurement is affected is through the tools that we have to understand our surroundings. And we can see the effects of technological advances in measurement by looking at it in one small and simple area: the scientific prefix.

  The International Bureau of Weights and Measures, which is responsible for defining the length of a meter, and for a long time maintained in a special vault the quintessential and canonical kilogram, is also in charge of providing the officially sanctioned metric prefixes. We are all aware of centi- (one hundredth), from the world of length, and giga- (one billion), from measuring hard disk space. But there are other, more exotic, prefixes. For example, femto- is one quadrillionth and zetta- is a sextillion (a one followed by twenty-one zeroes). The most recent prefixes are yotta- (10^24) and yocto- (10^-24), both approved in 1991.

  But while prefixes are entertaining, and can possibly allow you to win the odd bar bet, the creation of new ones is not just for fun. They are created only when there is a need for them. As technology and science advance exponentially, so too do the sizes of the prefixes we need. If you plot prefix sizes against the years10 they were introduced, you get a roughly exponential progression.
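
  One rough way to see this (my own compilation of a few adoption dates, not data from the book) is to track the largest officially named power of ten against the year it was adopted; the exponent climbs roughly linearly with time, which means the magnitudes themselves grow roughly exponentially:

    # Largest SI prefix magnitude (as a power of ten) versus the year it
    # was adopted -- an illustrative compilation, not an official table.
    largest_prefix = {
        1960: 12,   # tera-
        1975: 18,   # exa-
        1991: 24,   # yotta-
    }

    years = sorted(largest_prefix)
    for earlier, later in zip(years, years[1:]):
        gain = largest_prefix[later] - largest_prefix[earlier]
        print(f"{earlier}-{later}: about {gain / (later - earlier):.2f} orders of magnitude per year")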

  We only measure quantities when we can wrap our scientific minds around them, whether it’s measuring energy usage, examining tiny atoms, or thinking about astronomical distances. It would make little sense to have prefixes that referred to numbers in the sextillions, or larger, if we had no use for them. However, as we expand what we know, from the number of galaxies in our universe to the sizes of subatomic particles, we expand our need for prefixes. For example, the cost of genome sequencing is dropping rapidly,11 recently even faster than exponentially. All of these technological developments facilitate the rapid advance of science, and with it the need for new metric prefixes.

  These technological doublings in the realm of science12 are actually the rule rather than the exception. For example, there is a Moore’s Law of proteomics,13 the field that deals with large-scale data and analysis related to proteins and their interactions within the cell. Here too there is a yearly doubling in technological capability when it comes to understanding the interactions of proteins.

 
