
This Will Make You Smarter


by John Brockman


  In principle, there should be no limit to the diversity of supernatural beings that humans can imagine. However, as the anthropologist Pascal Boyer has argued, only a limited repertoire of such beings is exploited in human religions. Its members—ghosts, gods, ancestor spirits, dragons, and so on—have in common two features:

  They each violate some major intuitive expectations about living beings: the expectation of mortality, of belonging to one and only one species, of being limited in one’s access to information, and so on.

  They satisfy all other intuitive expectations and are therefore, in spite of their supernaturalness, rather predictable.

  Why should this be so? Because being “minimally counterintuitive” (Boyer’s phrase) makes for “relevant mysteries” (my phrase) and is a cultural attractor. Imaginary beings that are either less or more counterintuitive than that are forgotten or are transformed in the direction of this attractor.

  And what is the attractor around which the “meme” meme gravitates? The meme idea—or rather a constellation of trivialized versions of it—has become an extraordinarily successful bit of contemporary culture not because it has been faithfully replicated again and again but because our conversation often does revolve (and here is the cultural attractor) around remarkably successful bits of culture that, in the time of mass media and the Internet, pop up more and more frequently and are indeed quite relevant to our understanding of the world. They attract our attention even when—or, possibly, especially when—we don’t understand that well what they are and how they come about. The meaning of “meme” has drifted from Dawkins’s precise scientific idea to a means to refer to these striking and puzzling objects.

  This was my answer. Let me end by posing a question (which time will answer): Is the idea of a cultural attractor itself close enough to a cultural attractor for a version of it to become in turn a “meme”?

  Scale Analysis

  Giulio Boccaletti

  Physicist; atmospheric and oceanic scientist; expert associate principal, McKinsey & Company

  There is a well-known saying: Dividing the universe into things that are linear and those that are nonlinear is very much like dividing the universe into things that are bananas and things that are not. Many things are not bananas.

  Nonlinearity is a hallmark of the real world. It occurs any time that outputs of a system cannot be expressed in terms of a sum of inputs, each multiplied by a simple constant—a rare occurrence in the grand scheme of things. Nonlinearity does not necessarily imply complexity, just as linearity does not exclude it, but most real systems do exhibit some nonlinear feature that results in complex behavior. Some, like the turbulent stream from a water tap, hide deep nonlinearity under domestic simplicity, while others—weather, for example—are evidently nonlinear to the most distracted of observers. Nonlinear complex dynamics are around us: Unpredictable variability, tipping points, sudden changes in behavior, hysteresis—all are frequent symptoms of a nonlinear world.
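The failure of superposition that defines nonlinearity can be shown in a few lines of code. The sketch below uses the logistic map, a standard toy model (my choice of example, not one the essay names), to check that the output of a sum of inputs is not the sum of the outputs:

```python
# Nonlinearity as failure of superposition: for a linear map f,
# f(a + b) == f(a) + f(b). The logistic map x -> r*x*(1 - x) breaks this.
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.1, 0.2
print(logistic(a) + logistic(b))  # ~0.975
print(logistic(a + b))            # ~0.819, not the sum of the parts
```

The same map, iterated, also shows the sensitive dependence on initial conditions that makes nonlinear systems like weather hard to predict.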

  Nonlinear complexity also has the unfortunate characteristic of being difficult to manage, high-speed computing notwithstanding, because it tends to lack the generality of linear solutions. As a result, we have a tendency to view the world in terms of linear models—for much the same reason that looking for lost keys under a lamppost might make sense because that’s where the light is. Understanding seems to require simplification: a simplification in which complexity is reduced wherever possible and only the most material parts of the problem are preserved.

  One of the most robust bridges between the linear and the nonlinear, the simple and the complex, is scale analysis, the dimensional analysis of physical systems. It is through scale analysis that we can often make sense of complex nonlinear phenomena in terms of simpler models. At its core reside two questions. The first asks what quantities matter most to the problem at hand (which tends to be less obvious than one would like). The second asks what the expected magnitude and—importantly—dimensions of such quantities are. This second question is particularly important, as it captures the simple yet fundamental point that physical behavior should be invariant to the units we use to measure quantities. It may sound like an abstraction, but, without jargon, you could really call scale analysis “focusing systematically only on what matters most at a given time and place.”
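The two questions can be made concrete with a textbook case (the pendulum is my illustration, not the essay's). For a swinging pendulum, the quantities that plausibly matter are length L [m], gravity g [m/s^2], and mass m [kg]; the only combination of these with the dimensions of time is sqrt(L/g), so the period must be a dimensionless constant times that, with no equations of motion solved:

```python
import math

# Scale analysis of a pendulum: T = C * sqrt(L / g) for some dimensionless
# constant C, because sqrt(L/g) is the only way to build a time from the
# quantities that matter (mass cannot enter: nothing cancels its kilograms).
# Fuller theory fixes C = 2*pi for small swings, but the scaling alone
# already makes predictions.
def period_scale(L, g=9.81):
    return math.sqrt(L / g)

# Doubling the length multiplies the period by sqrt(2):
print(period_scale(2.0) / period_scale(1.0))  # ~1.414
```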

  There are some subtle facts about scale analysis that make it more powerful than simply comparing orders of magnitude. A most remarkable example is that scale analysis can be applied, through a systematic use of dimensions, even when the precise equations governing the dynamics of a system are not known. The great physicist G. I. Taylor, a character whose prolific legacy haunts any aspiring scientist, gave a famous demonstration of this deceptively simple approach. In the 1950s, back when the detonating power of the nuclear bomb was a carefully guarded secret, the U.S. government incautiously released some declassified photographs of a nuclear explosion. Taylor realized that while the details would be complex, the fundamentals of the problem would be governed by a few parameters. From dimensional arguments, he posited that there ought to be a scale-invariant number linking the radius of the blast, the time from detonation, the energy released in the explosion, and the density of the surrounding air. From the photographs, he was able to estimate the radius and timing of the blast, deriving a remarkably accurate—and embarrassingly public—estimate of the energy of the explosion.
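Taylor's argument can be sketched numerically. The only dimensionless combination of radius R [m], time t [s], air density rho [kg/m^3], and energy E [J] is E*t^2/(rho*R^5), so E scales as rho*R^5/t^2. The radius and time below are illustrative stand-ins of the right order of magnitude, not the figures Taylor read off the photographs:

```python
# Taylor's dimensional estimate of blast energy. He showed the dimensionless
# constant in E ~ rho * R**5 / t**2 is close to 1, so the scaling alone
# gives the energy. R and t here are illustrative, not Taylor's readings.
def blast_energy(R, t, rho=1.2):
    """Energy in joules from blast radius R [m] at time t [s]."""
    return rho * R**5 / t**2

E = blast_energy(R=130.0, t=0.025)  # joules
kilotons = E / 4.184e12             # 1 kiloton of TNT = 4.184e12 J
print(f"{kilotons:.0f} kt")         # roughly 17 kt, of the order Taylor found
```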

  Taylor’s capacity for insight was no doubt uncommon: Scale analysis seldom generates such elegant results. Nevertheless, it has a surprisingly wide range of applications and an illustrious history of guiding research in applied sciences, from structural engineering to turbulence theory.

  But what of its broader application? The analysis of scales and dimensions can help us understand many complex problems and should be part of everybody’s toolkit. In business planning and financial analysis, for example, the use of ratios and benchmarks is a first step toward scale analysis. It is certainly not a coincidence that they became common management tools at the height of Taylorism—a different Taylor, F. W. Taylor, the father of modern management theory—when “scientific management” and its derivatives made their first mark. The analogy is not without problems and would require more detail than we have room for here—for example, on the use of dimensions to infer relations between quantities. But inventory turnover, profit margin, debt and equity ratios, and labor and capital productivity are dimensional parameters that could tell us a great deal about the basic dynamics of business economics, even without detailed market knowledge or the day-to-day details of individual transactions.

  In fact, scale analysis in its simplest form can be applied to almost every quantitative aspect of daily life, from the fundamental time scales governing our expectations of returns on investment to the energy intensity of our lives. Ultimately, scale analysis is a particular form of numeracy—one where the relative magnitude as well as the dimensions of the things that surround us guide our understanding of their meaning and evolution. It almost has the universality and coherence of Warburg’s Mnemosyne Atlas: a unifying system of classification, where distant relations between seemingly disparate objects can continuously generate new ways of looking at problems and, through simile and dimension, can often reveal unexpected avenues of investigation.

  Of course, any time a complicated system is translated into a simpler one, information is lost. Scale analysis is a tool only as insightful as the person using it. By itself, it does not provide answers and is no substitute for deeper analysis. But it offers a powerful lens through which to view reality and to understand “the order of things.”

  Hidden Layers

  Frank Wilczek

  Physicist, MIT; recipient, 2004 Nobel Prize in physics; author, The Lightness of Being: Mass, Ether, and the Unification of Forces

  When I first took up the piano, merely hitting each note required my full attention. With practice, I began to work in phrases and chords. Eventually I made much better music with much less conscious effort.

  Something powerful had happened in my brain.

  That sort of experience is very common, of course. Something similar occurs whenever we learn a new language, master a new game, or get comfortable in a new environment. It seems very likely that a common mechanism is involved. I think it’s possible to identify, in broad terms, what that mechanism is: We create hidden layers.

  The scientific concept of a hidden layer arose from the study of neural networks. Here, a little picture is worth a thousand words:

  [Figure: a layered neural network, with sensory neurons at the top, hidden layers in between, and effector neurons at the bottom.]

  In this picture, the flow of information runs from top to bottom. Sensory neurons—the eyeballs at the top—take input from the external world and encode it into a convenient form (which is typically electrical pulse trains for biological neurons and numerical data for the computer “neurons” of artificial neural networks). They distribute this encoded information to other neurons, in the next layer below. Effector neurons—the stars at the bottom—send their signals to output devices (which are typically muscles for biological neurons and computer terminals for artificial neurons). In between are neurons that neither see nor act upon the outside world directly. These interneurons communicate only with other neurons. They are the hidden layers.

  The earliest artificial neural networks lacked hidden layers. Their output was, therefore, a relatively simple function of their input. Those two-layer, input-output “perceptrons” had crippling limitations. For example, there is no way to design a perceptron that, faced with a series of different pictures of a few black circles on a white background, counts the number of circles. It took until the 1980s, decades after the pioneering work, for people to realize that including even one or two hidden layers could vastly enhance the capabilities of their neural networks. Nowadays such multilayer networks are used, for example, to distill patterns from the explosions of particles that emerge from high-energy collisions at the Large Hadron Collider. They do it much faster and more reliably than humans possibly could.
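The limitation, and the fix, can be sketched in a few lines. XOR is the classic function no two-layer perceptron can compute, yet two hidden units suffice. The weights below are hand-picked for illustration, not learned:

```python
# Why hidden layers matter: XOR cannot be computed by any single-layer
# perceptron, but one hidden layer of two units solves it.
def step(x):
    """Threshold activation: fire (1) if the weighted input is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)      # hidden unit 1: fires if either input is on (OR)
    h2 = step(a + b - 1.5)      # hidden unit 2: fires only if both are on (AND)
    return step(h1 - h2 - 0.5)  # output: OR but not AND, i.e., XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0, 1, 1, 0
```

Each hidden unit recognizes an intermediate feature (OR, AND) that the output layer then combines, which is exactly the role hidden layers play at any scale.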

  David Hubel and Torsten Wiesel were awarded a 1981 Nobel Prize for figuring out what neurons in the visual cortex are doing. They showed that successive hidden layers first extract features likely to be meaningful in visual scenes (for example, sharp changes in brightness or color, indicating the boundaries of objects) and then assemble them into meaningful wholes (the underlying objects).

  In every moment of our adult waking life, we translate raw patterns of photons hitting our retinas—photons arriving every which way from a jumble of unsorted sources and projected onto a two-dimensional surface—into the orderly, three-dimensional visual world we experience. Because it involves no conscious effort, we tend to take that everyday miracle for granted. But when engineers tried to duplicate it in robotic vision, they got a hard lesson in humility. Robotic vision remains today, by human standards, primitive. Hubel and Wiesel exhibited the architecture of nature’s solution. It is the architecture of hidden layers.

  Hidden layers embody in a concrete physical form the fashionable but rather vague and abstract idea of emergence. Each hidden-layer neuron has a template. The neuron becomes activated, and sends signals of its own to the next layer, precisely when the pattern of information it is receiving from the preceding layer matches (within some tolerance) that template. But this is just to say, in precision-enabling jargon, that the neuron defines, and thus creates, a new emergent concept.
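The template-matching behavior described above can be sketched as a dot product against a stored pattern, compared with a threshold (the patterns and threshold here are arbitrary illustrations):

```python
# A hidden-layer neuron as a template matcher: it activates when the
# incoming pattern overlaps its stored template strongly enough.
def neuron(inputs, template, threshold=2.5):
    activation = sum(i * t for i, t in zip(inputs, template))
    return activation > threshold

template = [1, 0, 1, 1]                 # the concept this neuron "defines"
print(neuron([1, 0, 1, 1], template))   # matching pattern -> True (fires)
print(neuron([0, 1, 0, 0], template))   # mismatched pattern -> False (silent)
```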

  In thinking about hidden layers, it’s important to distinguish between the routine efficiency and power of a good network, once that network has been set up, and the difficult issue of how to set it up in the first place. That difference is reflected in the difference between playing the piano (or, say, riding a bicycle, or swimming) once you’ve learned (easy) and learning to do it in the first place (hard). Understanding exactly how new hidden layers get laid down in neural circuitry is a great unsolved problem of science. I’m tempted to say it’s the greatest unsolved problem.

  Liberated from its origin in neural networks, the concept of hidden layers becomes a versatile metaphor with genuine explanatory power. For example, in my work in physics I’ve noticed many times the impact of inventing names for things. When Murray Gell-Mann invented “quarks,” he was giving a name to a paradoxical pattern of facts. Once that pattern was recognized, physicists faced the challenge of refining it into something mathematically precise and consistent, but identifying the problem was the crucial step toward solving it! Similarly, when I invented “anyons,” for theoretical particles existing in only two dimensions, I knew I had put my finger on a coherent set of ideas, but I hardly anticipated how wonderfully those ideas would evolve and be embodied in reality. In cases like this, names create new nodes in hidden layers of thought.

  I’m convinced that the general concept of hidden layers captures deep aspects of the way minds—whether human, animal, or alien; past, present, or future—do their work. Minds create useful concepts by embodying them in a specific way: namely, as features recognized by hidden layers. And isn’t it pretty that “hidden layers” is itself a most useful concept, worthy to be included in hidden layers everywhere?

  “Science”

  Lisa Randall

  Physicist, Harvard University; author, Warped Passages: Unraveling the Mysteries of the Universe’s Hidden Dimensions

  The word “science” itself might be the best answer to this year’s Edge Question. The idea that we can systematically understand certain aspects of the world and make predictions based on what we’ve learned, while appreciating and categorizing the extent and limitations of what we know, plays a big role in how we think. Many words that summarize the nature of science, such as “cause and effect,” “predictions,” and “experiments”—as well as words describing probabilistic results such as “mean,” “median,” “standard deviation,” and the notion of “probability” itself—help us understand more specifically what we mean by “science” and how to interpret the world and the behavior within it.

  “Effective theory” is one of the more important notions within science—and outside it. The idea is to determine what you can actually measure and decide, given the precision and accuracy of your measuring tools, and to find a theory appropriate to those measurable quantities. The theory that works might not be the ultimate truth, but it’s as close an approximation to the truth as you need and is also the limit to what you can test at any given time. People can reasonably disagree on what lies beyond the effective theory, but in a domain where we have tested and confirmed it, we understand the theory to the degree that it’s been tested.

  An example is Newton’s laws of motion, which work as well as we will ever need when they describe what happens to a ball when we throw it. Even though we now know that quantum mechanics is ultimately at play, it has no visible consequences on the trajectory of the ball. Newton’s laws are part of an effective theory that is ultimately subsumed into quantum mechanics. Yet Newton’s laws remain practical and true in their domain of validity. It’s similar to the logic you apply when you look at a map. You decide the scale appropriate to your journey—are you traveling across the country, going upstate, or looking for the nearest grocery store?—and use the map scale appropriate to your question.
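A small worked example of Newton's laws as an effective theory for the thrown ball (the launch numbers are arbitrary): classical kinematics fixes the trajectory completely, and quantum corrections lie far below anything measurable in this domain:

```python
import math

# Range of a projectile launched at speed v [m/s] and angle theta, ignoring
# air resistance: R = v**2 * sin(2*theta) / g. Within its domain of
# validity this effective theory is as exact as we will ever need.
def throw_range(v, theta_deg, g=9.81):
    theta = math.radians(theta_deg)
    return v**2 * math.sin(2 * theta) / g

print(round(throw_range(20.0, 45.0), 1))  # ~40.8 m for a 20 m/s throw at 45 degrees
```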

  Terms that refer to specific scientific results can be efficient at times, but they can also be misleading when taken out of context and not supported by true scientific investigation. However, the scientific methods for seeking, testing, and identifying answers, and for understanding the limitations of what we have investigated, will always be reliable ways of acquiring knowledge. A better understanding of the robustness and limitations of what science establishes, as well as of probabilistic results and predictions, could make the world a place where people make better-informed decisions.

  The Expanding In-Group

  Marcel Kinsbourne

  Neurologist and cognitive neuroscientist, The New School; coauthor (with Paula Caplan), Children’s Learning and Attention Problems

  The ever-cumulating dispersion not only of information but also of population across the globe is the great social phenomenon of this age. Regrettably, cultures are being homogenized, but cultural differences are also being demystified, and intermarriage is escalating, across ethnic groups within states and among ethnicities across the world. The effects are potentially beneficial for the improvement of cognitive skills, from two perspectives. We can call these the “expanding in-group” and the “hybrid vigor” effects.

  The in-group-vs.-out-group double standard, which had and has such catastrophic consequences, could in theory be eliminated if everyone alive were considered to be in everyone else’s in-group. This utopian prospect is remote, but an expansion of the conceptual in-group would expand the range of friendly, supportive, and altruistic behavior. This effect may already be in evidence in the increase in charitable activities in support of foreign populations confronted by natural disasters; donors identifying more strongly with recipients makes this possible. The rise in international adoptions also indicates that the barriers set up by discriminatory and nationalistic prejudice are becoming porous.

  The other potential benefit is genetic. The phenomenon of hybrid vigor in offspring, which is also called heterozygote advantage, derives from a cross between dissimilar parents. It is well established experimentally, and the benefits of mingling disparate gene pools are seen not only in improved physical but also improved mental development. Intermarriage therefore promises cognitive benefits. Indeed, it may already have contributed to the Flynn effect, the well-known worldwide rise in average measured intelligence by as much as three IQ points per decade over successive decades since the early twentieth century.

 
