The most common misunderstanding about science is that scientists seek and find truth. They don’t—they make and test models.
Kepler, packing Platonic solids to explain the observed motion of planets, made pretty good predictions, which were improved by his laws of planetary motion, which were improved by Newton’s laws of motion, which were improved by Einstein’s general relativity. Kepler didn’t become wrong because of Newton’s being right, just as Newton didn’t then become wrong because of Einstein’s being right; these successive models differed in their assumptions, accuracy, and applicability, not in their truth.
This is entirely unlike the polarizing battles that define so many areas of life: Either my political party, or religion, or lifestyle, is right or yours is, and I believe in mine. The only thing that’s shared is the certainty of infallibility.
Building models is very different from proclaiming truths. It’s a never-ending process of discovery and refinement, not a war to win or destination to reach. Uncertainty is intrinsic to the process of finding out what you don’t know, not a weakness to avoid. Bugs are features—violations of expectations are opportunities to refine them. And decisions are made by evaluating what works better, not by invoking received wisdom.
These are familiar aspects of the work of any scientist, or baby: It’s not possible to learn to talk or walk without babbling or toddling to experiment with language and balance. Babies who keep babbling turn into scientists who formulate and test theories for a living. But it doesn’t require professional training to make mental models—we’re born with those skills. What’s needed is not displacing them with the certainty of absolute truths that inhibit the exploration of ideas. Making sense of anything means making models that can predict outcomes and accommodate observations. Truth is a model.
E Pluribus Unum
Jon Kleinberg
Professor of computer science, Cornell University; coauthor (with David Easley), Networks, Crowds, and Markets: Reasoning About a Highly Connected World
If you used a personal computer twenty-five years ago, everything you needed to worry about was taking place in the box in front of you. Today, the applications you use over the course of an hour are scattered across computers all over the world; for the most part, we’ve lost the ability to tell where our data sit at all. We invent terms to express this lost sense of direction: Our messages, photos, and online profiles are all somewhere in “the cloud.”
The cloud is not a single thing. What you think of as your Gmail account or Facebook profile is in fact made possible by the teamwork of a huge number of physically dispersed components—a distributed system, in the language of computer science. But we can think of it as a single thing, and this is the broader point: The ideas of distributed systems apply whenever we see many small things working independently but cooperatively to produce the illusion of a single unified experience. This effect takes place not just on the Internet but in many other domains as well. Consider, for example, a large corporation that releases new products and makes public announcements as though it were a single actor, when we know that at a more detailed level it consists of tens of thousands of employees. Or a massive ant colony engaged in coordinated exploration, or the neurons of your brain creating your experience of the present moment.
The challenge for a distributed system is to achieve this illusion of a single unified behavior in the face of so much underlying complexity. And this broad challenge, appropriately, is in fact composed of many smaller challenges in tension with one another.
One basic piece of the puzzle is the problem of consistency. Each component of a distributed system sees different things and has a limited ability to communicate with everything else, so different parts of the system can develop views of the world that are mutually inconsistent. There are numerous examples of how this can lead to trouble, both in technological domains and beyond. Your handheld device doesn’t sync with your e-mail, so you act without realizing that there’s already been a reply to your message. Two people across the country both reserve seat 5F on the same flight at the same time. An executive in an organization “didn’t get the memo” and so strays off-message. A platoon attacks too soon and alerts the enemy.
It is natural to try “fixing” these kinds of problems by enforcing a single global view of the world and requiring all parts of the system to constantly refer to this global view before acting. But this undercuts many of the reasons you use a distributed system in the first place. It makes the component that provides the global view an enormous bottleneck and a highly dangerous single point of potential failure. The corporation doesn’t function if the CEO has to sign off on every decision.
To get a more concrete sense of some of the underlying design issues, it helps to walk through an example in a little detail—a basic kind of situation, in which we try to achieve a desired outcome with information and actions that are divided among multiple participants. The example is the problem of sharing information securely: Imagine trying to back up a sensitive database on multiple computers while protecting the data so that it can be reconstructed only if a majority of the backup computers cooperate. But since the question of secure information-sharing ultimately has nothing specifically to do with computers or the Internet, let’s formulate it instead using a story about a band of pirates and a buried treasure.
Suppose that an aging pirate king knows the location of a secret treasure and before retiring he intends to share the secret among his five shiftless sons. He wants them to be able to recover the treasure if three or more of them work together, but he also wants to prevent a “splinter group” of one or two from being able to get the treasure on their own. To do this, he plans to split the secret of the location into five “shares,” giving one to each son, in such a way that he ensures the following condition. If, at any point in the future, at least three of the sons pool their shares of the secret, then they will know enough to recover the treasure. But if only one or two pool their shares, they will not have enough information.
How to do this? It’s not hard to invent ways of creating five clues so that all of them are necessary for finding the treasure. But this would require unanimity among the five sons before the treasure could be found. How can we do it so that cooperation among any three is enough and cooperation among any two is insufficient?
Like many deep insights, the answer is easy to understand in retrospect. The pirate king draws a secret circle on the globe (known only to himself) and tells his sons that he’s buried the treasure at the exact southernmost point on this circle. He then gives each son a different point on this circle. Three points are enough to uniquely reconstruct a circle, so any three pirates can pool their information, identify the circle, and find the treasure. But for any two pirates, an infinity of circles pass through their two points, and they cannot know which is the one they need for recovering the secret. It’s a powerful trick and broadly applicable; in fact, versions of this secret-sharing scheme form a basic principle of modern data security, discovered by the cryptographer Adi Shamir, wherein arbitrary data can be encoded using points on a curve and reconstructed from knowledge of other points on the same curve.
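The pirate king's circle trick can be made concrete in a few lines of code. The sketch below is mine, not Shamir's original paper: it implements a minimal 3-of-5 threshold scheme over a prime field, hiding the secret as the constant term of a random quadratic polynomial (the algebraic analogue of the circle) and recovering it by Lagrange interpolation. The function names and the choice of prime are illustrative assumptions.

```python
# A minimal sketch of 3-of-5 Shamir secret sharing over a prime field.
# The secret is the constant term of a random degree-2 polynomial;
# any 3 shares determine the polynomial, while 2 leave it underdetermined,
# just as 3 points fix a circle but 2 do not.
import random

P = 2**127 - 1  # a Mersenne prime large enough to hold the secret

def make_shares(secret, n=5, k=3):
    # Random polynomial f(x) = secret + a1*x + a2*x^2 (degree k-1).
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(42)
assert reconstruct(shares[:3]) == 42              # any three sons suffice
assert reconstruct(random.sample(shares, 3)) == 42
```

Any two shares are consistent with a different polynomial for every possible secret, so a splinter group of two learns nothing at all.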
The literature on distributed systems is rich with ideas in this spirit. More generally, the principles of distributed systems give us a way to reason about the difficulties inherent in complex systems built from many interacting parts. And so to the extent that we sometimes are fortunate enough to get the impression of a unified Web, a unified global banking system, or a unified sensory experience, we should think about the myriad challenges involved in keeping these experiences whole.
A Proxemics of Urban Sexuality
Stefano Boeri
Architect, Politecnico of Milan; visiting professor, Harvard University Graduate School of Design; editor-in-chief, Abitare magazine
In every room, in every house, in every street, in every city, movements, relations, and spaces are also defined with regard to logics of sexual attraction-repulsion between individuals. Even the most insurmountable ethnic or religious barriers can suddenly disappear in the furor of intercourse; even the warmest and most cohesive community can rapidly dissolve in the absence of erotic tension. To understand how our cosmopolitan and multigendered cities work, we need a proxemics of urban sexuality.
Failure Liberates Success
Kevin Kelly
Editor-at-large, Wired magazine; author, What Technology Wants
We can learn nearly as much from an experiment that doesn’t work as from one that does. Failure is not something to be avoided but something to be cultivated. That’s a lesson from science that benefits not only laboratory research but design, sport, engineering, art, entrepreneurship, and even daily life itself. All creative avenues yield the maximum when failures are embraced. A great graphic designer will generate lots of ideas, knowing that most will be aborted. A great dancer realizes that most new moves will not succeed. Ditto for any architect, electrical engineer, sculptor, marathoner, startup maven, or microbiologist. What is science, after all, but a way to learn from things that don’t work, rather than just those that do? What this tool suggests is that you should aim for success while being prepared to learn from a series of failures. Beyond that, you should carefully but deliberately press your successful investigations or accomplishments to the point where they break, flop, stall, crash, or fail.
Failure was not always so noble. In fact, in much of the world today, failure is still not embraced as a virtue. It is a sign of weakness and often a stigma that prohibits second chances. Children in many parts of the world are taught that failure brings disgrace and that one should do everything in one’s power to succeed without failure. Yet the rise of the West is in many respects due to the rise in tolerating failure. Indeed, many immigrants trained in a failure-intolerant culture may blossom out of stagnancy once moved into a failure-tolerant culture. Failure liberates success.
The chief innovation that science brought to the state of defeat is a way to manage mishaps. Blunders are kept small, manageable, constant, and trackable. Flops are not quite deliberate, but they are channeled so that something is learned each time things fail. It becomes a matter of failing forward. Science itself is learning how to better exploit negative results. Due to the problems of costly distribution, most negative results have not been shared, thus limiting their potential to speed learning for others. But increasingly published negative results (which include experiments that succeed in showing no effects) are becoming another essential tool in the scientific method.
Wrapped up in the idea of embracing failure is the related notion of breaking things to make them better—particularly complex things. Often the only way to improve a complex system is to probe its limits by forcing it to fail in various ways. Software, among the most complex things we make, is usually tested for quality by employing engineers to systematically find ways to crash it. Similarly, one way to troubleshoot a complicated device that’s broken is to deliberately force negative results (temporary breaks) in its multiple functions in order to locate the actual dysfunction. Great engineers have a respect for breaking things that sometimes surprises nonengineers, just as scientists have a patience with failures that often perplexes outsiders. But the habit of embracing negative results is one of the most essential tricks to gaining success.
Holism
Nicholas A. Christakis
Physician and social scientist, Harvard University; coauthor (with James H. Fowler), Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives
Some people like to build sand castles and some like to tear them apart. There can be much joy in the latter, but it is the former that interests me. You can take a bunch of minute silica crystals, pounded for thousands of years by the waves, use your hands, and make an ornate tower. Tiny physical forces govern how each particle interacts with its neighbors, keeping the castle together—at least until the force majeure of a foot appears. But this is the part I like most: Having built the castle, you step back and look at it. Across the expanse of beach, here is something new, something not present before among the endless sand grains, something risen from the ground, something that reflects the scientific principle of holism.
Holism is colloquially summarized as “The whole is greater than the sum of its parts.” What interests me, however, are not the artificial instantiations of this principle—when we deliberately form sand into ornate castles, or metal into airplanes, or ourselves into corporations—but rather the natural instantiations. Examples are widespread and stunning. Perhaps the most impressive is that carbon, hydrogen, oxygen, nitrogen, sulfur, phosphorus, iron, and a few other elements, mixed in just the right way, yield life. And life has emergent properties not present in or predictable from these constituent parts. There is a kind of awesome synergy between the parts.
Hence, I think that the scientific concept that would improve everybody’s cognitive toolkit is holism: the abiding recognition that wholes have properties not present in the parts and not reducible to the study of the parts.
For example, carbon atoms have particular, knowable physical and chemical properties. But the atoms can be combined in different ways to make, say, graphite or diamonds. The properties of those substances—properties such as darkness and softness and clearness and hardness—are properties not of the carbon atoms but rather of the collection of carbon atoms. Moreover, which particular properties the collection of atoms has depends entirely on how they are assembled—into sheets or pyramids. The properties arise because of the connections between the parts. Grasping this insight is crucial for a proper scientific perspective on the world. You could know everything about isolated neurons and be unable to say how memory works or where desire originates.
It is also the case that the whole has a complexity that rises faster than the number of its parts. Consider social networks as a simple illustration. If we have 10 people in a group, there are a maximum of 10 x 9/2 = 45 possible connections between them. If we increase the number of people to 1,000, the number of possible ties increases to 1,000 x 999/2 = 499,500. So, while the number of people has increased by a hundredfold (from 10 to 1,000), the number of possible ties (and hence this one measure of the system’s complexity) has increased more than ten thousandfold.
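The arithmetic behind this quadratic growth is easy to check. This short snippet (my illustration, not from the essay) computes the maximum number of pairwise ties in a group of n people:

```python
# Maximum pairwise ties among n people: each of n people can connect
# to n-1 others, and dividing by 2 avoids counting each tie twice.
def possible_ties(n):
    return n * (n - 1) // 2

assert possible_ties(10) == 45
assert possible_ties(1000) == 499500
# A hundredfold increase in people yields an 11,100-fold increase in ties.
assert possible_ties(1000) // possible_ties(10) == 11100
```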
Holism does not come naturally. It is an appreciation not of the simple but of the complex—or, at least, of the simplicity and coherence in complex things. Unlike curiosity or empiricism, say, holism takes a while to acquire and appreciate. It is a grown-up disposition. Indeed, for the last few centuries the Cartesian project in science has been to break matter down into ever smaller bits in the pursuit of understanding. And this works to some extent. We can understand matter by breaking it down to atoms, then protons and electrons and neutrons, then quarks, then gluons, and so on. We can understand organisms by breaking them down into organs, then tissues, then cells, then organelles, then proteins, then DNA, and so on.
Putting things back together in order to understand them is harder and typically comes later in the development of a scientist or of science. Think of the difficulties in understanding how all the cells in our bodies work together, as compared with the study of the cells themselves. Whole new fields of neuroscience and systems biology and network science are arising to accomplish just this. And these fields are arising just now, after centuries of stomping on castles in order to figure them out.
TANSTAAFL
Robert R. Provine
Psychologist and neuroscientist, University of Maryland; author, Laughter: A Scientific Investigation
TANSTAAFL is the acronym for “There ain’t no such thing as a free lunch,” a universal truth having broad and deep explanatory power in science and daily life. The expression originated from the practice of saloons offering free lunch if you bought their overpriced drinks. Science fiction master Robert Heinlein introduced me to TANSTAAFL in The Moon Is a Harsh Mistress, his 1966 classic in which a character warns of the hidden cost of a free lunch.
The universality of the fact that you can’t get something for nothing has found application in sciences as diverse as physics (the laws of thermodynamics) and economics, where Milton Friedman used a grammatically upgraded variant as the title of his 1975 book There’s No Such Thing as a Free Lunch. Physicists are clearly on board with TANSTAAFL; less so, many political economists in their smoke-and-mirrors world.
My students hear a lot about TANSTAAFL—from the biological costs of the peacock’s tail to our nervous system, which distorts physical reality to emphasize changes in time and space. When the final tally is made, peahens cast their ballot for the sexually exquisite plumage of the peacock and its associated vigor; likewise, it is more adaptive for humans to detect critical sensory events than to be high-fidelity light and sound meters. In such cases, lunch comes at reasonable cost, as determined by the grim but honest accounting of natural selection, a process without hand waving and incantation.
Skeptical Empiricism
Gerald Holton
Mallinckrodt Professor of Physics and professor of the history of science, emeritus, Harvard University; coeditor, Einstein for the 21st Century: His Legacy in Science, Art and Modern Culture
In politics and society at large, important decisions are all too often based on deeply held presuppositions, ideology, or dogma—or, on the other hand, on headlong pragmatism without study of long-range consequences.
Therefore I suggest the adoption of skeptical empiricism, the kind exemplified by the carefully thought-out and tested research in science at its best. It differs from plain empiricism of the sort that characterized the writings of the scientist/philosopher Ernst Mach, who refused to believe in the existence of atoms because one could not “see” them.