This Will Make You Smarter


by John Brockman


  Entanglement is “spooky action at a distance,” as Einstein liked to say (he actually did not like it at all, but at some point he had to admit that it exists). In quantum physics, two particles are entangled when a change in one particle is immediately associated with a change in the other particle. Here comes the spooky part: We can separate our “entangled buddies” as far apart as we like, and they will remain entangled. A change in one is instantly reflected in the other, even though they are physically far apart (and I mean in different countries!).

  Entanglement feels like magic. It is really difficult to wrap our heads around it. Yet entanglement is a real phenomenon, measurable and reproducible in the lab. And there is more. While for many years entanglement was thought to be a very delicate phenomenon, observable only in the infinitesimally small world of quantum physics (“Oh good, our world is immune from that weird stuff”) and quite volatile, recent evidence suggests that entanglement may be much more robust and widespread than we initially thought. Photosynthesis may happen through entanglement, and recent brain data suggest that entanglement may play a role in coherent electrical activity of distant groups of neurons in the brain.

  Entanglement is a good cognitive chunk, because it challenges our cognitive intuitions. Our minds seem built to prefer relatively mechanical cause-and-effect stories as explanations of natural phenomena. And when we can’t come up with one of those stories, we tend to resort to irrational thinking—the kind of magic we feel when we think about entanglement. Entangled particles teach us that our beliefs about how the world works can seriously interfere with our understanding of it. But they also teach us that if we stick with the principles of good scientific practice, of observing, measuring, and then reproducing phenomena that we can frame in a theory (or that are predicted by a scientific theory), we can make sense of things. Even weird things, like entanglement.

  Entanglement is also a good cognitive chunk because it whispers to us that seemingly self-evident cause-and-effect phenomena may not be cause-and-effect at all. The timetable of modern vaccination, probably the biggest accomplishment in modern medicine, coincides with the onset of symptoms of autism in children. This temporal correspondence may mislead us to think that the vaccination may have produced the symptoms, hence the condition of autism. At the same time, that temporal correspondence should make us suspicious of straightforward cause-and-effect associations, inviting us to take a second look and conduct controlled experiments to find out whether or not there really is a link between vaccines and autism. We now know there is no such link. Unfortunately, this belief is hard to eradicate and is producing in some parents the potentially disastrous decision to not vaccinate their children.

  The story of entanglement is a great example of the capacity of the human mind for reaching out almost beyond itself. The key word here is “almost.” Because we “got there,” it is self-evident that we could “get there.” But it didn’t feel like it, did it? Until we managed to observe, measure, and reproduce that phenomenon predicted by quantum theory, it just felt a little “spooky.” (It still feels a bit spooky, doesn’t it?) Humans are naturally inclined to reject facts that do not fit their beliefs—and, indeed, when confronted with those facts they tend to automatically reinforce their beliefs and brush the facts under the carpet. The beautiful story of entanglement reminds us that we can go “beyond ourselves,” that we don’t have to desperately cling to our beliefs, and that we can make sense of things. Even spooky ones.

  Technology Paved the Way for Humanity

  Timothy Taylor

  Archaeologist, University of Bradford, UK; author, The Artificial Ape: How Technology Changed the Course of Human Evolution

  The very idea of a “cognitive toolkit” is one of the most important items in our cognitive toolkit. It is far more than just a metaphor, for the relationship between actual physical tools and the way we think is profound and of immense antiquity.

  Ideas such as evolution and a deep prehistory for humanity are as factually well established as the idea of a round Earth. Only bigots and the misled can doubt them. But the idea that the first chipped stone tool predates, by at least half a million years, the expansion of mind that is so characteristic of humans should also be knowable by all.

  The idea that technology came before humanity and, evolutionarily, paved the way for it, is the scientific concept that I believe should be part of everybody’s cognitive toolkit. We could then see that thinking through things and with things, and manipulating virtual things in our minds, is an essential part of critical self-consciousness. The ability to internalize our own creations by abstracting them and converting “out there” tools into mental mechanisms is what allows the entire scientific project.

  Time Span of Discretion

  Paul Saffo

  Technology forecaster; managing director of foresight at Discern Analytics; distinguished visiting scholar in the Stanford Media X research network, Stanford University

  Half a century ago, while advising a UK metals company, Elliott Jaques had a deep and controversial insight. He noticed that workers at different levels of the company had very different time horizons. Line workers focused on tasks that could be completed in a single shift, whereas managers devoted their energies to tasks requiring six months or more to complete. Meanwhile, their CEO was pursuing goals realizable only over the span of several years.

  After several decades of empirical study, Jaques concluded that just as humans differ in intelligence, we differ in our ability to handle time-dependent complexity. We all have a natural time horizon we are comfortable with: what Jaques called “time span of discretion,” or the length of the longest task an individual can successfully undertake. Jaques observed that organizations implicitly recognize this fact in everything from titles to salary. Line workers are paid hourly, managers annually, and senior executives are compensated with longer-term incentives, such as stock options.

  Jaques also noted that effective organizations were composed of workers of differing time spans of discretion, each working at a level of natural comfort. If a worker’s job was beyond his natural time span of discretion, he would fail. If it was less, he would be insufficiently challenged and thus unhappy.

  Time span of discretion is about achieving intents that have explicit time frames. And in Jaques’s model, one can rank discretionary capacity in a tiered system. Level 1 encompasses jobs such as sales associates or line workers, handling routine tasks with a time horizon of up to three months. Levels 2 to 4 encompass various managerial positions, with time horizons from one to five years. Level 5 covers time horizons of five to ten years and is the domain of small-company CEOs and large-company executive vice presidents. Beyond Level 5, one enters the realm of statesmen and legendary business leaders, comfortable with innate time horizons of twenty years (Level 6), fifty years (Level 7), or beyond. Level 8 is the realm of hundred-year thinkers, like Henry Ford, while Level 9 is the domain of the Einsteins, Gandhis, and Galileos, individuals capable of setting grand tasks into motion that continue centuries into the future.

  Jaques’s ideas enjoyed currency into the 1970s and then fell into eclipse, assailed as unfair stereotyping or, worse, as a totalitarian stratification evocative of Aldous Huxley’s Brave New World. It is now time to reexamine Jaques’s theories and revive “time span of discretion” as a tool for understanding our social structures and matching them to the overwhelming challenges facing global society. Perhaps problems like climate change are intractable because we have a political system that elects Level 2 thinkers to Congress, when we really need Level 5s in office. As such, Jaques’s ideas might help us realize that the old saying “He who thinks longest wins” is only half the story, and that the society in which everyone explicitly thinks about tasks in the context of time will be the most effective.

  Defeasibility

  Tania Lombrozo

  Cognitive psychologist, University of California–Berkeley

  On its face, defeasibility is a modest concept, with roots in logic and epistemology. An inference is defeasible if it can potentially be “defeated” in light of additional information. Unlike deductively sound conclusions, the products of defeasible reasoning remain subject to revision, held tentatively no matter how firmly.

  All scientific claims—whether textbook pronouncements or haphazard speculations—are held defeasibly. It is a hallmark of the scientific process that claims are forever vulnerable to refinement and rejection, hostage to what the future could bring. Far from being a weakness, this is a source of science’s greatness. Because scientific inferences are defeasible, they remain responsive to a world that can reveal itself gradually, change over time, and deviate from our dearest assumptions.

  The concept of defeasibility has proved valuable in characterizing artificial and natural intelligence. Everyday inferences, no less than scientific inferences, are vetted by the harsh judge of novel data—additional information that can potentially defeat current beliefs. On further inspection, the antique may turn out to be a fake and the alleged culprit an innocent victim. Dealing with an uncertain world forces cognitive systems to abandon the comforts of deduction and engage in defeasible reasoning.

  Defeasibility is a powerful concept when we recognize it not as a modest term of art but as the proper attitude toward all belief. Between blind faith and radical skepticism is a vast but sparsely populated space where defeasibility finds its home. Irreversible commitments would be foolish, boundless doubt paralyzing. Defeasible beliefs provide the provisional certainty necessary to navigate an uncertain world.

  Recognizing the potential revisability of our beliefs is a prerequisite to rational discourse and progress, be it in science, politics, religion, or the mundane negotiations of daily life. Consider the world we could live in if all of our local and global leaders, if all of our personal and professional friends and foes, recognized the defeasibility of their beliefs and acted accordingly. That sure sounds like progress to me. But of course I could be wrong.

  Aether

  Richard Thaler

  Economist; director, Center for Decision Research, Booth School of Business, University of Chicago; coauthor (with Cass Sunstein), Nudge: Improving Decisions About Health, Wealth, and Happiness

  I recently posted a question on Edge asking people to name their favorite example of a wrong scientific belief. One of my prized answers came from Clay Shirky. Here is an excerpt:

  The existence of ether, the medium through which light (was thought to) travel. It was believed to be true by analogy—waves propagate through water, and sound waves propagate through air, so light must propagate through X, and the name of this particular X was ether.

  It’s also my favorite because it illustrates how hard it is to accumulate evidence for deciding something doesn’t exist. Ether was both required by nineteenth-century theories and undetectable by nineteenth-century apparatus, so it accumulated a raft of negative characteristics: it was odorless, colorless, inert, and so on.

  Several other entries (such as the “force of gravity”) shared the primary function of ether: They were convenient fictions able to “explain” some otherwise ornery facts. Consider this quote from Max Pettenkofer, the German chemist and physician, disputing the role of bacteria as a cause of cholera: “Germs are of no account in cholera! The important thing is the disposition of the individual.”

  So in answer to the current Edge Question, I am proposing that we now change the usage of the word “Aether,” using the old spelling, since there is no need for a term that refers to something that does not exist. Instead, I suggest we use that term to describe the role of any free parameter used in a similar way: that is, “Aether is the thing that makes my theory work.” Replace the word “disposition” with “Aether” in Pettenkofer’s sentence to see how it works.

  Often, Aetherists (theorists who rely on an Aether variable) think their use of the Aether concept renders their theory untestable. This belief is often justified during their lifetimes, but then along come clever empiricists, such as A. A. Michelson and Edward Morley, and last year’s tautology becomes this year’s example of a wrong theory.

  Aether variables are extremely common in my own field of economics. Utility is the thing you must be maximizing in order to render your choice rational.

  Both risk and risk aversion are concepts that were once well defined but are now in danger of becoming Aetherized. Stocks that earn surprisingly high returns are labeled as risky, because, in the theory, excess returns must be accompanied by higher risk. If, inconveniently, the traditional measures of risk, such as variance or covariance with the market, are not high, then the Aetherists tell us there must be some other risk; we just don’t know what it is.

  Similarly, traditionally the concept of risk aversion was taken to be a primitive; each person had a parameter, gamma, that measured her degree of risk aversion. Now risk aversion is allowed to be time varying, and Aetherists can say with straight faces that the market crashes of 2001 and 2008 were caused by sudden increases in risk aversion. (Note the direction of the causation. Stocks fell because risk aversion spiked, not vice versa.)

  So the next time you are confronted with such a theory, I suggest substituting the word “Aether” for the offending concept. Personally, I am planning to refer to the time-varying variety of risk aversion as Aether aversion.

  Knowledge As a Hypothesis

  Mark Pagel

  Professor of evolutionary biology, University of Reading, UK; external professor, Santa Fe Institute

  The Oracle of Delphi famously pronounced Socrates to be “the most intelligent man in the world because he knew that he knew nothing.” Over two thousand years later, the mathematician-turned-historian Jacob Bronowski would emphasize—in the last episode of his landmark 1970s television series The Ascent of Man—the danger of our all-too-human conceit of thinking we know something, as evidenced in the Nazi atrocities of the Second World War. What Socrates knew, and what Bronowski had come to appreciate, is that knowledge—true knowledge—is difficult, maybe even impossible, to come by. It is prone to misunderstanding and counterfactuals, and, most important, it can never be acquired with exact precision; there will always be some element of doubt about anything we come to “know” from our observations of the world.

  What is it that adds doubt to our knowledge? It is not just the complexity of life; uncertainty is built into anything we measure. No matter how well you can measure something, you might be wrong by up to half the smallest unit you can discern.

  If you tell me I am 6 feet tall, and you can measure to the nearest inch, I might actually be 5' 11 ½" or 6' ½" and you (and I) won’t know the difference. If something is really small, you won’t even be able to measure it, and if it is really, really small, a light microscope (and thus your eye, both of which can see only objects larger than the shortest wavelength of visible light) won’t even know it’s there.
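
  Put in symbols (a minimal restatement of the height example above; the notation $\hat{x}$, $x$, and $\Delta$ is mine, not the author’s): if an instrument reads $\hat{x}$ at resolution $\Delta$, the true value $x$ satisfies

  \[ \lvert x - \hat{x} \rvert \le \tfrac{\Delta}{2}, \]

  so a reading of $\hat{x} = 72$ inches (6 feet) at $\Delta = 1$ inch tells us only that $x \in [71.5, 72.5]$ inches, that is, anywhere from 5'11½" to 6'0½".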

  What if you measure something repeatedly?

  This helps, but consider the plight of those charged with maintaining international standards of weights and measures. There is a lump of metal stored under a glass case in Sèvres, France. It is, by the decree of Le Système international d’unités, the definition of a kilogram. How much does it weigh? Well, by definition, whatever it weighs is a kilogram. But the fascinating thing is that it has never weighed exactly the same twice. On those days that it weighs less than a kilogram, you’re not getting such a good deal at the grocery store. On other days, you are.

  The often blithe way in which scientific “findings” are reported by the popular press can mask just how difficult it is to acquire reliable knowledge. Height and weight are—as far as we know—single dimensions. Consider, then, how much more difficult it is to measure something like intelligence, or the risk of getting cancer from eating too much meat, or whether cannabis should be legalized, or whether the climate is warming and why, or what a “shorthand abstraction” or even “science” is, or the risk of developing psychosis from drug abuse, or the best way to lose weight, or whether it is better to force people receiving state benefits to work, or whether prisons are a deterrent to crime, or how to quit smoking, or whether a glass of wine every day is good for you, or whether 3-D glasses will hurt your children’s eyes, or even just the best way to brush your teeth. In each case, what was actually measured, or who was measured? Who were they compared to, for how long? Are they like you and me? Were there other factors that could explain the outcome?

  The elusive nature of knowledge should remind us to be humble when interpreting it and acting on it, and this should grant us both a tolerance and skepticism toward others and their interpretations. Knowledge should always be treated as a hypothesis. It has only just recently emerged that Bronowski, whose family was slaughtered at Auschwitz, himself worked with Britain’s Royal Air Force during the Second World War calculating how best to deliver bombs—vicious projectiles of death that don’t discriminate between good guys and bad guys—to the cities of the Third Reich. Maybe Bronowski’s later humility was born of this realization—that our views can be wrong and they can have consequences for others’ lives. Eager detractors of science as a way of understanding the world will jump on these ideas with glee, waving them about as proof that “nothing is real” and that science and its outputs are as much a human construct as art or religion. This is facile, ignorant, and naïve.

  Measurement and the “science” or theories it spawns must be treated with humility precisely because they are powerful ways of understanding and manipulating the world. Observations can be replicated—even if imperfectly—and others can agree on how to make the measurements on which they depend, be they measurements of intelligence, the mass of the Higgs boson, poverty, the speed at which proteins can fold into their three-dimensional structures, or how big gorillas are.

 
