Know This

edited by John Brockman


  On the other hand, scientists often underrate the practical importance of their discoveries, so that news about them does not begin to do justice to their implications. When Edison patented the phonograph in 1878, he believed it would be used primarily for speech, such as for dictation without the aid of a stenographer, for books that would speak to blind people, for telephone conversations that could be recorded, and so on. Only later did entrepreneurs realize the enormous value of recorded music. Once they did, the music industry developed rapidly.

  The laser is another example of the underrating of the practical implications of a scientific discovery. When Schawlow and Townes published their seminal paper describing the principle of the laser in Physical Review in 1958, it produced considerable excitement in the scientific community and eventually won them Nobel Prizes. However, neither these authors nor others in their group predicted the enormous and diverse practical implications of their discovery. Lasers, apart from their many uses in science, have enabled the development of fast computers, target designation in warfare, communication over very long distances, space exploration and travel, surgery to remove brain tumors, and numerous everyday uses—bar-code scanners in supermarkets, for example. Arthur Schawlow frequently expressed strong doubts about the laser’s practicality and often quipped that it would be useful only to burglars for safecracking. Yet advances in laser technology continue to make news to this day.

  Weather Prediction Has Quietly Gotten Better

  Samuel Arbesman

  Complexity scientist; scientist in residence, Lux Capital; author, Overcomplicated: Technology at the Limits of Comprehension

  Surveying the landscape of scientific and technological change, one sees a number of small advances that together have unobtrusively yielded something startling. Through a combination of computer-hardware advances (Moore’s Law marching on), ever more sophisticated algorithms for solving certain mathematical challenges, and larger amounts of data, we have got something new—really good weather prediction. According to a recent paper in Nature,

  Advances in numerical weather prediction represent a quiet revolution because they have resulted from a steady accumulation of scientific knowledge and technological advances over many years that, with only a few exceptions, have not been associated with the aura of fundamental physics breakthroughs.*

  Despite their profound unsexiness, these predictive systems have yielded enormous progress. Our skill at forecasting the weather several days ahead has improved by roughly one additional day of useful lead time per decade over the past few decades, and it has changed how we think about this enormously complex system.

  This is important for a number of reasons. Understanding weather is vital for a huge range of human activities, from transportation to agriculture to disaster management. But there’s a potentially bigger reason. While I’m hesitant to extrapolate from the weather system to other complex systems—including many that are perhaps much more complex, such as living organisms or entire ecosystems—this development should give us some hope. That weather prediction has improved through advances in technology, science, and modeling means that other problems we might deem unsolvable needn’t be. This news of a “quiet revolution” in weather prediction might be a touchstone for how to think about predicting and understanding complex systems. Never say never when it comes to intractability.

  The Word: First As Art, Then As Science

  Brian Christian

  Philosopher; computer scientist; poet; co-author (with Tom Griffiths), Algorithms to Live By

  The phrase “news that stays news” was originally how Ezra Pound, in 1934, defined literature—and so it’s interesting to contemplate what, in the sciences, might meet that standard. The answer might be the emerging science of literature itself.

  Thinking about the means by which language works on the mind, Pound described a three-part taxonomy. First is phanopoeia—think “phantoms,” the images that a word or phrase conjures in the reader’s mind. Pound’s “petals on a wet black bough” is a perfect illustration. Phanopoeia, he says, is the poetic capacity most likely to survive translation. Second is melopoeia—think “melody,” the music words make. This encompasses rhyme and meter, alliteration and assonance, the things we take to be the classic backbones of poetic form. Though fiendishly difficult to translate faithfully, he notes, it doesn’t necessarily need to be, as this is the poetic capacity most likely to be appreciated even in a language you don’t know.

  Third and most enigmatic is a quality Pound called logopoeia and described as “akin to nothing but language,” “a dance of the intelligence among words.” This has proved the most elusive to characterize, but Pound later noted that he meant something like verbal register, the patterns of usage peculiar to each word. Take a pair of words like “doo” and “stool.” They can both denote the same thing; their sonic effects are about as near as any pair of words can be. And yet their difference in register—one juvenile, the other clinical—is so strong that the words can’t even be considered synonyms, as it’s almost impossible to imagine a context in which one could be substituted for the other.

  Logopoeia proves to be one of the most dazzling of poetic effects—see, for instance, the contemporary poet Ben Lerner, who writes lines like “a beauty incommensurate with syntax had whupped my cracker ass”—but also the most fragile. It’s almost impossible to translate faithfully, because every language divides its register space differently. See, for instance, the French film The Class (Entre les Murs), in which a teacher tells a pair of students they were behaving with “une attitude de pétasses.” The English version subtitled the line “acting like skanks” and prompted a minor furore over whether that particular word was stern enough to serve as an admonishment that would get through to an unruly student, yet inoffensive enough for a teacher to say without expecting to jeopardize his job, yet offensive enough to do exactly that. What’s more, an entire scene pivots on the fact that for the students the word strongly implies “prostitute,” whereas for the teacher it has no such pointed connotation. What word in English meets all those criteria? Maybe there is none.

  Logopoeia, in fact, is so fragile that it doesn’t even survive in its own language for long. The New York Times included the word “scumbag” in a crossword puzzle in 2006, a word almost charmingly inoffensive to their editorial staff and the majority of the public but jaw-droppingly inappropriate to readers old enough to remember the word when it couldn’t be spoken in polite company, as it explicitly summoned the image of a used condom. Changes like this are everywhere in a living language. In 1990, it would have been unthinkable for my parents to say “Yo,” for instance. In 2000, when they said it, it was painful and tone-deaf, a sad attempt to sound like a younger and/or cooler generation. By 2010, it was just about as normal as “Hey.” How could a reader (let alone a translator) some centuries hence possibly be expected to know the logopoetic freight of every single word at the time of the piece’s writing?

  For the first time in human history, we have the tools to answer this question. A century after logopoeia entered the humanities, it is becoming a science. Computational linguists now have access to corpora large enough, and computational means sufficient, to see these forces in action: to observe words as they emerge, mutate, and evolve; to quantify logopoeia, the subtlest and most ephemeral of linguistic effects.

  This has changed our sense of what a word is. The question is far from academic. When the FCC moved to release a set of documents from a settlement with AT&T to the public in the mid-2000s, AT&T argued in court that this constituted “an unwarranted invasion of personal privacy,” on the grounds that it was a “legal person” in the eyes of the law. The Third Circuit, in 2009, agreed, and the FCC appealed. The case went to the Supreme Court to decide, in effect, whether “person” and “personal” are two forms of the same word or two independent terms that happen to share a lot of their orthography (and at least some of their sense).

  The Court has traditionally turned to the Oxford English Dictionary in situations like this. In this instance, though, it turned instead to computational linguists, who performed an analysis across an enormous corpus of real-world usage to investigate whether the two words are used in the same contexts, in the vicinity of the same words. The analysis determined that they are not. The words were shown to be divergent enough to constitute two independent terms; thus not every “person” is necessarily entitled to “personal privacy.” The documents were released. “We trust,” wrote Chief Justice John Roberts in the majority decision, “that AT&T will not take it personally.”
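
  To make this concrete, here is a minimal, purely illustrative sketch of the kind of distributional comparison described above, not the analysis performed for the case: two words count as keeping similar company when they tend to occur near the same neighboring words. The toy corpus, window size, and function names are all invented for the example.

```python
from collections import Counter
from math import sqrt

# Tiny invented corpus; real studies draw on millions of documents.
corpus = (
    "the person signed the form . a person has personal rights . "
    "her personal files stayed private . the person left early . "
    "personal privacy matters to every person ."
).split()

def context_vector(word, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            counts.update(corpus[lo:i] + corpus[i + 1:hi])
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Higher values mean the two words keep more similar company.
print(cosine(context_vector("person"), context_vector("personal")))
```

  Real corpus studies rely on vastly larger text collections and richer statistical models, but the comparison at their core is the same.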

  The rapidly maturing science of computational linguistics, possible only in a Big Data era, has finally given scholars of the word what the telescope gave astronomers and the microscope gave biologists. That’s big news.

  And because words, more unstable than stars and squirrelier than paramecia, refuse to sit still, changing context subtly with every utterance, it’s news that will stay so. Pound would, I think, agree.

  The Convergence of Images and Technology

  Victoria Wyatt

  Associate professor of the indigenous arts of North America, University of Victoria, Canada

  The news is in pictures, literally and figuratively. Visual images have exploded through our world, challenging the primacy of written text. A photograph bridges the diversity of cultures and languages. Tens of thousands of independent agents send it racing through overlapping networks. Public responses surge globally with exponential speed. Political leaders act, or fail to.

  Never before have visual images so dynamically pervaded our daily lives. Never before have they been so influentially generated by amateurs as well as editors and advertisers. Digitization brings the creation of images within everyone’s purview. The Internet gives us the means to communicate visually and the imperative to do so. Images now form a necessary component of even heavily text-based Web sites. Social media coalesce around visual imagery. Written text works brilliantly in many ways, but it has never worked in quite this way.

  The convergence of technology and the visual does not announce itself with the éclat of a seminal scientific breakthrough. It claims no headlines. Our culture associates images with infancy. Pictures appear in childhood storybooks, disappearing as we progress to sophisticated novels. Our new emphasis on the image has much to surmount. In the future, some critics will condemn it as the tipping point in the death of literacy. They will be wrong. It is a tipping point and a stealthy one, but for a different reason. It lays the foundation for the paradigm shift essential to our survival.

  Reading is a linear experience. Alphanumeric text unfolds inexorably in unidirectional, chronological sequence. It calls on us to focus narrowly on symbols in lines isolated from context. To read, we retreat from our hugely complex visual environment.

  Granted, the content of written text can refer to complexities. It often does. Poetic prose can use rhymes and resonances to signal relationships and make meanings potent. Always, though, alphanumeric text comprises discrete segments, not holistic representations. We read words, sentences, paragraphs consecutively. We must gather them together ourselves to construct and consider the relationships therein. A visual image embodies the whole at a glance. All the intangible connections, all the invisible yet pregnant relationships between the component parts, present themselves in concert. It’s up to us to perceive these intertwined threads and make meaning of them. Sometimes we do; sometimes we don’t. Regardless, in the image they simultaneously exist.

  This is how we live. We do not experience our world as a series of discrete visible components. Such distortion of reality would have compromised evolutionary success. We intuit the network of invisible relationships underlying the concrete entities we see, and create holistic meaning accordingly. Visual images prompt us to do the same.

  Innovations in data visualization underscore the value of visual imagery in representing intangibles. Again, technology makes it possible. Computers find nonlinear patterns in space and time embedded in huge data sets. Tools such as spatial-mapping software make these complex connections vivid. Scientists have long used visualizations to portray natural systems. Increasingly, social and cultural researchers choose similar software to embody subjective human experience. Interactive maps show dynamic networks in process, not frozen instants of artificial stasis. As technology opens new avenues for exploration of relationships, disciplines across academia embrace fresh questions in emerging forms. To focus on intangibles, these questions demand the power of imagery.

  A tsunami of visual images washes over our world. But “tsunami,” though a visual metaphor, is a poor one, implying danger; rather, the new immersion in visual images counters a perilously segmented perspective. Written text is important in recent human history and will continue to be so, for obvious reasons. Authentically representing reality is not one of those reasons. Visual images gain such popularity and such currency today because they achieve what written text cannot: They show us the intangibles defining our world.

  One might think the elevation of the image would prompt us all the more to focus solely on what we see; in fact, immersion in visual imagery mirrors how we experience reality—constantly constructing meaning from invisible relationships in our visual field. The famous metaphor about perception, “You can’t see the forest for the trees,” hints at this process but misses the greater paradigm shift. The forest remains a visible entity. We need to discern the invisible, intangible ecosystem that underlies our forest and drives all that happens there. Visual images remind us to look. They help us focus on what we cannot see.

  Our future depends on how well we do that. Today’s marriage of technology and the visual gives us the means; the Internet gives us the popular demand. It mirrors the complexity of holistic visual experience, with all its intangible connections. Even in digital text, highlighted hyperlinks bombard us with visual reminders that relationships exist. We explore Web connections in orders and directions of our own choosing. We receive information and immediately consider whether, and with whom, to share it. A generation has grown up expecting assertive interaction with nonlinear formats. Technology paired with imagery frees us from the artificial isolation of linear reading. We will never return to that solitary confinement.

  The news is in pictures. We do stand at a tipping point, created by the convergence of images and technology. In the future, this moment may be decried as the death knell for literacy, just another item in a long list of societal failings. Or it may be extolled as the popular vanguard of a paradigm that makes global problem-solving possible. What it will mean in twenty-five years depends on what we all make it mean now.

  The Mindful Meeting of Minds

  Christine Finn

  Archeologist; journalist; author, Past Poetic: Archaeology and the Poetry of W. B. Yeats and Seamus Heaney

  Sometime around the end of the last century, a TV journalist I knew reported a news story on a school that was teaching meditation to its pupils. She was personally skeptical. The resulting piece was controversial. I gather the class was discontinued.

  Fast forward to last year, when I came across a cover story on meditation in a national magazine, illustrated with a blonde white woman in peaceful posture. The controversy this time was not about meditation per se but about the illustration, which seemed to some a cliché too far; the audience was broader than that. Not the subject matter, then, but what was seen as a narrow portrayal of it.

  Full disclosure: I am also a blonde white woman, and I have practiced meditation for twenty years. For much of that time I hesitated to admit it. I came to it as a postgrad working on an unfunded interdisciplinary thesis. Fizzy with discoveries in the fuzzy zone, I needed to corral my brain if I was to defend my argument as both art and science. And somewhere along the way, science got more interested in meditation. So now I can openly discuss having had my brainwaves sampled and what the results looked like on a graph.

  But my point here is not to make an argument for meditation. What I find interesting and newsworthy is the existence of a broadening dialogue between what was until recently a fringe subject and the rigorous realm of verification and repeatable experiment. C. P. Snow’s 1959 argument about the gulf between science and the humanities hits home in this Edge context. The current blurring of lines is encouraged by online media, even as meditation is being investigated as a salve to the digital age.

  In my example, a story about meditation reports the result of experiment, cites academic papers, draws conclusions, and suggests causes; the audience reports effects and experiential data from another form of experimentation—practice. The flow is as two-way as the attentive breathing at the center of meditation. And it has the potential to enlighten both scientist and practitioner. They can, of course, be one and the same.

  That’s what is newsworthy: the counterculture of science in the many stories (catch line, “Mindfulness”) now streaming through the media at a confident pace. Those stories gathered in parks, prisons, offices, hospitals, nursing homes, hospices—and schools. Moving betwixt and between, and toward an interesting new stillness.

  Carpe Diem

  Ernst Pöppel

  Neuroscientist; cofounder and chairman, Human Science Center, Ludwig-Maximilians-Universität, Munich; author, Mindworks

  Some 2,000 years ago, probably in 23 B.C.E., the Roman poet Horace published a collection of odes that has lasted, just as he predicted it would: “I have built a monument more lasting than bronze” (Exegi monumentum aere perennius). Although his words lacked modesty, he was right. The most famous ode (number 11 of the first book) is also one of the shortest, with only eight lines. Everyone knows the phrase “Enjoy the day” (carpe diem); the Latin implies more than just having fun: to actively grasp the opportunities of the day, to “seize the present.”

 
