Until now, we couldn’t assume that people would know how to use a computer in the way we assume they know how to count. Our interactions with computational systems depended on first acquiring the skills of numeracy and literacy. You couldn’t learn how a computer worked without first knowing how to use a keyboard. That ensured that people learned about computers with relatively staid and inflexible old brains. We think of millennial high-school tech whizzes as precocious “digital natives.” But even they only really began to learn about computers after they’d reached puberty. And that is just when brain plasticity declines precipitously.
The change in interfaces means that the next generation really will be digital natives. They will be soaked in the digital world and will learn about computers the way previous generations learned language—even earlier than previous generations learned how to read and add. Just as every literate person’s brain has been reshaped by reading, my two-year-old granddaughter’s brain will be reshaped by computing.
Is this a cause for alarm or celebration? The simple answer is that we don’t know and won’t for at least another twenty years, when today’s two-year-olds grow up. But the history of our species should make us hopeful. After all, those powerful early learning mechanisms are exactly what allowed us to collectively accumulate the knowledge and skill we call culture. We can develop new kinds of technology as adults because we mastered the technology of the previous generation as children. From agriculture to industry, from stone tools to alphabets to printed books, we humans reshape our world, and our world reshapes our brains. Still, the emergence of a new player in this distinctively human process of cultural change is the biggest news there can be.
The Predictive Brain
Lisa Feldman Barrett
University Distinguished Professor of psychology, Northeastern University; neuroscientist and research scientist, Psychiatry Department, Massachusetts General Hospital; lecturer in psychiatry, Harvard Medical School
Your brain is predictive, not reactive. For many years, scientists believed that your neurons spend most of their time dormant and wake up only when stimulated by some sight or sound. Now we know that all your neurons are firing constantly, stimulating one another at various rates. This intrinsic brain activity is one of the great recent discoveries in neuroscience. Even more compelling is what this brain activity represents: millions of predictions of what you will encounter next in the world, based on your lifetime of past experience.
Many predictions are at a micro level, predicting the meaning of bits of light, sound, and other information from your senses. Every time you hear speech, your brain breaks up the continuous stream of sound into phonemes, syllables, words, and ideas by prediction. Other predictions are at the macro level. You’re interacting with a friend and, based on context, your brain predicts that she will smile. This prediction drives your motor neurons to move your mouth in advance to smile back, and your movement causes your friend’s brain to issue new predictions and actions, back and forth, in a dance of prediction and action. If predictions are wrong, your brain has mechanisms to correct them and issue new ones.
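As a rough illustration of this prediction-and-correction logic, here is a minimal sketch in Python. It is a toy, not Barrett’s account of the brain: the learning rate, the made-up “world,” and the single scalar prediction are all assumptions for illustration only.

```python
# Minimal sketch of a prediction-error loop, in the spirit of predictive
# processing. Purely illustrative: the brain's machinery is far richer, and
# every number here is invented for the example.

def predictive_loop(observations, learning_rate=0.5):
    """Track a signal by repeatedly predicting, comparing, and correcting."""
    prediction = 0.0
    for observed in observations:
        error = observed - prediction        # mismatch between world and model
        prediction += learning_rate * error  # correct the prediction
        yield prediction

# Example: the "world" shifts abruptly; the predictions catch up within a few steps.
world = [1.0] * 5 + [4.0] * 5
print([round(p, 2) for p in predictive_loop(world)])
```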
If your brain didn’t predict, sports couldn’t exist. A purely reactive brain wouldn’t be fast enough to parse the massive sensory input around you and direct your actions in time to catch a baseball or block a goal. You also would go through life constantly surprised.
The predictive brain will change how we understand ourselves, since most psychology experiments still assume the brain is reactive. Experiments proceed in artificial sequences called “trials,” where test subjects sit passively, are presented with images, sounds, words, etc., and make one response at a time—say, by pressing a button. Trials are randomized to keep one from affecting the next. In this highly controlled environment, the results come out looking as if the subject’s brain makes a rapid automatic response followed by a controlled choice about 150 milliseconds later—as if the two responses came from distinct systems in the brain. These experiments fail to account for a predicting brain, which never sits awaiting stimulation but continuously prepares multiple, competing predictions for action and perception, while actively collecting evidence to select between them. In real life, moments, or “trials,” are never independent, because each brain state influences the next. Most psychology experiments are therefore optimized to disrupt the brain’s natural process of prediction.
The predictive brain presents us with an unprecedented opportunity for new discoveries about how a human brain creates a human mind. New evidence suggests that thoughts, feelings, perceptions, memories, decision making, categorization, imagination, and many other mental phenomena, which historically are treated as distinct brain processes, can all be united by a single mechanism: prediction. Even our theory of human nature is up for grabs, as prediction deprives us of our most cherished narrative, the epic battle between rationality and emotions to control behavior.
A New Imaging Tool
Alun Anderson
Former editor-in-chief and publishing director, New Scientist; author, After the Ice
New tools and techniques in science don’t usually garner as much publicity as big discoveries, but there’s a sense in which they’re much more important. Think of telescopes and microscopes: Both opened vast fields of endeavor that are still spawning thousands of major advances. And although they may not make newspaper front pages, new tools are often the biggest news for scientists—published in prestigious journals and staying at the top of citation indices for years on end. Clever tools are the long-lasting news behind the news, driving science forward for decades.
One example has just come along which I really like. A new technique makes it possible to see directly the very fast electrical activity occurring within the nerve cells of the brain of a living, behaving animal. Neuroscientists have had something like this on their wish list for years, and it’s worth celebrating. The technique puts into nerve cells a special protein that can turn the tiny voltage changes of nerve activity into flashes of light. These can be seen with a microscope and recorded in exquisite detail, providing a direct window into brain activity and the dynamics of signals traveling through nerves. This is especially important because the hot news is that information contained in the nerve pulses speeding around the brain is likely coded not just in the rate at which those pulses arrive but also in their timing, with the two working at different resolutions. To start to speak neuron and thus understand our brains, we’ll have to come to grips with the dynamics of signaling and relate it to what an animal is actually doing.
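The rate-versus-timing point is easy to make concrete. The toy Python sketch below (hypothetical numbers, not data from the study discussed here) builds two spike trains with identical firing rates but very different spike timing, which is exactly the kind of structure that fast voltage imaging can now resolve.

```python
# Two spike trains with the same firing rate but different spike timing.
# All numbers are invented for illustration.

import numpy as np

duration = 1.0  # seconds
regular = np.linspace(0.05, 0.95, 10)                    # 10 evenly spaced spikes
bursty = np.concatenate([np.linspace(0.10, 0.19, 5),
                         np.linspace(0.80, 0.89, 5)])    # 10 spikes in two bursts

for name, spikes in [("regular", regular), ("bursty", bursty)]:
    rate = len(spikes) / duration        # identical: 10 Hz for both trains
    isi = np.diff(spikes)                # inter-spike intervals differ sharply
    print(f"{name}: rate = {rate:.0f} Hz, "
          f"mean ISI = {isi.mean() * 1000:.0f} ms, "
          f"ISI std = {isi.std() * 1000:.0f} ms")
```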
The new technique, developed by Yiyang Gong and colleagues in Mark Schnitzer’s lab at Stanford University and published in the journal Science, builds on past tools for imaging nerve impulses.* One well-established method takes advantage of the calcium ions that rush into a nerve cell as a signal speeds by. Special chemicals that emit light when they interact with calcium make this electrical activity visible, but they’re not fast or sensitive enough to capture the speed with which the brain works. The new technique goes further by using a rhodopsin protein (called Ace), which is sensitive to voltage changes in the nerve cell membrane, fused to another protein (mNeon), which fluoresces brightly. This imaging technique will take its place beside other recent developments extending the neuroscientist’s reach. New optogenetic tools enable researchers to use light signals to switch particular nerve cells off and on to help figure out what part they play in a larger circuit.
Without constantly inventing new ways to probe the brain, the eventual goal of understanding how our 90 billion nerve cells provide us with thought and feeling will remain intractable. Although we have some good insights into our cognitive strategies from psychology, deep understanding of how individual neurons work, and rapidly growing maps of brain circuitry, the vital territory in the middle—how circuits of particular linked neurons work—is tough to explore. To make progress, neuroscientists dream of experiments wherein they can record what’s happening in many nerves in a circuit while also switching parts of the circuit off and on and seeing the effect on a living animal’s behavior. Thanks to new tools, this remarkable dream is close to coming true; when it does, the toolmakers will once again have proved that in science it’s new tools that create new ideas.
Sensors: Accelerating the Pace of Scientific Discovery
Paul Saffo
Technology forecaster; consulting associate professor, Stanford University
Behind every great scientific discovery is an instrument. From Galileo and his telescope to Arthur Compton and the cloud chamber, our most important discoveries are underpinned by device innovations that extend human senses and augment human cognition. This is a crucially important science-news constant, because without new tools discovery would slow to a crawl. Want to predict the next big science surprise a decade from now? Look for the fastest-moving technologies and ask what new tools they enable.
For the last half-century, digital technology has delivered the most powerful tools, in the form of processors, networking, and sensors. Processing came first, providing the brains for space probes and the computational bulldozers needed for tackling computation-intensive research. Then with the advent of the Arpanet, the Internet, and the World Wide Web, networking became a powerful medium for accessing and sharing scientific knowledge—and connecting remotely to everything from supercomputers to telescopes. But it is the third category—sensors, and an even newer category of robust effectors—that is poised to accelerate and utterly change research and discovery in the decades ahead.
First, we created our computers, then we networked them, and now we are giving them sensory organs to observe—and manipulate—the physical world in the service of science. And thanks to the phenomenon described by Moore’s Law, sensor cost/performance is racing ahead as rapidly as chip performance. Ask any amateur astronomer: For a few thousand dollars, they can buy digital cameras that were beyond the reach of observatories a decade ago.
The entire genomics field owes its existence and its future to sensors. Craig Venter’s team became the first to decode the human genome in 2001, leveraging computational power and sensor advances to create a radically new and radically less expensive sequencing process. Moreover, the cost of sequencing is already dropping faster than Moore’s Law would predict. Follow out the Carlson Curve (as the sequencing price/performance curve was dubbed by The Economist), and the cost of sequencing a genome is likely to plummet below $1.00 well before 2030. Meanwhile, the gene editing enabled by the CRISPR/Cas9 system is practical only because of ever more powerful and affordable sensors and effectors. Just imagine the science possible when sequencing a genome costs a dime and networked sequencing labs-on-a-chip are cheap enough to be discarded like RFID tags.
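The arithmetic behind that extrapolation is simple exponential decay. The Python sketch below is a back-of-the-envelope check under assumed figures (a $1,000 genome today and a cost that halves every eighteen months; both are stand-in numbers, not figures from the essay):

```python
# Back-of-the-envelope extrapolation of an exponentially falling cost.
# Starting cost and halving time are assumptions for illustration only.

import math

def years_until(cost_now, target, halving_years):
    """Years until a cost that halves every `halving_years` drops below `target`."""
    return halving_years * math.log2(cost_now / target)

# A $1,000 genome with an 18-month halving time reaches $1.00 in roughly 15 years.
print(round(years_until(cost_now=1000.0, target=1.0, halving_years=1.5), 1))
```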
Sensors and digital technology are also driving physics discovery. The heart of CERN’s Large Hadron Collider is the CMS detector, a 14,000-ton assemblage of sensors and effectors that has been dubbed “science’s cathedral.” Like a cathedral of old, it is served by nearly 4,000 people drawn from over forty countries, and it’s so popular that a scientific journal featured a color-in centerfold of the device in its year-end issue.
Sensors are also opening vast new windows on the cosmos. Thanks to the relentless advance of sensors and effectors in the form of adaptive optics, the discovery of extrasolar planets moved from science fiction to commonplace with breathtaking speed. In the near future, sensor advances will allow us to analyze exoplanetary atmospheres and look for signatures of civilization. The same trends will open new horizons for amateur astronomers, who will soon enjoy affordable technical means to match the Kepler space telescope in planet-finding prowess. Sensors are thus as much about democratizing amateur science as about creating ever more powerful instruments. The Kepler satellite imaged a field of about 115 square degrees, a mere 0.25 percent of the sky. Planet-finding amateurs wielding digitally empowered backyard scopes could put a serious dent in the 99.75 percent of the sky yet to be examined.
Another recent encounter between amateurs and sensors offers a powerful hint of what is to come. Once upon a time, comets were named after human discoverers, and amateurs hunted comets with such passion that more than one would-be comet hunter relocated eastward in order to get an observing jump on the competition. Now, comets have names like 285P/LINEAR, because robotic systems are doing the discovering, and amateur comet-hunting is in steep decline. Amateurs will find other things to do (like searching for planets), but it’s hard not to feel a twinge of nostalgia for a lost time when that wispy apparition across the sky carried a romantic name like Hale-Bopp or Ikeya-Seki rather than C/2011 L4 (PanSTARRS).
This shift in cometary nomenclature hints at an even greater sea change to come in the relationship between instrument and discoverer. Until now, the news has been of ever more powerful instruments created in the service of amplifying human-driven discovery. But just as machines today are better comet finders than humans, we are poised on the threshold of a time when machines do not merely amplify but displace the human researcher. When that happens, the biggest news of all will be when a machine wins a Nobel Prize alongside its human collaborators.
3D Printing in the Medical Field
Syed Tasnim Raza
Medical director, Cardiac Surgery Step-Down Unit, Columbia University Medical Center and New York–Presbyterian University Hospital
Within the field of medicine, arguably the greatest progress of the last few decades has been in clinical imaging, starting with simple X-rays and moving to such current technologies as CAT scans and fMRI. Then there is ultrasonography, used extensively in diagnostic and therapeutic interventions (such as amniocentesis during pregnancy) and in imaging of the heart (echocardiography). Cardiologists have used various other imaging modalities for the diagnosis of heart conditions, including heart catheterization. They perform contrast studies by injecting radio-opaque material into the heart chambers or blood vessels while recording moving images (angiograms). And then there is Computed Tomographic Angiography of the heart (CTA), with its 3D reconstruction, providing detailed information about cardiac structure.
Now comes 3D printing, adding another dimension to the imaging of the human body. In its current form, engineers use computer-aided design (CAD) programs to develop a three-dimensional computer model of the object to be “printed” (or built), which is then translated into a series of two-dimensional slices of the object. The 3D printer then lays down layer upon layer, thousands if need be, until the full vertical dimension is achieved and the object is built.
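To make the slicing step concrete, here is a minimal Python sketch. It is a toy under simplifying assumptions: a real slicer works on a triangle mesh exported from CAD (an STL file, say), whereas this example slices a simple sphere standing in for an anatomical model.

```python
# Toy version of the "slicing" step: turn a 3D shape into a stack of 2D
# cross-sections, one per printable layer. A sphere stands in for the
# CAD model; real slicers cut triangle meshes the same way, plane by plane.

import math

def slice_sphere(radius, layer_height):
    """Yield (z, cross-section radius) for each layer of a sphere."""
    z = -radius
    while z <= radius:
        r_slice = math.sqrt(max(radius ** 2 - z ** 2, 0.0))
        yield z, r_slice
        z += layer_height

# A 30 mm sphere printed in 5 mm layers yields 13 circular cross-sections.
for z, r in slice_sphere(radius=30.0, layer_height=5.0):
    print(f"layer at z = {z:+.1f} mm: circle of radius {r:.1f} mm")
```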
Within the last few years, this technology has entered the medical field, particularly surgery. In cardiac surgery, 3D printing is applied mostly to congenital heart disease, in which many variations from normal anatomy can occur. With current imaging techniques, surgeons have a fair idea what to expect before operating, but they often have to “explore” the heart during surgery to ascertain the exact malformation and then plan on the spur of the moment. With the advent of 3D printing, they can obtain a CTA scan of the heart, with its three-dimensional reconstruction, and feed it into the 3D printer to create a model of the malformed heart. The surgeons can then study this model, and even cut slices into it, to plan the exact operation they will perform, saving valuable time during the procedure itself.
Three-dimensional printing is used in many areas of medicine, particularly orthopedics. One of the more exciting applications is building live organs for replacement, using living cells and stem cells layered onto a scaffold of the organ to be “grown,” so that the cells can develop into skin, an earlobe, or other organs. Someday organs may be grown for each individual from his or her own stem cells, obviating the risk of rejection and the need for toxic anti-rejection medicines. An exciting development.
Deep Science
Brian Knutson
Associate professor of psychology and neuroscience, Stanford University
The decade of the brain is maturing into the century of the mind. New bioengineering techniques can resolve and perturb brain activity with unprecedented specificity and scope (including neural control with optogenetics, circuit visualization with fiber photometry, receptor manipulation with DREADDs, gene sculpting with CRISPR/Cas9, and whole-brain mapping with CLARITY). These technical advances have captured well-deserved media coverage and inspired support for brain-mapping initiatives. But conceptual advances are also needed. We might promote faster progress by complementing existing “broad science” initiatives with “deep science” approaches able to bridge the chasms separating different levels of analysis. Thus, some of the most interesting neuroscientific news on the horizon might highlight not only new scientific content (for example, tools and findings) but also new scientific approaches (for example, deep- versus broad-science approaches).
What is “deep science”? Deep-science approaches seek first to identify critical nodes (or units) within different levels of analysis and to determine whether those nodes are linked across levels. If such a connection exists, then perturbing the lower-level node could causally influence the higher-level node. Examples of deep-science approaches include using optogenetic stimulation to alter behavior and using fMRI activity to predict psychiatric symptoms. Because deep science seeks first to link different levels of analysis, it often requires the collaboration of at least two experts, one at each level.
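The logic of linking levels can be sketched in a few lines of code. The Python below is a hedged toy, not a method from the essay: it simulates a lower-level measure (circuit activity) and a higher-level measure (a symptom score), fits a simple linear link, and then asks what a perturbation at the lower level would predict at the higher one.

```python
# Toy sketch of one deep-science move: test whether a lower-level node
# predicts a higher-level node, then ask what perturbing it would do.
# All data are simulated; this illustrates the logic only.

import numpy as np

rng = np.random.default_rng(0)
n = 200
circuit_activity = rng.normal(size=n)                    # lower-level measure
symptoms = 2.0 * circuit_activity + rng.normal(size=n)   # higher-level measure

# Fit a simple linear link between the two levels (least squares).
slope, intercept = np.polyfit(circuit_activity, symptoms, 1)
r = np.corrcoef(circuit_activity, symptoms)[0, 1]
print(f"estimated link: slope = {slope:.2f}, r = {r:.2f}")

# If the link is causal, a unit perturbation at the lower level should
# shift the higher level by roughly the slope.
print(f"predicted symptom change per unit of stimulation: {slope:.2f}")
```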