To talk about creativity, or creative acts, is to open a Pandora’s box of multifarious treasures that can soon get away from us. So let me just say what I mean by it here: a mental act is creative to the extent that it generates novel and valuable ideas. As Margaret Boden, the cognitive scientist and AI researcher, has emphasized, creative ideas needn’t be historically novel—like Descartes’ new geometrical ideas—but they are psychologically novel to the creator.24 Thus, being creative isn’t the same as being original. People can have ideas that are creative for them. As Boden says, “Suppose a twelve-year-old girl, who’d never read Macbeth, compared the healing power of sleep with someone knitting up a raveled sleeve. Would you refuse to say she was creative, just because the Bard said it first?”25 I don’t think so, and neither does Boden. Creativity is relative to a person.
But creativity is not just novelty. If it were, too many thoughts would count as “creative” for the notion to be worth talking about. Creative ideas are valuable within the person’s cognitive workspace. They move things forward on the conceptual field on which the person is currently playing. They are useful and fecund. They have progeny, and they contribute to the problems at hand.
Creative acts are also surprising in a certain sense. In cases of sudden insight, this leads to the “eureka” feeling. But creative acts can be surprising even if they do not provoke that “aha” feeling. Boden calls this their “impossible” aspect—that is, an idea is creative for a person when the person affectively experiences it as novel, when, from the inside, it feels as if it could not have been had prior to the moment of creation. Conditions were right, and the person suddenly “sees.”
Coming to understand why or how something is the case is a particular kind of creative mental act in the sense that I just described. That’s because, paradigmatically, it involves generating new, valuable and surprising ideas. Which ideas? Those that concern dependency relationships—how things fit together. The “grasping” of those relationships, which lies at the heart of understanding, is what makes understanding creative.
This may seem most obvious in the paradigmatic, historic cases of understanding, like Descartes’ geometrical insight or Einstein’s flash of understanding relativity upon seeing a clock. But what about less historically original acts of understanding? Consider again a child who comes to understand, for the first time, why 0.150 is smaller than 0.5. At that moment, the child is also having an insight—a realization of how things are related. Or consider again our student above, coming to understand for the first time why Lady Macbeth sees blood on her hands, or why sailing is more pleasant and efficient when the wind is not behind you. Each of these acts of understanding is a creative insight for the person in question, even though it is in no way original.
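To spell out the dependency relation the child grasps, the comparison turns on place value rather than on the length of the numeral:

\[
0.150 = \frac{150}{1000} = \frac{15}{100},
\qquad
0.5 = \frac{500}{1000} = \frac{50}{100},
\qquad\text{and}\qquad
\frac{15}{100} < \frac{50}{100}.
\]

Grasping why the trailing zero adds nothing, rather than merely memorizing a rule about decimals, is the insight in question.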
They are also surprising—again, not necessarily in the “eureka” sense—because the person who comes to understand could not, relative to their past evidence and cognitive context, have understood it before that moment. If understanding is creative, then it is both active and passive. That’s because the surprising or “impossible” aspect of creativity makes creating seem at once something we do (which it is) and at the same time something happening to us. The muse suddenly strikes. Realization comes in a flash. Understanding is like this as well. It involves insight, and insight, as the very word suggests, is the opening of a door, a “disclosing,” as Heidegger said. One acts by opening the door, and then one is acted upon by seeing what lies beyond. Understanding is a form of disclosure.
Technology and Understanding
I began this book by pointing out a paradox: our digital form of life seems to expand and inhibit our knowledge simultaneously. How can that be?
As I’ve argued throughout, the first step is to see that “knowing” does not name a single kind of cognitive process, except in the minimal sense that to know is to have a grounded sense of what is true. To know can mean being receptive, or being reasonable, or understanding. Yet that is only part of the story. The second point I’ve stressed is that our digital form of life tends to put more stock in some kinds of knowing than others. Google-knowing has become so fast, easy and productive that it tends to swamp the value of other ways of knowing, like understanding. And that leads to our subtly devaluing these other ways of knowing without our even noticing that we are doing so—which in turn can mean we lose motivation to know in these ways and come to think that the data just speaks for itself. And that’s a problem—in the same way that our love affair with the automobile can be a problem. It leads us to overvalue one way to get where we want to go, and as a result we lose sight of the fact that we can reach our destinations in other ways—ways that have significant value all their own.
When it comes to knowing in the receptive sense, our knowledge is radically extended beyond ourselves. By virtue of the technology in our pockets, on our wrists and in our glasses, you and I are already sharing information-producing processes. We are cognitively interconnected by the strings of 1s and 0s that make up the code of the infosphere. That is the truest sense in which knowledge is more networked now, and why it is not an exaggeration to say, as Jeremy Rifkin does, that the Internet “dissolves boundaries, making authorship a collaborative open-ended process over time.”1 In turn, this raises the possibility that it is not only digital humans’ receptive abilities that are becoming more networked; our acts of understanding may be becoming more networked as well.
In one really obvious sense, information technology is helping us understand more than ever before. That’s because we also know in the receptive sense more than ever before. Google-knowing is a terrific basis for understanding, in the way that reading a textbook is. You can’t connect the dots if you don’t have the dots in the first place. Moreover, neuromedia, and even existing digital media, increase our ability to make connections between bits of information. That’s helpful to understanding, since understanding increases with inferential and explanatory connections between beliefs.
Yet Google-knowing, while a basis for understanding, is not itself the same as understanding because it is not a creative act.
To use the Internet is to have the testimony machine at your fingertips. That is what makes it so useful. But understanding is often said to be different from other forms of knowledge precisely because it is not directly conveyed by testimony—and thus not directly teachable.2 Again, you can give someone the basis for understanding. But in the usual cases, you can’t directly convey the understanding itself. An art teacher, for example, can give me the basis for creative thought by teaching me the rudiments of painting. She can give me ideas of what to paint and how to paint it. But I did not create those ideas; I create only when I move beyond imitation and interpret them in my own way. Likewise, you can give me a theorem without my understanding why it is true. And if I do come to understand why it is true, I do so because I’ve expended some effort—I’ve drawn the right logical connections. Coming to understand is something you must do for yourself.
Let’s contrast this with other kinds of knowledge. I can download receptive knowledge directly from you. You tell me that whales are mammals; I believe it, and if you are a reliable source and the proposition in question is true, I know in the receptive way. No effort needed. Or consider responsible belief: you give me some evidence for whales being mammals. You tell me that leading scientists believe it. If the evidence is good, then if I believe it, I’m doing so responsibly. But in neither case do I thereby directly understand why whales are or aren’t mammals. You can, of course, give me the explanation (assuming you have it). But to understand it, I must grasp it myself.
Or so it is generally. One might wonder, however, whether that would remain the case were we as fully integrated as the neuromedia possibility imagines. To have neuromedia would be like reading minds. You’d be able to access other people’s thoughts through little more than the intermediary of satellites. We would all be Google Completing our thoughts for one another, and as a result collaboration could very well start to feel from the inside like individual creation does now.
This is still a long way from showing that neuromedia would increase our understanding of the world all by itself. There is no doubt that information technology is already radically facilitating collaboration. And coming to understand, like any act of creation, is something you can do with others. But being able to understand with others doesn’t alter the fact that understanding involves a personal cognitive integration—a combination of various cognitive abilities in the individual, including a grasp of dependency relations and the skill to make leaps and inferences in thought. It ultimately involves an element of individual cognitive achievement. Understanding is not something I can outsource.
Yet what makes this individual cognitive achievement so valuable? Why worry about understanding if correlation, as Chris Anderson might say, gets you to Larissa? What can it add that other forms of knowing cannot?
Understanding is a necessary condition for being able to explain, and explanations matter. A well-confirmed correlation can be the basis of (probabilistic) predictions. But prediction is not the only point of inquiry, nor should it be. Good explanations for why a correlation holds give us something more. As the eminent philosopher of science Philip Kitcher has noted, good explanations are fecund.3 They don’t just tell us what is; they lead us to what might be: they suggest further tests, further views, and they rule out certain hypotheses as well. Moreover, if you want to control something, and not just predict what it will do given the preexisting data, you need to know why it does what it does. You need to understand. Thus, being able to predict, on the basis of Google Flu Trends, where the flu will spread is incredibly helpful. But if we want to know how to control its spread, we must better understand why it spreads. And once we do, our predictions themselves are likely to become more nuanced.
In fact, the authors of a recent study critiquing the predictive power of Google Flu Trends have made this very point.4 They argue that more refined predictive techniques drawing on traditional methods of modeling can be at least as accurate as Google’s method, which they demonstrate has routinely overestimated the number of flu cases by as much as 30 percent. They ascribe this to what they call “big data hubris,” or the assumption that sheer data size alone will always result in more predictive power. The researchers’ point is not that big data techniques aren’t helpful, but that the Google algorithm is not likely to be a good stand-alone method for predicting the spread of the flu.
Given our argument above, this is not surprising. Big data techniques are going to assist our models and explanations, not supplant them.
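To make that division of labor concrete, consider a minimal sketch in Python. The numbers are entirely hypothetical, and the code is neither Google’s actual algorithm nor the study’s model; it simply shows how a correlation-only predictor fit on past data can systematically overshoot, and how blending it with a traditional surveillance baseline damps the bias.

```python
# Minimal illustration with hypothetical numbers -- not Google's actual
# algorithm, nor the model used in the study cited above.

# Hypothetical weekly data for one flu season:
search_signal = [1.2, 1.5, 2.1, 2.8, 3.0]  # search-volume index
surveillance = [1.0, 1.2, 1.6, 2.0, 2.2]   # clinic-based estimates
true_cases = [1.1, 1.3, 1.7, 2.1, 2.3]     # actual incidence (unknown in practice)

# Correlation-only predictor: scale the search signal by a factor fit on
# earlier seasons. If, say, media coverage inflates searching, the fitted
# factor is too high and every estimate overshoots.
fit_factor = 1.15
naive = [fit_factor * s for s in search_signal]

# Hybrid predictor: blend the search-based estimate with the traditional
# surveillance baseline, damping the search signal's systematic bias.
hybrid = [0.5 * n + 0.5 * b for n, b in zip(naive, surveillance)]

def mean_overestimate(pred, actual):
    """Average relative overestimate of a prediction series."""
    return sum((p - a) / a for p, a in zip(pred, actual)) / len(actual)

print(f"correlation-only overestimate: {mean_overestimate(naive, true_cases):+.0%}")
print(f"hybrid overestimate:           {mean_overestimate(hybrid, true_cases):+.0%}")
```

The blending step stands in for the “more refined predictive techniques” the study describes: the search data still helps, but only alongside a model of what is actually driving the cases.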
The creativity of understanding helps to explain our intuitive sense that understanding is a cognitive act of supreme value and importance, not just for where it gets us but in itself. Creativity matters to human beings. That’s partly because the creative problem-solver is more apt to survive, or at least to get what she wants. But we also value it as an end. It is something we care about for its own sake; being creative is an expression of some of the deepest parts of our humanity.
Finally, understanding can also have a reflexive element. Our deepest moments of understanding reveal to us how we ourselves fit into the whole. Thus, an act of understanding something or someone else can also help you understand yourself. When that happens, understanding comes with what Freud called the “oceanic feeling”—the feeling of interconnectedness.
Perhaps this is why we treasure those moments of understanding in both ourselves and others. If you’ve ever taught or coached or parented someone, you’ve tried to help someone understand. The moment they do is what makes the effort worthwhile. If that moment never comes, you regret it because that person is missing out on an act of creative personal expression, a chance to see how the parts connect to make the whole.
So even if, contrary to what I’ve suggested here, we are someday able to outsource our understanding to some coming piece of glorious technology, it is not clear that we should want to. To do so risks losing something deep, something that makes us not just digitally human, but human, period.
Information and the Ties That Bind
What would it be like if you had the Internet connected directly to your brain? That, or something like it, is the future toward which we are barreling. The hyperconnectivity of our phones, cars, watches and glasses is just the beginning. The Internet of Things has become the Internet of Everything, the Internet of Us.
These pages have spun a cautionary tale about this progress, but there is actually a lot to be optimistic about. The massive amount of data that is making hyperconnected knowing possible has the potential to help cure diseases, contribute to constructive solutions to climate change and tell us more about our own preferences, prejudices and inclinations than we ever thought possible. I look forward to these developments, and I hope you do too. My point in this book is that we should nonetheless approach the future with our eyes wide open, especially since our relationship with the Internet is becoming more and more intimate. Intimacy brings comfort, but it also makes us vulnerable.
Some of these vulnerabilities are extensions of those we already have. The Internet of Us will be composed of human bodies that are themselves communicating with one another, and with the Net, through a variety of embedded or surface-worn devices. Data trails will follow us around like so many little sparks: dancing points not of light but of 1s and 0s. These data trails are already here. I am reminded of Aleksandr Solzhenitsyn’s remark in his 1968 book Cancer Ward:
As every man goes through life he fills in a number of forms for the record, each containing a number of questions. . . . There are thus hundreds of little threads radiating from every man, millions of threads in all. If these threads were suddenly to become visible, the whole sky would look like a spider’s web, and if they materialized as rubber bands, buses, trams and even people would all lose the ability to move, and the wind would be unable to carry torn-up newspapers or autumn leaves along the streets of the city. They are not visible, they are not material, but every man is constantly aware of their existence. . . . Each man, permanently aware of his own invisible threads, naturally develops a respect for the people who manipulate the threads.5
The threads are strings of information. They are the ties that bind us to one another, and society to us. What big data and the hyperconnectivity of knowledge are doing is making these connections brighter, more numerous, stronger and fundamentally easier to pluck. And so our respect—if that is the word—should also grow for those who have, or wish to have, their hands on these strings. Let us hope their motivations are pure, or at least neutral, while we stay on guard for the opposite. As Bertrand Russell once remarked in a somewhat different context, advances in technology never seem to bring along with them—at least, all by themselves—a change in humanity’s penchant for greed and power. That is a lesson I hope we heed—even while we look forward to the benefits the Internet of Us will bring.
Many of us share the same concerns. After the initial launch of Google Glass, the reaction was more negative than expected. While many were excited about the technology, it seemed that just as many were worried about its potential for invading privacy; others were concerned about its potential for distracting drivers. These practical objections were serious. But I can’t help wondering if the concern went deeper. Before its launch, Google cofounder Sergey Brin was reported to have said, “We started Project Glass believing that, by bringing technology closer, we can get it more out of the way.”6 Brin meant to emphasize that Glass allows you to take pictures without fumbling for your camera. But he inadvertently put his finger on a more basic fear of the Internet of Us. We are getting technology out of the way by pulling it closer—in the case of Glass, literally making us see through it. We know technology can always alter our perspective. But this perspective-altering effect can only increase as technology migrates inward.
We must be careful that we don’t mistake the “us” in the Internet of Us for “everything else.” The digital world is a construction and, as I’ve argued, constructions are real enough. But we don’t want to let that blind us to the world that is not constructed, to the world that predates our digital selves. And the Internet of Us is not only going to affect how we see our world; it will affect our form of life. One aspect of this concerns autonomy. The hyperconnectivity of knowledge can help us become more cognitively autonomous and increase what I called epistemic equality. But I’ve argued it can also hinder our cognitive autonomy by making our ways of accessing information more vulnerable to the manipulations and desires of others. And it can lead us to overemphasize the importance of receptive knowing—knowing as downloading.
Humans are toolmakers, and information technologies are the grandest tools we have at the moment. Our tool-making nature shapes how we understand the world and our role within it. It encourages us to see the natural environment as something upon which we operate, which we use as a means to our own ends, as an extension of the tools we develop to interact with it. So what happens when we extend our tools to the point that they become integrated with our lives, when we become the very tools themselves? That is the most salient question about the coming Internet of Us. And it raises the danger that we cease to see our own personhood as an end in itself. Instead, we begin to see ourselves as devices to be used, as tools to be exploited.