2. He points out that others say things like, “Brains are analog, computers are digital.” The argument runs that analog devices—a vinyl record, for example—work on a “smooth continuum,” whereas the digital is discrete; therefore brains are not digital computers. But again Marcus responds correctly that this analog argument also fails. Computers can be analog, digital, or both. In fact, many early computers were analog.
3. Another argument raised against brains being digital computers is: “Brains generate emotions, computers do not.” Marcus tries to answer this criticism as well, claiming that emotions are simply information transfer and that computers will therefore be able to have them in the future; indeed, in his view, what computers already do is much the same thing.
Here, however, Marcus is incorrect. Emotions are not simply the transfer of information. They are largely the transfer of hormones (and if hormones are merely information, every organ in the body is a computer). And emotions are partially constrained and wholly defined by culture. Computers lack hormones and culture, so they cannot have emotions, period. Again, we could call hormones “information,” but that would immeasurably weaken the claim that brains are computers. In fact, I would go so far as to say that until a computer can urinate, be scared spitless, or need a change of underpants, it has no emotions. Being scared spitless is not merely transferring information—it is experiencing fear. It is being in a state that a computer cannot be in.
But why does Marcus even want to join the hyperbolic salesmen of artificial intelligence and claim that brains are computers? He states that “the real payoff in subscribing to the idea of a brain as a computer would come from using that idea to profitably guide research.” He argues that this is particularly useful when examining the special computer known as a “field programmable gate array.” Why does he think that comparing brains to computers profitably guides research? In practical terms, the benefit is not clear. Theoretically, on the other hand, Marcus is a subscriber to the (neurologically mistaken) view of the brain as organized into distinct “modules” (an idea that I find as convincing as the notion that I have a “boating module” of my brain).
He therefore claims that a field programmable gate array could be like the brain, or vice versa, and that we should investigate this. A field programmable gate array (FPGA) is essentially an array of logic units that can be programmed by wiring the units together in different configurations, so that they are able to perform all sorts of functions. The technology has advanced considerably, and FPGAs have taken on a wide array of functions. Their special quality is that they can be reconfigured, which gives them a certain amount of flexibility. On the other hand, the use of FPGAs as models of the brain escapes none of the criticisms of Dreyfus (1994) and others; computers simply transfer, receive, and remember information. They do not have, for instance, apperceptions, incorrect memories, justifications for mistakes, emotional overrides of computational functions, or culture-based languages.
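To make the notion of “programming via wiring” concrete, here is a minimal, purely illustrative Python sketch of a reconfigurable logic array. It is my own toy example, not drawn from Marcus and not a model of any real FPGA architecture or toolchain; the cell names and functions are invented for the illustration.

from typing import Callable, Dict, List, Tuple

# Each "cell" holds a two-input boolean function plus the names of the two
# signals it reads (external inputs or other cells). Rewiring the array is
# just rewriting this configuration dictionary.
Config = Dict[str, Tuple[Callable[[bool, bool], bool], str, str]]

def evaluate(config: Config, inputs: Dict[str, bool], order: List[str]) -> Dict[str, bool]:
    """Evaluate cells in a fixed order; earlier cells can feed later ones."""
    signals = dict(inputs)
    for name in order:
        fn, a, b = config[name]
        signals[name] = fn(signals[a], signals[b])
    return signals

AND = lambda a, b: a and b
OR = lambda a, b: a or b
XOR = lambda a, b: a != b

# Configuration 1: the two cells behave as a half adder (sum and carry).
half_adder: Config = {"s": (XOR, "x", "y"), "c": (AND, "x", "y")}
print(evaluate(half_adder, {"x": True, "y": True}, ["s", "c"]))

# "Rewire" the very same cells so they compute something else entirely.
rewired: Config = {"s": (OR, "x", "y"), "c": (AND, "x", "s")}
print(evaluate(rewired, {"x": True, "y": False}, ["s", "c"]))

The only point of the sketch is the reconfigurability that Marcus finds suggestive; nothing in it apperceives, misremembers, justifies its mistakes, or participates in a culture.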
Moreover, if information is what makes the brain a computer and if we are willing to interpret the wet stuff, blood flow, electric currents of the brain, and so on, as information, then the kidney or the heart or the penis is also a digital computer. There simply is no evidence from biology or behavior that the brain is like a digital computer except in ways so superficial as to render a rock a digital computer (it is always in a single digital state).
In sum, the overblown and still-persistent claims that the brain is a computer are at once too strong and too weak. They are too strong because if the brain is a computer, so are the body as a whole, the liver, and the kidneys. The claims are too weak because they omit crucial components of the brain, such as its organ-like exchanges with the rest of the body, as well as apperceptions, muscle memory, and, especially, cultural perspectives that can shape both the brain and the body. They also focus on the syntax of brain operations and omit semantic operations. In fact, the semantic problem of brains—pointed out in Dreyfus (1965, 1994), Searle (1980b), and others—has never been solved (and is unlikely to be until computers can acquire culture; see D. Everett 2015).
Our perceptions and the full range of our thinking are shaped significantly by our cultural network. This observation leads me to a consideration of claims about thinking in which Descartes’s dualism and Turing’s mind-as-computer form the core of ideas of the cognitive—my first criticism of the research program of artificial intelligence. So long as we refer to AI as artificial intelligence, I agree that it is an interesting and extremely important research program. Unfortunately, many of its proponents (Newell and Simon 1958; McCarthy 1979; Marcus 2015) want to drop this qualifier.
Following on Simon’s (1962, 1990, 1991, 1996) and Newell and Simon’s (1958) long-term research program on understanding and modeling human problem-solving, the authors discuss the automatization of (at least parts of) the process of scientific discovery. They make three fundamental assumptions at the outset. These assumptions are worth considering in some detail because they illustrate the contrast between the noncultural reasoning of the artificial intelligence community and my thesis that intelligent agents are cultural agents. To my way of thinking, comparing artificial intelligence to natural intelligence is conceptually similar to comparing the flight of a Boeing 747 with the flight of a bumblebee. There will of course be general physical principles of flight that apply to both. Yet there is no interesting sense in which the Boeing 747 teaches us how the bumblebee flies qua bumblebee, how its flight “feels,” or how its flight is shaped by those it is flying with. Nagel (1974) makes a case for some of what I have in mind with his arguments that consciousness is both specific to a particular organism and gestalt in its emergence from smaller parts.
Langley and his coauthors (1987, 8) list three principal assumptions that guide their investigation. First:
The human brain is an information-processing system whose memories hold interrelated symbol structures and whose sensory and motor connections receive encoded symbol structures from the outside via sensory organs and send encoded symbols to motor organs.
Second,
The brain solves problems by creating a symbolic representation of the problem (called the problem space) that is capable of expressing initial, intermediate, and final problem solutions as well as the whole range of concepts employed in the solution process.
Third,
The search for a problem solution is not carried on by random trial and error but is selective. It is guided in the direction of a goal situation (or symbolic expressions describing a goal) by rules of thumb called heuristics.
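To see what the third assumption amounts to computationally, here is a minimal sketch of goal-directed, heuristic search through a toy problem space, as opposed to random trial and error. It is my own illustrative example, not code from Langley et al. (1987); the states, operators, and heuristic are all invented for the illustration.

import heapq

def heuristic_search(start, goal, operators, limit=370):
    """Greedy best-first search through a toy problem space of integers.

    States are integers, the operators are the available "moves," and the
    heuristic is simply distance to the goal: the search always expands the
    state judged closest to the goal instead of trying moves at random.
    """
    frontier = [(abs(goal - start), start, [start])]  # (heuristic, state, path)
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path  # one operator sequence that reaches the goal
        for op in operators:
            nxt = op(state)
            if 0 <= nxt <= limit and nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (abs(goal - nxt), nxt, path + [nxt]))
    return None  # the goal is unreachable in this bounded space

# Three arbitrary operators define the "moves" available in the space.
operators = [lambda s: s + 1, lambda s: s * 2, lambda s: s - 3]
print(heuristic_search(2, 37, operators))  # prints one path from 2 to 37

The point of the sketch is only the contrast the third assumption draws: search guided by a heuristic toward a goal rather than by random trial and error.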
What these authors are trying to understand, of course, is the dark matter underlying scientific discovery. Describing the process implies neither that there is a normative theory of scientific discovery for humans nor that the heuristics of scientific discovery can be transformed into algorithms.8
This work was important, and Simon’s own pioneering understanding of human problem solving won him the Nobel Prize in Economics in 1978. Nevertheless, the points above are problematic. For example, one serious question is raised by what the authors intend by the phrase “problem solution.” Another is how relevant their focus on scientific discovery is to human cognition more generally. Not much, in fact. One could respond that their decision about what interests them hardly counts against their account. At the same time, the unlikelihood that their solutions will scale up to human reasoning renders them too specialized for any general account of human psychology. The examples they select, for instance, are tailored to solutions via symbolic processes. They nowhere address the wider range of problems humans must solve, including some of the most important ones of life, such as “How should I live?,” “Who should I marry?,” “What should I believe?,” “How can I learn to square dance?,” “Who should I vote for?,” and so on. These deeper problems involve physiology, culture, emotions, and a myriad of other nonsymbolic aspects of human cognition. To be sure, many of these other problems have large informational-symbolic components, but what makes them hard and vital is precisely their non-informational components. And there are larger problems. For example, even the discovery procedures Simon et al. are interested in are cultural constructs. Therefore, I do not find that strong AI has become any more relevant to the understanding of human thinking in the more than three decades since the devastating criticisms of Dreyfus (1965, 1994) and Searle (1980b).
Since the earliest days of artificial intelligence, eminent proponents of the idea that brains are computers have proposed, often quite emotionally, that of course machines can think. McCarthy (1979, 1) says the following, for example: “To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities, or wants to a machine or computer program [emphasis in original] is legitimate when such an ascription expresses the same information [emphasis mine] about the machine that it expresses about a person.”
Asking the question of why anyone would be interested in ascribing mental qualities to machines, McCarthy (1979, 5ff) offers the following reasons (which I have paraphrased slightly):
1. We may not be able to directly observe the inner state of a machine, so, as we do for people whose inner states we also cannot see, we take a shortcut and simply attribute beliefs in order to predict what the computer will do next.
2. It is easier to ascribe to a computer program—independently of the machine running it—beliefs that predict the program’s behavior than to try to fully understand all the details of the program’s interactions with its environment.
3. Ascribing beliefs may allow generalizations that merely simulating the program’s behavior would not.
4. The belief and goal structures we ascribe to a program may be easier to understand than the details of the program are.
5. The belief and goal structures are likely to be closer to the objectives of the program designer than the program’s listing is.
6. Comparing programs is perhaps better expressed via belief-attribution than merely comparing listings.
But these reasons are each built on both a faulty understanding of beliefs and a faulty understanding of dark matter more generally—at least, if we are on the right track in this book about dark matter. Moreover, this type of personification of computers is too powerful—it can be extended in humorous, but no less valid, ways to things no one would ascribe beliefs to: thermostats, toes, plants, rocks teetering on the edge of a precipice. In fact, there are many cultures—the Pirahãs and Wari’s, for example—in which beliefs are regularly ascribed to animals, to clouds, to trees, and so on, both as a convenient way of talking (like McCarthy) and as a potentially religious or pantheistic allusion to deity in all objects.
Beliefs are intentional states (Searle 1983) that occur when bodies (including brains) are directed toward something, from an idea to a plant. Beliefs are formed by the individual as he or she engages in languaging and culturing, becoming culturally articulated components of individual dark matter.
People can talk about some of their beliefs and the qualia of their beliefs. The beliefs of humans can be logical or illogical, consistent or inconsistent with other beliefs. They are partially ordered, as values are. For example, in the statements “I believe in science” vs. “I believe in God,” the ranking can be either GOD >> SCIENCE or SCIENCE >> GOD. If programs could be meaningfully described as McCarthy claims, they would have to be able to rank their own beliefs, not take a shortcut by having someone else program their beliefs.
To the degree that programs can be described as McCarthy claims, they do not rank their “beliefs,” and they lack “values”—unless, of course, certain states describable as “beliefs” or “values” have been either (i) directly programmed by a human programmer or (ii) indirectly programmed (e.g., via algorithms that allow for multiple responses or “beliefs”). Moreover, because it omits any role for culture, biology, or psychology, belief ascription to computers is nothing more than a metaphor, in a way that it is not for humans. Belief ascription in humans is not merely an epistemological convenience, or façon de parler; it is retrievable psychologically, culturally, and biologically from the individual and their society without need to assume an external designer.
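The difference can be made vivid with a minimal sketch of my own, which is purely illustrative and not anyone’s actual program: whatever “ranking of beliefs” a program exhibits is simply a data structure a human has written into it.

# Purely illustrative: the ordering below echoes the GOD >> SCIENCE example
# above; the program consults the list but played no part in forming it.
BELIEF_RANKING = ["GOD", "SCIENCE"]  # a total order imposed from outside

def outranks(a, b, ranking=BELIEF_RANKING):
    """True if belief a is ranked above belief b in the given ordering."""
    return ranking.index(a) < ranking.index(b)

print(outranks("GOD", "SCIENCE"))  # True, but only because someone wrote the list that way

Nothing in such a structure forms, revises, or experiences anything; the ordering is the programmer’s, not the program’s.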
Dreyfus (1965) rightly rejects such claims for computers, based on several theses (from Haugeland’s [1998] summary): (i) intelligence is essentially embodied; (ii) intelligent beings are essentially situated in the world; (iii) the world of intelligent beings is essentially human (thus I believe that Dreyfus has culture in mind when he says “world”).
Before closing this chapter, I need to examine a couple of issues in order to further clarify the concept of culture being developed. Again, the claim is not that dark matter is culture but that culturing produces dark matter, which includes “idioculture” as one of its components; that is, the value rankings, knowledge structures, and social roles manifested or possessed uniquely by a given individual, just as a “language” is found only in individual idiolects and is only used by societies via individuals.
One of those issues in need of understanding is the role and emergence of tools. How are we to characterize tools culturally—things that are used to aid individual cultural members in different tasks? Tools are dripping with dark matter and culture. I conceive of them as congealed culture. Examples include physical tools—shovels, paintings, hats, pens, plates, food—but nonphysical tools are also crucial. D. Everett (2012a) makes the case that a principal nonphysical tool is language (and its components). Culture itself is a tool.
The tool-like nature of language can be seen easily in its texts. Texts (discourses, stories, etc.) are used to exhort, to explain, to describe, and so on, and each text is embedded in a context of dark matter. Texts, including books, are of course unlike physical tools in the sense that as linguistic devices, they could in principle tell us something about the dark matter from which they partially emerge, though generally very little is conveyed. And the reason for that is clear: we talk about what we assume our interlocutor does not know (but has the necessary background knowledge to understand). And dark matter, which we do not always even know that we know, is simply overlooked or presupposed.
Language as a tool is also seen in the forms of texts. Consider in this regard once again the list of so-called contradictory principles that Harris provided above with regard to the Hindu principle “Avoid fecal matter.”
A spot must be found not too far from the house.
The spot must provide protection against being seen.
It must offer an opportunity to see anyone approaching.
It should be near a source of water for washing.
It should be upwind of unpleasant odors.
It must not be in a field with growing crops.
The first line uses the indefinite article a. In the second line the definite article the is used. From that point onward, spot is pronominalized as it. This is due to English conventions for topic tracking (Givón 1983) through a discourse. The indefinite article indicates that the noun it modifies is new information. The definite article shows us that it is now shared information. The pronoun reveals that it is topical. As the single word is referenced and re-referenced throughout the discourse, its changing role and relationship to shared knowledge is marked with specific grammatical devices. To the nonspecialist, this is shared but unspoken, and largely ineffable, knowledge.
How does the understanding of culture promoted here compare to the wider understanding of culture in a society as a whole? It is common, for example, to hear about “American culture,” “Western values,” or even “panhuman values.” According to the theory of dark matter and culture developed above, these are perfectly sensible ideas, so long as we interpret them to mean “overlapping values, rankings, roles, and knowledge,” rather than a complete homogeneity of (any notion of) culture throughout a given population. From laws to pronunciation, from architecture to music, to sexual positions and body shape, to the action of individual humans as members of communities (“likers of Beethoven,”
“eaters of haggis,” and on and on) in conjunction with an individual’s apperceptions and episodic memory—all are the products of overlapping dark matters.
A similar question arises as to whether it makes sense to speak of “national values,” and if so, how these might arise. Again, such values are values shared by a significant number of members of a nation, due to similar experiences. For example, during childhood, people of my generation had three television stations to choose from. There were few if any fast-food chains. The food available in supermarkets across the US was fairly constant. Thus a person growing up in Indiana would have had experiences with cultural products very similar to those of a person growing up in Southern California (as my wife and I have discovered by comparing stories). This has changed dramatically over the last fifty years, but there are still a number of things shared in common—uploading videos to YouTube, reading news online, using smartphones and apps, and so on. Such experiences trigger the formation of similar dark matters across large numbers of individuals. They trigger them even at the level of an entire country, a continent, or the whole world of television viewers, Internet users, and so on.
Just so, values can produce in an individual or in a community a sense of mission—for example, the Boers, the Zionists, the American frontiersmen and settlers of Manifest Destiny, the Third Reich. This sense of mission and purpose is what many businesses are after today, as the term culture has been adopted by companies to mean “what we are all about.” Businesses commonly dedicate web pages, documents, lectures, meetings, and so on, to establishing a sense of culture (which is often little more than an unranked list of values and occasionally goals—as though these were separate from values—with no discussion of knowledge or roles).9 Though these business-co-opted ideas of culture differ significantly from some academic understandings, they are nevertheless not terribly wide of the mark, even if they represent an unarticulated subset of the idea of culture that I am urging here.