The authors continue their analysis of the various approaches to cognitive science, examining the mainstream position - cognitivism - and outlining its historical evolution and the central claims of its founding hypothesis: cognition as the computation of representations realized as symbols, in the brain or in a machine:
Symbols: The Cognitivist Hypothesis
The Foundational Cloud
Our exploration of cognitive science and human experience begins in this chapter with an examination of cognitivism - the center of our diagram - and its historical origins in the earlier, cybernetic era of cognitive science. The main idea to be presented … is that the analysis of mind undertaken by certain traditions of mindfulness/awareness provides a natural counterpart to present-day cognitivist conceptions of mind. This chapter presents the cognitivist perspective.
Let us begin by looking at the historical roots of present-day cognitivism. This short historical excursion is necessary, for a science that neglects its past is bound to repeat its mistakes and will be unable to visualize its development. Our excursion here is, of course, not intended to be a comprehensive history but only to touch on those issues of direct relevance for our concerns here. In fact, virtually all of the themes in present-day debates were already introduced in the formative years of cognitive science from 1943 to 1953. History indicates, then, that these themes are deep and hard to pursue. The "founding fathers" knew very well that their concerns amounted to a new science, and they christened this science with the new name cybernetics. This name is no longer in current use, and many cognitive scientists today would not even recognize the family connections. This lack of recognition is not idle. It reflects the fact that to become established as a science in its clear-cut cognitivist orientation, the future cognitive science had to sever itself from its roots, which were complex and entangled but also rich with possibilities for growth and development. Such a severance is often the case in the history of science: it is the price of passing from an exploratory stage to a full-fledged research program - from a cloud to a crystal. The cybernetics phase of cognitive science produced an amazing array of concrete results, in addition to its long-term (often underground) influence:
- The use of mathematical logic to understand the operation of the nervous system
- The invention of information-processing machines (such as digital computers), thereby laying the basis for artificial intelligence
- The establishment of the metadiscipline of systems theory, which has had an imprint in many branches of science, such as engineering (systems analysis, control theory), biology (regulatory physiology, ecology), social sciences (family therapy, structural anthropology, management, urban studies), and economics (game theory)
- Information theory as a statistical theory of signal and communication channels
- The first examples of self-organizing systems
This list is impressive: we tend to consider many of these notions and tools an integral part of our lives. Yet they were all nonexistent before this formative decade, and they were all produced by an intense exchange among people of widely different backgrounds. Thus the work during this era was the result of a uniquely and remarkably successful interdisciplinary effort. The avowed intention of this cybernetics movement was to create a science of mind. In the eyes of the leaders of this movement, the study of mental phenomena had been far too long in the hands of psychologists and philosophers. In contrast, these cyberneticians felt a calling to express the processes underlying mental phenomena in explicit mechanisms and mathematical formalisms.
One of the best illustrations of this mode of thinking (and its tangible consequences) was the seminal 1943 paper by Warren McCulloch and Walter Pitts, "A Logical Calculus of Ideas Immanent in Nervous Activity." Two major leaps were taken in this article: first, the proposal that logic is the proper discipline with which to understand the brain and mental activity, and second, the claim that the brain is a device that embodies logical principles in its component elements or neurons. Each neuron was seen as a threshold device, which could be either active or inactive. Such simple neurons could then be connected to one another, their interconnections performing the role of logical operations so that the entire brain could be regarded as a deductive machine.
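The McCulloch-Pitts picture can be sketched in a few lines of Python: each "neuron" fires when the weighted sum of its binary inputs reaches a fixed threshold, and wiring such units together realizes logical operations. The weights and thresholds below are illustrative choices, not taken from the original paper.

```python
# A McCulloch-Pitts unit: outputs 1 (fires) when the weighted sum
# of its binary inputs reaches a fixed threshold, else 0.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logic gates as single threshold units (illustrative parameters):
def AND(a, b):
    return mp_neuron([a, b], [1, 1], 2)   # fires only if both inputs are active

def OR(a, b):
    return mp_neuron([a, b], [1, 1], 1)   # fires if at least one input is active

def NOT(a):
    return mp_neuron([a], [-1], 0)        # a single inhibitory connection

# Interconnecting units yields more complex "deductions", e.g. XOR:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```

In this spirit, a network of such units is a circuit of logical operations, and the brain as a whole can be pictured as a deductive machine.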
These ideas were central for the invention of digital computers. At that time, vacuum tubes were used to implement the McCulloch-Pitts neurons whereas today we find silicon chips, but modern computers are still built on the same so-called von Neumann architecture that has been made familiar with the advent of personal computers. This major technological breakthrough also laid the basis for the dominant approach to the scientific study of mind that was to crystallize in the next decade as the cognitivist paradigm.
In fact, Warren McCulloch, more than any other figure, can serve as an exemplar of the hopes and the debates of these formative years. As can be gleaned from his collected papers in Embodiments of Mind, McCulloch was a mysterious and paradoxical figure whose tone was often poetic and prophetic. His influence seemed to wane during the later years of his life, but his legacy is being reconsidered as cognitive science becomes more aware that a thorough intertwining of the philosophical, the empirical, and the mathematical, which McCulloch's investigations exemplified, seems the best way to continue working.
His favorite description for his enterprise was "experimental epistemology" - an expression not favored by current usage. It is one of those remarkable simultaneities in the history of ideas that in the 1940s the Swiss psychologist Jean Piaget coined the expression "genetic epistemology" for his influential work, and the Austrian zoologist Konrad Lorenz started to speak of an "evolutionary epistemology."
There was, of course, considerably more to this creative decade. For instance, there was extensive debate over whether logic is indeed sufficient to understand the brain's operations, since logic neglects the brain's distributed qualities. (This debate continues today, and we will consider it in more detail later, especially as it relates to the question of "levels of explanation" in the study of cognition.) Alternative models and theories were put forth, which for the most part were to lie dormant until they were revived in the 1970s as an important alternative in cognitive science.
By 1953 the main actors of the cybernetics movement, in contrast to their initial unity and vitality, were distanced from each other, and many died shortly thereafter. It was mainly the idea of mind as logical calculation that continued.
Defining the Cognitivist Hypothesis
Just as 1943 was clearly the year in which the cybernetics phase was born, so 1956 was clearly the year that gave birth to cognitivism. During this year, at two meetings held at Cambridge and Dartmouth, new voices (such as those of Herbert Simon, Noam Chomsky, Marvin Minsky, and John McCarthy) put forth ideas that were to become the major guidelines for modern cognitive science.
The central intuition behind cognitivism is that intelligence - human intelligence included - so resembles computation in its essential characteristics that cognition can actually be defined as computations of symbolic representations. Clearly this orientation could not have emerged without the basis laid during the previous decade. The main difference was that one of the many original, tentative ideas was now promoted to a full-blown hypothesis, with a strong desire to set its boundaries apart from its broader, exploratory, and interdisciplinary roots, where the social and biological sciences figured preeminently with all their multifarious complexity.
What exactly does it mean to say that cognition can be defined as computation? As we mentioned, a computation is an operation that is carried out or performed on symbols (on elements that represent what they stand for). The key notion here is that of representation or "intentionality," the philosopher's term for aboutness. The cognitivist argument is that intelligent behavior presupposes the ability to represent the world as being certain ways. We therefore cannot explain cognitive behavior unless we assume that an agent acts by representing relevant features of her situation. To the extent that her representation of a situation is accurate, the agent's behavior will be successful (all other things being equal).
This notion of representation is - at least since the demise of behaviorism - relatively uncontroversial. What is controversial is the next step, which is the cognitivist claim that the only way we can account for intelligence and intentionality is to hypothesize that cognition consists of acting on the basis of representations that are physically realized in the form of a symbolic code in the brain or a machine.
According to the cognitivist, the problem that must be solved is how to correlate the ascription of intentional or representational states (beliefs, desires, intentions, etc.) with the physical changes that an agent undergoes in acting. In other words, if we wish to claim that intentional states have causal properties, we have to show not only how those states are physically possible but how they can cause behavior. Here is where the notion of symbolic computation comes in.
Symbols are both physical and have semantic values. Computations are operations on symbols that respect or are constrained by those semantic values. In other words, a computation is fundamentally semantic or representational - we cannot make sense of the idea of computation (as opposed to some random or arbitrary operation on symbols) without adverting to the semantic relations among the symbolic expressions. (This is the meaning of the popular slogan "no computation without representation.") A digital computer, however, operates only on the physical form of the symbols it computes; it has no access to their semantic value. Its operations are nonetheless semantically constrained because every semantic distinction relevant to its program has been encoded in the syntax of its symbolic language by the programmers. In a computer, that is, syntax mirrors or is parallel to the (ascribed) semantics. The cognitivist claim, then, is that this parallelism shows us how intelligence and intentionality (semantics) are physically and mechanically possible. Thus the hypothesis is that computers provide a mechanical model of thought or, in other words, that thought consists of physical, symbolic computations. Cognitive science becomes the study of such cognitive, physical symbol systems.
To understand this hypothesis properly, it is crucial to realize the level at which it is proposed. The cognitivist is not claiming that if we were to open up someone's head and look at the brain, we would find little symbols being manipulated there. Although the symbolic level is physically realized, it is not reducible to the physical level. This point is intuitively obvious when we remember that the same symbol can be realized in numerous physical forms. Because of this nonreducibility it is quite possible that what corresponds to some symbolic expression at the physical level is a global, highly distributed pattern of brain activity. We will return to consider this idea later. For now the point to be emphasized is that in addition to the levels of physics and neurobiology, cognitivism postulates a distinct, irreducible symbolic level in the explanation of cognition. Furthermore, since symbols are semantic items, cognitivists also postulate a third distinctly semantic or representational level. (The irreducibility of this level too is intuitively obvious when we remember that the same semantic value can be realized in numerous symbolic forms.)
This multilevel conception of scientific explanation is quite recent and is one of the major innovations of cognitive science. The roots and initial formulation of the innovation as a broad scientific idea can be traced back to the era of cybernetics, but cognitivists have contributed greatly to its further rigorous philosophical articulation. We would like the reader to keep this idea in mind, for it will take on added significance when we turn to discuss the related - though still controversial - notion of emergence.
The reader should also notice that the cognitivist hypothesis entails a very strong claim about the relations between syntax and semantics. As we mentioned, in a computer program the syntax of the symbolic code mirrors or encodes its semantics. In the case of human language, it is far from obvious that all of the semantic distinctions relevant in an explanation of behavior can be mirrored syntactically. Indeed, many philosophical arguments can be given against this idea. Furthermore, although we know where the semantic level of a computer's computations comes from (the programmers), we have no idea how the symbolic expressions supposed by the cognitivist to be encoded in the brain would get their meaning.
Since our concern is with experience and cognition in its basic, perceptual modality, we will not take up such issues about language in detail here. Nonetheless, they are worth pointing out, since they are problems that lie at the heart of the cognitivist endeavor.
The cognitivist research program can be summarized, then, as answers to the following fundamental questions:
Question 1: What is cognition?
Answer: Information processing as symbolic computation - rule-based manipulation of symbols.
Question 2: How does it work?
Answer: Through any device that can support and manipulate discrete functional elements - the symbols. The system interacts only with the form of the symbols (their physical attributes), not their meaning.
Question 3: How do I know when a cognitive system is functioning adequately?
Answer: When the symbols appropriately represent some aspect of the real world, and the information processing leads to a successful solution of the problem given to the system.
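The three answers above can be condensed into a toy production system, offered only as a caricature of the cognitivist picture: cognition as rule-based rewriting of symbol structures, where the rules key on the form of the symbols alone. All facts and rules below are invented for illustration.

```python
# A minimal production system: "facts" are tuples of symbols, and a
# rule rewrites them by matching on their form alone - the system
# never consults what the symbols mean.
facts = {("bird", "tweety"), ("rule", "bird", "flies")}

def step(facts):
    """One round of a modus-ponens-like rewrite: from ('rule', P, Q)
    and (P, x), derive (Q, x). Purely syntactic pattern matching."""
    new = set(facts)
    for f in facts:
        if f[0] == "rule":
            _, p, q = f
            for g in facts:
                if g[0] == p:
                    new.add((q, g[1]))
    return new

derived = step(facts)
# To an observer the system "infers" that Tweety flies; to the system
# itself there are only shapes of symbols being shuffled by a rule.
assert ("flies", "tweety") in derived
```

On the cognitivist criterion, the system functions adequately just when its symbols appropriately represent some aspect of the world (here, birds and flying) and the rule-driven processing solves the problem posed to it.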