The Computational Theory of Mind
Over the past thirty years, it has been common to hear the mind likened to a digital computer. This essay is concerned with a particular philosophical view that holds that the mind literally is a digital computer (in a specific sense of “computer” to be developed), and that thought literally is a kind of computation. This view—which will be called the “Computational Theory of Mind” (CTM)—is thus to be distinguished from other and broader attempts to connect the mind with computation, including (a) various enterprises at modeling features of the mind using computational modeling techniques, and (b) employing some feature or features of production-model computers (such as the stored program concept, or the distinction between hardware and software) merely as a guiding metaphor for understanding some feature of the mind. This entry is therefore concerned solely with the Computational Theory of Mind (CTM) proposed by Hilary Putnam and developed most notably for philosophers by Jerry Fodor [1975, 1980, 1987, 1993]. The senses of ‘computer’ and ‘computation’ employed here are technical; the main tasks of this entry will therefore be to elucidate: (a) the technical sense of ‘computation’ that is at issue, (b) the ways in which it is claimed to be applicable to the mind, (c) the philosophical problems this understanding of the mind is claimed to solve, and (d) the major criticisms that have accrued to this view.
- 1. Main Theses
- 2. Philosophical Claims for the Theory
- 3. Criticisms of the Theory
- 3.1 Does syntax explain semantics?
- 3.2 Is the semantic vocabulary applied univocally?
- 3.3 Are all of our cognitive abilities formalizable and computable?
- 3.4 Is computation sufficient for understanding?
- 3.5 Is computation universal and intrinsic?
- 3.6 Is CTM the “only game in town”?
- 3.7 Does psychology require vindication?
- 3.8 Externalism
- 3.9 Embodied and Embedded Cognition
- Academic Tools
- Other Internet Resources
- Related Entries
The Computational Theory of Mind combines an account of reasoning with an account of mental states. The latter is sometimes called the Representational Theory of Mind (RTM). This is the thesis that intentional states such as beliefs and desires are relations between a thinker and symbolic representations of the content of the states: for example, to believe that there is a cat on the mat is to be in a particular functional relation (characteristic of the attitude of belief) to a symbolic mental representation whose semantic value is “there is a cat on the mat”; to hope that there is a cat on the mat is to be in a different functional relation (characteristic of the attitude of hoping rather than of believing) to a symbolic mental representation with the same semantic value.
The thesis about reasoning, which we will call the Computational Account of Reasoning (CAR), depends essentially upon this prior claim that intentional states involve symbolic representations. According to CAR, these representations have both semantic and syntactic properties, and processes of reasoning are performed in ways responsive only to the syntax of the symbols—a type of process that meets a technical definition of ‘computation’, and is known as formal symbol manipulation. (I.e., manipulation of symbols according to purely formal—i.e., non-semantic—techniques. The word ‘formal’ modifies ‘manipulation’, not ‘symbol’.)
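The idea of manipulation responsive only to syntax can be made concrete with a toy sketch. The rule and the example premises below are invented here purely for illustration; the point is that the derivation proceeds entirely by matching the shapes of strings, with no access to what the strings mean:

```python
# A sketch of "formal symbol manipulation": modus ponens applied purely
# syntactically. The function never consults the meanings of the strings;
# it only matches their shapes.

def modus_ponens(premises):
    """From 'P' and 'if P then Q' (matched as string patterns), derive 'Q'."""
    derived = set()
    for p in premises:
        for q in premises:
            prefix = f"if {p} then "   # the purely syntactic pattern to match
            if q.startswith(prefix):
                derived.add(q[len(prefix):])
    return derived

premises = {"it rains", "if it rains then the mat is wet"}
print(modus_ponens(premises))  # {'the mat is wet'}
```

Nothing in the procedure depends on ‘it rains’ being about rain; any strings of the right shape would be treated identically, which is exactly the sense in which the manipulation is formal.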
These notions of “formal symbol manipulation” and “computation” are technical, and ultimately derive from discussions in mathematics in the late 19th and early 20th centuries. The project of formalization began in response to a crisis that developed in mathematics upon the discovery that there were consistent geometries that denied Euclid's parallel postulate. (I.e., the claim that for any line L on a plane and any point P on that plane but not located along L, there is one and only one line through P parallel to L.) The parallel postulate's overwhelming plausibility was not based upon anything else that was explicit in Euclid's system, but upon deep-seated geometric/spatial intuitions. It had long been assumed that such intuitively-correct claims in Euclidean geometry were necessarily true in the sense that they could not consistently be denied. The discovery in the early 19th century of consistent geometries that were not consonant with our spatial intuitions prompted mathematicians like Peano, Frege and Hilbert to seek ways to regiment mathematical reasoning so that all derivations were grounded in explicit axioms and rules of inference, and the semantic intuitions of the mathematician were either excluded or explicitly codified. The most influential strategy for formalization was that of Hilbert, who treated formalized reasoning as a “symbol game”, in which the rules of derivation were expressed in terms of the syntactic (or perhaps better, non-semantic) properties of the symbols employed.
One of the powerful results of the formalist program was the discovery that large swaths of mathematics can in fact be formalized in this way—i.e., that the semantic relationships intuitively deemed important in a domain like geometry can in fact be preserved by inferences sensitive only to the syntactic form of the expressions. Hilbert himself carried out such a project with respect to geometry, and Whitehead and Russell extended such a method to arithmetic. And this project served as a model for other, ultimately less successful, reductive projects outside of mathematics, such as logical behaviorism in psychology. Even in mathematics, however, there are limits to what can be formalized, the most important and principled of which are derivative from Gödel's incompleteness proof.
A second important issue in nineteenth and early twentieth century mathematics was one of delimiting the class of functions that are “computable” in the technical sense of being decidable or evaluable by the application of a rote procedure or algorithm. (Familiar examples of algorithmic procedures would be column addition and long division.) Not all mathematical functions are computable in this sense; and while this was known by mathematicians in the 19th century, it was not until 1936 that Alan Turing proposed a general characterization of the class of computable functions. It was in this context that he proposed the notion of a “computing machine”—i.e., a machine that does things analogous to what a human mathematician does in “computing” a function in the sense of evaluating it by application of a rote procedure. Turing's proposal was that the class of computable functions was equivalent to the class of functions that could be evaluated in a finite number of steps by a machine of the design he proposed. The basic insight here was that any operations that are sensitive only to syntax can be duplicated (or perhaps simulated) mechanically. What the mathematician following a formal algorithm does by way of recognition of syntactic patterns as syntactic, a machine can be made to do by purely mechanical means. Formalization and computation are thus closely related, and together yield the result that reasoning that can be formalized can also be duplicated (or simulated) by the right type of machine. Turing himself seems to have been of the opinion that a machine operating in this way would literally be doing the same things that the human performing computations is doing—that it would be “duplicating” what the human computer does.
But other writers have suggested that what the computer does is merely a “simulation” of what the human computer does: a reproduction of human-level performance, perhaps through a set of steps that is at some level isomorphic to those the human undertakes, but not in such a fashion as to constitute doing the same thing in all relevant respects. (For example, one might take the human computer's awareness that the symbols are symbols of something as partially constitutive of the operation counting as a computation.)
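Turing's design can be illustrated with a minimal sketch. The machine below is a toy example constructed for this entry, not one of Turing's own: it computes the successor function on unary numerals (a number n written as n occurrences of ‘1’) purely by reading, writing, and moving over tape symbols:

```python
# A minimal deterministic Turing-machine runner (illustrative sketch only).

def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Run the machine until it enters the state 'halt'.

    `transitions` maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right), or 0 (stay). The tape is stored
    as a dict from cell index to symbol, so it is unbounded in both
    directions, as Turing's design requires.
    """
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    # Read off the non-blank portion of the tape.
    used = [i for i, s in cells.items() if s != blank]
    return "".join(cells.get(i, blank) for i in range(min(used), max(used) + 1))

# Successor machine: scan right past the 1s; at the first blank, write a 1.
successor = {
    ("start", "1"): ("start", "1", +1),  # keep moving right over the 1s
    ("start", "_"): ("halt",  "1",  0),  # first blank: write 1 and halt
}

print(run_turing_machine("111", successor))  # '1111' (3 + 1 in unary)
```

Note that the transition table mentions only states and tape symbols, never what the numerals denote; the machine evaluates the function by purely mechanical means, which is the insight described above.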
As mentioned more informally at the outset, CTM combines a Representational Theory of Mind (RTM) with a Computational Account of Reasoning (CAR). The RTM is in this case informed by the notion of symbolic representation employed in the technical notion of computation: mental states are held to be “representational” in the sense of including, as constituents, symbolic representations having both semantic and syntactic properties, just as symbols employed in mathematical computations do. While this claim differs from early modern versions of representationalism that likened ideas to pictures rather than symbols, it becomes philosophically important only in conjunction with the CAR. According to this account, reasoning is a process in which the causal determinants are the syntactic properties of the symbols in the “language of thought” (LOT) or “mentalese”.
The technical notions of formalization and computation arguably do some important philosophical work here: formalization shows us how semantic properties of symbols can (sometimes) be encoded in syntactically-based derivation rules, allowing for the possibility of inferences that respect semantic value to be carried out in a fashion that is sensitive only to the syntax, and bypassing the need for the reasoner to employ semantic intuitions. In short, formalization shows us how to tie semantics to syntax. Turing's notion of a computing machine, in turn, shows us how to link up syntax to causation, in that it is possible to design a mechanism that is capable of evaluating any formalizable function.
The most obvious domain for CTM is that of occurrent propositional attitude states—that is, states that occur at some specific moment in a person's mental life, and have the sort of content that might be expressed by a propositional phrase, such as a judgment that the cat is at the door or a desire that the cat would stop tearing at the screen. Here we perhaps have the most plausible cases of mental states that might be grounded in something like token mental representations.
Within this class of occurrent states, however, we may additionally distinguish between the kinds of states that occur in explicit, conscious judgments and mental states that are not conscious because they take place at a “level of processing” that is too low to be brought to conscious awareness—e.g., processes of contour detection in early vision. (Such processes might be called “infraconscious” in distinction to the subconscious or unconscious processes championed by Freud and Jung.) (Cf. Horst 1995.) Many advocates of CTM apply the theory not only to explicit judgements and occurrent desires, but also to a broad array of infraconscious states.
However, advocates of CTM often speak of it more generally as an account of beliefs and desires which are then cashed out in dispositional rather than occurrent terms. Such states are arguably more problematic for CTM than occurrent states, as there are many things one might be thought to “believe” or “desire” in the dispositional senses of those terms, but which could not plausibly be supposed to be explicitly represented in the form of a symbol token.
An additional issue regarding the intended scope of the theory is that of how comprehensive an account of mental states and processes it is intended to be. Advocates of CTM and critics alike have often assumed that CTM makes claims to be a quite general account of reasoning. This is complicated, however, by Fodor's distinction between “modular” and “global” mental processes, and his judgement (in Fodor) that it is only the former that are likely to be computational in the classic sense. While this view has struck some readers as surprising, Fodor claims that, while advocating the truth of CTM since the 1970s, it “hadn't occurred to [him] that anyone could think that it's a very large part of the truth; still less that it's within miles of being the whole story about how the mind works.” [2000, page 1] We may therefore refine questions about the truth of CTM to questions about its truth as a theory of particular kinds of mental processes.
The notion of “computation” that has been described above, based in the work of Turing and Church, has been widely used by both advocates of CTM [Fodor 1981, 1987; Pylyshyn 1980, 1984; Haugeland 1978, 1981] and their critics [Searle 1980, 1984, 1992; Dreyfus 1972, 1992; Horst 1996, 1999]. There are, however, other notions of “computation” that have figured in the histories both of computer science and of cognitive science.
1.4.1 Human Problem-Solving
Turing's seminal article, “On Computable Numbers”, builds a new formal notion of “computation” upon the model of what a human mathematician does in solving a problem through application of an algorithm. It is interesting to note, however, that in the article, Turing uses the word ‘computer’ only for human beings performing such operations. Turing seems to have viewed the evaluation of functions through algorithmic means as something that both humans and machines are capable of doing. However, his usage in this paper still reflects an older usage of the word ‘computation’ in mathematics. Critics of CTM who view such operations, when performed by humans, as involving rich intentional states may contest the assumption that computing machines actually “compute” in this older sense, even if they perform operations on non-intentional states that mirror the formal properties of computations performed by humans.
1.4.2 Church and von Neumann
While Turing has been given pride of place in the history of digital computation, similar ideas were being introduced at about the same time by Alonzo Church and John von Neumann. The analyses of Turing and Church were shown to be equivalent, and the claim that they capture the intuitive notion of computability is referred to as “the Church-Turing thesis”. Von Neumann provided an abstract architecture for a computing machine that is significantly different from Turing's at an engineering level, and production-model computers more closely resemble von Neumann's architecture than a Turing Machine. However, from a mathematical standpoint, it was shown that any function that is computable by either type of machine is also computable by the other [Minsky 1967].
1.4.3 McCulloch and Pitts
Warren McCulloch and Walter Pitts developed an importantly different type of computing machine, whose architecture was more directly inspired by what they saw to be similarities between neural and digital circuits. McCulloch and Pitts employed an architecture consisting of a network of nodes connected by links, which they saw as paralleling the connective structure of the brain. They viewed representations as activation patterns in such a network, and treated the nodes themselves as neither symbolic nor representational. This approach is widely viewed as the ancestor of an alternative research programme in AI, sometimes called “connectionism” [Rosenblatt 1957; Rumelhart and McClelland 1987; see also the entries on mental representation, connectionism].
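The basic McCulloch-Pitts unit can be sketched in a few lines (a simplified reconstruction, not their original formalism): a unit fires just in case the weighted sum of its binary inputs reaches a threshold, and with suitable weights and thresholds a single unit realizes a logic gate, which is what suggested the parallel between neural and digital circuits:

```python
# A simplified McCulloch-Pitts unit: fires (outputs 1) iff the weighted
# sum of its binary inputs meets the threshold.

def mp_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Particular choices of weights and thresholds realize the logic gates
# of digital circuitry:
AND = lambda a, b: mp_unit((a, b), (1, 1), 2)   # fires only if both fire
OR  = lambda a, b: mp_unit((a, b), (1, 1), 1)   # fires if either fires
NOT = lambda a:    mp_unit((a,),   (-1,),  0)   # inhibitory input

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```

Note that in a network of such units the individual node carries no symbolic or representational content of its own; representation, on this view, lives in patterns of activation across the network.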
1.4.4 Analog Computation
“Digital computation” is often contrasted with “analog computation”. The expression ‘analog computation’ has both a colloquial meaning and a narrower technical meaning. The colloquial meaning turns upon the notion that some machines have components that represent information in a fashion that is analogous to what is represented. (For example, use of a compass dial to represent geographic directions.) The technical meaning involves a contrast with digital systems, where “digital” means that the individual circuits are each capable of only a finite number of discrete states. (For example, a numerical value of 0 or 1.) “Analog”, in the narrow sense, simply means “not digital”, and applies to systems whose components are capable of a continuum of states. (For example, numerical values consisting of all of the real numbers from 0 to 1.) In this technical sense, “analog” systems need not be analogous to what they represent.
For purposes of technical accuracy, it should also be noted that “digital” is often erroneously conflated with “binary”. A digital system can have any (finite) number of discrete values. While production-model computers employ a binary system (one with two values, represented as 0 and 1), a three-valued (or n-valued) system (say, with values represented as 0, 1, and 2) would also count as digital.
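The point can be illustrated with a small sketch: the same number can be written out in a two-valued or a three-valued digit system, and both representations are equally “digital”, since each element takes one of a finite number of discrete values:

```python
# "Digital" means finitely many discrete values per element, not
# specifically two. The same integer in binary and in a three-valued
# (ternary) system; both count as digital representations.

def to_base(n, base):
    """Return the digits of non-negative integer n in the given base."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % base)   # least significant digit first
        n //= base
    return digits[::-1]           # most significant digit first

print(to_base(11, 2))  # [1, 0, 1, 1] -- binary: two discrete values
print(to_base(11, 3))  # [1, 0, 2]    -- ternary: three values, still digital
```

An analog representation, by contrast, would allow each element a continuum of values, and so could not be captured by any finite digit alphabet.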
1.4.5 Computational Neuroscience
There is an area of cognitive science called “computational neuroscience”. Researchers in this area are interested in neuroscience, and hence do not treat cognition in abstraction from the “implementation level”. However, they, and many other neuroscientists, often affirm that “the brain is a computer”. Just what this means, however, is often unclear. It may mean nothing more than that the brain is involved in information-processing that can be described in algorithmic terms, without a further commitment to the thesis that such information-processing is accomplished through the application of algorithms to symbolic representations. Many physical and biological processes can likewise be characterized in algorithmic terms, but such descriptions are essentially attempts to state laws governing mechanisms, and are “computational” only in a sense that would license speaking of all physical or biological systems as computers.
CTM rose to prominence as one of the most important theories of mind in the 1980s. This may in part have been due to the intuitive attraction of the computer metaphor, which played upon the notion of a technology that was rapidly gaining public recognition and technological applications. By this time, moreover, the computer had shaped the understanding of the mind through the influence of some projects in the sciences of cognition (such as David Marr's model of vision (Marr 1982)) and in artificial intelligence, where researchers sought to endow machines with human-level competences in reasoning, language, problem-solving and perception, though not always by replicating the mechanisms by which these are performed in humans. In addition, CTM's advocates also claimed that it provided solutions to several important philosophical problems, and its plausibility in these areas was an important contributor to its rapid rise to popularity.
The most important philosophical benefit claimed for CTM was that it purported to show how reasoning could be a non-mysterious sort of causal process, and could nonetheless be sensitive to semantic relations between judgments. The background problem here was the received view that reasons are not causes. On the one hand, it is hard to see how a purely causal process could proceed on the basis of the semantic values of propositions. To posit a mechanism that understood the meanings of mental symbols would in effect be to posit a little interpreter or homunculus inside the head, and then the same problems of coordinating reason and causation would recur for the homunculus, resulting in a regress of interpreters. On the other hand, it is hard to see how a process specified in purely causal terms could thereby count as a reasoning process, as calling something “reasoning” locates it with respect to norms and not merely to causes. (That is, to call a process “rational” is not merely to describe its causal etiology, but to say that it meets, or at least is evaluable by, certain standards of reasoning, such as validity.)
CTM (or, more specifically, the CAR) can be seen as a compatibility proof, showing the compatibility of intentional realism (i.e., a commitment to the reality of the semantic properties of mental states, and to the causal roles of mental states in the determination of behavior) with the claim that mental processes are all causal processes for which a causal mechanism could, in principle, be specified. The trick to linking semantics to causation is to link them both intermediately to syntax. Formalization shows us how to link semantics to syntax, and computation shows us how to link syntax to causal mechanisms. Therefore, there is a consistent model on which bona fide reasoning processes (processes that respect the semantic values of the terms) can be carried out through non-mysterious physical mechanisms: namely, if the mind is a computer in the sense that its mental representations are such that all semantic properties are tracked by corresponding syntactic properties that can be exploited by the “syntactic engine” (Haugeland 1981) that is causally responsible for reasoning.
A compatibility proof is in itself weak evidence for the truth of a theory. However, through the 1980s and 1990s, many philosophers were convinced by Fodor's claim that CTM is “the only game in town”—i.e., that the only accounts we have of cognitive processes are computational, and that this implies the postulation of a language of thought and operations performed over the representations in that language. Given this argument that CTM is implicit in the theories produced by the sciences of cognition (see below), its additional ability to provide a compatibility proof for physicalism and intentional realism solidified its philosophical credentials by showing that this interpretation of the sciences of cognition was philosophically productive as well.
In addition to the compatibility proof, some philosophers viewed CTM—or more precisely, RTM—as providing an explanation of the semantic properties of mental states as well. Fodor, for example, claims that, just as public language utterances inherit their semantic properties from the thoughts of the speaker, thoughts inherit their semantic properties from the mental representations in a LOT that are among their constituents. If I have a thought that refers to Bill Clinton, it is because that thought is a relation to a mental representation that refers to Bill Clinton. If I think “Clinton was President in 1995” it is because I am in a particular functional relation (characteristic of belief) to a mental representation that has the content “Clinton was President in 1995”.
Within this general view of the semantics of mental states, however, there are at least three variant positions, here arranged from weakest to strongest.
- Given an adequate account of the semantics of mental representations, one does not then need a further account of the semantics of intentional states, save for the fact that they “inherit” their semantic values from those of their constituent representations. (This view is absolutely neutral as to the nature of an adequate account of the semantics of mental representations. Indeed it is neutral about the prospects of such an account: it claims that given such a semantics for mental representations, no further work is needed for a semantics of intentional states.) [see criticisms]
- The claim that mental representations are symbolic representations is supposed to provide an account of their semantic nature: i.e., a mental representation is said to be “about Clinton” in exactly the same sense that other symbols (e.g., public language symbols) are said to be “about Clinton”. Thus, if one thinks there is already an adequate semantics for symbols generally (e.g., Tarskian semantics), no further semantic account is needed to cash out what it is for a mentalese symbol to have semantic properties. [see criticisms]
- The claim that the semantic properties of the symbols are explained by or supervene upon the syntactic properties. (This claim was never endorsed by major proponents of CTM such as Putnam, Fodor or Pylyshyn, and is probably best understood as a misunderstanding of CTM.) [see criticisms]
In addition to these potential contributions to philosophy of mind, CTM was at the same time in a symbiotic relationship with applications of the view of the mind as computer in artificial intelligence and the sciences of cognition. On the one hand, philosophical formulations such as CTM articulated a general view of mind and computation that was congenial to many researchers in AI and cognitive science. On the other hand, the successes of computational models of reasoning, language and perception lent credibility to the idea that such processes might be accomplished through computation in the mind as well.
Two connections with empirical research stand out as of particular historical importance. The first connection is with Chomskian linguistics. Chomsky introduced a “cognitivist revolution” in linguistics that displaced the then-prevalent behaviorist understanding of language-learning. The latter, argued Chomsky, was incapable of accounting for the fact that a child latches on to grammatical rules, and is then able to apply them in indefinitely many novel contexts, in ways underdetermined by the finite set of stimuli s/he has been exposed to. This, argued Chomsky, required the postulation of a mechanism that did not work simply on general principles of classical and operant conditioning, but was specifically optimized for language-learning. Chomskian linguists often spoke of the child's efforts at mastery of a grammar in terms of the formation and confirmation of hypotheses; and this, argued Fodor, required an inner language of thought. Chomskian linguistics was thus viewed as requiring at least RTM, and computationalists took it as plausible that the mechanisms underlying hypothesis-testing could be cashed out in computational terms.
Chomskian grammar also stressed features of linguistic competence such as systematicity (the person who is able to understand the sentence “the dog chased the cat” is able to understand the sentence “the cat chased the dog” as well) and productivity (the ability of a person to have an infinite number of thoughts generated from a finite set of lexical primitives and recursive syntactic rules). Thought, of course, also possesses these features. From a computationalist perspective, these two facts are not accidentally linked: natural language is systematic and productive because it is an expression of the thoughts of a mind that already possesses systematicity and productivity; the mind possesses these features, in turn, because thought takes place in a syntactically-structured representational system. Indeed, argues Fodor, a syntactically-structured language is the only known way of securing these features, and so there is strong prima facie reason to believe that RTM is true.
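Productivity and systematicity can be illustrated with a toy grammar (invented here; it is not intended as a serious fragment of English): a finite lexicon plus one recursive rule generates unboundedly many distinct sentences, and “the dog chased the cat” and “the cat chased the dog” fall out of the very same rule applied to the same primitives:

```python
# A toy recursive grammar: S -> NP V NP | NP V 'that' S.
# Finite lexicon, one recursive rule; each level of embedding
# multiplies the number of generable sentences (productivity).
import itertools

NOUNS = ["the dog", "the cat"]
VERBS = ["chased", "saw"]

def sentences(depth):
    """Yield all sentences with exactly `depth` levels of embedding."""
    if depth == 0:
        for n1, v, n2 in itertools.product(NOUNS, VERBS, NOUNS):
            yield f"{n1} {v} {n2}"
    else:
        for n, v in itertools.product(NOUNS, VERBS):
            for s in sentences(depth - 1):
                yield f"{n} {v} that {s}"

# Systematicity: both orderings are generated by the same rule
# from the same primitives.
base = list(sentences(0))
print("the dog chased the cat" in base)  # True
print("the cat chased the dog" in base)  # True
```

The count of sentences grows geometrically with embedding depth, so a finite set of primitives and rules yields an unbounded set of expressions, which is the formal point behind the appeal to a syntactically-structured representational system.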
The second important link with cognitive science is with David Marr's theory of vision. Marr [Marr 1982; Marr and Poggio 1977] pioneered a computational approach to vision. While few of the details of their account have survived into current work in the science of vision, what was most influential about their work was not the empirical details but a set of powerful metatheoretical ideas. Marr distinguished three levels at which a theory of vision (or, by extension, of other cognitive processes) must operate. At the highest level was a specification of what task a system was designed to perform: e.g., in the case of vision, to construct a three-dimensional representation of distal stimuli on the basis of inputs to the retina. This level Marr (somewhat unfortunately) called the “computational level”. At the other end of the spectrum was a level describing the “implementation” of this function by the “hardware” of the system (e.g., the neurochemical properties underlying phototransduction in retinal cells). These two levels alone present a conventional functionalist picture. But in between them Marr inserted an “algorithmic” level of explanation. Here the task of the theorist was to isolate a plausible candidate for the algorithm the system was employing in performing the task—an algorithm that must both be appropriate to the task specified at the “computational” level and compatible with the neurological facts at the “implementational” level.
Such an intermediate algorithmic level is of course closely related to a strategy for modeling visual processes: the modeler starts with psychophysical data (say, Weber's law) and then attempts to construct models that have isomorphic input/output conditions. In such computational models, the work is done by the algorithms used to transform input into output. While it is possible to view such modeling as simply on a par with, say, the modeling of weather systems (in which there is no assumption that what is modeled is in any interesting sense “algorithmic” or “computational”) the availability of computational modeling techniques also suggests the hypothesis that visual processes themselves are accomplished algorithmically, as algorithmic methods are at least among the available ways of accomplishing the informational tasks involved in the psychophysical data. There is, of course, a familiar philosophical ambiguity lurking in the wings here—the confusion of behaving in a fashion describable by a rule with following or applying a rule—and arguably advocates of an algorithmic level of description have not always kept this distinction in mind. Nonetheless, with this caveat (i.e., the unclarity of whether the algorithmic level is simply a level of description or whether the system is said to be applying an algorithm), some version of Marr's three-level approach quickly became something of an orthodoxy in cognitive science in the 1980s.
Marr's approach has obvious connections with CTM. Both involve inner representations and algorithmic processes that mediate transformations from one representation to another. The growth of models employing Marr's three-tiered approach in the sciences of cognition seemed to provide empirical support for the view that the mind is an algorithmic symbol-processor. But research like Marr's also suggested a moral for RTM and CTM that made them, in a way, potentially more radical. One might have held RTM and CTM only as theories of the kinds of mental processes that can be articulated in sentences in a natural-language—“high level” processes like conscious thoughts. Marr's algorithms, however, apply at a much simpler level, such as the information processes going on between two levels of cells in the visual system. Such processes are not subject to conscious inspection or intervention, and could not be reported in natural language by the speaker. Such a theory therefore involves the postulation of a host of symbols and algorithms that are not so much unconscious as infraconscious—that is, processes that take place at a far simpler level than what philosophers have been accustomed to thinking of as “thoughts”.
The strongest proposed relationship between the syntax and semantics of symbols—that semantic properties supervene upon syntactic properties—was, as mentioned, never embraced by the major proponents of CTM. Indeed, Putnam (1980) pointed out a major obstacle to such a view. It consists in a consequence of the Löwenheim-Skolem theorem in logic, from which it follows that every formal symbol system has at least one interpretation in number theory. This being the case, take any syntactic description D of Mentalese. Because our thoughts are not just about numbers, a canonical interpretation of the semantics of Mentalese (call it S) would need to map at least some of the referring terms onto non-mathematical objects. However, Löwenheim-Skolem assures that there is at least one interpretation S* that maps all of the referring terms onto only mathematical objects. S* cannot be the canonical interpretation, but there is nothing in the syntax of Mentalese to explain why S is the correct interpretation and S* is not. Therefore syntax underdetermines semantics. (Compare acknowledgment of this in Pylyshyn 1984: page 44.)
Fodor (1981) has proposed that the view that mental states are relations to symbolic representations is supposed to explain how mental states come to have semantic values and intentionality. This is, he claims, because it is mental representations that have these properties “in the first instance,” while propositional attitude states “inherit” them from the mental representations that are among their constituents. This view was criticized by Searle (1980, 1984) and Sayre (1986, 1987), and the line of criticism was developed by Horst (1996). The criticism is briefly recapitulated as follows. Suppose we represent Fodor's claim schematically as:
(F) Mental state M means P because mental representation MR means P.
Such a claim is most plausible under the assumption that the expression “… means P” is univocal over the two uses in (F)—i.e., that “… means P” functions the same way when applied to mental states (such as beliefs, desires, and occurrent judgments) and to mental representations (i.e., symbols in a language of thought). Under this assumption, the “meaning” of mental representations is clearly a potential explainer of the “meaning” of mental states, because it is precisely the same property of “meaning” that is in question in both instances.
However, this assumption is in tension with the assumption that the kind of “meaning” attributed to mental representations is the same as the kind of “meaning” that is attributed to symbols such as utterances and inscriptions. In such cases, the critics claim, attributions of meaning, such as “This inscription meant P”, have a hidden complexity in their logical structure. The verb ‘means’ does not express simply a two-place relation between an inscription and its semantic value; rather, it must covertly report either (a) speaker meaning, (b) hearer interpretation or (c) interpretation licensed by a particular linguistic convention. As it is specifically symbolic meaning—i.e., “meaning” in the sense of that word which is applied to symbols and not some other use of the term—that formalization and computation show us how to link to syntax, it is important that it is this usage that is at work in CTM. But if we assume that the phrase “MR means P” in (F) must be cashed out in terms of speaker meaning, hearer interpretation, or conventional interpretability, then we are in none of these cases left with a potential explainer of the kind of “meaning” that is ascribed to mental states. Each of these notions is itself conceptually dependent upon the notion of meaningful mental states, and so one cannot explain mental meaning in terms of symbolic meaning without being involved in an explanatory circle.
There have been two sorts of explicit replies to this line of criticism, and perhaps a third which is largely implicit. One line of reply is to say that there is a univocal usage of the semantic vocabulary: namely, what is supplied by semantic theories such as Tarski's. On this view, “a semantics” is simply a mapping from symbol-types onto their extensions, or else an effective procedure for generating such a mapping. However, such a view of “semantics” is arguably too thin to be explanatory, as there are an indefinite number of mapping relations, only a few of which are also semantic. (Cf. Blackburn 1984, Field 1972, Horst 1996.) A second line of reply is to look to an alternative “thicker” semantics, such as that of C.S. Peirce. (Cf. von Eckardt 1993.) There has arguably not been sufficient discussion of the relation between Peircean semantics and the equivocity view which distinguishes “mental meaning” (i.e., the sense of ‘meaning’ that can be applied to mental states) from speaker meaning and conventional interpretability of symbols. However, in one respect the Peircean strategy connects with a third line of reply that computationalists have made, which might also be a path to rapprochement between sides. Computationalists have generally come to endorse causal theories of the semantics of mental representations. Regardless of one's outlook on the general prospects of causal theories of meaning, a sense of “meaning” that is cashed out in terms of causal covariance or causal etiology cannot be equivalent to either speaker meaning or conventional interpretability. The good news for the computationalist is that this may save the theory from explanatory circularity and regress. 
However, it arguably carries the price of threatening CTM's compatibility proof: the kind of “meaning” that is shown to be capable of being tied to syntax in computing machines is the conventional kind: e.g., that such-and-such a sequence of binary digits is interpretable under certain conventions as representing a particular integer. But if the kind of “meaning” attributed to mental representations is not of this sort, then computers have not shown that the relevant sorts of “meaning” can be tied to syntax in the necessary ways. Though this problem may not be insuperable, it should perhaps at least be regarded as an open problem. Likewise, the viability of this strategy is further dependent upon the prospects of a causal semantics to explain the kind of “meaning” attributed to mental states. (Cf. Horst, 1996)
Some early critics of CTM started from the observation that not all processes are computable (that is, reducible to an algorithmic solution), and concluded that computational explanations are only possible for such mental processes as might turn out to be amenable to algorithmic techniques. However, there is strong reason to believe that there are problem domains that humans can think about and attain knowledge in, but which are not formally computable.
The oldest line of argument here is due to J.R. Lucas (1961), who has argued over a series of articles that Gödel's incompleteness theorem poses problems for the view that the mind is a computer. More recently, Penrose (1989, 1990) has developed arguments to the same conclusion. The basic line of these arguments is that human mathematicians in fact understand and can prove more about arithmetic than is computable. Therefore there must be more to (at least this kind of) human cognition than mere computation. There has been extensive debate on this topic over the past forty years (see, for example, the criticisms in Lewis, Bowie 1982, and Feferman 1996), and the continued discussion suggests that the proper view to take of this argument is still an open question.
A distinct line of argument was developed by Hubert Dreyfus (1972). Dreyfus argued that most human knowledge and competence—particularly expert knowledge—cannot in fact be reduced to an algorithmic procedure, and hence is not computable in the relevant technical sense. Drawing upon insights from Heidegger and existential phenomenology, Dreyfus pointed to a principled difference between the kind of cognition one might employ when learning a skill and the kind employed by the expert. The novice chess player might follow rules like “on the first move, advance the King's pawn two spaces”, “seek to control the center”, and so on. But following such rules is precisely the mark of the novice. The chess master simply “sees” the “right move”. There at least seems to be no rule-following involved, but merely a skilled activity. (Since the original publication of What Computers Can't Do in 1972, the level of play of the best chess computers has risen dramatically; however, it bears noting that the brute force methods employed by champion chess computers seem to bear little resemblance to either novice or expert play in humans.) Dreyfus illustrates his claims with references to the problems faced by AI researchers who attempted to codify expert knowledge into computer programs. Success or failure here has little to do with the computing machinery itself, and much to do with whether expert competence in the domain in question can be captured in an algorithmic procedure. In certain well-circumscribed domains this has succeeded; but more often than not, argues Dreyfus, it is not possible to capture expert knowledge in an algorithm, particularly where it draws upon general background knowledge outside the problem domain.
There have been two main lines of response to Dreyfus's criticisms. The first is to claim that Dreyfus is placing too much weight upon the present state of work in AI, and drawing an inference about all possible rule-based systems on the basis of the failures of particular examples of a technology that is arguably still in its infancy. Part of this criticism is surely correct: to the extent that Dreyfus's argument is intended to be inductive, it is hasty and, more importantly, vulnerable to refutation by future research. Dreyfus might reply that the optimism of conventional AI researchers is equally unsupported, but we must conclude that a purely inductive generalization about what computers cannot do would provide only a weak argument. However, Dreyfus's argument is not purely inductive; it also contains a more principled claim about the nature of expert performance and the unsuitability of rule-based techniques to duplicating that performance. This argument is a complicated one, and has not received decisive support or refutation among philosophers of mind.
The second line of reply to Dreyfus's arguments is to concede that there may be a problem (perhaps even a principled problem) for a certain type of system—e.g., a rule-based system—but to claim that other types of systems avoid this problem. Thus one might look to the “bottom-up” strategies of connectionist networks or Rodney Brooks's attempts to build simple insect-level intelligence as more promising approaches that side with Dreyfus in criticizing the limitations of rule-based systems. Dreyfus himself seems to have experimented with both sides of this position. In a 1988 article with his brother Stuart, Dreyfus seemed inclined to the view that neural networks stand in better stead in this regard, and seem to handle some problem domains naturally wherein rule-and-representation approaches have encountered problems. (He has likewise endorsed aspects of Walter Freeman's attractor theory as echoing some features of Merleau-Ponty's account of how a skilled agent moves towards “maximum grip.”) However, in the 1992 version of What Computers Still Can't Do he opined that “the neglected and then revived connectionist approach is merely getting its deserved chance to fail” (xxxviii). More narrowly, however, one might point out that this line of objection concedes Dreyfus's real point, which was never that there could not be a piece of intelligent hardware called a “computer”, but rather that one could not build intelligence or cognition out of “computation” in the sense of “rule-based symbol manipulation.”
Perhaps the most influential criticism of CTM has been John Searle's (1980) thought experiment known as the “Chinese Room”. In this thought experiment, a human being is placed in the role of the CPU in a computer. He is placed inside a room with no way of communicating with the outside except for symbolic communications that come in through a slot in the room. These are written in Chinese, and are meaningful inscriptions, albeit in a language he does not understand. His task is to produce meaningful and appropriate responses to the symbols he is handed in the form of Chinese inscriptions of his own, which he passes out of the room. In this task he is assisted by a rulebook, containing rules for what symbols to write down in response to particular input conditions. This set-up is designed to mimic in all respects the resources available to a digital computer: it can receive symbolic input and produce symbolic output, and it manipulates the symbols it receives on the basis of rules that can be applied on non-semantic information like syntax and symbol shape alone. The only difference in the thought experiment is that the “processing unit” applying these rules is a human being.
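The set-up can be sketched as a lookup procedure. This is a minimal illustration of the structure of the thought experiment, not Searle's own formulation; the Chinese strings and the `rulebook` table are invented placeholders, and the point is simply that nothing semantic figures anywhere in the procedure:

```python
# A minimal sketch of the Chinese Room: the "rulebook" is a table from input
# inscriptions to output inscriptions, and the "person" applies it by shape-
# matching alone. (Invented placeholder strings, not a real conversation.)

rulebook = {
    "你好吗？": "我很好，谢谢。",     # rule: given this shape, write that shape
    "你会下棋吗？": "会一点。",
}

def person_in_room(inscription):
    # The operator matches shapes against the rulebook; at no point does
    # the procedure consult what any inscription means.
    return rulebook.get(inscription, "请再说一遍。")

reply = person_in_room("你好吗？")
```

Whatever the table's size, `person_in_room` operates on inscription shapes alone, which is exactly the feature Searle's argument exploits: the procedure could be followed perfectly by someone with no understanding of Chinese.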
This thought experiment is a direct response to Alan Turing's suggestion that we replace the question “Can machines think?” with the question of whether they can succeed in an “imitation game”, in which questioners are asked to determine, on the basis of their answers alone, whether an unseen interlocutor is a person or a machine. This test has come to be called the Turing Test. Searle's response is, in essence, this: Let us assume for purposes of argument that it is possible to produce a program—a set of rules—that would allow a machine that followed these rules to pass the Turing Test. Now, would the ability to pass this test in and of itself be sufficient to establish that the thing that passed the test was a thinking thing? Searle's Chinese Room meets (by supposition) the criteria of a machine that can pass the Turing Test in Chinese: it produces responses that are taken to be meaningful and conversationally appropriate, and it does so by wholly syntactic means. However, when we ask, “Does the Chinese Room, or any portion thereof (say, the person inside), understand the utterances?” the answer Searle quite naturally urges upon us is “no”. Indeed, by supposition, the person inside does not understand Chinese, and neither the rulebook nor the human-rulebook-room system seems to be the right sort of thing to attribute understanding to at all. The upshot is clear: even if it is possible to simulate linguistic competence by purely computational means, doing so does not provide sufficient conditions for understanding.
The Chinese Room argument has enjoyed almost as much longevity as CTM itself. It has spawned a small cottage industry of philosophical articles, many of whose positions were presaged in the peer commentaries printed with its seminal appearance in Behavioral and Brain Sciences in 1980. Some critics are willing to bite the bullet and claim that “understanding” can be defined in wholly functional terms, and hence the Chinese Room system really does exhibit understanding. Others have conceded that the apparatus Searle describes would not exhibit understanding, but have argued that the addition of some further features—a robot body, sensory apparatus, the ability to learn new rules and information, or embedding it in a real environment with which to interact—would thereby confer understanding. Searle and others have adapted the thought experiment to extend the intuition that the system lacks understanding in ways that are designed to incorporate these variations. These adaptations, like the original experiment, seem to elicit different intuitions in different readers; and so the Chinese Room has remained one of the more troubling and thought-provoking contributions to this literature.
Searle (1992) also argues that, on standard definitions of computation, every object turns out to be a computer running a program, because there is some possible interpretation of its states that corresponds to the machine table for that program. (Searle suggests that the wall behind him is, under the right interpretation, running a particular word-processing program.) Searle's conclusion is that an object is a computer, not in virtue of its intrinsic properties, but only in relation to an interpretation: semantic and even computational properties “are not intrinsic to the system at all. They depend on an interpretation from the outside.” (Searle, 1992, 209) The mind may thus be a computer, in the sense of having interpretations in terms of a machine table. However, since intentionality is intrinsic to mental states, and does not depend upon external interpretation, it cannot be accounted for in computational terms. Putnam (1988) offers a similar argument in more formal terms involving Finite State Automata. (Block 1978 puts forward related arguments against functionalism in his “Chinese nation” thought-experiment, wherein he argues that the nation of China can implement a functional analysis for the mind of the sort embodied in a machine table, yet is not thereby a thinking thing.)
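The structure of the trivialization argument can be made vivid in a short sketch. This is an illustration of the construction's form under invented labels (the "wall states" and machine run are placeholders), not a formal statement of Searle's or Putnam's proofs:

```python
# Sketch of the Searle/Putnam trivialization argument: given any sequence of
# distinct physical states, one can construct, ex post facto, an
# "interpretation" under which that sequence counts as a run of a chosen
# program. (State labels here are invented placeholders.)

# A chosen machine run we want to "find" in the wall: a two-state automaton.
machine_run = ["q0", "q1", "q0"]

# Arbitrary successive microphysical states of the wall over the same interval.
wall_states = ["w_t0", "w_t1", "w_t2"]

# The ex post facto interpretation: simply pair the states off in order.
interpretation = dict(zip(wall_states, machine_run))

# Under this mapping the wall's actual history "implements" the run —
# but only because the mapping was read off from that very history.
decoded = [interpretation[w] for w in wall_states]
assert decoded == machine_run
```

The construction succeeds for any state sequence and any target run of the same length, which is why critics (discussed below) object that such mappings track only actual histories and not the counterfactual regularities characteristic of genuine computation.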
Some aspects of Searle's characterization may be overstated. It is controversial that every object can be described as running a (non-trivial) program; and Searle's characterization of derived intentionality sometimes makes it sound as though this requires an actual interpreter, as opposed to simply the availability of an appropriate interpretation scheme. But the core of his objection (that semantic and computational properties are extrinsic) does not really require either of these claims. It is enough for his reductio argument that there are clearly some adequately complex systems that would count as computers on this definition, and that this violates our intuitions about what should count as a computer. Likewise, his argument can be made to work equally well without supposing an actual interpreter, but only interpretability-in-principle. (Compare Horst 1996.)
Searle's and Putnam's articles have drawn direct responses, and Searle's version of the argument is also based on assumptions about extrinsic interpretation that are more generally controversial in philosophy of mind. The principal line of response directed at these articles has consisted in arguments that the Searle/Putnam conclusion that every system of suitable complexity has an interpretation scheme whereby it counts as a computer running a program can be reached only by using an inappropriate notion of ‘computation’. The “interpretation” under which the molecules in a wall count as a computer running a word processing program is (a) completely ex post facto (Copeland 1996), and, more importantly, (b) is applicable only to the actual behavior of the molecules within a certain (again, arbitrarily specified) timeframe, and does not capture the counterfactual regularities of the causal behavior of a system that performs computations by application of an algorithm. (Copeland 1996, Chalmers 1996, Scheutz 1999, Piccinini 2007) As several of these writers allow, there are multiple definitions of ‘computation’ available in the literature, some of which might be suitable to license the inferences made by Searle and Putnam. What they claim in response, however, is (1) that there are stronger definitions of ‘computation’ that include causal and/or counterfactual properties, (2) that these additional properties are implicit in standard characterizations of computers (for example, in describing computation as being driven by application of algorithms), and (3) that on these definitions, paradigm examples of computers count as computers, but the kinds of examples Searle, Putnam and Block cite in their reductio arguments do not.
Also potentially controversial is Searle's insistence that the intentionality of mental states is “intrinsic”. Dennett (1987), for example, has argued for an interpretivist semantics for mental states as well, in which minds and other systems “have” semantic properties and intentionality only as viewed through the “intentional stance”. If Dennett is right to reject intrinsic intentionality, a crucial premise of Searle's argument is blocked. (See also the peer commentary on Searle 1990, and the entries on consciousness and consciousness and intentionality.) There is no consensus among philosophers as to whether an interpretivist semantics (whether Dennett's or another, such as that of Donald Davidson (1984)) is appropriate for mental states, and hence the viability of Searle's argument is a matter on which the philosophical community is sharply divided.
One cornerstone of Fodor's case for CTM was that some version of the theory was implied by cognitivist theories of phenomena like learning and language acquisition, and that these theories were the only contenders we have in those domains. Critics of CTM have since argued that there are now alternative accounts of most psychological phenomena that do not require rule-governed reasoning in a language of thought, and indeed seem at odds with it. Beginning in the late 1980s, philosophers began to become aware of an alternative paradigm for modeling psychological processes, sometimes called “neural network” or “connectionist” approaches. Such approaches had been pursued formally and empirically stemming from the early work of Wiener and Rosenblatt, and carried on through the 1960s until the present by researchers such as Grossberg and Anderson. There was some philosophical recognition of early cybernetic research (e.g., Sayre (1969, 1976)); however, neural network models entered the philosophical mainstream only after the publication of Rumelhart and McClelland's (1986) Parallel Distributed Processing.
Neural network models seek to model the dynamics of psychological processes, not directly at the level of intentional states, but at the level of the networks of neurons through which mental states are (presumably) implemented. In some cases, psychological phenomena that resisted algorithmic modeling at the cognitive level just seem to “fall out” of the architecture of network models, or of network models of particular design. Several types of learning in particular seem to come naturally to network architectures, and more recently researchers such as Smolensky have produced results suggesting that at least some features of language acquisition can be simulated by his models as well.
During the late 1980s and 1990s there was a great deal of philosophical discussion of the relation between network and computational models of the mind. Connectionist architectures were contrasted with “classical” or “GOFAI” (“good old-fashioned AI”) architectures employing rules and symbolic representations. Advocates of connectionism, such as Smolensky (1987), argued that connectionist models were importantly distinct from classical computational models in that the processing involved took place (and hence the relevant level of causal explanation must be cast) at a sub-symbolic level, such as Smolensky's tensor-product encoding. Unlike processing in a conventional computer, the process is distributed rather than serial, there is no explicit representation of the rules, and the representations are not concatenative.
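The claim that connectionist processing involves no explicit representation of rules can be illustrated with a minimal sketch. The single unit, its weights, and its threshold below are invented for the example (they are not drawn from Smolensky's tensor-product model or any cited network):

```python
# A minimal connectionist sketch: a hand-set two-input threshold unit behaves
# as if it followed the logical rule AND, yet nowhere in the system is a rule
# like "if both inputs are on, fire" explicitly represented — the "knowledge"
# resides in numerical weights. (Weights and threshold invented for the example.)

weights = [0.6, 0.6]
threshold = 1.0

def unit(inputs):
    # Weighted sum passed through a hard threshold; no symbolic rule consulted.
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# The unit's behavior conforms to AND across all input patterns:
outputs = {(a, b): unit([a, b]) for a in (0, 1) for b in (0, 1)}
```

Changing a weight changes the behavior smoothly rather than by editing a stored rule, which is one way of putting the contrast with a stored-program symbol system.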
There is some general agreement that some of these differences do not matter. Both sides are agreed, for example, that processes in the brain are highly parallel and distributed. Likewise, even in production-model computers, it is only in stored programs that rules are themselves represented; the rules hard-wired into the CPU are not. (The concatenative character is argued by some—e.g., van Gelder and Aydede—to be significant.)
The most important “classicist” response is that of Fodor and Pylyshyn. (See also Fodor and McLaughlin 1990.) They argue that any connectionist system that could guarantee systematicity and productivity would simply be an implementation of a classical (LOT) architecture. Much turns, however, on exactly what features are constitutive of a classical architecture. Van Gelder (1991), for example, claims that classicists are committed to specifically “concatenative compositionality” (van Gelder 1991: 365)—i.e., compositionality in a linear sequence like a sentence rather than in a multi-dimensional space like Smolensky's tensor-product model—and that this means that connectionist models explain cognitive features without being merely “implementational” and hence provide a significantly different alternative to classicism. In response, Aydede (1997), while recognizing the tendency of classicists to assume that the LOT is concatenative, argues that the LOT need not be held to this stronger criterion. (Compare Loewer and Rey 1991.) However, if one allows non-concatenative systems like Smolensky's tensor space or Pollack's Recursive Auto-Associative Memory to count as examples or implementations of LOT, more attention is needed to how the notion of a “language” of thought places constraints upon what types of “representations” are included in and excluded from the family of LOT models. There has been no generally-agreed-to resolution of this particular dispute, and while it has ceased to generate a steady stream of articles, it should be classified as an “open question”.
In terms of the case for CTM, recognition of alternative network models (and other alternative models, such as the dynamics systems approach of van Gelder) has at least undercut the “only game in town” argument. In the present dialectical situation, advocates of CTM must additionally clarify the relationship of their models to network models, and argue that their models are better as accounts of how things are done in the human mind and brain in particular problem domains. Some of the particulars of this project of clarifying the relations between classical and connectionist computational architectures are also discussed in the entry on connectionism.
The criticisms that have been canvassed here arguably do not threaten CTM's claim to provide a compatibility proof of intentional realism with physicalism, at least in the cases of kinds of understanding that can be formalized. This goal—of “vindicating” psychology by demonstrating its compatibility with the generality of physics—was itself a prominent part of the computationalist movement in philosophy, and explains why representational/computational theories were often seen as the main alternatives to eliminative materialism in the 1980s and 1990s.
However, this goal itself, and the corresponding commitment to a particular kind of naturalization of the mind, is insufficiently subjected to scrutiny on the current scene. We might pose the question like this: if push were to come to shove between one's commitment to the results of some empirical science like psychology and one's commitment to a metaphysical position (like materialism) or metatheory about science (like some form of the Unity of Science hypothesis), which ought to give way to the other? It is, in a way, curious that Fodor, an important defender of the autonomy of the special sciences, especially psychology (cf. Fodor 1974), seems in this instance to defer to metaphysical or metatheoretical commitments here, and hence views psychology as standing in need of vindication. By contrast, philosophers of science have, since the 1970s, increasingly been inclined to reject any metatheoretical and metaphysical standards imposed upon science from without, and more specifically have tended to favor the autonomy of local “special” sciences over assumptions that the sciences must fit together in some particular fashion. (In such a spirit, CTM's original proponent, Hilary Putnam, has more recently embraced a pragmatist pluralism.) It is possible that in another decade the recent preoccupation with vindicating psychology will be regarded as one of the last vestiges of the Positivist Unity of Science movement.
In the 1980s, CTM faced a major crisis with the growing popularity of externalist theories of semantics. (See entry externalism about mental content.) Externalists hold that mental content is not, or at least is not entirely, within the head or within the mind. For example, the fact that water is H2O is (allegedly) part of the meaning of ‘water’, even if a speaker does not know that water is H2O, and ‘elm’ refers to a certain type of tree even if a speaker does not know how to identify elms and cannot even distinguish elms from beeches. Such components of meaning as are external to the mind cannot be narrowly located in representations existing completely within the mind. And to the extent that CTM is committed to confining computation to symbols existing entirely within the mind, there seems to be a tension between CTM and externalist theories of content.
Fodor and others responded to this concern by combining CTM with a causal account of mental content, on which a mental representation R means X just in case R-tokenings are reliably caused by Xs: for example, the concept COW is reliably caused by perceptual access to cattle and the concept WATER by contact with H2O. This does not require that the representational and computational resources of the system encode the fact that the concept WATER refers to a particular molecular kind, nor all of the properties that H2O actually possesses. Nor does it require that all of the inferences licensed by its rules are true. (For example, it might have rules that generate tokenings of “water is a simple Aristotelian element”.) What is required is only that the semantic and propositional understanding (or misunderstanding) that the system possesses be accounted for in terms of syntactic features of the representations and syntactically-based inference rules. This, moreover, seems consonant with the fact that real human beings often misrepresent or are ignorant of the properties of the things they think and talk about. It does, however, require that an adequate account of content include a “narrow” component (encoded in syntax and located in the mind or brain) as well as a “broad” component (determining reference), and that psychological explanation of reasoning proceed in terms of the “narrow” properties.
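The shape of the causal account can be sketched in a few lines. This is an invented toy (the `detect` function and `beliefs` table are placeholders), meant only to show how content fixed by causal covariance can come apart from the system's internal, possibly false, inferential rules:

```python
# Sketch of a causal ("covariance") account of content: representation R means
# X just in case R-tokenings are reliably caused by Xs. The system tokens
# WATER whenever it contacts H2O, even though its internal "theory" of water
# is false — meaning is fixed by the causal link, not by the inference rules.
# (All names here are invented for illustration.)

beliefs = {"WATER": "a simple Aristotelian element"}   # a false internal theory

def detect(stimulus):
    # Reliable causal covariance: contact with H2O tokens the concept WATER.
    return "WATER" if stimulus == "H2O" else None

token = detect("H2O")
# On the causal account, WATER means H2O because of this covariance,
# regardless of the (false) inferences the system draws about it.
```

Note that `detect("XYZ")` tokens nothing: a stimulus that does not stand in the right causal relation contributes nothing to the concept's content, however superficially similar it may be.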
An externalist version of CTM essentially holds that the mind is a computer, but that some aspects of content are “farmed out” to an environment with which the mind causally interacts. On this view, there are clear mind/world divisions, and even divisions between such parts of the body as are involved in computation (e.g., the brain, or subareas of the brain) and the rest of the biological organism. Indeed, computationalists generally treat it as fundamentally important that cognitively and computationally equivalent systems could be realized in different media: for example, a human being, a human brain coupled with a robotic body, a computing machine in a robotic body, or a computing machine interacting with a virtual environment.
A more radical externalist thesis is that cognition is essentially embodied and embedded. Perception, action, and even imagination and reasoning are “embodied”, not only in the sense of being realized through some physical system, but in the stronger sense that they involve bodily processes that extend beyond the brain into the nervous system and even into other tissue and to biochemical processes in the body. At the same time, even the brain processes involved in cognition involve non-representational, non-computational skills of bodily know-how. The mind is also “embedded” in its environment, not only in the sense of interacting with it causally through perceptual “inputs” and behavioral “outputs”, but in the more radical sense that things outside the physical organism—from tools to prostheses to books and websites—are integrally part of cognition itself. We are, as Andy Clark puts it, already “natural-born cyborgs” (Clark 1997, 2000, 2001, 2005; Clark and Chalmers 1998).
The relationship between CTM and embodied and embedded cognition can be viewed in two ways. On the one hand, these views might be seen as claiming, in effect, that there are aspects of cognition that are non-representational and non-computational. This, however, is fully compatible with a modest version of CTM that claims only that some aspects of cognition are representational and computational. Moreover, the computationalist can attempt to construe bodily skill as itself computational (though again see Dreyfus 1979 for seminal discussion of the problems), and extend computation to include external symbols (Wilson 1994). On the other hand, embodied and embedded cognition might be seen as an alternative general framework for understanding cognition: not just a supplementary account of “parts” of cognition that CTM leaves out, but a fundamentally different theoretical framework.
- Aydede, Murat, 1997. “Language of Thought: A Connectionist Contribution,” Minds and Machines, 7: 57–101.
- Blackburn, Simon, 1984. Spreading the Word: Groundings in the Philosophy of Language, New York: Oxford University Press.
- Block, Ned, 1978. “Troubles with Functionalism,” Minnesota Studies in the Philosophy of Science, 9: 261–325.
- Boden, Margaret, 1990. The Philosophy of Artificial Intelligence, Oxford University Press.
- Bowie, L., 1982. “Lucas's Number is Finally Up”, Journal of Philosophical Logic, 11: 279–85.
- Chalmers, David J., 1996. “Does a Rock Implement Every Finite-State Automaton?” Synthese, 108: 310–333.
- Chomsky, Noam, 1959. “A Review of B.F. Skinner's Verbal Behavior,” Language, 35: 26–58.
- Church, Alonzo, 1936. “A note on the Entscheidungsproblem,” Journal of Symbolic Logic, 1: 40–41.
- Clark, Andy, and David Chalmers, 1998. “The Extended Mind,” Analysis, 58: 7–19.
- Clark, Andy, 1997. Being There: Putting Brain, Body and World Together Again, Cambridge, MA: The MIT Press.
- Clark, Andy, 2000. Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, New York: Oxford University Press.
- Clark, Andy, 2001. “Reasons, Robots and the Extended Mind,” Mind and Language, 16: 121–145.
- Clark, Andy, 2005. “Intrinsic Content, Active Memory, and the Extended Mind,” Analysis, 65: 1–11.
- Copeland, B. Jack, 1996. “What Is Computation?,” Synthese, 108 (3): 335–359.
- Cummins, Robert, 1989. Meaning and Mental Representation, Cambridge, Mass.: MIT Press.
- Davidson, Donald, 1984. Inquiries into Truth and Interpretation, Oxford: Clarendon Press.
- Dennett, Daniel, 1987. The Intentional Stance, Cambridge: MIT Press.
- Dreyfus, Hubert, 1972. What Computers Can't Do, New York: Harper and Row.
- Dreyfus, Hubert, 1992. What Computers Still Can't Do, Cambridge, Mass.: MIT Press.
- Dreyfus, Hubert and Stuart Dreyfus, 1988. “Making a mind versus modelling a brain: artificial intelligence back at a branch-point,” In Boden 1990, pp. 309–333.
- Feferman, S., 1996. “Penrose's Gödelian Argument,” Psyche, 2: 21–32.
- Field, Hartry, 1972. “Tarski's Theory of Truth,” Journal of Philosophy, 69: 347–375.
- Fodor, Jerry, 1974. “Special Sciences, or Disunity of Science as a Working Hypothesis,” Synthese, 28: 97–115.
- Fodor, Jerry, 1975. The Language of Thought, New York: Thomas Crowell.
- Fodor, Jerry, 1980. “Methodological Solipsism Considered as a Research Strategy in Cognitive Science,” Behavioral and Brain Sciences, 3: 63–73.
- Fodor, Jerry, 1981. Representations, Cambridge, Mass.: Bradford Books/MIT Press.
- Fodor, Jerry, 1987. Psychosemantics, Cambridge, Mass.: Bradford Books.
- Fodor, Jerry, 1990. A Theory of Content and Other Essays, Cambridge, Mass.: Bradford Books.
- Fodor, Jerry, 1993. The Elm and the Expert, Cambridge, Mass.: Bradford Books.
- Fodor, Jerry, 2000. The Mind Doesn't Work That Way, MIT Press.
- Fodor, Jerry and Brian McLaughlin, 1990. “Connectionism and the Problem of Systematicity: Why Smolensky's Solution Doesn't Work,” Cognition, 35: 193–204.
- Fodor, Jerry and Zenon W. Pylyshyn, 1988. “Connectionism and Cognitive Architecture: A Critical Analysis,” in Connections and Symbols, edited by Steven Pinker and Jacques Mehler, Cambridge, Mass.: MIT Press.
- Grossberg, Stephen, 1982. Studies of Mind and Brain, Boston Studies in the Philosophy of Science, Volume 70. Dordrecht, Holland: Reidel.
- Haugeland, John, 1978. “The Nature and Plausibility of Cognitivism,” Behavioral and Brain Sciences, 1: 215–226.
- Haugeland, John, ed., 1981. Mind Design, Cambridge, Mass.: MIT Press/Bradford Books.
- Horst, Steven, 1995. “Eliminativism and the Ambiguity of ‘Belief’”, Synthese, 104: 123–145.
- Horst, Steven, 1996. Symbols, Computation and Intentionality: A Critique of the Computational Theory of Mind, Berkeley and Los Angeles: University of California Press.
- Horst, Steven, 1999. “Symbols and Computation,” Minds and Machines, 9 (3): 347–381.
- Kuczynski, John-Michael, 2006. “Two Concepts of ‘Form’ and the So-Called Computational Theory of Mind,” Philosophical Psychology, 19 (6): 795–821.
- Kuczynski, John-Michael, 2007. Conceptual Atomism and the Computational Theory of Mind: A defense of content-internalism and semantic externalism, Amsterdam: John Benjamins.
- Lewis, David, 1969. “Lucas Against Mechanism”, Philosophy, 44: 231–233.
- Lewis, David, 1979. “Lucas Against Mechanism II”, Canadian Journal of Philosophy, 9: 373–376.
- Lucas, J.R., 1961. “Minds, Machines and Gödel,” Philosophy, 36: 112–127.
- Marr, D., 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, New York: W. H. Freeman and Company.
- Marr, D. and T. Poggio, 1977. “From understanding computation to understanding neural circuitry,” Neurosciences Res. Prog. Bull., 15: 470–488.
- McCulloch, W. S. and Pitts, W. H., 1943. “A logical calculus of the ideas immanent in nervous activity,” Bulletin of Mathematical Biophysics, 5: 115–133.
- Minsky, Marvin, 1967. Computation: Finite and Infinite Machines, London: Prentice-Hall International.
- Penrose, Roger, 1989. The Emperor's New Mind, Oxford University Press.
- Penrose, Roger, 1990. “Précis of The Emperor's New Mind,” Behavioral and Brain Sciences, 13: 643–705.
- Piccinini, Gualtiero, 2009. “Computationalism in the Philosophy of Mind,” Philosophy Compass, 4 (3): 515–532.
- Putnam, Hilary, 1960. “Minds and Machines,” In Dimensions of Mind, edited by S. Hook. New York: New York University Press.
- Putnam, Hilary, 1961. “Brains and Behavior”, originally read as part of the program of the American Association for the Advancement of Science, Section L (History and Philosophy of Science), December 27, 1961. Reprinted in Block (1980).
- Putnam, Hilary, 1967. “The Nature of Mental States,” In Art, Mind and Religion, edited by W.H. Capitan and D.D. Merrill. Pittsburgh: University of Pittsburgh Press. Reprinted in Block (1980).
- Putnam, Hilary, 1988. Representation and Reality. MIT Press.
- Putnam, Hilary, 1980. “Models and Reality,” Journal of Symbolic Logic, 45: 464–482.
- Pylyshyn, Zenon, 1980. “Computation and Cognition: Issues in the Foundations of Cognitive Science,” The Behavioral and Brain Sciences, 3: 111–132.
- Pylyshyn, Zenon, 1984. Computation and Cognition: Toward a Foundation for Cognitive Science, Cambridge, Mass: Bradford Books/MIT Press.
- Rosenblatt, Frank, 1958. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,” Psychological Review, 65 (6): 386–408.
- Rumelhart, David E., James McClelland, and the PDP Research Group, 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Cambridge, Mass: MIT Press.
- Sayre, Kenneth, 1969. Consciousness: A Philosophic Study of Minds and Machines, New York: Random House.
- Sayre, Kenneth, 1976. Cybernetics and the Philosophy of Mind, Atlantic Highlands, New Jersey: Routledge & Kegan Paul.
- Sayre, Kenneth, 1986. “Intentionality and Information Processing: An Alternative Model for Cognitive Science,” Behavioral and Brain Sciences, 9 (1): 121–138.
- Sayre, Kenneth, 1987. “Cognitive Science and the Problem of Semantic Content,” Synthese, 70: 247–269.
- Scheutz, Matthias, 1999. “When Physical Systems Realize Functions…,” Minds and Machines, 9: 161–196.
- Searle, John, 1980. “Minds, Brains and Programs,” Behavioral and Brain Sciences, 3: 417–424.
- Searle, John, 1984. Minds, Brains and Science, Cambridge, Mass.: Harvard University Press.
- Searle, John, 1990. “Is the Brain a Digital Computer?” (Presidential Address), Proceedings and Addresses of the American Philosophical Association, 64: 21–37.
- Searle, John, 1990. “Consciousness, Explanatory Inversion, and Cognitive Science,” Behavioral and Brain Sciences, 13: 585–642.
- Searle, John, 1992. The Rediscovery of the Mind, Cambridge, Mass.: MIT Press.
- Smolensky, Paul, 1988. “The Proper Treatment of Connectionism,” Behavioral and Brain Sciences, 11 (1): 1–74.
- Turing, Alan, 1936. “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society (Series 2), 42: 230–265.
- van Gelder, Timothy, 1991. “Classical Questions, Radical Answers: Connectionism and the Structure of Mental Representations,” in Connectionism and the Philosophy of Mind, edited by Terrence Horgan and John Tienson, Studies in Cognitive Systems (Volume 9), Dordrecht: Kluwer Academic Publishers.
- Von Eckardt, Barbara, 1993. What is Cognitive Science?, Cambridge, Massachusetts: MIT Press.
- Von Neumann, John, 1945. “First Draft of a Report on the EDVAC,” Contract No. W-670-ORD-4926, between the United States Army Ordnance Department and the University of Pennsylvania Moore School of Electrical Engineering, University of Pennsylvania, June 30, 1945.
- Wilson, Robert A., 1994. “Wide Computationalism,” Mind, 103: 351–72.
- Winograd, Terry, and Fernando Flores, 1986. Understanding Computers and Cognition, Norwood, New Jersey: Ablex Publishing Corporation.