This is a file in the archives of the Stanford Encyclopedia of Philosophy.

Notes

[1] Lycan (1986), Davies (1989, 1995), Cummins (1986), Hadley (1995). A parallel discussion is going on in AI: Kirsh (1990).

[2] See Fodor (1985, 1986, 1987: ch. 1), Devitt (1990).

[3] But see Rey (1992, 1993) for an attempt to expand LOTH to sensations and qualia.

[4] See for instance the controversy involved in the so-called imagery debate. The literature here is huge but the following sample may be useful: Block (1981, 1983b), Dennett (1978), Kosslyn (1980), Pylyshyn (1978), Rey (1981), Sterelny (1986), Tye (1991).

[5] E.g., Marr (1982), or any textbook on vision or language comprehension and production.

[6] E.g., Kosslyn (1980, 1994); Shepard and Cooper (1982). In fact, some theorists even go so far as to claim that all cognition is done in an image-like symbol system -- early British empiricists from Locke to Hume held something like this view, but more recently, see L. Barsalou and his colleagues, who have been developing models to that effect (Barsalou 1993a, Barsalou et al. 1993b, Barsalou and Prinz 1997).

[7] The controversial issue here is not the absurdity of the claim that there are literally pictures or images in the brain. Probably no one believes this claim these days. Rather, the postulation of picture-like representations is to be cashed out in functionalist terms. Pictures as mental representations presumably bear some non-arbitrary isomorphisms to what they represent, although it is hard to make this sort of claim crystal-clear in purely functionalist terms. See, for instance, Kosslyn (1980, 1981), Block (1983a, 1983b), Tye (1984).

[8] The issues here are too complex and difficult to go over in any useful detail, but for a general criticism of pictures as mental representations, see the critical essays in Block (1981) and Rey (1981); for an attempt to overcome many such criticisms, see Barsalou and Prinz (1997) and Prinz (1997). The contemporary debate about the adequacy of a purely imagistic medium for capturing what is involved in making a judgment and in discursive thinking seems to parallel some of Kant's critique of British Empiricism in general and of Hume's associationism in particular, as indeed emphasized by many classicists such as Fodor and Pylyshyn (1988) and Rey (1997).

[9] See, e.g., Barwise and Etchemendy's Hyperproof (1995).

[10] For a non-nativist but otherwise quite Fodorian account of concept acquisition, see Margolis (forthcoming).

[11] But see Rey (1992, 1993) for an attempt to extend LOTH in this direction.

[12] Also, Hubert Dreyfus's and John Haugeland's many writings indicate that they are realists about propositional attitudes but would nevertheless reject LOTH.

[13] Almost all British empiricists might be put in this latter category too, though they were in fact closer to LOTH in having embraced something like (B1) in some imagistic version. Regarding thought processes, however, they could do no better than associationism: they could not exploit the clear implications of modern symbolic logic and the advancement of computers -- they did not have their Frege and Turing, though Hobbes came close. This rendering of RTM relies on a broad interpretation of the notion of mental representation, of course, which has not always been Fodor's intended interpretation: there are many places where he defends RTM (by that name) meaning to include (B) by default (Fodor 1981b, 1985, 1987, 1998). This should cause no confusion. Here I have chosen to stick to the literal meaning of the phrase rather than to its historically more accurate use -- this has become necessary, at any rate, in the light of the recent classicism/connectionism debate, to which we will return below.

[14] For a powerful elaboration of this line of thought, see Rey (1997).

[15] A number of proposals have been offered by contemporary theorists (who are not necessarily defenders of LOTH as opposed to being mere RTM theorists but whose proposals can be adapted by LOT theorists) about how exactly to pursue that project. See, for instance, Fodor (1987, 1990), Dretske (1981, 1988), Millikan (1984, 1993), Papineau (1987), Devitt (1996), Loar (1982a), Field (1972, 1978), Block (1986).

[16] Tarski (1956), Field (1972), Davidson (1984).

[17] Although I described the line above as official and presented it as requiring a compositional semantics, and although almost all the defenders of LOTH conceive of it in this way because they think that is what empirical facts about thought and language demand, it is nevertheless perhaps important to be pedantic about exactly what LOTH is minimally committed to. Minimally, it is not committed to regarding the internal code as having a compositional semantics, namely a semantics where the meanings of complex sentences are determined by the meanings of their constituents together with their syntax; this, in effect, requires that the atomic expressions always make (approximately) the same semantic contributions to the wholes of which they are constituents (idioms excepted). But strictly speaking LOTH can live without a strictly compositional semantics if it turns out that there are other ways of explaining those empirical facts about the mind to which I will come below. Admittedly, in such a case LOTH would lose some portion of its appeal and interest. But even if this scenario turns out to be the case, there are still a lot of facts for LOTH to explain. Having said this, however, I will simply forget it in what follows.

[18] For fairness I should add that Searle's and Haugeland's criticisms are directed against the AI community at large, where it has been common to conceive of the computational model of the mind as potentially involving a complete solution to semantic worries, among others. Thus, Haugeland termed his target `GOFAI' (Good Old Fashioned Artificial Intelligence). Similarly, Searle's famous Chinese Room Argument was directed against what he called `Strong AI'.

[19] See Fodor (1985, 1987), Fodor and Pylyshyn (1988) for an elaborate presentation of this argument for LOTH.

[20] It should be noted however that (B1) is a meta-architectural condition that needs to be satisfied by any particular grammar for Mentalese, just as an analogue for (B1) is a condition upon the specific grammar of all systematic languages (see below).

[21] It is somewhat confusing that Fodor and Pylyshyn called this empirical cognitive regularity "compositionality" of cognitive capacities. In particular, the empirical phenomenon -- i.e., the fact that systematically connected thoughts are also always semantically related or semantically close to each other -- that needs to be explained is explained by LOT theorists by appeal to what is also called semantic compositionality: namely, the semantic value of a complex expression is a function of the semantic values of its atomic constituents such that each atomic constituent makes approximately the same semantic contribution to the context in which it occurs. This is what the postulation of a combinatorial semantics in conjunction with a combinatorial syntax buys for LOT theorists in adequately explaining the empirical regularity in question. See Fodor and Pylyshyn (1988: 41-5).

[22] For aprioristic arguments of this sort, see also Lycan (1993) and Davies (1989, 1991).

[23] For example, Patricia and Paul Churchland, who have been the champions of eliminativism, hope that connectionism is the long-awaited theory that will provide the scientific foundations for the elimination of folk psychological constructs in "psychology" (P.S. Churchland 1986, 1987; Churchland and Sejnowski 1989; P.M. Churchland 1990; P.S. Churchland and P.M. Churchland 1990). Ramsey, Stich and Garon (1991) have recently defended the claim that if certain sorts of connectionist models turn out to be right, then the elimination of folk psychology will be inevitable. Dennett (1986), and Cummins and Schwartz (1987), have also pointed out the potential of connectionism for the elimination of at least certain aspects of folk psychology.

[24] In fact, it is not at all clear how connectionism can genuinely give support to intentional eliminativism insofar as the units (or collections of units) in connectionist networks are treated as representing. If they are not treated as such, it is hard to see how they could be models of cognitive phenomena, and thus hard to see how they can present any eliminativist challenge. However, there appear to be two vague strands among eliminativists in this regard. One stems from the intuition that it is unlikely that there are really any concrete, isolable, and modularly identifiable symbol structures realized in the brain that would correspond to what Stich has called (1983: 237ff.) functionally discrete beliefs and desires of folk psychology; connectionist networks, it is claimed, will vindicate this intuition. For similar remarks, among others, see Dennett (1986, 1991a), Clark (1988, 1989b). The second strand seems to be that connectionism will vindicate the claim that the explanation of mental phenomena doesn't require a full-blown semantics for such higher-order states as propositional attitudes. Rather, all that is needed is an account of some form of information processing at a much lower level, which, it is hoped, will be sufficient for the whole range of cognitive phenomena. Again, it is not clear what the proposals are. But see Paul Churchland (1990).

[25] It seems clear from some of the models proposed so far that many connectionists have been developing their models ultimately with an eye to capturing the generalizations in their respective psychological domains. To see this it is enough to look at some of the papers in the second PDP volume (Rumelhart, McClelland and the PDP Research Group, 1986), among which Rumelhart and McClelland's paper on modeling the learning of the past tenses of English verbs is particularly celebrated. In the end, it is of course an open empirical question whether connectionist models will ultimately be able to capture such generalizations, or whether the generalizations they come up with will be compatible with, or be, the ones implicitly recognized by the folk, just as it is an open question whether classical models will ultimately be successful in this respect. Whatever the final outcome might be, however, it is prima facie the case that many connectionists intend their models to be taken as contributions within the intentional realist tradition. Smolensky (1988) is the most articulate defense of something like this position. He calls his position "the Proper Treatment of Connectionism" (PTC) and clearly separates it from various eliminativist positions.

[26] Premise (iii) is intimately connected to (ii) and (iv). So its rejection by itself does not mean much. Premise (iii), according to F&P, is there to prevent certain ad hoc solutions on the part of connectionists in the explanation of cognitive regularities mentioned in (ii). Premise (v) is close to being a tautology. So no one has any quarrel with it, although van Gelder (1991) comes very close to rejecting it on the ground that with every shift in scientific paradigms the conceptual apparatus of the previous and challenged paradigms becomes inadequate to correctly characterize the new and challenging paradigm.

[27] Some people who object to (ii) or (iv) are Dennett (1991b), Sterelny (1990), Rumelhart and McClelland (1986), Clark (1989b, 1991), Braddon-Mitchell and Fitzpatrick (1990), Butler (1991), Matthew (1994), Aizawa (forthcoming), Garson (forthcoming) and Wallis (forthcoming).

[28] Smolensky, for instance, is very explicit in his rejection of premise (vi): "...distributed connectionist architectures, without implementing the Classical architecture, can nonetheless provide structured mental representations and mental processes sensitive to that structure" (1990a: 215).