This is a file in the archives of the Stanford Encyclopedia of Philosophy.
Historically, RTM (which goes back at least to Aristotle) is a theory of commonsense psychological states, such as belief, desire (the propositional attitudes), and perception. According to RTM, to believe that p, for example, is, in part, to bear the belief-relation (whatever that may be) to a mental representation that means that p. To perceive that a is F is, in part (propositional attitudes may also be involved), to have a sensory experience of some kind which is appropriately related (however that may be) to a's being F.
The leading contemporary version of RTM, the Computational Theory of Mind (CTM), makes the further claims that the brain is a kind of computer and that mental processes are computations on mental representations. According to CTM, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are rule-governed sequences of such states.
CTM develops RTM by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some -- so-called "subpersonal" or "sub-doxastic" representations -- are not. Though many philosophers believe that CTM stands to provide the best scientific explanations of cognition and behavior, there is disagreement over whether or not such explanations will vindicate the commonsense psychological explanations (and representations) of prescientific RTM.
Mental representation has also been of interest to philosophers who hold that the semantic properties of expressions of natural language (and many non-linguistic symbols as well) are inherited from the mental states of their users. For these theorists, RTM is a component of a complete theory of linguistic meaning.
Intentional Eliminativists, such as Churchland, Stich and (perhaps) Dennett, argue that no such things as propositional attitudes (and their implicated representational states) are necessary to the explanation of our mental lives and behavior.
Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it quantifies over simply don't exist. (It should be noted that Churchland is not an eliminativist about representation tout court. See, e.g., Churchland 1989.)
Dennett (1987a) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they seem to quantify over. He argues that to give an intentional explanation of a system's behavior is merely to adopt the "intentional stance" toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behavior (on the assumption that it is rational -- i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this. (See Dennett 1987a: 29.)
Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a "moderate" realist about propositional attitudes, since he believes that the patterns in the behavior and behavioral dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event, however, that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, Dennett claims, there is no fact of the matter about what the system believes (1987b, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be, though it is not the view that there is simply nothing in the world that makes intentional explanations true.
(Davidson (1973, 1974) and Lewis (1974) also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not completely clear whether they intend their views to imply irrealism about propositional attitudes.)
Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all. The generalizations of a scientific psychology will not quantify over representational states qua representational, and commonsense psychology will not be vindicated by CTM (properly understood). Attribution of psychological states by content is, Stich believes, sensitive to factors that render it problematic in the context of a scientific psychology -- viz., relations between an organism and its environment, and relations among psychological states.
Cognitive psychology seeks systematic causal explanations of behavior and cognition, and the causal powers of a mental state are determined by its intrinsic "structural" or "syntactic" properties (Stich calls this "the principle of psychological autonomy." See Stich 1978.) The semantic properties of a mental state (paradigmatically, its reference), in contrast, are determined by extrinsic properties -- viz., its history and environmental relations. Thus, such properties cannot figure in causal explanations of behavior.
Moreover, to ascribe a psychological state to an individual by content is (roughly) to say that the individual is in a state content-identical to the sort of state that typically causes one's own utterances of the sentence appearing in the content clause of the attribution. But the appropriateness of such ascriptions depends on what other psychological states the ascribee is (or is disposed to be) in. Since attribution by content is thus holistic, content-based psychological explanation of subjects who differ substantially from the ascriber(s) in their total system of beliefs is precluded (the generalizations will not apply).
Finally, ascription of a psychological state by content is sensitive to what other states an individual is disposed to be in as a result of being in that state (in particular, what inferences the subject is disposed to make). But this renders content-based theories unable to make sense of subjects whose states have inference potential substantially different from those of the theorist. (Stich makes these last two points using examples of individuals whose psychology is, from the point of view of the ascriber, pathological.)
Stich also rejects, for essentially the reasons just enumerated, what he calls the Weak Representational Theory of Mind, on which psychological states have content but psychological generalizations do not apply to them in virtue of it. He argues for a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role: only the syntactic properties of mental states are relevant to their computational profiles.
Explanations in cognitive science appeal to a variety of types of mental representation, including, for example, the "mental models" of Johnson-Laird 1983, the "retinal arrays," "primal sketches" and "2½-D sketches" of Marr 1982, the "sub-symbolic" structures of Smolensky 1989, the "quasi-pictures" of Kosslyn 1980, and the "interpreted symbol-filled arrays" of Tye 1991 -- in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning and use (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).
There is, however, disagreement among proponents of CTM as to what kinds of representations the brain uses, what kinds of neural structures realize them, and what kinds of brain-processes realize computations -- in short, on what kind of computer the brain is. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.
The Classicists (e.g., Turing 1950, Fodor 1975, Newell and Simon 1976, Marr 1982, Fodor and Pylyshyn 1988) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and mental processes are rule-governed manipulations of them. The Connectionists (e.g., McCulloch and Pitts 1943, Rumelhart and McClelland 1986, Rumelhart 1989, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors ("nodes") and mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism -- "localist" versions -- on which individual nodes are taken to have semantic properties (Ballard 1986, Ballard and Hayes, 1984). It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program (Smolensky 1988, 1991; Chalmers 1993).)
The Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is taken to be structured like a symbolic language, provides a well-worked-out version of the Classical approach as applied to commonsense psychology. (Cf. also Marr 1982 for an application of the classical approach in scientific psychology.) According to the LOTH, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, combined in accordance with recursive rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representation. A representational system is productive if there are indefinitely many distinct representations that may be constructed in it; it is systematic if the constructability of some representations is intrinsically connected to the constructability of others. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the LOTH explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the independent contentfulness of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
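The recursive, compositional picture can be sketched in a toy form. This is a minimal illustration, not anything proposed in the LOTH literature; the primitive symbols and the combination rule are invented for the example:

```python
# A toy "language of thought": complex representations are built recursively
# from a finite stock of primitives, and content is compositional -- determined
# by the contents of the constituents and their structural configuration.
# (Illustrative only; the symbols and rule here are invented.)

PRIMITIVES = {"JOHN": "John", "MARY": "Mary", "LOVES": "loves", "BELIEVES": "believes"}

def content(rep):
    """Primitive symbols have their content directly; complexes of the form
    (predicate, arg1, arg2) get theirs from their parts plus their structure."""
    if isinstance(rep, str):            # primitive representational state
        return PRIMITIVES[rep]
    pred, arg1, arg2 = rep              # complex representation
    return f"{content(arg1)} {content(pred)} {content(arg2)}"

# Productivity: recursion yields indefinitely many distinct representations.
# Systematicity: if one combination is constructible, so are its permutations.
print(content(("LOVES", "JOHN", "MARY")))                        # John loves Mary
print(content(("BELIEVES", "MARY", ("LOVES", "JOHN", "MARY"))))  # Mary believes John loves Mary
```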
The Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic; and distributed if it is not. (Another way of putting this is to say that for the Classicist mental representations are computationally atomic, whereas for the Connectionist they are not.)
Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981c), on the connectionist model it is a matter of evolving distribution of weights on the connections between nodes, and typically does not involve the representation of identity conditions for the objects of knowledge. The connectionist network is "trained up" by repeated exposure to objects it is to learn to distinguish; and this seems to model at least one type of human learning quite well. Further, degradation in the performance of such networks in response to damage is gradual, rather than sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition -- situations in which classical systems are relatively "brittle" or "fragile."
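The idea of learning as the evolving distribution of weights, rather than hypothesis formation and testing, can be sketched with a single threshold unit trained by repeated exposure. This is a drastically simplified illustration (one node, two inputs, a simple error-correction rule); real connectionist models use many units and richer learning procedures:

```python
# "Training up" a minimal connectionist unit: repeated exposure to input
# patterns adjusts the connection weights until the unit distinguishes them.
# No explicit hypothesis about the patterns is ever represented.

def train(patterns, labels, epochs=50, lr=0.1):
    w = [0.0] * len(patterns[0])        # connection weights, initially flat
    b = 0.0                             # bias (threshold)
    for _ in range(epochs):             # repeated exposure
        for x, target in zip(patterns, labels):
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]   # nudge weights
            b += lr * err
    return w, b

# Learn to tell two input patterns apart.
w, b = train([[1, 0], [0, 1]], [1, 0])

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print(classify([1, 0]), classify([0, 1]))   # 1 0
```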
Some philosophers have maintained that Connectionism entails irrealism about propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if Connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor and Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald and MacDonald 1995 collects the central contemporary papers in the Classicist/Connectionist debate, and provides useful introductory material as well. See also Von Eckardt forthcoming.)
Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that CTM provides the correct account of mental states and processes.
Van Gelder (1995), for example, denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic; though the bearers of information are not symbols, but state variables or parameters. (See also Port and Van Gelder 1995.)
Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. CTM attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, he claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So CTM involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.
Another important issue for proponents of RTM is how mental representations come to have their semantic properties. There are two basic types of theory of content determination, informational theories and functional theories. Though theories of these types were designed to account for the content of commonsense psychological states, they may, at least in broad outline, serve as theories of content determination for sub-personal representational states as well.
Informational theories (Dretske 1981, 1988) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause its tokening. (Roughly, a state S of an object O carries the information that an object O* is in state S* iff O*'s being in S* reliably causes O to be in S.) Informational theorists agree that information alone is not sufficient for the kind of content appropriate to commonsense psychological states such as belief, though they disagree on what additional properties an informational state must have in order to be a representation of the appropriate kind. Information is taken to be insufficient for two reasons. First, there are objects whose states carry information about states of affairs (for example, ringing telephones, tree trunks and speedometers) which they cannot be said to represent in the sense relevant to psychological states. Second, there is the infamous "Disjunction Problem," which reveals that bare informational theories are unable to account for the fact that causal relations hold between mental/neural states and states of affairs they do not represent -- i.e., the fact that we can think false thoughts.
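The Disjunction Problem can be made vivid with a toy causal model. Everything here is invented for illustration (the "token," the world states, the viewing conditions); the point is just that when a token is reliably caused by more than one kind of thing, bare information underwrites only the disjunctive content:

```python
# Toy causal model of the Disjunction Problem (illustrative only).
def token_caused_by(world_state, poor_viewing=False):
    """A mental/neural 'HORSE' token is reliably caused by horses --
    and also, under poor viewing conditions, by zebras."""
    if world_state == "horse":
        return "HORSE"
    if world_state == "zebra" and poor_viewing:
        return "HORSE"
    return None

# Bare informational theory: the token means whatever reliably causes it.
# Surveying the causes of 'HORSE' tokenings across conditions:
causes = sorted({w for w in ("horse", "zebra")
                 for poor in (False, True)
                 if token_caused_by(w, poor) == "HORSE"})
print(causes)   # ['horse', 'zebra']

# Since both horses and zebras figure among the causes, information alone
# yields the content "horse-or-zebra" -- leaving no way for a zebra-caused
# token to count as a *false* thought about a horse.
```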
The main attempts to solve the Disjunction Problem are the Asymmetric Dependency Theory (Fodor 1987, 1990a, 1994) and the Teleological Theory (Fodor 1990b, Millikan 1984, Papineau 1987, Dretske 1988, 1994). According to the Asymmetric Dependency Theory, the causal relation that determines content is the one without which the others would not hold, but which would itself hold even if the others did not. For example, since we would not (or would not be disposed to) token a mental representation of a horse when confronted with a zebra (say, in non-optimal perceptual conditions) if we did not (or were not disposed to) token a mental representation of a horse when confronted with a horse, but not vice versa, the mental representation tokened in the presence of horses means horse, in spite of the fact that there is a causal (informational) relation between it and zebras.
According to the Teleological Theory, the relation that determines content is the one the representation-producing mechanism has the selected (by evolution or learning) function of subserving. (For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which those tokens were produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.)
Functional theories (Harman 1973, Block 1986) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories which recognize no content-constitutive external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (Sterelny 1989).
Generally, those who, like informational theorists, think relations to one's (natural or social) environment are at least partially determinative of the content of one's mental representations are externalists (e.g., Putnam 1975, Burge 1979, 1986), whereas those who, like some proponents of functional theories, think representational content is determined by intrinsic properties alone, are internalists (or individualists; cf. Putnam 1975, Fodor 1981b).
This issue is of central importance, since the explanations of cognitive science are causal (computational), and the representational states these explanations quantify over are supposed to be subsumed under psychological generalizations in virtue of their content. If, however (as stressed by Stich), a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic (Stich 1983; see also Fodor, 1982, 1987, 1994). Some who accept the Putnam and Burge arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both "narrow" content (determined by intrinsic factors) and "wide" content (determined by narrow content plus extrinsic factors). (The distinction may as well be applied to the sub-personal representations of cognitive science as to those of commonsense psychology. See Von Eckardt 1993: 189.)
Narrow content has been variously construed. Putnam (1975), Fodor (1982: 114; 1994: 39ff), and Block (1986: 627ff), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character à la Kaplan 1989). On this construal, narrow content is metaphysically context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as metaphysically context-independent but radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor -- its syntactic structure or its intramental computational or inferential role, for example.
Fodor (1994, 1998) has more recently urged that cognitive science might not need narrow mental representations in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases they were introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.
Disagreement over phenomenal representation concerns the existence and nature of phenomenal properties (Dennett 1988 argues that there simply are no such things as qualia as ordinarily conceived), and the role they play (if they exist) in determining the content of sensory, perceptual, and imagistic representations. If a sensation is the mere having of a qualitative experience (a quale: a pain or tickle, an experience of blue or smoothness), while perception is sensory experience of something (in the external world), then, if perceptions are constituted in part by sensations -- that is, if they have sensory phenomenal properties -- a crucial question is what, if anything, such properties have to do with their representational content.
Some historical discussions of the representational properties of the mind (e.g., Aristotle 1984, Locke 1978, Hume 1978) assumed that phenomenal representations -- viz., percepts ("impressions") and images ("ideas") -- are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble it. On such a view, mental representations have their content in virtue of their introspectable phenomenal features. Powerful arguments, however, focusing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953), and non-compositionality (Fodor 1981c) of perceptual and imagistic representations, as well as their unsuitability to function as logical (Frege 1918, Geach 1957) or mathematical (Frege 1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of the mind can get by with only imagistic representations. Some contemporary philosophers (Harman 1990, Leeds 1993, Rey 1991, Tye 1995, 2000) have argued that theories of the representational mind can get by with no phenomenal properties, since symbolic representations can do all the representational work of perception and imagination. (Block 1996 calls such philosophers "Representationists.")
Others (Evans 1982; Peacocke 1983, 1989, 1992; Raffman 1995; Shoemaker 1990) argue that a satisfactory theory of the representational mind must acknowledge phenomenal representations. (Block 1996 calls such philosophers "Phenomenists.") They claim that phenomenal properties are (at least partly) responsible for the representational powers of (at least) perceptual experiences (they do not claim that symbolic representation plays no role in determining the content of experience). Peacocke (1983, 1992), and Raffman (1995), for example, argue that we are capable of mentally representing perceivable properties of our environment that we do not (or cannot) represent symbolically.
Peacocke 1992 develops the notion of a perceptual "scenario" (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is "correct" (a semantic property) if in the corresponding "scene" (the portion of the external world with the same origin and axes as the scenario) properties are distributed as they are in the scenario. He claims that such scenarios are possible in the absence of symbolic representations corresponding to the properties represented. (Cf. the distinction in Dretske 1969 between epistemic and non-epistemic perception.)
Still others, including Chalmers 1996, Flanagan 1992, Goldman 1993, Jackendoff 1987, Levine 1993, 1995, McGinn 1992, Searle 1990 and Strawson 1995, claim that purely symbolic (conscious) representational states themselves have a proprietary phenomenology. If this claim is correct, the question of what, if anything, these properties have to do with content rearises for symbolic representation. (A Representationist answer could not be eliminativist with respect to phenomenal content if this claim is correct.)
Kosslyn claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties -- i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981a, 1981b), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)
The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery (see, e.g., Kosslyn and Pomerantz 1977). The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focused on visual imagery -- hence the designation pictorial; though of course there may be imagery in other modalities -- auditory, olfactory, etc. -- as well.)
The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor and Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.))
On this understanding of the analog/digital distinction, phenomenal representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while non-phenomenal representations, whose properties do not vary continuously (a thought cannot be more or less about Paris: either it is or it is not) would be digital. It seems clear, however, that commitment to pictorial representation is not ipso facto commitment to phenomenal representation, since representations may have non-phenomenal properties that vary continuously.
There are, moreover, other ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity.
Kosslyn (1980, 1983), for example, uses the metaphor of a spatial display on a CRT screen to characterize pictorial representation: images are generated on a screen by a computer using information stored (in discursive form) in memory, and the spatial properties of the screen-image (which is composed of illuminated and unilluminated pixels) may correspond to the spatial properties of the imaged object. This isomorphism may be achieved without literal spatial representation in the brain. According to Kosslyn, a mental representation is "quasi-pictorial" when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. Distances between two locations within a representation may be defined by number of intervening positions. (Kosslyn 1982.)
Moreover, intervention need not be spatial: it may, for example, be functional. That is, the distance between two points on an object might be represented by computational distance between the representations of the points -- if, for example, the number of computational steps required to combine stored information about the positions of the particular points equals (or is proportional to) the number of spatially intermediate points on the object represented. (Cf. Rey 1981.)
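The idea of distance defined by intervening positions can be sketched with a small symbol-filled grid. The grid contents and the particular distance measure below are invented for the example; the point is only that relative distances between parts of the imaged object are mirrored by counts of intervening cells in the representation:

```python
# A toy "quasi-pictorial" representation: a grid of cells in which the
# positions of labeled cells preserve relative distances between parts of
# the imaged object. (Grid contents and metric invented for illustration.)

image = [
    [".",   ".", ".", "."],
    ["ear", ".", ".", "tail"],
    [".",   ".", ".", "."],
]

def find(grid, label):
    """Locate the (row, column) of a labeled cell."""
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell == label:
                return r, c

def grid_distance(grid, a, b):
    """Distance between two locations, defined by the number of intervening
    positions, counted along rows and columns."""
    (r1, c1), (r2, c2) = find(grid, a), find(grid, b)
    return abs(r1 - r2) + abs(c1 - c2)

print(grid_distance(image, "ear", "tail"))   # 3
```

As the surrounding text notes, nothing requires that the "intervening positions" be literally spatial: the same count could be realized functionally, as a number of computational steps.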
Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are "(labeled) interpreted symbol-filled arrays." The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each "cell" in the array represents a specific viewer-centered 2-D location on the surface of the imagined object).
Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.
(Others, however, e.g., Davidson (1975, 1982), have suggested that the kind of thought human beings are capable of is not possible without language, so that the dependency might be reversed, or somehow mutual (see also Sellars 1956). (But see Martin 1987 for a defense of the claim that thought is possible without language. See also Chisholm and Sellars 1958.) Schiffer (1987) subsequently despaired of the success of what he calls "Intention Based Semantics.")
It is also widely held that in addition to having such properties as reference, truth-conditions and truth -- so-called extensional properties -- expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions -- i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892). So, if the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa -- or both), then an analogous distinction may be appropriate for mental representations.
This distinction, which is accepted by many philosophers of mind, can be made out in a number of different ways (for example, the de dicto/de re interpretation of the narrow/wide distinction mentioned above). In general, theories of mental representation that accept a distinction between intrinsic and extrinsic determinants of content are called "two-factor" theories (Field 1978, Loar 1981, McGinn 1982). Such components may or may not be taken to be analogous to the intension and extension of a linguistic expression (Sterelny 1989).
First published: March 30, 2000
Content last modified: December 18, 2002