Phenomenal Intentionality

First published Mon Aug 29, 2016; substantive revision Wed Feb 22, 2017

Phenomenal intentionality is a kind of intentionality, or aboutness, that is grounded in phenomenal consciousness, the subjective, experiential feature of certain mental states. The phenomenal intentionality theory (PIT) is a theory of intentionality according to which there is phenomenal intentionality, and all other kinds of intentionality at least partly derive from it. In recent years, PIT has increasingly been seen as one of the main approaches to intentionality.

1. Introduction

The phenomenal intentionality theory (PIT) is a theory of intentionality, the aboutness of mental states. While many contemporary theories of intentionality attempt to account for intentionality in terms of causal relations, informational relations, functional roles, or other “naturalistic” ingredients, PIT aims to account for it in terms of phenomenal consciousness, the felt, subjective, or “what it’s like” (Nagel 1974) aspect of mental life.

According to PIT, the key ingredient giving rise to intentional states is phenomenal consciousness. Pautz (2013) describes PIT as taking a “consciousness first” approach to intentionality, since it claims that consciousness grounds or is explanatorily prior to intentionality. Kriegel (2011a, 2013a) describes the approach as one on which consciousness is the “source” of intentionality; consciousness “injects” intentionality into the world. These ways of characterizing PIT suggest a reductive picture, on which intentionality is reduced to or explained in terms of consciousness, but in section 2.3 we will see that some versions of PIT are not reductive.

By explaining intentionality in terms of phenomenal consciousness, PIT challenges the traditional view that the mind divides into two mutually exclusive and independent types of states: intentional states and phenomenal states (see Kim 1998 for a clear articulation of the traditional view). According to PIT, intentional states and phenomenal states are intimately related. Some phenomenal states are inherently intentional, and all intentional states are either phenomenal states or importantly related to phenomenal states.

Phenomenal intentionality has been discussed under that label only recently (Horgan and Tienson 2002 and Loar 2003, related to influential work by Searle, Siewert, and others), but many modern philosophers have suggested a close relation between thought, which is characteristically intentional, and perception, which is characteristically phenomenal, or between thought and consciousness itself. Rationalists such as Descartes held that all cognition is conscious, for example, while empiricists such as Hume and Locke held that all cognition is grounded in perceptual experience. Later, Brentano, Husserl, and the phenomenologists they influenced conceived of intentionality primarily as a conscious phenomenon. Like PIT, all of these views can be understood as taking a “consciousness first” approach. For more on the history of PIT, see Kriegel (2013a) and the entry consciousness and intentionality.

In this article we describe various versions of PIT, their motivations, their problems, and their relations to other views.

2. The phenomenal intentionality theory

2.1 The general view

Intentionality is the aboutness or directedness of mental states. For example, a thought that snow is white seems to “say” or represent that snow is white. Similarly, your current visual experience might represent a blue cup, or that there is a blue cup in front of you. When a state exhibits intentionality, we can think of it as involving the instantiation of an intentional property, a property of representing something. What the state represents is its (intentional) content. In this article, we use the word “state” for instantiations of properties, which are sometimes called token states. (See the entries on Intentionality and mental representation.)

Phenomenal consciousness is the felt, subjective, or “what it’s like” aspect of mental states (see Nagel 1974). Paradigmatic examples of phenomenal states include perceptual experiences, pains, emotional feelings, episodes of mental imagery, and cognitive experiences such as the feeling of déjà vu. Each of these states has a characteristic phenomenal character; there is something that it’s like to be in it. When there is something that it is like for a subject, we can say that she instantiates a phenomenal property, or that she has a phenomenal state. (See the entry on consciousness.)

Phenomenal intentionality is intentionality that is constituted by phenomenal consciousness. A phenomenal intentional state is an instantiation of an intentional property that is constituted by its subject's (past, present, and/or future) phenomenal states. An example of a phenomenal intentional state might be an instantiation of representing a red cube that is constituted by a visual experience of a red cube. When you experience a red cube, it seems that you automatically and necessarily represent a red cube in virtue of experiencing the cube. When an intentional state is not a phenomenal intentional state, we will say that it is a non-phenomenal intentional state.

To a first approximation, the phenomenal intentionality theory (PIT) is the view that there are phenomenal intentional states, and that these are the central sort of intentional states. The next subsections discuss ways that our initial gloss on PIT can be precisified.

2.2 Weak, Moderate, and Strong PIT

The following three theses about the relationship between phenomenal intentional states and intentional states are ways of precisifying the idea that phenomenal intentionality plays a central role in the mind:

Strong PIT: All intentional states are phenomenal intentional states.
Moderate PIT: All intentional states either are phenomenal intentional states or are at least partly grounded in some way in phenomenal intentional states.
Weak PIT: Some intentional states are phenomenal intentional states.

The “all” in the above formulations of PIT should be understood as quantifying over all actual intentional states, not all metaphysically possible intentional states. In the same way that some physicalist theories of intentionality allow that there are merely possible forms of intentionality that are independent of physical properties, PIT can allow that there are non-actual intentional states that have nothing to do with phenomenal consciousness. Of course, more specific versions of PIT might make stronger claims.

Of the three views mentioned above, Strong PIT asserts the strongest possible relationship between phenomenal intentionality and intentionality: it claims that phenomenal intentionality is the only kind of intentionality there is. Relatively few hold this view, but versions of it have been defended by Pitt (2004), Farkas (2008a), and Mendelovici (2010). The difficulty with this view, as we will see below, is that it is not clear that there are enough phenomenal states or phenomenal states of the right kind to constitute all intentional states. For example, it is not easy to see how standing beliefs like your belief that grass is green could be constituted by phenomenal states.

Moderate PIT is a significantly weaker view. It is compatible with the existence of non-phenomenal intentional states, but claims that any such non-phenomenal intentional states are at least partly grounded in phenomenal intentional states.

There are different ways of explicating the intuitive notion of grounding used in the definition of Moderate PIT. For our purposes here, we can say that A grounds B when B obtains in virtue of A. This gloss is itself in need of further analysis, but for now it is enough to know that grounding is an asymmetric relation of metaphysical determination (see Trogdon 2013 for an introduction to grounding). An example of grounding that is similar to the grounding relation posited by some proponents of Moderate PIT is the (alleged) grounding of linguistic meaning in speaker intentions: on many views of language, words have their meanings in virtue of speakers’ intentions toward them. To say that A is partly grounded in B is to say that A is grounded in the combination of B and other factors.

There are different views of how phenomenal intentionality might partly ground non-phenomenal intentionality:

One view is that non-phenomenal intentional states are simply dispositions to have phenomenal intentional states, and that these dispositions get their contents from the phenomenal intentional states that they are dispositions to bring about (Searle 1983, 1990, 1991, 1992). On this view, standing beliefs about grass that are not phenomenal intentional states are dispositions to have such states. This view does not count as a version of Strong PIT because dispositions to token phenomenal intentional states or phenomenal states are not constituted by such states (a disposition to have X is distinct from and not constituted by merely having X).

Another view is that non-phenomenal intentional states get their intentionality from functional relations they bear to phenomenal intentional states (Loar 2003a, Horgan & Tienson 2002, Graham, Horgan, & Tienson 2007). On this view, a standing belief that grass is green might have its content in virtue of being suitably connected to a host of phenomenal intentional states. This view is not a version of Strong PIT because functional connections to phenomenal states are something above and beyond phenomenal states.

A third view is that non-phenomenal intentionality is a matter of ideal rational interpretation (Kriegel 2011a,b). On this view, the relevant phenomenal intentional states are in the mind of an ideal rational interpreter. This view is not a version of Strong PIT because facts about rational interpretation are something above and beyond phenomenal facts, so intentional states are not fully constituted by phenomenal states on this view.

More variations on Moderate PIT will be discussed below. Proponents of Moderate PIT (or something close to it) include Loar (1987, 1988, 1995, 2003a, 2003b), Searle (1983, 1990, 1991, 1992), Goldman (1993a,b), Siewert (1998), McGinn (1988), Kriegel (2003, 2011a,b), Horgan, Tienson & Graham (2003, 2004, 2006, 2007), Georgalis (2006), Pitt (2004, 2009, 2011), Farkas (2008a,b, 2013), Mendola (2008), Chalmers (2010: xxiv), Bourget (2010), Pautz (2010, 2013), Smithies (2012, 2013a, 2013b, 2014), and Montague (2016).

Weak PIT merely claims that there is phenomenal intentionality. It allows that there are non-phenomenal intentional states that have nothing to do with phenomenal consciousness. The proponents of Weak PIT are too many to list. As we will see below, Weak PIT is entailed by some widely accepted views in philosophy of mind, including many forms of representationalism about phenomenal consciousness, the view that phenomenal states are identical to intentional states (but not necessarily vice versa).

Since Moderate PIT is the strongest view that is endorsed by most proponents of phenomenal intentionality theories, it is the view that has the best claim to being the phenomenal intentionality theory. For this reason, this article will focus mainly on Moderate PIT. Unless otherwise indicated, we will use “PIT” to refer to Moderate PIT.

2.3 Grounding, Identity, and Reductive PIT

Our definition of phenomenal intentional states is neutral between two types of views regarding how phenomenal states constitute intentional states. On grounding views, phenomenal intentional states are grounded in phenomenal states (either in individual states or in sets of such states). Since grounding is asymmetric, this view implies that phenomenal intentional states are distinct from the phenomenal states that ground them. In contrast, identity views take the relation that obtains between phenomenal intentional states and phenomenal states (in virtue of which the former are constituted by the latter) to be that of identity: certain instantiations of intentional properties are identical to instantiations of phenomenal properties. On this view, phenomenal intentional states are identical to individual phenomenal states or sets of phenomenal states.

Farkas (2008a,b) and Pitt (2004) (though not Pitt 2009) defend a grounding version of PIT. It is also possible to read Horgan and Tienson (2002) as holding a grounding version of PIT, since they take the relevant relation between phenomenal intentionality and phenomenal consciousness to be that of “constitutive determination”, which might be understood as a kind of grounding relation. Other proponents of PIT seem to favor an identity view.

We can also distinguish between versions of PIT that are reductive and versions that are not. To a first approximation, a theory of intentionality is reductive if it specifies the nature of intentionality in non-intentional terms that are more basic or fundamental than intentional terms (assuming the theory is true). A theory that is not reductive might either be neutral on the reduction question or incompatible with reduction.

All grounding versions of (Moderate or Strong) PIT are reductive. Such views entail that all intentionality is ultimately grounded in phenomenal states. Since grounding is asymmetric, the grounding phenomenal states cannot themselves be intentional and are more fundamental than intentional states. An identity version of (Moderate or Strong) PIT will be reductive if it holds or entails that phenomenal intentional states are identical to phenomenal states and that phenomenal descriptions are more fundamental than intentional descriptions (just as “H2O” descriptions are more fundamental than “water” descriptions). If such views are correct, it should be possible to understand phenomenal states independently of intentionality.

Versions of (Moderate or Strong) PIT that identify phenomenal intentional states with phenomenal states can also be nonreductive. On such nonreductive views, phenomenal descriptions of intentional states are not more fundamental than intentional descriptions. Exactly which versions of PIT that identify phenomenal intentional states with phenomenal states are reductive or nonreductive is an open question.

Regardless of whether PIT provides a reductive account of intentionality in general, versions of Moderate PIT that allow for non-phenomenal intentional states aim to reduce such states to phenomenal intentionality and other ingredients, so they provide a reductive account of at least some intentional states. We will discuss some of these views below.

2.4 Other dimensions of variation

Another important dimension of variation between versions of PIT concerns the extent of phenomenal intentionality. This disagreement cuts across the disagreement between Weak, Moderate, and Strong PIT. For example, Loar (2003a), who falls in the Moderate PIT camp, mostly limits phenomenal intentionality to perceptual and other sensory states. By contrast, other advocates of Moderate PIT (for example, Strawson and Pitt) claim that many thoughts have phenomenal intentionality. Most theorists maintain that unconscious subpersonal states, such as states in early visual processing or unconscious linguistic processing, lack phenomenal intentionality, though Bourget (2010, forthcoming) and Pitt (2009, Other Internet Resources) claim that some such states might have phenomenal intentionality that we are unaware of.

Phenomenal intentionality theorists also disagree on which mental states, if any, have non-phenomenal intentionality. Horgan & Tienson (2002) and Kriegel (2011a) claim that at least some unconscious subpersonal states have non-phenomenal intentionality, while Searle (1990, 1991, 1992) and Mendelovici (2010) deny this. Searle (1990, 1991, 1992) takes at least some standing states, such as non-occurrent beliefs and desires, to have non-phenomenal intentionality, which Strawson (2008) and Mendelovici (2010) deny.

3. The place of PIT in logical space

This section outlines some important relations between PIT and other views.

3.1 Alternative theories of intentionality

Reductive PIT stands in contrast with two well-known classes of reductive theories of intentionality: tracking theories (Stampe 1977; Dretske 1988, 1995; Millikan 1984; Fodor 1987), which take the intentionality of a mental state to be determined by causal-informational-historical links between it and things in the environment (see the entries on causal theories of mental content and teleological theories of mental content), and conceptual role theories, which take the content of mental states to be a matter of their relations to other mental states and sometimes to the external world (Block 1986; Harman 1987; Greenberg & Harman 2007). (See the section Conceptual role in the entry on narrow mental content.) Reductive PIT also contrasts with primitivism, the view that intentionality cannot be reduced. Reductive PIT is a competitor to tracking, conceptual-role, and primitivist theories in that it is an alternative account of the grounds of intentionality generally. Versions of PIT that are not reductive are also competitors to these theories, but to a more limited extent: they only offer an alternative explanation of the grounds of non-phenomenal intentional states. Such views are compatible with reducing phenomenal intentional states to tracking states or the like.

While PIT offers a different account of the grounds of intentionality than conceptual-role and tracking theories, it is noteworthy that all versions of PIT are strictly speaking compatible with these theories. It could turn out that PIT is true, but phenomenal consciousness reduces to conceptual role or tracking, so both PIT and the tracking or conceptual-role theory are true. The sense in which PIT and tracking or conceptual-role theories compete is the same sense in which the view that the Earth’s inner core is made of iron and the view that it is made of carbon would be competitors if both were considered seriously. These views are competitors because we would not expect both to be true. This is despite the fact that it is epistemically possible that carbon is a kind of iron and both theories are true.

3.2 Other views of the relationship between consciousness and intentionality

Representationalism (or intentionalism) is the view that phenomenal states are intentional states that meet certain further conditions (Harman 1990, Dretske 1995, Lycan 1996, Tye 2000, Byrne 2001, Chalmers 2004; see also the entry Representational theories of consciousness). As in the case of PIT, some versions of representationalism are reductive while others are not. When one says that phenomenal states are intentional states that meet certain conditions, one might intend this as a reduction of phenomenal consciousness, or one might merely intend to point out a true identity. As in the case of PIT, many proponents of representationalism take the view to be reductive.

The reductive versions of representationalism and PIT are incompatible: if consciousness reduces to intentionality, then intentionality does not reduce to consciousness, and vice-versa. However, versions of PIT and representationalism that are not reductive are compatible. It is common for the two views to be combined: many advocates of PIT also endorse a version of representationalism, claiming that all phenomenal states are also representational states (Horgan and Tienson 2002; Graham, Horgan, and Tienson 2007; Pautz 2010; Mendelovici 2013; Bourget 2010). In the other direction, representationalism, as we are understanding the view, is committed to Weak PIT, because it entails that some intentional states are phenomenal states.

Another view of the relationship between intentionality and consciousness is what Horgan & Tienson dub “separatism”, the view that consciousness and intentionality are wholly distinct mental phenomena (see e.g., Kim 1998). On this view, consciousness and intentionality do not bear interesting metaphysical relations to each other. For example, there is no identity or grounding relation between them (separatists reject both PIT and representationalism). Separatism is typically associated with the view that consciousness is limited to perceptual and sensory states, and intentionality is limited to beliefs, desires, and other propositional attitudes.

4. Arguments for PIT

This section overviews the main arguments and motivations for PIT.

4.1 The phenomenological case

Horgan and Tienson (2002) argue for the claim that “[t]here is a kind of intentionality, pervasive in human mental life, that is constitutively determined by phenomenology alone” (2002: 520). This is a (fairly strong) version of Weak PIT. They do so by arguing for the following two principles:

IOP: (The intentionality of phenomenology) Mental states of the sort commonly cited as paradigmatically phenomenal (e.g., sensory-experiential states such as color-experiences, itches, and smells) have intentional content that is inseparable from their phenomenal character.
POI: (The phenomenology of intentionality) Mental states of the sort commonly cited as paradigmatically intentional (e.g., cognitive states such as beliefs, and conative states such as desires), when conscious, have phenomenal character that is inseparable from their intentional content.

We take IOP to say that each paradigmatic phenomenal property has an associated intentional content such that, necessarily, all instances of the property have this content. We take POI to say that each paradigmatic intentional property has some phenomenal character such that, necessarily, all instances of the property have this phenomenal character.

Horgan and Tienson defend IOP by appealing to broadly phenomenological considerations:

You might see, say, a red pen on a nearby table, and a chair with red arms and back a bit behind the table. There is certainly something that the red you see is like to you. But the red that you see is seen, first, as a property of objects. These objects are seen as located in space relative to your center of visual awareness. And they are experienced as part of a complete three-dimensional scene—not just a pen with table and chair, but a pen, table, and chair in a room with floor, walls, ceiling, and windows. This spatial character is built into the phenomenology of the experience. (Horgan & Tienson 2002: 521, footnote suppressed)

The key point is that the redness that we notice in experience seems to be a property of external objects, which suggests that the redness is a represented property. This argument echoes the transparency considerations for representationalism (see the entry on Representational Theories of Consciousness).

Horgan and Tienson’s case for POI (the phenomenology of intentionality) rests primarily on detailed phenomenological observations purporting to show that there are phenomenological features corresponding to most contents of propositional attitudes as well as to the attitudes of belief and desire. We discuss these arguments in section 5.

With IOP and POI in hand, Horgan and Tienson proceed to argue for the widespread existence of phenomenal intentionality by considering the case of a subject’s phenomenal duplicate, which is a creature that has all the same phenomenal states as the subject throughout its existence. The following is a reconstruction of the key steps of their argument:

  1. The perceptual phenomenal states of a pair of phenomenal duplicates necessarily share some contents, including many perceptual contents (from IOP).
  2. Therefore, the phenomenal duplicates necessarily have the same perceptual beliefs.
  3. Therefore, the phenomenal duplicates necessarily share many of their non-perceptual beliefs.
  4. Therefore, the phenomenal duplicates necessarily share many of their intentional contents at the level of perception, perceptual beliefs, and non-perceptual beliefs.

The general idea is that phenomenal states, with their phenomenally determined intentionality, bring in their train much of the rest of the “web of belief”.

Horgan and Tienson argue for the transition from (1) to (2) by articulating in some detail how the contents of perceptual experiences, either individually or in groups, bring in their train perceptual beliefs. A key idea, supported by POI, is that perceptual beliefs and other attitudes towards perceptible contents have phenomenal characters closely associated with them. For example, there is a phenomenology of accepting various contents as true. The suggestion is that once one has a vast number of perceptual experiences with their associated perceptual content, and feelings of accepting and rejecting some of these, that is enough to qualify as having a number of perceptual beliefs.

Regarding the transition from (2) to (3), a key idea, again derived from POI, is that non-perceptual beliefs have extensive phenomenology. For example, according to Horgan and Tienson, there is something that it’s like to wonder whether to cook meatloaf for dinner. The phenomenology of such non-perceptual thoughts, together with one’s vast collection of perceptual beliefs and perceptual experiences, fixes a large number of non-perceptual beliefs and other non-perceptual propositional attitudes. (4) combines conclusions (1)–(3).

The following considerations, while not exactly Horgan and Tienson’s, seem to go in the same direction as their line of argument: If all beliefs and desires have phenomenal characters unique to them, as Horgan and Tienson take themselves to have established, then phenomenal duplicates will share these phenomenal characters. By IOP, these phenomenal characters must determine contents. Plausibly, they determine the contents of the beliefs and desires that they characterize. So if an individual has a given belief content C, then his or her phenomenal duplicate has this content as the content of an experience. Moreover, the duplicate has a feeling of accepting C. It is then quite plausible that the duplicate believes C.

Horgan and Tienson’s argument establishes Weak PIT, but it does not yet establish any version of Moderate PIT. It could be that many intentional states are phenomenal intentional states, but some intentional states are neither phenomenal intentional states nor grounded in phenomenal intentionality (Bailey and Richards (2014) point out related limitations of the argument). However, when combined with Horgan and Tienson's arguments for the claim that many non-phenomenal intentional states are grounded in phenomenal intentionality (more on this view below), we have some support for Moderate PIT.

4.2 Assessibility for accuracy

Siewert argues for Weak PIT by arguing that phenomenal states are automatically assessable for accuracy. A key element of Siewert’s argument is his assumption that phenomenal characters can be identified with “how it seems for it to look some way to someone”. We take this to mean that phenomenal states are states of things seeming a certain way, where the relevant kind of seeming is the kind we are familiar with from cases where things look a certain way in perception (perhaps there are other kinds of seemings that are not phenomenal states). Siewert’s argument is contained in the following passage, in which we have added numbers corresponding to the premises and conclusions in our rendering of the argument below:

First, consider some instance of its seeming to you as it does for it to look as if something is shaped and situated in a certain way, such as its seeming to you just as it does on a given occasion for it to look as if there is something X-shaped in a certain position. (1) If it seems this way to you, then it appears to follow that it does look to you as if there is something X-shaped in a certain position. (3) If this is right, then its seeming this way to you is a feature in virtue of which you are assessable for accuracy—(5) that is to say, it is an intentional feature. For, from what we have said, (2) if it seems to you as it does for it to look this way, then, if it is also the case that there is something X-shaped in a certain position, it follows that the way it looks to you is accurate. (Siewert 1998: 221)

Siewert suggests that this argument straightforwardly generalizes to a large number of perceptual experiences.

Let S be the phenomenal state in which it seems to you just as it does on a given occasion for it to look as if there is something X-shaped in a certain position. The argument in the above quote can be broken down as follows:

  1. Necessarily, if you are in S, it looks to you as if things are a certain way W, in virtue of being in S.
  2. Necessarily, if it looks to you as if things are in way W, things being (or not being) W would make you accurate (or inaccurate).
  3. Therefore, if you are in S, you are assessable for accuracy with respect to things being W, in virtue of being in S.
  4. If you are assessable for accuracy in virtue of being in a certain state, this state has intentional content.
  5. Therefore, necessarily, S has intentional content.

Siewert does not explicitly defend premises (1) and (2). (4) is defended in section 6.2 of the book.

One might object that (3) does not follow from (1) and (2). Perhaps S is such that, necessarily, things being a certain way (or not) would make the bearer of S accurate (or not), but it is not in virtue of being in S that its bearer is assessable for accuracy. The assessability might come from the inevitable addition of an interpretation to S in all circumstances. Siewert argues against this possibility extensively between pages 222 and 245, ruling out various sources of interpretation.

Gertler (2001) objects that there is an alternative explanation of Siewert’s observations about the co-occurrence of phenomenal and intentional properties: intentional properties automatically give rise to phenomenal properties. Gertler argues that Siewert has not ruled out this alternative, and so fails to establish PIT.

4.3 Internalism and brains in a vat

It is possible to argue for PIT on the basis of internalism about mental content, the view that what a subject’s mental states represent is fully determined by her intrinsic properties. (The alternative to internalism is externalism. Intentional content that is determined by a subject’s intrinsic properties is said to be narrow as opposed to wide. See the entries on narrow mental content and externalism about mental content.)

Loar (2003a) makes the following argument from internalism to PIT. First, Loar proposes the following two desiderata for a theory of intentionality: (1) The theory should be a non-referential theory, where a non-referential theory is a theory that does not take intentionality to be a matter of reference to external entities, for example, concrete or abstract objects. This desideratum is motivated by internalism. (Note that, for Loar, intentionality is not the same thing as reference, and so a non-referential theory of intentionality does not commit one to denying that there is such a thing as reference.) (2) The theory should accommodate externalism about reference and truth-conditions (see, e.g., Putnam 1975, Burge 1979, Kripke 1980).

Loar then argues that non-phenomenal internalist views fail to meet desiderata (1) and (2). The first view he considers is short-arm functionalism, the view that causal interactions between brain states give rise to intentionality. The second view is a version of the descriptivist theory of reference combined with short-arm functionalism about its primitive representations. Having excluded these views, he argues that a version of PIT can meet his two desiderata. Phenomenal properties are inherently intentional in that they exhibit directedness, or purport to refer. Since purporting to refer is not the same thing as referring, the result is non-referential mental content. This satisfies the first desideratum.

Loar argues that his view satisfies the second desideratum by arguing that phenomenal properties do not by themselves secure reference or truth-conditions. Instead, reference and truth-conditions are a matter of externally-determined relations, as externalists such as Putnam (1975), Burge (1979), and Kripke (1980) claim. However, which externally-determined relations matter for reference depends on a subject’s non-referential internalist content. Horgan, Tienson, & Graham (2004) also suggest that PIT is the best available theory of narrow content.

Loar (2003a) and Horgan, Tienson & Graham (2004) defend internalism by appealing to brain in a vat scenarios. A brain in a vat duplicate is an exact physical duplicate of a normally embodied human brain that is kept in a vat of life-sustaining liquids and is hooked up to a computer that delivers to it the same kinds of stimulation its embodied twin receives. It intuitively seems that a brain in a vat would have a mental life “matching” that of its embodied twin. The brain in a vat and its twin would have matching perceptual experiences, perceptual judgments, and beliefs. For example, when the embodied twin believes that she is lying on the beach sipping a frappé, the brain in a vat twin believes that she is lying on the beach sipping a frappé. However, while the normal subject’s belief might be true, the envatted subject’s belief, and many of her other mental states, would be false or non-veridical.

Farkas (2008a) agrees with Loar and Horgan et al. (2004) that PIT is the best available theory of narrow content, but criticizes Loar (2003a) and Horgan et al. (2004) for conceding too much to externalism: Loar allows externally determined reference and truth-conditions, and Horgan et al. allow broad content. Instead, Farkas argues that internalist, phenomenally constituted intentionality is all that a theory of intentionality needs.

Wilson (2003) objects to Loar’s appeal to brains in vats, claiming that intuitions concerning them are theoretical intuitions and are likely to be rejected by many of PIT’s opponents. For this reason, they do not provide a neutral starting point from which to argue for PIT.

4.4 Aspectual shape

In an early defense of PIT, Searle (1990, 1991, 1992) puts forth an argument based on the “aspectual shapes” of intentional states. The argument is rather complex and open to several interpretations, but here is one simplified way of understanding it: Searle begins by noting that all intentional states have an aspectual shape, where an aspectual shape is a matter of how something is represented. For example, there is a difference between representing Hesperus and representing Phosphorus, or representing Superman and representing Clark Kent. The differences lie not in which objects are represented, but in how they are represented; these are differences in their aspectual shapes.

Searle then argues that no internal or external unconscious physical or functional facts can determine aspectual shapes. The only thing that can determine aspectual shape is consciousness. If that is so, then it looks like unconscious states can only have their aspectual shapes in virtue of their connections to conscious states. Searle concludes, more specifically, that unconscious intentional states involve dispositions to have conscious states, a thesis that he calls the connection principle. (Sometimes he says that unconscious states are potentially accessible to consciousness, apparently meaning that they can be introspected consciously (1992, p. 156), but other times he says what we say here: that unconscious states involve dispositions to have conscious states (1992, p. 159, 161–162). The latter is what Searle says as part of his argument for the connection principle, and this interpretation is more in line with the argument he deploys.)

In sum, the argument seems to go as follows:

  1. All intentional states have aspectual shape.
  2. Only states that are conscious or involve dispositions to have conscious states have aspectual shape.
  3. Therefore, all intentional states are either conscious or involve dispositions to have conscious states.

According to the argument’s conclusion, all intentional states are either phenomenal intentional states or involve dispositions to have such states. This is a version of Moderate PIT.

Searle’s arguments have elicited a large number of responses. Fodor and Lepore (1994) argue that there is no suitable way of cashing out what it would take for a state to be potentially conscious such that Searle’s claims are both plausible and tendentious (i.e., that they entail, as Searle claims, that much of cognitive science’s appeal to non-conscious intentional states is misguided). Searle (1994) responds to Fodor and Lepore. Davies (1995) argues that Searle might be right about a kind of intentionality, but that there are other kinds of intentionality invoked in cognitive science that are not dependent on consciousness. Van Gulick (1995) argues that Searle’s notion of aspectual shape smuggles in the notion of consciousness, and that on a less contentious understanding of aspectual shape, his argument that consciousness is the only way of accounting for aspectual shape does not succeed. Baaren (1999) also takes issue with the notion of aspectual shape. See also the commentaries accompanying Searle (1990).

4.5 Content determinacy

Another line of argument for PIT, similar to Searle’s, has to do with content determinacy. Graham, Horgan & Tienson (2007) and Horgan & Graham (2012) argue that it is difficult to see how unconscious neural activity, functional role, dispositions to behavior, and other possible physical bases of intentionality can yield the sorts of determinate contents we manifestly represent (see also Dennett 1987, Quine 1960: ch. 2, and Kripke 1982). For example, no causal, functional, or other purely physical features of one’s brain or of its environment seem to make it the case that one is thinking about rabbits rather than undetached rabbit parts. A Martian looking down on Earth and having complete knowledge of all Earthly physical facts could not tell whether we are representing rabbits or undetached rabbit parts. Thus, it appears that a physical-functional theory of intentionality will predict that one’s concept RABBIT is indeterminate between the two contents.

Similarly, nothing about our brains, their finite dispositions, or their environments indicates that our word “plus” means the plus operator rather than a Kripkean quus operator, an operator that works just like plus when the operands are less than 57 and returns 5 when either operand is 57 or greater (see Kripke 1982). If we do determinately represent plus and rabbits, something other than tracking relations, dispositions towards behaviors, internal functional roles, or brain states has to determine this. Along similar lines, Strawson (2008) argues that phenomenal intentional facts about what we take an intentional state to refer to play a key role in determining what an intentional state refers to.

Some argue that phenomenal consciousness is capable of explaining content determinacy. According to Graham, Horgan, and Tienson, there is a phenomenal difference between representing rabbits and representing undetached rabbit parts. Since PIT claims that phenomenal intentional content is determined by phenomenal character, it allows that the two states have distinct contents. The supposition that there is high-level cognitive phenomenology corresponding to such abstract contents as rabbits and undetached rabbit parts is key to this argument. This is a controversial claim, but one that is quite central to many versions of PIT. We discuss this claim in section 5.

Arguments for PIT from content determinacy rely on the strong claim that the totality of physical facts does not fix determinate contents and that we nonetheless have determinate contents. While PIT does not entail dualism about consciousness, this claim does, so it goes beyond what is required to defend PIT (see Pautz 2013, who objects to arguments for PIT from content determinacy for related reasons). The claim will be resisted by anyone who thinks that physicalism about the mind is well motivated. One might say that the intuition that physical facts cannot fix determinate contents arises from the fact that we do not have a suitably good understanding of how intentionality arises from physical facts; had we such an understanding, the intuition would disappear.

4.6 Intentional states about non-existents

Relationalism about intentionality is the view that intentionality is a relation to distinctly existing entities that serve as contents. Non-relationalism about intentionality (sometimes called adverbialism) is the view that intentionality is not a relation to distinctly existing entities that serve as contents.

Kriegel (2007, 2011a) argues that an adverbial view of intentionality provides the best explanation of how we can represent things that don’t exist, such as Bigfoot, and that PIT is the best candidate adverbial view of intentionality. Kriegel first argues that the following three intuitively appealing claims are inconsistent:

  • (a) One can represent non-existents.
  • (b) One cannot bear a relation to non-existents.
  • (c) Representing something involves (constitutively) bearing a relation to it. (Kriegel 2007: 308)

One of these claims needs to be rejected. Kriegel argues that it is (c), the claim that asserts relationalism. Kriegel’s argument proceeds by a process of elimination.

Kriegel considers rejecting (a). On this proposal, when we seem to represent dragons, Bigfoot, or Santa Claus, we either fail to have an intentional state, or we represent something else. One reason Kriegel rejects the first option is that it implies that there is a gap between trying to represent and representing, which he takes to be implausible. On the second option, when we seem to represent non-existent concrete entities, we are really just representing something else, such as existent abstract entities (e.g., universals or propositions), existent mental entities (e.g., sense data or ideas), or existent possible but non-actual entities. But Kriegel takes this option to be highly counterintuitive: on it, when we seem to be thinking about the concrete, flesh-and-blood Bigfoot, we are in fact thinking about an abstract or mental entity. Another worry is that accounting for the representation of non-existents seems like the wrong kind of reason to accept the existence of these abstract, mental, or merely possible entities.

Another option is to reject (b). Kriegel argues that just as a monadic property cannot be instantiated without an existing particular that instantiates it, so too a relation cannot be instantiated without existing particulars that instantiate it. In short, it is a general rule that relations need relata. Rejecting (b) is tantamount to claiming that the intentionality relation is an exception to this general rule, which is unappealing.

Kriegel concludes that we should reject (c). He calls his non-relational view “adverbialism”, since it draws its inspiration from the adverbialist views of perception of Ducasse (1942) and Chisholm (1957). According to Kriegel’s adverbialism, representing Bigfoot is not standing in a relation to an entity, but rather instantiating a non-relational intentional property, which we might describe as the property of representing Bigfoot-wise.

So far, this only motivates adverbialism. The final step of the argument motivates PIT: One objection to adverbialism is that it is mysterious what non-relational intentional properties are. What is it to represent Bigfoot-wise? Kriegel suggests that a plausible account of these properties is that they are phenomenal properties. Phenomenal properties are usually taken to be non-relational and there is independent reason to think they give rise to intentionality (see the other arguments in this section). The resulting picture is one on which phenomenal intentionality is non-relational. Kriegel suggests that this view can be combined with the view that non-phenomenal intentionality is derived from phenomenal intentionality, and is relational.

In short, Kriegel’s argument attempts to show that PIT is the best way to account for the representation of non-existents.

This argument motivates non-relational versions of PIT, but not relational versions, on which intentionality is a relation to distinctly existing contents. Loar (2003a), Pitt (2009), Kriegel (2007, 2011a), and Mendelovici (2010) hold non-relational versions of PIT, while Pautz (2013) and Bourget (2010) defend relational versions, on which both phenomenal properties and intentional properties are relational.

4.7 Other arguments for PIT

We will briefly mention two other lines of argument for PIT.

One revolves around the idea that norms of rationality are constitutive of (non-phenomenal) intentional states. Pautz writes:

Consciousness grounds rationality because it is implicated in basic epistemic norms. … In turn, the facts about rationality help to constitutively determine belief and desire (Davidson, Lewis). So consciousness also ultimately grounds belief and desire. (Pautz 2014: 176)

This line of argument combines two claims that have been defended independently. The first is a view of non-phenomenal states (chiefly, propositional attitudes) on which they derive their contents from norms of rationality (Davidson 2001, Lewis 1983, Chalmers 2012). The second is the view that consciousness plays a role in determining rational norms (Siewert 1998, Campbell 2002, Smithies 2012, 2014). In addition to the above passage from Pautz, this argument for PIT is also made in Chalmers 2012: 467 and Pautz 2013: 226.

Another line of argument for PIT is that there is nothing to determine who a given non-conscious state of mind belongs to, unless that state consists in a disposition to produce a conscious mental state of the right sort (Ludwig 1996). Kriegel (2003) similarly argues that only PIT can account for the fact that intentional states have a subjective or “for-me” character.

5. Cognitive phenomenology

Many defenses and elaborations of PIT argue that occurrent thoughts have a rich and varied phenomenology. Such a view of thought is required by versions of PIT that claim that the contents we normally attribute to thoughts are phenomenal contents (see section 6.2). Cognitive phenomenology is also a widely debated topic independently of any connection to PIT (see, e.g., Cognitive Phenomenology, edited by Bayne and Montague 2011).

Advocates of PIT that take thought content to be phenomenal content mainly focus on arguing for the following two claims:

(Proprietary) Thoughts have proprietary phenomenal characters.
(Individuative) Thoughts have individuative phenomenal characters.

The term “proprietary” is due to David Pitt (2004). Thought has a proprietary phenomenal character just in case the phenomenal characters of thoughts are special or unique to thought, i.e., they are not perceptual, verbal, bodily, or affective phenomenal characters, or other phenomenal characters that are present in mental states other than thoughts. Following current usage, we call all of the aforementioned kinds of phenomenology sensory phenomenology, and the putative proprietary phenomenology of thought cognitive phenomenology. The claim that thought has a proprietary phenomenology is then just the claim that it has a non-sensory phenomenology.

Thoughts have individuative phenomenal characters just in case thoughts with different intentional contents have different phenomenal characters, and thoughts with different phenomenal characters have different intentional contents. (We use “individuative” in the way Bayne and Montague (2011: ch. 1) use it. Pitt (2004) uses the term “distinctive” for a similar notion and the term “individuative” to mean something else.)

While most advocates of PIT take thoughts to have individuative phenomenal characters, they need not do so. Grounding PIT can allow that there is a one-many grounding relation between contents and phenomenal characters. For example, phenomenal properties r1 and r2 might both ground the intentional property of representing red421. If all intentional properties were grounded in this way, PIT would be true, but (Individuative) might not be.

It is possible for thoughts to have proprietary but not individuative phenomenal characters. For example, suppose every thought came with either a generic feeling of understanding or a generic feeling of confusion. These phenomenal characters might be proprietary in that they do not occur outside of thoughts, but they are not individuative, since thoughts with different intentional contents might have the same phenomenal characters.

It is also possible for thoughts to have individuative but not proprietary phenomenal characters. For example, suppose every thought came with a different kind of perceptual imagery. Then thoughts with different contents would have different phenomenal characters, but these phenomenal characters would not be special to thoughts, since perceptual states would have them too.

In addition to the claims that there is a proprietary and an individuative phenomenology of thought, advocates of PIT usually aim to establish that thought’s content is phenomenal intentional content, and thus that thought’s intentional properties are obtained in the requisite way from phenomenal properties.

While much of the discussion of the phenomenology of thought involves careful argumentation and consideration of cases, it is worth mentioning that many advocates of a proprietary phenomenology of thought find the view obvious, and the negation of the view clearly false or even absurd. Strawson writes:

To deny this [cognitive phenomenology], one must hold that the total lifelong character of our lived experience—everything that life is to us experientially—consists entirely of bare or pure sensation or feeling of one kind or another. It must, for example, be false to say that anguish at someone’s death includes conscious comprehending believing entertaining of the proposition that he is dead. (Strawson 2011a: 295, italics in original)

In a similar vein, Kriegel writes:

For my part, I am persuaded of the existence of cognitive experience […] most vividly by something like everyday experiential overwhelm: it simply seems that my inner life is much more interesting to me than it would be if my conscious experience consisted merely in perceptual experiences. (Kriegel 2011a: 50)

In what follows we discuss the main arguments that have been offered to supplement such appeals to the alleged obviousness of cognitive phenomenology.

5.1 Phenomenal contrast cases

Phenomenal contrast cases are cases of two thoughts that are alike in sensory phenomenal character but differ in thought content.

Siewert (1998) asks his readers to compare an experience of hearing or reading a sentence without understanding, as when one reads a difficult passage without paying attention to it, and an experience of hearing or reading a sentence with understanding. There is clearly a phenomenal difference between these cases. Siewert argues that the difference is not a difference in verbal or perceptual imagery, since the verbal and perceptual imagery might be the same in both cases. The best explanation of the phenomenal contrast is that thought involves proprietary cognitive phenomenology.

Strawson (1994) argues for a kind of “understanding experience” by contrasting the cases of a monolingual English speaker and a monolingual French speaker listening to the news in French. The experiences of the two subjects differ in a way that is not fully explained by a difference in sensory phenomenology. The best explanation involves a difference in cognitive phenomenology. Siewert (1998) also employs examples involving the comparison of hearing sentences in familiar versus unfamiliar languages.

As it stands, Strawson’s argument can only establish that thought has a proprietary phenomenology, but Kriegel (2011a: 49) extends it to argue that thought has an individuative phenomenology. He asks us to imagine two languages involving graphically and phonetically identical words, such that the same report can be interpreted in one language as describing a faraway war and in the other as describing a children’s bedtime story. Monolingual speakers of each language will experience different phenomenal characters upon reading or hearing this report. The best explanation of this involves a difference in cognitive phenomenology. This supports the claim that cognitive phenomenology is individuative.

Other arguments from phenomenal contrast cases aim to create the contrasting experiences in the reader herself. Horgan and Tienson (2002) present the reader with sentences that are likely to give rise to two different interpretations, such as the following:

(Relatives) Visiting relatives can be boring.

On one reading, the sentence is about the act of visiting relatives. On another reading, the sentence is about relatives that visit. Both readings are likely to generate the same verbal imagery, but they differ in content. Horgan and Tienson encourage the reader to notice that they also differ in phenomenal character. If this is right, then this suggests that thought has a proprietary and individuative phenomenology.

The following sentences are also used to generate phenomenal contrast cases:

(Dogs) Dogs dogs dog dog dogs. (Horgan and Tienson 2002)
(Time) Time flies! (Horgan and Tienson 2002)
(Bar) Before she had a chance to pass the bar, she decided to change directions, but she was not so pleasantly surprised with where she wound up. (Siewert 1998: 279)

(Dogs) might at first be read or heard without understanding, but might subsequently be read with understanding, giving rise to a phenomenal contrast case. (Time) can be read as a cliché or as a command at the insect races. (Bar) can be read as being about an aborted legal career or a trip around town. Again, the claim is that these different readings of the sentences give rise to different phenomenal experiences, and that the best explanation of this is that thought has a proprietary and individuative phenomenology.

Though instances of pairs of cases differing in intentional content and in phenomenal character provide some evidence for the existence of individuative cognitive phenomenology, for the thesis that thought has an individuative phenomenology to be true, there must be no cases of thoughts that are alike in content but differ in phenomenal character. Wilson (2003) responds to Horgan and Tienson by accepting their observations in their phenomenal contrast cases, but attempting to provide a counterexample to (Individuative):

In the spirit of Horgan and Tienson’s appeal for a reader to “pay attention to your own experience” ([2002] p. 521), I have just done the decisive experiment: I thought first that George Bush is President of the United States, and had CNN-mediated auditory and visual phenomenology that focussed on one of his speeches. I then took a short break, doodled a little, wandered around the room, and then had a thought with that very same content and … nothing. Or at least nothing distinctly Bush-like, as in the first case. (Wilson 2003: 417)

If Wilson is right, this not only shows that arguments based on phenomenal contrast ultimately fail, but also provides positive considerations against (Individuative), since it shows that there can be thoughts with the same contents that fail to have the same phenomenal character.

Versions of PIT that require only a one-many relation between phenomenal intentional content and phenomenal character, and hence that do not need to endorse (Individuative), can accommodate observations such as Wilson’s putative observation, since they allow that multiple phenomenal characters can ground or constitute the same phenomenal intentional content.

Another kind of objection to arguments from phenomenal contrast agrees that there is a phenomenal difference between the relevant cases, but claims that this difference is exhausted by sensory phenomenology, where this might include the phenomenology of perceptual imagery, affective experience, or verbal imagery (see, e.g., Lormand 1996, Tye and Wright 2011, Levine 2011, Robinson 2011, Carruthers and Veillet 2011). What makes the phenomenal contrast cases described above vulnerable to this kind of objection is that they do not control for all potentially accompanying perceptual imagery. This leaves open the possibility that the observed phenomenal differences are fully accounted for by such imagery.

Chudnoff (2013) provides a phenomenal contrast case that he claims avoids this reply. He asks his readers to compare the experience of an array of dots to an experience of the same array of dots experienced as part of a proof for a mathematical theorem. In the second experience, but not in the first, the perceptual experience involves cognitive phenomenology. The array of dots is in some sense experienced as part of a larger whole, representative of something, or in some sense meaningful. (Chudnoff 2015a,b also contain extensive critical discussions of phenomenal contrast cases.)

One might worry that, like the original phenomenal contrast cases, Chudnoff’s case does not control for certain forms of accompanying imagery, in this case, verbal imagery. The adamant opponent of cognitive phenomenology might insist that just as the phenomenal differences in the phenomenal contrast cases involving sentences might be explained by perceptual imagery, the differences in Chudnoff’s cases can be accounted for by differences in verbal phenomenology.

It might seem that what is needed is a phenomenal contrast case that plausibly controls for both verbal and perceptual phenomenology, as well as other kinds of sensory phenomenology. Mendelovici (2010: 107) argues that thoughts about chiliagons (one-thousand-sided figures) and megagons (one-million-sided figures) might involve the same mental imagery (both shapes effectively look like circles) and so might provide the basis for such cases. Imagine a person who mistakenly uses the word “megagon” to mean chiliagon. Compare her experience of viewing a chiliagon and thinking that it is a chiliagon with your experience of viewing a megagon and thinking that it is a megagon. Since you both use the word “megagon” to describe the shape you’re thinking about, and since the two shapes are perceptually similar, you will likely have the same perceptual and verbal imagery. If there is a phenomenal difference between the two cases, it is plausibly attributed to a difference in thought content.

5.2 Spontaneous thoughts

Siewert (1998, 2011) claims that sudden realizations are cases in which cognitive phenomenology is particularly noticeable.

[Y]ou are standing at the door to your house, reaching in your pants pocket for the door key, and find it empty. You feel a sudden panic; you think perhaps you have locked yourself out; you try to remember where you put the keys, then recall switching them to your coat pocket earlier; you reach and find them there—relief. (Siewert 1998: 277)

I meet a friend, and she asks me, “Did you bring the book?” For a moment I am at a loss as to what book she’s talking about—and then I realize in an instant what book it is. (Siewert 2011: 258)

Siewert claims that such realizations needn’t involve any verbal or perceptual imagery. In the case of the first example, you don’t think the words, “I have locked myself out” or visualize your keys. Siewert takes these and other similar examples to show that thought has a proprietary phenomenology.

Similarly, in order to argue that the phenomenal properties of thought are not merely associated with verbal imagery, Horgan and Tienson (2002) point to examples of spontaneous thoughts we have when engaging in activities such as cooking or working in a garage or woodshop:

There is something that it is like to think that a certain tool is just there—in that cabinet, say—but such beliefs are typically not verbalized either vocally or subvocally or by way of verbal imagery. (Horgan and Tienson 2002: 523)

Like Siewert’s examples, this example helps motivate the claim that thought has a proprietary phenomenology.

This line of argument relies heavily on introspection. Detractors of cognitive phenomenology (for example, Robinson 2011 and Tye & Wright 2011) claim that their own introspective observations of sudden realizations reveal no such proprietary phenomenology. This results in an apparent stalemate.

5.3 The tip-of-the-tongue phenomenon

Some experiences with a cognitive character seem to make a fairly good case for a minimal amount of proprietary phenomenology of thought. For example, Goldman (1993a) invokes the tip-of-the-tongue phenomenon to argue that thought has a proprietary phenomenology, an argument he attributes to Jackendoff (1987).

When one tries to say something but can’t think of the word, one is phenomenologically aware of having requisite conceptual structure, that is, of having a determinate thought-content one seeks to articulate. What is missing is the phonological form: the sound of the sought-for word. The absence of this sensory quality, however, does not imply that nothing (relevant) is in awareness. Entertaining the conceptual unit has a phenomenology, just not a sensory phenomenology. (Goldman 1993a: 24)

The tip-of-the-tongue phenomenon occurs when one cannot think of a word, so it involves the absence of verbal phenomenology corresponding to that word. But instances of this phenomenon do involve some phenomenology. Goldman proposes that this phenomenology is non-sensory.

Lormand (1996) responds to this suggestion by providing an alternative account of the relevant phenomenology on which it is sensory, which he also takes to be supported by Jackendoff 1987. According to Lormand, the relevant phenomenology involves a sensory phenomenal experience of a void, which is akin to hearing silence, along with an experience of effort, whose phenomenology is also sensory.

5.4 Epistemic markers of consciousness

Phenomenal consciousness has various epistemic markers: It gives rise to (at least the appearance of) an explanatory gap (see Levine 1983 and the entry on consciousness), it is susceptible to zombie thought experiments (see Chalmers 1996 and the entry on zombies), and it is susceptible to the knowledge argument (see Jackson 1982 and the entry on qualia: the knowledge argument). These arguments usually focus on sensory phenomenal consciousness. For example, Levine’s central example is that of pain and Jackson’s is that of experiencing red.

The initial plausibility of these kinds of arguments might be taken to serve as an indicator of phenomenal consciousness: plausibly, if these arguments have some traction with some mental state, then that mental state is likely to have phenomenal properties. Some have used the presence or absence of such markers to argue for or against cognitive phenomenology.

Goldman (1993b) argues that a version of Jackson’s (1982) thought experiment can be run with propositional attitudes, such as doubt and disappointment:

Jackson’s example is intended to dramatize the claim that there are subjective aspects of sensations that resist capture in functionalist terms. I suggest a parallel style of argument for attitude types. Just as someone deprived of any experience of colors would learn new things upon being exposed to them, viz., what it feels like to see red, green, and so forth, so (I submit) someone who had never experienced certain propositional attitudes, e.g., doubt or disappointment, would learn new things on first undergoing these experiences. There is “something it is like” to have these attitudes, just as much as there is “something it is like” to see red. (Goldman 1993b: 365)

In other words, Goldman argues that Jackson’s thought experiment is compelling in the case of propositional attitudes, and that this supports the claim that propositional attitudes have proprietary phenomenal properties above and beyond functional properties. (Presumably, Goldman intends his argument to apply only to occurrent propositional attitudes, since he takes standing states to be purely dispositional (1993b: 366).) Goff (2012) makes similar observations.

Horgan (2011a) also uses epistemic indicators of phenomenal consciousness to argue for cognitive phenomenology. He argues that partial zombies who lack cognitive phenomenology but are otherwise like us are conceivable, and that we conceive of such zombies as phenomenally different from us; this suggests that we ourselves have cognitive phenomenology.

Interestingly, Carruthers and Veillet (2011) use epistemic indicators to argue against cognitive phenomenology. They claim that thought is not susceptible to the explanatory gap, and thus that there is no cognitive phenomenology.

5.5 Self-knowledge

Pitt (2004) argues that there is a kind of self-knowledge that can only be explained by cognitive phenomenology. Pitt’s argument not only aims to establish that there is a proprietary and individuative cognitive phenomenology, but also that this phenomenology is constitutive of thought’s content, i.e., that thought’s content is phenomenal intentional content.

Pitt’s argument runs as follows: Normally, we can consciously, introspectively, and non-inferentially (1) distinguish an occurrent thought from other mental states, (2) distinguish an occurrent thought from other occurrent thoughts, and (3) identify which occurrent thoughts we are thinking. Pitt considers various explanations of these abilities, and argues that the only plausible explanation is that thought has a proprietary, individuative, and constitutive phenomenology. Thought’s proprietary phenomenology explains how we can tell the difference between thoughts and other kinds of mental states, thought’s individuative phenomenology explains how we can tell the difference between one thought and another, and thought’s phenomenology being constitutive of its content explains how we can identify which thoughts we are thinking.

Levine (2011) argues that Pitt (2004) fails to rule out an alternative explanation of the relevant kind of self-knowledge: immediate self-knowledge is a matter of non-inferentially coming to have an intentional state that represents that one is thinking what one is in fact thinking. In having such a state, one is automatically aware of its content. Pitt (2011) responds that, when properly understood, Levine’s proposal can’t work unless there is the contested kind of cognitive phenomenology.

Goldman (1993a,b) also uses considerations from self-knowledge to argue for a phenomenology of thought. He argues that the way we can tell what mental states we are in is not through their functional roles or neural properties, but through their phenomenal properties. In the case of cognitive states, the best explanation for how we can discriminate between different strengths of desires or degrees of belief is that thoughts have an accompanying phenomenology.

6. Challenges for PIT

The main challenge for PIT is to explain the intentionality of mental states that may reasonably be taken to have intentionality, but appear not to have phenomenal intentionality. Four types of mental state give rise to challenges of this kind: thoughts, standing propositional attitudes, wide intentional states, and occurrent unconscious states. Such states don’t seem to be phenomenal intentional states, so it is not immediately clear how PIT can account for them.

There are three general strategies for handling a problematic state: eliminativism, inflationism, and derivativism. Eliminativism consists in denying the existence of the putative intentional state (or denying that it is an intentional state). Inflationism consists in claiming that the state in question is a phenomenal intentional state. In the case of thought, this strategy often involves arguing for rich cognitive phenomenology (see section 5). Derivativism agrees that the problematic state is not a phenomenal intentional state, but maintains that it nonetheless derives its content in part from phenomenal intentional states, so it is at least partly grounded in such states. We will now discuss these strategies in more detail in relation to the four problematic kinds of states.

6.1 Thoughts

Thoughts are occurrent conceptual states, the kinds of states we go through when we think, reflect, or muse over something, including occurrent beliefs and occurrent desires. Thoughts, especially thoughts about abstract ideas such as democracy and the square root function, might seem to lack phenomenal properties. Even if thoughts have phenomenal properties, it does not seem that these phenomenal properties are rich or determinate enough to fully account for their intentional properties. These phenomenal properties might seem to be limited to verbal and visual imagery, for example.

Inflationism is the most widely endorsed strategy for dealing with occurrent thoughts, at least in cases of thoughts that do not seem to have wide contents (see 6.3 below for the latter). Strawson (1994, 2008), Siewert (1998), Horgan & Tienson (2002), Horgan, Tienson & Graham (2004), and Pitt (2009) all hold that occurrent thought has a phenomenology that is rich and determinate enough to determine its intentional contents. Horgan & Tienson (2002), Horgan, Tienson & Graham (2004), and Pitt (2009) also argue that the difference between beliefs, desires, and other kinds of attitudes is phenomenally constituted. The case for this approach rests on the arguments for cognitive phenomenology we discuss above.

Loar (2003a,b), Bourget (2010, 2015), and Mendelovici (2010) maintain that thoughts have a fairly impoverished phenomenology that cannot fully constitute all the contents we might want to attribute to them. Loar (2003a,b) endorses a derived content strategy on which much of thought’s content is determined by the “lateral connections” between thoughts and other mental states. The network of interconnected states eventually derives its content from phenomenal intentional states. Bourget (2010) adopts a derived content strategy on which thoughts derive their contents from phenomenal intentional states through a variety of derivation mechanisms.

Mendelovici (2010) has a largely eliminativist take on the intentionality of thought. Like Pitt (2004, 2009), she holds that all intentional states are phenomenal intentional states; unlike Pitt, she maintains that the phenomenology of thought is too impoverished to capture all the contents we might pre-theoretically want to attribute to thoughts. However, she recognizes the existence of derived representational contents, which answer to the rich contents we tend to attribute to thoughts. Derived representational states are not strictly speaking intentional states, but they fill the role that intentional states with rich contents have been thought to play.

6.2 Standing propositional attitudes

Standing propositional attitudes are states one is in independently of what one is thinking about or experiencing at the time (independently of one’s occurrent states). For example, five minutes ago you had the standing belief that monkeys like bananas even though you weren’t occurrently thinking that content. Standing propositional attitudes do not seem to have phenomenal properties, and so, it seems their intentionality is not phenomenal intentionality.

As far as we can tell, no one has applied the inflationist strategy to standing propositional attitudes: no one claims that they are phenomenal intentional states.

Strawson (2008) and Mendelovici (2010) adopt the eliminativist strategy as part of their defenses of PIT: they deny that standing beliefs and other standing propositional attitudes are intentional states. As Strawson puts it, “To have a belief is not to be in any contentful mental state” (p. 271). Rather, it is to be disposed to be in such a state. Horgan & Tienson (2002) are not eliminativists about the intentionality of standing states, but they do not consider them part of the scope of their version of PIT.

Searle (1990, 1991, 1992), Bourget (2010), and Kriegel (2011a,b) favor derivativism about standing states. Searle holds that non-phenomenal intentional states have their intentionality in virtue of subjects’ dispositions to have conscious states. This account applies most naturally to standing propositional attitudes. Bourget (2010) holds a similar but more nuanced view according to which standing propositional attitudes derive their contents from connections to occurrent thoughts, which themselves either are phenomenal intentional states or derive their contents from distinct phenomenal intentional states (see the previous section on the derivativist strategy for thoughts).

The simple derived content approach defended by Searle and Bourget is open to well-known objections. One of these objections, discussed by Peacocke (1998), is that a state that causes occurrent thoughts to the effect that P is not a belief that P unless it is accompanied by the right behavior. Imagine someone who claims not to be sexist and tends to form occurrent non-sexist thoughts but who behaves in demonstrably sexist ways. Such an individual is naturally said to have unconscious sexist beliefs.

Kriegel’s (2011a,b) account aims to explain standing states and unconscious occurrent states in a unified way. On his account, which he calls interpretivism, a non-phenomenal state s has a certain derived intentional content C just in case an ideal interpreter is disposed to ascribe C to s. An ideal interpreter is a being that is perfectly rational and knows all the phenomenal and non-phenomenal (but not derivatively intentional) facts about the world. The ideal interpreter’s intentional states are all phenomenal intentional states. Kriegel’s interpretivism might seem to involve a circular explanation: how can interpretations explain intentionality if they are themselves intentional states? Circularity is avoided by restricting the explanation to non-phenomenal intentional states. Since the ideal interpreter’s interpretations are phenomenal states, they are not within the scope of the explanation.

The disagreement between eliminativism and derivativism about standing states might be partly terminological. All of the above-mentioned theorists agree that standing states are a matter of a certain kind of disposition to have phenomenal states. What they disagree on is whether the potentially conscious or dispositional states count as intentional states.

6.3 Wide intentional states

Wide (or broad) intentional states are intentional states that in some way depend on relations to items in the external environment; they are the states for which externalism is true (see section 4.3). Prime candidates for wide intentional states are thoughts about natural kinds (e.g., H2O) and thoughts about individuals (e.g., Bill Gates). Arguably, individuals who are phenomenally alike and have all the same phenomenal intentional states can nonetheless differ in their wide intentional states. So, it seems that wide intentional states are not phenomenal intentional states.

A Twin Earth case helps illustrate the options available in the case of broad intentional states (see Putnam 1975). Consider two individuals, Alice and Twin Alice. Alice lives on Earth, while Twin Alice lives on a copy of Earth located far away from us in this world. Let us suppose that Alice and Twin Alice are phenomenal duplicates: they lead phenomenally identical existences.

Alice and Twin Alice each has a brother called “Bob.” When Alice thinks a thought that she would express by making the sounds “Bob is happy,” it seems that her thought is true at just the worlds where Bob is happy. By contrast, it seems that the thought that Twin Alice expresses with “Bob is happy” in her idiolect is one that is true at just the worlds where Twin Bob is happy. So it looks like the Alices’ thoughts have different truth conditions. This suggests that the Alices’ thoughts have different contents. Alice’s thought represents that Bob is happy, while Twin Alice’s thought represents that Twin Bob is happy. The Alices’ “Bob”-thoughts are paradigmatic examples of putatively broad intentional states.

Few advocates of PIT seem to endorse an inflationist strategy for broad intentional states. Even advocates of PIT who take consciousness to be relational in character seem to agree that what a subject gets related to in consciousness depends solely on her intrinsic properties (Pautz 2010). However, Campbell (2002) holds that perceptual experience is broad and intentional, and his view might be counted as a type of phenomenal intentionality theory.

Siewert (1998), Kriegel (2007), and Farkas (2008a) adopt an eliminativist strategy with respect to broad intentional states. Their views are the same in broad outline. On their views, the two Alices’ thoughts have the same content, and that content is narrow. We can account for the fact that the two Alices’ thoughts are made true by different Bobs by adding contextual parameters to their common content: their common content is not a function from possible worlds to truth values but a function from possible worlds and relevant elements of context to truth values. The introduction of contexts enables us to account for the fact that the Alices’ thoughts are true at different worlds. For example, one (oversimplified) view along these lines could state that the common content of the two Alices’ thoughts can be modeled as a function from worlds W and contexts of use C that returns true just in case the person that bears the name “Bob” in C is happy in W. Given that different contexts are relevant to Alice and Twin Alice, different worlds can satisfy the common thought they express as “Bob is happy”. If this is the right way to think about content, the Bobs’ case and other cases motivating broad content do not force us to recognize broad contents.

Pitt (1999, 2011) also endorses an eliminativist strategy, arguing against externalist intuitions. Mendelovici (2010) also endorses eliminativism but claims that she can capture many externalist intuitions through the notion of derived mental representation (see the previous section).

Derivativist strategies have also been applied to broad thought contents (Loar 2003a,b, Horgan and Tienson 2002, Horgan, Tienson & Graham 2004, Bourget 2010, Chalmers 2010). The idea here is that broad intentional states have two contents: a phenomenally constituted narrow content, and a broad content that is determined by its narrow content together with relevant factors in the environment. So Alice’s thought has two contents: one narrow and one broad. The broad content of her thought is true at just the worlds where Bob is happy. The narrow content is true at the worlds where a person bearing certain Bob-like characteristics is happy. The relevant Bob-like characteristics might, for example, centrally involve being called “Bob” by people of a certain community (assuming the community can in turn be specified in terms of narrow contents).

Of course, other accounts of the narrow content of Alice’s thought are possible; proponents of the derived approach to broad intentional states are largely neutral regarding the specific form of narrow content. The options available to proponents of PIT are the same as for theories of narrow content in general. For instance, this approach to PIT for broad thoughts can draw on all the resources of two-dimensional theories of narrow content (see Chalmers 2002a and the entries on two-dimensional semantics and narrow mental content).

Pautz (2013) offers a related derivativist approach that he dubs phenomenal functionalism. On this view, facts about (sensory) phenomenal states and their internal causal roles fix the facts about what is rational for an agent to believe. These facts about rationality in turn fix the narrow contents of an individual’s beliefs. Wide contents are fixed by causal relations between beliefs and the environment.

6.4 Unconscious occurrent states

Cognitive science posits numerous forms of occurrent unconscious representation, e.g., dorsal stream states and internal representations of syntactic structures. It seems that such states have intentional properties but lack phenomenal properties, so their intentionality cannot be phenomenal intentionality.

Some supporters of PIT adopt an eliminativist strategy towards unconscious states. Searle (1990, 1991, 1992) argues, roughly, for the claim that only conscious or potentially conscious states exhibit intentionality. Since most unconscious states posited by cognitive science are not potentially conscious, they are not intentional. Searle presents this view of unconscious states as being in conflict with cognitive science. In contrast, Graham, Horgan, and Tienson (2007) and Mendelovici (2010) highlight the agreement between the assumptions of cognitive science and eliminativism about unconscious states: everyone agrees that unconscious states play functional roles, bear tracking relations to things in the environment, and have no phenomenal properties. Everyone also agrees that it can be fruitful to treat unconscious states as if they represented certain contents. The main disagreement is over whether unconscious states really do qualify as intentional.

Bourget (2010, 2015) and Pitt (2009, Other Internet Resources) suggest that an inflationist strategy may be acceptable in the case of at least some unconscious occurrent states. On their views, we can have phenomenal states that we are not aware of. Unconscious occurrent states could be such states.

A derived content strategy is also an option in the case of some unconscious occurrent states. Bourget (2010) argues for this strategy on the basis that the low-level systems that allegedly support unconscious occurrent intentional states don’t seem intentional when they are taken out of the organisms to which they belong. Kriegel’s interpretivism (2011a,b) is also meant to apply to unconscious occurrent states (see section 6.2).

Bibliography


  • Barsalou, L.W., 1999, “Perceptual symbol systems”, Behavioral and Brain Sciences, 22(4): 577–660.
  • Bayne, Tim & Michelle Montague (eds.), 2011, Cognitive Phenomenology, Oxford: Oxford University Press.
  • Block, N., 1986, “Advertisement for a semantics for psychology”, Midwest Studies in Philosophy, 10(1): 615–78.
  • Bourget, David, 2010, “Consciousness is Underived Intentionality”, Noûs, 44(1): 32–58.
  • –––, forthcoming, “The Role of Consciousness in Grasping and Understanding”, Philosophy and Phenomenological Research, published online 27 August 2015. doi:10.1111/phpr.12208
  • Bourget, David & Angela Mendelovici, 2014, “Tracking Representationalism”, in Andrew Bailey (ed.), Philosophy of Mind: The Key Thinkers, London: Bloomsbury Academic, 209–235.
  • Brentano, F., 1874, Psychology from an Empirical Standpoint, O. Kraus (ed.), English edition translated by A.C. Rancurello, D.B. Terrell, and L.L. McAlister, London: Routledge and Kegan Paul, 1973.
  • Brewer, B., 1999, Perception and Reason, Oxford: Oxford University Press.
  • Burge, T., 1979, “Individualism and the mental”, Midwest Studies in Philosophy, 4(1): 73–122.
  • Byrne, A., 2009, “Experience and content”, Philosophical Quarterly, 59(236): 429–451.
  • –––, 2015, “Skepticism about the Internal World”, in Gideon Rosen, Alex Byrne, Joshua Cohen & Seana Valentine Shiffrin (eds.), The Norton Introduction to Philosophy, New York: W. W. Norton.
  • Campbell, John, 2002, Reference and Consciousness, Oxford: Oxford University Press. doi:10.1093/0199243816.001.0001
  • Carruthers, Peter & Bénédicte Veillet, 2011, “The Case Against Cognitive Phenomenology”, in Bayne & Montague 2011: 35–56. doi:10.1093/acprof:oso/9780199579938.003.0002
  • Chalmers, David J., 1996, The Conscious Mind: In Search of a Fundamental Theory, Oxford: Oxford University Press.
  • –––, 2002a, “The components of content”, in Chalmers 2002b: 608–633
  • ––– (ed.), 2002b, Philosophy of Mind: Classical and Contemporary Readings, Oxford: Oxford University Press.
  • –––, 2004, “The representational character of experience”, in Brian Leiter (ed.), The Future for Philosophy, Oxford: Oxford University Press, pp. 153–181.
  • –––, 2010, The Character of Consciousness, Oxford: Oxford University Press.
  • –––, 2012, Constructing the World, Oxford: Oxford University Press.
  • Chisholm, R., 1957, Perceiving: A Philosophical Study, Ithaca, NY: Cornell University Press.
  • Chudnoff, Elijah, 2013, “Intellectual Gestalts”, in Kriegel 2013b: 174–193. doi:10.1093/acprof:oso/9780199764297.003.0010
  • –––, 2015a, Cognitive Phenomenology, New York: Routledge.
  • –––, 2015b, “Phenomenal Contrast Arguments for Cognitive Phenomenology”, Philosophy and Phenomenological Research, 91(1): 82–104. doi:10.1111/phpr.12177
  • Davidson, D., 2001, Inquiries Into Truth and Interpretation, Oxford: Oxford University Press.
  • Davies, M., 1995, “Consciousness and the varieties of aboutness”, In C. Macdonald (ed.), Philosophy of Psychology: Debates on Psychological Explanation, Oxford: Oxford University Press.
  • Dennett, D.C., 1987, The Intentional Stance, Cambridge, MA: MIT Press.
  • Dretske, F., 1996, “Phenomenal externalism, or if meanings ain’t in the head, where are qualia?” Philosophical Issues, 7: 143–158.
  • Ducasse, C.J., 1942, “Moore’s Refutation of Idealism”, in P.A. Schilpp (ed.), The Philosophy of G.E. Moore, La Salle, IL: Open Court, 232–233.
  • Farkas, Katalin, 2008a, “Phenomenal intentionality without compromise”, The Monist, 91(2): 273–93.
  • –––, 2008b, The Subject’s Point of View, Oxford: Oxford University Press.
  • –––, 2013, “Constructing a World for the Senses”, in Kriegel 2013b: 99–115. doi:10.1093/acprof:oso/9780199764297.003.0006
  • Fodor, J.A. & E. Lepore, 1994, “What is the connection principle?” Philosophy and Phenomenological Research, 54(4): 837–45.
  • Georgalis, N., 2003, “The fiction of phenomenal intentionality”, Consciousness and Emotion, 4(2): 243–256.
  • –––, 2006, The Primacy of the Subjective: Foundations for a Unified Theory of Mind and Language, Cambridge, MA: Bradford Book/MIT Press.
  • Gertler, Brie, 2001, “The relationship between phenomenality and intentionality: Comments on Siewert’s The Significance of Consciousness”, Psyche, 7(17).
  • Goff, P., 2012, “Does Mary know I experience plus rather than quus? A new hard problem”, Philosophical Studies, 160(2): 223–235.
  • Goldman, A., 1993a, “The psychology of folk psychology”, Behavioral and Brain Sciences, 16: 15–28.
  • –––, 1993b, “Consciousness, folk psychology, and cognitive science”, Consciousness and Cognition, 2(4): 364–382.
  • Gonzalez-Castan, Oscar L., 1999, “The connection principle and the classificatory scheme of reality”, Teorema, 18(1): 85–98.
  • Graham, George, Terence E. Horgan, & John L. Tienson, 2007, “Consciousness and intentionality”, in Velmans & Schneider 2007: 468–484.
  • Greenberg, Mark & Gilbert Harman, 2007, “Conceptual role semantics”, in Ernest LePore & Barry Smith (eds.), Oxford Handbook of Philosophy of Language, Oxford: Oxford University Press, 293–322.
  • Harman, Gilbert, 1987, “(Nonsolipsistic) conceptual role semantics”, in Ernest LePore (ed.), New Directions in Semantics, New York: Academic Press, 55–81
  • –––, 1990, “The intrinsic quality of experience”, Philosophical Perspectives, 4: 31–52.
  • Horgan, Terence, 2011a, “From Agentive Phenomenology to Cognitive Phenomenology: A Guide for the Perplexed”, in Bayne & Montague 2011: 57–78. doi:10.1093/acprof:oso/9780199579938.003.0003
  • –––, 2011b, “Phenomenal intentionality and the evidential role of perceptual experience: comments on Jack Lyons, Perception and Basic Beliefs”, Philosophical Studies, 153(3): 447–455.
  • –––, 2013, “Original Intentionality is Phenomenal Intentionality”, The Monist, 96(2): 232–251.
  • Horgan, Terence & George Graham, 2012, “Phenomenal Intentionality and Content Determinacy”, in Richard Schantz (ed.), Prospects for Meaning, Berlin: De Gruyter. 321–344.
  • Horgan, Terence E. & John L. Tienson, 2002, “The intentionality of phenomenology and the phenomenology of intentionality”, in Chalmers 2002b: 520–533.
  • Horgan, Terence E., John L. Tienson, & George Graham, 2003, “The phenomenology of first-person agency”, in Sven Walter & Heinz-Dieter Heckmann (eds.), Physicalism and Mental Causation, Imprint Academic, 323–341.
  • –––, 2004, “Phenomenal intentionality and the brain in a vat”, in Richard Schantz (ed.), The Externalist Challenge, Berlin: Walter De Gruyter, 297–318.
  • –––, 2006, “Internal-world skepticism and mental self-presentation”, in Uriah Kriegel & Kenneth Williford (eds.), Self-Representational Approaches to Consciousness, Cambridge, MA: MIT Press, 41–62.
  • Hume, David, 1739/2000, A Treatise of Human Nature, Oxford: Oxford University Press.
  • Jackendoff, Ray, 1987, Consciousness and the Computational Mind, Cambridge, MA: MIT Press.
  • Jackson, Frank, 1982, “Epiphenomenal qualia”, Philosophical Quarterly, 32(April): 127–136.
  • –––, 1998, From Metaphysics to Ethics, Oxford: Oxford University Press.
  • Kaplan, David, 1977, “Demonstratives”, in Joseph Almog, John Perry & Howard Wettstein (eds.), Themes From Kaplan, Oxford: Oxford University Press, 481–563.
  • Kim, Jaegwon, 1998, Current Issues in Philosophy of Mind, New York: Cambridge University Press.
  • Kriegel, Uriah, 2007, “Intentional inexistence and phenomenal intentionality”, Philosophical Perspectives, 21(1): 307–340.
  • –––, 2011a, The Sources of Intentionality, Oxford: Oxford University Press.
  • –––, 2011b, “Cognitive Phenomenology as the Basis of Unconscious Content”, in Bayne & Montague 2011: 79–102. doi:10.1093/acprof:oso/9780199579938.003.0004
  • –––, 2012, “Towards a New Feeling Theory of Emotion”, European Journal of Philosophy, 20(3): 420–442.
  • –––, 2013a, “Phenomenal intentionality past and present: introductory”, Phenomenology and the Cognitive Sciences, 12(3): 437–444.
  • –––, (ed.), 2013b, Phenomenal Intentionality, Oxford: Oxford University Press.
  • Kripke, Saul A., 1980, Naming and Necessity, Cambridge, MA: Harvard University Press.
  • –––, 1982, Wittgenstein on Rules and Private Language, Cambridge, MA: Harvard University Press.
  • Levine, Joseph, 1983, “Materialism and qualia: The explanatory gap”, Pacific Philosophical Quarterly, 64(October): 354–61.
  • –––, 2003, “Experience and representation”, in Smith & Jokic 2003: 57–76.
  • –––, 2008, “Secondary Qualities: Where Consciousness and Intentionality Meet”, The Monist, 91(2): 215–236.
  • –––, 2011, “On the Phenomenology of Thought”, in Bayne & Montague 2011: 103–120. doi:10.1093/acprof:oso/9780199579938.003.0005
  • Lewis, David K., 1983, Philosophical Papers, Oxford: Oxford University Press.
  • Loar, Brian, 1987, “Subjective intentionality”, Philosophical Topics, 15(1): 89–124.
  • –––, 1988, “Social content and psychological content”, in Robert H. Grimm & D. D. Merrill (eds.), Contents of Thought, Tucson: University of Arizona Press, 99–110.
  • –––, 1995, “Reference from the first person perspective”, Philosophical Issues, 6: 53–72.
  • –––, 2003a, “Phenomenal intentionality as the basis of mental content”, in Martin Hahn & B. Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, MA: MIT Press, 229–258.
  • –––, 2003b, “Transparent experience and the availability of qualia”, in Smith & Jokic 2003: 77–96.
  • Lormand, Eric, 1996, “Nonphenomenal consciousness”, Noûs, 30(2): 242–61.
  • Ludwig, Kirk A., 1996, “Explaining why things look the way they do”, in Kathleen Akins (ed.), Perception, Oxford: Oxford University Press, pp. 18–60.
  • –––, 2002, “Phenomenal consciousness and intentionality: Comments on The Significance of Consciousness”, Psyche, 8(8).
  • Lycan, William G., 2001, “The case for phenomenal externalism”, Philosophical Perspectives, 15(s15): 17–35.
  • McGinn, Colin, 1988, “Consciousness and content”, Proceedings of the British Academy, 74: 219–39.
  • –––, 1991, The Problem of Consciousness: Essays Toward a Resolution, Maldon, MA: Blackwell.
  • Mendelovici, Angela, 2010, Mental Representation and Closely Conflated Topics, Ph.D. dissertation, Princeton University.
  • –––, 2013, “Intentionalism About Moods”, Thought: A Journal of Philosophy, 2(2): 126–136. doi:10.1002/tht3.81
  • Mendelovici, Angela & David Bourget, 2014, “Naturalizing Intentionality: Tracking Theories Versus Phenomenal Intentionality Theories”, Philosophy Compass 9(5): 325–337. doi:10.1111/phc3.12123
  • Mendola, J., 2008, Anti-Externalism, Oxford: Oxford University Press.
  • Miller, G.H., 1999, “How phenomenological content determines the intentional object”, Husserl Studies, 16(1): 1–24.
  • Mohanty, J., 1989, Transcendental Phenomenology: An Analytic Account, Maldon, MA: Basil Blackwell.
  • Montague, M., 2016, The Given: Experience and its Content, Oxford: Oxford University Press.
  • Nelkin, Dana K., 2001, “Phenomenal consciousness and intentionality”, Psyche, 7(13).
  • Pautz, Adam, 2010, “A Simple View of Consciousness”, in Robert C. Koons and George Bealer (eds.), The Waning of Materialism, Oxford: Oxford University Press, 25–66. doi:10.1093/acprof:oso/9780199556182.003.0002
  • –––, 2013, “Does Phenomenology Ground Mental Content?”, in Kriegel 2013b: 194–234 doi:10.1093/acprof:oso/9780199764297.003.0011
  • –––, 2014, “The Real Trouble with Armchair Arguments Against Phenomenal Externalism”, in Sprevak and Kallestrup (eds.), New Waves in Philosophy of Mind, London: Palgrave Macmillan.
  • Peacocke, Christopher, 1998, “Conscious Attitudes, Attention, and Self-Knowledge”, in Crispin Wright, Barry C. Smith, and Cynthia Macdonald (eds.), Knowing Our Own Minds, Oxford: Oxford University Press, 63–98. doi:10.1093/0199241406.003.0004
  • Pitt, David, 1999, “In defense of definitions”, Philosophical Psychology, 12(2): 139–156.
  • –––, 2004, “The phenomenology of cognition, or, what is it like to think that P?” Philosophy and Phenomenological Research, 69(1): 1–36.
  • –––, 2009, “Intentional psychologism”, Philosophical Studies, 146(1): 117–138.
  • –––, 2011, “Introspection, Phenomenality, and the Availability of Intentional Content”, in Bayne & Montague 2011: 141–173. doi:10.1093/acprof:oso/9780199579938.003.0007
  • Putnam, H., 1975, “The meaning of ‘meaning’”, Minnesota Studies in the Philosophy of Science, 7: 131–193.
  • Quine, W.V.O., 1960, Word and Object, Cambridge, MA: MIT Press.
  • Richards, T. Brad & Andrew R. Bailey, 2014, “Horgan and Tienson on phenomenology and intentionality”, Philosophical Studies, 167(2): 313–326.
  • Robinson, William S., 2011, “A frugal view of cognitive phenomenology”, in Bayne and Montague 2011: 197–214. doi:10.1093/acprof:oso/9780199579938.003.0009
  • Russell, Bertrand, 1910, “Knowledge by acquaintance and knowledge by description”, Proceedings of the Aristotelian Society, 11(5): 108–128.
  • Schwitzgebel, E., 2002, “A phenomenal, dispositional account of belief”, Noûs, 36(2): 249–75.
  • Seager, William E. & David Bourget, 2007, “Representationalism about consciousness”, in Velmans & Schneider 2007: 261–276.
  • Searle, J.R., 1983, Intentionality: An Essay in the Philosophy of Mind, Cambridge: Cambridge University Press.
  • –––, 1987, “Indeterminacy, empiricism, and the first person”, Journal of Philosophy, 84(3): 123–146.
  • –––, 1990, “Consciousness, explanatory inversion and cognitive science”, Behavioral and Brain Sciences, 13: 585–642.
  • –––, 1991, “Consciousness, unconsciousness and intentionality”, Philosophical Issues, 1(1): 45–66.
  • –––, 1992, The Rediscovery of the Mind, Cambridge, MA: MIT Press.
  • –––, 1994, “The connection principle and the ontology of the unconscious: A reply to Fodor and Lepore”, Philosophy and Phenomenological Research, 54(4): 847–55.
  • Siegel, Susanna, 2010, The Contents of Visual Experience, Oxford: Oxford University Press.
  • Siewert, Charles, 1998, The Significance of Consciousness, Princeton: Princeton University Press.
  • Smith, David Woodruff, 1986, “The ins and outs of perception”, Philosophical Studies, 49(March): 187–211.
  • Smith, Quentin & Aleksandar Jokic (eds.), 2003, Consciousness: New Philosophical Perspectives, Oxford: Oxford University Press.
  • Smithies, Declan, 2012, “The mental lives of zombies”, Philosophical Perspectives, 26(1): 343–372.
  • –––, 2013a, “The Significance of Cognitive Phenomenology”, Philosophy Compass, 8(8): 731–743.
  • –––, 2013b, “The Nature of Cognitive Phenomenology”, Philosophy Compass, 8(8): 744–754.
  • –––, 2014, “The Phenomenal Basis of Epistemic Justification”, in Jesper Kallestrup & Mark Sprevak (eds.), New Waves in Philosophy of Mind, Basingstoke: Palgrave Macmillan, 98–124.
  • Strawson, Galen, 1994, Mental Reality, Cambridge, MA: MIT Press.
  • –––, 2008, “Real intentionality 3: Why intentionality entails consciousness”, in Galen Strawson, Real Materialism and Other Essays, Oxford: Oxford University Press, 53–74.
  • –––, 2011a, “Cognitive phenomenology: real life”, in Bayne & Montague 2011: 285–325. doi:10.1093/acprof:oso/9780199579938.003.0013
  • –––, 2011b, “Real naturalism”, Proceedings and Addresses of the American Philosophical Association, 86(2).
  • Travis, Charles S., 2004, “The silence of the senses”, Mind, 113(449): 57–94.
  • Trogdon, Kelly, 2013, “An Introduction to Grounding”, in Miguel Hoeltje, Benjamin Schnieder & Alex Steinberg (eds.), Varieties of Dependence: Ontological Dependence, Grounding, Supervenience, Response-Dependence (Basic Philosophical Concepts), München: Philosophia Verlag, 97–122.
  • Tye, Michael, 2000, Consciousness, Color, and Content, Cambridge, MA: MIT Press.
  • –––, forthcoming, “Phenomenal externalism, Lolita, and the planet Xenon”, in Terence E. Horgan & David Sosa (eds.), Collection on the Philosophy of Jaegwon Kim, Cambridge, MA: MIT Press.
  • Tye, Michael & Briggs Wright, 2011, “Is there a phenomenology of thought?” in Bayne & Montague 2011: 326–344. doi:10.1093/acprof:oso/9780199579938.003.0014
  • Van Baaren, Robbert, 1999, “A critical evaluation of Searle’s connection principle”, Teorema, 18(1): 73–83.
  • van Gulick, Robert, 1995, “How should we understand the relation between intentionality and phenomenal consciousness”, Philosophical Perspectives, 9: 271–289. doi:10.2307/2214222
  • Velmans, Max & Susan Schneider (eds.), 2007, The Blackwell Companion to Consciousness, Maldon, MA: Blackwell.
  • Wilson, Robert A., 2003, “Intentionality and phenomenology”, Pacific Philosophical Quarterly, 84(4): 413–431.

Many thanks to David Chalmers, Daniel Stoljar, and SEP’s anonymous referees for extensive comments and discussion. Thanks also to Kati Farkas, Adam Pautz, and David Pitt for very helpful comments on previous drafts of this entry.

Copyright © 2017 by
David Bourget
Angela Mendelovici

This is a file in the archives of the Stanford Encyclopedia of Philosophy.
Please note that some links may no longer be functional.