Virtually every aspect of self-deception, including its definition and paradigmatic cases, is a matter of controversy among philosophers. Minimally, self-deception involves a person who seems to acquire and maintain some false belief in the teeth of evidence to the contrary as a consequence of some motivation, and who may display behavior suggesting some awareness of the truth. Beyond this, philosophers divide over whether self-deception is intentional; whether it involves belief or some other sub- or non-doxastic attitude; whether self-deceivers are morally responsible for their self-deception; whether self-deception is morally problematic (and, if so, in what ways and under what circumstances); whether it is beneficial or harmful; whether and in what sense collectives can be self-deceived, and how this might affect individuals within such collectives; and whether our penchant for self-deception was selected for or is merely an accidental byproduct of our evolutionary history, and, if it was selected, why.
The discussion of self-deception and its associated puzzles sheds light on the ways motivation affects belief acquisition and retention and other belief-like cognitive attitudes; it also prompts us to scrutinize the notion of belief and the limits of such folk-psychological concepts in adequately explaining phenomena of this sort. And yet insofar as self-deception represents an obstacle to self-knowledge, both individually and collectively, it is more than just another interesting philosophical puzzle. It is a problem of existential concern, since it suggests that there is a distinct possibility that we live with distorted views of ourselves, others and the world that may make us strangers to ourselves and blind to the nature of our significant moral engagements.
- 1. Definitional Issues
- 2. Traditional Intentional Approaches
- 3. Revisionist Approaches
- 4. Twisted Self-Deception
- 5. Morality and Self-Deception
- 6. Origin of Self-Deception: Adaptation or Spandrel
- 7. Collective Self-Deception
- Academic Tools
- Other Internet Resources
- Related Entries
What is self-deception? Traditionally, self-deception has been modeled on interpersonal deception, where A intentionally gets B to believe some proposition p, all the while knowing or believing truly that ~p. Such deception is intentional and requires the deceiver to know or believe that ~p and the deceived to believe that p. One reason for thinking self-deception is analogous to interpersonal deception of this sort is that it helps us to distinguish self-deception from mere error, since the acquisition and maintenance of the false belief is intentional not accidental. If self-deception is properly modeled on such interpersonal deception, self-deceivers intentionally get themselves to believe that p, all the while knowing or believing truly that ~p. On this traditional model, then, self-deceivers apparently must (1) hold contradictory beliefs, and (2) intentionally get themselves to hold a belief they know or believe truly to be false.
The traditional model of self-deception, however, has been thought to raise two paradoxes: One concerns the self-deceiver’s state of mind—the so-called ‘static’ paradox. How can a person simultaneously hold contradictory beliefs? The other concerns the process or dynamics of self-deception—the so-called ‘dynamic’ or ‘strategic’ paradox. How can a person intend to deceive herself without rendering her intentions ineffective? (Mele 1987a; 2001)
The requirement that self-deceivers hold contradictory beliefs raises the ‘static’ paradox, since it seems to pose an impossible state of mind, namely, consciously believing that p and ~p at the same time. As deceiver, she must believe that ~p, and, as deceived, she must believe that p. Accordingly, the self-deceiver consciously believes that p and ~p. But if believing both a proposition and its negation in full awareness is an impossible state of mind to be in, then self-deception as it has traditionally been understood seems to be impossible as well.
The requirement that the self-deceiver intentionally get herself to hold a belief she knows to be false raises the ‘dynamic’ or ‘strategic’ paradox, since it seems to involve the self-deceiver in an impossible project, namely, both deploying and being duped by some deceitful strategy. As deceiver, she must be aware she’s deploying a deceitful strategy; but, as the deceived, she must be unaware of this strategy for it to be effective. And yet it is difficult to see how the self-deceiver could fail to be aware of her intention to deceive. A strategy known to be deceitful, however, seems bound to fail. How could I be taken in by your efforts to get me to believe something false, if I know what you’re up to? But if it’s impossible to be taken in by a strategy one knows is deceitful, then, again, self-deception as it has traditionally been understood seems to be impossible as well.
These paradoxes have led a minority of philosophers to be skeptical that self-deception is conceptually possible or even coherent (Paluch 1967; Haight 1980; Kipp 1980). Borge (2003) contends that accounts of self-deception inevitably give up central elements of our folk-psychological notions of “self” or “deception” to avoid paradox, leaving us to wonder whether this framework itself is what gets in the way of explaining the phenomenon. Such skepticism toward the concept may seem warranted, given the obvious paradoxes involved. Most philosophers, however, have sought some resolution to these paradoxes, instead of giving up on the notion itself, not only because empirical evidence suggests that self-deception is both possible and pervasive (Sahdra & Thagard 2003), but also because the concept does seem to pick out a distinct kind of motivated irrationality. Philosophical accounts of self-deception can be organized into two main groups: those that maintain that the paradigmatic cases of self-deception are intentional, and those that deny this. Call these approaches intentionalist and revisionist respectively. Intentionalists find the model of intentional interpersonal deception apt, since it helps to explain the selectivity of self-deception and the apparent responsibility of the self-deceiver, as well as providing a clear way of distinguishing self-deception from other sorts of motivated belief such as wishful thinking. Revisionists are impressed by the static and dynamic paradoxes allegedly involved in modeling self-deception on intentional interpersonal deception and, in their view, the equally puzzling psychological models used by intentionalists to avoid these paradoxes, such as semi-autonomous subsystems, unconscious beliefs and intentions, and the like. To avoid paradox and psychological exotica, revisionist approaches reformulate the intention requirement, the belief requirement, or both.
The chief problem facing intentional models of self-deception is the dynamic paradox, namely, that it seems impossible to form an intention to get oneself to believe what one currently disbelieves or believes is false. To carry out an intention to deceive oneself, one must know what one is doing; to succeed, one must be ignorant of this same fact. Intentionalists agree on the proposition that self-deception is intentional, but divide over whether it requires the holding of contradictory beliefs, and thus over the specific content of the alleged intention involved in self-deception. Insofar as even the bare intention to acquire the belief that p for reasons not having to do with one’s evidence for p seems unlikely to succeed if directly known, most intentionalists introduce some sort of temporal or psychological partition to insulate self-deceivers from their deceptive stratagems. When self-deceivers are not consciously aware of their beliefs to the contrary or their deceptive intentions, no paradox seems to be involved in deceiving oneself. Many approaches utilize some combination of psychological and temporal division (e.g., Bermúdez 2000).
Some intentionalists argue that self-deception is a complex process that is often extended over time and as such a self-deceiver can consciously set out to deceive herself into believing that p, knowing or believing that ~p, and along the way lose her belief that ~p, either forgetting her original deceptive intention entirely, or regarding it as having, albeit accidentally, brought about the true belief she would have arrived at anyway (Sorensen 1985; Bermúdez 2000). So, for instance, an official involved in some illegal behavior might destroy any records of this behavior and create evidence that would cover it up (diary entries, emails and the like), knowing that she will likely forget having done these things over the next few months. When her activities are investigated a year later, she has forgotten her tampering efforts and based upon her falsified evidence comes to believe falsely that she was not involved in the illegal activities of which she is accused. Here the self-deceiver need never simultaneously hold contradictory beliefs even though she intends to bring it about that she believes that p, which she regards as false at the outset of the process of deceiving herself and true at its completion. The self-deceiver need not even forget her original intention to deceive, so an unbeliever who sets out to get herself to believe in God (since she thinks such a belief is prudent, having read Pascal) might well remember such an intention at the end of the process and deem that by God’s grace even this misguided path led her to the truth. It is crucial to see here that what enables the intention to succeed in such cases is the operation of what Johnston (1988) terms ‘autonomous means’ (e.g., the normal degradation of memory, the tendency to believe what one practices, etc.) not the continued awareness of the intention. Some non-intentionalists take this to be a hint that the process by which self-deception is accomplished is subintentional (Johnston 1988). 
In any case, while such temporal partitioning accounts apparently avoid the static and dynamic paradoxes, many find that such cases fail to capture the distinctive opacity, indirection and tension associated with garden-variety cases of self-deception (Levy 2004).
Another strategy employed by intentionalists is the division of the self into psychological parts that play the role of the deceiver and deceived respectively. These strategies range from positing strong division in the self, where the deceiving part is a relatively autonomous subagency capable of belief, desire and intention (Rorty 1988); to more moderate division, where the deceiving part still constitutes a separate center of agency (Pears 1984, 1986, 1991); to the relatively modest division of Davidson, where there need only be a boundary between conflicting attitudes (1982, 1985). Such divisions are prompted in large part by the acceptance of the contradictory belief requirement. It isn’t simply that self-deceivers hold contradictory beliefs, which, though strange, isn’t impossible. One can believe that p and believe that ~p without believing that p & ~p, which would be impossible. The problem such theorists face stems from the appearance that the belief that ~p motivates and thus forms a part of the intention to bring it about that one acquire and maintain the false belief that p (Davidson 1985). So, for example, the Nazi official’s recognition that his actions implicate him in serious evil motivates him to implement a strategy to deceive himself into believing he is not so involved; he can’t intend to bring it about that he holds such a false belief if he doesn’t recognize it is false, and he wouldn’t want to bring such a belief about if he didn’t recognize the evidence to the contrary. So long as this is the case, the deceptive subsystem, whether it constitutes a separate center of agency or something less robust, must be hidden from the conscious self being deceived if the self-deceptive intention is to succeed. While these psychological partitioning approaches seem to resolve the static and dynamic puzzles, they do so by introducing a picture of the mind that raises many puzzles of its own.
On this point, there appears to be consensus even among intentionalists that self-deception can and should be accounted for without invoking divisions not already used to explain non-self-deceptive behavior, what Talbott (1995) calls ‘innocent’ divisions.
Some intentionalists reject the requirement that self-deceivers hold contradictory beliefs (Talbott 1995; Bermúdez 2000). According to such theorists, the only thing necessary for self-deception is the intention to bring it about that one believe that p, where lacking such an intention one would not have acquired that belief. The self-deceiver thus need not believe that ~p. She might have no views at all regarding p, possessing no evidence either for or against p; or she might believe that p is merely possible, possessing evidence for or against p too weak to warrant belief that p or ~p (Bermúdez 2000). Self-deceivers in this minimal sense intentionally acquire the belief that p, despite recognizing at the outset that they do not possess enough evidence to warrant this belief, by selectively gathering evidence supporting p or otherwise manipulating the belief-formation process to favor belief that p. Even on this minimal account, such intentions will often be unconscious, since a strategy to acquire a belief in violation of one’s normal evidential standards seems unlikely to succeed if one is aware of it.
A number of philosophers have moved away from modeling self-deception directly on intentional interpersonal deception, opting instead to revise either the intention or the belief requirement traditional intentionalist models assume. Those revising the intention requirement typically treat self-deception as a species of motivationally biased belief, thus avoiding the problems involved with intentionally deceiving oneself. Call these non-intentionalist or deflationary approaches. Those revising the belief requirement either posit some other non-doxastic or quasi-doxastic attitude toward the proposition involved (‘hopes’, ‘suspicions’, ‘doubts’, ‘anxieties’ Edwards 2013; ‘besires’ Egan 2009; ‘pretense’ Gendler 2007; ‘imagination’ Lazar 1999), or alter the content of the proposition believed in a way that avoids the paradoxes associated with the dual belief requirement embedded in traditional intentionalist models of self-deception (e.g., Holton 2001; Funkhouser 2005; Fernández 2013). Call these revision of belief approaches. Deflationary approaches focus on the process of self-deception, while the revision of belief approaches focus on the product. A revision of either of these aspects, of course, has ramifications for the other. For example, if self-deception doesn’t involve belief, but some other non-doxastic attitude (product), then one may well be able to intentionally enter that state without paradox (process). This section considers non-intentional or deflationary approaches and the worries such approaches raise (3.1), and revision of belief approaches (3.2).
These non-intentionalists allow that phenomena answering to the various intentionalist models available may be possible, but everyday or ‘garden-variety’ self-deception can be explained without adverting to subagents, or unconscious beliefs and intentions, which, even if they resolve the static and dynamic puzzles of self-deception, raise many puzzles of their own. If such non-exotic explanations are available, intentionalist explanations seem unwarranted.
The main paradoxes of self-deception seem to arise from modeling self-deception too closely on intentional interpersonal deception. Accordingly, non-intentionalists suggest the intentional model be jettisoned in favor of one that takes ‘to be deceived’ to be nothing more than to believe falsely or be mistaken in believing (Johnston 1988; Mele 2001). For instance, Sam mishears that it will be a sunny day and relays this misinformation to Joan with the result that she believes it will be a sunny day. Joan is deceived in believing it will be sunny and Sam has deceived her, albeit unintentionally. Initially, such a model may not appear promising for self-deception, since simply being mistaken about p or accidentally causing oneself to be mistaken about p doesn’t seem to be self-deception at all but some sort of innocent error—Sam doesn’t seem self-deceived, just deceived. Non-intentionalists, however, argue that in cases of self-deception the false belief is not accidental but motivated by desire (Mele 2001), anxiety (Johnston 1988; Barnes 1997) or some other emotion regarding p or related to p. So, for instance, when Allison believes against the preponderance of evidence available to her that her daughter is not having learning difficulties, the non-intentionalist will explain the various ways she misreads the evidence by pointing to such things as her desire that her daughter not have learning difficulties, her fear that she has such difficulties, or anxiety over this possibility. In such cases, Allison’s self-deceptive belief that her daughter is not having learning difficulties fulfills her desire, quells her fear or reduces her anxiety, and it is this function (not an intention) that explains why her belief formation process is biased. Allison’s false belief is not an innocent mistake, but a consequence of her motivational states.
Some non-intentionalists suppose that self-deceivers recognize at some level that their self-deceptive belief that p is false, contending that self-deception essentially involves an ongoing effort to resist the thought of this unwelcome truth or is driven by anxiety prompted by this recognition (Bach 1981; Johnston 1988). So, in Allison’s case, her belief that her daughter is having learning difficulties along with her desire that it not be the case motivate her to employ means to avoid this thought and to believe the opposite. Others, however, argue the needed motivation can as easily be supplied by uncertainty or ignorance whether p, or suspicion that ~p (Mele 2001; Barnes 1997). Thus, Allison need not hold any opinion regarding her daughter’s having learning difficulties for her false belief that she is not experiencing difficulties to count as self-deception, since it is her regarding evidence in a motivationally biased way in the face of evidence to the contrary, not her recognition of this evidence, that makes her belief self-deceptive. Accordingly, Allison need not intend to deceive herself nor believe at any point that her daughter is in fact having learning difficulties. If we think someone like Allison is self-deceived, then self-deception requires neither contradictory beliefs nor intentions regarding the acquisition or retention of the self-deceptive belief. Such approaches are ‘deflationary’ in the sense that they take self-deception to be explicable without reaching for what Mele calls ‘mental exotica’ (Mele 2001). (For more on this sort of objection see Self-Deception and Tension below).
On such deflationary views of self-deception, one need only hold a false belief that p, possess evidence that ~p, and have some desire or emotion that explains why p is believed and retained. In general, if one possesses evidence that one normally would take to support ~p and yet believes that p instead due to some desire, emotion or other motivation one has related to p, then one is self-deceived.
Deflationary Worries and Modifications
Critics contend that these deflationary accounts do not adequately distinguish self-deception from other sorts of motivated believing such as wishful thinking, and cannot explain the peculiar selectivity associated with self-deception, its characteristic ‘tension’, or the way it involves a failure of self-knowledge.
Self-Deception and Wishful Thinking: What distinguishes wishful thinking from self-deception, according to intentionalists, just is that the latter is intentional while the former is not (e.g., Bermúdez 2000). Non-intentionalists respond that what distinguishes wishful thinking from self-deception is that self-deceivers recognize evidence against their self-deceptive belief whereas wishful thinkers do not (Bach 1981; Johnston 1988), or merely possess, without recognizing, greater counterevidence than wishful thinkers. Scott-Kakures (2002) argues that in “wishful believing the subject’s cognition is hijacked by desire, while in self-deception the subject is a willing participant in directing cognition towards the doxastic embrace of the favored proposition.” For Scott-Kakures, motivation exerts an influence on the self-deceiver’s reflective reasoning and hypothesis testing that the self-deceiver mistakenly believes to be free of such bias. For wishful thinkers, motivation triggers a belief formation process in which the person does not play an active, conscious role (Scott-Kakures 2002; see also Szabados 1973). While the precise relationship between wishful thinking and self-deception is clearly a matter of debate, there are plausible ways of distinguishing the two that do not invoke the intention to deceive.
Self-Deception and Selectivity: Another objection raised by intentionalists is that deflationary accounts cannot explain the selective nature of self-deception, termed the ‘selectivity problem’ by Bermúdez (1997, 2000). Why is it, such intentionalists ask, that we are not rendered biased in favor of the belief that p in many cases where we have a very strong desire that p (or anxiety or some other motivation related to p)? Intentionalists argue that an intention to get oneself to acquire the belief that p offers a relatively straightforward answer to this question. Mele (2001), drawing on empirical research regarding lay hypothesis testing (Trope and Lieberman 1996), argues that selectivity may be explained in terms of the agent’s assessment of the relative costs of erroneously believing that p and ~p. So, for example, Josh would be happier believing falsely that the gourmet chocolate he finds so delicious isn’t produced by exploited farmers than falsely believing that it is, since he desires that it not be so produced. Because Josh considers the cost of erroneously believing his favorite chocolate is tainted by exploitation to be very high (no other chocolate gives him the same pleasure), it takes a great deal more evidence to convince him that his chocolate is so tainted than it does to convince him otherwise. It is the low subjective cost of falsely believing the chocolate is not tainted that facilitates Josh’s self-deception. But we can imagine Josh having the same strong desire that his chocolate not be tainted by exploitation and yet assessing the cost of falsely believing it is not tainted differently. Say, for example, he works for an organization promoting fair trade and non-exploitive labor practices among chocolate producers and believes he has an obligation to accurately represent the labor practices of the producer of his favorite chocolate and would, furthermore, lose credibility if the chocolate he himself consumes is tainted by exploitation.
In these circumstances, Josh is more sensitive to evidence that his favorite chocolate is tainted, despite his desire that it not be, since the subjective cost of being wrong is higher for him than it was before. It is the relative subjective costs of falsely believing p and ~p that explains why desire or other motivation biases belief in some circumstances and not others.
Challenging this solution, Bermúdez (2000) suggests that the selectivity problem may reemerge, since it isn’t clear why we frequently do not become self-deceived in cases where there is a relatively low cost for holding a self-deceptive belief favored by our motivations. Mele (2001), however, points out that intentional strategies have their own ‘selectivity problem’, since it isn’t clear why some intentions to acquire a self-deceptive belief succeed while others do not.
Smith (2014) also addresses a version of the selectivity problem, proposing a way to account for the selectivity and apparent success-aptness of self-deception without resorting to intentions. In his view, deflationary strategies like Mele’s end up making self-deception an accidental byproduct of having particular motivational states and therefore not easily distinguished from other sorts of motivated believing. Smith looks to examples of deception in non-human organisms to find a way to explain the “teleofunctional” nature of self-deception without having to attribute intentions. Smith points out that when a mirror orchid deceives a male scolid wasp by mimicking the appearance and odor of a female scolid wasp, the deception in question can hardly be considered accidental even though the orchid lacks the capacity for intentional behavior. With this sort of deception in mind, Smith proposes an alternative non-intentional model in which a “subpersonal neural mechanism with the proper function of selectively inhibiting the process of producing or extracting information from subdoxastic models” accounts for the self-deceiver’s false belief without requiring an intention to deceive. Moreover, by proposing that true information is encoded in some sub-doxastic state, Smith thinks the selectivity of self-deception may also be accounted for. In this latter sense, Smith might be viewed as both a revision of intention and a revision of belief theorist (more on the latter below).
Self-Deception and Tension: A number of philosophers have complained that deflationary accounts fail to explain a certain ‘tension’ supposed to be present in cases of genuine self-deception (e.g., Audi 1997; Bach 1997; Nelkin 2002; Funkhouser 2005; Fernández 2013). This ‘tension’ involves some sort of psychic or behavioral ‘conflict’. Self-deceivers are said to experience ‘doubts’, ‘suspicions’, and the like regarding p, and to display ambiguous behavior, some pointing toward belief that p and some toward belief that ~p. Since deflationary approaches reject the dual belief requirement, specifically denying that self-deceivers need hold the belief that ~p, it seems difficult for such approaches to explain why self-deceivers would struggle with doubts or display behavior at variance with their false belief that p. Regarding the former, deflationary theorists accept the possibility that self-deceivers may ‘suspect’ or think there is a significant chance that ~p (Mele 2001, 2009, 2010). Clearly, a person who believes that p and suspects that ~p may experience tension; moreover, such attitudes combined with a desire that p help explain certain sorts of avoidance behavior highlighted by critics. For his part, Mele thinks self-deception may often involve tension, but it certainly need not; that is, some self-deception is tension free. While Lynch (2012) does not think tension is necessary, he accepts the idea that it is characteristic of self-deception, and can be accounted for by construing self-deception as involving an unwarranted degree of confidence that p, rather than wholehearted belief that p as Mele does. On Lynch’s version of deflation, self-deceivers will encounter and appreciate evidence that casts doubt on their assumption that p. This appreciation serves to motivate “attempts to deal with it (by, for example, trying to explain it away)” and explains the doubts regarding p critics have identified.
As Lynch (2012) points out, these tensions are not the only, or perhaps the central, ones raised by critics of deflation. For such critics, self-deception involves a deeper behavioral conflict. Specifically, what self-deceivers say about p is at odds with their non-verbal behavior, which justifies the attribution of the belief that ~p (Audi 1997; Patten 2003; Funkhouser 2005; Fernández 2013). For example, Ellen says that she is doing well in her biology class, but systematically avoids looking at the results on her quizzes and tests. She says she doesn’t need to look; she knows she didn’t miss anything. When her teacher tries to catch her after class to discuss her poor grades, she rushes off. Similarly, when she sees an email from her teacher, she ignores it. Ellen’s behavior suggests to these critics that she knows she is not doing well in her biology class, despite her avowals to the contrary. Insofar as deflationary approaches deny people like Ellen know the truth, they fail adequately to explain her self-deception. While some propose these cases as a type of self-deception deflation cannot explain (Fernández 2013), others go further, suggesting these cases show that deflation is not an account of self-deception at all, but of self-delusion (Audi 2007; Funkhouser 2005). In either case, critics think such cases cannot adequately be explained by deflationary accounts.
The significant difference between what deflationary accounts have in view (namely, people who do not believe the unwelcome truth that ~p, having a motivation-driven, unwarranted skepticism toward it), and what deep conflict theorists do (namely, people who know the unwelcome truth that ~p and avoid reflecting on it or encountering evidence for it) prompts us to ask whether these phenomena belong to the same psychological kind, according to Lynch (2012). If they did not, ‘self-deception’ would be rendered ambiguous. As Mele (2010) points out, anyone meeting his first condition, namely, acquiring the false belief that p, could not be self-deceived on deep conflict approaches (Audi 1997, 2007; Gendler 2007). Similarly, anyone meeting Audi’s condition that self-deceivers are merely disposed to avow sincerely that p when they unconsciously believe that ~p could not be self-deceived on deflationary models like Mele’s. So, which approach has correctly identified the meaning of ‘self-deception’? Lynch (2012) argues that ‘deep-conflict’ cases do not obviously resemble cases of interpersonal deception, “making it a mystery why they would be considered species of the same genus.” To be fair, such approaches do reflect the fact that deceivers often know the truth, but it does seem strange to say a person could not be deceived regarding p just because they actually hold this false belief. In view of these sorts of considerations, Lynch (2012) argues that such deep conflict cases are not properly understood as self-deception, but more nearly resemble what Longeway (1990) calls ‘escapism’. Whether such cases constitute a distinct psychological kind or not, or whether they reflect people’s pretheoretical understanding of self-deception, remains unclear, but deflationary approaches do seem to be capable of explaining some of the behavior such theorists insist justifies attributing an unconscious belief that ~p.
Moreover, deep conflict theorists need to explain why we should think one avowing that p does not also believe it, and why the behavior in question cannot be explained by nearby proxies like suspicion that p (Mele 2010).
Self-Deception and Self-Knowledge: A number of theorists have argued that deflationary approaches fail to capture the distinctive failure of self-knowledge involved in cases of self-deception (Holton, 2001; Scott-Kakures 2002; Funkhouser 2005; Fernández 2013). Holton (2001) argues that Mele’s conditions for being self-deceived are not sufficient, because they do not require self-deceivers to hold false beliefs about themselves. It seems possible for a person to acquire a false belief that p as a consequence of treating data relevant to p in a motivationally biased way, when the data available to her provides greater warrant for p, in such a way that she retains accurate self-knowledge. Such a person would readily admit to ignoring certain data, because it would undermine a belief she cherishes. She makes no mistakes about herself, her beliefs or her belief formation process. Such a person, Holton argues, would be willfully ignorant, but not self-deceived. If, however, her strategy was sufficiently opaque to her, she would be apt to deny that she was ignoring relevant evidence, and affirm that her belief was the result of what Scott-Kakures (2002) calls “reflective, critical reasoning.” These erroneous beliefs represent a failure of self-knowledge that seems, according to these critics, essential to self-deception. Scott-Kakures (2002) contends that this sort of error is also what distinguishes self-deception from wishful thinking (see above) and restricts it to those capable of higher-order beliefs. Mele (2010), for his part, thinks adding the following sufficient condition to his account would put to rest these concerns:
S’s acquiring the belief that p is a product of “reflective, critical reasoning,” and S is wrong in regarding that reasoning as properly directed.
Fernández (2013) distinguishes this sort of self-knowledge error, which focuses on the justification of one’s beliefs, from those that involve errors about what one believes. One worry about characterizing the failure in terms of justification, according to Fernández (2013), is that it requires a degree of awareness about one’s reasons for believing that would rule out those who do not engage in reflection on their reasons for belief. Fernández (2013), like Funkhouser (2005), endorses an error-about-belief approach to account for the sort of ‘deep conflict’ cases described above. Whether Mele’s (2010) proposed condition requires too much sophistication from self-deceivers is debatable, but it does suggest a way of accounting for the intuition that self-deceivers fail to know themselves that does not require them to harbor hidden beliefs or intentions.
Approaches that focus on revising the notion that self-deception requires holding that p and ~p, the dual-belief requirement implied by traditional intentionalism, either introduce some “doxastic proxy” (Baghramian and Nicholson 2013) to replace one or both beliefs, or alter the content of the self-deceiver’s belief in a way that preserves tension without involving outright conflict. These approaches resolve the doxastic paradox either by denying that self-deceivers believe that p, the welcome but unwarranted belief (Audi 1982, 1988; Funkhouser 2005; Gendler 2007; Fernández 2013), by denying that they believe that ~p, the unwelcome but warranted belief (Barnes 1997; Mele 2001), or by denying that they hold either that p or ~p (Edwards 2013; Porcher 2012).
Denying the Unwelcome Belief: As noted above, deflationary approaches in the tradition of Mele deny that self-deceivers need to hold the unwelcome but warranted belief that ~p; it is sufficient for self-deceivers to acquire the unwarranted false belief that p in a motivated way. To explain the selectivity and tension that are often cited as reasons for attributing the belief that ~p to self-deceivers, Mele contends that an attitude of suspicion that ~p, rather than belief, would suffice. He also suggests that beliefs other than the belief that ~p could do the necessary work without being directly contrary to what the self-deceiver comes to believe. Citing Rorty’s (1988) case of Dr. Androvna, a cancer specialist who believes she does not have cancer, but who draws up a detailed will and writes uncharacteristically effusive letters suggesting her impending departure, Mele (2009) points out that Androvna’s behavior might easily be explained by her holding another belief, namely, that “there is a significant chance that she has cancer.” And this belief is compatible with Androvna’s belief that she does not, in fact, have cancer.
Denying the Welcome Belief: Another strand of revision-of-belief approaches focuses on the welcome belief that p, proposing a variety of alternatives to this belief that function in ways that explain what self-deceivers typically say and do. Self-deceivers display ambiguous behavior that not only falls short of what one would expect from a person who believes that p, but actually justifies the attribution of the belief that ~p. For instance, Androvna’s letter writing and will preparation might be taken as reasons for attributing to her the belief that she won’t recover, despite her verbal assertions to the contrary. Accordingly, Robert Audi (1982, 1988) attributes the unconscious belief that ~p to self-deceivers and proposes sincere avowal, or a disposition to avow, that p as a proxy for the belief that p. Sincere avowal that p does not entail belief that p; though belief-like, it does not carry with it “the full range of behavior one would expect from genuine belief” (Audi 1988). Sincere avowal isn’t exactly an attitude toward the proposition, but the disposition to avow may suggest that some non-doxastic attitude is at play. Gendler (2007) suggests that ‘pretense’ that p plays the role of belief in terms of “introspective vividness, motivation of action in a wide range of circumstances.” Unlike belief, however, pretense is reality indifferent: this attitude is held primarily because the self-deceiver wishes to dwell in a world in which p obtains. Lazar (1999) closely resembles Gendler in taking self-deceptive beliefs to be better understood along the lines of imaginations or fantasies that directly express the self-deceiver’s wishes, fears, hopes and the like, since they show a relative insensitivity to evidence (unlike beliefs) but guide behavior (like beliefs). Along these lines, Egan (2009) proposes an intermediate state between belief and desire, ‘besire’, to account for the special pattern of behavior displayed by self-deceivers.
Others, instead of adjusting the attitude toward the welcome proposition p by offering a non-doxastic proxy, substitute a higher-order belief (Funkhouser 2005; Fernández 2013). Funkhouser (2005), for instance, contends that self-deceivers don’t believe that p; rather, they believe that they believe that p, and this false second-order belief “I think that I believe that p” underlies and underwrites their sincere avowal that p as well as their ability to entertain p as true. In this way, self-deception is a kind of failure of self-knowledge, a misapprehension or misattribution of one’s own beliefs. By shifting the content of the self-deceptive belief to the second order, this approach not only avoids the doxastic paradox, it explains the characteristic ‘tension’ or ‘conflict’ attributed to self-deceivers in terms of the disharmony between the first-order and second-order beliefs, the latter explaining their avowed belief and the former their behavior that goes against that avowed belief (Funkhouser 2005; Fernández 2013).
Denying both the Welcome Belief and the Unwelcome Belief: Given the variety of proxies that have been offered for both the welcome and the unwelcome belief, it should not be surprising that some argue that self-deception can be explained without attributing either belief to self-deceivers, a position Edwards (2013) refers to as ‘nondoxasticism’. Porcher (2012) recommends against attributing beliefs to self-deceivers on the grounds that what they believe is indeterminate, since they are, as Schwitzgebel (2001, 2010) contends, “in-between-believing,” neither fully believing that p nor fully not believing that p. For Porcher (2012), self-deceivers show the limits of the folk psychological concepts of belief and suggest the need to develop a dispositional account of self-deception that focuses on the ways that self-deceivers’ dispositions deviate from those of stereotypical full belief. Funkhouser (2009) also points to the limits of folk psychological concepts and suggests that in cases involving deep conflict between behavior and avowal “the self-deceived produce a confused belief-like condition so that it is genuinely indeterminate what they believe regarding p.” Edwards (2013), however, rejects the claim that the belief is indeterminate or that folk psychological concepts are inadequate. She argues that folk psychology offers a wide variety of nondoxastic attitudes such as ‘hope,’ ‘suspicion,’ ‘anxiety,’ and the like that are more than sufficient to explain self-deception without adverting to belief. In her view, the so-called doxastic problem can be resolved simply by avoiding the attribution of doxastic attitudes of any kind.
While revision-of-belief approaches suggest a number of non-paradoxical ways of thinking about self-deception, some worry that those approaches denying that self-deceivers hold the welcome but unwarranted belief that p eliminate what is central to the notion of self-deception, namely, deception (see, e.g., Lynch 2012; Mele 2010). On such approaches, self-deceivers may be mistaken about what they believe, but they don’t actually hold an unwarranted belief regarding the target proposition at the first-order level. However, those approaches focusing on higher-order beliefs locate the deception or error at the second-order level. The self-deceiver’s false belief that she believes that p is unwarranted and may well be the result of motivation (Funkhouser 2005; Fernández 2013). Interestingly, if this false second-order belief is acquired in a suitably biased way, it may fit Mele’s deflationary protoanalysis (Mele 2009), which is not restricted to first-order beliefs, thus capturing the intuition that self-deceivers are, in fact, deceived about something. Whatever the verdict, these revision-of-belief approaches suggest that our way of characterizing belief may not be fine-grained enough to account for the subtle attitudes or meta-attitudes self-deceivers bear to the proposition in question. Taken together, these approaches make it clear that the question regarding what self-deceivers believe is by no means resolved.
Self-deception that involves the acquisition of an unwanted belief, termed ‘twisted self-deception’ by Mele (1999, 2001), has generated a small but growing literature of its own (see, e.g., Barnes 1997; Mele 1999, 2001; Scott-Kakures 2000, 2001). A typical example of such self-deception is the jealous husband who believes on weak evidence that his wife is having an affair, something he doesn’t want to be the case. In this case, the husband apparently comes to hold this false belief in the face of strong evidence to the contrary in ways similar to those by which ordinary self-deceivers come to believe something they want to be true.
One question philosophers have sought to answer is how a single unified account of self-deception can explain both welcome and unwelcome beliefs. If a unified account is sought, then it seems self-deception cannot require that the self-deceptive belief itself be desired. Pears (1984) has argued that unwelcome belief might be driven by fear or jealousy. My fear of my house burning down might motivate my false belief that I have left the stove burner on. This unwelcome belief serves to ensure that I avoid what I fear, since it leads me to confirm that the burner is off. Barnes (1997) argues that the unwelcome belief must serve to reduce some relevant anxiety; in this case, my anxiety that my house is burning. Scott-Kakures (2000, 2001) argues, however, that since the unwelcome belief itself does not in many cases serve to reduce but rather to increase anxiety or fear, their reduction cannot be the purpose of that belief. Instead, he contends that we should think of the belief as serving to make the satisfaction of the agent’s goals and interests more probable, in my case, the preservation of my house. My testing and confirming an unwelcome belief may be explained by the costs I associate with being in error, which are determined in view of my relevant aims and interests. If I falsely believe that I have left the burner on, the cost is relatively low—I am inconvenienced by confirming that it is off. If I falsely believe that I have not left the burner on, the cost is extremely high—my house being destroyed by fire. The asymmetry between these relative costs alone may account for my manipulation of evidence confirming the false belief that I have left the burner on. Drawing upon recent empirical research, both Mele (2001) and Scott-Kakures (2000) advocate a model of this sort, since it helps to account for the roles desires and emotions apparently play in cases of twisted self-deception.
Specifically, Mele refuses to identify the motivating desire as a desire that p, leaving the content of the motivation in question open. Nelkin (2002), however, argues that the motivation for self-deceptive belief formation should be restricted to a desire to believe that p. She points out that the phrase “unwelcome belief” is ambiguous, since a belief itself might be desirable even if its being true is not. I might want to hold the belief that I have left the burner on, but not want it to be the case that I have left it on. The belief is desirable in this instance, because holding it ensures that it will not be true. In Nelkin’s view, then, what unifies cases of self-deception—both twisted and straight—is that the self-deceptive belief is motivated by a desire to believe that p; what distinguishes them is that twisted self-deceivers do not want p to be the case, while straight self-deceivers do. Restricting the motivating desire to a desire to believe that p, according to Nelkin, makes clear what twisted and straight self-deception have in common as well as why other forms of motivated belief formation are not cases of self-deception. Mele (2009) argues that self-deceivers need not have the specific desire to believe that p, since a variety of other desires might well alter the acceptance thresholds such that p is believed; even a desire not to acquire a false belief that p might serve to motivate self-deceptive belief that p.
Nelkin (2012) acknowledges the boundaries between cases of self-deception and other sorts of irrational motivated belief are blurry, but notes that scrutiny of the content of the motivation is necessary for adjudicating individual cases, and suggests that the nearer this content gets to the desire to believe that p the more clearly it is a case of self-deception. Moreover, she contends that her “desire to believe” model helps to differentiate self-deception from other cases of motivated biased belief and to explain why self-deceivers may be morally responsible in some cases (see section 5.1 below). Though non-intentional models of twisted self-deception dominate the landscape, whether desire, emotion or some combination of these attitudes plays the dominant role in such self-deception and whether their influence merely triggers the process or continues to guide it throughout remain matters of controversy.
Despite the fact that much of the contemporary philosophical discussion of self-deception has focused on epistemology, philosophical psychology and philosophy of mind, the morality of self-deception has historically been the central focus of discussion. As a threat to moral self-knowledge, a cover for immoral activity, and a violation of authenticity, self-deception has been thought to be morally wrong or, at least, morally dangerous. Some thinkers, however, belonging to what Martin (1986) calls ‘the vital lie tradition’, have held that self-deception can in some instances be salutary, protecting us from truths that would make life unlivable (e.g., Rorty 1972, 1994). There are two major questions regarding the morality of self-deception: First, can a person be held morally responsible for self-deception, and if so, under what conditions? Second, is there anything morally problematic about self-deception, and if so, what, and under what circumstances? The answers to these questions are clearly intertwined. If self-deceivers cannot be held responsible for self-deception, then their responsibility for whatever morally objectionable consequences it might have will be mitigated if not eliminated. Nevertheless, self-deception might be morally significant even if one cannot be taxed for entering into it. To be ignorant of one’s moral self, as Socrates saw, may represent a great obstacle to a life well lived whether or not one is at fault for such ignorance.
Whether self-deceivers can be held responsible for their self-deception is largely a question of whether they have the requisite control over the acquisition and maintenance of their self-deceptive belief. In general, intentionalists hold that self-deceivers are responsible, since they intend to acquire the self-deceptive belief, usually recognizing the evidence to the contrary. Even when the intention is indirect, such as when one intentionally seeks evidence in favor of p or avoids collecting or examining evidence to the contrary, self-deceivers seem intentionally to flout their own normal standards for gathering and evaluating evidence. So, minimally, they are responsible for such actions and omissions.
Initially, non-intentionalist approaches may seem to remove the agent from responsibility by rendering the process by which she is self-deceived subintentional. If my anxiety, fear, or desire triggers a process that ineluctably leads me to hold the self-deceptive belief, I cannot be held responsible for holding that belief. How can I be held responsible for processes that operate without my knowledge and which are set in motion without my intention? Most non-intentionalist accounts, however, do allow for the possibility that self-deceivers are responsible for individual episodes of self-deception, or for the vices of cowardice and lack of self-control from which they spring, or both. To be morally responsible in the sense of being an appropriate target for praise or blame requires, at least, that agents have control over the actions in question. Mele (2001), for example, argues that many sources of bias are controllable and that self-deceivers can recognize and resist the influence of emotion and desire on their belief acquisition and retention, particularly in matters they deem to be important, morally or otherwise. The extent of this control, however, is an empirical question. Nelkin (2012) counters that since Mele’s account leaves the content of the motivation driving the bias unrestricted, the mechanism in question is so complex that “it seems unreasonable to expect the self-deceiver to guard against” its operation.
Other non-intentionalists take self-deceivers to be responsible for certain epistemic vices such as cowardice in the face of fear or anxiety and lack of self-control with respect to the biasing influences of desire and emotion. Thus, Barnes (1997) argues that self-deceivers “can, with effort, in some circumstances, resist their biases” (83) and “can be criticized for failing to take steps to prevent themselves from being biased; they can be criticized for lacking courage in situations where having courage is neither superhumanly difficult nor costly” (175). Whether self-deception is due to a character defect or not, ascriptions of responsibility depend upon whether the self-deceiver has control over the biasing effects of her desires and emotions.
Levy (2004) has argued that non-intentional accounts of self-deception that deny the contradictory belief requirement should not suppose that self-deceivers are typically responsible, since it is rarely the case that self-deceivers possess the requisite awareness of the biasing mechanisms operating to produce their self-deceptive belief. Lacking such awareness, self-deceivers do not appear to know when or on which beliefs such mechanisms operate, rendering them unable to curb the effects of these mechanisms, even when they operate to form false beliefs about morally significant matters. Levy also argues that if self-deceivers typically lack the control necessary for moral responsibility in individual episodes of self-deception, they also lack control over being the sort of person disposed to self-deception.
Non-intentionalists may respond by claiming that self-deceivers often are aware of the potentially biasing effects their desires and emotions might have and can exercise control over them (DeWeese-Boyd 2007). They might also challenge the idea that self-deceivers must be aware in the ways Levy suggests. One well-known account of control, employed by Levy, holds that a person is responsible just in case she acts on a mechanism that is moderately responsive to reasons (including moral reasons), such that were she to possess such reasons this same mechanism would act upon them in at least one possible world (Fischer and Ravizza 1998). Guidance control, in this sense, requires that the mechanism in question be capable of recognizing and responding to moral and non-moral reasons sufficient for acting otherwise. In cases of self-deception, deflationary views may suggest that the biasing mechanism, while sensitive and responsive to motivation, is too simple to be itself responsive to reasons. Nelkin (2011, 2012) argues for understanding reasons-responsiveness as applying to the agent, not the mechanism, requiring that agents have the capacity to exercise reason in the situation under scrutiny. Accordingly, the question isn’t whether the biasing mechanism itself is reasons-responsive but whether the agent governing its operation is, that is, whether self-deceivers typically could recognize and respond to moral and non-moral reasons to resist the influence of their desires and emotions and instead subject the belief in question to special scrutiny. According to Nelkin (2012), expecting self-deceivers to have such a capacity is more plausible if we understand the desire driving their bias as a desire to believe that p, since awareness of this sort of desire would make it easier to guard against its influence on the process of determining whether p.
In view of these considerations, it is plausible that self-deceivers have the requisite control for moral responsibility on deflationary approaches, and certainly not obvious that they lack it.
Insofar as it seems plausible that in some cases self-deceivers are apt targets for censure, what prompts this attitude? Take the case of a mother who deceives herself into believing her husband is not abusing their daughter because she can’t bear the thought that he is a moral monster (Barnes 1997). Why do we blame her? Here we confront the nexus between moral responsibility for self-deception and the morality of self-deception. Understanding what obligations may be involved and breached in cases of this sort will help to clarify the circumstances in which ascriptions of responsibility are appropriate.
While some instances of self-deception seem morally innocuous and others may even be thought salutary in various ways (Rorty 1994), the majority of theorists have thought there to be something morally objectionable about self-deception or its consequences in many cases. Self-deception has been considered objectionable because it facilitates harm to others (Linehan 1982) and to oneself, undermines autonomy (Darwall 1988; Baron 1988), corrupts conscience (Butler 1726), violates authenticity (Sartre 1943), manifests a vicious lack of courage and self-control that undermines the capacity for compassionate action (Jenni 2003), violates an epistemic duty to properly ground self-ascriptions (Fernández 2013), or violates a general duty to form beliefs that “conform to the available evidence” (Nelkin 2012). Linehan (1982) argues that we have an obligation to scrutinize the beliefs that guide our actions that is proportionate to the harm to others such actions might involve. When self-deceivers induce ignorance of moral obligations, of the particular circumstances, of likely consequences of actions, or of their own engagements by means of their self-deceptive beliefs, they may be culpable. They are guilty of negligence with respect to their obligation to know the nature, circumstances, likely consequences and so forth of their actions (Jenni 2003; see also Nelkin 2012). Self-deception, accordingly, undermines or erodes agency by reducing our capacity for self-scrutiny and change (Baron 1988). If I am self-deceived about actions or practices that harm others or myself, my ability to take responsibility and to change is severely restricted. Joseph Butler, in his well-known sermon “On Self-Deceit”, emphasizes the ways in which self-deception about one’s moral character and conduct, ‘self-ignorance’ driven by inordinate ‘self-love’, not only facilitates vicious actions but hinders the agent’s ability to change by obscuring such actions from view.
Such ignorance, claims Butler, “undermines the whole principle of good … and corrupts conscience, which is the guide of life” (“On Self-Deceit”). Existentialist philosophers such as Kierkegaard and Sartre, in very different ways, viewed self-deception as a threat to ‘authenticity’ insofar as self-deceivers fail to take responsibility for themselves and their engagements past, present and future. By alienating us from our own principles, self-deception may also threaten moral integrity (Jenni 2003). Furthermore, self-deception manifests certain weaknesses of character that dispose us to react to fear, anxiety, or the desire for pleasure by biasing our belief acquisition and retention in ways that serve these emotions and desires rather than accuracy. Such epistemic cowardice and lack of self-control may inhibit the ability of self-deceivers to stand by or apply the moral principles they hold, by biasing their beliefs regarding particular circumstances, consequences or engagements, or by obscuring the principles themselves. In all these ways and a myriad of others, philosophers have found some self-deception objectionable in itself or for the consequences it has on our ability to shape our lives.
Those finding self-deception morally objectionable generally assume that self-deception or, at least, the character that disposes us to it, is under our control to some degree. This assumption need not entail that self-deception is intentional, only that it is avoidable in the sense that self-deceivers could recognize and respond to reasons for resisting bias by exercising special scrutiny (see section 5.1). It should be noted, however, that self-deception still poses a serious worry even if one cannot avoid entering into it, since self-deceivers may nevertheless have an obligation to overcome it. If exiting self-deception is under the guidance control of self-deceivers, then they might reasonably be blamed for persisting in their self-deceptive beliefs when those beliefs regard matters of moral significance.
But even if agents don’t bear specific responsibility for their being in that state, self-deception may nevertheless be morally objectionable, destructive and dangerous. If radically deflationary models of self-deception do turn out to imply that our own desires and emotions, in collusion with social pressures toward bias, lead us to hold self-deceptive beliefs and cultivate habits of self-deception of which we are unaware and from which we cannot reasonably be expected to escape on our own, self-deception would still undermine autonomy, manifest character defects, obscure our moral engagements and the like. For these reasons, Rorty (1994) emphasizes the importance of the company we keep. Our friends, since they may not share our desires or emotions, are often in a better position to recognize our self-deception than we are. With the help of such friends, self-deceivers may, with luck, recognize and correct morally corrosive self-deception.
Evaluating self-deception and its consequences for ourselves and others is a difficult task. It requires, among other things: determining the degree of control self-deceivers have; what the self-deception is about (Is it important morally or otherwise?); what ends the self-deception serves (Does it serve mental health or as a cover for moral wrongdoing?); how entrenched it is (Is it episodic or habitual?); and, whether it is escapable (What means of correction are available to the self-deceiver?). As Nelkin (2012) contends, whether and to what degree self-deceivers are culpably negligent will ultimately need to be determined on a case-by-case basis in light of answers to such questions about the stakes at play and the difficulty involved. Despite the difficulties involved in assigning responsibility, this discussion indicates the wide variety of serious moral problems associated with self-deception that account in part for the attention that has been devoted to this phenomenon from the very beginning, especially within the Christian tradition (e.g., Dyke 1633; Butler 1726).
Quite aside from the doxastic, strategic and moral puzzles self-deception raises, there is the evolutionary puzzle of its origin. Why do human beings have this capacity in the first place? Why would natural selection allow a capacity to survive that undermines the accurate representation of reality, especially when inaccuracies about individual ability or likely risk can lead to catastrophic errors?
Many argue that self-deceptively inflated views of ourselves, our abilities, our prospects and our control, so-called ‘positive illusions’, confer direct benefits in terms of psychological wellbeing, physical health and social advancement that serve fitness (Taylor and Brown 1994; Taylor 1989; McKay and Dennett 2009). Just because ‘positive illusions’ make us ‘feel good,’ of course, it does not follow that they are adaptive. From an evolutionary perspective, whether an organism ‘feels good’ or is ‘happy’ is not significant unless it enhances survival and reproduction. McKay and Dennett (2009) argue that positive illusions are not only tolerable, evolutionarily speaking; they actually appear to contribute directly to fitness. Overly positive beliefs about our abilities or chances for success appear to make us more apt to exceed our abilities and achieve success than more accurate beliefs would (Taylor and Brown 1994; Bandura 1989). According to Johnson and Fowler (2011), overconfidence is “advantageous, because it encourages individuals to claim resources they could not otherwise win if it came to a conflict (stronger but cautious rivals will sometimes fail to make a claim), and it keeps them from walking away from conflicts they would surely win.” Inflated attitudes regarding the personal qualities and capacities of one’s partners and children would also seem to enhance fitness by facilitating the thriving of offspring (McKay and Dennett 2009).
Alternatively, some argue that self-deception evolved to facilitate interpersonal deception by eliminating the cues and cognitive load that consciously lying produces and by mitigating retaliation should the deceit become evident (von Hippel and Trivers 2011; Trivers 1991, 2000, 2011). On this view, the real gains associated with ‘positive illusions’ and other self-deceptions are byproducts that serve this greater evolutionary end by enhancing self-deceivers’ ability to deceive. Von Hippel and Trivers (2011) contend that “by deceiving themselves about their own positive qualities and the negative qualities of others, people are able to display greater confidence than they might otherwise feel, thereby enabling them to advance socially and materially.” Critics have pointed to data suggesting high self-deceivers are deemed less trustworthy than low self-deceivers (McKay and Dennett 2011). Others have complained that there is little data to support this hypothesis (Dunning 2011; Van Leeuwen 2007a, 2013a). Some challenge this theory by noting that a simple disregard for the truth would serve as well as self-deception and have the advantage of retaining true representations (McKay and Prelec 2011), or that self-deceivers are often the only ones deceived (Van Leeuwen 2007a; Khalil 2011). Van Leeuwen (2013a) raises the concern that the wide variety of phenomena identified by this theory as self-deception renders the category so broad that it is difficult to tell whether it is a unified phenomenon traceable to particular mechanisms that could plausibly be sensitive to selection pressures.
In view of these shortcomings, Van Leeuwen (2007a) argues that the capacity for self-deception is not a product of evolution but a spandrel, a byproduct of other aspects of our cognitive architecture, not an adaptation, at least not in the strong sense of being positively selected. That leaves open the possibility that this capacity has nevertheless been retained as a consequence of its fitness value. Lopez and Fuxjager (2011) argue that the broad research on the so-called ‘winner effect’, the increased probability of victory in social or physical conflicts following prior victories, lends support to the idea that self-deception is at least weakly adaptive, since self-deception in the form of positive illusions, like past wins, confers a fitness advantage. Lamba and Nityananda (2014) test the theory that the self-deceived are better at deceiving others, specifically whether overconfident individuals are overrated by others and underconfident individuals underrated. In their study, students in tutorials were asked to predict their own performance on the next assignment, as well as that of each of their peers in the tutorial, in terms of absolute grade and relative rank. Comparing these predictions with the actual grades given on the assignment suggests a strong positive relationship between self-deception and deception: those who self-deceptively rated themselves higher were rated higher by their peers as well. These findings lend suggestive support to the claim that self-deception facilitates the deception of others. While these studies certainly do not supply all the data necessary to support the theory that the propensity for self-deception is an adaptation, they do suggest ways to test these evolutionary hypotheses by focusing upon specific phenomena.
Whether or not the psychological and social benefits identified by these theories explain the evolutionary origins of the capacity for self-deceit, they may well shed light on its prevalence and persistence, as well as point to ways to identify contexts in which this tendency presents high collective risk (Lamba and Nityananda 2014).
Collective self-deception has received scant direct philosophical attention as compared with its individual counterpart. Collective self-deception might refer simply to a group of similarly self-deceived individuals or to a group-entity, such as a corporation, committee, jury or the like, that is self-deceived. These alternatives reflect two basic perspectives social epistemologists have taken on ascriptions of propositional attitudes to collectives. On the one hand, such attributions might be taken summatively as simply an indirect way of attributing those states to members of the collective (Quinton 1975/1976). This summative understanding, then, considers attitudes attributed to groups to be nothing more than metaphors expressing the sum of the attitudes held by their members. To say that students think tuition is too high is just a way of saying that most students think so. On the other hand, such attributions might be understood non-summatively as applying to collective entities, themselves ontologically distinct from the members upon which they depend. These so-called ‘plural subjects’ (Gilbert 1989, 1994, 2005) or ‘social integrates’ (Pettit 2003), while supervening upon the individuals comprising them, may well express attitudes that diverge from those of individual members. For instance, saying NASA believed the O-rings on the space shuttle’s booster rockets to be safe need not imply that most or all members of this organization personally held this belief, only that the institution itself did. The non-summative understanding, then, considers collectives to be, like persons, apt targets for attributions of propositional attitudes, and potentially of moral and epistemic censure as well. Following this distinction, collective self-deception may be understood in either a summative or non-summative sense.
In the summative sense, collective self-deception refers to self-deceptive belief shared by a group of individuals, who each come to hold the self-deceptive belief for similar reasons and by similar means, varying according to the account of self-deception followed. We might call this self-deception across a collective. In the non-summative sense, the subject of collective self-deception is the collective itself, not simply the individuals comprising it. The following sections offer an overview of these forms of collective self-deception, noting the significant challenges posed by each.
Understood summatively, we might define collective self-deception as the holding of a false belief in the face of evidence to the contrary by a group of people as a result of shared desires, emotions, or intentions (depending upon the account of self-deception) favoring that belief. Collective self-deception is distinct from other forms of collective false belief—such as might result from deception or lack of evidence—insofar as the false belief issues from the agents’ own self-deceptive mechanisms (however these are construed), not the absence of evidence to the contrary or the presence of misinformation. Accordingly, the individuals constituting the group would not hold the false belief if their vision weren’t distorted by their attitudes (desire, anxiety, fear or the like) toward the belief. What distinguishes collective self-deception from solitary self-deception is just its social context, namely, that it occurs within a group that shares both the attitudes bringing about the false belief and the false belief itself. Compared to its solitary counterpart, self-deception within a collective is both easier to foster and more difficult to escape, being abetted by the self-deceptive efforts of others within the group.
Virtually all self-deception has a social component, being wittingly or unwittingly supported by one's associates (see Ruddick 1988). In the case of collective self-deception, however, the social dimension comes to the fore, since each member of the collective unwittingly helps to sustain the self-deceptive belief of the others in the group. For example, my cancer-stricken friend might self-deceptively believe her prognosis to be quite good. Faced with the fearful prospect of death, she does not form accurate beliefs regarding the probability of her full recovery, attending only to evidence supporting full recovery and discounting or ignoring altogether the ample evidence to the contrary. Caring for her as I do, I share many of the anxieties, fears and desires that sustain my friend’s self-deceptive belief, and as a consequence I form the same self-deceptive belief via the same mechanisms. In such a case, I unwittingly support my friend’s self-deceptive belief and she mine—our self-deceptions are mutually reinforcing. We are collectively or mutually self-deceived, albeit on a very small scale. Ruddick (1988) calls this ‘joint self-deception.’
On a larger scale, sharing common attitudes, large segments of a society might deceive themselves together. For example, we share a number of self-deceptive beliefs regarding our consumption patterns. Many of the goods we consume are produced by people enduring labor conditions we do not find acceptable and in ways that we recognize are environmentally destructive and likely unsustainable. Despite our being at least generally aware of these social and environmental ramifications of our consumptive practices, we hold the overly optimistic beliefs that the world will be fine, that its peril is overstated, that the suffering caused by these exploitive and ecologically degrading practices is overblown, that our own consumption habits are unconnected to these sufferings anyway, even that our minimal efforts at conscientious consumption are an adequate remedy (see Goleman 1989). When self-deceptive beliefs such as these are held collectively, they become entrenched and their consequences, good or bad, are magnified (Surbey 2004).
The collective entrenches self-deceptive beliefs by providing positive reinforcement from others sharing the same false belief, as well as protection from evidence that would destabilize the target belief. There are, however, limits to how entrenched such beliefs can become and remain self-deceptive. The social support cannot be the sole or primary cause of the self-deceptive belief, for then the belief would simply be the result of unwitting interpersonal deception and not the deviant belief formation process that characterizes self-deception. If the environment becomes so epistemically contaminated as to make counter-evidence inaccessible to the agent, then we have a case of false belief, not self-deception. Thus, even within a collective a person is self-deceived just in case she would not hold her false belief if she did not possess the motivations skewing her belief formation process. This said, relative to solitary self-deception, the collective variety does present greater external obstacles to avoiding or escaping self-deception, and is for this reason more entrenched. If the various proposed psychological mechanisms of self-deception pose an internal challenge to the self-deceiver’s power to control her belief formation, then these social factors pose an external challenge to the self-deceiver’s control. Determining how superable this challenge is will affect our assessment of individual responsibility for self-deception as well as the prospects of unassisted escape from it.
Collective self-deception can also be understood from the perspective of the collective itself in a non-summative sense. Though there are varying accounts of group belief, generally speaking, a group can be said to believe, desire, value or the like just in case its members “jointly commit” to these things as a body (Gilbert 2005). A corporate board, for instance, might be jointly committed as a body to believe, value and strive for whatever the CEO recommends. Such commitment need not entail that each individual board member personally endorses such beliefs, values or goals, only that as members of the board they do (Gilbert 2005). While philosophically precise accounts of non-summative self-deception remain largely unarticulated, the possibilities mirror those of individual self-deception. When collectively held attitudes motivate a group to espouse a false belief despite the group’s possession of evidence to the contrary, we can say that the group is collectively self-deceived in a non-summative sense.
For example, Robert Trivers (2000) suggests that ‘organizational self-deception’ led to NASA’s failure to represent accurately the risks posed by the space shuttle’s O-ring design, a failure that eventually led to the Challenger disaster. The organization as a whole, he argues, had strong incentives to represent such risks as small. As a consequence, NASA’s Safety Unit mishandled and misrepresented data it possessed that suggested that under certain temperature conditions the shuttle’s O-rings were not safe. NASA, as an organization, then, self-deceptively believed the risks posed by O-ring damage were minimal. Within the institution, however, there were a number of individuals who did not share this belief, but both they and the evidence supporting their belief were treated in a biased manner by the decision-makers within the organization. As Trivers (2000) puts it, this information was relegated “to portions of … the organization that [were] inaccessible to consciousness (we can think of the people running NASA as the conscious part of the organization).” In this case, collectively held values created a climate within NASA that clouded its vision of the data and led to its endorsement of a fatally false belief.
Collective self-deceit may also play a significant role in facilitating unethical practices by corporate entities. For example, a collective commitment by members of a corporation to maximizing profits might lead members to form false beliefs about the ethical propriety of the corporation’s practices. Gilbert (2005) suggests that such a commitment might lead executives and other members to “simply lose sight of moral constraints and values they previously held”. Similarly, Tenbrunsel and Messick (2004) argue that self-deceptive mechanisms play a pervasive role in what they call ‘ethical fading’, acting as a kind of ‘bleach’ that renders organizations blind to the ethical dimensions of their decisions. They argue that such self-deceptive mechanisms must be recognized and actively resisted at the organizational level if unethical behavior is to be avoided. More specifically, Gilbert (2005) contends that collectively accepting that “certain moral constraints must rein in the pursuit of corporate profits” might shift corporate culture in such a way that efforts to respect these constraints are recognized as part of being a good corporate citizen. In view of the ramifications this sort of collective self-deception has for the way we understand corporate misconduct and responsibility, understanding its specific nature in greater detail remains an important task.
Collective self-deception understood in either the summative or non-summative sense raises a number of significant questions, such as whether individuals within collectives bear responsibility for their self-deception or the part they play in the collective’s self-deception, and whether collective entities can be held responsible for their epistemic failures. Finally, collective self-deception prompts us to ask what means are available to collectives and their members to resist, avoid and escape self-deception. To answer these and other questions, more precise accounts of these forms of self-deception are needed. Given the capacity of collective self-deception to entrench false beliefs and to magnify their consequences—sometimes with disastrous results—collective self-deception is not just a philosophical puzzle; it is a problem that demands attention.
- Ames, R.T., and W. Dissanayake, (eds.), 1996, Self and Deception, New York: State University of New York Press.
- Audi, R., 2007, “Belief, Intention, and Reasons for Action,” in Rationality and the Good, J. Greco, A. Mele, and M. Timmons (eds.), New York: Oxford University Press.
- –––, 1989, “Self-Deception and Practical Reasoning,” Canadian Journal of Philosophy, 19: 247–266.
- –––, 1982, “Self-Deception, Action, and Will,” Erkenntnis, 18: 133–158.
- –––, 1976, “Epistemic Disavowals and Self-Deception,” The Personalist, 57: 378–385.
- Bach, K., 1997, “Thinking and Believing in Self-Deception,” Behavioral and Brain Sciences, 20: 105.
- –––, 1981, “An Analysis of Self-Deception,” Philosophy and Phenomenological Research, 41: 351–370.
- Baghramian, M., and A. Nicholson, 2013, “The Puzzle of Self-Deception,” Philosophy Compass, 8(11): 1018–1029.
- Barnes, A., 1997, Seeing through Self-Deception, New York: Cambridge University Press.
- Baron, M., 1988, “What is Wrong with Self-Deception,” in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
- Bayne, T. and J. Fernández (eds.), 2009, Delusion and Self-Deception: Affective and Motivational Influences on Belief Formation, New York: Psychology Press.
- Bermúdez, J., 2000, “Self-Deception, Intentions, and Contradictory Beliefs,” Analysis, 60(4): 309–319.
- –––, 1997, “Defending Intentionalist Accounts of Self-Deception,” Behavioral and Brain Sciences, 20: 107–8.
- Bird, A., 1994, “Rationality and the Structure of Self-Deception,” in S. Gianfranco (ed.), European Review of Philosophy (Volume 1: Philosophy of Mind), Stanford: CSLI Publications.
- Bok, S., 1989, “Secrecy and Self-Deception,” in Secrets: On the Ethics of Concealment and Revelation, New York: Vintage.
- –––, 1980, “The Self Deceived,” Social Science Information, 19: 923–935.
- Borge, S., 2003, “The Myth of Self-Deception,” The Southern Journal of Philosophy, 41: 1–28.
- Brown, R., 2003, “The Emplotted Self: Self-Deception and Self-Knowledge,” Philosophical Papers, 32: 279–300.
- Butler, J., 1726, “Upon Self-Deceit,” in D.E. White (ed.), 2006, The Works of Bishop Butler, Rochester: Rochester University Press. [Available online]
- Chisholm, R. M., and Feehan, T., 1977, “The Intent to Deceive,” Journal of Philosophy, 74: 143–159.
- Cook, J. T., 1987, “Deciding to Believe without Self-deception,” Journal of Philosophy, 84: 441–446.
- Dalton, P., 2002, “Three Levels of Self-Deception (Critical Commentary on Alfred Mele’s Self-Deception Unmasked),” Florida Philosophical Review, 2(1): 72–76.
- Darwall, S., 1988, “Self-Deception, Autonomy, and Moral Constitution,” in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
- Davidson, D., 1985, “Deception and Division,” in Actions and Events, E. LePore and B. McLaughlin (eds.), New York: Basil Blackwell.
- –––, 1982, “Paradoxes of Irrationality,” in Philosophical Essays on Freud, R. Wollheim and J. Hopkins (eds.), Cambridge: Cambridge University Press.
- Demos, R., 1960, “Lying to Oneself,” Journal of Philosophy, 57: 588–95.
- Dennett, D., 1992, “The Self as a Center of Narrative Gravity,” in Consciousness and Self: Multiple Perspectives, F. Kessel, P. Cole, and D. Johnson (eds.), Hillsdale, NJ: L. Erlbaum.
- de Sousa, R., 1978, “Self-Deceptive Emotions,” Journal of Philosophy, 75: 684–697.
- –––, 1970, “Self-Deception,” Inquiry, 13: 308–321.
- DeWeese-Boyd, I., 2014, “Self-Deceptive Religion and the Prophetic Voice”, Journal for Religionsphilosophie, 3: 26–37.
- –––, 2007, “Taking Care: Self-Deception, Culpability and Control,” teorema, 26(3): 161–176.
- Dunn, R., 1995, “Motivated Irrationality and Divided Attention,” Australasian Journal of Philosophy, 73: 325–336.
- Dunning, D., 2011, “Get Thee to a Laboratory,” Commentary on target article, “The Evolution and Psychology of Self-Deception,” by W. von Hippel and R. Trivers, Behavioral and Brain Sciences, 34(1): 18–19.
- –––, 1995, “Attitudes, Agency and First-Personality,” Philosophia, 24: 295–319.
- –––, 1994, “Two Theories of Mental Division,” Australasian Journal of Philosophy, 72: 302–316.
- Dupuy, J-P., (ed.), 1998, Self-Deception and Paradoxes of Rationality (Lecture Notes 69), Stanford: CSLI Publications.
- Dyke, D., 1633, The Mystery of Selfe-Deceiving, London: William Stansby.
- Edwards, S., 2013, “Nondoxasticism about Self-Deception,” Dialectica, 67(3): 265–282.
- Elster, J., (ed.), 1985, The Multiple Self, Cambridge: Cambridge University Press.
- Fairbanks, R., 1995, “Knowing More Than We Can Tell,” The Southern Journal of Philosophy, 33: 431–459.
- Fernández, J., 2013, “Self-deception and self-knowledge,” Philosophical Studies 162(2): 379–400.
- Fingarette, H., 1998, “Self-Deception Needs No Explaining,” The Philosophical Quarterly, 48: 289–301.
- –––, 1969, Self-Deception, Berkeley: University of California Press; reprinted, 2000.
- Fischer, J. and Ravizza, M., 1998, Responsibility and Control. Cambridge: Cambridge University Press.
- Funkhouser, E., 2009, “Self-Deception and the Limits of Folk Psychology,” Social Theory and Practice, 35(1): 1–13.
- –––, 2005, “Do the Self-Deceived Get What They Want?,” Pacific Philosophical Quarterly, 86(3): 295–312.
- Funkhouser, E. and D. Barrett, 2016, “Robust, unconscious self-deception: Strategic and flexible,” Philosophical Psychology, 29(5): 1–15.
- Gendler, T. S., 2007, “Self-Deception as Pretense,” Philosophical Perspectives, 21: 231–258.
- Gilbert, Margaret, 2005, “Corporate Misbehavior and Collective Values,” Brooklyn Law Review, 70(4): 1369–80.
- –––, 1994, “Remarks on Collective Belief,” in Socializing Epistemology, F. Schmitt (ed.), Lanham, MD: Rowman and Littlefield.
- –––, 1989, On Social Facts, London: Routledge.
- Goleman, Daniel, 1989, “What is negative about positive illusions?: When benefits for the individual harm the collective,” Journal of Social and Clinical Psychology, 8: 190–197.
- Graham, G., 1986, “Russell’s Deceptive Desires,” The Philosophical Quarterly, 36: 223–229.
- Haight, R. M., 1980, A Study of Self-Deception, Sussex: Harvester Wheatsheaf.
- Hales, S. D., 1994, “Self-Deception and Belief Attribution,” Synthese, 101: 273–289.
- Hernes, C., 2007, “Cognitive Peers and Self-Deception,” teorema, 26(3): 123–130.
- Hauerwas, S. and Burrell, D., 1977, “Self-Deception and Autobiography: Reflections on Speer’s Inside the Third Reich,” in Truthfulness and Tragedy, S. Hauerwas with R. Bondi and D. Burrell, Notre Dame: University of Notre Dame Press.
- Holton, R., 2001, “What is the Role of the Self in Self-Deception?,” Proceedings of the Aristotelian Society, 101(1): 53–69.
- Jenni, K., 2003, “Vices of Inattention,” Journal of Applied Philosophy, 20(3): 279–95.
- Johnson, D. and Fowler, J., 2011, “The Evolution of Overconfidence,” Nature, 477: 317–320.
- Johnston, M., 1988, “Self-Deception and the Nature of Mind,” in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
- Khalil, E., 2011, “The Weightless Hat,” Commentary on target article, “The Evolution and Psychology of Self-Deception,” by W. von Hippel and R. Trivers, Behavioral and Brain Sciences, 34(1): 30–31.
- Kirsch, J., 2005, “What’s So Great about Reality?,” Canadian Journal of Philosophy, 35(3): 407–428.
- Lamba, S. and Nityananda, V., 2014, “Self-Deceived Individuals Are Better at Deceiving Others,” PLoS ONE, 9(8): 1–6.
- Lazar, A., 1999, “Deceiving Oneself Or Self-Deceived?,” Mind, 108: 263–290.
- –––, 1997, “Self-Deception and the Desire to Believe,” Behavioral and Brain Sciences, 20: 119–120.
- Levy, N., 2004, “Self-Deception and Moral Responsibility,” Ratio (new series), 17: 294–311.
- Linehan, E. A. 1982, “Ignorance, Self-deception, and Moral Accountability,” Journal of Value Inquiry, 16: 101–115.
- Lockard, J. and Paulhus, D. (eds.), 1988, Self-Deception: An Adaptive Mechanism?, Englewood Cliffs: Prentice-Hall.
- Longway, J., 1990, “The Rationality of Self-deception and Escapism,” Behavior and Philosophy, 18: 1–19.
- Lopez, J., and M. Fuxjager, 2012, “Self-deception’s adaptive value: Effects of positive thinking and the winner effect,” Consciousness and Cognition, 21(1): 315–324.
- Lynch, K., 2014, “Self-deception and shifts of attention,” Philosophical Explorations, 17(1): 63–75.
- –––, 2013, “Self-Deception and Stubborn Belief,” Erkenntnis, 78(6): 1337–1345.
- –––, 2012, “On the ‘tension’ inherent in Self-Deception,” Philosophical Psychology, 25(3): 433–450.
- –––, 2010, “Self-deception, religious belief, and the false belief condition,” Heythrop Journal, 51(6): 1073–1074.
- –––, 2009, “Prospects for an Intentionalist Theory of Self-Deception,” Abstracta, 5(2): 126–138.
- Martin, M., 1986, Self-Deception and Morality, Lawrence: University Press of Kansas.
- –––, (ed.), 1985, Self-Deception and Self-Understanding, Lawrence: University Press of Kansas.
- Martínez Manrique, F., 2007, “Attributions of Self-Deception,” teorema, 26(3): 131–143.
- McLaughlin, B. and Rorty, A. O. (eds.), 1988, Perspectives on Self-Deception, Berkeley: University of California Press.
- McKay, R. and Dennett, D., 2009, “The Evolution of Misbelief,” Behavioral and Brain Sciences, 32(6): 493–561.
- McKay, R. and Prelec, D., 2011, “Protesting Too Much: Self-Deception and Self-Signaling,” Commentary on target article, “The Evolution and Psychology of Self-Deception,” by W. von Hippel and R. Trivers, Behavioral and Brain Sciences, 34(1): 34–35.
- Mele, A., 2001, Self-Deception Unmasked, Princeton: Princeton University Press.
- –––, 2000, “Self-Deception and Emotion,” Consciousness and Emotion, 1: 115–139.
- –––, 1999, “Twisted Self-Deception,” Philosophical Psychology, 12: 117–137.
- –––, 1997, “Real Self-Deception,” Behavioral and Brain Sciences, 20: 91–102.
- –––, 1987a, Irrationality: An Essay on Akrasia, Self-Deception, Self-Control, Oxford: Oxford University Press.
- –––, 1987b, “Recent Work on Self-deception,” American Philosophical Quarterly, 24: 1–17.
- –––, 1983, “Self-Deception,” Philosophical Quarterly, 33: 365–377.
- Mijović-Prelec, D., and Prelec, D., 2010, “Self-deception as Self-Signaling: A Model and Experimental Evidence,” Philosophical Transactions of the Royal Society B, 365: 227–240.
- Moran, R., 1988, “Making Up Your Mind: Self-Interpretation and Self-constitution,” Ratio (new series), 1: 135–151.
- Nelkin, D., 2012, “Responsibility and Self-Deception: A Framework,” Humana.Mente Journal of Philosophical Studies, 20: 117–139.
- –––, 2002, “Self-Deception, Motivation, and the Desire to Believe,” Pacific Philosophical Quarterly, 83: 384–406.
- Nicholson, A., 2007, “Cognitive Bias, Intentionality and Self-Deception,” teorema, 26(3): 45–58.
- Noordhof, P., 2009, “The Essential Instability of Self-Deception,” Social Theory and Practice, 35(1): 45–71.
- –––, 2003, “Self-Deception, Interpretation and Consciousness,” Philosophy and Phenomenological Research, 67: 75–100.
- Paluch, S., 1967, “Self-Deception,” Inquiry, 10: 268–78.
- Patten, D., 2003, “How do we deceive ourselves?,” Philosophical Psychology, 16(2): 229–46.
- Pears, D., 1991, “Self-Deceptive Belief Formation,” Synthese, 89: 393–405.
- –––, 1984, Motivated Irrationality, New York: Oxford University Press.
- Pettit, Philip, 2006, “When to Defer to Majority Testimony — and When Not,” Analysis, 66(3): 179–187.
- –––, 2003, “Groups with Minds of Their Own,” in Socializing Metaphysics, F. Schmitt (ed.), Lanham, MD: Rowman and Littlefield.
- Pihlström, S., 2007, “Transcendental Self-Deception,” teorema, 26(3): 177–189.
- Porcher, J., 2012, “Against the Deflationary Account of Self-Deception,” Humana Mente, 20: 67–84.
- Quinton, Anthony, 1975/1976, “Social Objects,” Proceedings of the Aristotelian Society, 75: 1–27.
- Räikkä, J. 2007, “Self-Deception and Religious Beliefs,” Heythrop Journal, 48: 513–526.
- Rorty, A. O., 1994, “User-Friendly Self-Deception,” Philosophy, 69: 211–228.
- –––, 1983, “Akratic Believers,” American Philosophical Quarterly, 20: 175–183.
- –––, 1980, “Self-Deception, Akrasia and Irrationality,” Social Science Information, 19: 905–922.
- –––, 1972, “Belief and Self-Deception,” Inquiry, 15: 387–410.
- Sahdra, B. and Thagard, P., 2003, “Self-Deception and Emotional Coherence,” Minds and Machines, 13: 213–231.
- Sanford, D., 1988, “Self-Deception as Rationalization,” in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
- Sartre, J-P., 1943, L’être et le néant, Paris: Gallimard; trans. H. E. Barnes, 1956, Being and Nothingness, New York: Washington Square Press.
- Scott-Kakures, D., 2002, “At Permanent Risk: Reasoning and Self-Knowledge in Self-Deception,” Philosophy and Phenomenological Research, 65: 576–603.
- –––, 2001, “High anxiety: Barnes on What Moves the Unwelcome Believer,” Philosophical Psychology, 14: 348–375.
- –––, 2000, “Motivated Believing: Wishful and Unwelcome,” Noûs, 34: 348–375.
- Sorensen, R., 1985, “Self-Deception and Scattered Events,” Mind, 94: 64–69.
- Surbey, M., 2004, “Self-deception: Helping and hindering personal and public decision making,” in Evolutionary Psychology, Public Policy and Personal Decisions, C. Crawford and C. Salmon (eds.), Mahwah, NJ: Lawrence Earlbaum Associates.
- Szabados, B., 1973, “Wishful Thinking and Self-Deception,” Analysis, 33(6): 201–205.
- Talbott, W. J., 1997, “Does Self-Deception Involve Intentional Biasing?,” Behavioral and Brain Sciences, 20: 127.
- –––, 1995, “Intentional Self-Deception in a Single Coherent Self,” Philosophy and Phenomenological Research, 55: 27–74.
- Taylor, S. and Brown, J., 1994, “Positive Illusion and Well-Being Revisited: Separating Fact from Fiction,” Psychological Bulletin, 116: 21–27.
- Taylor, S. and Brown, J., 1988, “Illusion and Well-Being: A Social Psychological Perspective on Mental Health,” Psychological Bulletin, 103(2): 193–210.
- Tenbrunsel, A. E. and D. M. Messick, 2004, “Ethical Fading: The Role of Self-Deception in Unethical Behavior,” Social Justice Research, 17(2): 223–236.
- Trivers, R., 2011, The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life, New York: Basic Books.
- –––, 2000, “The Elements of a Scientific Theory of Self-Deception,” in Evolutionary Perspectives on Human Reproductive Behavior, Dori LeCroy and Peter Moller (eds.), Annals of the New York Academy of Sciences, 907: 114–131.
- –––, 1991, “Deceit and Self-Deception: The Relationship between Communication and Consciousness,” in Man and Beast Revisited, M. Robinson and L. Tiger (eds.), Washington, DC: Smithsonian Institution Press, 175–191.
- Tversky, A., 1985, “Self-Deception and Self-Perception,” in The Multiple Self, Jon Elster (ed.), Cambridge: Cambridge University Press.
- Van Fraassen, B., 1995, “Belief and the Problem of Ulysses and the Sirens,” Philosophical Studies, 77: 7–37.
- –––, 1984, “Belief and Will,” Journal of Philosophy, 81: 235–256.
- Van Leeuwen, N., 2013a, “Review of Robert Trivers’ The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life,” Cognitive Neuropsychiatry, 18(1-2): 146–151.
- –––, 2013b, “Self-Deception,” in International Encyclopedia of Ethics, H. LaFollette (ed.), New York: Wiley-Blackwell.
- –––, 2009, “Self-Deception Won’t Make You Happy,” Social Theory and Practice, 35(1): 107–132.
- –––, 2007a, “The Spandrels of Self-deception: Prospects for a biological theory of a mental phenomenon,” Philosophical Psychology, 20(3): 329–348.
- –––, 2007b, “The Product of Self-Deception,” Erkenntnis, 67(3): 419–437.
- von Hippel, W. and Trivers, R., 2011, “The Evolution and Psychology of Self-Deception,” Behavioral and Brain Sciences, 34(1): 1–56.
- Whisner, W., 1993, “Self-Deception and Other-Person Deception,” Philosophia, 22: 223–240.
- –––, 1989, “Self-Deception, Human Emotion, and Moral Responsibility: Toward a Pluralistic Conceptual Scheme,” Journal for the Theory of Social Behaviour, 19: 389–410.
The author would like to thank Margaret DeWeese-Boyd and Douglas Young and the editors for their help in constructing and revising this entry.