“Implicit bias” is a term of art referring to relatively unconscious and relatively automatic features of prejudiced judgment and social behavior. While psychologists in the field of “implicit social cognition” study “implicit attitudes” toward consumer products, self-esteem, food, alcohol, political values, and more, the most striking and well-known research has focused on implicit attitudes toward members of socially stigmatized groups, such as African-Americans, women, and the LGBTQ community. For example, imagine Frank, who explicitly believes that women and men are equally suited for careers outside the home. Despite his explicitly egalitarian belief, Frank might nevertheless implicitly associate women with the home, and this implicit association might lead him to behave in any number of biased ways, from trusting feedback from female co-workers less to hiring equally qualified men over women. Psychological research on implicit bias is relatively recent (§1), but a host of metaphysical (§2), epistemological (§3), and ethical questions (§4) about implicit bias are pressing.
- 1. Introduction: History, Methods, and Models of Implicit Social Cognition
- 2. Metaphysics
- 3. Epistemology
- 4. Ethics
- 5. Future Research
- Academic Tools
- Other Internet Resources
- Related Entries
While Allport’s (1954) The Nature of Prejudice remains a touchstone for psychological research on prejudice, the study of implicit social cognition has two distinct and more recent sets of roots. The first stems from the distinction between “controlled” and “automatic” information processing made by cognitive psychologists in the 1970s (e.g., Shiffrin & Schneider 1977). While controlled processing was thought to be voluntary, attention-demanding, and of limited capacity, automatic processing was thought to unfold without attention, to have nearly unlimited capacity, and to be hard to suppress voluntarily (Payne & Gawronski 2010; see also Bargh 1994). In important early work on implicit cognition, Fazio and colleagues showed that attitudes can be understood as activated by either controlled or automatic processes. In Fazio’s (1995) “sequential priming” task, for example, following exposure to social group labels (e.g., “black”, “women”, etc.), subjects’ reaction times (or “response latencies”) to stereotypic words (e.g., “lazy” or “nurturing”) are measured. People respond more quickly to concepts closely linked together in memory, and most subjects in the sequential priming task are quicker to respond to words like “lazy” following exposure to “black” than “white”. Researchers standardly take this pattern to indicate a prejudiced automatic association between semantic concepts. The broader assumption embedded in this research was that subjects’ automatic responses are “uncontaminated” by controlled or strategic responding (Amodio & Devine 2009).
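The inferential logic of a sequential priming task can be illustrated with a minimal sketch: a biased association is inferred when mean response latencies to a stereotypic target word are reliably shorter following one group label than another. The data and helper function below are hypothetical illustrations, not part of Fazio’s actual protocol.

```python
# Illustrative sketch of sequential-priming analysis (hypothetical data).
# Bias is inferred when responses to stereotype-consistent prime-target
# pairs are faster, on average, than stereotype-inconsistent pairs.
from statistics import mean

# Each trial: (prime label, target word, response latency in milliseconds)
trials = [
    ("black", "lazy", 520), ("white", "lazy", 610),
    ("black", "lazy", 540), ("white", "lazy", 590),
]

def mean_latency(trials, prime):
    """Mean response latency across all trials with the given prime."""
    return mean(rt for p, _, rt in trials if p == prime)

# A positive priming effect (faster responses to the stereotypic word
# after "black" than after "white") is taken to indicate an automatic
# semantic association between the concepts.
priming_effect_ms = mean_latency(trials, "white") - mean_latency(trials, "black")
print(priming_effect_ms)
```

Real analyses would, of course, involve many trials per condition and significance testing across subjects; the sketch only shows the direction of the comparison.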
While this first stream of research focused on automaticity, a second stream focused on (un)consciousness. Many studies demonstrated that awareness of stereotypes can affect social judgment and behavior in relative independence from how subjects respond on measures of their explicit attitudes (Devine 1989; Devine & Monteith 1999; Dovidio & Gaertner 2004; Greenwald & Banaji 1995; Banaji et al. 1993). These studies were influenced by theories of implicit memory (e.g., Jacoby & Dallas 1981; Schacter 1987), leading to Greenwald & Banaji’s original definition of implicit attitudes as
introspectively unidentified (or inaccurately identified) traces of past experience that mediate favorable or unfavorable feeling, thought, or action toward social objects. (1995: 8)
The guiding idea here, as Dovidio and Gaertner (1986) put it, is that in the modern world prejudice has been “driven underground”.
These issues are still controversial, hence the somewhat vague characterization of implicit social cognition as concerning relatively unconscious and relatively automatic features of judgment and social behavior.
What a person says is not necessarily a good representation of the whole of what she feels and thinks, nor of how she will behave. The central advance of research on implicit social cognition is the apparent ability to measure people’s attitudes without having to ask them directly. These “indirect” measures do not require that participants be aware of what is being measured.
Consider Frank again. His implicit gender bias may be revealed on one of many indirect measures of attitudes, such as sequential priming or the “Implicit Association Test” (IAT; Greenwald et al. 1998). The IAT—the most well-known test of implicit attitudes—is a reaction time measure. In a standard IAT, the subject attempts to sort words or pictures into categories as fast as possible while making as few errors as possible. In the images below, the correct answers would be left, right, left, right.
All images are copyright of Project Implicit and reproduced here with permission.
An IAT score is computed by comparing speed and error rates on the “blocks” (sets of trials) in which the pairing of concepts is consistent with common stereotypes (images 1 and 3) to the blocks in which the pairing of the concepts is inconsistent with common stereotypes (images 2 and 4). If he is typical of most subjects, Frank will be faster and make fewer errors on stereotype-consistent trials than on stereotype-inconsistent trials. While this “gender-career” IAT pairs concepts (e.g., “male” and “career”), other IATs, such as the “race-evaluation” IAT, pair a concept to an evaluation (e.g., “black” and “bad”). Other IATs test implicit attitudes toward body image, age, sexual orientation, and so on. Over 16 million unique participants have taken an IAT as of 2014 (Nosek p.c.). One recent review (Nosek et al. 2007), which tested over 700,000 subjects on the race-evaluation IAT, found that over 70% of white participants more easily associated black faces with negative words (e.g., war, bad) and white faces with positive words (e.g., peace, good). The researchers consider this an implicit preference for white faces over black faces.
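The basic arithmetic of IAT scoring can be sketched along the lines of the D measure (Greenwald et al. 2003): the difference between mean latencies on stereotype-inconsistent and stereotype-consistent blocks, scaled by the pooled variability of responses. The latencies below are invented for illustration, and the published algorithm’s error penalties and trial-exclusion rules are omitted.

```python
# Simplified sketch of IAT scoring in the spirit of the D measure
# (Greenwald et al. 2003). Latencies are hypothetical; the real
# algorithm also penalizes errors and excludes extreme trials.
from statistics import mean, stdev

congruent = [650, 700, 680, 720]     # stereotype-consistent block (ms)
incongruent = [820, 900, 860, 880]   # stereotype-inconsistent block (ms)

# Variability pooled across both blocks, used as the scaling unit.
pooled_sd = stdev(congruent + incongruent)

# A positive D score means faster responding on stereotype-consistent
# pairings, conventionally read as an implicit association in that
# direction; by rough convention, scores above ~0.65 count as "strong".
d_score = (mean(incongruent) - mean(congruent)) / pooled_sd
print(round(d_score, 2))
```

Because the score is a within-subject latency contrast rather than a self-report, it can diverge sharply from what subjects like Frank explicitly avow.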
Although the IAT remains the most popular indirect measure of attitudes, it is far from the only one. Other prominent indirect measures, many of which are derivations of sequential priming, include semantic priming (Banaji & Hardin 1996) and the Affect Misattribution Procedure (AMP; Payne et al. 2005). Also, a “second generation” of categorization-based measures has been developed in order to improve psychometric validity. For example, the Go/No-go Association Task (GNAT; Nosek & Banaji 2001) presents subjects with one target object rather than two in order to determine whether preferences or aversions are primarily responsible for scores on the standard IAT (e.g., on a measure of racial attitudes, whether one has an implicit preference for whites or an implicit aversion to blacks; Brewer 1999). Multinomial (or formal process) models have also been developed in order to identify distinct processes contributing to performance on indirect measures. For example, elderly people tend to show greater bias on the race-evaluation IAT compared with younger people, but this may be due to their having stronger implicit preferences for whites or having weaker control over their biased responding (Nosek et al. 2011). Multinomial models, like the Quadruple Process Model (Conrey et al. 2005), are used to tease apart these possibilities. The Quad model identifies four distinct processes that contribute to responses: (1) the automatic activation of an association; (2) the subject’s ability to determine a correct response (i.e., a response that reflects one’s subjective assessment of truth); (3) overriding automatic associations; and (4) general response biases (e.g., favoring right-handed responses). Each of these variables is an unobserved process, and the Quad model treats them as probabilities.
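The logic of such a multinomial model can be sketched as a small processing-tree calculation in the spirit of the Quad model (Conrey et al. 2005). The parameter names track the four processes just listed, but the function and the numerical values below are illustrative assumptions, not estimates from any study; in practice the parameters are estimated by fitting observed error rates across trial types.

```python
# Simplified sketch of a Quad-style processing tree (after Conrey et
# al. 2005). Parameter values are illustrative, not fitted estimates.
def p_correct(ac, d, ob, g, compatible):
    """Predicted probability of a correct response on one trial type.

    ac: probability the automatic association is activated
    d:  probability the correct response can be determined
    ob: probability the activated association is overcome
    g:  probability a guess lands on the correct response
    """
    if compatible:
        # On compatible trials the activated association happens to
        # give the correct answer, so no overcoming is needed.
        return ac + (1 - ac) * d + (1 - ac) * (1 - d) * g
    # On incompatible trials the correct response must be determined
    # AND the conflicting association overcome.
    return ac * d * ob + (1 - ac) * d + (1 - ac) * (1 - d) * g

# Two hypothetical profiles that could underlie similar observed bias:
# a strong association vs. weak control over biased responding.
strong_assoc = p_correct(ac=0.6, d=0.9, ob=0.8, g=0.5, compatible=False)
weak_control = p_correct(ac=0.3, d=0.9, ob=0.4, g=0.5, compatible=False)
print(round(strong_assoc, 3), round(weak_control, 3))
```

This is how the model can, in principle, distinguish an elderly subject with stronger implicit preferences from one with weaker control: the two profiles make different joint predictions across compatible and incompatible trials even when a single summary score would conflate them.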
Several large reviews have found the IAT in particular to be reliable, relatively unsusceptible to intentional distortion, and, most ominously, predictive of a range of discriminatory behavior, in some cases better than self-report (Greenwald et al. 2003, 2009; Nosek et al. 2005, 2007; Lane et al. 2007). The stronger one’s associations of good with white faces and bad with black faces on the black-white IAT, the more likely one is to perpetrate hiring discrimination (Bertrand et al. 2005); to “shoot” unarmed black men, but not unarmed white men, in a computer simulation (Correll et al. 2002; Glaser & Knowles 2008; see §3.2.1 for further discussion); and to diagnose coronary artery disease in, and prescribe thrombolysis for, white patients described in case vignettes more often than black patients described as having equivalent symptoms and electrocardiogram results (Green et al. 2007). Overall, the IAT appears to predict many distinct kinds of behavior, in particular non-verbal and “micro-behavior” (Valian 1998, 2005; Dovidio et al. 2002; Cortina 2008; Cortina et al. 2011; Brennan 2013) as well as actions undertaken when information is incomplete, decisions need to be made quickly, or agents are stressed or under cognitive load. One reason findings like these are significant is that they may help to explain why racial inequality persists in countries like the United States in which explicit attitudes toward race have changed dramatically over the past 75 years (Banaji & Greenwald 2013). Indeed, in the large study referenced above—run by Nosek and colleagues (2007)—the most frequent answer to the question, “who do you prefer, black people or white people?” was “I have no preference”.
The focus in the psychological literature has been more on measurement than on theorizing. There are, however, several models that attempt to identify the processes that govern the functioning of implicit attitudes.
According to MODE (“Motivation and Opportunity as Determinants”; Fazio 1990; Fazio & Towles-Schwen 1999; Olson & Fazio 2009) and the related MCM (“Meta-Cognitive Model”; Petty 2006; Petty et al. 2007), attitudes are associations between objects and “evaluative knowledge” of those objects. MODE posits a single representation underlying the behavioral effects measured by direct and indirect tests. Thus, MODE denies the distinction between implicit and explicit attitudes. The difference between direct and indirect measures, then, reflects a difference in the control that subjects have over the measured behavior. Control is understood in terms of motivation and opportunity to deliberate. When an agent has low motivation or opportunity to engage in deliberative thought, her automatic attitudes will guide her behavior and judgment. Indirect measures manufacture this situation (of low control due to low motivation and/or opportunity to deliberate).
Influenced by dual process theories of mind, RIM (“Reflective-Impulsive Model”; Strack & Deutsch 2004) and APE (“Associative-Propositional Evaluation”; Gawronski & Bodenhausen 2006, 2011) define implicit and explicit attitudes in terms of distinct operating principles. The central distinction at the heart of both RIM and APE is between “associative” and “propositional” processes. Associative processes are said to underlie an impulsive system that functions according to classic associationist principles of similarity and contiguity. Indirect measures are thought of as assessing the momentary accessibility of elements or nodes of a network of associations. Propositional processes, on the other hand, underlie a reflective system that validates the information provided by activated associations. Direct measures are thought to capture this process of validation, which is said to operate according to agents’ syllogistic reasoning and judgments of logical consistency. In sum, the key distinction between associative and propositional processes according to RIM and APE is that propositional processing alone depends on an agent’s assessment of the truth of a given representation. APE in particular aims to explain the interactions between and mutual influences of associative and propositional processes in judgment and behavior.
Drawing on multi-system models of neuroanatomy, Amodio and colleagues identify three distinct mechanisms underlying implicit social cognition in their “Memory Systems Model” (MSM; Amodio & Ratner 2011). These mechanisms correspond to semantic memory, fear conditioning, and instrumental learning. Each is identified with a distinct neural region (the left PFC and temporal lobe for semantic memory; the amygdala for fear conditioning; and the basal ganglia for instrumental learning) as well as a distinct psychological construct (semantic memory = cognition; fear conditioning = affect; instrumental learning = behavior). While these mechanisms typically work in concert, MSM treats them as distinct constructs, the status or manipulation of which predicts distinct behaviors. Drawing on theories of learning found in computational neuroscience, Huebner (forthcoming) similarly identifies three systems that interact to generate implicit biases: an associative Pavlovian system that triggers approach and withdrawal reactions; an associative “model-free” system that reflexively assigns values to actions based on outcomes of previous actions; and a reflective model-based system that generates forward-looking decision trees. How an agent makes decisions and acts will be determined by the aggregate “voting” patterns of these systems.
Philosophers have divided into roughly two camps with respect to the metaphysics of implicit attitudes. While some argue that implicit attitudes are sui generis associative states (§2.1), others argue that they are beliefs or belief-like mental states (§2.2).
The psychological literature usually describes implicit attitudes as “associations”, but what exactly does this mean? Gendler (2008a,b, 2011, 2012) advances a view according to which the intentional content of implicit attitudes is composed of tightly woven co-activating components. The components themselves—representational (R), affective (A), and behavioral (B)—are drawn from the classic tripartite view of attitudes in social psychology. Gendler argues that these components are “bundled” together or “cluster” in such a way that when an implicitly biased person sees a black face in a particular context, for example, the agent’s representation will automatically activate particular feelings and behaviors (i.e., an R–A–B cluster). This is in contrast to the “combinatoric” nature of ordinary beliefs and desires, that is, that any belief could, in principle, be combined with any desire. Gendler dubs automatically activated clusters of representations, feelings, and behaviors “aliefs”. So while the belief that “that is a black man” is not fixed to any particular feelings or behavior, an alief will have content like, “Black man! Scary! Avoid!”
“To have an alief”, Gendler writes, is
to a reasonable approximation, to have an innate or habitual propensity to respond to an apparent stimulus in a particular way. It is to be in a mental state that is… associative, automatic and arational. As a class, aliefs are states that we share with non-human animals; they are developmentally and conceptually antecedent to other cognitive attitudes that the creature may go on to develop. Typically, they are also affect-laden and action-generating. (2008b: 557, original emphasis; see also 2008a: 641)
According to Gendler, aliefs explain a wide array of otherwise puzzling cases of belief-behavior discordance, including not only implicit bias, but also phobias, fictional emotions, and bad habits (2008b: 554). In fact, Gendler suggests (2008a: 663) that aliefs are causally responsible for much of the “moment-by-moment management” of human behavior, whether that behavior is belief-concordant or not.
A central concern about alief is a “hodgepodge” worry that aliefs do not form a unified kind (Egan 2011; Currie & Ichino 2012; Doggett 2012; Nagel 2012; Mandelbaum 2013). It is important to note that this is not an issue that is limited to debate about alief. In the psychological literature, Amodio & Devine (2006, 2009), for example, distinguish between three fundamental kinds of implicit attitudes: implicit semantic associations, implicit affective evaluations, and implicit behavioral motivations. They go on to argue that interventions to combat implicit bias should be designed to affect these separate kinds of states separately. So why lump together visual appearances, emotional responses, and motor impulses in a single state such as alief, rather than understand them as a cluster of distinct but frequently co-occurring states? Currie and Ichino (2012) suggest that the unity of alief cannot be justified simply in terms of the close causal connections between a particular set of R–A–Bs. Many beliefs and desires are closely connected, but do not thereby constitute a single state. For example, my belief that I am being pursued by a bull will almost invariably be paired with a desire to avoid being impaled on its horns and bull-avoiding behavior. Gendler’s reply is to emphasize the combinatoric nature of beliefs and desires (2012: 805–806). In the charging bull case (which is Gendler’s own example), we can tell whether my running from the bull is belief- or alief-driven by considering whether additional counterfactual information could change my desire to flee from the bull (e.g., if I discovered that death by bull-goring would be a painless path to eternal salvation). If so, then this can be considered a case of belief-guided action; if not, a case of alief-guided action.
Drawing the alief/belief distinction along the axis of evidence-sensitivity raises additional challenges, however. Gendler argues that beliefs reflect what an agent takes to be true while aliefs are yoked to how things merely seem, yet in some cases beliefs appear to be insensitive to changes in evidence (e.g., Ross et al. 1975) and alief-like states appear to track changes in the world quite well. Commentators on the alief/belief distinction have pointed to “intelligent” alief-like states involved in skill (Schwitzgebel 2010), pattern-tracking (Nagel 2012), jazz improvisation (Currie & Ichino 2012), and ethical action (Brownstein & Madva 2012a). Rather than conclude that cases of intelligent alief call for abandoning the alief/belief distinction, however, it is also possible to propose an amended conception of the intentional content of alief in order to account for the ability of our automatic responses to flexibly modify and improve over time (Brownstein & Madva 2012b; Madva 2012).
An alternative to drawing the distinction between implicit and explicit attitudes along the axis of evidence-sensitivity is to draw it along the axis of integration with an agent’s person-level attitudes. Levy (2012, forthcoming) pursues this approach, arguing that implicit attitudes are sui generis states that he calls “patchy endorsements”. What distinguishes patchy endorsements from ordinary mental states like beliefs is that they fail to respond to the semantic contents of other mental states in a systematic way. For example, implicit attitudes are implicated in behaviors for which it is difficult to give an inferential explanation (e.g., Dovidio et al. 1997) and implicit attitudes change in response to irrelevant information (e.g., Gregg et al. 2006; Han et al. 2006). What’s more, implicit attitudes fail in some cases to respond properly to the logical form of semantic statements. Subjects have been shown to form equivalent implicit attitudes on the basis of information and the negation of that information (e.g., Gawronski et al. 2008). This failure to process negation is a challenge for any belief-based account of implicit attitudes (Levy forthcoming; Madva ms c).
Others have argued that familiar notions of belief, desire, and pretense can in fact explain what notions like alief are meant to elucidate (Egan 2011; Kwong 2012; Mandelbaum 2013). Most defend some version of what Schwitzgebel (2010) calls Contradictory Belief (Egan 2008, 2011; Huebner 2009; Gertler 2011; Huddleston 2012; Muller & Bashour 2011; Mandelbaum 2013, 2014, forthcoming). Drawing upon theories of the “fragmentation” of the mind (Lewis 1982; Stalnaker 1984), Contradictory Belief holds that an agent’s implicit and explicit attitudes both reflect what she believes, and that these different sets of beliefs may be causally responsible for different behaviors in different contexts (Egan 2008).
In the psychological literature, De Houwer and colleagues defend a propositional account of implicit attitudes that can be thought of as supporting Contradictory Belief (Mitchell et al. 2009; Hughes et al. 2011; De Houwer forthcoming). On this model, propositions have three defining features: (1) propositions are statements about the world that specify the nature of the relation between concepts (e.g., “I am good” and “I want to be good” are propositions that involve the same two concepts—“me” and “good”—but differ in the way that the concepts are related); (2) propositions can be formed rapidly on the basis of instructions or inferences; and (3) subjects are conscious of propositions (De Houwer forthcoming). On the basis of data showing that implicit attitudes purportedly meet these conditions—for example, implicit attitudes can be formed on the basis of one-shot instruction as well as inference—De Houwer (forthcoming) claims that implicit attitudes are propositional states (i.e., beliefs). This claim represents an application of Mitchell and colleagues’ (2009) broader argument that all learning is propositional (i.e., there is no case in which learning is the result of the automatic associative linking of mental representations).
One version of the generic belief interpretation claims that implicit biases are better understood as cognitive “schemas” than as evaluative attitudes. Schemas are clusters of culturally shared concepts and beliefs. More precisely, schemas are abstract knowledge structures that specify the defining features and attributes of a target (Fiske & Linville 1980). The term “mother”, for example, invokes a schema that attributes a collection of attributes to the person so labelled (Haslanger 2013). Schemas are distinct from attitudes in the psychological sense in that they are “coldly” cognitive (Valian 2005). They are tools for social categorization; while schemas may help to organize and interpret feelings and motivations, they are themselves affectless. One advantage of focusing on schemas is that doing so emphasizes that implicit bias is not a matter of straightforward antipathy toward members of socially stigmatized groups.
Another version of the generic belief approach stems from recent work in the philosophy of language. This approach focuses on stereotypes that generalize extreme or horrific behavior from a few individuals to whole groups. Generalizations such as “pit bulls maul children” or “Muslims are terrorists” can be thought of as a particular kind of generic statement, which Leslie (forthcoming) calls a “striking property generic”. This subclass of generics is defined by having predicates that express properties that people typically have a strong interest in avoiding. Building on earlier work on the cognitive structure and semantics of generics (Leslie 2007, 2008), Leslie notes a particularly insidious feature of social stereotyping: even if just a few members of what is perceived to be an essential kind (e.g., pit bulls, Muslims) exhibit a harmful or dangerous property, then a generic that attributes the property to the kind likely will be judged to be true. This is only the case with striking properties, however. As Leslie (forthcoming) points out, it takes far fewer instances of murder for one to be considered a murderer than it does instances of anxiety to be considered a worrier. Striking property generics may thus illuminate some social stereotypes (e.g., “black men are rapists”) better than others (e.g., “black men are athletic”). Beeghly (2014), however, construes generics as expressions of cognitive schemas, which may broaden the scope of explanation by way of generic statements. In all of these cases, generics involve an array of doxastic properties. Generics involve inferences to dispositions, for example (Leslie forthcoming). That is, generic statements about striking properties will usually be judged true if and only if some members of the kind possess the property and other members of the kind are judged to be disposed to possess it.
Building upon Ryle (1949/2009), Schwitzgebel (2006/2010, 2010, 2013) argues that attitudes (in the philosophical sense, e.g., beliefs) have a broad (or “multitrack”) profile, the content of which is determined by the folk-psychological stereotype for having that attitude. Agents with implicit biases pose an interesting challenge to this dispositional approach, since these agents often match only part of the relevant folk-psychological stereotypes. Schwitzgebel’s solution is a “gradualist dispositionalism”: Frank (§1) “in-between believes” that men and women are equally suited for careers outside the home. On this view, “believe” is a vague predicate that admits of in-between cases. Frank’s bias is best described, not in terms of what’s inside his metaphoric “belief box”, but rather, in terms of the relevant suite of his personality traits. The advantage of this approach is that personality attributions readily admit of vague cases. Just as we might say that Frank is partly agreeable if he extols the virtues of compassion yet sometimes treats strangers rudely, we should say that Frank is partly prejudiced.
A related trait-based approach treats the results of indirect measures as reflective of elements of attitudes rather than as indicative of attitudes themselves (Machery forthcoming). On this view, there is no such thing as an implicit attitude. Attitudes (in the psychological sense, i.e., likings and dislikings) are understood as broad-track dispositions to perceive, attend, cognize, and behave in particular ways in particular contexts (i.e., they are traits). Indirect measures, such as the IAT, quantify components of the “psychological bases” of attitudes. This is meant to explain the somewhat low correlations between different indirect measures (§1.2); each measure quantifies different psychological bases of attitudes toward a particular target. The full trait, such as “liking white people”, is comprised of these and other psychological bases (such as those quantified by direct measures).
The most explicit defense of Contradictory Belief has been via a theory of “Spinozan Belief Fixation” (SBF; Gilbert 1991; Egan 2008, 2011; Huebner 2009; Mandelbaum 2011, 2013, 2014, forthcoming). Proponents of SBF are inspired by Spinoza’s rejection of the concept of the will as a cause of free action (Huebner 2009: 68), an idea embodied in what they call the theory of “Cartesian Belief Fixation” (CBF). CBF holds that ordinary agents are capable of evaluating the truth of an idea (or representation, or proposition) delivered to the mind (via sensation or imagination) before believing or disbelieving it. In other words, according to CBF, agents can choose, via deliberation or judgment, to believe or disbelieve P. SBF, on the other hand, holds that as soon as an idea is presented to the mind, it is believed. Beliefs on this view are understood to be unconscious propositional attitudes that are formed automatically as soon as an agent registers or tokens their content. For example, one cannot entertain or consider or imagine the proposition that “dogs are made out of paper” without immediately and unavoidably believing that dogs are made out of paper, according to SBF (Mandelbaum 2014). More pointedly, one cannot entertain or imagine the stereotype that “women are bad at math” without believing that women are bad at math. As Mandelbaum (2014) puts it, the automaticity of believing according to SBF explains why people are likely to have many contradictory beliefs; in order to reject P, one must already believe P.
One line of argument for SBF’s account of implicit attitudes focuses on the fact that these states appear to be unresponsive in some cases to the kinds of reinforcement-learning-based interventions that ought to affect associatively-structured states. At the same time, implicit attitudes appear to be responsive in some cases to the kinds of logic- and persuasion-based interventions that ought to affect doxastic states (Mandelbaum 2013, forthcoming). Another line of argument advanced by Mandelbaum (forthcoming) focuses on dissonance literature and balance theories (e.g., Heider 1958; Aronson & Cope 1968; Walther 2002). For example, subjects who have negative implicit attitudes toward A (a fictitious person) will have positive implicit attitudes toward B if they are told that A dislikes B (Gawronski et al. 2005), which suggests that subjects may be reasoning that “the enemy of my enemy is my friend”. These interpretations of the data are controversial, however. It is not yet clear if implicit attitudes really are more responsive to argument than reinforcement learning or if inference-making really is present in the dissonance literature in the way Mandelbaum suggests. What is also not yet clear is whether and how evidence for propositional influences on implicit attitudes is evidence for propositional structure, or whether and how evidence of propositional structure is evidence of these states being beliefs (Levy forthcoming; Madva ms c). Finally, broader questions about SBF focus on its revisionism with respect to the ordinary concept of belief (e.g., that belief plays an important normative role in the guidance of action).
Philosophical work on the epistemology of implicit bias has focused on three related questions. First, do we have knowledge of our own implicit biases, and if so, how? Second, do the emerging data on implicit bias demand that we become skeptics about our perceptual beliefs or our overall status as epistemic agents? And third, are we faced with a dilemma between our epistemic and ethical values due to the pervasive nature of implicit bias?
Implicit attitudes are typically thought of as unconscious states, but what exactly does this mean? There are several possibilities: there might be no phenomenology associated with occurrent implicit attitudes; agents might be unaware of the content of their implicit attitudes, the source of their implicit attitudes, or the effects of their implicit attitudes on their behavior; and agents might be unaware of the relations in which their implicit attitudes stand to other attitudes and beliefs. Gawronski and colleagues (2006) argue that agents typically lack “source” and “impact” awareness of their implicit attitudes, but typically have “content” awareness. Evidence for content awareness stems from “bogus pipeline” experiments (e.g., Nier 2005) in which participants are led to believe that inaccurate self-reports will be detected by the experimenter. In these experiments, participants’ implicit and explicit attitudes come to be more closely correlated, suggesting that participants are aware of the content of the attitudes detected by indirect measures and shift their reports when they believe that the experimenter will notice discrepancies between their implicit and explicit attitudes. Additional evidence for content awareness is found in studies in which experimenters bring indirect measures and self-reports into conceptual alignment (e.g., Banse et al. 2001) and studies in which agents are asked to predict their own implicit attitudes (Hahn et al. 2013).
These data do not determine whether agents come to be aware of the content of their implicit attitudes through introspection or through inference from other sources (e.g., by reading articles like this one or by drawing inferences from one’s own behavior). This distinction is relevant for determining whether the awareness agents have of the content of their implicit attitudes constitutes knowledge of those attitudes. If our awareness of the content of our implicit attitudes derives from inferences we make based on (for example) our behavior, then the question is whether these inferences are justified, assuming knowledge entails justified true belief. Some have suggested that the facts about implicit bias warrant a “global” skepticism toward our capacities as epistemic agents (Saul 2012; see §3.2.2). If this is right, then we ought to worry that our own inferences about the content of our implicit attitudes from our behavior are likely to be unjustified. Others, however, have argued that people are typically very good interpreters of their own attitudes (e.g., Carruthers 2009; Levy 2012), in which case it may be more likely that our inferences about the content of our implicit attitudes are well-justified. But whether the inferences we make about our own attitudes are well-justified would be moot if it were shown that we have direct introspective access to those attitudes.
One sort of skeptical worry stems from research on the effects of implicit bias on perception (§3.2.1). This leads to a worry about the status of our perceptual beliefs. A second kind of skeptical worry focuses on what implicit bias may tell us about our capacities as epistemic agents in general (§3.2.2).
Compared with participants who were first shown pictures of white faces, those who were primed with black faces in Payne (2001) were faster to identify pictures of guns as guns and were more likely to misidentify pictures of tools as guns. This finding has been directly and conceptually replicated many times (e.g., Payne et al. 2002; Conrey et al. 2005) and is an instance of a broader set of findings about the effects of attitudes and beliefs on perception (e.g., Barrick et al. 2002; Proffitt 2006). Payne’s findings are chilling particularly in light of police shootings of unarmed black men in recent years, such as Amadou Diallo and Oscar Grant. The findings suggest that agents’ implicit associations between “black men” and “guns” affect their judgment and behavior by affecting what they see. This may be cause for a particular kind of epistemic concern. As Siegel (2012, 2013) puts it, the worry is that implicit bias introduces a circular structure into belief formation. If an agent believes that black men are more likely than white men to have or use guns, and this belief causes the agent to more readily see ambiguous objects in the hands of black men as guns, then when the agent relies upon visual perception as evidence to confirm her beliefs, she will have moved in a vicious circle.
Whether implicit attitudes are cause for this sort of epistemic concern depends on what sort of causal influence social attitudes have on visual perception. Payne’s weapons bias findings would be a case of cognitive penetration if the black primes make the images of tools look like images of guns. This would certainly introduce a circular structure in belief formation. But other scenarios raise the possibility of illicit belief formation without genuine cognitive penetration. Consider what Siegel calls “perceptual bypass”: the black primes do not cause the tools to look like guns (i.e., the prime does not cause a change in perceptual experience), yet the agent believes nevertheless that what she sees is a gun (i.e., the prime causes a change in perceptual judgment), perhaps because she is in a heightened state of anxiety, which in turn causes her to notice the gun-like elements of her visual array. This will count as a case of illicit belief formation inasmuch as the agent’s social attitudes cause her to be insensitive to her visual stimuli in a way that confirms her antecedent attitudes (Siegel 2012). One scenario that would allay the worry about illicit belief formation, however, is if the change in perception is due to what Siegel calls a “global selection effect”. For example, no circular causal influence on her beliefs would obtain if the white prime causes the agent to make a controlled decision to focus more carefully when identifying the ambiguous objects. In this case the agent would simply be processing different sensory information due to her choice to focus more carefully. Empirical evidence can help to sort through these possibilities, though perhaps not settle between them conclusively.
A broader worry is that research on implicit bias should cause agents to mistrust their knowledge-seeking faculties in general. “Bias-related doubt” (Saul 2012) is stronger than traditional forms of skepticism (e.g., external world skepticism) in the sense that it suggests that our epistemic judgments are not just possibly but often likely mistaken. Implicit biases are likely to degrade our judgments across many domains, e.g., professors’ judgments about student grades, journal submissions, and job candidates. Moreover, as Fricker (2007) points out, the testimony of members of stigmatized groups is likely to be discounted due to implicit bias, which, Saul suggests, can magnify these epistemic failures as well as create others, such as failing to recognize certain questions as relevant for inquiry (Hookway 2010). The key point about these examples is that our judgments are likely to be affected by implicit biases even when “we think we’re making judgments of scientific or argumentative merit” (Saul 2012: 249). Moreover, unlike errors of probabilistic reasoning, these effects generalize across many areas of day-to-day life. We should be worried, Saul argues,
whenever we consider a claim, an argument, a suggestion, a question, etc from a person whose apparent social group we’re in a position to recognize. (Saul 2012: 250).
Bias-related doubt may be diminished if successful interventions can be developed to correct for epistemic errors caused by implicit bias. In some cases, the fix may be simple, such as anonymous review of job candidate dossiers. But other contexts will certainly be more challenging. More generally, Saul’s account of bias-related doubt takes a strongly pessimistic stance toward the normativity of our unreflective habits. “It is difficult to see”, she writes, “how we could ever properly trust [our habits] again once we have reflected on implicit bias” (2012: 254). Others, however, have stressed the ways in which unreflective habits can have epistemic virtues (e.g., Arpaly 2004; Railton 2014; Brownstein & Madva 2012a,b; Nagel 2012; Antony forthcoming). Squaring the reasons for pessimism about the epistemic status of our habits with these streams of thought will be important in future research.
Gendler (2011) and Egan (2011) argue that implicit bias creates a conflict between our ethical and epistemic aims. Concern about ethical/epistemic dilemmas is at least as old as Pascal, as Egan points out, but is also incarnated in contemporary research on the value of positive illusions (e.g., “I am brilliant!”), which may promote well-being despite being false (e.g., Taylor & Brown 1988). The dilemma surrounding implicit bias stems from the apparent unavoidability of stereotyping, which Gendler traces to the way in which social categorization is fundamental to our cognitive capacities. For agents who disavow common social stereotypes for ethical reasons, this creates a conflict between what we know and what we value. As Gendler puts it,
if you live in a society structured by racial categories that you disavow, either you must pay the epistemic cost of failing to encode certain sorts of base-rate or background information about cultural categories, or you must expend epistemic energy regulating the inevitable associations to which that information—encoded in ways to guarantee availability—gives rise. (2011: 37)
Gendler offers four examples: cross-race recognition deficits; stereotype threat; cognitive depletion following interracial interactions; and “forbidden” base-rates. The first two examples are notably harmful to targets of bias. The fact that agents are better at recognizing and remembering faces of members of their ingroup than faces of outgroups (e.g., Meissner & Brigham 2001) can lead, for example, to discrimination in eyewitness testimony (e.g., Levinson 2007; Kang et al. 2012). In this case, the epistemic hazard is the loss of individuating information about members of outgroups. Stereotype threat—or the threat of confirming a stereotype about a group to which one belongs—leads to impaired performance across a wide variety of domains when one’s membership in a social group is brought to mind (e.g., Steele & Aronson 1995). Here too it seems that social knowledge leads to bad outcomes; stereotype threat leads members of stigmatized groups to lose access to, or confidence in, their own knowledge in high-stakes situations (Gendler 2011: 49–50). Gendler’s third example—cognitive depletion following interracial interactions (e.g., Trawalter & Richeson 2006; Richeson & Shelton 2007)—affects perpetrators of bias. In short, when white subjects interact with black confederates, they perform more poorly than controls on subsequent tests of cognitive control (e.g., a Stroop Task). The final example—forbidden base rates, or useful statistical generalizations that utilize problematic social knowledge—is a case in which everyone involved suffers. For example, participants who are asked to set insurance premiums for hypothetical neighborhoods will accept actuarial risk as a justification for setting higher premiums for particular neighborhoods but will not do so if they are told that actuarial risk is correlated with the racial composition of that neighborhood (Tetlock et al. 2000). 
This “epistemic self-censorship on non-epistemic grounds” makes it putatively impossible for agents to be both rational and equitable (Gendler 2011: 55, 57).
Egan (2011) raises problems for intuitive ways of defusing this dilemma, settling instead on the idea that making epistemic sacrifices for our ethical values may simply be worth it. Others have been more reluctant to accept that implicit bias does in fact create an unavoidable ethical-epistemic dilemma (Mugg 2013; Beeghly 2014; Madva forthcoming). One way of defusing the dilemma, for example, is to suggest that it is not social knowledge per se that has costs, but rather that the accessibility of social knowledge in the wrong circumstances has cognitive costs (Madva forthcoming). The solution to the dilemma, then, is not ignorance, but the situation-specific regulation of stereotype accessibility. For example, the accessibility of social knowledge can be regulated by agents’ goals and habits (Moskowitz & Li 2011).
Most philosophical writing on the ethics of implicit bias has focused on two distinct (but related) questions. First, are agents morally responsible for their implicit biases (§4.1)? Second, can agents change their implicit biases and/or control the effects of these attitudes on their explicit judgments and behavior (§4.2)?
Researchers working on moral responsibility for implicit bias often make two key distinctions. First, they distinguish responsibility for attitudes from responsibility for judgments and behavior. One can ask whether agents are responsible for their implicit attitudes as such, that is, or whether agents are responsible for the effects of their implicit attitudes on their judgments and behavior. Most have focused on the latter question, as will I. A second important distinction is between being responsible and holding responsible. This distinction can be glossed in a number of non-equivalent but related ways. It can be glossed as a distinction between blameworthiness and actual expressions of blame; between backward- and forward-looking responsibility (i.e., responsibility for things one has done in the past versus responsibility for doing certain things in the future); and between responsibility as a form of judgment versus responsibility as a form of sanction. Most have focused on the former member of each of these pairs (being responsible, blameworthiness, etc.) via three kinds of approaches: arguments from the importance of awareness or knowledge of one’s implicit attitudes (§4.1.1); arguments from the importance of control over the impact of one’s implicit attitudes on one’s judgment and behavior (§4.1.2); and arguments from “attributionist” and “Deep Self” considerations (§4.1.3).
It is plausible that conscious awareness of our implicit biases is a necessary condition for moral responsibility for those biases. Saul articulates the intuitive idea, suggesting that we
abandon the view that all biases against stigmatised groups are blameworthy … [because a] person should not be blamed for an implicit bias that they are completely unaware of, which results solely from the fact that they live in a sexist culture. (2013: 55, emphasis in original)
Saul’s claim appears to be in keeping with folk psychological attitudes about blameworthiness and implicit bias. Cameron and colleagues (2010) found that subjects were considerably more willing to ascribe moral responsibility to “John” when he was described as acting in discriminatory ways against blacks despite “thinking that people should be treated equally, regardless of race” compared to when he was described as acting in discriminatory ways despite having a “sub-conscious dislike for African Americans” that he is “unaware of having”.
Recalling Gawronski and colleagues’ claim that agents often do have content awareness of their implicit attitudes (§3.1), it would seem that typical agents are responsible for their implicit biases on the basis of the argument from awareness. However, if the question is whether agents are blameworthy for behaviors affected by implicit biases (rather than for having biases themselves), then perhaps impact awareness is what matters most (Holroyd 2012). That said, lacking impact awareness of the effects of implicit bias on our behavior may not exculpate agents from responsibility even in principle. One possibility is that implicit biases are analogous to moods in the sense that being in an introspectively unnoticed bad mood can cause one to act badly (Madva ms b). There is debate about whether unnoticed moods are exculpatory (e.g., Korsgaard 1997; Levy 2011). One possibility is that bad moods and implicit biases both diminish blameworthiness, but do not undermine it as such. This claim depends in part on moral responsibility admitting of degrees.
One problem with focusing on impact awareness, however, as Holroyd (2012) points out, is that we may be unaware of the impact of a great many cognitive states on our behavior. The focus on impact awareness may lead to a global skepticism about moral responsibility, in other words. This suggests that impact awareness may not serve as a good criterion for distinguishing responsibility for implicit attitudes from responsibility for other cognitive states, regardless of whether global skepticism about moral responsibility is defensible.
A second way to unpack the argument from awareness is to focus on what agents ought to know about implicit bias, rather than what they do know. This approach indexes moral responsibility to one’s social and epistemic environment. For example, Kelly & Roedder (2008) argue that a “savvy grader” is responsible for adjusting her grades to compensate for her likely biases because she ought to be aware of and compelled by research on implicit bias. In a similar spirit, Washington & Kelly (forthcoming) compare two hypothetical egalitarians with equivalent psychological profiles, the only difference between them being that the “Old School Egalitarian” is evaluating résumés in 1980 and the “New Egalitarian” is doing so in 2014. While neither has heard of implicit bias, Washington & Kelly argue that the New Egalitarian is morally culpable in a way that the Old School Egalitarian isn’t. Only the New Egalitarian could have, and ought to have, known about his likely implicit biases, given the comparative states of the art of psychological research in 1980 and 2014. The underlying intuition here is that assessments of responsibility change with changes in an agent’s social and epistemic environment.
A third way of unpacking the argument from awareness is to focus on the way in which an attitude does or does not integrate with a variety of the agent’s other attitudes once it becomes conscious (Levy 2012; see §2.1). On this view, attitudes that cause responsible behavior are available to a broad range of cognitive systems. For example, in cognitive dissonance experiments (e.g., Festinger 1956), agents attribute confabulatory reasons to themselves and then tend to act in accord with those self-attributed reasons. The self-attribution of reasons in this case, according to Levy (2012), has an integrating effect on behavior, and thus can be thought of as underwriting the sort of agency required for moral responsibility. Crucially, it is when the agent becomes conscious of her self-attributed reasons that they have this integrating effect. This provides grounds for claiming that attitudes for which agents are responsible are those that integrate behavior when the agent becomes aware of the content of those attitudes. Implicit attitudes are not like this, according to Levy. What’s morally important is that
awareness of the content of our implicit attitudes fails to integrate them into our person level concerns in the manner required for direct moral responsibility. (Levy 2012: 9).
The fact that implicit attitudes are often defined in contrast to “controlled” cognitive processes (§§1.3.1–1.3.2) implies that these states may affect behavior in a way that bypasses a person’s agential capacities. The fact that implicit biases seem to “rebound” in response to intentional efforts to suppress them supports this interpretation (Huebner 2009; Follenfant & Ric 2010). Early research suggesting that implicit attitudes reflect mere awareness of stereotypes, rather than personal attitudes, also implies that these states reflect processes that “happen to” agents. More recently, however, philosophers have questioned the ramifications of these and other data for the notion of control relevant to moral responsibility.
Perhaps the most familiar way of understanding control in the responsibility literature is in terms of a psychological mechanism that would allow an agent to act differently than she otherwise would act when there is sufficient reason to do so (Fischer & Ravizza 2000). The question facing this sort of reasons-responsiveness view of control is whether automatized behaviors—which unfold in the absence of explicit reasoning—should be thought of as under an agent’s control. Some have argued that automaticity and control are not mutually exclusive. Holroyd & Kelly (forthcoming) advance a notion of “ecological control”, and Suhler and Churchland (2009) offer an account of nonconscious control that underwrites automaticity itself, yet is ostensibly sufficient for underwriting responsibility. Others have distinguished between automaticity and automatisms (e.g., sleepwalking), with some drawing the relevant moral distinction in terms of agents’ ability to “pre-program” their automatic actions (but not automatistic actions) via previous controlled choices (e.g., Wigley 2007), and others drawing the distinction in terms of agents’ ability to consciously monitor their automatic actions (e.g., Levy & Bayne, 2004). Others still have distinguished between “indirect” and “direct” control over one’s attitudes or behavior (e.g., Holroyd 2012; Levy & Mandelbaum forthcoming; Sie & Voorst Vader-Bours forthcoming). Holroyd (2012) argues that there are many things over which we do not hold direct and immediate control, yet for which we are commonly held responsible, such as learning a skill, speaking a foreign language, and even holding certain beliefs. None of these abilities or states can be had by fiat of will; rather, they take time and effort to obtain. This suggests that we can be held responsible for attitudes or behaviors over which we only have indirect long-range control. 
The question, then, of course, is whether agents can exercise indirect long-range control over their implicit biases. Mounting evidence suggests that we can (§4.2).
“Attributionist” and Deep Self theories of moral responsibility represent an alternative to arguments from awareness and control. According to these theories, for an agent to be responsible for an action is for that action to “reflect upon” the agent “herself”. A common way of speaking is to say that responsibility-bearing actions are attributable to agents in virtue of reflecting upon the agent’s “deep self”, where the deep self represents the person’s fundamental evaluative stance (Sripada ms). Although there is much disagreement in the literature about what the deep self really is, as well as what it means for an attitude or action to reflect upon it, attributionists agree that people can be morally responsible for actions that are non-conscious (e.g., “failure to notice” cases), non-voluntary (e.g., actions stemming from strong emotional reactions), or otherwise divergent from an agent’s will (Frankfurt 1971; Watson 1975, 1996; Scanlon 1998; A. Smith 2005, 2008, 2012; Hieronymi 2008; Sher 2009; and H. Smith 2011).
One influential view developed in recent years is that agents are responsible for just those actions or attitudes that stem from, or are susceptible to modification by, the agent’s “evaluative” or “rational” judgments, which are judgments for which it is appropriate (in principle) to ask the agent her reasons (in a justifying sense) for holding (Scanlon 1998; A. Smith 2005, 2008, 2012). A. Smith suggests that implicit biases stem from rational judgments, because
a person’s explicitly avowed beliefs do not settle the question of what she regards as a justifying consideration. (2012: 581–582, fn 10)
An alternative approach sees the source of the “deep self” in an agent’s “cares” rather than in her rational judgments (Shoemaker 2003, 2011; Jaworska 2007; Sripada ms). Cares have been described in different ways, but in this context are thought of as psychological states with motivational, affective, and evaluative dispositional properties. It is an open question whether implicit attitudes are reflective of an agent’s cares (Brownstein ms). It is also possible that even in cases in which an implicit attitude or a concomitant action is not attributable to an agent’s deep self, it may still be appropriate to hold the agent responsible for violating some duty or obligation she holds due to her implicit biases (Zheng forthcoming). Glasgow (forthcoming) similarly argues for responsibility for implicit biases that may not be attributable to agents. His view unfolds in terms of responsibility for actions from which agents are nevertheless alienated. Glasgow defends this view on the basis of “Content-Sensitive Variantism” and “Harm-Sensitive Variantism”, a pair of views according to which alienation exculpates depending on extra-agential features of an action, such as the content of the action or the kind of harm it creates. These variantist views are fairly strongly revisionist with respect to traditional conceptions of responsibility in the 20th century philosophical literature. Some have argued that research on implicit bias calls for revisionism of this sort (Vargas 2005; Faucher forthcoming).
Researchers working in applied ethics may be less concerned with questions about in-principle culpability and more concerned with investigating how to change or control our implicit biases. Of course, anyone committed to fighting against prejudice and discrimination will share this interest. Policymakers and workplace managers should also be concerned with finding effective interventions, given that they are already directing tremendous public and private resources toward anti-discrimination programs in workplaces, universities, and other domains affected by intergroup conflict. Yet as Paluck and Green (2009) suggest, the effectiveness of many of the strategies commonly used remains unclear. Most studies on prejudice reduction are non-experimental (lacking random assignment), are performed without control groups, focus on self-report surveys, and gather primarily qualitative (non-quantitative) data.
An emerging body of laboratory-based research suggests that strategies are available for regulating implicit biases, however. One way to classify these strategies is in terms of those that purport to change the apparent associations underlying agents’ implicit attitudes, compared with those that purport to leave implicit associations intact but enable agents to control the effects of those attitudes on judgment and behavior (Stewart & Payne 2008; Mendoza et al. 2010; Lai et al. 2013). For example, a “change-based” strategy might reduce individuals’ automatic associations of “white” with “good” while a “control-based” strategy might enable individuals to prevent that association from affecting their behavior. Below I briefly describe some of these interventions. For comparison of the data on their effectiveness, see Lai and colleagues (2014).
- Intergroup contact (Aberson et al. 2008; Dasgupta & Rivera 2008):
- long studied for its effects on explicit prejudice (e.g., Allport 1954; Pettigrew & Tropp 2006), interaction between members of different social groups appears to diminish implicit prejudice as well.
- Approach training (Kawakami et al. 2007, 2008; Phills et al. 2011):
- participants repeatedly “negate” stereotypes and “affirm” counter-stereotypes by pressing a button labelled “NO!” when they see stereotype-consistent images (e.g., of a black face paired with the word “athletic”) or “YES!” when they see stereotype-inconsistent images (e.g., of a white face paired with the word “athletic”). Other experimental scenarios have had participants push a joystick away from themselves to “negate” stereotypes and pull the joystick toward themselves to “affirm” counter-stereotypes.
- Evaluative conditioning (Olson & Fazio 2006; De Houwer 2011):
- a widely used technique whereby an attitude object (e.g., a picture of a black face) is paired with another valenced attitude object (e.g., the word “genius”), which shifts the valence of the first object in the direction of the second.
- Counter-stereotype exposure (Blair et al. 2001; Dasgupta & Greenwald 2001):
- increasing individuals’ exposure to images, film clips, or even mental imagery depicting members of stigmatized groups acting in stereotype-discordant ways (e.g., images of female scientists).
- Implementation intentions (Gollwitzer & Sheeran 2006; Stewart & Payne 2008; Mendoza et al. 2010; Webb et al. 2012):
- “if-then” plans that specify a goal-directed response that an individual plans to perform on encountering an anticipated cue. For example, in a “Shooter Bias” test, where participants are given the goal to “shoot” all and only those individuals shown holding guns in a computer simulation, participants may be asked to adopt the plan, “if I see a black face, I will think ‘safe!’” (Mendoza et al. 2010).
- “Cues for control” (Monteith 1993; Monteith et al. 2002):
- techniques for noticing prejudiced responses, in particular the affective discomfort caused by the inconsistency of those responses with participants’ egalitarian goals.
- Priming goals, moods, and motivations (Huntsinger et al. 2010; Moskowitz & Li 2011; Mann & Kawakami 2012):
- priming egalitarian goals, multicultural ideologies, or particular moods can lower scores of prejudice on indirect measures of attitudes.
There is some doubt about this way of categorizing interventions, as some control-based interventions may also change agents’ underlying associations and some association-based interventions may also promote control (Stewart & Payne 2008; Mendoza et al. 2010). More significant though are concerns about the efficacy of these interventions over time (Olson & Fazio 2006; Mandelbaum ms), their practical feasibility (Bargh 1999; Schneider 2004), and the possibility that they may distract from broader problems of economic and institutional forms of injustice (Anderson 2010; Dixon et al. 2012; see §5). Of course, most of the research on interventions like these is recent, so it is simply not clear yet which strategies, or combination of strategies (Devine et al. 2012), will or won’t be effective. Some have voiced optimism about the role lab-based interventions like these can play as elements of broader efforts to combat prejudice and discrimination (e.g., Kelly et al. 2010a; Madva ms a).
Nosek and colleagues (2011) suggest that the second generation of research on implicit social cognition will come to be known as the “Age of Mechanism”. One important question that the next generation of research may address is whether the mechanisms underlying different forms of implicit bias (e.g., implicit racial biases vs. implicit gender biases) are heterogeneous. Some have already begun to carve implicit social attitudes into kinds (Amodio & Devine 2006; Holroyd & Sweetman forthcoming; Madva & Brownstein ms). Future research on implicit bias in particular domains of social life may also help to illuminate this issue, such as research on implicit bias in legal practices (e.g., Lane et al. 2007; Kang 2009) and in medicine (e.g., Green et al. 2007; Penner et al. 2010), as well as research on implicit intergroup attitudes toward non-black racial minorities, such as Asians and Latinos (Dasgupta 2004), and cross-cultural research on implicit attitudes in non-Western countries (e.g., Dunham et al. 2013a). This research may also come to influence our understanding of the metaphysics of implicit attitudes, which will also surely be shaped by broader open questions about the bonds between cognition and affect in implicit cognition (e.g., Pessoa forthcoming; Madva & Brownstein ms); what’s left when the dust settles in current controversies over social priming techniques; the fate of dual-process theories (e.g., Frankish forthcoming); and data on the development of implicit attitudes in childhood (e.g., Dunham et al. 2013b).
Future research on epistemology and implicit bias may tackle a number of questions, for example: does the testimony of social and personality psychologists about statistical regularities justify believing that you are biased? What can developments in vision science tell us about illicit belief formation due to implicit bias? In what ways is implicit bias depicted and discussed outside academia (e.g., in stand-up comedy focusing on social attitudes)? Also germane are future methodological questions, such as how research on implicit social cognition may interface with large-scale correlational sociological studies on social attitudes and discrimination (Lee forthcoming). Another crucial methodological question is whether and how theories of implicit bias—and more generally psychological approaches to understanding social phenomena—can come to be integrated with broader social theories focusing on race, gender, class, disability, etc. Important discussions have begun (e.g., Valian 2005; Kelly & Roedder 2008; Faucher & Machery 2009; Anderson 2010; Machery et al. 2010; Madva ms a), but there is no doubt that more connections must be drawn to relevant work on identity (e.g., Appiah 2005), critical theory (e.g., Delgado & Stefancic 2012), feminist epistemology (Grasswick 2013), and race and political theory (e.g., Mills 1999).
As with all of the above, questions in theoretical ethics about moral responsibility for implicit bias will certainly be influenced by future empirical research. One noteworthy intersection of theoretical ethics with forthcoming empirical research will focus on the interpersonal effects of blaming and judgments about blameworthiness for implicit bias. This research aims to have practical ramifications for mitigating intergroup conflict as well, of course. On this front, arguably the most pressing question, however, is about the durability of psychological interventions once agents leave the lab. How long will shifts in biased responding last? Will individuals inevitably “relearn” their biases? Is it possible to leverage the lessons of “situationism” in reverse, such that shifts in individuals’ attitudes create environments that provoke more egalitarian behaviors in others (Sarkissian 2010; Brownstein forthcoming)?
- Aberson, C., M. Porter, & A. Gaffney, 2008, “Friendships predict Hispanic students’ implicit attitudes toward Whites relative to African Americans”, Hispanic Journal of Behavioral Sciences, 30: 544–556.
- Allport, G., 1954, The Nature of Prejudice, Reading: Addison-Wesley.
- Amodio, D. & P. Devine, 2006, “Stereotyping and evaluation in implicit race bias: evidence for independent constructs and unique effects on behavior”, Journal of Personality and Social Psychology, 91(4): 652.
- –––, 2009, “On the interpersonal functions of implicit stereotyping and evaluative race bias: Insights from social neuroscience”, in Attitudes: Insights from the new wave of implicit measures, R. Petty, R. Fazio, & P. Briñol (eds.), Hillsdale, NJ: Erlbaum, pp. 193–226.
- Amodio, D. & K. Ratner, 2011, “A memory systems model of implicit social cognition”, Current Directions in Psychological Science, 20(3): 143–148.
- Anderson, E., 2010, The Imperative of Integration, Princeton: Princeton University Press.
- –––, 2012, “Epistemic justice as a virtue of social institutions”, Social Epistemology, 26(2): 163–173.
- Antony, L., forthcoming, “Bias: friend or foe? Reflections on Saulish Skepticism”, in Brownstein & Saul (eds.) forthcomingA.
- Appiah, A., 2005, The Ethics of Identity, Princeton: Princeton University Press.
- Arkes, H. & P. Tetlock, 2004, “Attributions of implicit prejudice, or ‘would Jesse Jackson ‘fail’ the Implicit Association Test?’”, Psychological Inquiry, 15: 257–278.
- Aronson, E. & C. Cope, 1968, “My Enemy’s Enemy is My Friend”, Journal of Personality and Social Psychology, 8(1): 8–12.
- Arpaly, N., 2004, Unprincipled Virtue: An Inquiry into Moral Agency, Oxford: Oxford University Press.
- Ashburn-Nardo, L., M. Knowles, & M. Monteith, 2003, “Black Americans’ implicit racial associations and their implications for intergroup judgment”, Social Cognition, 21(1): 61–87.
- Banaji, M. & A. Greenwald, 2013, Blindspot, New York: Delacorte Press.
- Banaji, M. & C. Hardin, 1996, “Automatic stereotyping”, Psychological Science, 7(3): 136–141.
- Banaji, M., C. Hardin, & A. Rothman, 1993, “Implicit stereotyping in person judgment”, Journal of Personality and Social Psychology, 65(2): 272.
- Banse, R., J. Seise, & N. Zerbes, 2001, “Implicit attitudes towards homosexuality: Reliability, validity, and controllability of the IAT”, Zeitschrift für Experimentelle Psychologie, 48: 145–160.
- Bar-Anan, Y. & B. Nosek, forthcoming, “A Comparative Investigation of Seven Implicit Measures of Social Cognition”, Behavior Research Methods. [available online]
- Bargh, J., 1994, “The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition”, in Handbook of social cognition (2nd ed.), R. Wyer, Jr. & T. Srull (eds.), Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., pp. 1–40.
- –––, 1999, “The cognitive monster: The case against the controllability of automatic stereotype effects”, in Chaiken & Trope (eds.) 1999: 361–382.
- Barrick, C., D. Taylor, & E. Correa, 2002, “Color sensitivity and mood disorders: biology or metaphor?”, Journal of Affective Disorders, 68(1): 67–71.
- Beeghly, E., 2014, Seeing Difference: The Epistemology and Ethics of Stereotyping, PhD diss., University of California, Berkeley, California.
- Begby, E., 2013, “The Epistemology of Prejudice”, Thought: A Journal of Philosophy, 2(2): 90–99.
- Bertrand, M., D. Chugh, & S. Mullainathan, 2005, “Implicit discrimination”, American Economic Review, 94–98.
- Bertrand, M. & S. Mullainathan, 2004, “Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination”, NBER Working Papers from National Bureau of Economic Research, Inc., No. 9873.
- Blair, I., J. Ma, & A. Lenton, 2001, “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes through Mental Imagery”, Journal of Personality and Social Psychology, 81(5): 828–841.
- Blum, L., forthcoming, “The Too Minimal Political, Moral, and Civil Dimension of the “Stereotype Threat” Paradigm”, in Brownstein & Saul (eds.) forthcomingB.
- Bodenhausen, G. & B. Gawronski, 2014, “Attitude Change”, in The Oxford Handbook of Cognitive Psychology, D. Reisberg (ed.), New York: Oxford University Press.
- Brennan, S., 2013, “Rethinking the Moral Significance of Micro-Inequities: The Case of Women in Philosophy”, in Women in Philosophy: What Needs to Change?, F. Jenkins and K. Hutchinson (eds.), Oxford: Oxford University Press.
- Brewer, M., 1999, “The psychology of prejudice: Ingroup love and outgroup hate?”, Journal of Social Issues, 55(3): 429–444.
- Brownstein, M., forthcoming, “Implicit Bias, Context, and Character”, in Brownstein & Saul (eds.) forthcomingB.
- Brownstein, M. & A. Madva, 2012a, “Ethical Automaticity”, Philosophy of the Social Sciences, 42(1): 67–97.
- –––, 2012b, “The Normativity of Automaticity”, Mind and Language, 27(4): 410–434.
- Brownstein, M. & J. Saul (eds.), forthcomingA, Implicit Bias & Philosophy: Volume I, Metaphysics and Epistemology, Oxford: Oxford University Press.
- ––– (eds.), forthcomingB, Implicit Bias and Philosophy: Volume 2, Moral Responsibility, Structural Injustice, and Ethics, Oxford: Oxford University Press.
- Cameron, C., B. Payne, & J. Knobe, 2010, “Do theories of implicit race bias change moral judgments?”, Social Justice Research, 23: 272–289.
- Carruthers, P., 2009, “How we know our own minds: the relationship between mindreading and metacognition”, Behavioral and Brain Sciences, 32: 121–138.
- Chaiken, S. & Y. Trope (eds.), 1999, Dual-process theories in social psychology, New York: Guilford Press.
- Clark, A., 1997, Being There: Putting Brain, Body, and World Together Again, Cambridge, MA: MIT Press.
- Conrey, F., J. Sherman, B. Gawronski, K. Hugenberg, & C. Groom, 2005, “Separating multiple processes in implicit social cognition: The Quad-Model of implicit task performance”, Journal of Personality and Social Psychology, 89: 469–487.
- Correll, J., B. Park, C. Judd, & B. Wittenbrink, 2002, “The police officer’s dilemma: Using race to disambiguate potentially threatening individuals”, Journal of Personality and Social Psychology, 83: 1314–1329.
- Cortina, L., 2008, “Unseen injustice: Incivility as modern discrimination in organizations”, Academy of Management Review, 33: 55–75.
- Cortina, L., D. Kabat Farr, E. Leskinen, M. Huerta, & V. Magley, 2011, “Selective incivility as modern discrimination in organizations: Evidence and impact”, Journal of Management, 39(6): 1579–1605.
- Cunningham, W. & P. Zelazo, 2007, “Attitudes and evaluations: A social cognitive neuroscience perspective”, Trends in Cognitive Sciences, 11(3): 97–104.
- Cunningham, W., P. Zelazo, D. Packer, & J. Van Bavel, 2007, “The iterative reprocessing model: A multilevel framework for attitudes and evaluation”, Social Cognition, 25(5): 736–760.
- Currie, G. & A. Ichino, 2012, “Aliefs don’t exist, but some of their relatives do”, Analysis, 72: 788–798.
- Dasgupta, N., 2004, “Implicit Ingroup Favoritism, Outgroup Favoritism, and Their Behavioral Manifestations”, Social Justice Research, 17(2): 143–168.
- Dasgupta, N. & A. Greenwald, 2001, “On the malleability of automatic attitudes: Combating automatic prejudice with images of admired and disliked individuals”, Journal of Personality and Social Psychology, 81: 800–814.
- Dasgupta, N. & L. Rivera, 2008, “When social context matters: The influence of long-term contact and short-term exposure to admired group members on implicit attitudes and behavioral intentions”, Social Cognition, 26: 112–123.
- De Houwer, J., 2011, “Evaluative Conditioning: A review of functional knowledge and mental process theories”, in Associative Learning and Conditioning Theory: Human and Non-Human Applications, T. Schachtman and S. Reilly (eds.), Oxford: Oxford University Press, pp. 399–417.
- –––, forthcoming, “A Propositional Model of Implicit Attitudes”, Social Psychology and Personality Compass.
- De Houwer, J., G. Crombez, E. Koster, & N. Beul, 2004, “Implicit alcohol-related cognitions in a clinical sample of heavy drinkers”, Journal of Behavior Therapy and Experimental Psychiatry, 35(4): 275–286.
- De Houwer, J., S. Teige-Mocigemba, A. Spruyt, & A. Moors, 2009, “Implicit measures: A normative analysis and review”, Psychological Bulletin, 135(3): 347.
- Delgado, R. & J. Stefancic, 2012, Critical race theory: An introduction, New York: NYU Press.
- Devine, P., 1989, “Stereotypes and prejudice: Their automatic and controlled components”, Journal of Personality and Social Psychology, 56: 5–18.
- Devine, P., P. Forscher, A. Austin, & W. Cox, 2012, “Long-term reduction in implicit race bias: a prejudice habit-breaking intervention”, Journal of Experimental Social Psychology, 48(6): 1267–1278.
- Devine, P. & M. Monteith, 1999, “Automaticity and control in stereotyping”, in Chaiken & Trope (eds.) 1999: 339–360.
- Dixon, J., M. Levine, S. Reicher, & K. Durrheim, 2012, “Beyond prejudice: Are negative evaluations the problem and is getting us to like one another more the solution?”, Behavioral and Brain Sciences, 35(6): 411–425.
- Doggett, T., 2012, “Some questions for Tamar Szabó Gendler”, Analysis, 72: 764–774.
- Doris, J., 2002, Lack of character: Personality and moral behavior, Cambridge: Cambridge University Press.
- Dovidio, J. & S. Gaertner, 1986, Prejudice, Discrimination, and Racism: Historical Trends and Contemporary Approaches, Academic Press.
- –––, 2004, “Aversive racism”, Advances in experimental social psychology, 36: 1–51.
- Dovidio, J., K. Kawakami, & S. Gaertner, 2002, “Implicit and explicit prejudice and interracial interaction”, Journal of Personality and Social Psychology, 82: 62–68.
- Dovidio, J., K. Kawakami, C. Johnson, B. Johnson, & A. Howard, 1997, “On the nature of prejudice: Automatic and controlled processes”, Journal of Experimental Social Psychology, 33: 510–540.
- Dreyfus, H. & S. Dreyfus, 1992, “What is Moral Maturity? Towards a Phenomenology of Ethical Expertise”, in Revisioning Philosophy, J. Ogilvy (ed.), Albany: State University of New York.
- Dunham, Y., M. Srinivasan, R. Dotsch, & D. Barner, 2013a, “Religion insulates ingroup evaluations: the development of intergroup attitudes in India”, Developmental Science, 17(2): 311–319. doi: 10.1111/desc.12105
- Dunham, Y., E. Chen, & M. Banaji, 2013b, “Two Signatures of Implicit Intergroup Attitudes: Developmental Invariance and Early Enculturation”, Psychological Science, 24(6): 860–868.
- Eberhardt, J., P. Goff, V. Purdie, & P. Davies, 2004, “Seeing black: race, crime, and visual processing”, Journal of Personality and Social Psychology, 87(6): 876.
- Egan, A., 2008, “Seeing and believing: perception, belief formation and the divided mind”, Philosophical Studies, 140(1): 47–63.
- –––, 2011. “Comments on Gendler’s ‘The epistemic costs of implicit bias,’”, Philosophical Studies, 156: 65–79.
- Faucher, L., forthcoming, “Revisionism and Moral Responsibility”, in Brownstein & Saul (eds.) forthcomingB.
- Faucher, L. & E. Machery, 2009, “Racism: Against Jorge Garcia’s moral and psychological monism”, Philosophy of the Social Sciences, 39: 41–62.
- Fazio, R., 1990, “Multiple processes by which attitudes guide behavior: The MODE model as an integrative framework”, Advances in experimental social psychology, 23: 75–109.
- –––, 1995, “Attitudes as object-evaluation associations: Determinants, consequences, and correlates of attitude accessibility”, in Attitude strength: Antecedents and consequences (Ohio State University series on attitudes and persuasion, Vol. 4), R. Petty & J. Krosnick (eds.), Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., pp. 247–282.
- Fazio, R. & T. Towles-Schwen, 1999, “The MODE model of attitude-behavior processes”, in Chaiken & Trope (eds.) 1999: 97–116.
- Festinger, L., 1957, A Theory of Cognitive Dissonance, Stanford, CA: Stanford University Press.
- Fischer, J. & M. Ravizza, 2000, Responsibility and control: A theory of moral responsibility, Cambridge: Cambridge University Press.
- Fiske, S. & P. Linville, 1980, “What does the schema concept buy us?”, Personality and Social Psychology Bulletin, 6(4): 543–557.
- Follenfant, A. & F. Ric, 2010, “Behavioral Rebound following stereotype suppression”, European Journal of Social Psychology, 40: 774–782.
- Frankfurt, H., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy, 68(1): 5–20.
- Frankish, K., forthcoming, “Implicit bias, dual process, and metacognitive motivation”, in Brownstein & Saul (eds.) forthcomingA.
- Fricker, M., 2007, Epistemic Injustice: Power & the Ethics of Knowing, Oxford: Oxford University Press.
- Friese, M., W. Hofmann, & M. Wänke, 2008, “When impulses take over: Moderated predictive validity of explicit and implicit attitude measures in predicting food choice and consumption behavior”, British Journal of Social Psychology, 47(3): 397–419.
- Galdi, S., L. Arcuri, & B. Gawronski, 2008, “Automatic mental associations predict future choices of undecided decision-makers”, Science, 321(5892): 1100–1102.
- Gawronski, B. & G. Bodenhausen, 2006, “Associative and propositional processes in evaluation: an integrative review of implicit and explicit attitude change”, Psychological Bulletin, 132(5): 692–731.
- –––, 2011, “The associative-propositional evaluation model: Theory, evidence, and open questions”, Advances in Experimental Social Psychology, 44: 59–127.
- Gawronski, B., R. Deutsch, S. Mbirkou, B. Seibt, & F. Strack, 2008, “When ‘Just Say No’ is not enough: Affirmation versus negation training and the reduction of automatic stereotype activation”, Journal of Experimental Social Psychology, 44: 370–377.
- Gawronski, B., W. Hofmann, & C. Wilbur, 2006, “Are ‘implicit’ attitudes unconscious?”, Consciousness and Cognition, 15: 485–499.
- Gawronski, B., E. Walther, & H. Blank, 2005, “Cognitive Consistency and the Formation of Interpersonal Attitudes: Cognitive Balance Affects the Encoding of Social Information”, Journal of Experimental Social Psychology, 41: 618–26.
- Gendler, T., 2008a, “Alief and belief”, The Journal of Philosophy, 105(10): 634–663.
- –––, 2008b, “Alief in action (and reaction)”, Mind and Language, 23(5): 552–585.
- –––, 2011, “On the epistemic costs of implicit bias”, Philosophical Studies, 156: 33–63.
- –––, 2012, “Between reason and reflex: response to commentators”, Analysis, 72(4): 799–811.
- Gertler, B., 2011, “Self-Knowledge and the Transparency of Belief”, in Self-Knowledge, A. Hatzimoysis (ed.), Oxford: Oxford University Press.
- Gilbert, D., 1991, “How mental systems believe”, American Psychologist, 46: 107–119.
- Glaser, J. & E. Knowles, 2008, “Implicit motivation to control prejudice”, Journal of Experimental Social Psychology, 44: 164–172.
- Glasgow, J., forthcoming, “Alienation and Responsibility”, in Brownstein & Saul (eds.) forthcomingB.
- Goguen, S., forthcoming, “Stereotype Threat, Epistemic Injustice, and Irrationality”, in Brownstein & Saul (eds.) forthcomingA.
- Gollwitzer, P. & P. Sheeran, 2006, “Implementation intentions and goal achievement: A meta-analysis of effects and processes”, in Advances in experimental social psychology, M. Zanna (ed.), Academic Press, pp. 69–119.
- Grasswick, H., 2013, “Feminist Social Epistemology”, The Stanford Encyclopedia of Philosophy, (Spring 2013 edition), E. Zalta (ed.), <http://plato.stanford.edu/archives/spr2013/entries/feminist-social-epistemology/>.
- Green, A., D. Carney, D. Pallin, L. Ngo, K. Raymond, L. Lezzoni, & M. Banaji, 2007, “Implicit bias among physicians and its prediction of thrombolysis decisions for black and white patients”, Journal of General Internal Medicine, 22: 1231–1238.
- Greenwald, A. & M. Banaji, 1995, “Implicit social cognition: attitudes, self-esteem, and stereotypes”, Psychological Review, 102(1): 4.
- Greenwald, A., M. Banaji, & B. Nosek, forthcoming, “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects”, Journal of personality and social psychology. doi: 10.1037/pspa0000016
- Greenwald, A. & S. Farnham, 2000, “Using the implicit association test to measure self-esteem and self-concept”, Journal of Personality and Social Psychology, 79(6): 1022–1038.
- Greenwald, A., D. McGhee, & J. Schwartz, 1998, “Measuring individual differences in implicit cognition: The implicit association test”, Journal of Personality and Social Psychology, 74: 1464–1480.
- Greenwald, A., B. Nosek, & M. Banaji, 2003, “Understanding and using the implicit association test: I. An improved scoring algorithm”, Journal of Personality and Social Psychology, 85(2): 197–216.
- Greenwald, A., T. Poehlman, E. Uhlmann, & M. Banaji, 2009, “Understanding and Using the Implicit Association Test: III Meta-Analysis of Predictive Validity”, Journal of Personality and Social Psychology, 97(1): 17–41.
- Gregg A., B. Seibt, & M. Banaji, 2006, “Easier done than undone: Asymmetry in the malleability of implicit preferences”, Journal of Personality and Social Psychology, 90: 1–20.
- Hahn, A., C. Judd, H. Hirsh, & I. Blair, 2013, “Awareness of Implicit Attitudes”, Journal of Experimental Psychology: General, 143(3): 1369–1392.
- Han, H., M. Olson, & R. Fazio, 2006, “The influence of experimentally-created extrapersonal associations on the Implicit Association Test”, Journal of Experimental Social Psychology, 42: 259–272.
- Harari, H. & J. McDavid, 1973, “Name stereotypes and teachers’ expectations”, Journal of Educational Psychology, 65(2): 222–225.
- Haslanger, S., 2000, “Gender and race: (What) are they? (What) do we want them to be?”, Noûs, 34(1): 31–55.
- –––, 2013, “Social Meaning and Philosophical Method”, Presidential Address, Eastern Division of the American Philosophical Association.
- Heider, F., 1958, The Psychology of Interpersonal Relations, New York: Wiley.
- Hieronymi, P., 2008, “Responsibility for believing”, Synthese, 161: 357–373.
- Holroyd, J., 2012, “Responsibility for Implicit Bias”, Journal of Social Philosophy, 43(3): 274–306.
- Holroyd, J. and D. Kelly, forthcoming, “Implicit Bias, Character, and Control”. in J. Webber and A. Masala (eds.) From Personality to Virtue, Oxford: Oxford University Press.
- Holroyd, J. & J. Sweetman, forthcoming, “The Heterogeneity of Implicit Biases”, in Brownstein & Saul (eds.) forthcomingA.
- Hookway, C., 2010, “Some Varieties of Epistemic Injustice: Response to Fricker”, Episteme, 7(2): 151–163.
- Houben, K. & R. Wiers, 2008, “Implicitly positive about alcohol? Implicit positive associations predict drinking behavior”, Addictive Behaviors, 33(8): 979–986.
- Huddleston, A., 2012, “Naughty beliefs”, Philosophical Studies, 160(2): 209–222.
- Huebner, B., 2009, “Trouble with Stereotypes for Spinozan Minds”, Philosophy of the Social Sciences, 39: 63–92.
- –––, forthcoming, “Implicit Bias, Reinforcement Learning, and Scaffolded Moral Cognition”, in Brownstein & Saul (eds.) forthcomingA.
- Hughes, S., D. Barnes-Holmes, & J. De Houwer, 2011, “The dominance of associative theorizing in implicit attitude research: Propositional and behavioral alternatives”, The Psychological Record, 61(3): 465–498.
- Hundleby, C., forthcoming, “The Status Quo Fallacy: Implicit Bias and Fallacies of Argumentation”, in Brownstein & Saul (eds.) forthcomingA.
- Huntsinger, J., S. Sinclair, E. Dunn, & G. Clore, 2010, “Affective regulation of stereotype activation: It’s the (accessible) thought that counts”, Personality and Social Psychology Bulletin, 36(4): 564–577.
- Jacoby, L. & M. Dallas, 1981, “On the relationship between autobiographical memory and perceptual learning”, Journal of Experimental Psychology: General, 110(3): 306.
- James, W., 1890/1950, The Principles of Psychology, Volumes 1 & 2, New York: Dover Books.
- Jaworska, A., 2007, “Caring and Internality”, Philosophy and Phenomenological Research, 74(3): 529–568.
- Jeannerod, M., 2006, Motor cognition: What actions tell to the self, Oxford: Oxford University Press.
- Kang, J., 2009, “Implicit Bias: A Primer for Courts”, National Center for State Courts.
- Kang, J., M. Bennett, D. Carbado, P. Casey, N. Dasgupta, D. Faigman, R. Godsil, A. Greenwald, J. Levinson, & J. Mnookin, 2012, “Implicit bias in the courtroom”, UCLA Law Review, 59(5): 1124–1186.
- Kawakami, K., J. Dovidio, & S. van Kamp, 2007, “The Impact of Counterstereotypic Training and Related Correction Processes on the Application of Stereotypes”, Group Processes and Intergroup Relations, 10(2): 139–156.
- Kawakami, K., J. Steele, C. Cifa, C. Phills, & J. Dovidio, 2008, “Approaching math increases math = me, math = pleasant”, Journal of Experimental Social Psychology, 44: 818–825.
- Kelly, D., L. Faucher, & E. Machery, 2010a, “Getting Rid of Racism: Assessing Three Proposals in Light of Psychological Evidence”, Journal of Social Philosophy, 41(3): 293–322.
- Kelly, D., E. Machery, & R. Mallon, 2010b, “Race and Racial Cognition”, in The Moral Psychology Handbook, J. Doris & the Moral Psychology Reading Group (eds.), Oxford: Oxford University Press, pp. 433–472.
- Kelly, D. & E. Roedder, 2008, “Racial Cognition and the Ethics of Implicit Bias”, Philosophy Compass, 3(3): 522–540.
- Korsgaard, C., 1997, “The Normativity of Instrumental Reason”, in Ethics and Practical Reason, G. Cullity & B. Gaut (eds.), Oxford: Clarendon Press, pp. 27–68.
- Kwong, J., 2012, “Resisting Aliefs: Gendler on Alief-Discordant Behaviors”, Philosophical Psychology, 25(1): 77–91.
- Lai, C., K. Hoffman, & B. Nosek, 2013, “Reducing implicit prejudice”, Social and Personality Psychology Compass, 7: 315–330.
- Lai, C., M. Marini, S. Lehr, C. Cerruti, J. Shin, J. Joy-Gaba, A. Ho, … & B. Nosek, 2014, “Reducing implicit racial preferences: I. A comparative investigation of 17 interventions”, Journal of Experimental Psychology: General, 143(4): 1765–1785.
- Lane, K., J. Kang, & M. Banaji, 2007, “Implicit Social Cognition and Law”, Annual Review of Law and Social Science, 3: 427–451.
- Lee, C., forthcoming, “Revisiting Current Causes of Women's Underrepresentation in Science”, in Brownstein & Saul (eds.) forthcomingA.
- Leslie, S., 2007, “Generics and the Structure of the Mind”, Philosophical Perspectives, 375–405.
- –––, 2008, “Generics: Cognition and Acquisition”, Philosophical Review, 117(1): 1–49.
- –––, forthcoming, “The original sin of cognition: Fear, prejudice, and generalization”, The Journal of Philosophy.
- Levinson, J., 2007, “Forgotten Racial Equality: Implicit Bias, Decision making, and Misremembering”, Duke Law Journal, 57(2): 345–424.
- Levy, N., 2011, “Expressing Who We Are: Moral Responsibility and Awareness of our Reasons for Action”, Analytic Philosophy, 52(4): 243–261.
- –––, 2012, “Consciousness, Implicit Attitudes, and Moral Responsibility”, Noûs, 48: 21–40.
- –––, forthcoming, “Neither fish nor fowl: Implicit attitudes as patchy endorsements”, Noûs. doi: 10.1111/nous.12074
- Levy, N. & T. Bayne, 2004, “Doing without deliberation: automatism, automaticity, and moral accountability”, International Review of Psychiatry, 16(3): 209–215.
- Levy, N. & E. Mandelbaum, forthcoming, “The Powers that Bind: Doxastic Voluntarism and Epistemic Obligation”, in The Ethics of Belief, J. Matheson & R. Vitz (eds.), Oxford: Oxford University Press.
- Lewis, D., 1982, “Logic for Equivocators”, Noûs, 16(3): 431–441.
- Machery, E., forthcoming, “De-Freuding Implicit Attitudes”, in Brownstein & Saul (eds.) forthcomingA.
- Machery, E. & L. Faucher, 2005, “Social construction and the concept of race”, Philosophy of Science, 72(5): 1208–1219.
- Machery, E., L. Faucher, & D. Kelly, 2010, “On the alleged inadequacy of psychological explanations of racism”, The Monist, 93(2): 228–255.
- Madva, A., 2012, The hidden mechanisms of prejudice: Implicit bias and interpersonal fluency, PhD dissertation, Columbia University.
- –––, forthcoming, “Virtue, Social Knowledge, and Implicit Bias”, in Brownstein & Saul (eds.) forthcomingA.
- Mai, R., S. Hoffmann, J. Helmert, B. Velichkovsky, S. Zahn, D. Jaros, … & H. Rohm, 2011, “Implicit food associations as obstacles to healthy nutrition: the need for further research”, The British Journal of Diabetes & Vascular Disease, 11(4): 182–186.
- Maison, D., A. Greenwald, & R. Bruin, 2004, “Predictive validity of the Implicit Association Test in studies of brands, consumer attitudes, and behavior”, Journal of Consumer Psychology, 14(4): 405–415.
- Mallon, R., forthcoming, “Stereotype threat and persons”, in Brownstein & Saul (eds.) forthcomingA.
- Mandelbaum, E., 2011, “The architecture of belief: An essay on the unbearable automaticity of believing”, Doctoral dissertation, University of North Carolina, Chapel Hill.
- –––, 2013, “Against alief”, Philosophical Studies, 165:197–211.
- –––, 2014, “Thinking is Believing”, Inquiry, 57(1): 55–96.
- –––, forthcoming, “Attitude, Association, and Inference: On the Propositional Structure of Implicit Bias”, Noûs. doi: 10.1111/nous.12089
- Mann, N. & K. Kawakami, 2012, “The long, steep path to equality: Progressing on egalitarian goals”, Journal of Experimental Psychology: General, 141(1): 187.
- McConahay, J., 1982, “Self-interest versus racial attitudes as correlates of anti-busing attitudes in Louisville: Is it the buses or the Blacks?”, The Journal of Politics, 44(3): 692–720.
- McConahay, J., B. Hardee, & V. Batts, 1981, “Has racism declined in America? It depends on who is asking and what is asked”, Journal of conflict resolution, 25(4): 563–579.
- Meissner, C. & J. Brigham, 2001, “Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review”, Psychology, Public Policy, and Law, 7(1): 3–35.
- Mendoza, S., P. Gollwitzer, & D. Amodio, 2010, “Reducing the Expression of Implicit Stereotypes: Reflexive Control Through Implementation Intentions”, Personality and Social Psychology Bulletin, 36(4): 512–523.
- Merleau-Ponty, M., 1945/2013, Phenomenology of Perception, New York: Routledge.
- Millikan, R., 1995, “Pushmi-pullyu representations”, Philosophical Perspectives, 9: 185–200.
- Mills, C., 1999, The Racial Contract, Ithaca, NY: Cornell University Press.
- Mitchell, C., J. De Houwer, & P. Lovibond, 2009, “The propositional nature of human associative learning”, Behavioral and Brain Sciences, 32(2): 183–198.
- Monteith, M., 1993, “Self-regulation of prejudiced responses: Implications for progress in prejudice-reduction efforts”, Journal of Personality and Social Psychology, 65(3): 469–485.
- Monteith, M., L. Ashburn-Nardo, C. Voils, & A. Czopp, 2002, “Putting the brakes on prejudice: on the development and operation of cues for control”, Journal of Personality and Social Psychology, 83(5): 1029–1050.
- Moskowitz, G. & P. Li, 2011, “Egalitarian goals trigger stereotype inhibition: a proactive form of stereotype control”, Journal of Experimental Social Psychology, 47(1): 103–116.
- Moss-Racusin, C., J. Dovidio, V. Brescoll, M. Graham, & J. Handelsman, 2012, “Science faculty’s subtle gender biases favor male students”, Proceedings of the National Academy of Sciences, 109(41): 16474–16479. doi: 10.1073/pnas.1211286109
- Mugg, J., 2013, “What are the cognitive costs of racism? A reply to Gendler”, Philosophical Studies, 166(2): 217–229.
- Muller, H. & B. Bashour, 2011, “Why alief is not a legitimate psychological category”, Journal of Philosophical Research, 36: 371–389.
- Nagel, J., 2012, “Gendler on alief”, Analysis, 72(4): 774–788.
- Nier, J., 2005, “How dissociated are implicit and explicit racial attitudes?: A bogus pipeline approach”, Group Processes & Intergroup Relations, 8: 39–52.
- Nisbett, R. & T. Wilson, 1977, “Telling more than we can know: Verbal reports on mental processes”, Psychological Review, 84(3): 231–259.
- Nosek, B. & M. Banaji, 2001, “The go/no-go association task”, Social Cognition, 19(6): 625–666.
- Nosek, B., M. Banaji, & A. Greenwald, 2002, “Harvesting intergroup implicit attitudes and beliefs from a demonstration website”, Group Dynamics, 6: 101–115.
- Nosek, B., J. Graham, & C. Hawkins, 2010, “Implicit Political Cognition”, in Handbook of implicit social cognition: Measurement, theory, and applications, B. Gawronski & B. Payne (eds.), New York, NY: Guilford Press, pp. 548–564.
- Nosek, B., A. Greenwald, & M. Banaji, 2005, “Understanding and using the Implicit Association Test: II. Method variables and construct validity”, Personality and Social Psychology Bulletin, 31(2): 166–180.
- –––, 2007, “The Implicit Association Test at Age 7: A Methodological and Conceptual Review”, in Automatic Processes in Social Thinking and Behavior, J.A. Bargh (ed.), Philadelphia: Psychology Press.
- Nosek, B., C. Hawkins, & R. Frazier, 2011, “Implicit social cognition: from measures to mechanisms”, Trends in Cognitive Sciences, 15(4): 152–159.
- Olson, M. & R. Fazio, 2001, “Implicit attitude formation through classic conditioning”, Psychological Science, 12(5): 413–417.
- –––, 2006, “Reducing automatically activated racial prejudice through implicit evaluative conditioning”, Personality and Social Psychology Bulletin, 32: 421–433.
- –––, 2009, “Implicit and explicit measures of attitudes: The perspective of the MODE model”, Attitudes: Insights from the new implicit measures, 19–63.
- Oswald, F., G. Mitchell, H. Blanton, J. Jaccard, & P. Tetlock, 2013, “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies”, Journal of Personality and Social Psychology, 105(2): 171–192. doi: 10.1037/a0032734
- Paluck, E. & D. Green, 2009, “Prejudice Reduction: What Works? A Review and Assessment of Research and Practice”, Annual Review of Psychology, 60: 339–367.
- Payne, B., 2001, “Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon”, Journal of Personality and Social Psychology, 81: 181–192.
- Payne, B., C.M. Cheng, O. Govorun, & B. Stewart, 2005, “An inkblot for attitudes: Affect misattribution as implicit measurement”, Journal of Personality and Social Psychology, 89: 277–293.
- Payne, B., & B. Gawronski, 2010, “A history of implicit social cognition: Where is it coming from? Where is it now? Where is it going?”, in Handbook of implicit social cognition: Measurement, theory, and applications, B. Gawronski, & B. Payne (eds.), New York, NY: Guilford Press, pp. 1–17.
- Payne, B., A. Lambert, & L. Jacoby, 2002, “Best laid plans: Effects of goals on accessibility bias and cognitive control in race-based misperceptions of weapons”, Journal of Experimental Social Psychology, 38: 384–396.
- Penner, L., J. Dovidio, T. West, S. Gaertner, T. Albrecht, R. Dailey, & T. Markova, 2010, “Aversive racism and medical interactions with Black patients: A field study”, Journal of Experimental Social Psychology, 46(2): 436–440.
- Perkins, A. & M. Forehand, 2012, “Implicit self-referencing: The effect of nonvolitional self-association on brand and product attitude”, Journal of Consumer Research, 39(1): 142–156.
- Pessoa, L., forthcoming, “The Cognitive-Emotional Brain”, Behavioral and Brain Sciences.
- Peters, D. & S. Ceci, 1982, “Peer-review practices of psychological journals: The fate of published articles, submitted again”, Behavioral and Brain Sciences, 5(2): 187–195.
- Pettigrew, T. & L. Tropp, 2006, “A Meta-Analytic Test of Intergroup Contact Theory”, Journal of Personality and Social Psychology, 90: 751–83.
- Petty, R., 2006, “A metacognitive model of attitudes”, Journal of Consumer Research, 33(1): 22–24.
- Petty, R., P. Briñol, & K. DeMarree, 2007, “The meta-cognitive model (MCM) of attitudes: Implications for attitude measurement, change, and strength”, Social Cognition, 25(5): 657–686.
- Phills, C., K. Kawakami, E. Tabi, D. Nadolny, & M. Inzlicht, 2011, “Mind the Gap: Increasing the associations between the self and blacks with approach behaviors”, Journal of Personality and Social Psychology, 100: 197–210.
- Proffitt, D., 2006, “Embodied perception and the economy of action”, Perspectives on Psychological Science, 1(2): 110–122.
- Railton, P., 2009, “Practical Competence and Fluent Agency”, in Reasons for Action, D. Sobel & S. Wall (eds.), Cambridge: Cambridge University Press, pp. 81–115.
- –––, 2014, “The Affective Dog and its Rational Tale: Intuition and Attunement”, Ethics, 124(4): 813–859.
- Richeson, J. & J. Shelton, 2003, “When prejudice does not pay: Effects of interracial contact on executive function”, Psychological Science, 14(3): 287–290.
- –––, 2007, “Negotiating interracial interactions: Costs, consequences, and possibilities”, Current Directions in Psychological Science, 16: 316–320.
- Richeson, J. & S. Trawalter, 2008, “The threat of appearing prejudiced and race-based attentional biases”, Psychological Science, 19(2): 98–102.
- Ross, L., M. Lepper, & M. Hubbard, 1975, “Perseverance in Self-Perception and Social Perception: Biased Attributional Processes in the Debriefing Paradigm”, Journal of Personality and Social Psychology, 32(5): 880–892.
- Ryle, G., 1949/2009, The Concept of Mind, New York: Routledge.
- Sarkissian, H., 2010, “Minor tweaks, major payoffs: The problems and promise of situationism in moral philosophy”, Philosophers’ Imprint, 10(9): 1–15.
- Saul, J., 2012, “Skepticism and Implicit Bias”, Disputatio, 5(37): 243–263.
- –––, 2013, “Unconscious Influences and Women in Philosophy”, in Women in Philosophy: What Needs to Change?, F. Jenkins & K. Hutchison (eds.), Oxford: Oxford University Press.
- Scanlon, T., 1998, What We Owe to Each Other, Cambridge: Harvard University Press.
- Schacter, D., 1987, “Implicit memory: History and current status”, Journal of Experimental Psychology: Learning, Memory, and Cognition, 13: 501–518.
- Schneider, D., 2004, The Psychology of Stereotyping, New York: Guilford Press.
- Schwitzgebel, E., 2002, “A Phenomenal, Dispositional Account of Belief”, Noûs, 36: 249–275.
- –––, 2006/2010, “Belief”, The Stanford Encyclopedia of Philosophy, (Winter 2010 edition), E. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2010/entries/belief/>
- –––, 2010, “Acting contrary to our professed beliefs, or the gulf between occurrent judgment and dispositional belief”, Pacific Philosophical Quarterly, 91: 531–553.
- –––, 2013, “A Dispositional Approach to Attitudes: Thinking Outside of the Belief Box”, in New Essays on Belief, N. Nottelmann (ed.), New York: Palgrave Macmillan, pp. 75–99.
- Sher, G., 2009, Who Knew? Responsibility without Awareness, Oxford: Oxford University Press.
- Shiffrin, R. & W. Schneider, 1977, “Controlled and automatic human information processing: Perceptual learning, automatic attending, and a general theory”, Psychological Review, 84: 127–190.
- Shoemaker, D., 2003, “Caring, Identification, and Agency”, Ethics, 114: 88–118.
- –––, 2011, “Attributability, Answerability, and Accountability: Towards a Wider Theory of Moral Responsibility”, Ethics, 121: 602–632.
- Sie, M. & N. Vorst Vader-Bours, forthcoming, “Personal Responsibility vis-à-vis Prejudice Resulting from Implicit Bias”, in Brownstein & Saul (eds.) forthcomingB.
- Siegel, S., 2012, “Cognitive Penetrability and Perceptual Justification”, Noûs, 46(2): 201–222.
- –––, 2013, “Evidentialism and Selection Effects on Experience”, in Oxford Studies in Epistemology 4, T.S. Gendler & J. Hawthorne (eds.), Oxford: Oxford University Press.
- Smith, A., 2005, “Responsibility for attitudes: activity and passivity in mental life”, Ethics, 115(2): 236–271.
- –––, 2008, “Control, responsibility, and moral assessment”, Philosophical Studies, 138: 367–392.
- –––, 2012, “Attributability, Answerability, and Accountability: In Defense of a Unified Account”, Ethics, 122(3): 575–589.
- Smith, H., 2011, “Non-Tracing Cases of Culpable Ignorance”, Criminal Law and Philosophy, 5: 115–146.
- Snow, N., 2006, “Habitual Virtuous Actions and Automaticity”, Ethical Theory and Moral Practice, 9: 545–561.
- Stalnaker, R., 1984, Inquiry, Cambridge, MA: MIT Press.
- Steele, C. & J. Aronson, 1995, “Stereotype threat and the intellectual test performance of African Americans”, Journal of Personality and Social Psychology, 69(5): 797–811.
- Stewart, B. & B. Payne, 2008, “Bringing Automatic Stereotyping under Control: Implementation Intentions as Efficient Means of Thought Control”, Personality and Social Psychology Bulletin, 34: 1332–1345.
- Strack, F. & R. Deutsch, 2004, “Reflective and impulsive determinants of social behaviour”, Personality and Social Psychology Review, 8: 220–247.
- Suhler, C. & P. Churchland, 2009, “Control: conscious and otherwise”, Trends in Cognitive Sciences, 13(8): 341–347.
- Taylor, S. & J. Brown, 1988, “Illusion and well-being: A social psychological perspective on mental health”, Psychological Bulletin, 103(2): 193–210.
- Tetlock, P., O. Kristel, B. Elson, M. Green, & J. Lerner, 2000, “The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals”, Journal of Personality and Social Psychology, 78(5): 853–870.
- Tetlock, P., & G. Mitchell, 2009, “Implicit bias and accountability systems: What must organizations do to prevent discrimination?”, Research in Organizational Behavior, 29: 3–38.
- Trawalter, S. & J. Richeson, 2006, “Regulatory focus and executive function after interracial interactions”, Journal of Experimental Social Psychology, 42(3): 406–412.
- Valian, V., 1998, Why so slow? The advancement of women, Cambridge, MA: MIT Press.
- –––, 2005, “Beyond gender schemas: Improving the advancement of women in academia”, Hypatia, 20: 198–213.
- Vargas, M., 2005, “The Revisionist’s Guide to Responsibility”, Philosophical Studies, 125(3): 399–429.
- Walther, E., 2002, “Guilty by Mere Association: Evaluative Conditioning and the Spreading Attitude Effect”, Journal of Personality and Social Psychology, 82(6): 919–934.
- Washington, N. & D. Kelly, forthcoming, “Who’s responsible for this? Implicit bias and the knowledge condition”, in Brownstein & Saul (eds.) forthcomingB.
- Watson, G., 1975, “Free Agency”, Journal of Philosophy, 72(8): 205–220.
- –––, 1996, “Two faces of responsibility”, Philosophical Topics, 24(2): 227–248.
- Webb, T., P. Sheeran, & A. Pepper, 2012, “Gaining control over responses to implicit attitude tests: Implementation intentions engender fast responses on attitude-incongruent trials”, British Journal of Social Psychology, 51(1): 13–32. DOI:10.1348/014466610X532192
- Wigley, S., 2007, “Automaticity, Consciousness, and Moral Responsibility”, Philosophical Psychology, 20(2): 209–225.
- Zeigler-Hill, V. & C. Jordan, 2010, “Two faces of self-esteem”, in Handbook of Implicit Social Cognition: Measurement, Theory, and Applications, B. Gawronski & B. Payne (eds.), New York: Guilford Press, pp. 392–407.
- Zheng, R., forthcoming, “Attributability, Accountability and Implicit Attitudes”, in Brownstein & Saul (eds.) forthcomingB.
- Zimmerman, A., 2007, “The nature of belief”, Journal of Consciousness Studies, 14(11): 61–82.
- Brownstein, M., ms., “Attributionism and Moral Responsibility for Implicit Bias.”
- Madva, A., ms a, “Biased Against De-Biasing: On the Role of (Institutionally Sponsored) Self-Transformation in the Struggle Against Prejudice”.
- –––, ms b, “Implicit Bias, Moods, and Moral Responsibility”.
- –––, ms c, “Why Implicit Attitudes are (Probably) Not Beliefs”.
- Madva, A. and M. Brownstein, ms, “The Blurry Boundary between Stereotyping and Evaluation in Implicit Cognition”.
- Sripada, C., ms, “Self-Expression: A Deep Self Theory of Moral Responsibility”.
- Implicit Bias and Philosophy at the University of Sheffield
- Project Implicit (homepage of the IAT)
- Climate for Women and Underrepresented Groups at Rutgers
- MAP (Minorities and Philosophy)
- Active Bystander Strategies
- Feminist Philosophers blog
- Tutorials for Change—Gender Schemas and Science
- The Gender Equity Project
- Reducing Stereotype Threat
Many thanks to Yarrow Dunham, Jules Holroyd, Bryce Huebner, Daniel Kelly, Calvin Lai, Carole Lee, Alex Madva, Eric Mandelbaum, Jennifer Saul, and Susanna Siegel for invaluable suggestions and feedback. Thanks also to the Leverhulme Trust for funding the “Implicit Bias and Philosophy” workshops at the University of Sheffield from 2011–2013, and to Jennifer Saul for running the workshops and making them a model of scholarship and collaboration at its best.