# Reliabilist Epistemology

First published Mon Apr 21, 2008; substantive revision Fri May 21, 2021

One of the main goals of epistemologists is to provide a substantive and explanatory account of the conditions under which a belief has some desirable epistemic status (typically, justification or knowledge). According to the reliabilist approach to epistemology, any adequate account will need to mention the reliability of the process responsible for the belief, or truth-conducive considerations more generally. Historically, one major motivation for reliabilism—and one source of its enduring interest—is its naturalistic potential. According to reliabilists, epistemic properties can be explained in terms of reliability, which in turn can be understood without reference to any unreduced epistemic notions, such as evidence or knowledge.

This article begins by surveying some of the main forms of reliabilism, concentrating on process reliabilism as a theory of justification. It proceeds to review some of the main objections to reliabilism, and some of the responses that have been offered on the reliabilist’s behalf. After canvassing some recent developments in reliabilist epistemology, the article concludes by considering various cousins and spin-offs of reliabilist epistemology, including virtue reliabilism and various evidentialist-reliabilist hybrids.

## 1. A Paradigm Shift in Analytic Epistemology

In the 1960s, a wide range of epistemologists were absorbed by the question: what does it take for a belief to amount to knowledge? It was generally agreed that for a person, S, to know some proposition p, at least three conditions must be met. First, p must be true. Second, S must believe p. And third, S must be justified in believing p. Thus, knowledge requires justified true belief. But does the satisfaction of these three conditions suffice to entail that S knows p? Unfortunately, as Edmund Gettier (1963) showed with several convincing cases, justified true belief doesn’t guarantee knowledge.

Some subsequent attempts to explicate knowledge had broadly “reliabilist” features, although they did not use this terminology. According to “relevant alternatives theories”, a subject is said to know p provided that they can “rule out” relevant alternatives where p does not obtain (Dretske 1970; Goldman 1976; Stine 1976; Lewis 1996). According to “truth-tracking theories”, a subject knows p provided their beliefs “track the truth” of p across modal space. Truth-tracking theorists typically cash this out in terms of subjunctive conditionals, such as “Sensitivity”: If p had been false, S wouldn’t have believed p (Nozick 1981), or “Safety”: If S were to believe p, p would be true (Sosa 1999; Williamson 2000; Pritchard 2005). These views are united by the idea that knowledge requires some form of reliability. (See the entry on the analysis of knowledge.)

Meanwhile, many epistemologists have turned their attention to a related epistemic concept: the concept of (epistemic) justification. Justified belief is widely regarded as a necessary—though insufficient—condition for knowledge. Hence, justification must be regarded as a core concept for epistemologists to explore and elucidate.

However, justification also proves to be a very elusive concept. In 1979 Alvin Goldman published a paper entitled “What Is Justified Belief?”, which departed from several well-entrenched positions on the nature of epistemic justification. Years later, a number of epistemologists recognized this paper as marking a significant turning point, or “paradigm shift”, in epistemology. Michael Williams (2016: 3) called it “the Reliabilist Revolution”. This turning point has given rise to a lively debate about the kinds of factors that determine the epistemic status of a person’s beliefs or credences.

### 1.1 The Basic Components: The Reliability of Mental Processes in Determining True vs. False Beliefs

The key idea behind Goldman’s reliabilist approach is that the justifiedness of a belief depends on the mental history of the subject’s belief. In particular, it depends on the reliability of the process(es) which cause the belief in question. Somewhat similar ideas (or segments thereof) have been advanced by a number of predecessors, often focusing on knowledge rather than justification. Frank Ramsey (1931) wrote that a belief qualifies as knowledge if it is true, certain, and obtained by a reliable process. He did not, however, provide a detailed defense of this thesis. Peter Unger (1968) proposed that S knows that p just in case “it is not at all accidental” that S is right about its being the case that p. David Armstrong (1973) offered an analysis of non-inferential knowledge that explicitly used the term “reliable”. He drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth. All of these writers seemed to endorse some variant of reliabilism, although typically there were minor or major differences from the version we shall focus on here. For example, Armstrong promoted an “indicator” variant of reliabilism rather than a “psychological process” variant. And Unger presented “non-accidentality” rather than “truth-conduciveness” as the central determinant of justifiedness. Clearly, however, there is a great deal of convergence in these views. So, contemporary defenders of epistemic reliabilism have grounds for optimism that they are laboring in promising terrain. The nitty-gritty details, however, still need additional work, as we shall see in the body of this entry. We turn now to some of the principal details of Goldman’s form of reliabilism for justification (Goldman, 1979, 1986).

Goldman began by proposing some desiderata on any account of justification. First, theories of justification should specify conditions for justified belief that do not invoke the concept of justification itself, or any other epistemically normative concepts such as reasonability or rationality. The aim is to provide a “reductive” account of justification that does not appeal explicitly or implicitly to any notions that entail justification or other members of that family. There is bite in this requirement. For example, it might preclude an analysis of justified belief in terms of “evidence”, unless evidence can itself be characterized in non-epistemic terms. What kinds of terms or properties, then, are appropriate for constructing an account of justification? Permissible concepts or properties would include doxastic ones, such as belief, disbelief, suspension of judgment, and any other purely psychological concepts, such as ones that refer to perceptual experience or memory. Given the assumption that truth and falsity are non-epistemic notions, they would also be perfectly legitimate for use in analyzing justifiedness. Another admissible element in an account of justifiedness is the causal relation.

Proceeding under these constraints, Goldman was led to the “reliable process” theory as follows. (Note that the main concept to be tackled here is doxastic justifiedness rather than propositional justifiedness. That is, it focuses on what it takes for a belief to be justified rather than what it takes for someone to have grounds for forming such a belief.) First, examples were adduced to show that the justifiedness of particular beliefs depends on how those beliefs are caused, or causally sustained, in the mind of the thinker. Suppose Arthur justifiedly believes a conjunction of propositions, $$q \wedge r$$, from which proposition $$p$$ follows logically. Further suppose that, soon after forming his belief in this conjunction, Arthur also forms a belief in $$p$$. Does it follow from this that Arthur’s belief in $$p$$ is justified? No. Although Arthur believes $$q \wedge r$$, this conjunction may not have played any causal role in his coming to believe $$p$$. He may have formed his belief in $$p$$ purely by wishful thinking. Perhaps he hoped $$p$$ would be true, and therefore (somehow) came to believe it. Alternatively, suppose Arthur employed confused reasoning that began with $$q \wedge r$$, and serendipitously led to $$p$$. In neither case is his resulting belief in $$p$$ justified. This shows that a necessary condition for a belief to be justified is that it be produced or generated in a suitable way. Of course, this immediately raises the question: which kinds of belief-forming processes are suitable and which are defective or unsuitable?

One feature that is shared by wishful thinking and confused reasoning is doxastic unreliability. These belief-forming methods commonly generate beliefs that are false rather than true. By contrast, what types of belief-forming processes commonly confer justification? They include standard perceptual processes, remembering, good reasoning, and introspection. What is shared by these processes? What they share is reliability: most of the beliefs they produce are true. (This formulation will be slightly refined later.) Thus, the central proposal made in “What Is Justified Belief?” is that the justifiedness or unjustifiedness of a belief is determined by the reliability or unreliability of the process or processes that cause it. Reliability can here be understood either in a frequency sense (pertaining to what occurs in the actual world) or in a propensity sense (pertaining both to the actual world and to other possible worlds). A belief earns the status of “justified” if it is produced by a process (or series of processes) that has a high truth-ratio. Precisely how high a truth-ratio must be in order to confer justifiedness is left vague, just as the concept of justification itself is vague. The truth-ratio threshold need not be as high as 1.0, but it must surely be greater (presumably, quite a lot greater) than .50.
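The frequency reading of reliability lends itself to a toy calculation. The sketch below is purely illustrative: the sample processes, their truth-ratios, and the 0.75 threshold are invented stand-ins, since Goldman deliberately leaves the actual threshold vague.

```python
# Toy illustration of frequency reliability: the truth-ratio of a
# belief-forming process is the fraction of true beliefs it produces.

def truth_ratio(outcomes):
    """outcomes: list of booleans, True for each true belief produced."""
    return sum(outcomes) / len(outcomes)

def confers_justification(outcomes, threshold=0.75):
    # The threshold must exceed 0.5 and is presumably much higher;
    # 0.75 is an arbitrary stand-in, not Goldman's figure.
    return truth_ratio(outcomes) > threshold

# Invented track records for two processes:
perception = [True] * 19 + [False]      # truth-ratio 0.95
wishful    = [True] * 3 + [False] * 17  # truth-ratio 0.15

print(confers_justification(perception))  # True
print(confers_justification(wishful))     # False
```

On the propensity reading, by contrast, the relevant ratio would range over merely possible as well as actual outputs, so no finite track record would settle it.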

A number of conclusions were drawn from these main points, and refinements were added. One consequence was that process reliabilism is a historical type of theory. For example, a reliable inferential process confers justification on a new belief only if the input beliefs (premises) were themselves justified. How could the justifiedness of these input beliefs have arisen? Presumably, their justifiedness arose because they were produced by earlier applications of reliable processes. Such a chain of processes must ultimately originate from one or more reliable processes that themselves had no doxastic inputs. Perceptual inputs are good candidates for such processes. So, on this approach, justifiedness is often a product of a history of personal cognitive processes. The historical framework of such a process theory contrasts sharply with more traditional epistemological frameworks, such as coherentism and (certain versions of) evidentialism. According to these more traditional theories, causal and historical relations play no role in determining justifiedness. (Though see, e.g., Goldberg 2012 and Fleisher 2019 for views that combine elements of coherentism and reliabilism; for views that integrate evidentialism and reliabilism, see §4.2 below.) More generally, reliabilism marks an important break with “current time-slice” approaches, according to which the epistemic status of a belief at time t depends entirely on the subject’s mental states at t.

These fundamental ideas were spelled out by Goldman in “What Is Justified Belief?” (1979), in a series of principles including base-clause principles and recursive-clause principles. The initial one was (1):

(1)
If S’s belief in p at t is a product of a reliable cognitive belief-forming process (or set of processes), then S’s belief in p at t is justified.

This principle would fit cases of perceptually caused beliefs and other beliefs that make no use of prior doxastic states (as inputs). But inference-generated beliefs require a different principle. When a new belief results from an inference, its justificational status depends not only on the properties of the inferential process, but also on whether the input beliefs themselves were justified. To accommodate this, a slightly more complex principle was introduced:

(2)
If S’s belief in p results from a conditionally reliable belief-dependent process, and if the beliefs on which this process operated in generating S’s belief in p are themselves justified, then S’s new belief in p is also justified.

By philosophical standards, these are not terribly complex principles. Thus, process reliabilism is a comparatively simple and straightforward theory. Such simplicity has usually been viewed as a virtue of the approach. Of course, matters are more complicated than the foregoing principles convey. We will survey some potential complications and refinements of this theory below.
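The recursive structure of principles (1) and (2) can be sketched as a short program. This is an illustrative reconstruction, not anything in Goldman's text; the representation of beliefs and processes as simple records is an assumption made purely for the sketch.

```python
# Sketch of the recursive structure of Goldman's principles (1) and (2).
# A belief records the process that produced it and any doxastic inputs.

from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    reliable: bool                        # for belief-independent processes
    conditionally_reliable: bool = False  # for belief-dependent processes

@dataclass
class Belief:
    content: str
    process: Process
    inputs: list = field(default_factory=list)  # input Beliefs, if any

def justified(b: Belief) -> bool:
    if not b.inputs:
        # Principle (1): a belief-independent process must be reliable.
        return b.process.reliable
    # Principle (2): a belief-dependent process must be conditionally
    # reliable AND all of its input beliefs must themselves be justified.
    return (b.process.conditionally_reliable
            and all(justified(i) for i in b.inputs))

vision = Process("vision", reliable=True)
wishful = Process("wishful thinking", reliable=False)
deduction = Process("deduction", reliable=False, conditionally_reliable=True)

q_and_r = Belief("q and r", vision)            # justified via principle (1)
p1 = Belief("p", deduction, inputs=[q_and_r])  # justified via principle (2)
p2 = Belief("p", wishful)                      # unjustified
```

The recursion mirrors the historical character of the theory: a chain of applications of principle (2) must bottom out in beliefs covered by principle (1), such as perceptual beliefs.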

### 1.2 Justified vs. Unjustified Beliefs: Some Examples

Is it true, as process reliabilism claims, that beliefs formed by reliable processes are (intuitively) considered justified and that beliefs formed by unreliable processes are (intuitively) considered unjustified? Here are some examples of processes that frequently lead to false beliefs: wishful thinking, reliance on emotional attachment, mere hunch or guesswork, and hasty generalization. Beliefs produced by such processes would all be considered, intuitively, unjustified. So there is a high correlation between process-unreliability and unjustifiedness. Similarly, here are some examples of processes that commonly lead to true beliefs: standard perceptual processes, good reasoning, and introspection. Here again there is a strong correlation between process-reliability and justified belief.

We should notice, of course, that justifiedness is not a purely categorical concept. We can and do regard certain beliefs as more certain and more justified than others. Furthermore, our intuitions of comparative justifiedness go along with our beliefs about the comparative reliability of the belief-causing processes (Goldman 1979; 1986: 103–4; 1992; 2008). This comports well with process reliabilism.

## 2. Challenges and Replies

By now, a number of problems have been raised for process reliabilism. Here we review some of the main challenges, focusing on the three that have received the most attention in the literature: the clairvoyance problem (§2.1), the new evil demon problem (§2.2), and the generality problem (§2.3). Afterwards, we will briefly survey some challenges of a more recent vintage (§2.4).

### 2.1 The Clairvoyance Problem

One early challenge to reliabilist theories of justification was advanced by Laurence BonJour (1980), concerning a hypothetical clairvoyant named “Norman”. Norman has a perfectly reliable clairvoyance faculty. But he has no evidence or reasons for or against the general possibility of a clairvoyant power or for or against his possessing such a power. One day Norman’s clairvoyance faculty generates in him a belief that the President is currently in New York City, but with no accompanying perception-like experience, just the bare belief itself. Intuitively, says BonJour, Norman isn’t justified in holding this belief. Yet process reliabilism seems to imply otherwise. Since Norman’s clairvoyant power has a high truth ratio, Norman’s belief about the President must be justified. So reliabilism seems to get this wrong. (Similar examples were offered by Keith Lehrer (1990) and Alvin Plantinga (1993).)

Reliabilists have offered various responses to this challenge, including:

#### Approved List Reliabilism

One response is to opt for a variant of simple process reliabilism, which goes by the names of “two-stage reliabilism” (Goldman 1992) or “approved-list reliabilism” (Fricker 2016). Rather than directly giving an account of what justification is, this approach seeks to give a theory of how ordinary people make justification attributions. Approved-list reliabilists offer the following conjecture about how this works. In a preliminary stage, attributors form opinions about the reliability or unreliability of assorted belief-forming processes, using observation and/or inference to draw conclusions about the track-records of these processes in the actual world. They thereby construct mental lists of reliable and unreliable processes: lists of approved and disapproved processes (respectively). In the second stage, they deploy these lists to make judgments about particular beliefs (actual or hypothetical). If somebody’s belief was caused by a process that is on their approved list—or strongly resembles one on their approved list—they consider it justified. If it is caused by a process on their disapproved list, it is classed as unjustified.

How would approved-list reliabilism explain intuitive judgments in the clairvoyance case? Presumably, an ordinary attributor would not have a clairvoyance process on either of her lists. But she might well have processes like extra-sensory-perception or telekinesis on her list, especially her disapproved list. The process or faculty that Norman uses to arrive at his belief about the President sounds very similar to one of those obscure and suspect powers. Hence, Norman’s belief is intuitively classified as unjustified. This is despite the fact that—as potential attributors are told—Norman’s clairvoyance process is thoroughly reliable.

Approved list reliabilism is a theory of the factors that influence our attributions of epistemic justification. It is thus naturally construed as an attributor theory—a theory of the conditions under which a justification attribution (that is, a sentence of the form, “S is justified in believing p”) is judged to be true or false. In this regard, it parallels attributor theories of knowledge (e.g., DeRose 1992). There are different ways of developing an approved list attributor theory more precisely. For instance, a contextualist implementation might hold that a justification attribution is true if and only if the subject’s belief-forming process belongs to the speaker’s approved list. Alternatively, one could adopt an assessor-relativist implementation, according to which the truth-conditions of justification attributions are relativized to contexts of assessment (cf. MacFarlane 2005). An assessor-relativist version of the approved list might hold that a justification attribution is true at a context of assessment if and only if the subject’s belief-forming process belongs to the assessor’s approved list.

#### Appealing to Primal Systems

A different response to the clairvoyance problem is to concede that being the result of a reliable process is not sufficient for a belief to be prima facie justified. Rather, some further condition must be met. One version of this approach has been developed by Jack Lyons (2009, 2011), who argues that in order for a non-inferential belief to be justified, it must be the result of a “primal system”. Drawing on research in cognitive science, Lyons proposes that a primal system is any cognitive system that meets two conditions: (i) it is “inferentially opaque”—that is, its outputs are not the result of an introspectively accessible train of reasoning; and (ii) it develops as a result of a combination of learning and innate constraints (2009: 144). For Lyons, our perceptual systems are paradigmatic examples of such primal systems.

How does this help with the clairvoyance objection? According to Lyons, BonJour’s presentation of the Norman example invites the assumption that Norman’s clairvoyance was the result of some recent development—e.g., “a recent encounter with radioactive waste” or a “neurosurgical prank”—not the result of some combination of learning and innate constraints (2009: 118–119). Given this assumption, Norman’s clairvoyance-based belief is not the result of a primal system, hence it is not prima facie justified.

A third possible response to the clairvoyance objection—which involves combining reliabilism with evidentialist elements—will be discussed in §4.2 below.

### 2.2 The New Evil Demon Problem

A second problem for process reliabilism is the “new evil-demon problem” (Cohen 1984; Pollock 1984; Feldman 1985; Foley 1985). Imagine a world where an evil demon creates non-veridical perceptions of physical objects in everybody’s minds. All of these perceptions are qualitatively identical to ours, but are false in the world in question. Hence, their perceptual belief-forming processes (as judged by the facts in that world) are unreliable; and their beliefs so caused are unjustified. But since their perceptual experiences—hence evidence—are qualitatively identical to ours, shouldn’t those beliefs in the demon world be justified?

#### Approved List Reliabilism

As with the clairvoyance objection, one response is to move from simple reliabilism to approved list reliabilism (see §2.1 above). As before, approved-list reliabilists will maintain that a potential attributor constructs lists of belief-forming processes, one for approved process types and one for disapproved types. Perceptual processes (of various sorts) would be on the approved list. Since the people in the evil-demon case use perceptual processes that would be on the approved list, an attributor would consider their resulting beliefs justified—even though they are told that the perceptual processes in the evil-demon world are unreliable.

#### Indexical Reliabilism

A number of philosophers have explored related ideas for solving the New Evil Demon Problem. One similar strategy starts by distinguishing two different ways a process can be reliable. Suppose we inhabit the actual world (@), and we’re evaluating a subject S who inhabits some other world $$w_s$$. Then there are two different things we could mean when we say that S’s belief-forming process is reliable: we could mean that it’s reliable relative to @, or we could mean that it’s reliable relative to $$w_s$$ (Sosa 1993, 2001). Comesaña (2002) uses this distinction to provide a solution to the new evil demon problem cast in the framework of two-dimensional semantics (Stalnaker 1999). On Comesaña’s proposal, the sentence, “S is justified in believing p”, has two readings: (i) that S’s belief-forming process is reliable relative to @, and (ii) that S’s belief-forming process is reliable relative to $$w_s$$. If $$w_s$$ is a demon world, then reading (i) will be true, while reading (ii) will be false. While Comesaña’s use of two-dimensional semantics has drawn criticism (Ball & Blome-Tillmann 2013), the basic strategy of solving the New Evil Demon problem by appealing to two types of reliability remains popular. For discussion of a related approach, see the discussion of normality reliabilism in §3.1 below.
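The two relativizations of reliability can be made concrete in a toy sketch. The world labels and truth-ratios below are invented for illustration; the point is only that one and the same process type receives different reliability verdicts depending on which world the ratio is computed over.

```python
# Sketch: the same process type has different truth-ratios relative to
# different worlds (all labels and numbers are invented illustrations).

truth_ratio_by_world = {
    ("perception", "@"): 0.95,        # actual world: reliable
    ("perception", "w_demon"): 0.02,  # demon world: massively unreliable
}

def reliable_relative_to(process, world, threshold=0.75):
    # Threshold is an arbitrary stand-in for the vague justification bar.
    return truth_ratio_by_world[(process, world)] > threshold

# The two readings of "S is justified in believing p" for a demon-world
# subject whose belief is formed by perception:
reading_i  = reliable_relative_to("perception", "@")        # reliable at @
reading_ii = reliable_relative_to("perception", "w_demon")  # not at w_demon
```

On this picture, the intuition that the demon victim is justified tracks reading (i), while the verdict of simple reliabilism tracks reading (ii).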

#### Denying justification in Demon Worlds

The foregoing responses seek to accommodate the claim that the person in the demon world has justified beliefs (call this the “New Evil Demon Claim”). Another response is to reject this claim. One way of motivating the New Evil Demon Claim is by appealing to the premise that the person in the new evil demon scenario has the very same evidence that we do at the actual world. However, some philosophers reject this premise in favor of an externalist conception of evidence. For example, Williamson (2000: ch. 9) suggests that an agent’s evidence is just their knowledge. Since the victim of an evil demon knows less than we do, it follows that they have less evidence than we do. As Schellenberg (2016) emphasizes, this sort of view is compatible with claiming that there is some evidence that both we and the demon victim have in common. It’s just that there is a further body of evidence that we possess but the demon victim lacks. This sort of response raises larger questions about what reliabilists should say about the nature of evidence; for further discussion, see §4.2 below.

Philosophers who go this route typically emphasize that the beliefs of the person at the demon world have some positive status, even if this status falls short of justification. For example, Lyons (2013) points out that the inferential beliefs of the demon’s victim are formed by conditionally reliable processes, which is an epistemic good in its own right. Other philosophers have claimed that the beliefs of the demon’s victim are blameless or excusable, even though they are not justified (e.g., Pritchard 2012a). This issue is closely related to a growing recent literature on the relationship between epistemic justification and epistemic blamelessness (see, e.g., Kelp & Simion 2017; Brown 2018; D. Greco forthcoming; Williamson forthcoming).

### 2.3 The Generality Problem

Perhaps the most widely discussed problem for process reliabilism is the generality problem. Originally anticipated by Goldman in “What Is Justified Belief?”, it has been pressed more systematically by Feldman (1985) and Conee and Feldman (1998). Any particular belief is the product of a token causal process in the subject’s mind/brain, which occurs at a particular time and place. Such a process token can be “typed”, however, in many broader or narrower ways. Each type will have its own associated level of reliability, often distinct from the levels of reliability of other types it instantiates. Which repeatable type should be selected for purposes of assigning a reliability number to the process token? If no (unique) type can be selected, what establishes the justificatory status of the resulting belief?

Consider a concrete example (Conee & Feldman 1998): Smith sees a maple tree outside his window one sunny afternoon and forms the belief that there is a maple tree near his house. How should we type his belief-forming process? Is the relevant type vision? Or perhaps something more fine-grained, such as visual experience of a maple tree on a sunny afternoon? Or is it something more coarse-grained, such as perception? The worry is that Smith’s token belief-forming process seems to instantiate all these process types (and many more). So singling out any single one of them as the “right” way of typing the process seems arbitrary.
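One way to see the worry is with a toy calculation in which each admissible typing of Smith's token process carries its own truth-ratio. All of the numbers (and the 0.75 threshold) are invented for illustration; the point is that the verdict depends on the typing, yet nothing in the token privileges one typing.

```python
# One token belief-forming process instantiates many repeatable types,
# each with its own truth-ratio (all numbers here are invented).

threshold = 0.75  # arbitrary stand-in for the vague justification bar

candidate_types = {
    "perception": 0.80,
    "vision": 0.93,
    "visual experience of a maple tree on a sunny afternoon": 0.999,
    "forming a belief about nearby objects, however caused": 0.55,
}

# Each admissible typing delivers its own verdict on the very same token:
verdicts = {t: ratio > threshold for t, ratio in candidate_types.items()}

# The verdicts diverge (some typings clear the threshold, one does not),
# so the choice of typing is doing real justificatory work.
```

Since the token instantiates every one of these types, the reliabilist owes an account of which type's truth-ratio fixes the belief's justificatory status.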

Reliabilists have proposed various strategies for dealing with the generality problem. Some of the main responses include:

#### Psychological Approaches (Alston 1995; Beebe 2004; Kampa 2018; Lyons 2019)

Not every way of describing a belief-forming process carves psychological reality at its joints. Consider, for example, the process type, forming a perceptual belief while wearing green socks. Intuitively, this is an “inappropriate” or “irrelevant” way of describing Smith’s belief-forming process. But why is it inappropriate? According to psychological approaches, it is inappropriate because it mentions features of Smith’s circumstances that play no causal role in the psychological processes responsible for his belief.

In their 1998 discussion of the generality problem, Conee and Feldman acknowledge that turning to psychology will help winnow down the number of eligible process types. But they express skepticism that it will always yield a unique process type. Why won’t there be multiple psychologically “real” process types responsible for a given belief token? Subsequent developments of the psychological approach have tried to address this worry. For example, Beebe (2004) proposes that the relevant process type will always be an information-processing procedure or algorithm. Of course, there will often be indefinitely many types of this kind, of varying reliability. To pick out the appropriate type, Beebe offers the following instructions. Let A be the broadest such type. The relevant process type is then the broadest objectively homogeneous subclass of A within which the token process falls, where a class is objectively homogeneous if no statistically relevant partition of it can be effected. Beebe’s proposal has been challenged by Dutant and Olsson (2013), who argue that it faces a dilemma: either it does not always yield a unique process type or it leads to trivialization by collapsing reliability into truth. For a more recent variant of Beebe’s solution, aimed at addressing Dutant and Olsson’s concerns, see Kampa (2018).

Perhaps the most fully developed version of a psychological approach to the generality problem comes from Lyons (2019). Lyons agrees with Beebe that the relevant psychological process types are information-processing algorithms. But Lyons additionally suggests that these algorithms need to be relativized to parameters, understood as psychological variables that affect processing in a law-like way. For example, lighting conditions are a parameter that affects the speed and accuracy of visual information processing. As Lyons develops this view, the appropriate process type for a given belief token B is provided by the complete algorithmic specification of every psychological process that is causally relevant to B, along with the parameter values for all of these processes. As Lyons notes, one upshot of this approach is that the relevant process types will typically be very fine-grained. Smith’s belief-forming process type will not be vision, but something more like: visual recognition of objects based on retinal stimulus of sort S, in lighting conditions C, with attention distributed in manner M.

#### Parrying Responses (Comesaña 2006; Bishop 2010; Tolly 2017)

A different approach to the generality problem is to insist that a version of the problem arises for any adequate epistemology, not just reliabilists. A version of this “parrying response” has been developed by Juan Comesaña (2006), who argues that every account of doxastic justification needs to appeal to the basing relation. This is true even of evidentialist accounts: after all, evidentialists maintain that in order for a belief B to be doxastically justified, B needs to be based on a body of evidence that supports B. (More on evidentialism in §4.2 below.) But, Comesaña argues, any attempt to characterize the basing relation will run into the generality problem, or something very similar to it. Moreover, Comesaña contends, if evidentialists are willing to simply take for granted some notion of basing in their theory, then there is no reason why reliabilists cannot follow suit, and deploy the basing relation in their solution to the generality problem.

A rather different parrying response has been defended by Michael Bishop (2010), who argues that the problem will arise for any theory that allows for the possibility of “reflective justification”—that is, having a belief B that is justified on the basis of one’s knowledge that one formed B via a reliable form of reasoning. For further discussion, see Matheson (2015) and Tolly (2017).

Both Comesaña and Bishop focus on the idea that all epistemologists face a version of the generality problem. A related but distinct idea is to insist that versions of the generality problem crop up everywhere, not just in epistemology. This response has been developed recently by Goldman (forthcoming), who points out that we regularly evaluate the reliability of all sorts of processes, not all of them cognitive. Goldman illustrates with the example of Geraldine, who enjoys shooting hoops at her local gym. Geraldine’s performance is a function of many factors, including how she focuses her eyes on the target hoop, how she grips the ball, how she selects an angle to hit the backboard, etc. Now consider Geraldine’s friend, Henry, who watches as Geraldine consistently sinks more than 80% of her shots, using roughly the same technique each time. Henry may not be able to provide a detailed specification of Geraldine’s hoop-shooting process. Nonetheless, it seems he is in a position to conclude that Geraldine uses some process type that is responsible for her frequent attainment of her athletic goal, and that this process is fairly reliable.

Goldman (forthcoming) argues that these considerations carry two lessons. First, they show that the generality problem is very general, since some version of it arises in non-epistemic contexts. Second, Goldman takes these considerations to cast doubt on the common assumption—voiced explicitly by Conee and Feldman (1998: 2–3)—that an adequate solution to the generality problem will need to specify a unique process type responsible for any and every token belief. According to Goldman, an evaluator can correctly (and justifiably) describe a process as reliable, without being able to specify in any detail the sort of process type at issue.

#### “Common Sense” Typing Approaches (Goldman 1979; Jönsson 2013; Olsson 2016)

Yet another strategy is to switch tack and inquire into how ordinary people type belief-forming processes. Suppose we ask a person on the street, “How did Smith form his belief?” Chances are, they’ll answer “vision”, not “visual experience of a maple tree on a Tuesday afternoon while wearing…” Inspired by this observation, common sense approaches seek to solve the generality problem in two steps. Step One: develop a psychological theory of how ordinary people type belief-forming processes. Step Two: use these “common sense” typing methods to winnow down the range of candidate process types.

A version of this approach has been elaborated by Erik Olsson (2016), who appeals to a well-supported psychological theory about conceptualization called basic-level theory, developed by Eleanor Rosch in the 1970s (Rosch et al. 1976). Rosch and her collaborators studied the deployment of taxonomically related concepts like “animal”, “dog”, and “labrador”. In such a taxonomy, one term is a superordinate concept (“animal”), another is an intermediate-level concept (“dog”), and a third is a subordinate concept (“labrador”). It turns out that intermediate-level concepts are overwhelmingly preferred in free naming tasks (e.g., “What would you call this?”). For example, in one study Rosch et al. found that, out of 540 responses, 530 to 533 converged on the same intermediate-level word for naming a physical object. Olsson suggests that ordinary people might similarly tend to converge on an intermediate-level concept when typing belief-forming processes. This conjecture has been empirically supported in work by Jönsson (2013). Jönsson showed subjects clips in which characters arrived at various conclusions, and then asked the subjects to specify how the characters arrived at their beliefs. Subjects converged on the choice of verbs describing the belief-formation processes, even without linguistic cues to guide the process-typing task. Jönsson also found a correlation between subjects’ estimates of the reliability of the characters’ belief-forming processes and subjects’ judgments about whether the characters were justified in holding the beliefs in question. Thus there is some evidence that folk psychological propensities lead us to converge on belief-typing tasks, and that our reliability assessments track our justification judgments.

### 2.4 Further Challenges to Reliabilism

In addition to these three main objections, a number of further challenges have been raised for reliabilism, including the three that follow.

#### The Bootstrapping Problem

Consider the following case, due to Vogel (2000): Roxanne is a driver who believes whatever her gas gauge says about the state of her fuel tank, although she has no antecedent reasons to believe it is reliable. So Roxanne often looks at the gauge and arrives at conjunctive beliefs like the following: On this occasion the gauge reads “Full” & the tank is full. Now, the perceptual process by which she arrives at the belief that the gauge reads “Full” is reliable, and so is the process by which she arrives at the belief that the tank is full (given that the gauge functions reliably). According to reliabilism, therefore, her belief in the indicated conjunction should be justified. Next, Roxanne deduces the proposition, On this occasion, the gauge is reporting accurately. From multiple instances of this reasoning, she induces the further conclusion: The gauge is generally reliable. Since deduction and induction are reliable processes, Roxanne must also be justified in believing that her gas gauge is reliable. But is this verdict plausible? Definitely not, says Vogel, because such bootstrapping amounts to a vicious form of epistemic circularity.

In response to this problem, we should start by noting that this problem is not specific to reliabilism (Cohen 2002). Indeed, the problem arises for all theories that allow for “basic justification”—that is, justification that is obtained via some process or method X without antecedent justification for believing that X is reliable. As van Cleve (2003) forcefully argues, theories that do not allow for basic justification seem to lead to wide-ranging skepticism.

If we wish to allow for basic justification, can we still give a principled explanation of why some forms of bootstrapping seem illegitimate (for example, the case of Roxanne)? This is an area of active research. One suggestion is that illegitimate forms of bootstrapping involve No Lose Investigations. Roughly, a No Lose Investigation into a hypothesis $$h$$ is an investigation that could never, in principle, count against $$h$$. (For suggestions along these lines, see Kornblith 2009; Titelbaum 2010; Douven & Kelp 2013.) Another suggestion is that illegitimate forms of bootstrapping all involve epistemic feedback (Weisberg 2010). Suppose an agent believes premises $$p_1$$ … $$p_n$$ from which she infers lemmas $$l_1$$ … $$l_n$$, from which she in turn infers a conclusion $$c$$. Epistemic feedback is present when the probability of $$c$$ conditional on $$l_1$$ … $$l_n$$ is greater than the probability of $$c$$ conditional on $$p_1$$ … $$p_n$$. Roxanne’s case can be understood in these terms. She first believes various premises about the gas gauge readings (e.g., The gas gauge read full at time $$t$$; the gauge read half-empty at $$t^*$$). She then infers various lemmas about the state of the gas tank (e.g., The tank was full at $$t$$; the tank was half-empty at $$t^*$$). Finally, by conjoining these premises with these lemmas, she comes to believe the conclusion: The gas gauge is reliable. The probability of this conclusion conditional on just the lemmas (that is, the beliefs about the state of the gas tank) is higher than the probability of the conclusion conditional on just the premises (that is, the beliefs about the gauge readings). Perhaps by imposing a ban on either No Lose Investigations or epistemic feedback (or both), we can account for the intuition about Roxanne, while still allowing for basic justification. (For an overview of various responses to the bootstrapping problem, see Weisberg 2012.)

#### The Problem of Defeat

Another challenge for reliabilism comes from the phenomenon of epistemic defeat. Consider a case where an agent reliably forms a belief that $$p$$ at some initial time, and later receives some evidence (perhaps misleading evidence) indicating that $$p$$ is false. For example, suppose Alice looks at a red vase in good lighting conditions, and forms the belief that the vase is red. A friend comes along and tells her that she is actually looking at a white vase illuminated by red lights. It seems that the receipt of this testimony renders her belief unjustified, even though it was originally formed via a reliable process. Can reliabilists accommodate this verdict?

In “What is Justified Belief?”, Goldman acknowledged this sort of problem, and suggested adding a “No Defeaters” condition to the theory of epistemic justification. There, he elucidated an account of defeat in counterfactual terms. Roughly, your belief B is defeated provided there is some reliable (or conditionally reliable) process X available to you such that, if you were to use X, you would no longer hold B. On this view, Alice’s belief is defeated because if she were to use the process believing her friend’s testimony, she would no longer hold B. However, this account of defeat has been subject to various challenges. For example, Beddor (2015) argues that it commits a version of the “counterfactual fallacy”, rendering it susceptible to counterexamples. And both Lyons (2009: 124) and Beddor (2021) point out that it has trouble accommodating cases where one defeater is defeated by another.

But even if we reject an account of defeat along these lines, the reliabilist may have other options for dealing with the problem. For example, some might suggest that we can handle this problem by appropriately typing the agent’s belief. Perhaps after Alice receives her friend’s testimony, her belief is no longer the result of vision alone, but rather of vision while disregarding testimony that vision is unreliable. (See Constantin 2020; Nagel 2021 for versions of this response.) An important question for this response is whether it is consistent with our most promising theories of process-typing; in this regard, the problem of defeat is connected to the generality problem. Another strategy for handling defeat is to modify reliabilism by incorporating some evidentialist elements into the theory. For now, we will defer a more detailed discussion of this maneuver to §4.2 below.

It is also worth noting that the reliabilist’s treatment of defeat may bear on other problems facing reliabilism. Consider again the clairvoyance objection. Yet another response to this objection, suggested briefly by Goldman (1986: 112), is that Norman’s belief is prima facie justified, but it is defeated. Whether this response is viable will depend on the details of the reliabilist’s theory of defeat.

#### The Temporality Problem

A more recent challenge to reliabilism, due to Frise (2018), is the “temporality problem”. Reliabilists maintain that the justificatory status of a belief depends on the reliability of the process responsible for that belief. But, Frise points out, a process can be more reliable at one time than at another. To give one of Frise’s examples, weather forecasting has improved over time. So the process type, forming a belief based on the forecast, is more reliable now than it was twenty years ago. This raises a question about the temporal parameters that we should use when evaluating reliability. Suppose we are evaluating whether a belief B is justified at a time t. Do we restrict our attention to whether the belief-forming process responsible for B is reliable at t? Or do we look at whether it has been reliable at all times up until t? Or all times that are temporally close to t? Or something else?

This issue has not received much attention from reliabilists, though see Tolly (2019) for an exception. One question that merits further investigation is whether the reliabilist’s preferred solution to the generality problem will generalize to help with the temporality problem. For example, according to the “common sense” approach described above, there is considerable convergence in ordinary people’s judgments about how to type belief-forming processes. If this is right, is there also convergence in ordinary people’s judgments about which times are relevant to assessing the reliability of a given process? If so, could we use this common sense consensus to make progress on the temporality problem?

## 3. New Developments for Reliabilism

We now consider some relatively recent developments in reliabilist epistemology. The next section turns to various cousins and spin-offs of reliabilism.

### 3.1 Normality Reliabilism

As we saw in §2.2, a crucial question facing reliability theories concerns the domain in which a process is assessed for reliability. Recently, some writers have explored the idea that we should take this domain to be the normal conditions for the use of a given process.

For example, Jarrett Leplin (2007, 2009) rejects the common view of reliable processes as those that produce a high ratio of truths to falsehoods. In its place, Leplin advances a conception of reliability according to which a process/method is reliable if it would never produce or sustain false beliefs under normal conditions (see Leplin 2007: 33; 2009: 34–35). A similar proposal has been developed by Peter Graham (e.g., Graham 2012, 2020). Graham draws on an etiological account of function due to Larry Wright (1973) and Ruth Millikan (1984), among others, to advance a theory of epistemic entitlement in terms of proper functioning. Then, like Leplin, Graham tries to use a normality approach to address some familiar counterexamples to process reliabilism.

Why favor a normality-based version of reliabilism? One motivation comes from the new evil demon problem. The victim of the evil demon forms beliefs using perception—a process that is unreliable at their world. But, the response runs, the normal conditions for perception are free from evil demons and the like. At these worlds, perception is indeed reliable.

There remains work to be done in developing a normality-based version of reliabilism, particularly when it comes to elucidating the notion of “normal conditions”. But at the very least normality-based approaches offer a welcome variant on more traditional forms of reliabilism. Normality-based approaches also parallel recent developments in the analysis of knowledge. For example, D. Greco (2014), taking inspiration from an idea in Dretske (1981), develops an information-theoretic analysis of knowledge, which he cashes out in terms of normal conditions. In a similar vein, both Goodman and Salow (2018) and Beddor and Pavese (2020) have proposed “normal conditions” variants of a safety condition on knowledge (of the sort discussed in §1).

### 3.2 Beyond Individualistic Reliabilism

Most versions of reliabilism are “individualistic” in at least two senses. First, they assume that the justificatory status of an agent’s belief depends entirely on the reliability of processes that take place within that agent’s head. Second, they assume that the bearers of justificatory status are the beliefs of individual agents, rather than groups. Recently, several philosophers have developed versions of reliabilism that revise or abandon these individualistic assumptions.

Sanford Goldberg (2010) advances a distinctive view of testimonial belief that abandons the first individualistic assumption. Goldberg invites us to imagine that an informant (A) forms a perceptual belief that p, which they convey via testimony to an audience (B), who then comes to believe p. However, A’s perceptual belief that p was formed in a way that falls just shy of the threshold for justification. Goldberg contends that B’s testimony-based belief that p does not amount to knowledge. What’s more, the reason why it does not amount to knowledge is that it is not justified. After all, the belief could well be true, and free from Gettierization. But if B’s belief that p is not justified, this justificatory failing is not due to any unreliability in B’s mental processing of A’s testimony. Rather, it’s due to the fact that A’s original belief that p was insufficiently justified. Goldberg concludes that the justificatory status of testimonial beliefs depends—in part—on the reliability of the testifier’s cognitive processes.

Turn next to the second individualistic assumption: the bearers of justificatory status are always the beliefs of individual agents. A new movement in social epistemology has called this assumption into question. This movement starts with the observation that we routinely ascribe beliefs to groups. For example, we talk about whether the jury believes the defendant is guilty, or what the Committee on Climate Change believes to be the causes of global warming. A burgeoning body of literature investigates the nature of group belief, and the ways in which group belief depends on the beliefs of the group’s members. For important work on this topic, see Gilbert (1989); List and Pettit (2011); for an overview, see the entry on belief merging and judgment aggregation.

In addition to inquiring into the conditions under which a group holds a particular belief, we can also inquire into the conditions under which group beliefs are justified. Some social epistemologists have sought to answer this question in reliabilist terms. For example, Goldman (2014) develops a view of group justification that is modelled on the way inference transmits justification within an individual agent. According to process reliabilism for individuals, inferential justification depends on two factors: (a) the justifiedness of the premise beliefs and (b) the conditional reliability of the inferential process used. Similarly, Goldman suggests, group justification depends on two analogous factors: (a) the justifiedness of the members’ beliefs and (b) the conditional reliability of the belief aggregation function (a function that specifies the way in which the group’s beliefs depend on the members’ beliefs). The details of Goldman’s proposal have been critically discussed by both Lackey (2016) and Dunn (forthcoming), both of whom propose alternative accounts of the conditions under which group beliefs are justified.
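
To make the notion of a belief aggregation function concrete, here is a minimal sketch in Python. The `majority_aggregate` function is our own illustrative example, not Goldman’s; it implements just one simple aggregation rule (the group believes p iff a strict majority of its members do):

```python
def majority_aggregate(member_beliefs):
    """One simple belief aggregation function: the group believes p
    just in case a strict majority of its members believe p.
    `member_beliefs` is a list of booleans, one per group member."""
    yes_votes = sum(1 for b in member_beliefs if b)
    return yes_votes > len(member_beliefs) / 2

# A three-member committee: two members believe p, one does not.
print(majority_aggregate([True, True, False]))   # True: the group believes p
print(majority_aggregate([True, False, False]))  # False: it does not
```

On Goldman’s proposal, what matters for group justification is not the rule itself but the conditional reliability of whichever aggregation function the group in fact uses: roughly, how often the resulting group beliefs are true, given that the members’ input beliefs are justified.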

### 3.3 Reliabilism and Degrees of Belief

Historically, reliabilism has been offered as an account of the justificatory status of full or outright belief. However, it’s widely thought that beliefs come in degrees: a person might believe that it’s sunny and also believe that it’s Monday, but have a higher degree of belief in the former than the latter. This raises the question: can reliabilism be extended to provide an account of the justificatory status of degrees of belief?

Formal epistemologists have long been interested in different “scoring rules”—functions that measure the accuracy or inaccuracy of degrees of belief (hereafter, credences). For example, one widely discussed scoring rule is the Brier score (Brier 1950). Let $$C(p)$$ be an agent’s credence in $$p$$; let $$T(p)$$ be $$p$$’s indicator function, which equals 1 if $$p$$ is true, and 0 if $$p$$ is false. $$C(p)$$’s Brier score is calculated by the formula:

$$(C(p)-T(p))^2$$

Thus, a credence of 1 in a true proposition will get a Brier score of 0—the best score possible. A credence of 1 in a false proposition will get a Brier score of 1—the worst score possible. An intermediate credence of .6 will get a Brier score of .16 if the proposition is true, and .36 if it is false.
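
The arithmetic just described can be checked with a few lines of Python (the `brier` helper is our own name for the illustration):

```python
def brier(credence, truth):
    """Brier score: the squared distance between a credence and the
    proposition's indicator value (1 if true, 0 if false).
    Lower is better: 0 is perfect, 1 is maximally inaccurate."""
    indicator = 1.0 if truth else 0.0
    return (credence - indicator) ** 2

print(brier(1.0, True))    # 0.0 -- best possible score
print(brier(1.0, False))   # 1.0 -- worst possible score
print(brier(0.6, True))    # approximately 0.16
print(brier(0.6, False))   # approximately 0.36
```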

Given a particular scoring rule R, we can develop a measure of process reliability (Dunn 2015; Tang 2016; Pettigrew 2021). Let X be some credence-forming process: that is, a process that outputs credences in a range of propositions. We can use R to score all of the credences that X produces. Average all of these scores, and we have a measure of X’s degree of reliability. Process reliabilists can then use this measure of reliability to give an account of justification for credences: a credence is (prima facie) justified iff it is produced by a reliable credence-forming process.
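
The averaging procedure can be sketched as follows. This is a toy illustration, with the Brier score standing in for the rule R; the function names and the sample history are invented for the example:

```python
def brier(credence, truth):
    """Brier score of a single credence (0 = perfect, 1 = worst)."""
    indicator = 1.0 if truth else 0.0
    return (credence - indicator) ** 2

def process_reliability(outputs, score=brier):
    """Measure a credence-forming process's reliability as the average
    score of the credences it has produced. `outputs` is a list of
    (credence, truth-value) pairs; lower averages mean more reliable."""
    return sum(score(c, t) for c, t in outputs) / len(outputs)

# A hypothetical process that assigns high credences mostly to truths:
history = [(0.9, True), (0.8, True), (0.7, False), (0.9, True)]
print(process_reliability(history))  # a low average score: fairly reliable
```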

What is the most suitable scoring rule for process reliabilism to use? Recent work has begun to tackle this question. In what follows, we confine ourselves to discussing two particularly prominent scoring rules—the Brier score and a calibration score.

Given its prominence in the literature, the Brier score is a natural option. But using the Brier score to measure the reliability of credence-forming processes faces challenges. For example, Dunn (2015) and Tang (2016) point out that if the Brier score is used, a credence-forming process that only outputs mid-level credences (say, a credence of .6) will never qualify as highly reliable; hence the credences it produces will never count as highly justified. Both Dunn and Tang object to this consequence. For instance, Tang argues that sometimes a particular input requires having a mid-level credence. If I have a vague visual experience of the silhouette of a horse, then it seems I should only have a mid-level credence that there is a horse in front of me: a credence of .6 in this proposition might well be justified, whereas a credence of 1 or 0 would not.

Another option is to measure the reliability of credence-forming processes using a calibration score. To see what it means for a credal state to be well-calibrated, consider the following example from van Fraassen:

Consider a weather forecaster who says in the morning that the probability of rain equals .8. That day it either rains or does not. How good a forecaster is he? Clearly, to evaluate him we must look at his performance over a longer period of time. Calibration is a measure of agreement between judgments and actual frequencies… This forecaster was perfectly calibrated over the past year, for example, if, for every number r, the proportion of rainy days among those days on which he announced probability r for rain, equaled r. (van Fraassen 1984: 245)

According to the calibration approach, a credence is justified iff it is produced by a well-calibrated process. This avoids the objection to using the Brier score: after all, a credence-forming process that produces mid-level credences can still be well-calibrated.
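
The calibration idea lends itself to a simple sketch (our own illustrative implementation, not drawn from the literature): `calibration_error` measures the average gap between each credence level and the corresponding frequency of truths, so 0 means perfect calibration in van Fraassen’s sense.

```python
from collections import defaultdict

def calibration_error(outputs):
    """Average absolute gap between each credence level r and the
    actual frequency of truths among propositions assigned credence r.
    `outputs` is a list of (credence, truth-value) pairs;
    0.0 means the process is perfectly calibrated."""
    buckets = defaultdict(list)
    for credence, truth in outputs:
        buckets[credence].append(1.0 if truth else 0.0)
    gaps = [abs(r - sum(ts) / len(ts)) for r, ts in buckets.items()]
    return sum(gaps) / len(gaps)

# A process that only ever outputs the mid-level credence .6, assigned
# to propositions that turn out true 60% of the time, is perfectly
# calibrated -- even though its average Brier score is middling.
history = [(0.6, True), (0.6, True), (0.6, True), (0.6, False), (0.6, False)]
print(calibration_error(history))  # 0.0
```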

However, the calibration approach has also elicited criticism. Goldman (1986) asks us to imagine an agent A, 70% of whose opinions turn out to be true. A can achieve a perfectly calibrated credence function by adopting a .7 credence in every proposition about which she has an opinion. However, Goldman argues that it’s wrong to automatically conclude that A’s credal state is perfectly reliable. If A has no good reason for adopting a .7 credence in many of the propositions in question, then her credal state shouldn’t count as justified. Dunn (2015) defends the calibration approach, arguing that the relevant question is whether the process that produced A’s credal state is reliable. In order to answer this question, it’s not enough to look at the truth-ratio of A’s opinions at the actual world; rather, we should look across a range of nearby worlds. If it’s just a matter of chance that 70% of the propositions A has an opinion about are true, then by looking at the truth-values of A’s opinions at nearby worlds the calibration approach will be able to avoid the counterintuitive consequence that A’s credal state is perfectly reliable.

Tang (2016) objects to the calibration approach on the grounds that a credence-forming process can be well-calibrated even though that process is insensitive to relevant evidence. In light of the perceived shortcomings of the calibration approach (and those of alternative scoring rules), Tang proposes a synthesis of reliabilism and evidentialism, where evidentialism can roughly be understood as the view that a belief’s justification is determined by how well it is supported by the believer’s evidence. According to Tang’s proposal, a credence $$C(p)$$ is justified only if it is based on some ground $$g$$, such that the objective probability of the credence having a true content given $$g$$ approximates $$C(p)$$. More recently, Pettigrew (2021) has argued that these two approaches should not be viewed as competitors at all. Pettigrew suggests that we can develop a version of the calibration approach that makes reference to the agent’s evidence. Once we do, this version turns out to be extensionally equivalent to a version of Tang’s approach. Both Tang and Pettigrew’s approaches to this issue can thus be viewed as syntheses of reliabilism and evidentialism, to be discussed in more detail in §4.2 below.

Only recently have philosophers started to systematically explore the possibility of using scoring rules to provide a reliabilist theory of credal justification. Given its position at the intersection of traditional and formal epistemology, this will likely prove to be a rich and important area of ongoing research.

## 4. Cousins and Spin-offs of Reliabilism

A number of theories have “branched off” from process reliabilism, borrowing some key ideas but parting company with respect to others. This section discusses two such cousins of process reliabilism: virtue reliabilism and syntheses of reliabilism and evidentialism.

### 4.1 Virtue Reliabilism

As its label suggests, virtue reliabilism is a branch of virtue epistemology that emerged in the mid-1980s in the wake of process reliabilism and shares some significant features with it. In particular, one of its central theoretical notions, that of an epistemic competence, resembles that of a reliable belief-forming process type. And its notion of the exercise of an epistemic competence resembles that of a token of a reliable process. Leading proponents of virtue reliabilism include Ernest Sosa (1991, 2007, 2010, 2015), John Greco (1999, 2010) and Duncan Pritchard (2012b). Here we will focus primarily on Sosa’s version.

Most virtue reliabilists do not explicitly use the notion of a “reliable process”, preferring instead the notions of “competence”, “virtue”, “skill” or “ability”. How should we understand these notions? Sosa often characterizes competences in terms of dispositions, for instance:

A competence is a certain sort of disposition to succeed when you try. So, exercise of a competence involves aiming at a certain outcome. It is a competence in part because it is a disposition to succeed reliably enough when one makes such attempts… It is thus tied to a conditional of the form: if one tried to $$\varphi$$, one would (likely enough) succeed. (2015: 96)

Epistemic competences—the sort of competence that is relevant to epistemology—are dispositions of a specific variety: dispositions to arrive at the truth.

One question that arises for such accounts of competence is how we are to understand the dispositions in question. Are they general dispositions of an agent to arrive at the truth about some matter? Or are they to be understood as implicitly relativized to belief-forming processes or methods, in which case an epistemic competence is really of the form: a disposition to arrive at the truth when employing process X?

From a process reliabilist perspective, it’s necessary to relativize the dispositions to belief-forming processes or methods. After all, process reliabilists will insist that in order to know whether an agent’s belief that p is justified or counts as knowledge, it’s not enough to know whether the agent is generally disposed to arrive at truths about p-related matters. Instead, we’ll need to know whether the agent’s particular belief that p was the result of a reliable process. (After all, an agent could form the belief that p via an ultra-reliable process, even if she’s generally disposed to form beliefs about p-related matters in highly unreliable ways.) In effect, committed process reliabilists will suggest that virtue reliabilists face a dilemma: either epistemic competences are general dispositions of the agent, in which case they won’t be able to perform the various jobs required of them (specifically, explaining whether a belief is justified, or amounts to knowledge), or they are implicitly relativized to processes, in which case epistemic competences are not significantly different from reliable belief-forming processes. In the latter case, epistemic competences “collapse” into reliable processes.

In at least some discussions of epistemic competences, virtue reliabilists indicate a willingness to relativize epistemic competences to processes. For example, Sosa describes good eyesight and color vision as paradigmatic epistemic competences (Sosa 1991: 271; 2010: 467)—both of which are also standard examples of reliable processes. If epistemic competences are understood as involving reliable processes, then virtue reliabilism inherits many of the challenges facing process reliabilism—in particular, the generality problem. (In virtue reliabilist terms, this will amount to the question: “How exactly should we type epistemic competences?”) Of course, this result is unsurprising if the generality problem is a problem for everyone who tries to give an adequate theory of justified belief, as proponents of the “parrying response” suggest (§2.3).

Virtue reliabilism differs from traditional process reliabilism in its choice of analysandum. Historically process reliabilists have focused on giving an account of justification; by contrast, virtue reliabilists have focused on giving an account of knowledge. However, one certainly could try to extend one’s virtue reliabilism to justification. Indeed, if one assumes that knowledge entails justification, being a virtue reliabilist about the former seems to lead naturally to virtue reliabilism about the latter. And if epistemic competences are understood as reliable processes, the resulting virtue reliabilist account of justification would presumably amount to a version of process reliabilism.

Let us turn now to virtue reliabilist accounts of knowledge. How do virtue reliabilists propose to understand knowledge in terms of epistemic competences? There are a variety of slightly different proposals in the literature (J. Greco 2009, 2010; Sosa 2007, 2015; Turri 2011). However, virtue reliabilists typically understand knowledge as involving some sort of explanatory relation between having a true belief and the exercise of an epistemic competence. For instance, Sosa (2007) holds that S knows p if and only if S aptly believes p, where S’s belief is apt if and only if it is correct because of the exercise of an epistemic competence (see also J. Greco 2009, 2010 for a closely related account). More recently, Sosa (2010, 2015) defends a similar account couched in terms of “manifestation”: knowledge is belief whose correctness manifests the agent’s epistemic competence (see Turri 2011 for a similar account).

How do such accounts handle Gettier cases? Sosa (2007: 94–97) discusses Lehrer’s (1965) Nogot/Havit case, in which a subject S truly believes that someone here owns a Ford, but does so only on the basis of Nogot’s misleading testimony. Sosa claims that while S holds this belief because of the exercise of an epistemic competence, S’s belief isn’t correct because of the exercise of an epistemic competence. This explanation raises important questions about how to understand the relevant “because of” relation here: what exactly is the difference between a true belief being held because of an epistemic competence, and a belief being correct because of an epistemic competence? Other ways of fleshing out the details of a virtue reliabilist analysis raise similar questions.

Even if a virtue reliabilist account of knowledge can handle some Gettier cases, there remains a question of whether it will be able to handle the full spectrum of Gettier cases. One case that has been thought to cause particular trouble for virtue reliabilists is the fake barn scenario (introduced by Goldman 1976, who credits the example to Carl Ginet). In the fake barn scenario, Henry sees from the road the one genuine barn in an area filled with many convincing barn façades. Henry forms a true belief that there’s a barn in front of him; what’s more, the fact that he correctly believes there’s a barn in front of him seems to be causally explained by an exercise of his visual competence. Yet, intuitively, Henry does not know that there is a barn in front of him. So the exercise of an epistemic competence seems insufficient to rule out Gettierization of this sort. (See Lackey 2007 for a forceful statement of this point.)

One response to this challenge is to abandon the hope that virtue reliabilism on its own will solve every Gettier case. Pritchard (2012b) takes this line, opting for a view that combines elements of virtue epistemology with a safety requirement on knowledge (where, again, safety is roughly the requirement that the belief in question couldn’t easily have been held falsely). On Pritchard’s view, Henry’s belief that he sees a barn is unsafe, hence fails to count as knowledge. Whether this reply is adequate is a matter of debate; for relevant discussion, see Lackey (2006); Beddor and Pavese (2020).

A full assessment of these issues is beyond the scope of the current article. This much is clear: one feature that distinguishes virtue reliabilism from classical process reliabilism is its distinctive treatment of knowledge. However, this treatment gives rise to important questions—questions that remain very much an area of active research.

### 4.2 Syntheses of Reliabilism and Evidentialism

Process reliabilism and evidentialism have long been viewed as competitors, even antitheses of one another, with one of them (reliabilism) being a paradigm of externalism and the other (evidentialism) a paradigm of internalism. However, a number of epistemologists have recently questioned whether these views are necessarily opposed. For example, Comesaña (2010), Goldman (2011), Tang (2016), Goldberg (2018), Pettigrew (2021), Miller (2019) and Beddor (2021) have all developed reliabilist views that incorporate certain elements traditionally associated with evidentialism.

Let us start with Comesaña’s version of a hybrid view. Comesaña defends:

Evidentialist Reliabilism: S’s belief that p is justified if and only if:

1. S has some evidence e,
2. S’s belief that p is based on e, and either:
   1. e doesn’t include any beliefs, and the type producing a belief that p based on e is actually reliable, or
   2. e includes beliefs of S, all of these beliefs are justified, and the type producing a belief that p based on e is actually conditionally reliable.

There are a few respects in which this departs from standard versions of reliabilism. The most obvious is that it involves an evidential requirement. This is intended to serve two functions. First, it is designed to help with the clairvoyance problem. One crucial feature of Norman’s situation is that he has no evidence regarding his clairvoyance, or regarding the whereabouts of the President. This is at least one of the reasons (says Comesaña) why we have the intuition that Norman is not justified. Second, Comesaña suggests that by incorporating bodies of evidence into the belief-forming process, we can make headway on the generality problem. On this view, the relevant type for a given belief-forming process is always: producing a belief that such-and-such on the basis of a body of evidence e.

While incorporating the notion of evidence into a reliabilist theory carries potential advantages, it also raises issues of its own. As we’ve seen, traditional process reliabilists resisted defining “justification” in terms of evidence because they didn’t want an analysis that relied on any unreduced epistemic notions (Goldman 1979). Moreover, even those who do not share these reductive ambitions may well want some account of the notion of evidence possession that appears in clause 1 of Evidentialist Reliabilism. Comesaña suggests following Conee and Feldman (2004) in opting for a “mentalist” construal of our evidence, according to which our evidence ultimately consists in various mental states. While this is a start, there remains the question of which mental states constitute a subject’s evidence. Are they conscious experiences? States that are accessible to consciousness? Beliefs?

One possibility—not pursued directly by Comesaña—would be to appeal to reliabilist resources to provide an answer. Goldman (2011) makes a suggestion along these lines in developing his preferred synthesis of reliabilism and evidentialism. Goldman points out that while reliabilists have not traditionally appealed to the notion of evidence, the notions of mental or psychological states play an important role in reliabilist theories. After all, in addition to belief-forming processes, there are also states that serve as inputs to those processes. These include both doxastic states (beliefs, primarily) and various experiences (perceptual, memorial, etc.). Although reliabilists typically do not call these states “evidence”, there is no principled reason why they could not do so. A similar suggestion is made by Beddor (2021), framed in terms of reasons rather than evidence. On Beddor’s proposal, a given psychological state s constitutes a prima facie reason for an agent to hold a belief B just in case there is some reliable (or, in the case of inferential beliefs, conditionally reliable) process available to the agent that is disposed to produce B when fed s as input. (More on the applications of this sort of view below.) Perhaps, then, not only does a reliabilist-evidentialist hybrid help address problems for reliabilism, it also helps answer some pressing questions that have historically faced evidentialists.

A further question that arises for reliabilist-evidentialist hybrids concerns the role of historical features (or the lack thereof) in the theory of justifiedness. As noted in §1 above, traditional forms of reliabilism make the epistemic status of a belief at a time t depend not only on features of the agent at t, but also on facts about how the believer acquired the belief in question. Here’s an example that motivates this “historicist” dimension to traditional reliabilism (Goldman 1999). Last year Sally read about the health benefits of broccoli in a New York Times science section story. She thereby formed a justified belief in broccoli’s beneficial effects. She still retains that belief today but no longer recalls the evidence she had upon first reading the story. And she hasn’t encountered any further evidence in the interim, from any kind of source. Isn’t her belief in broccoli’s beneficial effects still justified? Intuitively, it is. And presumably this is because of her past acquisition of the belief. True, she also has a different kind of evidence, namely, her (justified) belief that whenever she seems to remember a (putative) fact it is usually true. But this is not her entire evidence. It is an important determinant of her belief’s justificatory status at t that she was justified in forming it originally on the basis of good evidence (of another kind). Had her original belief been based on very poor evidence, e.g., reading a similar story in an untrustworthy news source, so that the belief wasn’t justified from the start, her belief at time t would be unjustified, or at least much less justified. This indicates that the evidence she acquired originally still has some impact on the justificatory status of her belief at t.

Notably, Comesaña’s Evidentialist Reliabilism lacks any such historical condition on justifiedness. However, other versions of a hybrid theory do incorporate a historical condition, and emphasize it as a selling point of the view (Goldman 2011; Goldberg 2018; Beddor 2021). For example, Goldman (2011) advocates a “two-component” approach to justification, which makes room for a “degree of support” dimension of justification as well as a “proper causal generation” dimension. Here is one simple way of developing such a theory:

Two-Component Hybrid View: S’s belief that p is justified at t iff both:

1. S’s total evidence at t supports believing p, and
2. S’s belief that p is the result of a reliable belief-forming process.

Condition 1 incorporates the traditional evidentialist take on justification. (Though, as noted above, this condition could itself be given a reliabilist spin if we characterize evidence as the inputs to reliable processes.) By contrast, condition 2 captures the causal generation dimension of justification. Goldman suggests that a two-component view is nicely positioned to preserve the best of both approaches.

A final motivation for hybrid views is also worth noting. We saw earlier (§2.4) that some authors have argued that reliabilists have trouble accounting for cases of defeat: cases where an agent S reliably forms a belief that p at some initial time, and later receives evidence indicating that p is false. Intuitively, this further evidence defeats S’s justification for believing p, rendering S’s belief unjustified (even if it was previously justified). Recently, some authors have suggested that the best way of accommodating defeat in a reliabilist framework is to draw on evidentialist elements (broadly construed). For example, Miller (2019) defends a version of a Two-Component Hybrid View. On her view, cases of defeat are cases where condition 1 (the evidential support condition) is not satisfied.

Another approach to this issue is developed in Beddor (2021), who offers a synthesis of a reliabilist view with a “reasons first” account of justification of the sort developed by John Pollock (see, e.g., Pollock 1987, 1995). On Pollock’s view, a belief is justified as long as it is based on a chain of undefeated reasons that support it. One distinctive feature of Pollock’s framework is that he goes on to define defeaters in terms of reasons: a defeater for a belief B is a prima facie reason to believe that B is false (rebutting defeater) or a prima facie reason to believe that the agent’s beliefs do not support B (undercutting defeater). Beddor suggests that by supplementing this account with a reliabilist analysis of prima facie reasons (of the sort sketched above), we get a view that is faithful to the main motivations for reliabilism, while also providing a more satisfactory treatment of defeat.

Thus there are a number of different ways of developing a hybrid of reliabilism and evidentialism. Such hybrids hold considerable promise for overcoming some of the problems facing both reliabilism and evidentialism. In view of this promise, such hybrids are a potentially fruitful area for further research.

## Bibliography

• Alston, William P., 1980, “Level-Confusions in Epistemology”, Midwest Studies in Philosophy, 5: 135–150. doi:10.1111/j.1475-4975.1980.tb00401.x
• –––, 1995, “How to Think about Reliability”, Philosophical Topics, 23(1): 1–29. doi:10.5840/philtopics199523122
• Armstrong, David M., 1973, Belief, Truth and Knowledge, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511570827
• Ball, Brian and Michael Blome-Tillmann, 2013, “Indexical Reliabilism and the New Evil Demon”, Erkenntnis, 78(6): 1317–1336. doi:10.1007/s10670-012-9422-3
• Beddor, Bob, 2015, “Process Reliabilism’s Troubles with Defeat”, The Philosophical Quarterly, 65(259): 145–159. doi:10.1093/pq/pqu075
• –––, 2021, “Reasons for Reliabilism”, in Brown and Simion 2021, pp. 146–176.
• Beddor, Bob and Carlotta Pavese, 2020, “Modal Virtue Epistemology”, Philosophy and Phenomenological Research, 101(1): 61–79. doi:10.1111/phpr.12562
• Beebe, James R., 2004, “The Generality Problem, Statistical Relevance and the Tri-Level Hypothesis”, Noûs, 38(1): 177–195. doi:10.1111/j.1468-0068.2004.00467.x
• Bishop, Michael A., 2010, “Why the Generality Problem Is Everybody’s Problem”, Philosophical Studies, 151(2): 285–298. doi:10.1007/s11098-009-9445-z
• BonJour, Laurence, 1980, “Externalist Theories of Empirical Knowledge”, Midwest Studies in Philosophy, 5: 53–73. doi:10.1111/j.1475-4975.1980.tb00396.x
• Brier, Glenn W., 1950, “Verification of Forecasts Expressed in Terms of Probability”, Monthly Weather Review, 78(1): 1–3.
• Brown, Jessica, 2018, Fallibilism: Evidence and Knowledge, Oxford: Oxford University Press. doi:10.1093/oso/9780198801771.001.0001
• Brown, Jessica and Mona Simion (eds.), 2021, Reasons, Justification, and Defeat, Oxford: Oxford University Press.
• Chisholm, Roderick, 1966, Theory of Knowledge, Englewood Cliffs, NJ: Prentice Hall.
• Cohen, Stewart, 1984, “Justification and Truth”, Philosophical Studies, 46(3): 279–295. doi:10.1007/BF00372907
• –––, 2002, “Basic Knowledge and the Problem of Easy Knowledge”, Philosophy and Phenomenological Research, 65(2): 309–329. doi:10.1111/j.1933-1592.2002.tb00204.x
• Colaço, David, Wesley Buckwalter, Stephen Stich, and Edouard Machery, 2014, “Epistemic Intuitions in Fake-Barn Thought Experiments”, Episteme, 11(2): 199–212. doi:10.1017/epi.2014.7
• Comesaña, Juan, 2002, “The Diagonal and the Demon”, Philosophical Studies, 110(3): 249–266. doi:10.1023/A:1020656411534
• –––, 2006, “A Well-Founded Solution to the Generality Problem”, Philosophical Studies, 129(1): 27–47. doi:10.1007/s11098-005-3020-z
• –––, 2010, “Evidentialist Reliabilism”, Noûs, 44(4): 571–600. doi:10.1111/j.1468-0068.2010.00748.x
• Conee, Earl and Richard Feldman, 1998, “The Generality Problem for Reliabilism”, Philosophical Studies, 89(1): 1–29. doi:10.1023/A:1004243308503
• –––, 2004, Evidentialism: Essays in Epistemology, Oxford: Oxford University Press. doi:10.1093/0199253722.001.0001
• Constantin, Jan, 2020, “Replacement and Reasoning: A Reliabilist Account of Epistemic Defeat”, Synthese, 197(8): 3437–3457. doi:10.1007/s11229-018-01895-y
• DeRose, Keith, 1992, “Contextualism and Knowledge Attributions”, Philosophy and Phenomenological Research, 52(4): 913–929. doi:10.2307/2107917
• Douven, Igor and Christoph Kelp, 2013, “Proper Bootstrapping”, Synthese, 190(1): 171–185. doi:10.1007/s11229-012-0115-x
• Dretske, Fred I., 1970, “Epistemic Operators”, The Journal of Philosophy, 67(24): 1007–1023. doi:10.2307/2024710
• –––, 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
• Dunn, Jeff, 2015, “Reliability for Degrees of Belief”, Philosophical Studies, 172(7): 1929–1952. doi:10.1007/s11098-014-0380-2
• –––, forthcoming, “Reliable Group Belief”, Synthese, first online: 11 January 2019. doi:10.1007/s11229-018-02075-8
• Dutant, Julien and Erik J. Olsson, 2013, “Is There a Statistical Solution to the Generality Problem?”, Erkenntnis, 78(6): 1347–1365. doi:10.1007/s10670-012-9427-y
• Feldman, Richard, 1985, “Reliability and Justification”, The Monist, 68(2): 159–174. doi:10.5840/monist198568226
• –––, 2003, Epistemology, Upper Saddle River, NJ: Prentice-Hall.
• Fleisher, Will, 2019, “Method Coherence and Epistemic Circularity”, Erkenntnis, 84(2): 455–480. doi:10.1007/s10670-017-9967-2
• Foley, Richard, 1985, “What’s Wrong With Reliabilism?”, The Monist, 68(2): 188–202. doi:10.5840/monist198568220
• Fricker, Elizabeth, 2016, “Unreliable Testimony”, in McLaughlin and Kornblith 2016: 88–120. doi:10.1002/9781118609378.ch5
• Frise, Matthew, 2018, “The Reliability Problem for Reliabilism”, Philosophical Studies, 175(4): 923–945. doi:10.1007/s11098-017-0899-0
• Gettier, Edmund, 1963, “Is Justified True Belief Knowledge?”, Analysis, 23(7): 121–123.
• Gilbert, Margaret, 1989, On Social Facts, London: Routledge.
• Goldberg, Sanford C., 2010, Relying on Others: An Essay in Epistemology, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199593248.001.0001
• –––, 2012, “A Reliabilist Foundationalist Coherentism”, Erkenntnis, 77(2): 187–196. doi:10.1007/s10670-011-9347-2
• –––, 2018, To the Best of Our Knowledge: Social Expectations and Epistemic Normativity, Oxford: Oxford University Press. doi:10.1093/oso/9780198793670.001.0001
• Goldman, Alvin I., 1976, “Discrimination and Perceptual Knowledge”, The Journal of Philosophy, 73(20): 771–791. doi:10.2307/2025679
• –––, 1979 [2012], “What Is Justified Belief?” in George S. Pappas (ed.), Justification and Knowledge: New Studies in Epistemology, Dordrecht: Reidel, pp. 1–25; reprinted in his Reliabilism and Contemporary Epistemology, New York: Oxford University Press, 2012, pp. 29–49.
• –––, 1986, Epistemology and Cognition, Cambridge, MA: Harvard University Press.
• –––, 1992, “Epistemic Folkways and Scientific Epistemology”, in his Liaisons: Philosophy Meets the Cognitive and Social Sciences, Cambridge, MA: MIT Press, pp. 155–175.
• –––, 1999, “Internalism Exposed”, Journal of Philosophy, 96(6): 271–293. doi:10.2307/2564679
• –––, 2008, “Immediate Justification and Process Reliabilism”, in Quentin Smith (ed.), Epistemology: New Essays, New York: Oxford University Press, pp. 63–82.
• –––, 2009, “Internalism, Externalism, and the Architecture of Justification”, Journal of Philosophy, 106(6): 309–338. doi:10.5840/jphil2009106611
• –––, 2011, “Toward a Synthesis of Reliabilism and Evidentialism”, in Trent Dougherty (ed.), Evidentialism and Its Discontents, New York: Oxford University Press, pp. 254–290.
• –––, 2014, “Social Process Reliabilism: Solving Justification Problems in Collective Epistemology”, in Jennifer Lackey (ed.), Essays in Collective Epistemology, Oxford: Oxford University Press, pp. 11–41.
• –––, forthcoming, “A Different Kind of Solution to the Generality Problem for Process Reliabilism”, Philosophical Topics.
• Goodman, Jeremy and Bernhard Salow, 2018, “Taking a Chance on KK”, Philosophical Studies, 175(1): 183–196. doi:10.1007/s11098-017-0861-1
• Graham, Peter J., 2012, “Epistemic Entitlement”, Noûs, 46(3): 449–482. doi:10.1111/j.1468-0068.2010.00815.x
• –––, 2020, “Why Should Warrant Persist in Demon Worlds?”, in Peter J. Graham and Nikolaj J. L. L. Pedersen (eds.), Epistemic Entitlement, Oxford: Oxford University Press.
• Greco, Daniel, 2014, “Could KK Be OK?”, Journal of Philosophy, 111(4): 169–197. doi:10.5840/jphil2014111411
• –––, forthcoming, “Justifications and Excuses in Epistemology”, Noûs, first online: 8 August 2019. doi:10.1111/nous.12309
• Greco, John, 1999, “Agent Reliabilism”, Philosophical Perspectives, 13: 273–296. doi:10.1111/0029-4624.33.s13.13
• –––, 2009, “Knowledge and Success From Ability”, Philosophical Studies, 142(1): 17–26. doi:10.1007/s11098-008-9307-0
• –––, 2010, Achieving Knowledge: A Virtue-Theoretic Account of Epistemic Normativity, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511844645
• Jönsson, Martin L., 2013, “A Reliabilism Built on Cognitive Convergence: An Empirically Grounded Solution to the Generality Problem”, Episteme, 10(3): 241–268. doi:10.1017/epi.2013.27
• Kampa, Samuel, 2018, “A New Statistical Solution to the Generality Problem”, Episteme, 15(2): 228–244. doi:10.1017/epi.2017.6
• Kelp, Christoph and Mona Simion, 2017, “Criticism and Blame in Action and Assertion”, Journal of Philosophy, 114(2): 76–93. doi:10.5840/jphil201711426
• Kornblith, Hilary, 2009, “A Reliabilist Solution to the Problem of Promiscuous Bootstrapping”, Analysis, 69(2): 263–267. doi:10.1093/analys/anp012
• Lackey, Jennifer, 2006, “Pritchard’s Epistemic Luck”, The Philosophical Quarterly, 56(223): 284–289. doi:10.1111/j.1467-9213.2006.00443.x
• –––, 2007, “Why We Don’t Deserve Credit for Everything We Know”, Synthese, 158(3): 345–361. doi:10.1007/s11229-006-9044-x
• –––, 2016, “What Is Justified Group Belief?”, Philosophical Review, 125(3): 341–396. doi:10.1215/00318108-3516946
• Lehrer, Keith, 1965, “Knowledge, Truth and Evidence”, Analysis, 25(5): 168–175. doi:10.1093/analys/25.5.168
• –––, 1990, Theory of Knowledge, Boulder, CO: Westview Press.
• Leplin, Jarrett, 2007, “In Defense of Reliabilism”, Philosophical Studies, 134(1): 31–42. doi:10.1007/s11098-006-9018-3
• –––, 2009, A Theory of Epistemic Justification, (Philosophical Studies Series 112), Dordrecht: Springer Netherlands. doi:10.1007/978-1-4020-9567-2
• Lewis, David, 1996, “Elusive Knowledge”, Australasian Journal of Philosophy, 74(4): 549–567. doi:10.1080/00048409612347521
• List, Christian and Philip Pettit, 2011, Group Agency: The Possibility, Design, and Status of Corporate Agency, Oxford: Oxford University Press.
• Lyons, Jack C., 2009, Perception and Basic Beliefs: Zombies, Modules and the Problem of the External World, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195373578.001.0001
• –––, 2011, “Précis of Perception and Basic Beliefs”, Philosophical Studies, 153: 443–446.
• –––, 2013, “Should Reliabilists Be Worried About Demon Worlds?”, Philosophy and Phenomenological Research, 86(1): 1–40. doi:10.1111/j.1933-1592.2012.00614.x
• –––, 2019, “Algorithm and Parameters: Solving the Generality Problem for Reliabilism”, The Philosophical Review, 128(4): 463–509. doi:10.1215/00318108-7697876
• MacFarlane, John, 2005, “The Assessment-Sensitivity of Knowledge Attributions”, in Oxford Studies in Epistemology, Volume 1, Tamar Szabo Gendler and John Hawthorne (eds), Oxford: Oxford University Press, 197–233.
• Matheson, Jonathan D., 2015, “Is There a Well-Founded Solution to the Generality Problem?”, Philosophical Studies, 172(2): 459–468. doi:10.1007/s11098-014-0312-1
• McLaughlin, Brian P. and Hilary Kornblith (eds.), 2016, Goldman and His Critics, Malden, MA: Wiley-Blackwell. doi:10.1002/9781118609378
• Miller, Emelia, 2019, “Liars, Tigers, and Bearers of Bad News, Oh My!: Towards a Reasons Account of Defeat”, The Philosophical Quarterly, 69(274): 82–99. doi:10.1093/pq/pqy027
• Millikan, Ruth, 1984, Language, Thought, and Other Biological Categories, Cambridge, MA: MIT Press.
• Nagel, Jennifer, 2021, “Losing Knowledge by Thinking About Thinking”, in Brown and Simion 2021, pp. 69–92.
• Nozick, Robert, 1981, Philosophical Explanations, Cambridge, MA: Harvard University Press.
• Olsson, Erik, 2016, “A Naturalistic Approach to the Generality Problem”, in McLaughlin and Kornblith 2016: 178–199. doi:10.1002/9781118609378.ch8
• Pettigrew, Richard, 2021, “What Is Justified Credence?”, Episteme, 18(1): 16–30. doi:10.1017/epi.2018.50
• Plantinga, Alvin, 1993, Warrant: The Current Debate, Oxford: Oxford University Press. doi:10.1093/0195078624.001.0001
• Pollock, John L., 1984, “Reliability and Justified Belief”, Canadian Journal of Philosophy, 14(1): 103–114. doi:10.1080/00455091.1984.10716371
• –––, 1987, “Defeasible Reasoning”, Cognitive Science, 11(4): 481–518. doi:10.1207/s15516709cog1104_4
• –––, 1995, Cognitive Carpentry, Cambridge, MA: MIT Press.
• Pritchard, Duncan, 2005, Epistemic Luck, Oxford: Oxford University Press. doi:10.1093/019928038X.001.0001
• –––, 2012a, Epistemological Disjunctivism, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199557912.001.0001
• –––, 2012b, “Anti-Luck Virtue Epistemology”, Journal of Philosophy, 109(3): 247–279. doi:10.5840/jphil201210939
• Ramsey, F.P., 1931, “Knowledge”, in Foundations of Mathematics and Other Logical Essays, R.B. Braithwaite (ed.), London: Kegan Paul, pp. 126–128.
• Riggs, Wayne D., 2002, “Reliability and the Value of Knowledge”, Philosophy and Phenomenological Research, 64(1): 79–96. doi:10.1111/j.1933-1592.2002.tb00143.x
• Rosch, Eleanor, Carolyn B Mervis, Wayne D Gray, David M Johnson, and Penny Boyes-Braem, 1976, “Basic Objects in Natural Categories”, Cognitive Psychology, 8(3): 382–439. doi:10.1016/0010-0285(76)90013-X
• Schellenberg, Susanna, 2016, “Phenomenal Evidence and Factive Evidence”, Philosophical Studies, 173(4): 875–896. doi:10.1007/s11098-015-0528-8
• Sosa, Ernest, 1991, Knowledge in Perspective: Selected Essays in Epistemology, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511625299
• –––, 1993, “Proper Functionalism and Virtue Epistemology”, Noûs, 27(1): 51–65. doi:10.2307/2215895
• –––, 1999, “How to Defeat Opposition to Moore”, Philosophical Perspectives, 13: 137–149. doi:10.1111/0029-4624.33.s13.7
• –––, 2001, “Goldman’s Reliabilism and Virtue Epistemology”, Philosophical Topics, 29(1): 383–400. doi:10.5840/philtopics2001291/214
• –––, 2007, Apt Belief and Reflective Knowledge, Volume 1: A Virtue Epistemology, Oxford: Clarendon Press. doi:10.1093/acprof:oso/9780199297023.001.0001
• –––, 2010, “How Competence Matters in Epistemology”, Philosophical Perspectives, 24: 465–476.
• –––, 2015, Judgment and Agency, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198719694.001.0001
• Stalnaker, Robert C., 1999, Context and Content: Essays on Intentionality in Speech and Thought, Oxford: Oxford University Press. doi:10.1093/0198237073.001.0001
• Stine, Gail C., 1976, “Skepticism, Relevant Alternatives, and Deductive Closure”, Philosophical Studies, 29(4): 249–261. doi:10.1007/BF00411885
• Tang, Weng Hong, 2016, “Reliability Theories of Justified Credence”, Mind, 125(497): 63–94. doi:10.1093/mind/fzv199
• Titelbaum, Michael G., 2010, “Tell Me You Love Me: Bootstrapping, Externalism, and No-Lose Epistemology”, Philosophical Studies, 149(1): 119–134. doi:10.1007/s11098-010-9541-0
• Tolly, Jeffrey, 2017, “A Defense of Parrying Responses to the Generality Problem”, Philosophical Studies, 174(8): 1935–1957. doi:10.1007/s11098-016-0776-2
• –––, 2019, “Does Reliabilism Have a Temporality Problem?”, Philosophical Studies, 176(8): 2203–2220. doi:10.1007/s11098-018-1122-7
• Turri, John, 2011, “Manifest Failure: The Gettier Problem Solved”, Philosophers’ Imprint, 11: art. 8.
• Unger, Peter, 1968, “An Analysis of Factual Knowledge”, The Journal of Philosophy, 65(6): 157–170. doi:10.2307/2024203
• van Fraassen, Bas C., 1984, “Belief and the Will”, The Journal of Philosophy, 81(5): 235–256. doi:10.2307/2026388
• Van Cleve, James, 2003, “Is Knowledge Easy—or Impossible? Externalism as the Only Alternative to Skepticism”, in Steven Luper (ed.), The Skeptics: Contemporary Essays, Aldershot: Ashgate, pp. 45–59.
• Vogel, Jonathan, 2000, “Reliabilism Leveled”, The Journal of Philosophy, 97(11): 602–623. doi:10.2307/2678454
• Weisberg, Jonathan, 2010, “Bootstrapping in General”, Philosophy and Phenomenological Research, 81(3): 525–548. doi:10.1111/j.1933-1592.2010.00448.x
• –––, 2012, “The Bootstrapping Problem”, Philosophy Compass, 7(9): 597–610. doi:10.1111/j.1747-9991.2012.00504.x
• Williams, Michael, 2016, “Internalism, Reliabilism and Deontology”, in McLaughlin and Kornblith 2016: 1–21. doi:10.1002/9781118609378.ch1
• Williamson, Timothy, 2000, Knowledge and Its Limits, Oxford: Oxford University Press. doi:10.1093/019925656X.001.0001
• –––, forthcoming, “Justification, Excuses, and Sceptical Scenarios”, in The New Evil Demon Problem, Fabian Dorsch and Julien Dutant (eds.), Oxford: Oxford University Press.
• Wright, Larry, 1973, “Functions”, The Philosophical Review, 82(2): 139–168.
