Stanford Encyclopedia of Philosophy
This is a file in the archives of the Stanford Encyclopedia of Philosophy.

Coherentist Theories of Epistemic Justification

First published Tue Nov 11, 2003; substantive revision Thu Nov 15, 2012

According to the coherence theory of justification, also known as coherentism, a belief or set of beliefs is justified, or justifiably held, just in case the belief coheres with a set of beliefs, the set forms a coherent system or some variation on these themes. The coherence theory of justification should be distinguished from the coherence theory of truth. The former is a theory of what it means for a belief or a set of beliefs to be justified, or for a subject to be justified in holding the belief or set of beliefs. The latter is a theory of what it means for a belief or proposition to be true. Modern coherence theorists, in contrast to some earlier writers in the British idealist tradition, typically subscribe to a coherence theory of justification without advocating a coherence theory of truth. Rather, they either favor a correspondence theory of truth or take the notion of truth for granted, at least for the purposes of their epistemological investigations. This does not prevent many authors from claiming that coherence justification is an indication or “criterion” of truth.

1. Coherentism Versus Foundationalism

A central problem in epistemology is when we are justified in holding a proposition to be true. This is a problem because it is not at all evident how epistemic justification should be understood, and classical accounts of that notion have turned out to be severely problematic. Descartes thought that a person is justified in holding something to be true just in case the proposition in question can be derived from impeccable first principles characterized by their presenting themselves as self-evident to the subject in question. But, as is often argued, little of what we take ourselves to justifiably believe satisfies these austere conditions: many of our apparently justified beliefs, it is commonly thought, are neither based on self-evident truths nor derivable in a strict logical sense from other things we believe. Thus, the rationalist picture of justification faces severe skeptical challenges. Similar problems hound empiricist attempts to ground all our knowledge in the allegedly indubitable data of the senses. Depending on how they are understood, sense data are either not indubitable or else not informative enough to justify a sufficient portion of our purported knowledge.

The exact characterization of foundationalism is a somewhat contentious issue. There is another form of foundationalism according to which some beliefs have a non-doxastic source of epistemic support that requires no support of its own. This support can be defeasible, and it can require supplementation to be strong enough for knowledge. Such non-doxastic support would terminate the regress of justification, and to do so it need not appeal to self-evidence, indubitability or certainty. Foundationalist views of this sort vary on the source of the non-doxastic support, how strong the support is on its own, and what role, if any, coherence plays in justification. Some critics of this position have questioned the intelligibility of the non-doxastic support relation. Thus, Davidson (1986) complains that its advocates have been unable to explain the relation between experience and belief that allows the first to justify the second. This is an ongoing debate whose detailed coverage is outside the scope of the present article.

The difficulties pertaining to both rationalism and empiricism have led many epistemologists to think that there must be something fundamentally wrong with the way in which the debate has been framed, prompting their rejection of the foundationalist justificatory structure underlying rationalism and empiricism alike. Rather than conceiving the structure of our knowledge on the model of Euclidean geometry, with its basic axioms and derived theorems, these epistemologists favor a holistic picture of justification which does not distinguish between basic or foundational and non-basic or derived beliefs, treating rather all our beliefs as equal members of a “web of belief” (Quine and Ullian, 1970). Our purported knowledge, on this view, is more like a raft, which may have to be rebuilt on the open sea, to use Neurath's famous metaphor, than like a pyramid standing on its apex (Neurath 1983/1932, Sosa 1980).

Of course the mere rejection of foundationalism is not itself an alternative theory because it leaves us with no positive account of justification. A more substantial contrasting proposal is that what justifies our beliefs is ultimately the way in which they hang together or dovetail so as to produce a coherent set. As Davidson puts it, “[w]hat distinguishes a coherence theory is simply the claim that nothing can count as a reason for a belief except another belief” (Davidson, 1986). The fact that our beliefs cohere can establish their truth, even though each individual belief may lack justification entirely if considered in splendid isolation, or so it is thought. Following C. I. Lewis (1946), some proponents think of this situation as analogous to how agreeing testimonies in court can lead to a verdict although each testimony by itself would be insufficient for that purpose.

There is an obvious objection that any coherence theory of justification or knowledge must immediately face. It is called the isolation objection: how can the mere fact that a system is coherent, if coherence is understood as a purely system-internal matter, provide any guidance whatsoever to truth and reality? Since the theory does not assign any essential role to experience, there is little reason to think that a coherent system of belief will accurately reflect the external world. A variation on this theme is presented by the equally notorious alternative systems objection. For each coherent system of beliefs there exist, conceivably, other systems that are equally coherent yet incompatible with the first system. If coherence is sufficient for justification, then all these incompatible systems will be justified. But this observation, of course, thoroughly undermines any claim that coherence is indicative of truth.

As we shall see, most, if not all, influential coherence theorists try to avoid these traditional objections by assigning some beliefs that are close to experience a special role, whether they are called “supposed facts asserted” (Lewis, 1946), “truth-candidates” (Rescher, 1973), “cognitively spontaneous beliefs” (BonJour, 1985) or something else. Depending on how this special role is construed, these theories may be more fruitfully classified as versions of weak foundationalism than as pure coherence theories. An advocate of weak foundationalism typically holds that while coherence is incapable of justifying beliefs from scratch, it can provide justification for beliefs that already have some initial, perhaps minuscule, degree of warrant, e.g., for observational beliefs.

A fair number of distinguished contemporary philosophers have declared that they advocate a coherence theory of justification. Apart from this superficial fact, these theories often address some rather diverse issues loosely united by the fact that they in one way or the other take a holistic approach to the justification of beliefs. Here are some of the problems and questions that have prompted coherentist inquiry (cf. Bender, 1989):

The fact that these separate, though related, issues are sometimes discussed in one swoop presents a challenge to the reader of the relevant literature.

To get a firmer grasp of the coherence theory and the way in which it is invoked, it is helpful to introduce it, following tradition, as a response to the regress problem. This will also serve to illustrate some challenges that a coherence theory faces. We will then turn to the concept of coherence itself as that concept is traditionally conceived. Unfortunately, not all prominent authors associated with the coherence theory use the term coherence in this traditional sense, and the section that follows is devoted to such non-standard coherence theories. Arguably the most systematic and prolific discussion of the coherence theory of justification has focused on the relationship between coherence and probability. The rest of the article will be devoted to this development, which took off in the mid-1990s, inspired by seminal work by C. I. Lewis (1946). It has given us precise and sophisticated definitions of coherence as well as detailed studies of the relationship between coherence and truth (probability), culminating in some potentially disturbing impossibility results that cast doubt on the possibility of defining coherence in a way that makes it indicative of truth. What these results entail, more precisely, and how the worries they raise can be addressed will be the topic of our final discussion.

2. The Regress Problem

On the traditional justified true belief account of knowledge, a person cannot be said to know that a proposition p is true without having good reasons for believing that p is true. If Lucy knows that she will pass tomorrow's exam, she must have good reasons for thinking that this is so. Consider now Lucy's reasons. They will presumably consist of other beliefs she has, e.g., beliefs about how well she did earlier, about how well she has prepared, and so on. For Lucy to know that she will pass the exam, these other beliefs, upon which the first belief rests, must also be things that Lucy knows. Knowledge, after all, cannot be based on something less than knowledge, i.e., on ignorance (cf. Rescher 1979, 76). Since the reasons are themselves things that Lucy knows, those reasons must in turn be based on reasons, and so on. Thus, any knowledge claim requires a never-ending chain, or “regress”, of reasons for reasons. This seems strange, or even impossible, because it involves reference to an infinite number of beliefs. But most of us think that knowledge is possible.

What is the coherentist's response to the regress? The coherentist can be understood as proposing that nothing prevents the regress from proceeding in a circle. Thus, A can be a reason for B which is a reason for C which is a reason for A. If this is acceptable, then what we have is a chain of reasons that is never-ending but which does not involve an infinite number of beliefs. It is never-ending in the sense that for each belief in the chain there is a reason for that belief also in the chain. Yet there is an immediate problem with this response due to the fact that justificatory circles are usually thought to be vicious ones. If someone claims C and is asked why she believes it, she may reply that her reason is B. If asked why she believes B, she may assert A. But if prompted to justify her belief in A, she is not allowed to refer back to C which in the present justificatory context is still in doubt. If she did justify A in terms of C nonetheless, her move would lack any justificatory force whatsoever.

The coherentist may respond by denying that she ever intended to suggest that circular reasoning is a legitimate dialectical strategy. What she objects to is rather the assumption that justification should at all proceed in a linear fashion whereby reasons are given for reasons, and so on. This assumption of linearity presupposes that what is, in a primary sense, justified are individual beliefs. This, says the coherentist, is simply wrong: it is not individual beliefs that are primarily justified, but entire belief systems. Particular beliefs can also be justified but only in a secondary or derived sense, if they form part of a justified belief system. This is a coherence approach because what makes a belief system justified, on this view, is precisely its coherence. A belief system is justified if it is coherent to a sufficiently high degree. This, in essence, is Laurence BonJour's 1985 solution to the regress problem.

This looks much more promising than the circularity theory. If epistemic justification is holistic in this sense, then a central assumption behind the regress is indeed false, and so the regress never gets started. Even so, this holistic approach raises many new questions to which the coherentist will need to respond. First of all, we need to get clearer on what the concept of coherence involves as that concept is applied to a belief system. This is the topic of the next section. Second, the proposal that a singular belief is justified merely in virtue of being a member of a justified totality can be questioned because, plausibly, a belief can be a member of a sufficiently coherent system without in any way adding to the coherence of that system. Surely, a belief will have to contribute to the coherence of the system in order to become justified by that system. A particular belief needs, in other words, to cohere with the system of which it is a member if that belief is to be considered justified. We will turn to this issue in section 4, in connection with Keith Lehrer's epistemological work. Finally, we have seen that most coherence theories assign a special role to some beliefs that are close to experience in order to avoid the isolation and alternative systems objections. This fact raises the question of what status those special beliefs have. Do they have to have some credibility in themselves or can they be totally lacking therein? A particularly clear debate on this topic is the Lewis-BonJour controversy over the possibility of justification by coherence from scratch, which we will examine more closely in section 5.

3. Traditional Accounts of Coherence

By a traditional account of coherence we will mean one which construes coherence as a relation of mutual support, consistency or agreement among given data (propositions, beliefs, memories, testimonies etc.). Early characterizations were given by, among others, Brand Blanshard (1939) and A. C. Ewing (1934). According to Ewing, a coherent set is characterized partly by consistency and partly by the property that every belief in the set follows logically from the others taken together. Thus, a set such as {A1, A2, A1&A2}, if consistent, is highly coherent on this view because each element follows by logical deduction from the rest in concert.

While Ewing's definition is admirably precise, it defines coherence too narrowly. Few sets that occur naturally in everyday life satisfy the austere second part of his definition: the requirement that each element follow logically from the rest when combined. Consider, for instance, the set consisting of propositions A, B and C, where

A = “John was at the crime scene at the time of the robbery”
B = “John owns a gun of the type used by the robber”
C = “John deposited a large sum of money in his bank account the next day”

This set is intuitively coherent, and yet it fails to satisfy Ewing's second condition. The proposition A, for instance, does not follow logically from B and C taken together: that John owns a gun of the relevant type and deposited money in his bank the day after does not logically imply that he was at the crime scene at the time of the crime. Similarly, neither B nor C follows from the rest of the propositions in the set by logic alone.

C. I. Lewis's definition of coherence, or “congruence” to use his term, can be seen as a refinement and improvement of Ewing's basic idea. As Lewis defines the term, a set of “supposed facts asserted” is coherent (congruent) just in case every element in the set is supported by all the other elements taken together, whereby “support” is understood not in logical terms but in a weak probabilistic sense. In other words, P supports Q if and only if the probability of Q is raised on the assumption that P is true. As is readily appreciated, Lewis's definition is less restrictive than Ewing's: more sets will turn out to be coherent on the former than on the latter. (There are some uninteresting limiting cases for which this is not true. For instance, a set of tautologies will be coherent in Ewing's but not in Lewis's sense.)

Let us return to the example with John. The proposition A, while not logically entailed by B and C, is under normal circumstances nevertheless supported by those propositions taken together. If we assume that John owns the relevant type of gun and deposited a large sum the next day, then this should raise the probability that John did it and thereby also raise the probability that he was at the crime scene when the robbery took place. Similarly, one could hold that each of B and C is supported, in the weak probabilistic sense, by the other elements of the set. If so, this set is not only coherent in an intuitive sense but also coherent according to Lewis's definition. Against Lewis's proposal one could hold that it seems arbitrary to focus merely on the support single elements of a set receive from the rest of the set (cf. Bovens and Olsson 2000). Why not consider the support any subset, not just singletons, receives from the rest?

Another influential proposal concerning how to define coherence originates from Laurence BonJour (1985), whose account is considerably more complex than earlier suggestions. Where Ewing and Lewis proposed to define coherence in terms of one single concept—logical consequence and probability, respectively—BonJour thinks that coherence is a concept with a multitude of different aspects corresponding to the following “coherence criteria” (97–99):

  1. A system of beliefs is coherent only if it is logically consistent.
  2. A system of beliefs is coherent in proportion to its degree of probabilistic consistency.
  3. The coherence of a system of beliefs is increased by the presence of inferential connections between its component beliefs and increased in proportion to the number and strength of such connections.
  4. The coherence of a system of beliefs is diminished to the extent to which it is divided into subsystems of beliefs which are relatively unconnected to each other by inferential connections.
  5. The coherence of a system of beliefs is decreased in proportion to the presence of unexplained anomalies in the believed content of the system.

A difficulty pertaining to theories that construe coherence as a multidimensional concept is to specify how the different dimensions are to be amalgamated so as to produce an overall coherence judgment. It could well happen that one system S is more coherent than another system T in one respect, whereas T is more coherent than S in another. Perhaps S contains more inferential connections than T, but T is less anomalous than S. If so, which system is more coherent in an overall sense? BonJour's theory is largely silent on this point.

BonJour's account also raises another general issue. The third criterion stipulates that the degree of coherence increases with the number of inferential connections between different parts of the system. Now as a system grows larger, the probability that there will be relatively many inferentially connected beliefs increases simply because there are more possible connections to be made. Hence, one could expect there to be a positive correlation between the size of a system and the number of inferential connections between the beliefs contained in it. BonJour's third criterion, taken at face value, therefore entails that a bigger system will generally have a higher degree of coherence due to its sheer size. But this is at least not obviously correct. A possible modified coherence criterion could state that what is correlated with higher coherence is not the number of inferential connections but rather the inferential density of the system, where the latter is obtained by dividing the number of inferential connections by the number of beliefs in the system.
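The contrast between raw connection count and inferential density can be illustrated with a small numerical sketch. The systems, counts and the helper function below are invented for illustration and are not part of BonJour's own account:

```python
# Hypothetical illustration: inferential density is the number of
# inferential connections divided by the number of beliefs in the system.
def inferential_density(num_beliefs, connections):
    """connections: list of pairs (i, j) of inferentially linked beliefs."""
    return len(connections) / num_beliefs

# A small, tightly connected system: 4 beliefs, 4 connections.
small = inferential_density(4, [(0, 1), (1, 2), (2, 3), (3, 0)])

# A larger system with more connections in absolute terms (6 > 4),
# but fewer connections per belief.
large = inferential_density(10, [(i, i + 1) for i in range(6)])

print(small, large)  # 1.0 0.6
assert small > large  # density, unlike raw count, does not reward sheer size
```

On the raw-count reading of the third criterion the larger system would score higher; on the density reading the smaller, more tightly knit system does.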

4. Other Accounts of Coherence

We will return, in section 6, to the problem of defining the traditional concept of coherence while addressing some of the concerns that we have raised, e.g., concerning the relationship between coherence and system size. The point of departure for the present discussion, however, is the observation that several prominent self-proclaimed coherentists construe the central concept, and to some extent also its role in philosophical inquiry, in ways that depart somewhat from the traditional view. Among them we find Nicholas Rescher, Keith Lehrer and Paul Thagard.

Central in Rescher's account, as laid out in Rescher (1973), his most influential book on the subject, is the notion of a truth-candidate. A proposition is a truth-candidate if it is potentially true, so that there is something that speaks in its favor. Rescher's truth-candidates are obviously related to Lewis's “supposed facts asserted”. In both cases, the propositions of interest are prima facie rather than bona fide truths. Although Rescher's 1973 book is entitled A Coherence Theory of Truth, the purpose of Rescher's investigation is not to investigate the possibility of defining truth in terms of coherence but to find a truth criterion, which he understands to be a systematic procedure for selecting from a set of conflicting and even contradictory truth-candidates those elements which it is rational to accept as bona fide truths. His solution amounts to first identifying the maximal consistent subsets of the original set, i.e., the subsets that are consistent but would become inconsistent if extended by further elements of the original set, and then choosing the most “plausible” among these subsets. Plausibility is spelled out in a way that reveals no obvious relation to the traditional concept of coherence. While the traditional concept of coherence plays a role in the philosophical underpinning of Rescher's theory, it does not figure essentially in the final product. In a later book, Rescher develops a more traditional “system-theoretic” view on coherence (Rescher 1979).

Keith Lehrer employs the concept of coherence in his definition of justification, which in turn is a chief ingredient in his complex post-Gettier definition of knowledge. According to Lehrer, a person is justified in accepting a proposition just in case that proposition coheres with the relevant part of her cognitive system. This is the relational concept of coherence alluded to earlier. In Lehrer (1990), the relevant part is the “acceptance system” of the person, consisting of reports to the effect that the subject accepts this and that. Thus, “S accepts that A” would initially be in S's acceptance system, but not A itself. In later works, Lehrer has emphasized the importance of coherence with a more complex cognitive entity which he calls the “evaluation system” (e.g., Lehrer 2000 and 2003).

The starting point of Lehrer's account of coherence is the fact that we can think of all sorts of objections an imaginative critic may raise to what a person accepts. These objections might be directly incompatible with what that person accepts or they might threaten to undermine her reliability in making assessments of the kind in question. For instance, a critic might object to her claim that she sees a tree by suggesting that she is merely hallucinating. That would be an example of the first sort of objection. An example of the second sort would be a case in which the critic replies that the person cannot tell whether she is hallucinating or not. Coherence, and (personal) justification, results when all objections have been met.

Lehrer's concept of coherence does not seem to have much in common with the traditional concept of mutual support. If one takes it as essential that such a theory make use of a concept of systematic or global coherence, then Lehrer's theory is not a coherence theory in the traditional sense because, in Lehrer's view, “[c]oherence … is not a global feature of the system” (1997, 31), nor does it depend on global features of the system (31). A critic may wonder what reasons there are for calling the relation of meeting objections to a given claim relative to an evaluation system a relation of coherence. Lehrer's answer seems to be that it is a relation of “fitting together with”, rather than, say, a relation of “being inferable from”: “[i]f it is more reasonable for me to accept one of [several] conflicting claims than the other on the basis of my acceptance system, then that claim fits better or coheres better with my acceptance system” (116), and so “[a] belief may be completely justified for a person because of some relation of the belief to a system to which it belongs, the way it coheres with the system, just as a nose may be beautiful because of some relation of the nose to a face, the way it fits with the face” (88). Olsson (1999) has objected to this view by pointing out that it is difficult to understand what it means for a belief to fit into a system unless the former does so in virtue of adding to the global coherence of the latter.

Paul Thagard's theory is clearly influenced by the traditional concept of coherence, but the specific way in which the theory is developed gives it a somewhat non-traditional flavor, in particular considering its strong emphasis on explanatory relations between beliefs. Like Rescher, Thagard takes the fundamental problem to be that of singling out as acceptable certain elements of a given set of typically conflicting claims that have the status of prima facie truths. However, where Rescher proposes to base the choice of acceptable truths on considerations of plausibility, Thagard suggests the use of explanatory coherence for that purpose.

According to Thagard, prima facie truths can cohere (fit together) or “incohere” (resist fitting together). The first type of relation includes relations of explanation and deduction, whereas the second type includes various types of incompatibility, such as logical inconsistency. If two propositions cohere, this gives rise to a positive constraint. If they incohere, the result is a negative constraint. A positive constraint between two propositions can be satisfied either by accepting both or by rejecting both. By contrast, satisfying a negative constraint means accepting one proposition while rejecting the other. A “coherence problem”, as Thagard sees it, is one of dividing the initial set of propositions into those that are accepted and those that are rejected in such a way that most constraints are satisfied. Thagard presents several different computational models for solving coherence problems, including a model based on neural networks.
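A coherence problem in Thagard's sense can be sketched as a small constraint-satisfaction search. The propositions, constraints and brute-force strategy below are illustrative assumptions on our part, not Thagard's own examples or his computational models:

```python
from itertools import product

# Toy coherence problem: partition propositions into accepted/rejected
# so as to maximize the number of satisfied constraints.
propositions = ["H1", "H2", "E1", "E2"]
positive = [("H1", "E1"), ("H1", "E2")]  # cohering pairs: accept or reject together
negative = [("H1", "H2")]                # incohering pair: accept exactly one

def satisfied(assignment):
    """Count constraints satisfied by an accept/reject assignment."""
    score = 0
    for p, q in positive:
        if assignment[p] == assignment[q]:
            score += 1
    for p, q in negative:
        if assignment[p] != assignment[q]:
            score += 1
    return score

# Exhaustive search over all 2^n assignments; feasible only for small sets,
# which is why Thagard explores neural-network and other approximate models.
best = max(
    (dict(zip(propositions, values))
     for values in product([True, False], repeat=len(propositions))),
    key=satisfied,
)
print(sorted(p for p, accepted in best.items() if accepted))
# one optimal solution: accept H1, E1 and E2, and reject H2
```

Note that an assignment rejecting everything the competing hypothesis H2 excludes can satisfy just as many constraints; real models break such ties with constraint weights and data priority.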

How acceptability depends on coherence, more precisely, is codified in Thagard's “principles of explanatory coherence” (Thagard, 2000):

Principle E1 (Symmetry)
Explanatory coherence is a symmetric relation, unlike, say, conditional probability. That is, if a proposition A coheres with a proposition B, then B coheres with A to the same degree.
Principle E2 (Explanation)
  1. A hypothesis coheres with what it explains, which can either be evidence or another hypothesis.
  2. Hypotheses that together explain some other proposition cohere with each other.
  3. The more hypotheses it takes to explain something, the lower the degree of coherence.
Principle E3 (Analogy)
Similar hypotheses that explain similar pieces of evidence cohere.
Principle E4 (Data Priority)
Propositions that describe the results of observation have a degree of acceptability on their own.
Principle E5 (Contradiction)
Contradictory propositions are incoherent with each other.
Principle E6 (Competition)
If A and B both explain a proposition, and if A and B are not explanatorily connected, then A and B are incoherent with each other (A and B are explanatorily connected if one explains the other or if together they explain something).
Principle E7 (Acceptance)
The acceptability of a proposition in a system of propositions depends on its coherence with them.

Principle E4 (Data Priority) reveals that Thagard's theory is not a pure coherence theory, as it gives some epistemic priority to observational beliefs, making it rather a form of weak foundationalism. Moreover, Thagard's theory is based on binary coherence/incoherence relations, i.e., relations holding between two propositions. His basic theory does not handle incompatibilities that involve, in an essential way, more than two propositions. But incompatibilities of that sort may very well arise, as exemplified by the three propositions “Jane is taller than Martha”, “Martha is taller than Karen” and “Karen is taller than Jane”. Nevertheless, Thagard reports the existence of computational methods for converting constraint satisfaction problems whose constraints involve more than two elements into problems that involve only binary constraints, concluding that his characterization of coherence “suffices in principle for dealing with more complex coherence problems with nonbinary constraints” (Thagard 2000, 19). Several other authors have defended coherence theories that emphasize the importance of explanatory relations, e.g., William Lycan. See Lycan (1988) and, for a recent statement, Lycan (2012).

5. Justification by Coherence from Scratch

Arguably the most significant development of the coherence theory in recent years has been the revival of C. I. Lewis's work and the research program he inspired by translating parts of the coherence theory into the language of probability. This translation has made it possible to define concepts and prove results with mathematical precision. It has also led to increased transferability of concepts and results across fields, e.g., between coherence theory and confirmation theory as it is studied in philosophy of science. As a result, the study of coherence has developed from a fairly isolated and somewhat obscure part of epistemology into an interdisciplinary research program with connections to philosophy of science, cognitive psychology, artificial intelligence and philosophy of law. The rest of this article will be devoted to this recent transformation of the subject.

To introduce Lewis's view on the role of coherence, consider the following famous passage on “relatively unreliable witnesses who independently tell the same story” from his 1946 book:

For any one of these reports, taken singly, the extent to which it confirms what is reported may be slight. And antecedently, the probability of what is reported may also be small. But congruence of the reports establishes a high probability of what they agree upon, by principles of probability determination which are familiar: on any other hypothesis than that of truth-telling, this agreement is highly unlikely; the story any one false witness might tell being one out of so very large a number of equally possible choices. (It is comparable to the improbability that successive drawings of one marble out of a very large number will each result in the one white marble in the lot.) And the one hypothesis which itself is congruent with this agreement becomes thereby commensurably well established. (246)

While Lewis allows that individual reports need not be very credible considered in isolation for coherence to have a positive effect, he is firmly committed to the view that their credibility must not be nil. He writes, in his discussion of reports from memory, that “[i]f … there were no initial presumption attaching to the mnemically presented … then no extent of congruity with other such items would give rise to any eventual credibility” (357). In other words, if a belief system is completely isolated from the world, then no justification will ensue from observing the coherence of its elements. Thus, Lewis is advocating weak foundationalism rather than a pure coherence theory.

In apparent agreement with Lewis, Laurence BonJour (1985, 148) writes: “[a]s long as we are confident that the reports of the various witnesses are genuinely independent of each other, a high enough degree of coherence among them will eventually dictate the hypothesis of truth telling as the only available explanation of their agreement.” However, BonJour proceeds to reject Lewis's point about the need for positive antecedent credibility: “[w]hat Lewis does not see, however, is that his own [witness] example shows quite convincingly that no antecedent degree of warrant or credibility is required” (148). BonJour is here apparently denouncing Lewis's claim that coherence will not have any confidence boosting power unless the sources are initially somewhat credible. BonJour is proposing that coherence can play this role even if there is no antecedent degree of warrant, so long as the witnesses are delivering their reports independently.

Several authors have objected to this claim of BonJour's, arguing that coherence does not have any effect on the probability of the report contents if the independent reports lack individual credibility. The first argument to that effect was given by Michael Huemer (1997). A more general proof in the same vein is presented in Olsson (2002). What follows is a sketch of the latter argument for the special case of two testimonies, couched essentially in the terminology of Huemer (2011). In the following, all probabilities are assumed to lie strictly between 0 and 1.

Let E1 be the proposition that the first witness reports that A, and let E2 be the proposition that the second witness reports that A. Consider the following conditions:

Conditional Independence
P(E2 | E1, A) = P(E2 | A)
P(E2 | E1, ¬A) = P(E2 | ¬A)

Nonfoundationalism
P(A | E1) = P(A)
P(A | E2) = P(A)

Coherence Justification
P(A | E1, E2) > P(A)

Conditional Independence is intended to capture the idea that the testimonies are independent in the sense that there is no direct influence between them: the probability of a testimony is influenced only by the fact it reports on, so that once that fact is given, it “screens off” any probabilistic influence between the individual testimonies, making them irrelevant to each other. Nonfoundationalism states that neither testimony confers any justification upon A by itself: the mere fact that one single witness has testified that A has no effect on the probability of A. Finally, Coherence Justification states that the testimonies, when combined, do provide justification for A.

The debate between Lewis and BonJour can be reconstructed as a debate over the joint consistency of these three conditions. BonJour is claiming that the conditions are jointly consistent, and that Coherence Justification follows from Conditional Independence even in the context of Nonfoundationalism, whereas Lewis is rejecting these claims. Olsson (2002) established that if the dispute is couched in these terms, then Lewis was provably right. From Conditional Independence and Nonfoundationalism it follows that

P(A | E1, E2) = P(A)

so that combining collectively independent but individually useless testimonies, however coherent, fails to give rise to anything useful. (As noted in Olsson, 2005, section 3.5, the matter is somewhat complicated by the fact that Lewis adopted a notion of independence that is weaker than Conditional Independence. Ironically, Lewis's weaker notion turns out to be compatible with the combination of Nonfoundationalism and Coherence Justification.)
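Olsson's conclusion can be checked numerically. The following sketch is our own illustration with arbitrarily chosen numbers: Nonfoundationalism (P(A | Ei) = P(A)) forces each report to be equally probable whether A is true or false, and Conditional Independence lets the joint distribution factorize given A; conditioning on both reports then leaves the probability of A exactly where it started.

```python
# A sketch of our own with arbitrary numbers: a joint distribution over
# (A, E1, E2) satisfying Conditional Independence and Nonfoundationalism.
# Nonfoundationalism (P(A | Ei) = P(A)) forces P(Ei | A) = P(Ei | ¬A),
# so each report gets a single probability regardless of A.
p_a = 0.3                # prior P(A)
p_e = {1: 0.6, 2: 0.7}   # P(Ei | A) = P(Ei | ¬A): individually useless reports

def joint(a, e1, e2):
    """P(A=a, E1=e1, E2=e2); factorizes given A by Conditional Independence."""
    pa = p_a if a else 1 - p_a
    pe1 = p_e[1] if e1 else 1 - p_e[1]
    pe2 = p_e[2] if e2 else 1 - p_e[2]
    return pa * pe1 * pe2

def cond_p_a(e1, e2):
    """P(A | E1=e1, E2=e2) by enumeration."""
    num = joint(True, e1, e2)
    return num / (num + joint(False, e1, e2))

print(cond_p_a(True, True))  # ≈ 0.3: two agreeing but useless reports give no boost
```

However the two report probabilities are chosen, the posterior always collapses back to the prior, which is the content of the theorem.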

Nonfoundationalism should be contrasted with the following condition:

Weak Foundationalism
P(A | E1) > P(A)
P(A | E2) > P(A)

Weak Foundationalism does not by itself entail Coherence Justification: it is part of the folklore of probability theory that even if two pieces of evidence each support a given conclusion, that support may disappear, or even turn into disconfirmation, if they are combined. However, in the context of Conditional Independence, Weak Foundationalism does imply Coherence Justification. Indeed, the combined testimonies will, in this case, confer more support upon the conclusion than the testimonies did individually. As recently confirmed by James Van Cleve (2011), the conclusions supported by these considerations are that coherence can boost justification or credibility that is already there without being able to create such justification or credibility from scratch. The latter conclusion can be seen as a probabilistic vindication of the traditional isolation objection, or a precise version thereof.

There are various ways to save the coherence theory from this probabilistic attack. The most radical strategy would be to dismiss the probabilistic framework as altogether unsuitable for coherentism. Independent reasons for this response can be found in Thagard's work (e.g., Thagard 2000 and 2005). A less radical approach would be to refrain from any blanket dismissal of probability theory in this context but reject one of the premises of the troublesome proof. This is the strategy recently taken by Huemer, who now considers his 1997 probabilistic refutation of coherentism to be mistaken (Huemer 2011, 39, footnote 6). While he thinks that Coherence Justification correctly captures a minimal sense of coherentism, he reports dissatisfaction with both Conditional Independence and Nonfoundationalism (his term for the latter is “Strong Nonfoundationalism”). Huemer now thinks independence, in the intuitive sense, is better captured by the condition P(E2 | E1, A) > P(E2 | E1, ¬A). Moreover, he takes the condition P(A | E1, ¬E2) = P(A), or “Weak Nonfoundationalism” in his terminology, to be a more suitable explication of nonfoundationalist intuitions than the condition P(A | E1) = P(A). Space does not allow a more detailed examination of Huemer's arguments for these conditions. At any rate, he goes on to show that they are jointly consistent with Coherence Justification: there are probability distributions satisfying all three conditions. Thus the immediate threat to coherentism presented by the observed inconsistency of the three original conditions has been neutralized, even though a critic might point out that the defense is weak since it has not been shown that Coherence Justification follows from the two new conditions.

Whatever merits Huemer's new conditions might have, their standing in the literature is hardly comparable to that of the original conditions. Conditional Independence, for instance, is an extremely powerful and intuitive concept which has been put to fruitful use in many areas in philosophy and computer science, the most spectacular example being the theory of Bayesian networks (Pearl, 1985). Similarly, the Nonfoundationalist condition is still the most widely used—and many would say most natural—way of stating, in the language of probability theory, that a testimony fails to support that which is testified. Thus, it would seem that coherentism is saved at the price of disconnecting it from the way in which probability theory is standardly applied. Roche (2010) criticizes Nonfoundationalism from another perspective. In his view, a close reading of BonJour reveals that the latter requires only that the witness reports lack individual credibility in the sense that P(A | Ei) = 0.5 and not in the sense of P(A | Ei) = P(A), which is the condition we called Nonfoundationalism. Since the former does not entail the latter, coherentists, to the extent that they follow BonJour, need not worry about the joint inconsistency of Conditional Independence, Nonfoundationalism and Coherence Justification. Still, this account of what it means to lack initial credibility is highly counterintuitive if taken as a general characterization, and it may in the end be more charitable to interpret BonJour as not having subscribed to it. For an elaboration of this point the reader is referred to Olsson (2005, 65), footnote 4. In later works, BonJour has gradually retreated from his original coherentist position (e.g., BonJour 1989 and 1999).

6. Probabilistic Measures of Coherence

We recall that Lewis defined coherence, or congruence, not for any old set of propositions but rather for a set of supposed facts asserted. One way to capture this idea is in terms of the notion of a testimonial system introduced in Olsson (2005). A testimonial system S is a set {⟨E1,A1⟩,…, ⟨En,An⟩} where Ei is a report to the effect that Ai is true. We will say that Ai is the content of report Ei. The content of a testimonial system S = {⟨E1,A1⟩, … , ⟨En,An⟩} is the ordered set of report contents ⟨A1,…,An⟩. By the degree of coherence C(S) of such a testimonial system we will mean the degree of coherence of its content. Bovens and Hartmann (2003) proposed a similar representation of supposed facts asserted in terms of ordered sets.

To illustrate these concepts, consider a case in which all witnesses report exactly the same thing, e.g., that John was at the crime scene. That would be a paradigm case of a (highly) coherent set of reports. Now contrast this situation with one in which only one witness reports this. That would be a situation which would intuitively not qualify as coherent. Indeed, it does not even seem meaningful to apply the concept of coherence to a case of just one report (except in the trivial sense in which everything coheres with itself). Letting A be the proposition “John was at the crime scene”, and E1,…,En the corresponding reports, this intuitive difference can be represented as the difference between two testimonial systems: S = {⟨E1,A⟩,…, ⟨En,A⟩} and S′ = {⟨E1,A⟩}. If, by contrast, the entities to which coherence applies are represented as simple unstructured sets, the sets of testimonies in question would be given the same formal representation in terms of the set having A as its sole member.
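The point can be made concrete in a few lines. The following is our own Python rendering of testimonial systems as lists of (report, content) pairs; it shows that this representation keeps the two-witness and one-witness cases distinct, whereas plain unstructured sets conflate them.

```python
# Our own Python rendering of a testimonial system: a list of
# (report, content) pairs, so repeated contents are not collapsed.
A = "John was at the crime scene"
S = [("E1", A), ("E2", A)]   # two witnesses agreeing
S_prime = [("E1", A)]        # a single report

def content(system):
    """The ordered tuple of report contents."""
    return tuple(a for _, a in system)

print(content(S))                                # (A, A): agreement between distinct reports
print(content(S_prime))                          # (A,): nothing for A to cohere with
print(set(content(S)) == set(content(S_prime)))  # True: unstructured sets conflate the cases
```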

By a (probabilistic) coherence measure, as defined for ordered sets of propositions, we shall mean any numerical measure C(A1,…,An) defined solely in terms of the probability of A1,…,An (and their Boolean combinations) and standard arithmetical operations (Olsson, 2002). This definition makes the degree of coherence of a set of witness reports a function of the probability of the report contents (and their Boolean combinations). Huemer (2011, 45) refers to this consequence as the Content Determination Thesis. We will return to the status of this thesis in section 8, in connection with the recent impossibility results for coherence. A reasonable constraint on any coherence measure is that the degree of coherence of an ordered set should be independent of the particular way in which the content propositions are listed. Thus, C(⟨A1,A2, … ,An⟩) = C(⟨B1,B2, … ,Bn⟩) whenever ⟨B1,B2, … ,Bn⟩ is a permutation of ⟨A1,A2, … ,An⟩. This is a formal way of stating that all propositions in the relevant set should be treated as epistemic equals. All measures that will be discussed below satisfy this condition.

Our starting point will be an attempt to identify the degree of coherence of a set with its joint probability:

C0(A,B) = P(A∧B)

However, it is easily seen that this is not a plausible proposal. Consider the following two cases. Case 1: Two witnesses point out the same person as the perpetrator, John, say. Case 2: One witness states that John or James did it, and the other witness that John or Mary did it. Since the joint probability is the same in both cases, equaling the probability that John did it, they yield the same degree of coherence as measured by C0. And yet, the reports in the first case are more coherent from a presystematic standpoint because the witnesses are in complete agreement.

One way of handling this example would be to define coherence as follows (Glass 2002, Olsson 2002):

C1(A,B) = P(A∧B)/P(A∨B)

C1(A,B), which also takes on values between 0 and 1, measures how much of the total probability mass assigned to either A or B falls into their intersection. The degree of coherence is 0 if and only if P(A∧B) = 0, i.e., just in case A and B do not overlap at all, and it is 1 if and only if P(A∧B) = P(A∨B), i.e., just in case A and B coincide. The measure is straightforwardly generalizable:

C1(A1,…,An) = P(A1∧…∧An)/P(A1∨…∨An)
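The contrast between C0 and C1 on the two witness cases above can be computed directly. The sketch below is our own illustration, assuming a uniform distribution over three suspects.

```python
from fractions import Fraction

# Illustration under an assumed uniform distribution over three suspects.
P = {s: Fraction(1, 3) for s in ("John", "James", "Mary")}

def prob(event):            # an event is a set of suspects
    return sum(P[s] for s in event)

def C0(a, b):               # joint probability
    return prob(a & b)

def C1(a, b):               # overlap measure (Glass 2002, Olsson 2002)
    return prob(a & b) / prob(a | b)

john = {"John"}
john_or_james = {"John", "James"}
john_or_mary = {"John", "Mary"}

# Case 1: both witnesses name John. Case 2: two different disjunctive reports.
print(C0(john, john), C0(john_or_james, john_or_mary))  # 1/3 and 1/3: C0 conflates the cases
print(C1(john, john), C1(john_or_james, john_or_mary))  # 1 and 1/3: C1 separates them
```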

This measure assigns the same coherence value, namely 1, to all cases of total agreement, regardless of the number of witnesses that are involved. Against this it may be objected that agreement among the many is more coherent than agreement among the few, an intuition that can be accounted for by the following alternative measure introduced by Tomoji Shogenji (1999):

C2(A,B) = P(A | B)/P(A) = P(A∧B)/(P(A) × P(B))

or, as Shogenji proposes to generalize it,

C2(A1,…,An) = P(A1∧…∧An)/(P(A1) × … × P(An))

It is easy to see that this measure is sensitive, in the way we suggested, to the number of reports in cases of total agreement: n agreeing reports correspond to a coherence value of 1/P(A)n−1, meaning that as n approaches infinity, so does the degree of coherence. Like the other measures, C2(A,B) equals 0 if and only if A and B do not overlap. An alternative generalization of the Shogenji measure is presented in Schupbach (2011). However, whatever its philosophical merits, Schupbach's proposal is considerably more complex than Shogenji's original suggestion. Ken Akiba (2000) raises a number of worries for the Shogenji measure but they seem to be predicated on the assumption that the concept of coherence is interestingly applicable to unordered sets of propositions, an assumption that we found reason to question above.
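The growth of the Shogenji measure under total agreement is easy to verify. A minimal sketch of our own, using exact fractions:

```python
from fractions import Fraction

def C2_agreeing(p_a, n):
    """Shogenji coherence of n reports all asserting the same A:
    P(A ∧ … ∧ A) / P(A)^n = 1 / P(A)^(n-1)."""
    return p_a / p_a**n

for n in (2, 3, 4):
    print(n, C2_agreeing(Fraction(1, 3), n))  # 3, 9, 27: grows without bound in n
```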

C1 and C2 can also be contrasted with regard to their sensitivity to the specificity of the propositions involved. Consider two cases. The first case involves two witnesses both claiming that John committed the crime. The second case involves two witnesses both making the weaker disjunctive claim that John, Paul or Mary committed the crime. Which pair of witnesses are delivering the more coherent set? One way to reason is as follows. Since both cases involve fully agreeing testimonies, the degree of coherence should be the same. This is also the result we get if we apply C1. But one could maintain instead that since the first two witnesses agree on something more specific—a particular individual's guilt—the degree of coherence should be higher. This is what we get if we apply C2. In an attempt at reconciliation, Olsson (2002) suggested that C1 and C2 may capture two different concepts of coherence. While C1 measures the degree of agreement of a set, C2 is more plausible as a measure of how striking the agreement is.
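The specificity contrast can likewise be computed explicitly. The following sketch is our own, assuming, purely for illustration, ten equiprobable suspects.

```python
from fractions import Fraction

# Purely illustrative model: ten equiprobable suspects.
P = {i: Fraction(1, 10) for i in range(10)}

def prob(event):
    return sum(P[w] for w in event)

def C1(a, b):               # degree of agreement
    return prob(a & b) / prob(a | b)

def C2(a, b):               # Shogenji: how striking the agreement is
    return prob(a & b) / (prob(a) * prob(b))

john = {0}                  # "John committed the crime"
disjunction = {0, 1, 2}     # "John, Paul or Mary committed the crime"

print(C1(john, john), C1(disjunction, disjunction))  # 1 and 1: full agreement either way
print(C2(john, john), C2(disjunction, disjunction))  # 10 vs 10/3: C2 rewards specificity
```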

A further much discussed measure is that proposed in Fitelson (2003). It is based on the intuition that the degree of coherence of a set E should be “a quantitative, probabilistic generalization of the (deductive) logical coherence of E” (ibid., 194). Fitelson takes it to be a consequence of this idea that a maximum (constant) degree of coherence is attained if the propositions in E are all logically equivalent (and consistent). This is in accordance with C1 but not with C2, which as we saw is sensitive to the specificity (prior probability) of the propositions involved. Fitelson, who approached the subject from the standpoint of confirmation theory, proposed a complex coherence measure based on Kemeny and Oppenheim's (1952) measure of factual support. A further innovative idea is that Fitelson extends this measure to take into account support relations holding between all subsets in the set E, whereas Lewis, we recall, only considered the support relation holding between one element and the rest. The degree of coherence of a set, finally, is defined as the mean support among the subsets of E. An alleged counterexample to this measure can be found in Siebel (2004). The reader may wish to consult Bovens and Hartmann (2003) and Douven and Meijs (2007) for further measures and for detailed critical surveys of the literature. Douven and Meijs's own proposal is similar to Fitelson's in defining coherence in terms of confirmation.

It is fair to say that coherence theorists have yet to reach anything like consensus on how best to define coherence in probabilistic terms. Nevertheless, the debate so far has given rise to a much more fine-grained understanding of what the options are and what consequences they have. What is more, some quite surprising conclusions can be drawn even with this issue largely unresolved: all we need to assume in order to prove that no coherence measure can be truth conducive, in a sense to be explained, is that those measures respect the Content Determination Thesis.

7. Truth Conduciveness: the Analysis Debate

Peter Klein and Ted Warfield's 1994 paper in Analysis initiated a lively and instructive debate on the relationship between coherence and probability (e.g., Klein and Warfield 1994 and 1996, Merricks 1995, Shogenji 1999, Cross 1999, Akiba 2000, Olsson 2001, Fitelson 2003 and Siebel 2004). According to Klein and Warfield, just because one set of beliefs is more coherent than another set, this does not mean that the first set is more likely to be true. On the contrary, a higher degree of coherence can, so they claimed, be associated with a lower probability of the whole set. The idea behind their reasoning is simple: We can often raise the coherence of an informational set by adding more information that explains the information already in the set. But as more genuinely new information is added, the probability that all the elements of the set are true is correspondingly diminished. This, Klein and Warfield wrote, follows from the well-known inverse relationship between probability and informational content. They concluded that coherence is not truth conducive.

Much in the spirit of C. I. Lewis, Klein and Warfield illustrated their argument by referring to a detective story (the so-called “Dunnit example”). It turns out that this example is unnecessarily complex and that the main point can be illustrated by reference to a simpler case (borrowed from computer science where it is used to exemplify the concept of non-monotonic inference). Suppose that you are told by one source, Jane, that Tweety is a bird and by another source, Carl, that Tweety cannot fly. The resulting information set S = ⟨“Tweety is a bird”, “Tweety cannot fly”⟩ is not particularly coherent from an intuitive standpoint. Nor is it coherent from the point of view of Lewis's definition: assuming one of the items true decreases the probability of the other. At this point, it would be reasonable to conjecture that either Jane or Carl is not telling the truth. However, upon consulting a further source, Rick, we receive the information that Tweety is a penguin. The new set S′ = ⟨“Tweety is a bird”, “Tweety cannot fly”, “Tweety is a penguin”⟩ is surely more coherent than S. In explaining the previous anomaly, the information supplied by Rick contributes to the explanatory coherence of the set.

The new enlarged set S′ is more coherent than the original smaller set S. And yet S, being less informative, is more probable than S′: the conjunction of all the propositions in S is more probable than the conjunction of all the propositions in S′. Hence, more coherence does not necessarily imply higher likelihood of truth in the sense of higher joint probability. Klein and Warfield seem to be right: coherence is not truth conducive.

But, as will soon be clear, this conclusion is premature. As a preliminary, let us state Klein and Warfield's argument more formally using the following abbreviations:

A1 = “Tweety is a bird.”
A2 = “Tweety cannot fly.”
A3 = “Tweety is a penguin.”

The first information set S consists of A1 and A2. The second, more coherent set S′ contains, in addition, A3. We let C denote the degree of coherence, intuitively understood. What we have then is:

C(A1,A2,A3) > C(A1,A2).

As we saw, due to the greater informational content of the larger set, its probability is lower than that of the smaller set:

P(A1,A2,A3) < P(A1,A2).
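The inverse relationship between probability and informational content appealed to here is simply the monotonicity of joint probability: adding a conjunct can never raise the probability of a conjunction. A toy model of our own (with arbitrary numbers) makes this concrete for the Tweety case:

```python
from fractions import Fraction

# A toy world of mutually exclusive, exhaustive outcomes; numbers are our own.
P = {"penguin": Fraction(1, 20), "ostrich": Fraction(1, 20),
     "sparrow": Fraction(1, 2), "dog": Fraction(2, 5)}

def prob(event):
    return sum(P[w] for w in event)

bird = {"penguin", "ostrich", "sparrow"}    # A1
cannot_fly = {"penguin", "ostrich", "dog"}  # A2
penguin = {"penguin"}                       # A3

print(prob(bird & cannot_fly))            # P(A1 ∧ A2) = 1/10
print(prob(bird & cannot_fly & penguin))  # P(A1 ∧ A2 ∧ A3) = 1/20: strictly smaller
```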

Yet behind this seemingly impeccable piece of reasoning lurks a serious difficulty. It has not been taken into account that we are supposed to know also that Jane reports that Tweety is a bird, that Carl reports that Tweety cannot fly and that Rick reports that Tweety is a penguin. Let:

E1 = “Jane reports that Tweety is a bird”
E2 = “Carl reports that Tweety cannot fly”
E3 = “Rick reports that Tweety is a penguin”

The well-known principle of total evidence now dictates that all relevant evidence should be taken into consideration when computing probabilities. Since it cannot be excluded at the outset that the evidence represented by E1–E3 may be relevant to the probability of the information sets S and S′, the probability of the smaller set is not P(A1,A2) but rather P(A1,A2 | E1,E2). Similarly, the probability of the larger set is not P(A1,A2,A3) but rather P(A1,A2,A3 | E1, E2, E3). Klein and Warfield's reasoning fails because it is based on an incorrect understanding of the joint probability of a set of reported propositions.

Bovens and Olsson (2002) raised the question whether, given this revised understanding of the probability of a set of reported propositions, it would still follow that extended sets are no more probable than the sets they extend. Referring to our Tweety example, would it still hold that

P(A1,A2,A3 | E1,E2,E3) < P(A1,A2 | E1,E2)?

Bovens and Olsson demonstrated that the answer to the general question is in the negative by giving an example of a more coherent extended set that is also more probable, on the revised understanding of what this means, than the original smaller set.

Let us say that a measure C of coherence is propositionally truth conducive if and only if the following holds:

if C(A1,…,An) > C(B1,…,Bm), then P(A1∧…∧An) > P(B1∧…∧Bm).

One lesson emerging from the Analysis debate is that this way of construing truth conduciveness should be replaced by a notion of truth conduciveness where the relevant probabilities take all relevant evidence into account, whatever that evidence may be (beliefs, testimonies etc.). For example, a coherence measure C is doxastically truth conducive (for a subject S) if and only if:

if C(A1,…,An) > C(B1,…,Bm), then P(A1∧…∧An | BelSA1,…,BelSAn) > P(B1∧…∧Bm | BelSB1,…,BelSBm),

where BelSA abbreviates “S believes that A”. In other words, a measure of coherence is doxastically truth conducive just in case a more coherent set of believed propositions is jointly more probable than a less coherent set of believed propositions. This is how we will understand the probability (likelihood of truth) of a set in the following.

8. Impossibility Results

The recent impossibility results for coherence draw on all three debates summarized above: the Lewis-BonJour controversy, the debate over probabilistic measures of coherence and also the dispute in Analysis regarding truth conduciveness. Before we can discuss the results we need to make one further observation. Given the conclusion of the Lewis-BonJour dispute, it is a reasonable expectation that no coherence measure is truth conducive, in the relevant conditional sense, unless it is assumed that the reports (beliefs, memories etc.) in question are individually credible and collectively independent. But assuming this is not sufficient for coherence to stand a reasonable chance of being truth conducive. We must also require that when we compare two different sets of reports, we do so while keeping the degree of individual credibility fixed. Otherwise we could have a situation in which one set of report contents is more coherent than another set but still fails to give rise to a higher likelihood of truth simply because the reporters delivering the propositions in the less coherent set are individually more reliable. Thus, truth conduciveness must be understood in a ceteris paribus sense. The question of interest, then, is whether more coherence implies a higher probability (given independence and individual credibility), everything else being equal. We are now finally in a position to state the impossibility theorems. What they show is that no measure of coherence is truth conducive even in a weak ceteris paribus sense, under the favorable conditions of (conditional) independence and individual credibility.

The first result of this nature was presented by Luc Bovens and Stephan Hartmann (2003). Their definition of truth conduciveness deviates slightly from the standard account given above. As they define it, a measure C is truth conducive if and only if, for all sets S and S′, if S is at least as coherent as S′ according to C, then S is at least as likely to be true as S′, ceteris paribus, given independence and individual credibility. Very roughly, their proof has the following structure: They show that there are sets S and S′, each containing three propositions, such that which set is more likely to be true will depend on the level at which the individual credibility (reliability) is held fixed. Thus for lower degrees of reliability, one set, say S, will be more probable than the other set, S′; for higher degrees of reliability, the situation will be reversed. One can now find a counterexample to the truth conduciveness of any measure C through a strategic choice of the level at which the reliability is held fixed. Suppose for instance that, according to C, the set S is more coherent than the set S′. In order to construct a counterexample to C's truth conduciveness, we set the reliability to a value for which S′ will be more probable than S. If, on the other hand, C makes S′ more coherent than S, we fix the reliability to a level at which S will be the more probable set. For the details, see Bovens and Hartmann (2003, section 1.4).

Olsson defines truth conduciveness in the standard fashion. His impossibility theorem is based on the following alternative proof strategy (Olsson 2005, appendix B): Consider a situation of two witnesses both reporting that A, represented by S = ⟨A, A⟩. Take a measure C of coherence that is informative with respect to S, in the sense that it does not assign the same degree of coherence to S regardless of which probability assignment is used. Take two assignments P and P′ of probabilities to the propositions in S that give rise to different coherence values. Olsson shows that a counterexample to the truth conduciveness of C can be constructed through a strategic choice of the probability of reliability. If P makes S more coherent than does P′ according to C, we fix the probability of reliability in such a way that S comes out as more probable on P′ than on P. If, on the other hand, P′ makes S more coherent, then we choose a value for the probability of reliability so that P makes S more probable. It follows that no coherence measure is both truth conducive and informative.

There are some further subtle differences between the two results. First, Olsson's theorem is proved against the backdrop of a dynamic model of reliability: the assessment of witness reliability, which in this model is represented as a probability of reliability, may change as we obtain more testimonies. Bovens and Hartmann's detailed proof assumes a non-dynamic model of reliability, although they indicate that the result carries over to the dynamic case. Second, there is a difference in the way the ceteris paribus condition is understood. Olsson fixes the initial probability of reliability, but allows the prior probability of the report contents to vary. Bovens and Hartmann fix not only the reliability but also the prior probability of the report contents.

These impossibility results give rise to a thought-provoking paradox. How can it be that we trust and rely on coherence reasoning, in everyday life and in science, when in fact coherence is not truth conducive? Since the impossibility results were published a number of studies have been dedicated to the resolution of this paradox. These studies can be divided into two camps. Researchers in the first camp accept the conclusion that the impossibility results show that coherence is not truth conducive. They add, however, that this does not prevent coherence from being valuable and important in other ways. Researchers in the other camp do not accept the conclusion that the impossibility results show that coherence is not truth conducive because they think that at least one premise used in proving the results is doubtful.

Let us start with responses from the first camp. Franz Dietrich and Luca Moretti (2005) show that coherence in the sense of the Olsson measure is linked to the practice of indirect confirmation of scientific hypotheses. That measure turns out to be, in the terminology of Moretti (2007), “confirmation conducive”. David H. Glass (2007) argues, similarly, that coherence can provide the key to a precise account of inference to the best explanation, the main idea being to use a coherence measure for ranking competing hypotheses in terms of their coherence with a given piece of evidence. Furthermore, Olsson and Schubert (2007) observe that, while coherence falls short of being truth conducive, it can still be “reliability conducive”, i.e., more coherence, according to some measures, entails a higher probability that the sources are reliable, at least in a paradigmatic case (cf. Schubert 2012a, 2011). Nevertheless, Schubert has recently proved an impossibility theorem to the effect that no coherence measure is reliability conducive in general (Schubert 2012b). For yet another example, Staffan Angere (2007, 2008) has argued, based on computer simulations, that the fact that coherence fails to be truth conducive, in the above sense, does not prevent it from being connected with truth in a weaker, defeasible sense. In fact, almost all coherence measures that have an independent standing in the literature satisfy the condition that most cases of higher coherence are also cases of higher probability, although they do so to different degrees. Finally, it has been noted that coherence plays an important negative role in our thinking. If our beliefs show signs of incoherence, this is often a good reason for contemplating a revision. See chapter 10 in Olsson (2005) for an elaboration of this point.

As for the other approach to the impossibility results (questioning the premises used in their derivation), we have already seen that Huemer (2007, 2011), in connection with the Lewis-BonJour dispute, has expressed doubts regarding the standard way of formalizing independence in terms of conditional probability. It should come as no surprise that he objects to the impossibility results (ibid.) on the same grounds. In his 2011 article, Huemer even questions the Content Determination Thesis, which plays a pivotal role in the derivation of the results, for reasons that we have to leave aside here.

All these things can be consistently questioned. But the question is: at what cost? We have already seen that there are strong systematic reasons for explicating independence in terms of conditional independence. Furthermore, the Content Determination Thesis is deeply entrenched in just about all work on coherence that takes agreeing witnesses to be the prototypical case. Giving up Content Determination would mean purging the coherence theory of one of its clearest and most distinctive pre-systematic intuitions: that coherence is a property at the level of report contents. The worry is that coherentism is saved at the cost of robbing it of almost all its significance, as Ewing put it almost a century ago in response to a similar worry (Ewing 1934, 246).

These concerns do not obviously carry over to another dialectical move: questioning the ceteris paribus conditions employed in the impossibility results, i.e., the conditions that determine what to hold fixed as the degree of coherence is varied. This line of criticism has been taken up by several authors, including Douven and Meijs (2007), Schupbach (2008) and Huemer (2011), and it may well be the internally least problematic strategy to explore for those who are inclined to challenge the premises upon which the impossibility results are based. It should be borne in mind, though, that the tendency to offer ever stronger ceteris paribus conditions may in the end be self-defeating. As more things are held fixed, it becomes easier for a coherence measure to be truth conducive. Hence, researchers pursuing this line of defense ultimately run the risk of trivializing the debate by making coherence truth conducive by definition (cf. Schubert 2012b).

9. Conclusions

The coherence theory of justification represents an initially suggestive solution to some deeply rooted problems of epistemology. Perhaps most significantly, it suggests a way of thinking about knowledge and justification as arising in a “web of belief”. As such, it competes with, and could potentially replace, the historically dominant, but increasingly disreputable, foundationalist picture of knowledge as resting on a secure base of indubitable fact, as well as with other forms of foundationalism such as the view that some beliefs enjoy non-doxastic support that requires no support of its own. Unfortunately, coherence theorists have generally struggled to provide the details necessary for their theory to advance beyond the metaphorical stage, something which has not gone unnoticed by their critics. Following the seminal work of C. I. Lewis, contemporary scholars have taken on that challenge with considerable success in terms of clarity and established results, although a fair number of the latter are to the coherentist's disadvantage. Some results support a weak foundationalist theory according to which coherence can boost credibility that is already there, without creating it from scratch. However, on the face of it, the impossibility results negatively affect this less radical form of coherence theory as well. It is often observed that while it is relatively easy to put forward a convincing theory in outline, the ultimate test for any philosophical endeavor is whether the product will survive detailed specification (the devil is in the details, and so on). What the recent developments in this area have shown, if nothing else, is that this is very much true for the coherence theory of epistemic justification.



Related Entries

epistemology | justification, epistemic: foundationalist theories of | justification, epistemic: internalist vs. externalist conceptions of | knowledge: analysis of


Thanks to Stefan Schubert for his lucid and constructive line-by-line criticism of an earlier version, and to Staffan Angere for several helpful additional comments. I am finally indebted to two anonymous referees for suggesting a number of other improvements.