Supplement to Relativism
Formal normative models, especially deductive logics, the probability calculus, and sometimes decision theory, are often said to provide at least minimal guidelines for rationality. They are often defended on the grounds that the failure to abide by them will lead to inconsistency (this is what Dutch-book arguments for assigning probabilities in accordance with the strictures of the probability calculus boil down to, for example). But consistency alone cannot take us very far towards justifying specific norms of justification or inference.
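The Dutch-book point can be made concrete with a toy calculation. In the sketch below the agent's credences (0.6 in an event and 0.6 in its negation) are invented for illustration; because they sum to more than 1, a bookie can sell her a pair of bets she regards as individually fair that together guarantee a loss:

```python
# A minimal sketch of a Dutch-book argument. The credences are hypothetical:
# P(rain) = 0.6 and P(not-rain) = 0.6 violate the probability axioms
# because they sum to more than 1.
credence_rain = 0.6
credence_no_rain = 0.6  # coherent credences would sum to exactly 1

# The agent regards a bet paying $1 if E occurs as fair at price P(E),
# so a bookie can sell her both bets at those prices.
total_price = credence_rain + credence_no_rain  # she pays $1.20 in all

# Exactly one of the two events occurs, so exactly one bet pays $1.
payout = 1.0

net = payout - total_price
print(f"Agent's net outcome in every state of the world: ${net:+.2f}")
# The loss is guaranteed: whatever happens, she is down $0.20.
```

The same construction works in reverse for credences that sum to less than 1 (the bookie buys the bets instead of selling them), which is why conformity to the probability calculus is exactly what immunizes an agent against such sure-loss contracts.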
Some possible limitations of consistency are:
Consistency does not always trump:
The preface paradox suggests that it is sometimes reasonable to have beliefs that one knows are inconsistent. Furthermore, everyone has inconsistent beliefs, but life is too short to spend it trying to track down even the most trivial inconsistencies in one's thought.
Consistency is not unequivocal:
Beliefs are consistent with one another if they could all be true. Hence consistency requires some antecedent conception of truth, and different logical systems and semantic theories give somewhat different accounts of this, which may in turn lead to different verdicts about consistency.
Formal models do not apply themselves:
Formal normative models often provide clear accounts of consistency, but applying such models to the real world is almost as much an art as a science, and when we turn to the messiness of everyday life it is often less than obvious what consistency amounts to.
Consistency is not enough:
Consistency is much too weak to justify our complex edifice of inferential and justificatory practices. It is an important but fairly weak constraint that leaves a number of alternative standards of epistemic evaluation open, so it alone cannot be enough to justify such epistemic practices as induction by enumeration or inference to the best explanation.
The most obvious way to show that a pattern of inference (e.g., modus ponens, induction by enumeration, inference to the best explanation) is justified would be to show that it is reliable or truth-conducive, i.e., that it tends to produce true conclusions when supplied with true premises. Showing that a pattern of inference is truth-conducive would provide a good measure of justification for it.
In the case of deductive logics, soundness proofs show that the inferential rules (e.g., modus ponens, existential instantiation) of a particular logical system are reliable in this way. Different logical systems employ slightly different definitions of truth, however, and soundness proofs themselves employ some of the same patterns of reasoning (e.g., modus ponens, contraposition) that they are meant to vindicate, so there is an inescapable atmosphere of circularity here.
This problem is more obvious when we turn to inductive or ampliative inferences. Justification here would require us to solve Hume's problem of induction, i.e., to provide a non-question-begging justification of ampliative inference, and there is virtually no reason to suppose this is possible.
Reflective Equilibrium (Goodman, 1983; Rawls, 1971) is a process in which we begin with both our considered views about general epistemic standards (e.g., that modus ponens and inference to the best explanation are good patterns of reasoning) and our particular judgments about which things are true, which arguments persuasive, which sorts of evidence convincing. We then work back and forth, revising our standards in light of our particular judgments, on the one hand, and modifying some of our judgments about particular cases in light of our standards, on the other. The goal is to find a set of general standards and particular judgments that are in harmony or equilibrium. The standards arrived at in such an idealized process are said to be in reflective equilibrium, and some (though by no means all) philosophers hold that this is one way, perhaps the only way, to justify principles (cf. Goodman, 1983, pp. 62 ff.; Rawls, 1971).
A common objection to reflective equilibrium is that even if two people begin with the same beliefs and principles they may make different adjustments that would lead to rather different outcomes or equilibrium points. And if two groups begin with fairly different epistemic standards and judgments about particular cases, they may well end up justifying very different standards. The fact that the method of reflective equilibrium could lead to different outcomes is a sort of virtue for the epistemic relativist, however, since it means that it would be possible for rather different packages of central beliefs and principles to be justified.
There has been a good deal of recent debate about the extent to which purely formal, e.g., consistency-based, accounts of rationality supply appropriate norms for creatures with cognitive limitations, like us, who must work with limited information, limited time, and limited energy.
Various writers have suggested replacing these idealized models with accounts of bounded rationality. For example, Gerd Gigerenzer and his coworkers hold that it is rational to use certain “fast and frugal” heuristics (e.g., Chase, Hertwig, and Gigerenzer, 1998). Because these heuristics exploit contingent facts about the environment we inhabit, they would not deliver accurate results in all possible environments. But we don't need accurate results in all possible environments. We only need accuracy in our actual environment and, Gigerenzer and his coworkers argue, these heuristics provide this because they involve biological adaptations that are attuned to the structure of information in the ecological settings in which our species evolved. They contain built-in, often quite domain-specific information, and when used in the environments to which they are geared, they can be quite accurate. One may wonder how reliable heuristics that were adaptive in the pre-agricultural environment in which our species spent most of its evolutionary history would be in our highly technical information age, but Gigerenzer's approach is intriguing.
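The flavor of such a heuristic can be conveyed with a sketch of “Take The Best,” one of the fast-and-frugal heuristics studied by Gigerenzer's group: to decide which of two options scores higher on some criterion, consult cues one at a time in order of validity and stop at the first cue that discriminates, ignoring all the rest. The cue names, their ordering, and the city data below are invented for illustration:

```python
# A sketch of a "Take The Best" heuristic: decide between two options
# using only the first cue that discriminates between them. The cues,
# their validity ordering, and the data are hypothetical examples.
def take_the_best(a, b, cues):
    """Return the option favored by the first discriminating cue, or None."""
    for cue in cues:  # cues assumed pre-sorted from most to least valid
        if a[cue] != b[cue]:
            return a if a[cue] > b[cue] else b
    return None  # no cue discriminates; the agent must guess

# Hypothetical cue profiles (1 = cue present, 0 = absent), e.g. for
# judging which of two cities is larger.
cues = ["is_capital", "has_intl_airport", "has_university"]
city_a = {"name": "A", "is_capital": 0, "has_intl_airport": 1, "has_university": 1}
city_b = {"name": "B", "is_capital": 0, "has_intl_airport": 0, "has_university": 1}

winner = take_the_best(city_a, city_b, cues)
print(winner["name"])  # the first cue ties, so the second cue decides
```

The heuristic is “frugal” because it typically examines only a fraction of the available cues, and it is accurate to the extent that the cue ordering mirrors the actual structure of the environment.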
Mary is an obsessively scrupulous and meticulous author who doesn't write anything in her book unless she has triple-checked it and is virtually certain that it's true. Hence she is justified in believing that every sentence in her book is true. But Mary is also a student of human fallibility and knows all too well of the slips even the most careful writers can make. Hence she is justified in believing that something in her long and complicated book is false. But putting these two things together, it appears that she is justified in believing both that every sentence in the book is true and that at least one of them is not true.
Philosophers have proposed various ways to avoid this conclusion, but it provides some reason to suppose that it can sometimes be rational to hold inconsistent beliefs. The paradox was originally formulated by D. C. Makinson (1965).
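A toy calculation brings out the probabilistic structure behind the paradox. The figures are assumptions chosen for illustration: suppose the book contains 1,000 sentences, Mary is 99.9% confident in each, and the sentences are treated as independent:

```python
# A toy calculation behind the preface paradox. The numbers are
# illustrative assumptions: 1000 sentences, each believed with 99.9%
# confidence, treated as probabilistically independent.
num_sentences = 1000
confidence_each = 0.999

# Probability that *every* sentence is true, under independence:
prob_all_true = confidence_each ** num_sentences
print(f"P(each sentence true)    = {confidence_each}")
print(f"P(whole book error-free) = {prob_all_true:.3f}")
# Even though each individual belief is nearly certain, the conjunction
# of all of them is more likely false than true -- which is why Mary can
# reasonably believe each sentence while also believing that the book
# contains at least one error.
```

Under these assumptions the probability that the whole book is error-free comes out at roughly 0.37, so near-certainty in each conjunct is compatible with its being more probable than not that the conjunction fails.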