Carl G. Hempel (1905–1997) was the principal proponent of the “covering law” theory of explanation and the paradoxes of confirmation as basic elements of the theory of science. A master of philosophical methodology, Hempel pursued explications of initially vague and ambiguous concepts, which were required to satisfy very specific criteria of adequacy. With Rudolf Carnap and Hans Reichenbach, he was instrumental in the transformation of the dominant philosophical movement of the 1930s and 40s, which was known as “logical positivism”, into the more nuanced position known as “logical empiricism”. His studies of induction, explanation, and rationality in science exerted a profound influence upon more than a generation of philosophers of science, many of whom became leaders of the discipline in their own right.
- 1. Biographical Sketch
- 2. The Critique of Logical Positivism
- 3. Scientific Reasoning
- 4. Scientific Explanations
- 5. Explanation, Prediction, Retrodiction
- 6. Inductive-Statistical Explanation
- 7. The Problem of Provisos
- 8. Reflections on Rationality
- Academic Tools
- Other Internet Resources
- Related Entries
Carl G(ustav) Hempel (1905–97), known as “Peter” to his friends, was born near Berlin, Germany, on January 8, 1905. He studied philosophy, physics and mathematics at the Universities of Göttingen and Heidelberg before coming to the University of Berlin in 1925, where he studied with Hans Reichenbach. Impressed by the work of David Hilbert and Paul Bernays on the foundations of mathematics and introduced to the studies of Rudolf Carnap by Reichenbach, Hempel came to believe that the application of symbolic logic held the key to resolving a broad range of problems in philosophy, including that of separating genuine problems from merely apparent ones. Hempel's commitment to rigorous explications of the nature of cognitive significance, of scientific explanation, and of scientific rationality would become the hallmark of his research, which exerted great influence on professional philosophers, especially during the middle decades of the 20th Century.
In 1929, at Reichenbach's suggestion, Hempel spent the fall semester at the University of Vienna, where he studied with Carnap, Moritz Schlick, and Friedrich Waismann, who were advocates of logical positivism and members of (what came to be known as) “the Vienna Circle”. It would fall to Hempel to become perhaps the most astute critic of that movement and to contribute to its refinement as logical empiricism. As Hitler increased his power in Germany, Hempel, who was not Jewish but did not support the Nazi regime, moved to Brussels and began collaborating with Paul Oppenheim, which would result in several classic papers, including “Studies in the Logic of Explanation”, which appeared in 1948 (Rescher 2005, Chs. 8 and 9). Hempel would also visit the United States twice—the University of Chicago in 1937–38 and then the City College of New York in 1939–40, where he held his first academic position—and eventually became a naturalized citizen.
He was productive throughout his career, publishing such important papers as “The Function of General Laws in History” (1942) and “Studies in the Logic of Confirmation”, “Geometry and Empirical Science”, and “The Nature of Mathematical Truth” (all in 1945), before leaving City College for Yale. While there, Hempel would publish “Problems and Changes in the Empiricist Criterion of Meaning” (1950) and “The Concept of Cognitive Significance: A Reconsideration” (1951), as well as his first book, a volume in the International Encyclopedia of Unified Science, Fundamentals of Concept Formation in Empirical Science (1952). Hempel moved to Princeton in 1955, where his research program flourished and his influence upon professional philosophers became immense.
During his two decades at Princeton, Hempel's approach dominated the philosophy of science. His major articles during this interval included “The Theoretician's Dilemma” (1958), “Inductive Inconsistencies” (1960), and “Deductive-Nomological vs. Statistical Explanation”, “Explanation in Science and in History”, and “Rational Action” (all in 1962). A classic collection of his studies, Aspects of Scientific Explanation (1965c), became a scholar's bible for generations of graduate students. His introductory text, Philosophy of Natural Science (1966a), would be translated into ten languages. Other articles he published thereafter included “Recent Problems of Induction” (1966b) and “Maximal Specificity and Lawlikeness in Probabilistic Explanation” (1968) as well as a series of studies that included “On the ‘Standard Conception’ of Scientific Theories” (1970).
At the University of Pittsburgh following his mandatory retirement from Princeton in 1973, he continued to publish significant articles, including studies of the nature of scientific rationality, “Scientific Rationality: Analytic vs. Pragmatic Perspectives” (1979), and “Turns in the Evolution of the Problem of Induction” (1981), and on the structure of scientific theories, including, most importantly, “Limits of a Deductive Construal of the Function of Scientific Theories” (1988a) as well as “Provisos: A Problem Concerning the Inferential Function of Scientific Theories” (1988b), further enhancing his reputation by his willingness to reconsider earlier positions. After his death in 1997, new collections of his papers appeared (Jeffrey 2000 and Fetzer 2001), which complemented studies of his research (Rescher 1969, Esler 1985, Kitcher and Salmon 1989, and Fetzer 2000b).
However surprising it may initially seem, contemporary developments in the philosophy of science can only be properly appreciated in relation to the historical background of logical positivism. Hempel himself attained a certain degree of prominence as a critic of this movement. Language, Truth and Logic (1936; 2nd edition, 1946), authored by A. J. Ayer, offers a lucid exposition of the movement, which was—with certain variations—based upon the analytic/synthetic distinction, the observational/theoretical distinction, and the verifiability criterion of meaningfulness. A fundamental desideratum motivating its members was to establish standards for separating genuine questions for which answers might be found from pseudo questions for which no answers could be found.
According to the first principle, sentences are analytic relative to a language framework L when their truth follows from its grammar and vocabulary alone. In English, “Bachelors are unmarried” cannot be false, since “bachelor =df unmarried, adult male”. Sentences of this kind make no claims about the world, but instead reflect features of the linguistic framework as syntactical or semantic truths in L. And sentences are synthetic when they make claims about the world. Their truth in L does not follow from its grammar and vocabulary alone but hinges upon properties of the world and its history. According to logical positivism, all such claims about the world have to be evaluated on the basis of experience, which means the kind of knowledge they display is a posteriori. But kinds of knowledge whose truth can be established independently of experience are a priori.
Logical positivism affirmed that, given a language L, all a priori knowledge is analytic and all synthetic knowledge is a posteriori, thus denying the existence of knowledge that is both synthetic and a priori. Indeed, the denial of the existence of synthetic a priori knowledge is commonly assumed to define the position known as “Empiricism”, while the affirmation of its existence defines “Rationalism”. Figure 1 thus reflects the intersection of kinds of sentences and kinds of knowledge on the Empiricist approach:
|  | A Priori Knowledge | A Posteriori Knowledge |
| --- | --- | --- |
| Synthetic Sentences | No | Yes |
| Analytic Sentences | Yes | ? |
Figure 1. The Empiricist Position
The category for sentences that are analytic and yet represent a posteriori knowledge deserves discussion. The empirical study of the use of language within language-using communities by field linguists involves establishing the grammar and the vocabulary employed within each such community. Their empirical research yields theories of the languages, L, used in those communities and affords a basis for distinguishing between which sentences are analytic-in-L and which are synthetic-in-L. The kind of knowledge they acquire about specific sentences based on empirical procedures thus assumes the form, “Sentence S is analytic-in-L”, when that is true of sentence S, which is a posteriori.
One of Hempel's early influential articles was a defense of logicism, according to which mathematics—with the notable exception of geometry, which he addressed separately—can be reduced to logic (for Hempel, including set theory) as its foundation (Hempel 1945c). Mathematics thus becomes an exemplar of analytic a priori knowledge. Two subtheses should be distinguished: (i) that all mathematical concepts can be defined by means of basic logical concepts; and (ii) that all mathematical theorems can be deduced from basic logical truths. What distinguishes logicism from formalism, however, is that the former maintains that there is one system of logic that is fundamental to all inquiries, where all mathematical terms are reducible to logical terms and all mathematical axioms are derivable from logical ones, which formalism denies (Rech 2004).
The tenability of logicism has been disputed on multiple grounds, the most prominent of which has been that the notion of membership fundamental to the theory of sets is not a logical notion but rather a symbol that must be added to first-order logic to formalize what is properly understood as a non-logical theory. Nor would philosophers today accept the conception of the axioms of set theory as logical axioms, since there exist alternatives. So even if mathematics were reducible to set theory, these considerations undermine Hempel's claim that mathematics is thereby reducible to logic (cf. Benacerraf 1981 and Linsky and Zalta 2006, which provides an extensive bibliography). Hempel's views about geometry, in retrospect, thus appear to have been the better founded.
The analytic/synthetic distinction and the observational/theoretical distinction were tied together by the verifiability criterion of meaningfulness, according to which, in relation to a given language, L, a sentence S is meaningful if and only if it is either analytic-in-L or synthetic-in-L as an observation sentence or a sentence whose truth follows from a finite set of observation sentences. By this standard, sentences that are non-analytic but also non-verifiable, including various theological or metaphysical assertions concerning God or The Absolute, qualify as cognitively meaningless. This was viewed as a desirable result. But, as Hempel would demonstrate, its scope was far too sweeping, since it also rendered meaningless the distinctively scientific assertions made by laws and theories.
From an historical perspective, logical positivism represents a linguistic version of the empiricist epistemology of David Hume (1711–76). It refines his crucial distinctions of “relations between ideas” and “matters of fact” by redefining them relative to a language L as sentences that are analytic-in-L and synthetic-in-L, respectively. His condition that significant ideas are those which can be traced back to impressions in experience that gave rise to them now became the claim that synthetic sentences have to be justified by derivability from finite classes of observation sentences. Hume applied this criterion to exclude the idea of necessary connections, which are not observable, from significant causal claims, which were thereby reduced to relations of regular association, spatial contiguity, and temporal succession. And logical positivism followed Hume's lead.
Empiricism historically stands in opposition to Rationalism, which is represented most prominently by Immanuel Kant, who argued that the mind, in processing experiences, imposes certain properties on whatever we experience, including what he called Forms of Intuition and Categories of Understanding. The Forms of Intuition impose Euclidean spatial relations and Newtonian temporal relations; the Categories of Understanding require objects to be interpreted as substances and causes as inherently deterministic. Several developments in the history of science, such as the emergence of the theory of relativity and of quantum mechanics, undermine Kant's position by introducing the role of frames of reference and of probabilistic causation. Newer versions are associated with Noam Chomsky and with Jerry Fodor, who have championed the ideas of an innate syntax and innate semantics, respectively (Chomsky 1957; Fodor 1975; Chomsky 1986).
Indeed, according to the computational theory of the mind, human minds, like computing machines, are special kinds of formal systems. Since deviations from formal systems of language in practice can be considerable, Chomsky introduced a distinction between competence and performance, where the former models the formal system and various explanations are advanced for deviations from that model in practice. The situation is similar to the differences between the fall of bodies in a vacuum and in air, and it raises questions about testability that parallel those for scientific theories in general. If languages are not best understood as formal systems, however, or if syntax and semantics are not innate, then Chomsky and Fodor's views are as vulnerable as those of Kant. If syntax is an emergent property of semantic complexity, for example, then grammar is not innate; and if mentality has evolved and continues to evolve, Chomsky and Fodor are wrong (Schoenemann 1999, Fetzer 2005).
In his study of formal systems for geometry (Hempel 1945b), Hempel discusses the existence of alternatives based upon different axioms, which differentiate Euclidean geometry from its non-Euclidean rivals. According to Euclid, for example, the sum of the interior angles of a triangle must equal 180° and, in relation to a point separate from a given line, one and only one parallel line passes through it. The alternatives advanced by Lobachevsky (hyperbolic) and by Riemann (elliptical), however, which represent the surface of a saddle and of a sphere, respectively, violate both of those conditions, albeit in different ways. Hempel emphasized that all three, as formal systems, are on a par, where the most appropriate choice to describe the geometry of space depends on the outcome of empirical studies. As it happened, Einstein would adopt a generalized form of Riemannian geometry in his general theory of relativity.
Hempel accordingly drew a distinction of fundamental importance between pure and applied mathematics, which he emphasized by using a quotation from Einstein, who had observed, “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality”. The existence of alternative and incompatible formal systems, moreover, appears to affect Hempel's defense of logicism from another direction. If mathematics is supposed to be reducible to logic and logic is supposed to be consistent, then how can alternative geometries be consistently reducible to logic? No one would dispute that they exist as distinct formal systems with their own axioms and primitives, but if these geometries are jointly reducible to logic only if logic is inconsistent, their existence suggests that, perhaps, as formalism claims, it is not the case there is one system of logic that is fundamental to all inquiries.
2.1.1 Quine's Complaints
The analytic/synthetic distinction took a decided hit when the noted logician, Willard van Orman Quine, published “Two Dogmas of Empiricism” (1953), challenging its adequacy. Quine argued that the notion of analyticity presupposes the notion of synonymy-in-use, which in turn presupposes understanding inter-substitutability-while-preserving-truth. He claimed that none of these notions can be understood independently of the others, creating a circular relation among them. Thus, matching up a definiens with a definiendum could only be done if we already understood that the definiens specifies the meaning of the word that is being defined—a variation of “the paradox of analysis”, according to which we either already know the meaning of words (in which case analysis is unnecessary) or we do not (in which case it is impossible). The idea of analyticity appeared to have been deposed.
The paper created a sensation and has been among the most influential philosophical articles of the past 100 years. But Quine explicitly allowed for the existence of the class of logical truths (such as “No unmarried man is married”) as narrowly analytic sentences and their relationship to broadly analytic sentences (such as “No bachelor is married”), when the definitional relationship between them has been suitably stipulated (as in “bachelor =df unmarried, adult male” in a language framework L). In cases of this kind, he conceded, inter-substitutability, synonymy, and analyticity are related in an unproblematic way. It would have been odd for a logician to deny the existence of logical truths or the role of stipulations, which are basic to the construction of formal systems, which suggests that he may not have actually defeated the idea of analyticity, after all (Fetzer 1990, 1993).
Indeed, Carnap (1939) had explained that the process of constructing a vocabulary and a grammar for a language-in-use involves several stages, including observation of the use of language by members of the community, formulating hypotheses regarding the meaning of its phrases and expressions, and drawing inferences about the underlying grammar. These are pragmatic, semantic, and syntactical procedures, respectively, and decisions have to be made in arriving at a theory about the language as the outcome of empirical research. The construction of formal systems thus provides an illustration of the elements of artificial languages, where accounts of natural language counterparts can be subject to further testing and refinement. The linguistic practices of a specific community can thus be modeled and thereby overcome Quine's professed objections.
2.1.2 Hempel's Concerns
Moreover, in Fundamentals of Concept Formation in Empirical Science (1952), Hempel had endorsed explication as a method of definition analogous to theory construction by taking words and phrases that are somewhat vague and ambiguous and subjecting them to a process of clarification and disambiguation. Adequate explications are required to satisfy criteria of syntactical determinacy, semantic relevance, and pragmatic benefit (by clarifying and illuminating the meaning of those words and phrases in specific contexts). The word “probability” is vague and ambiguous, for example, but various contexts of its usage can be distinguished and theories of evidential support, relative frequency, and causal propensity can be advanced. Hempel, following Carnap (1950), had accordingly advanced a methodology that dealt with the paradox of analysis before “Two Dogmas”.
About the same time, however, Hempel found reasons of his own for doubting that the analytic/synthetic distinction was tenable, which surfaced in his study of dispositional predicates, such as “malleable,” “soluble” and “magnetic”. They designate, not directly observable properties, but (in the case of “magnetic”) tendencies on the part of some kinds of things to display specific reactions (such as attracting small iron objects) under suitable conditions (such as the presence of small iron objects in the vicinity). It is very tempting to define them using the material conditional, “___ ⊃ …”, by definitions like,
(D1) x is magnetic at t =df if, at t, a small iron object is close to x, then it moves toward x
which could then be formalized by means of the horseshoe and suitable abbreviations,
(D2) Mxt =df Sxt ⊃ Txt*
where “t*” is equal to or later than “t” and reflects that effects brought about by causes entail changes across time. Since the material “if ___ then …” is equivalent to “either not-___ or …”, however, the meaning of (D2) turns out to be logically equivalent to,
(D3) Mxt =df ¬Sxt ∨ Txt*
which means that anything not subject to the test, such as a brown cow, is magnetic. Hempel acknowledged that the use of the subjunctive conditional, say, “___ → …”, formalizing what would be the case … if something ___ were the case, in this case,
(D4) Mxt =df Sxt → Txt*
“if, at t, a small iron object were close to x, then it would move toward x at t*” (which assumes the satisfaction of the test condition) would avoid the problem, because these conditionals take for granted that their antecedents are satisfied (that “Sxt” is true). But while acknowledging the importance of subjunctive conditionals for an understanding of both counterfactual conditionals and lawlike sentences, Hempel regarded their explication as not fully satisfactory and their use as “a program, rather than a solution” (Hempel 1952, p. 25).
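The defect in the material-conditional reading of (D2)/(D3) can be checked mechanically. Below is a minimal truth-functional sketch in Python; the function and variable names are hypothetical stand-ins for “Sxt” (the test condition) and “Txt*” (the response):

```python
# Material-conditional definition (D2)/(D3): "x is magnetic at t" is defined
# as "Sxt -> Txt*", which is truth-functionally equivalent to "not-Sxt or Txt*".
def magnetic_material(S: bool, T: bool) -> bool:
    """S: a small iron object is close to x at t; T: it moves toward x at t*."""
    return (not S) or T

# A brown cow that has never been near an iron object: S is false, T is false.
# The material conditional is then vacuously true, so the cow counts as
# "magnetic" -- the counterintuitive consequence Hempel pointed out.
print(magnetic_material(False, False))  # -> True
```

The vacuous truth of the conditional whenever the test is never performed is exactly why Hempel judged (D2) inadequate as a definition.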
He therefore adopted a method from Carnap to overcome this difficulty, where, instead of attributing the property in question to anything not subject to the test, the predicate is partially defined by means of a reduction sentence, such as “if, at t, a small iron object is close to x, then x is magnetic at t if and only if it moves toward x at t*” or symbolically,
(D5) Sxt ⊃ (Mxt ≡ Txt*)
where a biconditional, “___ ≡ …”, is true when “___” and “…” have the same truth value and otherwise is false. This solved one problem by abandoning the goal of fully defining the predicate in favor of a partial specification of its meaning. But it created another, insofar as, if there should be more than one test/response pair for a property—such as, “if x moves through a closed wire loop at t, then x is magnetic at t if and only if an electric current flows in the loop at t*”—then in conjunction they jointly imply that any object x that is near small iron objects and moves through a closed wire loop will generate a current in the loop if and only if it attracts those iron objects. But this no longer has the character of even a partial definition but instead that of an empirical law. The prospect that analytic sentences might have synthetic consequences was not a welcome result (Hempel 1952).
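That two reduction sentences jointly entail a synthetic claim can be verified by brute force over all truth-value assignments; a sketch, with hypothetical variable names (S1/T1 for the iron-object test and response, S2/T2 for the wire-loop test and response):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional p -> q."""
    return (not p) or q

# Two reduction sentences partially defining "magnetic" (M):
#   R1: S1 -> (M <-> T1)   (iron-object test)
#   R2: S2 -> (M <-> T2)   (wire-loop test)
# Whenever both reduction sentences hold, it follows that if both tests are
# performed, the responses must agree: (S1 & S2) -> (T1 <-> T2).
for M, S1, T1, S2, T2 in product([True, False], repeat=5):
    R1 = implies(S1, M == T1)
    R2 = implies(S2, M == T2)
    if R1 and R2:
        assert implies(S1 and S2, T1 == T2)

print("R1 and R2 jointly entail (S1 & S2) -> (T1 <-> T2)")
```

The entailed conditional mentions only test conditions and responses, not “magnetic” itself, which is why it reads as an empirical law rather than a (partial) definition.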
2.1.3 Intensional Methodology
Carnap (1963) was receptive to the adoption of an intensional methodology that went beyond the constraints of extensional logic, which Hempel (1965b) would consider but leave for others to pursue (Fetzer 1981, 1993). The distinction can be elaborated with respect to the difference between the actual world and alternative possible worlds as sequences of events that diverge from those that define the history of the actual world. If specific conditions that obtained at a specific time had been different, for example, the course of ensuing events would have changed. These intensional methods can be applied to the problems of defining dispositions and the nature of laws by employing descriptions of possible worlds as variations on the actual, not as alternatives that are “as real as” the actual—as David Lewis (2001a, 2001b) has proposed—but as a means for formally specifying the semantic content of subjunctives and counterfactuals (where counterfactuals are subjunctives with false antecedents), using an alternative calculus.
There appear to be two broad kinds of justification for subjunctive conditionals, logical and ontological, where logical justifications are derived from the grammar and vocabulary of a specific language, such as English. The subjunctive, “If John were a bachelor, then John would be unmarried”, for example, follows from the definition of “bachelor” as “unmarried, adult male”. Analogously, the subjunctive, “If this were gold, then it would be malleable”, could be justified on ontological grounds if being malleable were (let us call it) a permanent attribute of being gold as a reference property, even though that is not a logical consequence of the definition of “gold”. Attributes are “permanent” when there is no process or procedure, natural or contrived, by means of which things having the reference property could lose those attributes except by no longer possessing the reference property itself (Fetzer 1977). The approach appeals to necessary connections, which are unobservable and therefore unacceptable to Hume. As we shall discover, however, that they are unobservable does not mean they are empirically untestable.
The elaboration of a possible-worlds formal semantics that might be suitable for this purpose, however, requires differentiating between familiar minimal-change semantics, where the world remains the same except for specified changes, and a maximal-change semantics, in which everything can change except for specified properties, which is the ingredient that seems to be required to satisfy the constraints of scientific inquiries as opposed to conversational discourse (Fetzer and Nute 1979 and 1980). In the 1950s and 60s, however, Nelson Goodman (1955) and Karl Popper (1965) were attempting to sort out the linkage between dispositions, subjunctives, and laws from distinctive points of view. Hempel's commitments to extensional logic and to Humean constraints would lead him to endorse an account of laws that was strongly influenced by Goodman and to embrace a pragmatic account that was both epistemically and contextually dependent.
While the analytic/synthetic distinction appears to be justifiable in modeling important properties of languages, the observational/theoretical distinction does not fare equally well. Within logical positivism, observation language was assumed to consist of names and predicates whose applicability or not can be ascertained, under suitable conditions, by means of direct observation (such as using names and predicates for colors, shapes, sounds) or relatively simple measurement (names and predicates for heights, weights, and sizes, for example). This was an epistemic position, of course, since it was sorting them out based upon their accessibility by means of experience. Both theoretical and dispositional predicates, which refer to non-observables, posed serious problems for the positivist position, since the verifiability criterion implies they must be reducible to observables or are empirically meaningless. Karl Popper (1965, 1968), however, would carry the argument in a different direction by looking at the ontic nature of properties.
Popper has emphasized that we are theorizing all the time. Consider the observation of a glass full of clear liquid. Suppose it's water. Then it quenches thirst and extinguishes fires and nourishes plants. But what if it's alcohol instead? Just describing it as “water” entails innumerable subjunctives about the kinds of responses it would display under a wide variety of test conditions. They are the would be's of things of that kind. Consider the differences between basketballs, billiard balls, and tennis balls. Things of different kinds can do different things. Even the seemingly simplest observation of a rabbit in the backyard, for example, implies that it is going to display rabbit-like behavior, including eating carrots when my wife puts them out. It is going to hop around and create more rabbits. If it's a rabbit, it is going to have rabbit DNA. It will not turn out to be stuffed. And this suggested that observational properties and predicates are dispositional, too.
From the Humean epistemic perspective, observational, dispositional, and theoretical predicates are successively more and more problematical in relation to their accessibility via experience. The observational describe observable properties of observable entities; the dispositional, unobservable properties of observable entities; and the theoretical, unobservable properties of unobservable entities. Popper suggested that observational and theoretical properties (gravitational strengths, electromagnetic fields, and such) are ontologically dispositional, too (Popper 1965, 425). But if universals as properties that can be attributed to any member of any possible world are dispositions and the kind of property dispositions are does not depend upon the ease with which their presence or absence can be ascertained, then nomological subjunctives and counterfactuals—taken as instantiations of lawlike generalizations for specific individuals, places, and times—might be explicable as displays of dispositions and of natural necessities (Fetzer 1981).
Hempel (1950, 1951), meanwhile, demonstrated that the verifiability criterion could not be sustained. Since it restricts empirical knowledge to observation sentences and their deductive consequences, scientific theories are reduced to logical constructions from observables. In a series of studies about cognitive significance and empirical testability, he demonstrated that the verifiability criterion implies that existential generalizations are meaningful, but that universal generalizations are not, even though they include general laws, the principal objects of scientific discovery. Hypotheses about relative frequencies in finite sequences are meaningful, but hypotheses concerning limits in infinite sequences are not. The verifiability criterion thus imposed a standard that was too strong to accommodate the characteristic claims of science and was not justifiable.
Indeed, on the assumption that a sentence S is meaningful if and only if its negation is meaningful, Hempel demonstrated that the criterion produced consequences that were counterintuitive if not logically inconsistent. The sentence, “At least one stork is red-legged,” for example, is meaningful because it can be verified by observing one red-legged stork; yet its negation, “It is not the case that even one stork is red-legged,” cannot be shown to be true by observing any finite number of red-legged storks and is therefore not meaningful. Assertions about God or The Absolute were meaningless by this criterion, since they are not observation statements or deducible from them. They concern entities that are non-observable. That was a desirable result. But by the same standard, claims that were made by scientific laws and theories were also meaningless.
Indeed, scientific theories affirming the existence of gravitational attractions and of electromagnetic fields were thus rendered comparable to beliefs about transcendent entities such as an omnipotent, omniscient, and omni-benevolent God, for example, because no finite sets of observation sentences are sufficient to deduce the existence of entities of those kinds. These considerations suggested that the logical relationship between scientific theories and empirical evidence cannot be exhausted by means of observation sentences and their deductive consequences alone, but needs to include observation sentences and their inductive consequences as well (Hempel 1958). More attention would now be devoted to the notions of testability and of confirmation and disconfirmation as forms of partial verification and partial falsification, where Hempel would recommend an alternative to the standard conception of scientific theories to overcome otherwise intractable problems with the observational/theoretical distinction.
The need to dismantle the verifiability criterion of meaningfulness together with the demise of the observational/theoretical distinction meant that logical positivism no longer represented a rationally defensible position. At least two of its defining tenets had been shown to be without merit. Since most philosophers believed that Quine had shown the analytic/synthetic distinction was also untenable, moreover, many concluded that the enterprise had been a total failure. Among the important benefits of Hempel's critique, however, was the production of more general and flexible criteria of cognitive significance in Hempel (1965b), included in a famous collection of his studies, Aspects of Scientific Explanation (1965d). There he proposed that cognitive significance could not be adequately captured by means of principles of verification or falsification, whose defects were parallel, but instead required a far more subtle and nuanced approach.
Hempel suggested multiple criteria for assessing the cognitive significance of different theoretical systems, where significance is not categorical but rather a matter of degree:
Significant systems range from those whose entire extralogical vocabulary consists of observation terms, through theories whose formulation relies heavily on theoretical constructs, on to systems with hardly any bearing on potential empirical findings. (Hempel 1965b, 117, italics added)
The criteria Hempel offered for evaluating the “degrees of significance” of theoretical systems (as conjunctions of hypotheses, definitions, and auxiliary claims) were (a) the clarity and precision with which they are formulated, including explicit connections to observational language; (b) the systematic—explanatory and predictive—power of such a system, in relation to observable phenomena; (c) the formal simplicity of the systems with which a certain degree of systematic power is attained; and (d) the extent to which those systems have been confirmed by experimental evidence (Hempel 1965b). The elegance of Hempel's study laid to rest any lingering aspirations for simple criteria of cognitive significance and signaled the demise of logical positivism as a philosophical movement.
Precisely what remained, however, was in doubt. Presumably, anyone who rejected one or more of the three principles defining positivism—the analytic/synthetic distinction, the observational/theoretical distinction, and the verifiability criterion of significance—was not a logical positivist. The precise outlines of its philosophical successor, which would be known as “logical empiricism”, were not entirely evident. Perhaps this study came the closest to defining its intellectual core. Those who accepted Hempel's four criteria and viewed cognitive significance as a matter of degree were members, at least in spirit. But some new problems were beginning to surface with respect to Hempel's covering-law explication of explanation and old problems remained from his studies of induction, the most remarkable of which was known as “the paradox of confirmation”.
Hempel's most controversial argument appeared in an article about induction entitled “Studies in the Logic of Confirmation” (1945a), where he evaluates the conditions under which an empirical generalization would be confirmed or disconfirmed by instances or non-instances of its antecedent and consequent. He focused on universally quantified material conditionals, exemplified by sentences of the form, “(x)(Rx ⊃ Bx)”. With “Rx” interpreted as “x is a raven” and “Bx” as “x is black”, this schema represents, in first-order symbolic logic, the claim, “All ravens are black”. He also considered sentences of more complex logical structures, but nothing hinges upon their use that cannot be addressed relative to an example of the simplest possible kind. And, indeed, Hempel took sentences of this kind as exemplars of “lawlike sentences”, which combine purely universal form with what he called purely qualitative predicates. So lawlike sentences that are true as extensional generalizations are “laws” (Hempel and Oppenheim 1948).
Hempel applied “Nicod's criterion” to this example, where Nicod had proposed that, in relation to conditional hypotheses, instances of their antecedents that are also instances of their consequents confirm them; instances of their antecedents that are not instances of their consequents disconfirm them; and non-instantiations of their antecedents are neutral, neither confirming nor disconfirming. Applied to the raven hypothesis, this means that, given a thing named “c”, the truth of “Rc” and “Bc” confirms it; the truth of “Rc” and “¬Bc” disconfirms it; and the truth of “¬Rc” neither confirms nor disconfirms it, but remains evidentially neutral, regardless of the truth value of “Bc”. To these highly intuitive conditions, Hempel added that, since logically equivalent hypotheses have the same empirical content, whatever confirms one member of a set of logically equivalent hypotheses must also confirm the others, which he called “the equivalence condition”.
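Nicod's three-way classification can be rendered as a simple decision procedure. The following Python sketch is purely illustrative (the function name and boolean encoding are our own, not Hempel's or Nicod's); it classifies an observed object relative to the hypothesis “All ravens are black”:

```python
def nicod_classification(is_raven: bool, is_black: bool) -> str:
    """Classify an observation relative to "All ravens are black"
    according to Nicod's criterion (illustrative encoding)."""
    if is_raven and is_black:
        return "confirms"      # antecedent and consequent both instantiated
    if is_raven and not is_black:
        return "disconfirms"   # antecedent instantiated, consequent not
    return "neutral"           # antecedent not instantiated at all

# A black raven confirms; a white raven disconfirms;
# anything that is not a raven is evidentially neutral.
print(nicod_classification(True, True))
print(nicod_classification(True, False))
print(nicod_classification(False, False))
```

On this rendering, the truth value of the consequent plays no role whenever the antecedent is uninstantiated, exactly as Nicod's neutrality clause requires.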
However intuitive these conditions may appear, Hempel proceeded to demonstrate that together they create a paradox. By Nicod's criterion, “(x)(Rx ⊃ Bx)” is confirmed by ravens that are black. But, by that same standard, “(x)(¬Bx ⊃ ¬Rx)” is confirmed by non-black non-ravens, such as white shoes! Since these are logically equivalent and have the same empirical content, they must be confirmed or disconfirmed by all and only the same instances. This means that—no matter how counter-intuitive—the lawlike hypothesis, “All ravens are black”, is confirmed by observations of white shoes! Since these hypotheses are also equivalent to “(x)(¬Rx ∨ Bx)”, which asserts that everything either is not a raven or is black, observations of non-ravens confirm the hypothesis, regardless of their color. Observations of non-ravens, however, unlike those of black ravens, also confirm alternative hypotheses, such as “All ravens are blue” and “All ravens are green”. Hempel's point was that applying Nicod's criterion together with the equivalence condition means that, since even observations of non-ravens are confirmatory, the class of neutral instances in fact has no members.
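The logical equivalences that drive the paradox can be checked mechanically. The following Python sketch (an illustration of standard truth-functional logic, not anything in Hempel's text) enumerates all truth-value assignments and verifies that the material conditional Rx ⊃ Bx, its contrapositive ¬Bx ⊃ ¬Rx, and the disjunction ¬Rx ∨ Bx agree in every case:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: p ⊃ q is false only when p is true and q false."""
    return (not p) or q

# Check all four combinations of truth values for R and B.
for r, b in product([True, False], repeat=2):
    direct = implies(r, b)                  # Rx ⊃ Bx
    contrapositive = implies(not b, not r)  # ¬Bx ⊃ ¬Rx
    disjunction = (not r) or b              # ¬Rx ∨ Bx
    assert direct == contrapositive == disjunction
print("All three formulations are truth-functionally equivalent.")
```

Since the three formulations never diverge in truth value, extensional logic provides no basis for treating their instances as differing in evidential relevance, which is precisely what generates the paradox.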
Few papers in the philosophy of science have produced such a voluminous literature. In a “Postscript”, Hempel addressed a few of the suggestions that had been advanced to analyze the paradoxical quality of the argument (Hempel 1965a). Several were intent upon explaining the paradoxes quantitatively, on the ground that, for example, there are many more non-black things than black things, or that the probability of being non-black is much greater than the probability of being a raven, whereas others appealed to Bayesian principles and suggested that the prior probability of ravens being black makes testing non-ravens of considerably less relative risk. Hempel replied that even the existence of a quantitative measure of evidential support poses no challenge to his conclusion that the paradoxical cases—non-black non-ravens, such as white shoes—are confirmatory.
Hempel acknowledges that an explanation for why the paradoxical cases appear to be non-confirmatory may have something to do with fashioning hypotheses about classes that are affected by their relative size. Since the class of non-ravens is so much larger than the class of ravens, where what we are interested in, by hypothesis, is the color of ravens, instances of non-black non-ravens might count as confirmatory but to a lesser degree than instances of black ravens. He allows the possibility that perhaps the theory of confirmation should proceed quantitatively, which might provide a less perplexing basis for assessments of this kind. But he steadfastly maintains that the consequences he identified following from the application of the principles he employed are logically impeccable, no matter how psychologically surprising they may seem (Hempel 1960).
The most important claim of Hempel (1965a) is that confirmation cannot be adequately defined by linguistic means alone. Here he cites Goodman (1955) to demonstrate that some hypotheses of the form, “(x)(Fx ⊃ Gx)”, are not confirmable even by instances of the kind “Fc” and “Gc”. If “Rx” stands for “x is a raven” and “Bx” for “x is blite” (where x is blite when it has been examined before time t and is black or has not been examined before t and is white), then any raven observed before t and found to be black confirms the hypothesis, “(x)(Rx ⊃ Bx)”; yet this hypothesis implies that all ravens not examined before t are white, a consequence that, in Hempel's language, “must surely count as disconfirmed rather than as confirmed”. And he endorses Goodman's suggestion that whether a universal conditional is capable of being confirmed by its positive instances turns out to depend upon the character of its constituent predicates and their past use.
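Goodman-style predicates such as “blite” can be stated with precision. Here is an illustrative Python rendering (the encoding of observation records, the variable `T`, and the function name are our own): an object is blite just in case it is examined before the critical time t and is black, or is not examined before t and is white. Every black raven examined before t is a positive instance of “All ravens are blite”, yet the hypothesis commits us to unexamined ravens being white:

```python
from typing import Optional

T = 100  # the critical time t (an arbitrary value, for illustration)

def is_blite(color: str, examined_at: Optional[int]) -> bool:
    """x is blite iff x is examined before t and black,
    or not examined before t and white (Goodman-style predicate)."""
    examined_before_t = examined_at is not None and examined_at < T
    if examined_before_t:
        return color == "black"
    return color == "white"

# A black raven examined before t satisfies "blite" ...
assert is_blite("black", examined_at=50)
# ... but a raven not examined before t satisfies it only if white:
assert is_blite("white", examined_at=None)
assert not is_blite("black", examined_at=None)
```

The time-indexed clause is what disqualifies “blite” as purely qualitative: its satisfaction conditions change character at a specific moment in the world's history.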
Goodman (1955) draws a distinction between universal conditional generalizations that can be confirmed by their instances and those that cannot, where the former are said to be “projectible”. Those that can be projected from examined cases to unexamined ones are those that have a history of past projections. Thus, the predicate “black” has many past projections, but the predicate “blite” does not. Since a history of past projections enhances the projectibility of predicates only when they are successful, the measure of a predicate's degree of projectibility, which Goodman calls its degree of entrenchment, is the relative frequency of its successful use in past predictions. Hume observed that, no matter how consistently a regularity has held in the past, that provides no guarantee it will continue to hold in the future. Goodman nevertheless adopted the past as a guide, which qualifies as his solution to the linguistic version of Hume's problem of induction.
Since Goodman is offering a pragmatic solution for (what most would assume to be) a syntactical and semantical problem, it may be useful to consider whether or not there might be a more promising approach. In embracing Goodman's approach, Hempel was modifying his conception of lawlike sentences as extensional generalizations which are restricted to purely qualitative predicates which make no mention of specific individuals, specific places or specific times, but which might, in special cases, reference samples or exemplars, such as the standard meter or the atomic clock (Hempel 1965d, 269, where he also mentions Popper's notion of universal predicates). Goodman's account does not actually capture the concept of laws but the rather restricted notion of hypotheses that have been projected and are supposed to be laws. Laws themselves, after all, exist even when they have not yet been discovered, as in the case of Archimedes' principle before Archimedes, Snell's law before Snell, and Newton's before Newton. A more promising approach lay in the direction of universals in Popper's sense, which are dispositions with subjunctive force, where laws can be characterized as unrestricted intensional conditionals.
Popper (1965, 1968) championed falsifiability as a criterion of demarcation that is more appropriate than verifiability as a criterion of meaningfulness, on the ground that what we need is a basis for distinguishing scientific from nonscientific statements, where the latter can still be meaningful, even when they are not scientific. He suggested that laws should be understood as having the force of prohibitions, which are empirically testable by attempts to falsify them. And he observed there are no “paradoxes of falsification” to parallel the “paradoxes of confirmation”. Even in the case of material conditionals, the only falsifying instances are those that combine the truth of their antecedents with the falsity of their consequents. Relative to “(x)(Rx ⊃ Bx)”, for example, the only potential falsifiers are instances of “Rx” that are instances of “¬Bx”, and similarly for “(x)(¬Bx ⊃ ¬Rx)” and for “(x)(¬Rx ∨ Bx)”, not to mention its subjunctive counterpart, “(x)(Rx → Bx)”. The absence of paradox suggested that Popper's approach to problems of demarcation and of cognitive significance might possess substantial advantages over the alternatives.
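Popper's observation that there are no “paradoxes of falsification” can also be verified mechanically. In this illustrative Python sketch (our own encoding, not Popper's), we search each of the three logically equivalent formulations for falsifying instance types and find that all three are falsified by exactly one combination, a non-black raven:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: p ⊃ q is false only when p is true and q false."""
    return (not p) or q

# Three logically equivalent formulations of "All ravens are black".
formulations = {
    "Rx ⊃ Bx":   lambda r, b: implies(r, b),
    "¬Bx ⊃ ¬Rx": lambda r, b: implies(not b, not r),
    "¬Rx ∨ Bx":  lambda r, b: (not r) or b,
}

for name, f in formulations.items():
    falsifiers = [(r, b) for r, b in product([True, False], repeat=2)
                  if not f(r, b)]
    # The only falsifying instance type is a raven that is not black.
    assert falsifiers == [(True, False)], name
print("Each formulation is falsified only by a non-black raven.")
```

Unlike confirmation under Nicod's criterion, falsification treats all three formulations symmetrically, which is the asymmetry Popper exploited.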
Hempel's most important contributions to the theory of science have been a series of explications of the structure of scientific explanations. His methodology was such that, following the preliminary examination of the linguistic and physical phenomena under consideration, he would advance a semi-formal characterization, one which he would subsequently subject to formal characterization using the resources of symbolic logic. The overarching theme of his work was the conception of explanation by subsumption, where specific events are subsumed by corresponding laws (of physics, of chemistry, of biology, and so forth). The conception of explanation by subsumption is rather ancient in its lineage, but Hempel advanced explicit formulations that drew distinctions between explanations of different kinds, especially those that invoke universal (or deterministic) laws, statistical (or probabilistic) laws, and the explanation of laws by means of theories.
Hempel implemented the conception of subsumption by presuming that explanations explain the occurrence of singular events by deriving their descriptions from premises that include at least one lawlike sentence, which thereby displays (what he called) their nomic expectability. In the simplest cases, explanations assume the following form:
Premises:    L1, L2, …, Lk
             C1, C2, …, Cr
Conclusion:  E
Figure 2. An Explanation Schema
Thus, in relation to Figure 2, C1, C2, …, Cr describe specific conditions (referred to as “initial” or “antecedent”) and L1, L2, …, Lk general laws, where “E” describes the event to be explained. The explanation takes the form of an inductive or deductive argument, where the premises are called the “explanans” and the conclusion the “explanandum”.
Richard Jeffrey (1969) noted that Hempel's conception harmonizes well with Aristotle's definition of (what he called) unqualified scientific knowledge, “[where] the premises of demonstrated knowledge must be true, primary, immediate, better known than and prior to the conclusion, which is further related to them as effect to cause” (Posterior Analytics 1.71–2). A potentially even more illuminating comparison emerges from the perspective of Aristotle's theory of the four causes, where the laws are the material cause, the antecedent conditions the efficient cause, the logical relation between the explanans and the explanandum the formal cause, and the explanandum the final cause (Fetzer 2000a, 113–114). There were important differences in their conceptions of law, however: Aristotle's general premises were definitional, and hence necessary, whereas Hempel's were not.
For Aristotle, the general premises of scientific explanations are generalizations that describe commensurately universal properties of things of a subject kind, K. Merely universal properties are ones that everything of kind K has but could be without and remain a thing of that kind, just as every Honda might have Michelin tires. Aristotle referred to such properties as “accidental”. Commensurately universal properties are ones that belong to everything of the kind as necessary attributes that they cannot be without. A triangle, for example, necessarily has three sides and three angles. Aristotle referred to them as “essential”. Because generalizations about the essential properties of things of a kind are “proper definitions”, they provide the basis for explanations that qualify as analytic. Hempel encompassed analytic generalizations within the scope of “fundamental laws” as defined in Hempel and Oppenheim (1948), but he focused on those that were synthetic.
Analytic explanations are common in the currents of daily life, especially in the context of explaining how you “know that” something is the case. A mother might explain to her single daughter that she knows that John must be unmarried, because a friend told her that John is a bachelor. Similar cases of analytic explanations can occur in scientific contexts, such as knowing that the element they are dealing with is gold because it has atomic number 79, when gold is defined by its atomic number. Knowing why John is a bachelor, however, is another matter. Indeed, in Hempel (1965c), he would distinguish between reason-seeking why-questions and explanation-seeking why-questions, where the former seek reasons that justify believing that something is the case, as opposed to the latter, which are usually motivated by knowledge that a specific event has occurred.
In his semi-formal explication of the requirements for adequate scientific explanations, Hempel specified four conditions of adequacy (CA) that have to be satisfied, namely:
(CA-1) The explanandum must be a deductive consequence of the explanans;
(CA-2) The explanans must contain general laws, which are required to satisfy (CA-1);
(CA-3) The explanans must have empirical content and must be capable of test; and,
(CA-4) The sentences of the explanans must be true. (Hempel and Oppenheim 1948)
These conditions are intended to serve as requirements whose satisfaction guarantees that a proposed explanation is adequate. Hempel drew several distinctions over time between potential scientific explanations (which satisfy the first three conditions, but possibly not the fourth) and confirmed scientific explanations (which are believed to be true but might turn out to be false). Hempel recognized that (CA-3) was a redundant condition, since it would have to be satisfied by any explanation that satisfied (CA-1) and (CA-2). Insofar as the explanandum describes an event that occurred during the history of the world, its derivation thereby implies the explanans has empirical content.
Hempel's conditions had many virtues, not least of which was that they appeared to fit many familiar examples of scientific explanations, such as explaining why a coin expanded when it was heated by invoking the law that copper expands when heated and noting that the coin was copper. Hempel doesn't specify the form laws may take, which could be simple or complex. Most of his examples were simple, such as “All ravens are black”, “All gold is malleable”, and so on. Others were quantitative, such as Archimedes' principle (A body totally immersed in a fluid—a liquid or a gas—experiences an apparent loss in weight equal to the weight of the fluid it displaces), Snell's law (During refraction of light, the ratio of the sines of the angles of incidence and of refraction is a constant equal to the refractive index of the medium), and Newton's law of gravitation (Any two bodies attract each other with a force proportional to the product of their masses and inversely proportional to the square of the distance between them), which he discussed.
More complicated examples can be found in standard sources, such as The Feynman Lectures on Physics (Feynman et al. 1963–65), the first volume of which is devoted to mechanics, radiation, and heat; the second, to electromagnetism and matter; and the third, to quantum mechanics. Hempel explored the implications of (CA-4) for laws:
The requirement of truth for laws has the consequence that a given empirical statement S can never be definitely known to be a law; for the sentence affirming the truth of S is tantamount to S and is therefore capable only of acquiring a more or less high probability, or degree of confirmation, relative to the experimental evidence available at any given time (Hempel and Oppenheim 1948, note 22, italics added).
Hempel considered weakening condition (CA-4) to one of high confirmation instead of truth, but concluded that it would be awkward to have “adequate explanations” that are later superseded by different “adequate explanations” with the acquisition of additional evidence and alternative hypotheses across time. He therefore retained the condition of truth. Whether or not the conditions of explanatory adequacy should be relative to an epistemic context of confirmation rather than to an ontic context of truth would become an important question in coping with the requirements for probabilistic explanations.
No aspect of Hempel's position generated more controversy than the symmetry thesis, which holds that, for any adequate explanation, had its premises—its initial conditions and covering laws—been taken into account at a suitable prior time, then a deductive prediction of the occurrence of the explanandum event would have been possible, and conversely (Hempel and Oppenheim 1948). Inferences from adequate explanations to potential predictions were generally accepted, but not the converse. Critics, such as Michael Scriven (1962), advanced counter-examples that were based on correlations or “common causes”: the occurrence of a storm might be predicted when cows lie down in their fields, for example, yet their behavior does not explain why the storm occurs. Sylvain Bromberger offered the example of the length of the shadow cast by a flagpole, which is sufficient to deduce the height of the flagpole and thus satisfies Hempel's conditions but does not explain why the flagpole has that height (Bromberger 1966). Since logical relations are non-temporal, Hempel may have taken the symmetry thesis to be a trivial consequence of his account, but deeper issues are involved here.
Hempel observed that universal generalizations of the form, “(x)(Fx ⊃ Gx)”, are true if and only if logically equivalent generalizations of the form, “(x)(¬Fx ∨ Gx)”, are true. But if “Fx” stands for an uninstantiated property, such as being a vampire, then, since “¬Fx” is satisfied by everything, regardless of the attribute “Gx”, “¬Gx”, etc., such hypotheses—and their logical equivalents—are not only universally confirmed but even qualify as “laws”. This is not surprising, from a logical point of view, since extensional logic is purely truth-functional, where the truth value of molecular sentences is a function of the truth values of their atomic constituents. But it implies that an empirical generalization that might be true of the world's history may or may not be “lawful”, since it could be the case that its truth was merely “accidental”. Hempel's endorsement of Goodman's approach to select generalizations that support subjunctive conditionals, therefore, was not only consistent with the tradition of Hume—who believed that attributions of natural necessity in causal relations are merely “habits of mind”, psychologically irresistible, perhaps, but logically unwarranted—but circumvented a disconcerting consequence of his explication.
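The vacuous satisfaction of generalizations with uninstantiated antecedents is easy to exhibit. In this illustrative Python sketch (the domain, predicates, and function name are our own), a universally quantified material conditional is evaluated extensionally over a finite domain; when no object satisfies the antecedent, the generalization comes out true no matter what the consequent says:

```python
def holds_universally(domain, antecedent, consequent) -> bool:
    """Evaluate (x)(Fx ⊃ Gx) extensionally over a finite domain:
    true iff no object satisfies F without also satisfying G."""
    return all((not antecedent(x)) or consequent(x) for x in domain)

domain = ["shoe", "raven", "stone"]
is_vampire = lambda x: False       # an uninstantiated predicate
drinks_blood = lambda x: x == "vampire bat"  # any consequent at all

# Both the generalization and one with the negated consequent
# are vacuously true, since nothing in the domain is a vampire:
assert holds_universally(domain, is_vampire, drinks_blood)
assert holds_universally(domain, is_vampire, lambda x: not drinks_blood(x))
```

That “All vampires drink blood” and “No vampires drink blood” are both certified true over such a domain illustrates why extensional truth alone cannot distinguish laws from accidental (or vacuous) generalizations.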
There is a fundamental difference between sentences that “support subjunctives” and those that “entail subjunctives”, when the selection of those that support subjunctives is made on pragmatic grounds. Popper captured the crucial difference between material (extensional) generalizations and subjunctive (intensional) generalizations as follows:
A statement may be said to be naturally or physically necessary if, and only if, it is deducible from a statement function which is satisfied in all worlds that differ from our own, if at all, only with respect to initial conditions. (Popper 1965, 433)
Indeed, the existence of irreducibly ontic probabilistic phenomena, where more than one outcome is possible under the same initial conditions, would mean that the history of an indeterministic world might be identical with the history of a deterministic one, where their differences were concealed “by chance”. Even a complete description of the history of the actual world might not suffice to distinguish between them, where fundamental aspects of their causal structure would remain beyond empirical detection (Fetzer 1983).
The difference between Popper's universal predicates and Goodman's paradoxical ones may derive from the consideration that incorporating specific times into Goodman's definitions entails reference to specific moments of time t, such as midnight tonight, during the history of the actual world, a point to which we shall return. It follows that they cannot be “universal” in Popper's sense and do not even qualify as “purely qualitative” in Hempel's, either. Reliance upon material conditionals within first-order symbolic logic, moreover, forfeits the benefits of synthetic subjunctives. If the hypothesis, “All ravens are black”, is more adequately formalized as a subjunctive of the form, “(x)(Rx → Bx)”, where the truth of a subjunctive hypothetically assumes that its antecedent condition is satisfied, that obviates the paradox: white shoes are not ravens, and Nicod's criteria apply. Instances of “Rx” and “Bx” thereby confirm the hypothesis; instances of “Rx” and “¬Bx” disconfirm it; and instances of “¬Rx” are epistemically neutral, precisely as Nicod said.
Hempel's critics—both early and late—have not always shown an appreciation for the full dimensions of his explication. Michael Scriven (1959), for example, once objected to “the deductive model, with its syllogistic form, where no student of elementary logic could fail to complete the inference, given the premise”, when no restriction is placed on the complexity of the relevant laws or the intricacy of these deductions (Hempel 1965c, 339). Similar objections have been lodged more recently by William Bechtel and Adele Abrahamsen (2005), for example, who claim, in defense of “the mechanistic alternative”, that, among its many benefits, “investigators are not limited to linguistic representations and logical inference in presenting explanations but frequently employ diagrams and reasoning about mechanisms by simulating them”. Hempel's conditions of adequacy, however, are capable of satisfaction without their presentation as formal arguments.
Hempel remarks that his model of explanation does not directly apply to the wordless gesticulations of a Yugoslavian automobile mechanic or guarantee that explanations that are adequate are invariably subjectively satisfying. His purpose was to formalize the conditions that must be satisfied when an explanation is adequate without denying that background knowledge and prior beliefs frequently make a difference in ordinary conversational contexts. Jan might not know some of the antecedent conditions, Jim might have misunderstood the general laws, or features of the explanandum event may have been missed by them both. Even when information is conveyed using diagrams and simulations, for example, as long as it satisfies conditions (CA-1) through (CA-4)—no matter whether those conditions are satisfied implicitly or explicitly—an adequate scientific explanation is at hand. But it has to satisfy all four of those requirements.
Demonstrating that an adequate scientific explanation is at hand, however, imposes demands beyond the acquisition of information about initial conditions and laws. In the examiner's sense of knowledge Ian Hacking identified (Hacking 1967, p. 319), to prove an adequate scientific explanation is available, it would be necessary to show that each of Hempel's adequacy conditions has been satisfied. It would not be enough to show, say, that one person is knowledgeable about the initial conditions, another about the covering laws, and a third about the explanandum. It would be necessary to show that the explanandum can be derived from the explanans, that those laws were required for the derivation, and that the initial conditions were those present on that occasion. (CA-1) through (CA-4) thus qualify as a “checklist” to ensure that a scientific explanation is adequate.
Students of Hempel have found it very difficult to avoid the impression that Hempel was not only defending the position that every adequate scientific explanation is potentially predictive but also the position that every adequate scientific prediction is potentially explanatory. This impression is powerfully reinforced in “The Theoretician's Dilemma” (Hempel 1958), where, in the course of demonstrating that the function of theories goes beyond merely establishing connections between observables, he offers this passage:
Scientific explanations, predictions, and postdictions all have the same logical character: they show that the fact under consideration can be inferred from certain other facts by means of specified general laws. In the simplest case, the type of argument may be schematized as a deductive inference of the following form [here substituting the “simplest case” for the abstract schemata presented in the original]:
(x)(Rx ⊃ Bx)    Explanans
Rc
Bc              Explanandum
Figure 3. A Covering-Law Explanation
… While explanation, prediction, and postdiction are alike in their logical structure, they differ in certain other respects. For example, an argument [like Figure 3 above] will qualify as a prediction only if [its explanandum] refers to an occurrence at a time later than that at which the argument is offered; in the case of postdiction, the event must occur before the presentation of the argument. These differences, however, require no further study here, for the purpose of the preceding discussion was simply to point out the role of general laws in scientific explanation, prediction, and postdiction (Hempel 1958, 37–38).
What is crucial about this passage is Hempel's emphasis on the pragmatic consideration of the time at which the argument is presented in relation to the time of occurrence of the explanandum event itself. Let us take this to be the observation that c is black and assume that it occurs at a specific time t1. If the argument is presented prior to time t1 (before observing that c is black), then it is a prediction. If the argument is presented after time t1 (after observing that c is black), then it is a postdiction. Since explanations are usually advanced after the time of the occurrence of the event to be explained, they usually occur as postdictions. But there is nothing about their form that requires that.
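Hempel's temporal criterion can be put schematically: whether a covering-law argument counts as a prediction or a postdiction depends only on when it is presented relative to the explanandum event. A minimal sketch, with our own illustrative labels and function name:

```python
def argument_kind(time_presented: float, time_of_event: float) -> str:
    """Classify a covering-law argument pragmatically, following
    Hempel's temporal criterion (labels are illustrative)."""
    if time_presented < time_of_event:
        return "prediction"   # argument offered before the event occurs
    return "postdiction"      # argument offered after (or at) the event

# Same logical form, different pragmatic classification:
assert argument_kind(0, 1) == "prediction"
assert argument_kind(2, 1) == "postdiction"
```

Nothing in the classification inspects the argument's logical form, which is exactly Hempel's point: the distinction is pragmatic, not structural.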
As Hempel uses the term, reasoning is “scientific” when it involves inferences with laws. Indeed, this appears to be the principal ground upon which he wants to maintain that explanations, predictions, and postdictions share the same logical form. His position here makes it logically impossible for explanations, predictions, and postdictions not to have the same logical form. If there are modes of prediction that are non-explanatory, such as might be the case when they are based upon accidental generalizations, then they would not thereby demonstrate that the symmetry thesis fails for the kinds of distinctively “scientific” explanations that are the focus of his work. Scriven's cow example, for instance, does not show that the symmetry thesis fails for scientific explanations and scientific predictions, even if it shows that some ordinary predictions are not also ordinary explanations. But other counter-examples posed greater threats.
Indeed, in the process of drawing the distinction between explanation-seeking and reason-seeking why-questions, Hempel (1965c) proposed a different kind of symmetry thesis, where adequate answers to explanation-seeking why-questions also provide adequate answers to reason-seeking why-questions, but not conversely. That was very appropriate, since the purpose of reason-seeking why-questions is to establish grounds for believing that the event described by the explanandum sentence has taken (or will take) place, while explanation-seeking why-questions typically take for granted that we know their explanandum events have already occurred. It follows that, while Hempel's conditions are meant to specify when we know why, they also entail how we know that.
Insofar as Hempel focused on logically equivalent formulations of lawlike sentences in addressing the paradoxes of confirmation, some may find it remarkable that he did not explore the consequences of logically equivalent formulations in relation to explanations. We tend to assume we can explain why a specific thing c is black on the grounds that it is a raven and all ravens are black. Yet we would hesitate to explain why a specific thing is not a raven on the ground that it is not black. We may refer to this as the paradox of transposition. Notice that the contention that one member of a class of logically equivalent hypotheses is explanatory but others are not generates “paradoxes of explanation” that parallel the “paradoxes of confirmation” (Fetzer 2000a). Figure 4 offers an illustration:
(x)(¬Bx ⊃ ¬Rx)
¬Bc
¬Rc
Figure 4. The Transposition Paradox
Even if we accept the transposed form of “(x)(Rx ⊃ Bx)” as the basis for an answer to a reason-seeking why-question—in this case, how we know that c is no raven—Figure 4 surely does not explain why that is the case, which presumably has to do with the genes of entity c and its developmental history as a living thing. If logically equivalent forms equally qualify as lawlike—there's no basis in extensional logic to deny it—then Hempel confronts a dilemma. In fact, by his own conditions of adequacy, this argument is not only explanatory but also predictive. It is not a “postdiction” as he has defined it above, but it appears to qualify as a “retrodiction” that does not explain its explanandum event.
Suppose Hempel were to resort to Goodman's approach and deny that the negation of a purely qualitative predicate is also a purely qualitative predicate. That would mean that sentences logically equivalent to lawlike sentences—which have the same content—are nevertheless not lawlike, by invoking pragmatic differences in the degree of entrenchment and projectibility of their constituent predicates. An appeal to pragmatic considerations within this context has a decidedly ad hoc quality about it, which appears to contravene the spirit of the paradoxes of confirmation, where Hempel insists that the counterintuitive cases are confirmatory and their paradoxical character is merely psychological. Even if Hempel were to adopt this position and take for granted that one member of a class of logically equivalent sentences can be lawlike while the others are not, another difficulty arises from the use of modus tollens in lieu of modus ponens, as Figure 5 exemplifies:
(x)(Rx ⊃ Bx)
¬Bc
------------
¬Rc
Figure 5. The Modus Tollens Paradox
Here we have a more threatening problem, since there is no apparent basis for denying that an argument having this form satisfies Hempel's criteria and therefore ought to be both explanatory and predictive. Similarly for temporally quantified arguments, where the fact that a match of kind K has not lighted at t* would certainly not be adequate to explain why it was not struck at t, even though every match of kind K will light when it is struck, under suitable conditions! Something serious thus appears to be wrong (Fetzer 2000a).
The adoption of an alternative theory of laws that incorporates Hempel's commitment to universal generality and to purely qualitative predicates, but abandons Goodman's pragmatic entanglements, might offer solutions for these problems. One such account has been advanced by David Armstrong (1983), who embraces three basic assumptions: (a) realism (the thesis that laws exist apart from any human minds); (b) actualism (the thesis that laws and properties exist only if they are instantiated); and (c) naturalism (the thesis that nothing exists except “the single, spatio-temporal, world, the world studied by physics, chemistry, cosmology, and so on”). These theses harbor an equivocation, however, since, if there can be uninstantiated laws and properties, actualism is false and naturalism might also not be true—unless “the single, spatio-temporal, world” includes uninstantiated as well as instantiated properties and laws. Since Armstrong endorses realism about universals (according to which properties are not reducible to their class of instances), there appears to be a certain tension between his realism and his actualism.
Armstrong abandons the Humean account of universal laws as constant conjunctions and of statistical laws as relative frequencies, which are both extensional in character, for the alternative conception of laws as intensional relations between properties, which are connected by (what he characterizes as) primitive relations of necessitation and of probabilification, respectively. When a thing's being F necessitates its being G, which he formalizes as “N(F, G)”—where being gold may be said to necessitate being malleable, for example—then everything that is F will also be G, so that a corresponding extensional generalization, “(x)(Fx ⊃ Gx)” must be true, but not conversely. This marks an advance in dealing with accidental generalizations, since, even if every Volkswagen were painted grey, it would not follow that being grey is necessitated by being a Volkswagen, when, for example, there are processes and procedures, such as repainting them, that could be performed to violate that generalization, which cannot occur with bona fide laws.
He argues that his notion of physical necessitation is not the same as the counterpart notion of logical necessitation, where the former, unlike the latter, is non-symmetrical and cannot be transposed. The adoption of this approach, therefore, would resolve the paradox of transposition, since “N(F, G)” is no longer logically equivalent to “N(¬G, ¬F)”. Armstrong attempts to assimilate statistical to universal laws by their interpretation as necessitations of less than (let us say) universal strength. On this view, the logical form of statistical laws turns out to be “N:P(F, G)” and that of universal laws “N:1(F, G)”, where the value of “P” equals any (possibly infinitesimal) number between 0 and 1. Armstrong appeals to the non-existence of negative properties to support the nontransposability of necessitations, but there are hidden dangers either way: when “N:1(F, G)”, then “N:0(F, ¬G)”, necessarily; when “N:P(F, G)”, necessarily, “N:1-P(F, ¬G)”. They seem unavoidable. More important than preserving the addition and summation of probabilifications, moreover, the distinction between permanent and transient properties introduced in Section 2.1 provides an ontological basis for these differences, which Armstrong does not supply.
The presence of G is not part of the definition of “F”, which requires that “□(Fx ⊃ Gx)” be false in L, where “□” stands for logical necessity; yet there is no process or procedure, natural or contrived, by means of which a permanent property G can be taken away from something that is F except by making it no longer F. The properties that things can have or lose and remain things of that kind are transient (Fetzer 1981, 1993). Being yellow, malleable, and a metal are some permanent properties of gold, while having a particular shape, being owned by a specific person, or having a certain selling price are not. Lawlike sentences are then logically contingent, subjunctive conditionals that reflect permanent property relations. The adoption of a more adequate theory of laws, moreover, would be consistent with Hempel's conditions of adequacy and support his schemata, as Figure 6 displays:
(x)(Fx → Gx)
Fc
------------
Gc
Figure 6. A D-N Explanation Schema
Thus, when G is a permanent property of F, it does not mean that the absence of F is a permanent property of the absence of G, which conflates logical and physical necessities and contradicts the assumption of contingency (Fetzer 1981, 193–194; and Fetzer 2000a). It follows that “(x)(Fx → Gx)” is not logically equivalent to “(x)(¬Gx → ¬Fx)”, even though it entails the corresponding extensional generalization, “(x)(Fx ⊃ Gx)”, which can account for differences between explanations and predictions based on them. The paradox of transposition is overcome, because lawlike sentences are not transposable. Subjunctives can be true even when they have no instances; indeed, counterfactuals are subjunctives with false antecedents. Actualism thus appears to be dispensable. But the modus tollens paradox endures, suggesting something is still not quite right. Even the adoption of a better theory of laws is not enough to exonerate Hempel's adequacy conditions.
In his studies of inductive reasoning, Hempel (1960, 1962a, 1966b) discusses the ambiguity of induction, which arises because incompatible conclusions can appear to be equally-well supported by inductive arguments, where all the premises of both arguments are true. This problem has no analogue for deductive arguments: inconsistent conclusions cannot be validly derived from consistent premises, even if they are false. Inconsistent conclusions can receive inductive support from consistent premises, however, even when they are all true. Consider Hempel's illustration of conflicting predictions for a patient's recovery:
(e1) Jones, a patient with a sound heart, has just had an appendectomy, and of all persons with sound hearts who underwent appendectomy in the past decade, 93% had an uneventful recovery.
This information, taken by itself, would surely lend strong support to the hypothesis,
(h1) Jones will have an uneventful recovery.
But suppose that we also have the information:
(e2) Jones is a nonagenarian with serious kidney failure; he just had an appendectomy after his appendix had ruptured; and in the past decade, of all cases of appendectomy after rupture of the appendix among nonagenarians with serious kidney failure, only 8% had an uneventful recovery.
This information by itself would lend strong support to the contradictory of (h1):
(¬h1) Jones will not have an uneventful recovery. (Hempel 1966b)
Thus, Hempel observed, (e1) and (e2) are logically compatible and could both be part of the evidence available when Jones' prognosis is being considered. The solution Hempel endorsed was the requirement of total evidence, according to which, in reasoning about the world, arguments must be based upon all the available evidence, though he noted that evidence can be omitted when it is irrelevant and its omission does not affect the level of support. Here, when (e1) is combined with (e2), (¬h1) is better supported than (h1).
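The ambiguity of induction in the Jones example comes down to simple arithmetic. The sketch below (an illustration only, using the figures given in the text) makes explicit how each body of evidence, taken alone, confers strong support on one of two contradictory hypotheses:

```python
# Hempel's Jones example as arithmetic. The percentages are those
# given in the text for the two reference classes.
support_e1 = 0.93  # (e1): appendectomy patients with sound hearts who recover
support_e2 = 0.08  # (e2): nonagenarians with kidney failure and ruptured
                   #       appendix who recover

# Taken separately, the two bodies of evidence strongly support
# contradictory conclusions about the same patient:
supports_h1 = support_e1 > 0.5           # (e1) alone supports (h1)
supports_not_h1 = (1 - support_e2) > 0.5  # (e2) alone supports (¬h1)
print(supports_h1, supports_not_h1)  # True True — the ambiguity of induction
```

The requirement of total evidence dissolves the conflict by directing reasoning to the combined evidence (e1)-and-(e2)—the narrowest relevant reference class—under which (¬h1) is the better-supported hypothesis.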
Hempel referred to statistical explanations as “inductive-statistical” in contrast with his prior discussion of “deductive-nomological” explanations (Hempel 1958). Because generalizations of both kinds are supposed to be lawlike, however, more appropriate names for them might have been “universal-deductive” and “statistical-inductive” (or, perhaps, “statistical-probabilistic”), respectively (Fetzer 1974b). In formalizing these arguments, Hempel symbolized statistical premises attributing a certain probability to outcome G, under conditions F, by “P(G, F) = r”, where these explanations then assume this form:
P(G,F) = r
Fc
========== [r]
Gc
Figure 7. An I-S Explanation Schema
While Hempel initially interpreted the bracketed variable [r] as the measure of inductive support which the explanans confers upon the explanandum, Gi (in conformity with the requirement of total evidence), where the value of [r] equals that of r, it occurred to him that inductive and deductive explanations are typically offered on the basis of the knowledge that their explanandum events have already occurred. Viewing statistical explanations as inductive arguments whose purpose is to establish how we know that those events have occurred (or that they will occur) is therefore not the correct perspective for understanding them:
the point of an explanation is not to provide evidence for the occurrence of the explanandum, but to exhibit it as nomically expectable. And the probability attached to an I-S explanation is the probability of the conclusion relative to the explanatory premises, not relative to the total class K [representing our total knowledge at that time]. (Hempel 1968, original emphasis)
In order to formalize this conception, he advanced the concept of a maximally specific predicate related to “Gi” in K: when “P(G, F) = r” and the premises include a predicate stronger than “F”—say, “M”, which implies the presence of more properties than does “F”—then “P(G, M) = r”, where, as before, r = P(G, F). Thus additional predicates could be included in the explanans of an adequate scientific explanation, provided that they made no difference to the nomic expectability of the explanandum. Including the position of the moon or the day of the week a match is struck, for example, might be irrelevant, but his condition did not exclude them. Wesley C. Salmon (1970), however, would pursue this issue by arguing that Hempel had embraced the wrong standard upon which to base explanatory relevance relations, where explanations, in Salmon's view, strictly speaking, do not even qualify as arguments.
Salmon offered a series of counterexamples to Hempel's approach. In the case of I-S explanations, Hempel required that the nomic expectability of the explanandum must be equal to or greater than .5, which preserved the symmetry thesis for explanations of this kind. This implied that events with low probability could not be explained. There were persuasive illustrations, moreover, demonstrating that explanations could satisfy Hempel's criteria, yet not explain why their explanandum events occur. For example,
(CE-1) John Jones was almost certain to recover from his cold within a week, because he took vitamin C, and almost all colds clear up within a week after administration of vitamin C.
(CE-2) John Jones experienced significant remission of his neurotic symptoms, for he underwent extensive psychoanalytic treatment and a substantial percentage of those who undergo psychoanalytic treatment experience significant remission of neurotic symptoms.
Most colds clear up within a week, with or without vitamin C, and similarly for neurotic symptoms. Salmon thought that this problem was peculiar to statistical explanations, but he was corrected by Henry Kyburg (1965), who offered examples of the following kind:
(CE-3) This sample of table salt dissolves in water, for it has had a dissolving spell cast upon it, and all samples of table salt that have had dissolving spells cast upon them dissolve in water.
(CE-4) John Jones avoided becoming pregnant during the past year, for he has taken his wife's birth control pills regularly, and every man who regularly takes birth control pills avoids pregnancy.
For Hempel, a property F is explanatorily relevant to the occurrence of an outcome G if there is a lawful relationship that relates F to G. If table salt dissolves in water, so does Morton's table salt, Morton's finest table salt, Morton's finest table salt bought on sale, and so on. Salmon concluded that Hempel's conception of I-S explanation was wrong.
A student of Reichenbach (1938, 1949), Salmon had adopted and defended the limiting frequency interpretation of physical probability, where “P(G/F) = r” means that G occurs with a limiting frequency r in a reference class of instances of F, which has to be infinite for limits to exist (Salmon 1967, 1971). On this approach, a property H is explanatorily relevant to the occurrence of an attribute G within a reference class F just in case (SR):
(SR) P(G/F&H) = m and P(G/F&¬H) = n, where m ≠ n;
that is, the limiting frequency for G in F-and-H differs from the limiting frequency for G in F-and-not-H. Properties whose presence does not make a difference to the limiting frequency for the occurrence of G in F would therefore qualify as statistically irrelevant. Relative frequencies were typically relied upon in practice, but in principle they had to be limiting.
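Salmon's criterion (SR) can be illustrated on finite data, using relative frequencies in a sample as stand-ins for the limiting frequencies his account officially requires. The records below are hypothetical triples (F, H, G), not data from any actual study:

```python
# A sketch of Salmon's statistical-relevance test (SR) on finite data.
def rel_freq(data, cond, outcome):
    """Relative frequency of `outcome` among the records satisfying `cond`."""
    trials = [d for d in data if cond(d)]
    return sum(1 for d in trials if outcome(d)) / len(trials)

# Hypothetical records (F, H, G): reference property, candidate property, outcome.
data = [
    (True, True,  True),  (True, True,  True),  (True, True,  False),
    (True, False, False), (True, False, False), (True, False, True),
]

G = lambda d: d[2]
m = rel_freq(data, lambda d: d[0] and d[1],     G)  # P(G/F&H)  = 2/3
n = rel_freq(data, lambda d: d[0] and not d[1], G)  # P(G/F&¬H) = 1/3
print(m != n)  # True: H is statistically relevant to G within F
```

Had the two frequencies coincided (m = n), H would count as statistically irrelevant and so, on Salmon's account, could not figure in an S-R explanation of G.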
Salmon also rejected Hempel's requirement that the nomic expectability of a statistical explanation must be equal to or greater than .5, which led him to abandon the notion of explanations as arguments. Even events of low probability were explainable within the context of Salmon's approach, which he compared with Hempel's approach as follows:
I-S model (Hempel): an explanation is an argument that renders the explanandum highly probable;
S-R model (Salmon): an explanation is an assembly of facts statistically relevant to the explanandum regardless of the degree of probability that results. (Salmon 1971, original emphasis)
Salmon's work created a sensation, since Hempel's dominance of the philosophy of science, especially in relation to the theory of explanation, now had a significant rival. The students of explanation who recognized that probabilities as properties of infinite classes posed substantial problems were few, and those who realized that Salmon's position confronted difficulties of its own fewer still.
Salmon was ingenious in devising “screening off” criteria to ensure that the properties cited in an S-R explanation were restricted to those that were statistically relevant. He may have appreciated that the available evidence was restricted to relative frequencies in finite sequences, but that could be discounted as typical of the distinction between a context K representing our total knowledge at that time and the truth condition, since scientific knowledge is always tentative and fallible. A deeper problem arose from the existence of properties that were statistically relevant but not explanatorily relevant, however. If women whose middle initials began with vowels, for example, experienced miscarriages with a different frequency than women whose middle initials began with consonants—even after other properties were taken into account—then that property would have to qualify as statistically relevant and therefore explanatorily relevant (Fetzer 1974, 1981). This result implied that statistical relevance cannot adequately define explanatory relevance, and Salmon would gradually move toward the propensity approach in Salmon (1980, 1989).
The advantages of propensities over frequencies are considerable, since, on the propensity account, a probabilistic law no longer simply affirms that a certain percentage of the reference class belongs to the attribute class. What it asserts instead is that every member of the reference class possesses a certain disposition, which in the case of statistical laws is of probabilistic strength and in the case of universal laws of universal strength. On its single-case formulation, moreover, short runs and long runs are simply finite and infinite sequences of single cases, where each trial has propensities that are equal and independent from trial to trial and classic theorems of statistical inference apply.
Envisioning dispositions as single-case causal tendencies brought the further benefit that the differences between them could be formalized using causal conditionals “__ =n=> …” of variable strength n in the case of probabilistic laws and of universal strength u in the case of universal laws “__ =u=> …” (Fetzer 1974b, 1981). This completes the conception of natural laws as subjunctive conditionals attributing permanent properties to universals as dispositions and thereby justifies two basic models of explanation, the universal-deductive (U-D) model and the probabilistic-inductive (P-I) model. An explanation of why a match of kind K lighted when struck, for example, assumes the following form, adopting appropriate formulations in which temporal constants “t” and quantifiers “(t)” are included and “t*” occurs some specific interval of time after “t”:
(x)(t)[Kxt → (Sxt =u=> Lxt*)]    Explanans
Kct1 & Sct1
-----------------------------
Lct*1                            Explanandum
Figure 8. A Universal-Deductive Explanation
Explanations for indeterministic phenomena are equally straightforward. If the half-life of 3.05 minutes is a permanent property of atoms of polonium-218 (218P), for example, then a probabilistic law could express its meaning as a disposition H of strength .5 to undergo decay D during any 3.05 minute interval I, which could be formally displayed as follows:
(x)(t)[218Pxt → (I=3.05minxt =.5=> Dxt*)]    Explanans
218Pct1 & I=3.05minct1
========================================= [.5]
Dct*1                                        Explanandum
Figure 9. A Probabilistic-Inductive Explanation
Here the value of [r] can be justified as a degree of nomic expectability that applies to every single case of the reference property and implies that, for collections of atoms of 218P, approximately half will undergo decay during any 3.05 minute interval, which enables lawlike sentences of probabilistic form to be subjected to empirical test, on the basis of relative frequencies, especially by attempting to refute them. Salmon also expressed enthusiasm for this approach, which he regarded as “a straightforward generalization” of Hempel's account (Salmon 1989, 83–89; Fetzer 1992). And it does appear to be a suitable successor, not only to Hempel's I-S but also his D-N approach.
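The empirical testability claimed here can be made vivid with a minimal simulation (an illustration of the propensity reading, not part of the original account): if each atom has a single-case tendency of strength .5 to decay during a 3.05-minute interval, the relative frequency of decay in a large collection should approximate .5, and observed frequencies far from that value would count against the lawlike hypothesis:

```python
# A minimal simulation of the probabilistic law in Figure 9 on its
# propensity reading: each atom has a single-case tendency of strength .5
# to undergo decay during one 3.05-minute interval, so relative
# frequencies in large collections should approximate .5.
import random

random.seed(42)              # fixed seed for a reproducible illustration
N = 100_000                  # atoms in the hypothetical sample
decayed = sum(random.random() < 0.5 for _ in range(N))
freq = decayed / N
print(f"observed decay frequency: {freq:.3f}")  # approximately .5
```

Each trial here is equal and independent, so the classic theorems of statistical inference apply: larger samples yield frequencies ever closer to the propensity value, which is what licenses using finite frequency data to test, and potentially refute, probabilistic laws.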
Even this intensional explication would still be vulnerable to the problems of irrelevant properties and the modus tollens paradox but for the adoption of a condition to exclude the occurrence of predicates that are not nomically relevant to the explanandum event from the explanans of an adequate scientific explanation. This criterion, known as the requirement of strict maximal specificity, permits the reformulation of Hempel's four conditions of adequacy by replacing the redundant empirical content condition with this new requirement (Fetzer 1981, Salmon 1989). Explanations not only display the nomic expectability of their explanandum events—which, in the case of those that occur with high probability, would enable them to have been predicted, as Hempel proposed—but, more importantly, explain them by specifying all and only those properties nomically responsible for their occurrence, even when they occur with low probability (Fetzer 1992).
Bromberger's flagpole counterexample provides a severe test of this requirement. The reason why the inference from the length of the shadow to the height of the flagpole is non-explanatory is because the length of the shadow is not nomically responsible for the flagpole's height. Hempel's original conditions could not cope with situations of this kind, where inferences using laws support predictions and retrodictions that are not also explanations, even though they satisfy all four. This alternative condition thus requires that the properties cited in the antecedent of the lawlike premise(s) must be nomically relevant to the explanandum or may not be included there (Fetzer 1981, 1992). The height of the flagpole, but not the length of the shadow, qualifies as nomically relevant, which resolves the quandary—at the expense of acknowledging classes of arguments that are predictive or retrodictive but not explanatory, even when they involve inferences from law(s) that satisfy Hempel's original criteria. The reformulated conditions are:
(CA-1*) The explanandum must be a suitable logical consequence of the explanans;
(CA-2) The explanans must contain general laws, which are required to satisfy (CA-1*);
(CA-3*) The explanans must satisfy the requirement of strict maximal specificity; and,
(CA-4*) The sentences—both of the explanans and of the explanandum—must be true.
By formulating (CA-1*) in this way, the covering law conception of the subsumption of singular events by general laws is preserved; but abandonment of the high-probability requirement led both Salmon (1971) and Alberto Coffa (1973) to question whether or not explanations still properly qualify as “arguments”. At the least, however, they would appear to be special kinds of “explanatory arguments”, even when they involve low probabilities.
These revised conditions implicitly require abandoning Hempel's commitment to extensional methodology to capture the notion of nomic responsibility, but the benefits of an intensional approach appear to be profound. As Salmon has observed,
[on such an explication] the distinction between description and prediction, on the one hand, and explanation, on the other, is that the former can proceed in an extensional language framework, while the latter demands an intensional language framework. It remains to be seen whether the intensional logic can be satisfactorily formulated (Salmon 1989, 172, original emphasis).
This approach to explanation incorporates the causal relevance criterion, according to which a property H is causally relevant to the occurrence of an attribute G relative to a reference property F—within the context of a causal explanation—just in case (CR):
(CR) (Fxt & Hxt) =m=> Gxt* and (Fxt & ¬Hxt) =n=> Gxt*, where m ≠ n;
that is, the propensity for G, given F&H, differs from the propensity for G, given F&¬H, where the presence or absence of H affects the single-case causal tendency for G. The universal generalization of sentential functions like these thus produces lawlike sentences, while their instantiation to individual constants or to ambiguous names produces (what are known as) nomological conditionals (Fetzer 1981, 49–54). The introduction of the probabilistic causal calculus C, moreover, responds to Salmon's concerns by providing a formalization within intensional logic that resolves them (Fetzer and Nute 1979, 1980).
What may come as some surprise is that Hempel exposed yet another significant problem confronting the theory of scientific explanation. One of the most remarkable features of his career is that he continued to publish original and innovative studies well into his eighties. Rather surprisingly, he authored a series of articles that moved away from the influential conception of scientific theories as formal calculi combined with empirical interpretations that had been characteristic of logical empiricism. In Hempel (1966a), at the time the most widely adopted introduction to the philosophy of science, which has been translated into ten other languages, he advanced the novel explication of scientific theories as consisting of internal principles and bridge principles, where the lawlike hypotheses that distinguish theories are linked to observation, measurement, and experiment by principles expressed in various mixtures of ordinary and of technical language. Now antecedent understanding replaces explicit definability, which seems to have been, in part, a response to the demise of the observational/theoretical distinction.
Even more strikingly, Hempel (1988a, 1988b) observed that the application of scientific theories presupposes the absence of factors that might affect the internal principles of the theory, which goes beyond the content of the theory itself. Deriving predictions and explanations from classical mechanics presupposes that bodies are being acted upon exclusively by gravitational forces, for example, where the presence of electromagnetic forces would invalidate those derivations. This is not simply a matter of conducting tests “under suitable conditions”, where the malleability of gold differs when it is struck by a hammer at extremely cold temperatures, the combustibility of paper differs when the paper is wet, and so on, which are familiar examples of the various specific conditions that must be identified in complete definitions of dispositional predicates. What Hempel noticed is that these properties may not only be affected by conditions covered by the theory being applied but may involve entirely different theories. He took this to mean that the application of theories has to be accompanied by “provisos” affirming that no properties beyond those specified by the theory are present in a specific case.
The function of these provisos means that instrumentalist constructions of scientific theories as mere calculating devices and programs for the elimination of theoretical language by reduction to observational language alone are misguided and cannot be sustained. And this is because, if other theoretical properties make a difference to the application of a particular theory, then the observational consequences of that theory cannot be assumed to obtain in any instance without provisos about other theoretical properties beyond those specified by the theory, which requires separate investigation on the basis of observation, measurement, and experiment. The conditions for testing, confirming or falsifying alternative hypotheses and theories are thereby rendered vastly more complex than had previously been supposed. Strikingly, as an essential element of an adequate explanation, Coffa (1973) had advanced “extremal clauses” that assert that no factors other than those specified by the initial conditions of the explanans are relevant to the occurrence of the explanandum event. Indeed, Salmon points out that, since the application of every law involves a tacit extremal (or “ceteris paribus”) clause, any law can be protected from refutation by discounting disconfirming evidence on the expedient of claiming that, because the clause was not satisfied, the law is still correct, which is another way to accent the source of Hempel's concern (Salmon 1989, 84–85).
These observations are related to the claim that “the laws of physics lie”, which has been suggested by Nancy Cartwright (1983). She has claimed that there are theoretical laws, which are supposed to be “simple” but rarely if ever instantiated, and phenomenological laws, which are frequently instantiated but typically if not always complex. Armstrong (1983) draws a distinction between laws that are “iron” and have no exceptions and those that are “oaken” and have no exceptions as long as no interfering conditions are present. In his language, unless Cartwright has overlooked the possibility that some laws might be both complex and true, she seems to be saying that theoretical laws are “oaken” laws for which there are no “iron” counterparts. If that is right, then she has failed to appreciate the distinction between counterfactual conditionals (which may be true in spite of having no instances) and mere indicative conditionals (which may be true because they have no instances). Alternatively, the “provisos” problem implies that satisfying the requirement of maximal specificity may be more demanding than has been generally understood in the past. Either way, Cartwright's theses appear to trade on an equivocation between the non-existence of “iron” counterparts and their non-availability: even if the “iron” laws within a domain are not yet known, it does not follow that those laws do not exist.
The publication of Thomas S. Kuhn's The Structure of Scientific Revolutions (1962), ironically, was among the historical developments that contributed to a loss of confidence in science. Kuhn's work, which turned “paradigm” into a household word, was widely regarded as assimilating revolutions in science to revolutions in politics, where one theory succeeds another only upon the death of its adherents. It was interpreted as having destroyed the myth that philosophers possess some special kind of wisdom or insight in relation to the nature of science or the thought processes of scientists with respect to their rationality, almost as though every opinion were on a par with every other. A close reading of Kuhn's work shows that these were not his own conclusions, but they were enormously influential. And among the public at large and many social scientists, the tendency to no longer hold science in high esteem or to be affected by its findings has induced political ramifications that are inimical to the general good. When our beliefs are not well founded, actions we base upon them are unlikely to succeed, often with unforeseen effects that are harmful. Rational actions ought to be based upon rational beliefs, where science has proven to be the most reliable method for acquiring knowledge about ourselves and the world around us.
This study supports the conclusion that Hempel's conception of scientific explanations as involving the subsumption of singular events by means of covering laws was well-founded, even though his commitment to an extensional methodology inhibited him from embracing a more adequate account of natural laws. The symmetry thesis turns out to require qualification, not only with respect to predictions for events that occur only with low probability, but also for retrodictions derived by modus tollens. The link that perhaps most accurately embodies the relationship between Hempel's work on explanation and decision-and-inference occurs in the form of “the principal principle”, which David Lewis advanced to formalize the recognition that personal probabilities as degrees of belief in the occurrence of specific events under specific conditions should have the same values as corresponding objective probabilities, when they are known (Lewis 1980). The values of propensities as properties of laws, no doubt, should be given precedence over the values of frequencies, insofar as laws, unlike frequencies, cannot be violated or changed and thus provide a more reliable guide.
A recent trend presumes that the philosophy of science has been misconceived and requires “a naturalistic turn” as a kind of science of science more akin to history or to sociology than to philosophy. In studies published in the twilight of his career, Hempel demonstrated that, without standards to separate science from pseudo-science, it would be impossible to distinguish frauds, charlatans, and quacks from the real thing (Hempel 1979, 1983). Without knowing “the standards of science”, we would not know which of those who call themselves “scientists” are scientists and which methods are “scientific”. The philosophy of science, therefore, cannot be displaced by history or by sociology. Not least among the important lessons of Hempel's enduring legacy is the realization that the standards of science cannot be derived from mere descriptions of its practice alone but require rational justification in the form of explications satisfying the highest standards of philosophical rigor.
Primary Sources: Works by Hempel
- 1942, “The Function of General Laws in History”, Journal of Philosophy, 39: 35–48.
- 1945a, “Studies in the Logic of Confirmation”, Mind, 54: 1–26 and 97–121.
- 1945b, “Geometry and Empirical Science”, American Mathematical Monthly, 52: 7–17.
- 1945c, “On the Nature of Mathematical Truth”, American Mathematical Monthly, 52: 543–556.
- 1950, “Problems and Changes in the Empiricist Criterion of Meaning”, Revue Internationale de Philosophie, 41(11): 41–63.
- 1951, “The Concept of Cognitive Significance: A Reconsideration”, Proceedings of the American Academy of Arts and Sciences, 80(1): 61–77.
- 1952, Fundamentals of Concept Formation in Empirical Science, Chicago: University of Chicago Press.
- 1958, “The Theoretician's Dilemma”, in Minnesota Studies in the Philosophy of Science, Vol. II, H. Feigl, M. Scriven, and G. Maxwell (eds.), Minneapolis: University of Minnesota Press, pp. 37–98.
- 1960, “Inductive Inconsistencies”, Synthese, 12: 439–469.
- 1962a, “Deductive-Nomological vs. Statistical Explanation”, in Minnesota Studies in the Philosophy of Science, Vol. III, H. Feigl and G. Maxwell (eds.), Minneapolis: University of Minnesota Press, pp. 98–169.
- 1962b, “Explanation in Science and in History”, in Frontiers of Science and Philosophy, R. G. Colodny (ed.), Pittsburgh, PA: University of Pittsburgh Press, pp. 9–33.
- 1962c, “Rational Action”, Proceedings and Addresses of the American Philosophical Association, 35: 5–23.
- 1965a, “Postscript (1964)”, in Hempel 1965d: 47–51.
- 1965b, “Empiricist Criteria of Cognitive Significance: Problems and Changes”, in Hempel 1965d: 101–119.
- 1965c, “Aspects of Scientific Explanation”, in Hempel 1965d: 331–496.
- 1965d, Aspects of Scientific Explanation, New York, NY: Free Press.
- 1966a, Philosophy of Natural Science, Englewood Cliffs, NJ: Prentice-Hall.
- 1966b, “Recent Problems of Induction”, in R. G. Colodny (ed.), Mind and Cosmos, Pittsburgh, PA: University of Pittsburgh Press, 112–134.
- 1968, “Maximal Specificity and Lawlikeness in Probabilistic Explanation”, Philosophy of Science, 35 (June): 116–133.
- 1970, “On the ‘Standard Conception’ of Scientific Theories”, in M. Radner and S. Winokur (eds.), Minnesota Studies in the Philosophy of Science, Vol. IV, Minneapolis, MN: University of Minnesota Press, 142–163.
- 1979, “Scientific Rationality: Analytic vs. Pragmatic Perspectives”, in T. S. Geraets (ed.), Rationality To-Day, Ottawa, Canada: The University of Ottawa Press, 46–58.
- 1981, “Turns in the Evolution of the Problem of Induction”, Synthese, 46: 389–404.
- 1983, “Valuation and Objectivity in Science”, in Physics, Philosophy, and Psychoanalysis: Essays in Honor of Adolf Grünbaum, Robert S. Cohen and Larry Laudan (eds.), Dordrecht, The Netherlands: Kluwer, 73–100.
- 1988a, “Limits of a Deductive Construal of the Function of Scientific Theories”, in E. Ullmann-Margalit (ed.), Science in Reflection. The Israeli Colloquium, (Volume 3), Dordrecht, The Netherlands: Kluwer, 1–15.
- 1988b, “Provisos: A Problem Concerning the Inferential Function of Scientific Theories”, Erkenntnis, 28: 147–164.
- Hempel, C. G. and P. Oppenheim, 1945d, “A Definition of ‘Degree of Confirmation’”, Philosophy of Science, 12: 98–115.
- ––– 1948, “Studies in the Logic of Explanation”, Philosophy of Science, 15: 135–175.
Secondary Sources
- Armstrong, D., 1983, What is a Law of Nature?, Cambridge, UK: Cambridge University Press.
- Ayer, A. J., 1936, Language, Truth and Logic, 2nd edition, 1946, New York, NY: Dover.
- Bechtel, W. and A. Abrahamsen, 2005, “Explanation: The Mechanistic Alternative”, Studies in History and Philosophy of the Biological and Biomedical Sciences, 36: 421–441.
- Benacerraf, P., 1981, “Frege: The Last Logicist”, in P. French et al. (eds.), Midwest Studies in Philosophy (Volume VI), Minneapolis, MN: University of Minnesota Press: 17–35.
- Benacerraf, P., and H. Putnam (eds.), 1984, Philosophy of Mathematics: Selected Readings, 2nd ed., Cambridge, UK: Cambridge University Press.
- Bromberger, S., 1966, “Why Questions”, in R. G. Colodny (ed.), Mind and Cosmos, Pittsburgh, PA: University of Pittsburgh Press.
- Carnap, R., 1939, Foundations of Logic and Mathematics, Chicago, IL: University of Chicago Press.
- ––– 1950, Logical Foundations of Probability, Chicago, IL: University of Chicago Press.
- ––– 1963, “Replies and Systematic Expositions”, in The Philosophy of Rudolf Carnap, P. A. Schilpp (ed.), La Salle, IL: Open Court, 859–1013.
- Cartwright, N., 1983, How the Laws of Physics Lie, Oxford, UK: The Clarendon Press.
- Chomsky, N., 1957, Syntactic Structures, The Hague: Mouton.
- ––– 1985, Knowledge of Language: Its Nature, Origin, and Use, Westport, CT: Praeger Publishers.
- Coffa, A., 1973, The Foundations of Inductive Explanation, Doctoral Dissertation, University of Pittsburgh.
- Essler, W., et al. (eds.), 1985, Epistemology, Methodology, and Philosophy of Science, Dordrecht, Netherlands: Kluwer.
- Fetzer, J. H., 1974, “A Single Case Propensity Theory of Explanation”, Synthese, 28: 171–198.
- ––– 1977, “A World of Dispositions”, Synthese, 34: 397–421.
- ––– 1981, Scientific Knowledge: Causation, Explanation, and Corroboration, Dordrecht: D. Reidel.
- ––– 1983, “Transcendent Laws and Empirical Procedures”, in N. Rescher (ed.), The Limits of Lawfulness, Lanham, MD: University Press of America, pp. 25–32.
- ––– 1985, “What is a Law of Nature?/How the Laws of Physics Lie”, Philosophical Books, 26: 120–124.
- ––– 1988, “Probabilistic Metaphysics”, in J. H. Fetzer (ed.), Probability and Causality, Dordrecht, The Netherlands: Kluwer Academic Publishers.
- ––– 1992, “What's Wrong with Salmon's History of the Third Decade”, Philosophy of Science, 59: 246–262.
- ––– 1993, The Philosophy of Science, New York, NY: Paragon House.
- ––– 2000a, “The Paradoxes of Hempelian Explanation”, in Fetzer (2000b), 111–137.
- ––– (ed.), 2000b, Science, Explanation, and Rationality: Aspects of the Philosophy of Carl G. Hempel, Oxford: Oxford University Press.
- ––– (ed.), 2001, The Philosophy of Carl G. Hempel: Studies in Science, Explanation, and Rationality, Oxford: Oxford University Press.
- ––– 2005, The Evolution of Intelligence: Are Humans the Only Animals with Minds?, Chicago, IL: Open Court.
- Fetzer, J. H. and D. Nute, 1979, “Syntax, Semantics, and Ontology: A Probabilistic Causal Calculus”, Synthese, 40: 453–495.
- ––– 1980, “A Probabilistic Causal Calculus: Conflicting Conceptions”, Synthese, 44: 241–246.
- Feynman, R. et al., 1963–65, The Feynman Lectures on Physics, Vols. I–III, Reading, MA: Addison-Wesley.
- Goodman, N., 1955, Fact, Fiction and Forecast, Cambridge, MA: Harvard University Press.
- Hacking, I., 1967, “Slightly More Realistic Personal Probability”, Philosophy of Science, 34: 311–325.
- Jeffrey, R., 1969, “Statistical Explanation vs. Statistical Inference”, in Rescher 1969, 104–113.
- ––– (ed.), 2000, Selected Philosophical Essays: Early and Late, Cambridge, U.K.: Cambridge University Press.
- Kitcher, P. and W. C. Salmon (eds.), 1989, Scientific Explanation, Minneapolis, MN: University of Minnesota Press.
- Kuhn, T. S., 1962, The Structure of Scientific Revolutions, Chicago, IL: University of Chicago Press.
- Kyburg, H., 1965, “Comments”, Philosophy of Science, 32: 147–151.
- Lewis, D., 1980, “A Subjectivist's Guide to Objective Chance”, in R. Jeffrey (ed.), Studies in Inductive Logic and Probability, Berkeley, CA: University of California Press, 263–293.
- ––– 2001a, Counterfactuals, 2nd edition, New York, NY: Wiley-Blackwell.
- ––– 2001b, On the Plurality of Worlds, New York, NY: Wiley-Blackwell.
- Linsky, B. and E. Zalta, 2006, “What is Neologicism?”, The Bulletin of Symbolic Logic, 12(1): 60–99.
- Parrini, P., W. C. Salmon and M. H. Salmon (eds.), 2003, Logical Empiricism: Historical and Contemporary Perspectives, Pittsburgh, PA: University of Pittsburgh Press.
- Popper, K., 1965, The Logic of Scientific Discovery, New York, NY: Harper & Row.
- ––– 1968, Conjectures and Refutations, New York, NY: Harper & Row.
- Quine, W. V. O., 1953, “Two Dogmas of Empiricism”, From a Logical Point of View, Cambridge, MA: Harvard University Press, 20–46.
- Reck, E., 2004, “From Frege and Russell to Carnap: Logic and Logicism in the 1920s”, in S. Awodey and C. Klein (eds.), Carnap Brought Home, Chicago, IL: Open Court: 151–179.
- Reichenbach, H., 1938, Experience and Prediction, Chicago, IL: University of Chicago Press.
- ––– 1948, The Theory of Probability, 2nd edition, 1971, Berkeley, CA: University of California Press.
- Rescher, N. (ed.), 1969, Essays in Honor of Carl G. Hempel, Dordrecht, Netherlands: D. Reidel.
- Rescher, N., 2005, Studies in 20th Century Philosophy, Frankfurt, Germany: Ontos Verlag.
- Salmon, W. C., 1967, Foundations of Scientific Inference, Pittsburgh, PA: University of Pittsburgh Press.
- –––, 1970, “Statistical Explanation”, in The Nature and Function of Scientific Theories, R. G. Colodny (ed.), Dordrecht, Holland: D. Reidel, 173–231.
- –––, 1971, “Introduction”, in Statistical Explanation and Statistical Relevance, W. C. Salmon (ed.), Pittsburgh, PA: University of Pittsburgh Press.
- –––, 1978, “Why ask, ‘Why?’?”, Proceedings and Addresses of the American Philosophical Association, 51: 683–705.
- –––, 1980, “Probabilistic Causality”, Pacific Philosophical Quarterly, 61: 50–74.
- –––, 1984, Scientific Explanation and the Causal Structure of the World, Princeton: Princeton University Press.
- –––, 1989, “Four Decades of Scientific Explanation”, in Kitcher and Salmon (eds.), 1989, 3–219.
- Schoenemann, P.T., 1999, “Syntax as an Emergent Property of Semantic Complexity”, Minds and Machines, 9: 309–346.
- Scriven, M., 1959, “Truisms as the Grounds for Historical Explanations”, in Theories of History, P. Gardiner (ed.), New York, NY: The Free Press, 443–475.
- –––, 1962, “Comments on Professor Grünbaum's Remarks at the Wesleyan Meeting”, Philosophy of Science, 29(2): 171–174.
- Woodward, J., 2003/2010, “Scientific Explanation”, The Stanford Encyclopedia of Philosophy, (Spring 2010 Edition), Edward N. Zalta (ed.), URL= <https://plato.stanford.edu/archives/spr2010/entries/scientific-explanation/>
Other Internet Resources
- Burkhart, F., “Carl G. Hempel Dies at 92; Applied Science to Philosophy”, The New York Times (November 23, 1997).
- “Carl Gustav Hempel Papers”, University of Pittsburgh, University Library System.
- “Hempel, Carl Gustav”, New World Encyclopedia.
- Murzi, M., “Carl Gustav Hempel (1905–1997)”, Internet Encyclopedia of Philosophy