Notes to Scientific Explanation
1. See Cartwright, 2004 for a similar diagnosis and for another survey of some of the issues described here.
2. In addition to Hempel, 1965 and Salmon, 1989, see also Cartwright, 1983, Earman, 1986, pp. 80–110, and van Fraassen, 1989.
3. As an illustration of this attitude, consider Hempel's discussion of “support for counterfactuals” as a criterion for distinguishing laws from accidental generalizations in his (1965). While Hempel agrees that support for counterfactuals is, as it were, diagnostic of the law/accident distinction, he also holds that counterfactuals “present notorious philosophical difficulties” (1965, p. 339) and because of this cannot be used to provide any independent purchase on the distinction between laws and accidentally true generalizations.
4. In addition to the requirement that laws must be exceptionless generalizations, these include the requirements that laws not contain terms referring to particular objects or places and the requirement that laws must contain only projectable predicates in the sense of Goodman, 1955.
5. For a similar assessment, see Salmon, 1989, p. 15.
6. For example, according to Salmon, 1984, all (why) explanations are causal and causal explanations require tracing causal processes and their intersections (See section 4). By contrast, Graham Nerlich, 1979 is in rough agreement with Salmon about what counts as a causal explanation, but holds that there is an important non-causal form of explanation which he calls geometrical explanation—for example, the explanation of the trajectories of free particles in a gravitational field by reference to the affine structure of space-time. Salmon would presumably deny that such appeals to space-time structure are explanatory. Sober, 1983 offers another distinction between causal and non-causal forms of explanation: he contrasts explanations that trace the actual sequence of events leading up to some outcome, which he thinks of as causal, with a non-causal form of explanation which he calls equilibrium explanation, in which an outcome is explained by showing that a very large number of initial states of a system will evolve in such a way that the system ends up in the outcome state that we wish to explain, but in which no attempt is made to trace the actual sequence of events leading up to that outcome.
7. This argument rests on the assumption that once one has specified all of the information that is relevant to some explanandum, and has not left out any relevant information, one has explained it. In other words, it is assumed that it is not possible for all of the information that is relevant to some $M$ to be insufficient to explain it. Call this the Nothing More to Be Said (NMTBS) argument. Many other treatments of explanation (e.g., Railton, 1978) make a similar assumption. It is far from self-evident that this assumption is correct. Why isn't it possible that even given a specification of all nomic and causal information relevant to some explanandum, this falls short of what is required to explain it? Why can't there be explananda that are simply unexplainable? As a limiting possibility, imagine an explanandum event $M$, the occurrence of which is governed by no laws, which has no causes, and the frequency of occurrence of which varies in unpredictable ways across time and space. If there is nothing more to be said about $M$ except that it sometimes occurs spontaneously, does it follow that this information explains why $M$ occurs? The contrary intuition is that the occurrence of $M$ is a paradigm case of something that we are unable to explain. If this is correct, the NMTBS argument does not hold in general, and this raises the question of why we should accept it in the particular case in which all relevant information specifies only the probability with which an outcome occurs. In other words, why accept the claim, common to both the IS and SR models, that statistical theories explain individual outcomes? This issue is discussed in more detail below.
8. Some writers hold that the deterministic interpretation of structural equations is optional; the error terms that figure in such equations may be interpreted as representing the net effect of the operation of genuinely indeterministic processes. It is also arguable that although conventional causal modeling techniques assume that the macroscopic causes of juvenile delinquency are deterministic, this does not rule out the possibility that there are underlying processes at some micro-level that are indeterministic. If these claims are correct, it follows that causal modeling techniques are agnostic about indeterminism. However, this is not enough to vindicate the application of the SR model to the sorts of examples covered by causal modeling techniques since the SR model requires indeterminism. Thanks to Elliott Sober for helpful comments on this issue.
9. At the risk of completely overwhelming readers' patience for this topic, let me add that rejecting Hempel's and Salmon's views about the kinds of cases in which it is appropriate to talk about statistical explanation of individual outcomes and the structure of such explanations need not commit one to the position that we can never explain non-determined individual outcomes. Instead, what follows is that if such explanations of non-determined outcomes exist, they will be different in structure from the IS or SR models. In my view, the indeterministic contexts in which it is most natural to think in terms of the explanation of individual outcomes are those in which indeterministic causes are operative, and such explanations seem to have a rather different structure from what is captured by either the IS or SR model. Consider the following illustration of the operation of an indeterministic cause based on Dretske and Snyder (1972). A radioactive source is introduced into a chamber for a fixed time period. If the source decays during this period, it will trigger a Geiger counter that will in turn release a poisonous gas, killing a cat. Suppose that in fact the source is introduced, a decay event occurs, and the cat dies. Here it seems natural to think of the introduction of the source as both a cause of and as contributing to the explanation of the cat's death, even though the connection between these two events is indeterministic. As a foil to both the IS and SR models, consider the following (very rough and schematic) proposal (cf. Woodward, 2003): A sufficient condition for the introduction $S$ of the source to explain the death $D$ of the cat is that the following three conditions are met.
(i) $S$ and $D$ occur, (ii) If $S$ had not occurred, $D$ would not have occurred, (iii) the probability $p$ of $D$ if $S$ were to occur is greater than zero on at least some other occasions on which $S$ occurs, where the conditionals in (ii) and (iii) are interpreted as non-backtracking counterfactuals. Call this the probabilistic causality (PC) model. (The formulation (i)–(iii) is intended to apply only to cases in which there is a single causal path from $S$ to $D$ and no pre-emptive or potential back-up causes are operative.) The PC model is quite different from both the IS and SR models. In contrast to the IS model and in agreement with the SR model, according to the PC model, whether the value of $p$ is high seems to make no difference to the goodness of the explanation furnished. Even if the probability of decay was very low, if the decay occurs and the gas is released, the introduction of the source will be what caused and explains the death of the cat. This reproduces our pre-analytic judgment about the example. But in contrast to the SR model, it does not follow from the PC model that if $S$ occurs and $-D$ occurs, then the occurrence of $S$ explains $-D$. This does not follow since the result of substituting $-D$ for $D$ in (ii) is a counterfactual claim that is false.
10. See especially Cartwright, 1979 and Spirtes, Glymour and Scheines, 1993, 2000.
11. See, for example, Spirtes, Glymour and Scheines 1993, 2000, Pearl, 2000, Hausman and Woodward, 1999.
12. An example is provided in Salmon, 1984 and subsequently discussed in Spirtes, Glymour, and Scheines, 1993, 2000. A collision $C$ between a cue ball and two other billiard balls sends the first into the right hand pocket $(A)$ and the second into the left hand pocket $(B)$. $C$ is a common cause of $A$ and $B$ and $A$ does not cause $B$ or vice-versa. Nonetheless, because of the conservation of linear momentum, the information that $A$ occurred provides information about whether $B$ occurred, even given the occurrence of $C$. In other words, $A$ is statistically relevant to $B$, given $C$, even though $A$ does not cause $B$. Intuitively, the problem is that the property $C$ is too coarse-grained. Since the system is presumably deterministic at a more fine-grained level of description in which we specify the exact positions and momenta $M$ of the balls, conditioning on $M$ would render $A$ independent of (irrelevant to) $B$. But if we employ variables like $C$ that are insufficiently fine grained, the connection between causation and statistical relevance assumed in both the Causal Markov condition and the SR model will fail.
13. There is evidence from developmental psychology (e.g., Leslie and Keeble, 1987) that even very young infants are sensitive to the difference between changes in the trajectories of moving balls that involve spatio-temporal contact between the balls and changes in trajectories that do not involve such contact. In adults the former changes are much more readily perceived (or judged) to involve causal interactions than the latter. One may conjecture that the psychological mechanisms at work in such judgments also underlie our preference for causal explanations that involve action by contact. But while spatio-temporal clues are a useful heuristic for picking out causal interactions in some cases, such as those involving collisions, they provide little guidance and indeed may be positively misleading in other cases, as we will see below. So while a preference for causal explanations that contain no spatio-temporal gaps may be very psychologically natural, it is dubious that it provides a useful basis for constructing a general theory of causal explanation.
14. Cases of causation by omission are cases in which, to put the matter intuitively, the non-occurrence of some event causes an outcome, as when a doctor's failure to provide medical help causes the death of his patient. In such cases, there is no transfer of energy or momentum from cause to effect and no natural candidate for a connecting process. Some writers (e.g. Dowe, 2000) conclude on these grounds that causation by omission is not real or literal causation, although it possesses some features that make it similar to cases of real causation. If omissions can be causes or figure in causal explanations, this presents an obvious prima facie problem for causal process theories like Salmon's. Cases of causation of $E$ by $C$ through double prevention or disconnection are cases in which $C$ prevents or interferes with the operation of a second factor $D$ which if operative would block the occurrence of $E$. By removing the preventer $D$ of $E$, $C$ causes $E$ to occur. Examples are common in biological contexts—see Schaffer, 2000, Woodward, 2002, and for more general discussion of this phenomenon, see Hall, forthcoming and Lewis, 2000. Again, if it is accepted that citing a disconnecting cause provides a (scientific) explanation, this is a difficulty for causal process theory at least as formulated by Salmon.
15. One might well wonder what the basis for this judgment is. Can't the gas as a whole be “marked”—e.g., by heating it—and won't the gas transmit this mark, at least for a while?
16. The reader is reminded here of a point made in connection with the hidden structure strategy: the lower level explanations that “underlie” upper level explanations will not always be “ideals” in terms of which the upper level explanations are to be judged.
17. Morrison, 2000 emphasizes the heterogeneity of unification and provides a number of examples illustrating the point that many sorts of unification seem to have little to do with explanation. Sober, 1999, forthcoming, argues that at least in many cases we prefer unified theories over less unified competitors because the former are better confirmed than the latter, and not because the former provide better explanations than the latter. In other words, unification is an epistemological virtue that is relevant to confirmation, but is not an explanatory virtue.
18. These remarks gloss over some complex issues. Some classificatory schemes use causal or etiological information as a basis for constructing classifications. It may be argued that such classificatory schemes are not merely descriptive but rather explanatory in virtue of “invoking” or “appealing to” causal information. It seems clear, however, that there are other classificatory schemes that are not guided by causal information but nonetheless achieve information compression. Such classifications do not provide causal explanations.
19. Sober, 1999 argues for the stronger but related thesis that there is “no objective reason” (p. 551) to prefer unified over disunified explanations—which we prefer will depend on our explanatory interests. He also notes that explanations that cite a great deal of micro-detail and which are regarded by philosophers like Kitcher as “less unified” than explanations that abstract from such detail (and hence can apply to systems with different microstructures) need not be regarded as competitors—that is, we don't need to choose between them. A glance at any molecular biology textbook will show that Sober is correct in contending that scientists do not always prefer explanations that abstract away from micro-detail and that we should reject any theory of explanation that automatically requires such abstraction. It is of course a further question whether abstracting explanations should be regarded as “more unified” than explanations that provide micro-detail. In fact, the latter are typically more integrated with fundamental theories in physics and chemistry and in this respect arguably more unified.
20. A parallel problem applies to the suggestion that all explanation has a DN structure, unless some version of the hidden structure strategy can be made to work.
21. Recall in particular that Hempel responds to putative counterexamples that attempt to show that the DN model fails to capture the directional features of explanation by claiming that these features are to be understood in terms of “pragmatics”, rather than in terms of features a traditional, non-pragmatic theory might capture. This response has generally struck critics as ad hoc. Somewhat ironically, as discussed below, van Fraassen's diagnosis of the source of explanatory asymmetries is the same as Hempel's, despite the fact that his account is fundamentally opposed to Hempel's in many respects. Looked at one way, van Fraassen simply generalizes Hempel's treatment of explanatory asymmetries, relegating all of explanation to the realm of pragmatics.
22. I have written “often” here because of the following complication: Defenders of pragmatic approaches sometimes write as though they are engaged in a different project from the traditional one, in the sense that their aim is to characterize a notion of explanation according to which, e.g., the same body of information might be explanatory for one audience and not another, and the goal is to specify what all or most uses of the word “explanation” have in common—i.e., a notion of explanation which is just not the target of traditional accounts. Thinking of pragmatic accounts in this way, one might regard them as complementary to and not in any way in conflict with traditional projects. Thus there is no reason in principle for defenders of traditional accounts to object to such projects, although traditionalists may be skeptical that they are likely to produce anything interesting. However, because defenders of pragmatic accounts have usually wished to use those accounts to criticize traditional accounts, this suggests that they do not conceive of what they are doing in only the conciliatory way just described. This is one of several points at which greater clarity about goals on the part of both traditionalists and their pragmatic critics would be salutary.
23. I am not aware of any systematic historical exploration of the origins of this particular use of “pragmatic” in connection with accounts of explanation. However, one conjecture (and it is only a conjecture) is that it derives, at least in significant part, from linguistics and, in particular, from the contrast between, on the one hand, syntax and semantics (often thought to be “objective” and appropriate objects of general, systematic study) and “pragmatics” understood as having to do with the use of language by particular speakers directed toward particular audiences in particular contexts to achieve particular ends. This notion of “pragmatic” does suggest a connection with what is idiosyncratic to the psychology of particular language users and contexts and contrasts with projects that are regarded as providing a “syntax” or “semantics” of explanation.
24. The relevant notion of stability is the notion discussed in Woodward (2006).
25. This example also reminds us that attempts to provide traditional characterizations of features of explanatory (or at least causal) relationships are by no means confined to philosophers—attempts to do this can also be found in machine learning, as in Janzing et al., 2012, and in normative theories of causal judgment proposed by psychologists, as in Lombrozo (2010) and Cheng (2000).
26. In his very lucid discussion of pragmatic theories of explanation, Salmon (1989) stresses the failure of both van Fraassen's and Achinstein's accounts to provide a characterization of an objective explanatory relevance relation. While Salmon is correct about this, his remarks do not fully engage with van Fraassen's and Achinstein's views that there is no such characterization to be had.
27. A closely related point is that a characterization that “relativizes” some feature of an explanation to a context sometimes can, so to speak, be “de-relativized” by making it explicit how the feature in question depends on context—in other words, the apparent contextuality may be just a reflection of the fact that some relevant feature has not been made explicit. For example, an explanandum for which the contrast class is left implicit may seem to support a context-relative picture of explanation (for example, that different explanations of why the conductor is bent will be appropriate depending on context) but at least sometimes this appearance may be removed by making the intended contrast class explicit. It might be argued that once it is made explicit that what is of interest is why this conductor is bent while others are straight, it is an “objective” matter whether some candidate explanans accounts for this contrast. For this reason, it seems that we should regard a thorough-going pragmatic theory as one that (like Achinstein's and presumably van Fraassen's) claims that explanations have a contextual element that can't be removed (in a way that satisfies objectivist constraints) by making the context explicit.
28. Whether one accepts this assessment will depend in part on whether one thinks that there are non-causal forms of why-explanation (in the broad and vague sense of why-explanation gestured at in Section 1). If there are, then a more adequate theory of causation will take us only part of the way toward a more satisfactory theory of explanation.
29. Relevant recent work includes Hall, forthcoming, Lewis, 1986, 2000, Pearl, 2000, and Spirtes, Glymour and Scheines, 1993, 2000.
30. This asymmetry is closely connected to the fact that the shadow length is the upshot of several different factors that are independently variable: the height of the pole, the orientation of the pole, and the angular distance of the sun above the horizon. One may manipulate shadow length by altering the last two factors and this will not affect pole height. See Hausman, 1998 and Woodward, 2003.
31. For a statement of the basic idea, see Lewis, 1973a.
32. For a similar observation, see Psillos, 2002. Notice, though, that although this rationale yields an account of why it is desirable to construct explanations appealing to laws (when this is possible), it isn't clear that it yields the result that explanation always requires laws. Perhaps there are generalizations that unify sufficiently to qualify as explanatory, according to the unificationist account, but do not count as laws according to the MRL account. Kitcher's version of the unificationist model seems to admit this as a possibility.
33. See Woodward, 2003, pp. 288–95, 358–73.