Debates about scientific realism are centrally connected to almost everything else in the philosophy of science, for they concern the very nature of scientific knowledge. Scientific realism is a positive epistemic attitude towards the content of our best theories and models, recommending belief in both observable and unobservable aspects of the world described by the sciences. This epistemic attitude has important metaphysical and semantic dimensions, and these various commitments are contested by a number of rival epistemologies of science, known collectively as forms of scientific antirealism. This article explains what scientific realism is, outlines its main variants, considers the most common arguments for and against the position, and contrasts it with its most important antirealist counterparts.
- 1. What is Scientific Realism?
- 2. Considerations in Favour of Scientific Realism (and Responses)
- 3. Considerations Against Scientific Realism (and Responses)
- 4. Antirealism: Foils for Scientific Realism
- Academic Tools
- Other Internet Resources
- Related Entries
It is perhaps only a slight exaggeration to say that scientific realism is characterized differently by every author who discusses it, and this presents a challenge to anyone hoping to learn what it is. Fortunately, underlying the many idiosyncratic qualifications and variants of the position, there exists a common core of ideas, typified by an epistemically positive attitude towards the outputs of scientific investigation, regarding both observable and unobservable aspects of the world. The distinction here between the observable and the unobservable reflects human sensory capabilities: the observable is that which can, under favourable conditions, be perceived using the unaided senses (for example, planets and platypuses); the unobservable is that which cannot be detected this way (for example, proteins and protons). This is to privilege vision merely for terminological convenience, and differs from scientific conceptions of observability, which generally extend to things that are detectable using instruments (Shapere 1982). The distinction itself has been problematized (Maxwell 1962, Churchland 1985, Musgrave 1985, Dicken & Lipton 2006), but if it is problematic, this is arguably a concern primarily for certain forms of antirealism, which adopt an epistemically positive attitude only with respect to the observable. It is not ultimately a concern for scientific realism, which does not discriminate epistemically between observables and unobservables per se.
Before considering the nuances of what scientific realism entails, it is useful to distinguish between two different kinds of definition in this context. Most commonly, the position is described in terms of the epistemic achievements constituted by scientific theories (and models—this qualification will be taken as given henceforth). On this approach, scientific realism is a position concerning the actual epistemic status of theories (or some components thereof), and this is described in a number of ways. For example, most define scientific realism in terms of the truth or approximate truth of scientific theories or certain aspects of theories. Some define it in terms of the successful reference of theoretical terms to things in the world, both observable and unobservable. (A note about the literature: ‘theoretical term’, prior to the 1980s, was standardly used to denote terms for unobservables, but will be used here to refer to any scientific term, which is now the more common usage.) Others define scientific realism not in terms of truth or reference, but in terms of belief in the ontology of scientific theories. What all of these approaches have in common is a commitment to the idea that our best theories have a certain epistemic status: they yield knowledge of aspects of the world, including unobservable aspects. (For definitions along these lines, see Smart 1963, Boyd 1983, Devitt 1991, Kukla 1998, Niiniluoto 1999, Psillos 1999, and Chakravartty 2007a.)
Another way to think about scientific realism is in terms of the epistemic aims of scientific inquiry (van Fraassen 1980, p. 8, Lyons 2005). That is, some think of the position in terms of what science aims to do: the scientific realist holds that science aims to produce true descriptions of things in the world (or approximately true descriptions, or ones whose central terms successfully refer, and so on). There is a weak implication here to the effect that if science aims at truth and scientific practice is at all successful, the characterization of scientific realism in terms of aim may then entail some form of characterization in terms of achievement. But this is not a strict implication, since defining scientific realism in terms of aiming at truth does not, strictly speaking, suggest anything about the success of scientific practice in this regard. For this reason, some take the aspirational characterization of scientific realism to be too weak (Kitcher 1993, p. 150, Devitt 2005, n. 10, Chakravartty 2007b, p. 197)—it is compatible with the sciences never actually achieving, and even the impossibility of their achieving, their aim as conceived on this view of scientific realism. Most scientific realists commit to something more in terms of achievement, and this is assumed in what follows.
The description of scientific realism as a positive epistemic attitude towards theories, including parts putatively concerning the unobservable, is a kind of shorthand for more precise commitments (Kukla 1998, ch. 1, Niiniluoto 1999, ch. 1, Psillos 1999, Introduction, Chakravartty 2007a, ch. 1). Traditionally, realism more generally is associated with any position that endorses belief in the reality of something. Thus, one might be a realist about one's perceptions of tables and chairs (sense datum realism), or about tables and chairs themselves (external world realism), or about mathematical entities such as numbers and sets (mathematical realism), and so on. Scientific realism is a realism about whatever is described by our best scientific theories—from this point on, ‘realism’ here denotes scientific realism. But what, more precisely, is that? In order to be clear about what realism in the context of the sciences amounts to, and to differentiate it from some important antirealist alternatives, it is useful to understand it in terms of three dimensions: a metaphysical (or ontological) dimension; a semantic dimension; and an epistemological dimension.
Metaphysically, realism is committed to the mind-independent existence of the world investigated by the sciences. This idea is best clarified in contrast with positions that deny it. For instance, it is denied by any position that falls under the traditional heading of ‘idealism’, including some forms of phenomenology, according to which there is no world external to and thus independent of the mind. This sort of idealism, though historically important, is rarely encountered in contemporary philosophy of science, however. More common rejections of mind-independence stem from neo-Kantian views of the nature of scientific knowledge, which deny that the world of our experience is mind-independent, even if (in some cases) these positions accept that the world in itself does not depend on the existence of minds. The contention here is that the world investigated by the sciences—as distinct from “the world in itself” (assuming this to be a coherent distinction)—is in some sense dependent on the ideas one brings to scientific investigation, which may include, for example, theoretical assumptions and perceptual training; this proposal is detailed further in section 4. It is important to note in this connection that human convention in scientific taxonomy is compatible with mind-independence. For example, though Psillos (1999, p. xix) ties realism to a ‘mind-independent natural-kind structure’ of the world, Chakravartty (2007a, ch. 6) argues that mind-independent properties are often conventionally grouped into kinds (see also Boyd 1991 and Humphreys 2004, pp. 22–25, 35–36).
Semantically, realism is committed to a literal interpretation of scientific claims about the world. In common parlance, realists take theoretical statements at “face value”. According to realism, claims about scientific entities, processes, properties, and relations, whether they be observable or unobservable, should be construed literally as having truth values, whether true or false. This semantic commitment contrasts primarily with those of so-called instrumentalist epistemologies of science, which interpret descriptions of unobservables simply as instruments for the prediction of observable phenomena, or for systematizing observation reports. Traditionally, instrumentalism holds that claims about unobservable things have no literal meaning at all (though the term is often used more liberally in connection with some antirealist positions today). Some antirealists contend that claims involving unobservables should not be interpreted literally, but as elliptical for corresponding claims about observables. These positions are described in more detail in section 4.
Epistemologically, realism is committed to the idea that theoretical claims (interpreted literally as describing a mind-independent reality) constitute knowledge of the world. This contrasts with sceptical positions which, even if they grant the metaphysical and semantic dimensions of realism, doubt that scientific investigation is epistemologically powerful enough to yield such knowledge, or, as in the case of some antirealist positions, insist that it is only powerful enough to yield knowledge regarding observables. The epistemological dimension of realism, though shared by realists generally, is sometimes described more specifically in contrary ways. For example, while many realists subscribe to the truth (or approximate truth) of theories understood in terms of some version of the correspondence theory of truth (as suggested by Fine 1986 and contested by Ellis 1988), some prefer deflationary accounts of truth (including Giere 1988, p. 82, Devitt 2005, and Leeds 2007). Though most realists marry their position to the successful reference of theoretical terms, including those for unobservable entities, processes, properties, and relations (Boyd 1983, and as described by Laudan 1981), some deny that this is a requirement (Cruse & Papineau 2002, Papineau 2010). Amidst these differences, however, a general recipe for realism is widely shared: our best scientific theories give true or approximately true descriptions of observable and unobservable aspects of a mind-independent world.
The general recipe for realism just described is accurate so far as it goes, but still falls short of the degree of precision most realists offer. The two main sources of imprecision here are found in the general recipe itself, which makes reference to the idea of ‘our best scientific theories’ and the notion of ‘approximate truth’. The motivation for these qualifications is perhaps clear. If one is to defend a positive epistemic attitude regarding scientific theories, it is rational to do so not merely in connection with any theory (especially when one considers that, over the long history of the sciences up to the present, some theories were not or are not especially successful), but rather with respect to theories that would appear, prima facie, to merit such a defence, viz. our best theories. And it is widely held, not least by realists, that even many of our best scientific theories are likely false, strictly speaking, hence the importance of the notion that theories may be “close to” the truth (that is, approximately true) even though they are false. The challenge of making these qualifications more precise, however, is significant, and has generated much discussion.
Consider first the issue of how best to identify those theories that realists should be realists about. A general disclaimer is in order here: realists are generally fallibilists, holding that realism is appropriate in connection with our best theories even though they likely cannot be proven with absolute certainty; some of our best theories could conceivably turn out to be significantly mistaken, but realists maintain that, granting this possibility, there are grounds for realism nonetheless. These grounds are bolstered by restricting the domain of theories suitable for realist commitment to those that are sufficiently mature and non-ad hoc (Worrall 1989, pp. 153–154, Psillos 1999, pp. 105–108). Maturity may be thought of in terms of the well-established nature of the field in which a theory is developed, or the duration of time a theory has survived, or its survival in the face of significant testing; and the condition of being non-ad hoc is intended to guard against theories that are “cooked up” (that is, posited merely) in order to account for some known observations in the absence of rigorous testing. On these construals, however, both the notion of maturity and the notion of being non-ad hoc are admittedly vague. One strategy for adding precision here is to attribute these qualities to theories that make successful, novel predictions. The ability of a theory to do this, it is commonly argued, marks it as genuinely empirically successful, and the sort of theory to which realists should be more inclined to commit (Musgrave 1988, Lipton 1990, Leplin 1997, White 2003, Hitchcock & Sober 2004, Barnes 2008; for a dissenting view, see Harker 2008).
The idea that with the development of the sciences over time, theories are converging on (“moving in the direction of”, “getting closer to”) the truth, is a common theme in realist discussions of theory change (for example, Hardin & Rosenberg 1982 and Putnam 1982). Talk of approximate truth is often invoked in this context, and has produced a significant amount of often highly technical work, conceptualizing the approximation of truth as something that can be quantified, such that judgments of relative approximate truth (of one proposition or theory in comparison to another) can be formalized and given precise definitions. This work provides one possible means by which to consider the convergentist claim that theories can be viewed as increasingly approximately true over time, and this possibility is further considered in section 3.4.
A final and especially important qualification to the general recipe for realism described above comes in the form of a number of variations. These species of generic realism can be viewed as falling into three families or camps: explanationist realism; entity realism; and structural realism. There is a shared principle of speciation here, in that all three approaches are attempts to identify more specifically the component parts of scientific theories that are most worthy of epistemic commitment. Explanationism recommends realist commitment with respect to those parts of our best theories—regarding (unobservable) entities, processes, laws, etc.—that are in some sense indispensable or otherwise important to explaining their empirical success—for instance, components of theories that are crucial in order to derive successful, novel predictions. Entity realism is the view that under conditions in which one can demonstrate impressive causal knowledge of a putative (unobservable) entity, such as knowledge that facilitates the manipulation of the entity and its use so as to intervene in other phenomena, one has good reason for realism regarding it. Structural realism is the view that one should be a realist, not in connection with descriptions of the natures of things (like unobservable entities and processes) found in our best theories, but rather with respect to their structure. All three of these positions adopt a strategy of selectivity; this strategy, and the positions themselves, are considered further in section 2.3.
The most powerful intuition motivating realism is an old idea, commonly referred to in recent discussions as the ‘miracle argument’ or ‘no-miracles argument’, after Putnam's (1975, p. 73) claim that realism ‘is the only philosophy that doesn't make the success of science a miracle’. The argument begins with the widely accepted premise that our best theories are extraordinarily successful: they facilitate empirical predictions, retrodictions, and explanations of the subject matters of scientific investigation, often marked by astounding accuracy and intricate causal manipulations of the relevant phenomena. What explains this success? One explanation, favoured by realists, is that our best theories are true (or approximately true, or correctly describe a mind-independent world of entities, properties, laws, structures, or what have you). Indeed, if these theories were far from the truth, so the argument goes, the fact that they are so successful would be miraculous. And given the choice between a straightforward explanation of success and a miraculous explanation, clearly one should prefer the non-miraculous explanation, viz. that our best theories are approximately true (etc.). (For elaborations of the miracle argument, see Brown 1982, Boyd 1989, Lipton 1994, Psillos 1999, ch. 4, Barnes 2002, Lyons 2003, Busch 2008, and Frost-Arnold 2010.)
Though intuitively powerful, the miracle argument is contestable in a number of ways. One sceptical response is to question the very need for an explanation of the success of science in the first place. For example, van Fraassen (1980, p. 40; see also Wray 2007, 2010) suggests that successful theories are analogous to well-adapted organisms—since only successful theories (organisms) survive, it is hardly surprising that our theories are successful, and therefore, there is no demand here for an explanation of success. It is not entirely clear, however, whether the evolutionary analogy is sufficient to dissolve the intuition behind the miracle argument. One might wonder, for instance, why a particular theory is successful (as opposed to why theories in general are successful), and the explanation sought may turn on specific features of the theory itself, including its descriptions of unobservables. Whether such explanations need be true, though, is a matter of debate. While most theories of explanation require that the explanans be true, pragmatic theories of explanation do not (van Fraassen 1980, ch. 5). More generally, any epistemology of science that does not accept one or more of the three dimensions of realism—commitment to a mind-independent world, literal semantics, and epistemic access to unobservables—will thereby present a putative reason for resisting the miracle argument; these positions are considered in section 4.
Some authors contend that the miracle argument itself is an instance of fallacious reasoning called the base rate fallacy (Howson 2000, ch. 3, Lipton 2004, pp. 196–198, Magnus & Callender 2004). Consider the following illustration. There is a test for a disease for which the rate of false negatives (negative results in cases where the disease is present) is zero, and the rate of false positives (positive results in cases where the disease is absent) is one in ten (that is, disease-free individuals test positive 10% of the time). If one tests positive, what are the chances that one has the disease? It would be a mistake to conclude that, based on the rate of false positives, the probability is 90%, for the actual probability depends on some further, crucial information: the base rate of the disease in the population (the proportion of people having it). The lower the incidence of the disease at large, the lower the probability that a positive result signals the presence of the disease. By analogy, using the success of a scientific theory as an indicator of its approximate truth (assuming a low rate of false positives—cases in which theories far from the truth are nonetheless successful) is arguably, likewise, an instance of the base rate fallacy. The success of a theory does not by itself suggest that it is likely approximately true, and since there is no independent way of knowing the base rate of approximately true theories, the chances of its being approximately true cannot be assessed. Worrall (2009) maintains that these contentions are ineffective against the miracle argument because they depend crucially on a misleading formalization of it in terms of probabilities.
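The arithmetic behind the disease-test illustration can be made explicit via Bayes' theorem. The following minimal sketch (the function name and figures are illustrative, using the numbers from the example above) computes the probability of having the disease given a positive test result, for a given base rate:

```python
def posterior(base_rate, false_pos=0.10, false_neg=0.0):
    """P(disease | positive test), by Bayes' theorem.

    Illustrative figures from the example above: a false-negative
    rate of zero and a false-positive rate of 10%.
    """
    true_positive = (1 - false_neg) * base_rate   # P(positive & disease)
    false_alarm = false_pos * (1 - base_rate)     # P(positive & no disease)
    return true_positive / (true_positive + false_alarm)

# If only 1 person in 1000 has the disease, a positive result
# indicates the disease with probability of only about 1%, not 90%.
print(round(posterior(0.001), 3))  # prints 0.01
```

The point of the analogy is visible in the calculation: without the base rate, the 10% false-positive rate by itself licenses no conclusion about the probability that a positive result (or, analogously, a successful theory) signals the real thing.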
One motivation for realism in connection with at least some unobservables described by scientific theories comes by way of “corroboration”. If an unobservable entity or property is putatively capable of being detected by means of a scientific instrument or experiment, one might think that this could form the basis of a defeasible argument for realism regarding it. If, however, that same entity or property is putatively capable of being detected by not just one, but rather two or more different means of detection—forms of detection that are distinct with respect to the apparatuses they employ and the causal mechanisms and processes they are described as exploiting in the course of detection—this may serve as the basis of a significantly enhanced argument for realism. Hacking (1983, p. 201; see also Hacking 1985, pp. 146–147) gives the example of dense bodies in red blood platelets that can be detected using different forms of microscopy. Different techniques of detection, such as those employed in light microscopy and transmission electron microscopy, make use of very different sorts of physical processes, and these operations are described theoretically in terms of correspondingly different causal mechanisms. (For similar examples, see Salmon 1984, pp. 217–219, and Franklin 1986, pp. 166–168, 1990, pp. 103–115.)
The argument from corroboration thus runs as follows. The fact that one and the same thing is apparently revealed by distinct modes of detection suggests that it would be an extraordinary coincidence if the supposed target of these revelations did not, in fact, exist. The greater the extent to which detections can be corroborated by different means, the stronger the argument for realism in connection with their putative target. The argument here can be viewed as resting on an intuition similar to that underlying the miracle argument: realism based on apparent detection may be only so compelling, but if different, theoretically independent means of detection produce the same result, suggesting the existence of one and the same unobservable, then realism provides a good explanation of the consilient evidence, in contrast with the arguably miraculous state of affairs in which theoretically independent techniques produce the same result in the absence of a shared target. The idea that techniques of (putative) detection are often constructed or calibrated precisely with the intention of reproducing the outputs of others, however, may stand against the argument from corroboration. Additionally, van Fraassen (1985, pp. 297–298) argues that scientific explanations of evidential consilience may be accepted without the explanations themselves being understood as true, which once again raises questions about the nature of scientific explanation.
In section 1.3, the notion of selectivity was introduced as a general strategy for maximizing the plausibility of realism, particularly with respect to scientific unobservables. This strategy is adopted in part to square realism with the widely accepted view that most if not all of even our best theories are false, strictly speaking. If, nevertheless, there are aspects of these theories that are true (or close to the truth) and one is able to identify these aspects, one might then plausibly cast one's realism in terms of an epistemically positive attitude towards those aspects of theories that are most worthy of epistemic commitment. The most important variants of realism to implement this strategy are explanationism, entity realism, and structural realism. (For related work pertaining to the notion of selectivity more generally, see Miller 1987, chs. 8–10, Fine 1990, Jones 1991, and Musgrave 1992.)
Explanationists hold that a realist attitude can be justified in connection with unobservables described by our best theories precisely when appealing to those unobservables is indispensable or otherwise important to explaining why these theories are successful. For example, if one takes successful novel prediction to be a hallmark of theories worthy of realist commitment generally, then explanationism suggests that, more specifically, those aspects of the theory that are essential to the derivation of novel predictions are the parts of the theory most worthy of realist commitment. In this vein, Kitcher (1993, pp. 140–149) draws a distinction between the ‘presuppositional posits’ or ‘idle parts’ of theories, and the ‘working posits’ to which realists should commit. Psillos (1999, chs. 5–6) argues that realism can be defended by demonstrating that the success of past theories did not depend on their false components: ‘it is enough to show that the theoretical laws and mechanisms which generated the successes of past theories have been retained in our current scientific image’ (p. 108). The immediate challenge to explanationism is to furnish a method with which to identify precisely those aspects of theories that are required for their success, in a way that is objective or principled enough to withstand the charge that realists are merely rationalizing post hoc, identifying the explanatorily crucial parts of past theories with aspects that have been retained in our current best theories. (For discussions, see Chang 2003, Stanford 2003a, 2003b, Elsamahi 2005, McLeish 2005, 2006, Saatsi 2005a, Lyons 2006, and Harker 2010.)
Another version of realism that adopts the strategy of selectivity is entity realism. On this view, realist commitment is based on the putative ability to causally manipulate unobservable entities (like electrons or gene sequences) to a high degree—for example, to such a degree that one is able to intervene in other phenomena so as to bring about certain effects. The greater the ability to exploit one's apparent causal knowledge of something so as to bring about (often extraordinarily precise) outcomes, the greater the warrant for belief (Hacking 1982, 1983, Cartwright 1983, ch. 5, Giere 1988, ch. 5). Belief in scientific unobservables thus described is here partnered with a degree of scepticism about scientific theories more generally, and this raises questions about whether believing in entities while withholding belief with respect to the theories that describe them is a coherent or practicable combination (Morrison 1990, Elsamahi 1994, Resnik 1994, Chakravartty 1998, Clarke 2001, Massimi 2004). Entity realism is especially compatible with and nicely facilitated by the causal theory of reference associated with Kripke (1980) and Putnam (1985/1975, ch. 12), according to which one can successfully refer to an entity despite significant or even radical changes in theoretical descriptions of its properties; this allows for stability of epistemic commitment when theories change over time. Whether the causal theory of reference can be applied successfully in this context, however, is a matter of dispute (see Hardin & Rosenberg 1982, Laudan 1984, Psillos 1999, ch. 12, and Chakravartty 2007a, pp. 52–56).
Structural realism is another view promoting selectivity, but in this case it is the natures of unobservable entities that are viewed sceptically, with realism reserved for the structure of the unobservable realm, as represented by certain relations described by our best theories. All of the many versions of this position fall into one of two camps: the first emphasizes an epistemic distinction between notions of structure and nature; the second emphasizes an ontological thesis. The epistemic view holds that our best theories likely do not correctly describe the natures of unobservable entities, but do successfully describe certain relations between them. The ontic view suggests that the reason realists should aspire only to knowledge of structure is that the very concept of entities that stand in relations is metaphysically problematic—there are, in fact, no such things, or if there are such things, they are in some sense emergent from or dependent on their relations. One challenge facing the epistemic version is that of articulating a concept of structure that makes knowledge of it effectively distinct from that of the natures of entities. The ontological version faces the challenge of clarifying the relevant notions of emergence and/or dependence. (On epistemic structural realism, see Worrall 1989, Psillos 1995, 2006, Votsis 2003, and Morganti 2004; regarding ontic structural realism, see French 1998, 2006, Ladyman 1998, Psillos 2001, 2006, Ladyman & Ross 2007, and Chakravartty 2007a, ch. 3).
Lined up in opposition to the various motivations for realism presented in section 2 are a number of important antirealist arguments, all of which have pressed realists either to attempt their refutation, or to modify their realism accordingly. One of these challenges, the underdetermination of theory by data, has a storied history in twentieth-century philosophy more generally, and is often traced to the work of Duhem (1954/1906, ch. 6). In remarks concerning the confirmation of scientific hypotheses (in physics, which he contrasted with chemistry and physiology), Duhem noted that a hypothesis cannot be used to derive testable predictions in isolation; to derive predictions one also requires “auxiliary” assumptions, such as background theories, hypotheses about instruments and measurements, etc. If subsequent observation and experiment produce data that conflict with those predicted, one might think that this reflects badly on the hypothesis under test, but Duhem pointed out that given all of the assumptions required to derive predictions, it is no simple matter to identify where the error lies. Different amendments to one's overall set of beliefs regarding hypotheses and theories will be consistent with the data. A similar result is commonly associated with the later “confirmational holism” of Quine (1953), according to which experience (including, of course, that associated with scientific testing) does not confirm or disconfirm individual beliefs per se, but rather the set of one's beliefs taken as a whole. The thesis of underdetermination is thus sometimes referred to as the ‘Duhem-Quine thesis’ (see Ben-Menahem 2006 for a historical introduction).
How does this amount to a concern about realism? The argument from underdetermination proceeds as follows: let us call the relevant, overall sets of scientific belief ‘theories’; different, conflicting theories are consistent with the data; the data exhaust the evidence for belief; therefore, there is no evidential reason to believe one of these theories as opposed to another. Given that the theories differ precisely in what they say about the unobservable (their observable consequences—the data—are all shared), a challenge to realism emerges: the choice of which theory to believe is underdetermined by the data. In contemporary debate, the challenge is usually presented using slightly different terminology. Every theory, it is said, has empirically equivalent rivals—that is, rivals that agree with respect to the observable, but differ with respect to the unobservable. This then serves as the basis of a sceptical argument regarding the truth of any particular theory the realist may wish to endorse. Various forms of antirealism then suggest that hypotheses and theories involving unobservables are endorsed, not merely on the basis of evidence that may be relevant to their truth, but also on the basis of other factors that are not indicative of truth as such (see sections 3.2 and 4.2–4.4). (For modern explications, see van Fraassen 1980, ch. 3, Earman 1993, Kukla 1998, chs. 5–6, and Stanford 2001.)
The argument from underdetermination is contested in a number of ways. One might, for example, distinguish between underdetermination in practice (or at a time) and underdetermination in principle. In the former case, there is underdetermination only because the data that would support one theory or hypothesis at the expense of another are unavailable, pending foreseeable developments in experimental technique or instrumentation. Here, realism is arguably consistent with a “wait and see” attitude, though if the prospect of future discriminating evidence is poor, even this deferred realist commitment may be questioned. In any case, most proponents of underdetermination insist on the idea of underdetermination in principle: the idea that there are always (plausible) empirically equivalent rivals no matter what evidence may come to light. In response, some argue that the principled worry cannot be established, since what counts as data is apt to change over time with the development of new techniques and instruments, and with changes in scientific background knowledge, which alter the auxiliary assumptions required to derive observable predictions (Laudan & Leplin 1991). Such arguments rest, however, on a different conception of data than that assumed by many antirealists (defined above, in terms of human sensory capacities). (For other responses, see Okasha 2002, van Dyck 2007, and Busch 2009. Stanford 2006 proposes a historicized version of the argument from underdetermination, discussed in Chakravartty 2008 and Godfrey-Smith 2008).
One especially important reaction to concerns about the alleged underdetermination of theory by data gives rise to another leading antirealist argument. This reaction is to reject one of the key premises of the argument from underdetermination, viz. that evidence for belief in a theory is exhausted by the empirical data. Many realists contend that other considerations—most prominently, explanatory considerations—play an evidential role in scientific inference. If this is so, then even if one were to grant the idea that all theories have empirically equivalent rivals, this would not entail underdetermination, for the explanatory superiority of one in particular may determine a choice (Laudan 1990, Day & Botterill 2008). This is a specific exemplification of a form of reasoning by which ‘we infer what would, if true, provide the best explanation of [the] evidence’ (Lipton 2004/1991, p. 1). To put a realist-sounding spin on it: ‘one infers, from the premise that a given hypothesis would provide a “better” explanation for the evidence than would any other hypothesis, to the conclusion that the given hypothesis is true’ (Harman 1965, p. 89). Inference to the best explanation (as per Lipton's formulation) seems ubiquitous in scientific practice. The question of whether it can be expected to yield knowledge of the sort suggested by realism (as per Harman's formulation) is, however, a matter of dispute.
Two difficulties are immediately apparent regarding the realist aspiration to infer truth (approximate truth, existence of entities, etc.) from hypotheses or theories that are judged best on explanatory grounds. The first concerns the grounds themselves. In order to judge that one theory furnishes a better explanation of some phenomenon than another, one must employ some criterion or criteria on the basis of which the judgement is made. Many have been proposed: simplicity (whether of mathematical description or in terms of the number or nature of entities, properties, or relations involved); consistency and coherence (both internally, and externally with respect to other theories and background knowledge); scope and unity (pertaining to the domain of phenomena explained); and so on. One challenge here concerns whether virtues such as these can be defined precisely enough to permit relative rankings of explanatory goodness. Another challenge concerns the multiple meanings associated with some virtues (consider, for example, mathematical versus ontological simplicity). Another concerns the possibility that such virtues may not all favour any one theory in particular. Finally, there is the question of whether these virtues should be considered evidential or epistemic, as opposed to merely pragmatic. What reason is there to think, for instance, that simplicity is an indicator of truth? Thus, the ability to rank theories with respect to their likelihood of being true may be questioned.
A second difficulty facing inference to the best explanation concerns the pools of theories regarding which judgments about relative explanatory efficacy are made. Even if scientists are reliable rankers of theories with respect to truth, this will not produce belief in a true theory (in some domain) unless that theory in particular happens to be among those considered. Otherwise, as van Fraassen (1989, p. 143) notes, one may simply end up with ‘the best of a bad lot’. Given the widespread view, even amongst realists, that many and perhaps most of our best theories are false, strictly speaking, this concern may seem especially pressing. However, in just the way that the realist strategy of selectivity (see section 2.3) may offer responses to the question of what it could mean for a theory to be close to the truth without being true simpliciter, this same strategy may offer the beginnings of a response here. That is to say, the best theory of a bad lot may nonetheless describe unobservable aspects of the world in such a way as to meet the standards of variants of realism including explanationism, entity realism, and structural realism. (For a book-length treatment of inference to the best explanation, see Lipton 2004/1991; for defences, see Lipton 1993, Day & Kincaid 1994, and Psillos 1996, 2009, part III; for critiques, see van Fraassen 1989, chs. 6–7, Ladyman, Douven, Horsten & van Fraassen 1997, and Wray 2008.)
Worries about underdetermination and inference to the best explanation are generally conceptual in nature, but the so-called pessimistic induction (also called the ‘pessimistic meta-induction’, because it is an induction concerning the “ground level” inductive inferences that produce scientific theories and laws) is intended as an argument from empirical premises. If one considers the history of scientific theories in any given discipline, what one typically finds is a regular turnover of older theories in favour of newer ones, as scientific knowledge develops. From the point of view of the present, most past theories must be considered false; indeed, this will be true from the point of view of most times. Therefore, by enumerative induction (that is, generalizing from these cases), surely theories at any given time will ultimately be replaced and regarded as false from some future perspective. Thus, current theories are also false. The general idea of the pessimistic induction has a rich pedigree. Though neither endorses the argument, Poincaré (1952/1905, p. 160), for instance, describes the seeming ‘bankruptcy of science’ given the apparently ‘ephemeral nature’ of scientific theories, which one finds ‘abandoned one after another’, and Putnam (1978, pp. 22–25) describes the challenge in terms of the failure of reference of terms for unobservables, with the consequence that theories employing them cannot be said to be true.
Contemporary discussion commonly focuses on Laudan's (1981) argument to the effect that the history of science furnishes vast evidence of empirically successful theories that were later rejected; from subsequent perspectives, their unobservable terms were judged not to refer and thus, they cannot be regarded as true or even approximately true. (If one prefers to define realism in terms of scientific ontology rather than reference and truth, the worry can be rephrased in terms of the mistaken ontologies of past theories from later perspectives.) Responses to this argument generally take one of two forms, the first stemming from the qualifications to realism outlined in section 1.3, and the second from the forms of realist selectivity outlined in section 2.3—both can be understood as attempts to restrict the inductive basis of the argument in such a way as to foil the pessimistic conclusion. For example, one might contend that if only sufficiently mature and non-ad hoc theories are considered, the number whose central terms did not refer and/or which cannot be regarded as approximately true is dramatically reduced (see references, section 1.3). Or, the realist might grant that the history of science presents a record of significant referential discontinuity, but contend that, nevertheless, it also presents a record of impressive continuity regarding what is properly endorsed by realism, as recommended by explanationists, entity realists, or structural realists (see references, section 2.3). (For other responses, see Leplin 1981, McAllister 1993, Chakravartty 2007, ch. 2, Doppelt 2007, and Nola 2008; Hardin & Rosenberg 1982, Cruse & Papineau 2002, and Papineau 2010 explore the idea that reference is irrelevant to approximate truth).
In just the way that some authors suggest that the miracle argument is an instance of fallacious reasoning—the base rate fallacy (see section 2.1)—some suggest that the pessimistic induction is likewise flawed (Lewis 2001, Lange 2002, Magnus & Callender 2004). The argument is analogous: the putative failure of reference on the part of past successful theories, or their putative lack of approximate truth, cannot be used to derive a conclusion regarding the chances that our current best theories do not refer to unobservables, or that they are not approximately true, unless one knows the base rate of non-referring or non-approximately true theories in the relevant pools. And since one cannot know this independently, the pessimistic induction is fallacious. In response, and once again analogously, one might argue that to formalize the argument in terms of probabilities, as is required in order to invoke the base rate fallacy, is to miss the more fundamental point underlying the pessimistic induction (Saatsi 2005b). One might read the argument simply as cutting a supposed link between the empirical success of scientific theory and successful reference or approximate truth, as opposed to an enumerative induction per se. If even a few examples from the history of science demonstrate that theories can be empirically successful and yet fail to refer to the central unobservables they invoke, or fail to be what realists would regard as approximately true, this constitutes a prima facie challenge to the notion that only realism can explain the success of science.
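The base-rate point can be made precise in Bayesian terms (a schematic reconstruction, not the cited authors' own formalism). Let \(S\) stand for a theory's empirical success and \(F\) for its failing to refer (or to be approximately true); then by Bayes' theorem:

```latex
P(F \mid S) \;=\; \frac{P(S \mid F)\,P(F)}{P(S \mid F)\,P(F) \;+\; P(S \mid \neg F)\,P(\neg F)}
```

Historical cases may support a non-negligible \(P(S \mid F)\), i.e., the claim that failed theories can be successful, but without an independent estimate of the base rate \(P(F)\), no conclusion about \(P(F \mid S)\) for current theories follows.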
The regular appeal to the notion of approximate truth by realists has several motivations. The widespread use of abstraction (that is, incorporating some but not all of the relevant parameters into scientific descriptions) and idealization (distorting the nature of certain parameters) suggests that even many of our best theories and models are not strictly correct. The common realist contention that theories can be viewed as gradually converging on the truth as scientific inquiry progresses suggests that such progress is amenable to assessment or measurement in some way, if only in principle. And even for realists who are not convergentists as such, the importance of cashing out the metaphor of theories being close to the truth is pressing in the face of antirealist assertions to the effect that the metaphor is empty. The challenge to make good on the metaphor and explicate, in precise terms, what approximate truth could be, is one source of scepticism about realism. Two broad strategies have emerged in response to this challenge: attempts to quantify approximate truth by formally defining the concept and the related notion of relative approximate truth; and attempts to explicate the concept informally.
The formal route was inaugurated by Popper (1972, pp. 231–236), who defined relative orderings of ‘verisimilitude’ (literally, ‘likeness to truth’) between theories in a given domain over time by means of a comparison of their true and false consequences. Miller (1974) and Tichý (1974) proved that there is a technical problem with this account, however, yielding the consequence that in order for theory A to have greater verisimilitude than theory B, A must be true simpliciter, which leaves the realist desideratum of explaining how strictly false theories can differ with respect to approximate truth unsatisfied (see also Oddie 1986a). Another formal account is the possible worlds approach (also called the ‘similarity’ approach), according to which the truth conditions of a theory are identified with the set of possible worlds in which it is true, and ‘truth-likeness’ is calculated by means of a function that measures the average or some other mathematical “distance” between the actual world and the worlds in that set, thereby facilitating orderings of theories with respect to truth-likeness (Tichý 1976, 1978, Oddie 1986b, Niiniluoto 1987, 1998; for critiques, see Miller 1976 and Aronson 1990). One last attempt to formalize approximate truth is the type hierarchy approach, which analyzes truth-likeness in terms of similarity relationships between nodes in tree-structured graphs of types and subtypes representing scientific concepts on the one hand, and the entities, properties, and relations in the world they putatively represent on the other (Aronson 1990, Aronson, Harré, & Way 1994, pp. 15–49; for a critique, see Psillos 1999, pp. 270–273).
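Popper's comparative definition, and the Miller–Tichý result, can be stated compactly (a standard textbook formulation). Writing \(Ct_T(A)\) and \(Ct_F(A)\) for the sets of true and false consequences of a theory \(A\):

```latex
% A has greater verisimilitude than B iff A's truth content is at least as
% large and its falsity content at least as small, one inclusion being proper:
A \succ B \;\iff\; Ct_T(B) \subseteq Ct_T(A) \;\wedge\; Ct_F(A) \subseteq Ct_F(B)
```

Miller and Tichý showed that if \(A\) is false, any true consequence \(t\) of \(A\) not shared by \(B\) can be conjoined with a false consequence \(f\) of \(A\) to yield a false consequence \(t \wedge f\) of \(A\) that is not a consequence of \(B\), violating the second inclusion; hence \(A \succ B\) can hold only if \(A\) is true.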
Less formally and perhaps more typically, realists have attempted to explicate approximate truth in qualitative terms. One common suggestion is that a theory may be considered more approximately true than one that preceded it if the earlier theory can be described as a “limiting case” of the later one. The idea of limiting cases and inter-theory relations more generally is elaborated by Post (1971; see also French & Kamminga 1993), who argues that certain heuristic principles in science yield theories that ‘conserve’ the successful parts of their predecessors. His ‘General Correspondence Principle’ states that later theories commonly account for the successes of their predecessors by ‘degenerating’ into earlier theories in domains in which the earlier ones are well confirmed. Hence, for example, the often-cited claim that certain equations in relativistic physics degenerate into the corresponding equations in classical physics in the limit as velocity tends to zero. The realist may then contend that later theories offer more approximately true descriptions of the relevant subject matter, and that the ways in which they do so can be illuminated in part by studying the ways in which they build on the limiting cases represented by their predecessors. (For further takes on approximate truth, see Leplin 1981, Boyd 1990, Weston 1992, Smith 1998, and Chakravartty 2010.)
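The limiting-case claim mentioned above can be illustrated with a textbook example (not drawn from Post's text): relativistic momentum reduces to classical momentum as \(v/c \to 0\):

```latex
p_{\text{rel}}
\;=\; \frac{mv}{\sqrt{1 - v^2/c^2}}
\;=\; mv\left(1 + \tfrac{1}{2}\frac{v^2}{c^2} + \cdots\right)
\;\longrightarrow\; mv \;=\; p_{\text{class}}
\qquad (v/c \to 0)
```

The classical expression is recovered exactly in the limit, while the higher-order terms of the expansion record where, and by how much, the classical description goes wrong at high velocities—one way of cashing out the claim that the later theory is more approximately true.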
The term ‘antirealism’ (or ‘anti-realism’) encompasses any position that is opposed to realism along one or more of the dimensions canvassed in section 1.2: the metaphysical commitment to the existence of a mind-independent reality; the semantic commitment to interpret theories literally or at face value; and the epistemological commitment to regard theories as constituting knowledge of both observables and unobservables. As a result, and as one might expect, there are many different ways of being an antirealist, and many different positions qualify as antirealism. In the historical development of realism, arguably the most important strains of antirealism have been varieties of empiricism which, given their emphasis on experience as a source and subject matter of knowledge, are naturally set against the idea of knowledge of unobservables. It is possible to be an empiricist more broadly speaking in a way that is consistent with realism—for example, one might endorse the idea that knowledge of the world stems from empirical investigation, but contend that on this basis, one can justifiably infer certain things about unobservables. In the first half of the twentieth century, however, empiricism came predominantly in the form of varieties of “instrumentalism”: the view that theories are merely instruments for predicting observable phenomena or systematizing observation reports.
Traditionally, instrumentalists maintain that terms for unobservables, by themselves, have no meaning; construed literally, statements involving them are not even candidates for truth or falsity. The most influential advocates of instrumentalism were the logical empiricists (or logical positivists), including Carnap and Hempel, famously associated with the Vienna Circle, a group of philosophers and scientists, as well as with important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, ‘electron’ might mean ‘white streak in a cloud chamber’), or with demonstrable laboratory procedures (a view called ‘operationalism’). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions “external” to the frameworks for knowledge represented by theories are also meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950). (Duhem 1954/1906 was influential with respect to instrumentalism; for a critique of logical empiricist semantics, see Brown 1977, ch. 3; on logical empiricism more generally, see Giere & Richardson 1997 and Richardson & Uebel 2007; on the neo-Kantian reading, see Richardson 1998 and Friedman 1999.)
Van Fraassen (1980) reinvented empiricism in the scientific context, evading many of the challenges faced by logical empiricism, by adopting a realist semantics. His position, constructive empiricism, holds that the aim of science is empirical adequacy, where ‘a theory is empirically adequate exactly if what it says about the observable things and events in the world, is true’ (p. 12; p. 64 gives a more technical definition in terms of the embedding of observable structures in scientific models). Crucially, unlike traditional instrumentalism and logical empiricism, constructive empiricism interprets theories in precisely the same manner as realism. The antirealism of the position is due entirely to its epistemology—it recommends belief in our best theories only insofar as they describe observable phenomena, and an agnostic attitude with respect to anything unobservable. The constructive empiricist thus recognizes claims about unobservables as true or false, but does not go so far as to believe or disbelieve them. In advocating a restriction of belief to the domain of the observable, the position is similar to traditional instrumentalism, and is for this reason sometimes described as a form of instrumentalism. (For elaborations of the view, see van Fraassen 1985, 2001, and the helpful study, Rosen 1994.) There are also affinities here with the idea of fictionalism, according to which things in the world are and behave as if our best scientific theories are true (Vaihinger 1923/1911, Fine 1993).
The collapse of the logical empiricist program was in part facilitated by a historical turn in the philosophy of science in the 1960s, associated with authors such as Kuhn, Feyerabend, and Hanson. Kuhn's highly influential work, The Structure of Scientific Revolutions, played a significant role in establishing a lasting interest in a form of historicism about scientific knowledge, particularly among those interested in the nature of scientific practice. An underlying principle of the historical turn was to take the history of science and its practice seriously by furnishing descriptions of scientific knowledge in situ. Kuhn argued that the fruits of such history illuminate a recurring pattern: periods of so-called normal science, often fairly long in duration (consider, for example, the periods dominated by classical physics, or relativistic physics), punctuated by revolutions which lead scientific communities from one period of normal science into another. The implications for realism on this picture derive from Kuhn's characterization of knowledge on either side of a revolutionary divide. Two different periods of normal science, he said, are “incommensurable” with one another, in such a way as to render the world importantly different after a revolution (the phenomenon of “world change”). (Among the many detailed studies of these topics, see Horwich 1993, Hoyningen-Huene 1993, Sankey 1994, and Bird 2000.)
The notion of incommensurability concerns the comparison of theories operative during different periods of normal science. Kuhn held that if two theories are incommensurable, they are not comparable in a way that would permit the judgement that one is epistemically superior to the other, because different periods of normal science are characterized by different ‘paradigms’ (commitments to symbolic representations of the phenomena, metaphysical beliefs, values, and problem solving techniques). As a consequence, scientists in different periods of normal science generally employ different methods and standards, experience the world differently via “theory laden” perceptions, and most importantly for Kuhn (1983), differ with respect to the very meanings of their terms. This is a version of meaning holism or contextualism, according to which the meaning of a term or concept is exhausted by its connections to others within a paradigm. A change in any part of this network entails a change in meanings throughout—the term ‘mass’, for instance, has different meanings in the contexts of classical physics and relativistic physics. Thus, any judgment to the effect that the latter's characterization of mass is closer to the truth, or even that the relevant theories describe the same property, is importantly confused: it equivocates between two different concepts which can only be understood in an appropriately historicized manner, from the perspectives of the paradigms in which they occur.
The changes in perception, conceptualization, and language that Kuhn associated with changes in paradigm also fuelled his notion of world change, which further extends the contrast of the historicist approach with realism. There is an important sense, he maintained, in which after a scientific revolution, scientists live in a different world. This is a famously cryptic remark in Structure (pp. 111, 121, 150), but Kuhn (2000, p. 264) later gives it a neo-Kantian spin: paradigms function so as to create the reality of scientific phenomena, thereby allowing scientists to engage with this reality. On such a view, it would seem that not only the meanings but also the referents of terms are constrained by paradigmatic boundaries. And thus, reflecting an interesting parallel with neo-Kantian logical empiricism, the idea of a paradigm-transcendent world which is investigated by scientists, and about which one might have knowledge, has no obvious cognitive content. On this picture, empirical reality is structured by scientific paradigms, and this violates the metaphysical commitment of realism to the existence of a mind-independent world.
One outcome of the historical turn in the philosophy of science and its emphasis on scientific practice was a focus on the complex social interactions that inevitably surround and infuse the generation of scientific knowledge. Relations between experts, their students, and the public, collaboration and competition between individuals and institutions, and social, economic, and political contexts, became the subjects of an approach to studying the sciences known as the sociology of scientific knowledge, or SSK. Though in theory, a commitment to studying the sciences from a sociological perspective is interpretable in such a way as to be neutral with respect to realism (Lewens 2005), in practice, most accounts of science inspired by SSK are implicitly or explicitly antirealist. This antirealism in practice stems from the common suggestion that once one appreciates the role that social factors (using this as a generic term for the sorts of interactions and contexts indicated above) play in the production of scientific knowledge, a philosophical commitment to some form of “social constructivism” is inescapable, and this latter commitment is inconsistent with various aspects of realism.
The term ‘social construction’ refers to any knowledge-generating process in which what counts as a fact is substantively determined by social factors, and in which different social factors would likely generate facts that are inconsistent with what is actually produced. The important implication here is thus a counterfactual claim about the dependence of facts on social factors. There are numerous ways in which social determinants may be consistent with realism; for example, social factors might determine the directions and methodologies of research permitted, encouraged, and funded, but this by itself need not undermine a realist attitude with respect to the outputs of scientific work. Often, however, work in SSK takes the form of case studies that aim to demonstrate how particular decisions affecting scientific work were influenced by social factors which, had they been different, would have facilitated results that are inconsistent with those ultimately accepted as scientific fact. Some, including proponents of the so-called Strong Program in SSK, argue that for more general, principled reasons, such factual contingency is inevitable. (For a sample of influential approaches to social constructivism, see Latour & Woolgar 1986/1979, Knorr-Cetina 1981, Pickering 1984, Shapin & Schaffer 1985, and Collins & Pinch 1993; on the Strong Program, see Barnes, Bloor & Henry 1996; for a historical study of the transition from Kuhn to SSK and social constructivism, see Zammito 2004, chs. 5–7.)
By making social factors an inextricable, substantive determinant of what counts as true or false in the realm of the sciences (and elsewhere), social constructivism stands opposed to the realist contention that theories can be understood as furnishing knowledge of a mind-independent world. And as in the historicist approach, notions such as truth, reference, and ontology are here relative to particular contexts, and have no context-transcendent significance. The later work of Kuhn and Wittgenstein in particular was influential in the development of the Strong Program doctrine of “meaning finitism”, according to which the meanings of terms are conceived as social institutions: the various ways in which they are used successfully in communication within a linguistic community. This theory of meaning forms the basis of an argument to the effect that the meanings of scientific terms (inter alia) are products of social negotiation and need not be fixed or determinate, which further conflicts with a number of realist notions, including the idea of convergence towards true theories, improvements with respect to ontology or approximate truth, and determinate reference to mind-independent entities, properties, and relations. The subject of neo-Kantianism thus emerges here again, though its strength in constructivist doctrines varies significantly (for a robustly finitist view, see Kusch 2002, and for a more moderate constructivism, see Putnam's (1981, ch. 3) ‘internal realism’ and compare Ellis 1988).
Feminist critiques of science are linked thematically with SSK and forms of social constructivism in emphasizing the role of social factors as determinants of scientific fact, but extend the analysis in a more specific way, reflecting particular concerns about the marginalization of points of view based on gender, ethnicity, and socio-economic and political status. Not all feminist approaches are antirealist, but nearly all are normative, offering prescriptions for revising both scientific practice and concepts such as objectivity and knowledge that have direct implications for realism. In this regard it is useful to distinguish (as originally proposed in Harding 1986) between three broad approaches. Feminist empiricism focuses on the possibility of warranted belief within scientific communities as a function of the transparency and consideration of biases associated with different points of view that enter into scientific work. Standpoint theory investigates the idea that scientific knowledge is inextricably linked to perspectives arising from differences in such points of view. Feminist postmodernism rejects traditional conceptions of universal or absolute objectivity and truth. (As one might expect, these views are not always so neatly distinguishable; for some influential approaches, see Keller 1985, Harding 1986, Haraway 1988, Longino 1990, 2002, Alcoff & Potter 1993, and Nelson & Nelson 1996.)
The notion of objectivity has a number of traditional connotations—including disinterest (detachment, lack of bias) and universality (independence from any particular perspective or viewpoint)—which are commonly associated with knowledge of a mind-independent world. Feminist critiques are almost unanimous in rejecting scientific objectivity in the sense of disinterest, offering case studies that aim to demonstrate how the presence of (for example) androcentric bias in a scientific community can lead to the acceptance of one theory at the expense of alternatives (see Longino 1990, ch. 6, for two detailed cases). Arguably, the failure of objectivity in this sense is consistent with realism in principle, but only under certain conditions. If the relevant bias here is epistemically neutral (that is, if one's assessment of scientific evidence is not influenced by it one way or another), then realism remains at least one viable interpretation of the outputs of scientific work. In the more interesting case where bias is epistemically consequential, the prospects for realism are diminished, but may be enhanced by a scientific infrastructure that functions to bring it under scrutiny (by means of, for example, effective peer review, genuine consideration of minority views, etc.), thus facilitating corrective measures where appropriate. The contention that the sciences do not generally exemplify such an infrastructure is one motivation for the normativity of much feminist empiricism.
The challenge to objectivity in the sense of universality or perspective-independence is even more difficult to square with the possibility of realism. In a Marxist vein, some standpoint theorists argue that certain perspectives are epistemically privileged in the realm of science: viz., subjugated perspectives are epistemically privileged in comparison to dominant ones in light of the deeper insight afforded the former (just as the proletariat has a deeper knowledge of human potential than the superficial knowledge typical of those in power). Others portray epistemic privilege in a more splintered or deflationary manner, suggesting that no one point of view can be established as superior to another by any overarching standard of epistemological assessment. This view is most explicit in feminist postmodernism, which embraces a thoroughgoing relativism with respect to truth (and presumably approximate truth, scientific ontology, and other notions central to the various descriptions of realism). As in the case of Strong Program SSK, truth and epistemic standards are here defined only within the context of a perspective, and thus cannot be interpreted in any context-transcendent or mind-independent manner.
It is not uncommon to hear philosophers remark that the dialogue between the various forms of realism and antirealism surveyed in this article shows every symptom of a perennial philosophical dispute. The issues contested range so broadly and elicit so many competing intuitions (about which, arguably, reasonable people may disagree), that some question whether a resolution is even possible. This prognosis of potentially irresolvable dialectical complexity is relevant to a number of further views in the philosophy of science, some of which arise as direct responses to it. For example, Fine (1996/1986, chs. 7–8) argues that ultimately, neither realism nor antirealism is tenable, and recommends what he calls the “natural ontological attitude” (NOA) instead (see Rouse 1988 and 1991 for a detailed exploration of the view). NOA is intended to comprise a neutral, common core of realist and antirealist attitudes of acceptance of our best theories. The mistake that both parties make, Fine suggests, is to add further epistemological and metaphysical diagnoses to this shared position, such as pronouncements about which aspects of scientific ontology should be viewed as real, which are proper subjects of belief, and so on. Others contend that this sort of approach to scientific knowledge is non- or anti-philosophical, and defend philosophical engagement in debates about realism (Crasnow 2000, McArthur 2006). Musgrave (1989) argues that the view is either empty or collapses into realism.
The idea of setting aside the conflict between realist and antirealist approaches to science is also a recurring theme in traditional accounts of pragmatism and quietism. Regarding the first, Peirce (1998/1992; see, for instance, ‘How to Make Our Ideas Clear’, originally published in 1878) holds that the meaning of a proposition is given by its ‘practical consequences’ for human experience, such as implications for observation or problem-solving. For James (1979/1907), positive utility measured in these terms is the very marker of truth (where truth is whatever will be agreed in the ideal limit of scientific inquiry). Many of the points disputed by realists and antirealists—differences in epistemic commitment to scientific entities, properties, and relations based on observability, for example—are effectively non-issues on this view. It is nevertheless a form of antirealism on traditional readings of Peirce and James, since both suggest that truth in the pragmatist sense exhausts our conception of reality, thus running foul of the metaphysical dimension of realism. The notion of quietism is often associated with Wittgenstein's response to philosophical problems about which, he maintained, nothing sensible can be said. The suggestion here is not merely that engaging with such a problem is not to one's taste, but rather that, quite independently of one's interest or lack thereof, the dispute itself concerns a pseudo-problem. Blackburn (2002) suggests that disputes about realism may have this character.
One last take on the putative irresolvability of debates concerning realism focuses on certain meta-philosophical commitments adopted by the relevant interlocutors. Wylie (1986, p. 287), for instance, claims that ‘the most sophisticated positions on either side now incorporate self-justifying conceptions of the aim of philosophy and of the standards of adequacy appropriate for judging philosophical theories of science’. That is, different assumptions ab initio regarding what sorts of inferences are legitimate, what sorts of evidence reasonably support belief, whether there is a genuine demand for the explanation of observable phenomena in terms of underlying realities, and so on, may render some arguments between realists and antirealists question-begging. This diagnosis is arguably facilitated by van Fraassen's (1989, pp. 170–176, 1994, p. 182) intimation that neither realism nor antirealism (in his case, empiricism) is ruled out by plausible canons of rationality; each is sustained by a different conception of how much epistemic risk one should take in forming beliefs on the basis of one's evidence. An intriguing question then emerges as to whether disputes surrounding realism and antirealism are resolvable in principle, or whether, ultimately, internally consistent and coherent formulations of these positions should be regarded as irreconcilable but nonetheless permissible interpretations of scientific knowledge (Chakravartty 2007a, pp. 16–26).
- Alcoff, L. & E. Potter (eds.), 1993, Feminist Epistemologies, London: Routledge.
- Aronson, J. L., 1990, ‘Verisimilitude and Type Hierarchies’, Philosophical Topics, 18: 5–28.
- Aronson, J. L., R. Harré, & E. C. Way, 1994, Realism Rescued: How Scientific Progress is Possible, London: Duckworth.
- Barnes, B., D. Bloor & J. Henry, 1996, Scientific Knowledge, London: Athlone.
- Barnes, E. C., 2002, ‘The Miraculous Choice Argument for Realism’, Philosophical Studies, 111: 97–120.
- –––, 2008, The Paradox of Predictivism, Cambridge: Cambridge University Press.
- Ben-Menahem, Y., 2006, Conventionalism, Cambridge: Cambridge University Press.
- Bird, A., 2000, Thomas Kuhn, Chesham: Acumen.
- Blackburn, S., 2002, ‘Realism: Deconstructing the Debate’, Ratio, 15: 111–133.
- Boyd, R. N., 1983, ‘On the Current Status of the Issue of Scientific Realism’, Erkenntnis, 19: 45–90.
- –––, 1989, ‘What Realism Implies and What it Does Not’, Dialectica, 43: 5–29.
- –––, 1990, ‘Realism, Approximate Truth and Philosophical Method’, in C. W. Savage (ed.), Scientific Theories, Minnesota Studies in the Philosophy of Science, vol. 14, Minneapolis: University of Minnesota Press.
- –––, 1999, ‘Kinds as the “Workmanship of Men”: Realism, Constructivism, and Natural Kinds’, in J. Nida-Rümelin (ed.), Rationalität, Realismus, Revision: Proceedings of the Third International Congress, Gesellschaft für Analytische Philosophie, Berlin: de Gruyter.
- Brown, H. I., 1977, Perception, Theory and Commitment: The New Philosophy of Science, Chicago: University of Chicago Press.
- Brown, J. R., 1982, ‘The Miracle of Science’, Philosophical Quarterly, 32: 232–244.
- Busch, J., 2008, ‘No New Miracles, Same Old Tricks’, Theoria, 74: 102–114.
- –––, 2009, ‘Underdetermination and Rational Choice of Theories’, Philosophia, 37: 55–65.
- Carnap, R., 1950, ‘Empiricism, Semantics and Ontology’, Revue Internationale de Philosophie, 4: 20–40. Reprinted in Carnap, R. 1956: Meaning and Necessity: A Study in Semantic and Modal Logic, Chicago: University of Chicago Press.
- Cartwright, N., 1983, How the Laws of Physics Lie, Oxford: Clarendon.
- Chakravartty, A., 1998, ‘Semirealism’, Studies in History and Philosophy of Science, 29: 391–408.
- –––, 2007a, A Metaphysics for Scientific Realism: Knowing the Unobservable, Cambridge: Cambridge University Press.
- –––, 2007b, ‘Six Degrees of Speculation: Metaphysics in Empirical Contexts’, in B. Monton (ed.), Images of Empiricism, Oxford: Oxford University Press.
- –––, 2008, ‘What You Don't Know Can't Hurt You: Realism and the Unconceived’, Philosophical Studies, 137: 149–158.
- –––, 2010, ‘Truth and Representation in Science: Two Inspirations from Art’, in R. Frigg & M. Hunter (eds.), Beyond Mimesis and Convention: Representation in Art and Science, Boston Studies in the Philosophy of Science, Dordrecht: Springer.
- Chang, H., 2003, ‘Preservative Realism and Its Discontents: Revisiting Caloric’, Philosophy of Science, 70: 902–912.
- Churchland, P., 1985, ‘The Ontological Status of Observables: In Praise of the Superempirical Virtues’, in Churchland & Hooker (eds.), Images of Science: Essays on Realism and Empiricism, (with a reply from Bas C. van Fraassen), Chicago: University of Chicago Press.
- Clarke, S., 2001, ‘Defensible Territory for Entity Realism’, British Journal for the Philosophy of Science, 52: 701–722.
- Collins, H. & T. J. Pinch, 1993, The Golem, Cambridge: Cambridge University Press.
- Crasnow, S. L., 2000, ‘How Natural Can Ontology Be?’, Philosophy of Science, 67: 114–132.
- Cruse, P. & D. Papineau, 2002, ‘Scientific Realism Without Reference’, in M. Marsonet (ed.), The Problem of Realism, London: Ashgate.
- Day, M. & G. Botterill, 2008, ‘Contrast, Inference and Scientific Realism’, Synthese, 160: 249–267.
- Day, T. & H. Kincaid, 1994, ‘Putting Inference to the Best Explanation in its Place’, Synthese, 98: 271–295.
- Devitt, M., 1991, Realism and Truth, Oxford: Blackwell.
- –––, 2005, ‘Scientific Realism’, in F. Jackson & M. Smith (eds.), The Oxford Handbook of Contemporary Philosophy, Oxford: Oxford University Press.
- Dicken, P. & P. Lipton, 2006, ‘What can Bas Believe? Musgrave and van Fraassen on Observability’, Analysis, 66: 226–233.
- Doppelt, G., 2007, ‘Reconstructing Scientific Realism to Rebut the Pessimistic Meta-Induction’, Philosophy of Science, 74: 96–118.
- Duhem, P. M. M., 1954 (1906), The Aim and Structure of Physical Theory, P. P. Wiener (tr.), Princeton: Princeton University Press.
- van Dyck, M., 2007, ‘Constructive Empiricism and the Argument from Underdetermination’, in B. Monton (ed.), Images of Empiricism: Essays on Science and Stances, with a Reply From Bas C. Van Fraassen, Oxford: Oxford University Press.
- Earman, J., 1993, ‘Underdetermination, Realism, and Reason’, Midwest Studies in Philosophy, 18: 19–38.
- Ellis, B., 1988, ‘Internal Realism’, Synthese, 76: 409–434.
- Elsamahi, M., 1994, Proceedings of the Philosophy of Science Association, 1: 173–180.
- –––, 2005, ‘A Critique of Localised Realism’, Philosophy of Science, 72: 1350–1360.
- Fine, A., 1986, ‘Unnatural Attitudes: Realist and Antirealist Attachments to Science’, Mind, 95: 149–177.
- –––, 1990, ‘Piecemeal Realism’, Philosophical Studies, 61: 79–96.
- –––, 1993, ‘Fictionalism’, Midwest Studies in Philosophy, 18: 1–18.
- –––, 1996 (1986), The Shaky Game: Einstein, Realism and The Quantum Theory, 2nd edition. Chicago: University of Chicago Press.
- Franklin, A., 1986, The Neglect of Experiment, Cambridge: Cambridge University Press.
- –––, 1990, Experiment, Right or Wrong, Cambridge: Cambridge University Press.
- French, S., 1998, ‘On the Withering Away of Physical Objects’, in E. Castellani (ed.), Interpreting Bodies: Classical and Quantum Objects in Modern Physics, pp. 93–113. Princeton: Princeton University Press.
- –––, 2006, ‘Structure as a Weapon of the Realist’, Proceedings of the Aristotelian Society, 106: 1–19.
- French, S. & H. Kamminga (eds.), 1993, Correspondence, Invariance and Heuristics, Dordrecht: Kluwer.
- Friedman, M., 1999, Reconsidering Logical Positivism, Cambridge: Cambridge University Press.
- Frost-Arnold, G., 2010, ‘The No-Miracles Argument for Realism: Inference to an Unacceptable Explanation’, Philosophy of Science, 77: 35–58.
- Giere, R. N., 1988, Explaining Science: A Cognitive Approach, Chicago: University of Chicago Press.
- Giere, R. N. & A. W. Richardson (eds.), 1997, Origins of Logical Empiricism (Minnesota Studies in the Philosophy of Science, Volume 16), Minneapolis: University of Minnesota Press.
- Godfrey-Smith, P., 2008, ‘Recurrent Transient Underdetermination and the Glass Half Full’, Philosophical Studies, 137: 141–148.
- Hacking, I., 1982, ‘Experimentation and Scientific Realism’, Philosophical Topics, 13: 71–87.
- –––, 1983, Representing and Intervening, Cambridge: Cambridge University Press.
- –––, 1985, ‘Do We See Through a Microscope?’, in Churchland & Hooker (eds.), Images of Science: Essays on Realism and Empiricism, (with a reply from Bas C. van Fraassen), Chicago: University of Chicago Press.
- Harding, S., 1986, The Science Question in Feminism, Ithaca: Cornell University Press.
- Hardin, C. L. & A. Rosenberg, 1982, ‘In Defence of Convergent Realism’, Philosophy of Science, 49: 604–615.
- Harker, D., 2008, ‘On the Predilections for Predictions’, British Journal for the Philosophy of Science, 59: 429–453.
- –––, 2010, ‘Two Arguments for Scientific Realism Unified’, Studies in History and Philosophy of Science, 41: 192–202.
- Harman, G., 1965, ‘The Inference to the Best Explanation’, Philosophical Review, 74: 88–95.
- Haraway, D., 1988, ‘Situated Knowledges’, Feminist Studies, 14: 575–600.
- Hitchcock, C. & E. Sober, 2004, ‘Prediction versus Accommodation and the Risk of Overfitting’, British Journal for the Philosophy of Science, 55: 1–34.
- Horwich, P. (ed.), 1993, World Changes: Thomas Kuhn and the Nature of Science, Cambridge, MA: MIT Press.
- Hoyningen-Huene, P., 1993, Reconstructing Scientific Revolutions: The Philosophy of Science of Thomas S. Kuhn, Chicago: University of Chicago Press.
- Howson, C., 2000, Hume's Problem: Induction and the Justification of Belief, Oxford: Oxford University Press.
- Humphreys, P., 2004, Extending Ourselves: Computational Science, Empiricism, and Scientific Method, Oxford: Oxford University Press.
- James, W., 1979 (1907), Pragmatism, Cambridge, MA: Harvard University Press.
- Jones, R., 1991, ‘Realism About What?’, Philosophy of Science, 58: 185–202.
- Keller, E. F., 1985, Reflections on Gender and Science, New Haven: Yale University Press.
- Kitcher, P., 1993, The Advancement of Science: Science Without Legend, Objectivity without Illusions, Oxford: Oxford University Press.
- Knorr-Cetina, K., 1981, The Manufacture of Knowledge, Oxford: Pergamon.
- Kripke, S. A., 1980, Naming and Necessity, Oxford: Blackwell.
- Kuhn, T.S., 1970 (1962), The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
- –––, 1983, ‘Commensurability, Comparability, Communicability’, Proceedings of the Philosophy of Science Association, 1982: 669–688.
- –––, 2000, The Road Since Structure, J. Conant & J. Haugeland (eds.), Chicago: University of Chicago Press.
- Kukla, A., 1998, Studies in Scientific Realism, Oxford: Oxford University Press.
- Kusch, M., 2002, Knowledge by Agreement: the Programme of Communitarian Epistemology, Oxford: Oxford University Press.
- Ladyman, J., 1998, ‘What is Structural Realism?’, Studies in History and Philosophy of Science, 29: 409–424.
- Ladyman, J., I. Douven, L. Horsten, & B. C. van Fraassen, 1997, ‘A Defence of van Fraassen's Critique of Abductive Inference: Reply to Psillos’, Philosophical Quarterly, 47: 305–321.
- Ladyman, J. & D. Ross, 2007, Every Thing Must Go: Metaphysics Naturalized, Oxford: Oxford University Press.
- Lange, M., 2002, ‘Baseball, Pessimistic Inductions and the Turnover Fallacy’, Analysis, 62: 281–285.
- Latour, B. & S. Woolgar, 1986 (1979), Laboratory Life: The Construction of Scientific Facts, (2nd ed.), Princeton: Princeton University Press.
- Laudan, L., 1981, ‘A Confutation of Convergent Realism’, Philosophy of Science, 48: 19–48.
- –––, 1984, ‘Discussion: Realism Without the Real’, Philosophy of Science, 51: 156–162.
- –––, 1990, ‘Demystifying Underdetermination’, in C. W. Savage (ed.), Scientific Theories, Minnesota Studies in the Philosophy of Science, vol. 14, Minneapolis: University of Minnesota Press.
- Laudan, L. & J. Leplin, 1991, ‘Empirical Equivalence and Underdetermination’, Journal of Philosophy, 88: 449–472.
- Leeds, S., 2007, ‘Correspondence Truth and Scientific Realism’, Synthese, 159: 1–21.
- Leplin, J., 1981, ‘Truth and Scientific Progress’, Studies in History and Philosophy of Science, 12: 269–292.
- –––, 1997, A Novel Defence of Scientific Realism, Oxford: Oxford University Press.
- Lewens, T., 2005, ‘Realism and the Strong Program’, British Journal for the Philosophy of Science, 56: 559–577.
- Lewis, P., 2001, ‘Why the Pessimistic Induction is a Fallacy’, Synthese, 129: 371–380.
- Lipton, P., 1990, ‘Prediction and Prejudice’, International Studies in the Philosophy of Science, 4: 51–65.
- –––, 1993, ‘Is the Best Good Enough?’, Proceedings of the Aristotelian Society, 93: 89–104.
- –––, 1994, ‘Truth, Existence, and the Best Explanation’, in A. A. Derksen (ed.), The Scientific Realism of Rom Harré, Tilburg: Tilburg University Press.
- –––, 2004 (1991), Inference to the Best Explanation, 2nd edition. London: Routledge.
- Longino, H., 1990, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry, Princeton: Princeton University Press.
- –––, 2002, The Fate of Knowledge, Princeton: Princeton University Press.
- Lyons, T. D., 2003, ‘Explaining the Success of a Scientific Theory’, Philosophy of Science, 70: 891–901.
- –––, 2005, ‘Towards a Purely Axiological Scientific Realism’, Erkenntnis, 63: 167–204.
- –––, 2006, ‘Scientific Realism and the Stratagema de Divide et Impera’, British Journal for the Philosophy of Science, 57: 537–560.
- Magnus, P.D. & C. Callender, 2004, ‘Realist Ennui and the Base Rate Fallacy’, Philosophy of Science, 71: 320–338.
- Massimi, M., 2004, ‘Non-Defensible Middle Ground for Experimental Realism: Why We are Justified to Believe in Colored Quarks’, Philosophy of Science, 71: 36–60.
- Maxwell, G., 1962, ‘On the Ontological Status of Theoretical Entities’, in H. Feigl & G. Maxwell (eds.), Scientific Explanation, Space, and Time, Minnesota Studies in the Philosophy of Science, Volume III, Minneapolis: University of Minnesota Press.
- McAllister, J. W., 1993, ‘Scientific Realism and the Criteria for Theory-Choice’, Erkenntnis, 38: 203–222.
- McArthur, D., 2006, ‘The Anti-Philosophical Stance, the Realism Question and Scientific Practice’, Foundations of Science, 11: 369–397.
- McLeish, C., 2005, ‘Realism Bit by Bit: Part I: Kitcher on Reference’, Studies in History and Philosophy of Science, 36: 667–685.
- –––, 2006, ‘Realism Bit by Bit: Part 2: Disjunctive Partial Reference’, Studies in History and Philosophy of Science, 37: 171–190.
- Miller, D., 1974, ‘Popper's Qualitative Theory of Verisimilitude’, British Journal for the Philosophy of Science, 25: 166–177.
- –––, 1976, ‘Verisimilitude Redeflated’, British Journal for the Philosophy of Science, 27: 363–380.
- Miller, R. W., 1987, Fact and Method: Explanation, Confirmation and Reality in the Natural and the Social Sciences, Princeton: Princeton University Press.
- Morganti, M., 2004, ‘On the Preferability of Epistemic Structural Realism’, Synthese, 142: 81–107.
- Morrison, M., 1990, ‘Theory, Intervention and Realism’, Synthese, 82: 1–22.
- Musgrave, A., 1985, ‘Constructive Empiricism and Realism’, in P. M. Churchland & C. A. Hooker (eds.), Images of Science: Essays on Realism and Empiricism, (with a reply from Bas C. van Fraassen), Chicago: University of Chicago Press.
- –––, 1988, ‘The Ultimate Argument for Scientific Realism’, in R. Nola (ed.), Relativism and Realism in Sciences, Dordrecht: Kluwer.
- –––, 1989, ‘Noa's Ark—Fine for Realism’, Philosophical Quarterly, 39: 383–398.
- –––, 1992, ‘Discussion: Realism About What?’, Philosophy of Science, 59: 691–697.
- Nelson, L. H. & J. Nelson (eds.), 1996, Feminism, Science, and the Philosophy of Science, Dordrecht: Kluwer.
- Niiniluoto, I., 1987, Truthlikeness, Dordrecht: Reidel.
- –––, 1998, ‘Verisimilitude: The Third Period’, British Journal for the Philosophy of Science, 49: 1–29.
- –––, 1999, Critical Scientific Realism, Oxford: Oxford University Press.
- Nola, R., 2008, ‘The Optimistic Meta-Induction and Ontological Continuity: The Case of the Electron’, in L. Soler, H. Sankey, & P. Hoyningen-Huene (eds.), Rethinking Scientific Change and Theory Comparison: Stabilities, Ruptures, Incommensurabilities?, Dordrecht: Springer.
- Oddie, G., 1986a, ‘The Poverty of the Popperian Program for Truthlikeness’, Philosophy of Science, 53: 163–178.
- –––, 1986b, Likeness to Truth, Dordrecht: Reidel.
- Okasha, S., 2002, ‘Underdetermination, Holism and the Theory/Data Distinction’, Philosophical Quarterly, 52: 302–319.
- Papineau, D., 2010, ‘Realism, Ramsey Sentences and the Pessimistic Meta-Induction’, Studies in History and Philosophy of Science, 41: 375–385.
- Peirce, C. S., 1998 (1992), The Essential Peirce, N. Houser, C. Kloesel, & the Peirce Edition Project (eds.), Bloomington: Indiana University Press.
- Pickering, A., 1984, Constructing Quarks: A Sociological History of Particle Physics, Edinburgh: Edinburgh University Press.
- Poincaré, H., 1952 (1905), Science and Hypothesis, New York: Dover.
- Popper, K. R., 1972, Conjectures and Refutations: The Growth of Scientific Knowledge, 4th edition. London: Routledge & Kegan Paul.
- Post, H. R., 1971, ‘Correspondence, Invariance and Heuristics: In Praise of Conservative Induction’, Studies in History and Philosophy of Science, 2: 213–255.
- Psillos, S., 1995, ‘Is Structural Realism the Best of Both Worlds?’, Dialectica, 49: 15–46.
- –––, 1996, ‘On van Fraassen's Critique of Abductive Reasoning’, Philosophical Quarterly, 46: 31–47.
- –––, 1999, Scientific Realism: How Science Tracks Truth, London: Routledge.
- –––, 2001, ‘Is Structural Realism Possible?’, Philosophy of Science, 68: S13–S24.
- –––, 2006, ‘The Structure, the Whole Structure and Nothing But the Structure?’, Philosophy of Science, 73: 560–570.
- –––, 2009, Knowing the Structure of Nature: Essays on Realism and Explanation, London: Palgrave Macmillan.
- Putnam, H., 1975, Mathematics, Matter and Method, Cambridge: Cambridge University Press.
- –––, 1978, Meaning and the Moral Sciences, London: Routledge.
- –––, 1981, Reason, Truth and History, Cambridge: Cambridge University Press.
- –––, 1982, ‘Three Kinds of Scientific Realism’, Philosophical Quarterly, 32: 195–200.
- –––, 1985 (1975), Philosophical Papers, vol. 2: Mind, Language and Reality, Cambridge: Cambridge University Press.
- Quine, W., 1953, ‘Two Dogmas of Empiricism’, in From a Logical Point of View, pp. 20–46. Cambridge, MA: Harvard University Press.
- Resnik, D. B., 1994, ‘Hacking's Experimental Realism’, Canadian Journal of Philosophy, 24: 395–412.
- Richardson, A. W., 1998, Carnap's Construction of the World, Cambridge: Cambridge University Press.
- Richardson, A. W. & T. E. Uebel (eds.), 2007, The Cambridge Companion to Logical Empiricism, Cambridge: Cambridge University Press.
- Rosen, G., 1994, ‘What is Constructive Empiricism?’, Philosophical Studies, 74: 143–178.
- Rouse, J., 1988, ‘Arguing for the Natural Ontological Attitude’, Proceedings of the Philosophy of Science Association, 1988, vol. 1: 294–301.
- –––, 1991, ‘The Politics of Postmodern Philosophy of Science’, Philosophy of Science, 58: 607–627.
- Saatsi, J., 2005a, ‘Reconsidering the Fresnel-Maxwell Theory Shift: How the Realist Can Have Her Cake and EAT it Too’, Studies in History and Philosophy of Science, 36: 509–538.
- –––, 2005b, ‘On the Pessimistic Induction and Two Fallacies’, Philosophy of Science, 72: 1088–1098.
- Salmon, W. C., 1984, Scientific Explanation and the Causal Structure of the World, Princeton: Princeton University Press.
- Sankey, H., 1994, The Incommensurability Thesis, London: Ashgate.
- Shapere, D., 1982, ‘The Concept of Observation in Science and Philosophy’, Philosophy of Science, 49: 485–525.
- Shapin, S. & S. Schaffer, 1985, Leviathan and the Air Pump, Princeton: Princeton University Press.
- Smart, J. J. C., 1963, Philosophy and Scientific Realism, London: Routledge & Kegan Paul.
- Smith, P., 1998, ‘Approximate Truth and Dynamical Theories’, British Journal for the Philosophy of Science, 49: 253–277.
- Stanford, P. K., 2001, ‘Refusing the Devil's Bargain: What Kind of Underdetermination Should We Take Seriously?’, Philosophy of Science, 68: S1–S12.
- –––, 2003a, ‘Pyrrhic Victories for Scientific Realism’, Journal of Philosophy, 100: 553–572.
- –––, 2003b, ‘No Refuge for Realism: Selective Confirmation and the History of Science’, Philosophy of Science, 70: 913–925.
- –––, 2006, Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives, Oxford: Oxford University Press.
- Tichý, P., 1974, ‘On Popper's Definitions of Verisimilitude’, British Journal for the Philosophy of Science, 25: 155–160.
- –––, 1976, ‘Verisimilitude Redefined’, British Journal for the Philosophy of Science, 27: 25–42.
- –––, 1978, ‘Verisimilitude Revisited’, Synthese, 38: 175–196.
- Vaihinger, H., 1923 (1911), The Philosophy of ‘As If’, C.K. Ogden (tr.), London: Kegan Paul.
- van Fraassen, B. C., 1980, The Scientific Image, Oxford: Oxford University Press.
- –––, 1985, ‘Empiricism in the Philosophy of Science’, in Churchland & Hooker (eds.), Images of Science: Essays on Realism and Empiricism, (with a reply from Bas C. van Fraassen), Chicago: University of Chicago Press.
- –––, 1989, Laws and Symmetry, Oxford: Clarendon.
- –––, 1994, ‘Gideon Rosen on Constructive Empiricism’, Philosophical Studies, 74: 179–192.
- –––, 2001, ‘Constructive Empiricism Now’, Philosophical Studies, 106: 151–170.
- Votsis, I., 2003, ‘Is Structure Not Enough?’, Philosophy of Science, 70: 879–890.
- Weston, T., 1992, ‘Approximate Truth and Scientific Realism’, Philosophy of Science, 59: 53–74.
- White, R., 2003, ‘The Epistemic Advantage of Prediction Over Accommodation’, Mind, 112: 654–683.
- Worrall, J., 1989, ‘Structural Realism: The Best of Both Worlds?’, Dialectica, 43: 99–124.
- –––, 2009, ‘Miracles, Pessimism, and Scientific Realism’, PhilPapers, http://philpapers.org/rec/WORMPA.
- Wray, K. B., 2007, ‘A Selectionist Explanation of the Success and Failures of Science’, Erkenntnis, 67: 81–89.
- –––, 2008, ‘The Argument from Underconsideration as Grounds for Anti-Realism: A Defence’, International Studies in the Philosophy of Science, 22: 317–326.
- –––, 2010, ‘Selection and Predictive Success’, Erkenntnis, 72: 365–377.
- Wylie, A., 1986, ‘Arguments for Scientific Realism: The Ascending Spiral’, American Philosophical Quarterly, 23: 287–298.
- Zammito, J. H., 2004, A Nice Derangement of Epistemes: Post-Positivism in the Study of Science from Quine to Latour, Chicago: University of Chicago Press.
- Boyd, Richard, “Scientific Realism”, The Stanford Encyclopedia of Philosophy (Summer 2010 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2010/entries/scientific-realism/>. [This was the previous entry on scientific realism in the Stanford Encyclopedia of Philosophy — see the version history.]
For helpful comments on the whole or parts of this article, the author is very grateful to Jacob Busch, Arthur Fine, Gregory Frost-Arnold, David Harker, Christopher Hitchcock, Alex Koo, Timothy D. Lyons, Ilkka Niiniluoto, K. Brad Wray, and Bas C. van Fraassen.