Until recently, epistemology—the study of knowledge and justified belief—was heavily individualistic in focus. The emphasis was on evaluating doxastic attitudes (beliefs and disbeliefs) of individuals in abstraction from their social environment. The result is a distorted picture of the human epistemic situation, which is largely shaped by social relationships and institutions. Social epistemology seeks to redress this imbalance by investigating the epistemic effects of social interactions and social systems. After reviewing the history of the field in section 1, we provide a three-part taxonomy for social epistemology in section 2. The first part is concerned with inputs to individual doxastic decisions from other people’s assertions and opinions. The second part investigates the epistemic features of collective doxastic agents, such as courts and scientific panels. Finally, the third part studies epistemic properties of social institutions and systems: how they improve or impair epistemic outcomes for their individual members or the systems as a whole. We offer overviews of these three types of social epistemology in sections 3, 4 and 5 respectively.
- 1. Background
- 2. Giving Shape to the Field: A Taxonomy of Social Epistemology
- 3. First Branch of Social Epistemology: Testimony and Peer Disagreement
- 4. Second Branch of Social Epistemology: The Nature and Epistemology of Collective Agents
- 5. Third Branch of Social Epistemology: Institutions and Systems
- Academic Tools
- Other Internet Resources
- Related Entries
In the long history of philosophy there have been comparatively few signs of social epistemology until recently. Treatments of such topics that would nowadays be subsumed under this heading have occurred in various periods (think of discussions of testimony by Hume and Reid), but they were never assembled into a single unified package. In the second half of the 20th century, however, philosophers and theorists of assorted stripes launched a variety of debunking movements aimed at traditional epistemology. Although most of these writers did not (at first) use the phrase “social epistemology”, it was the socializing direction of their thought that made the phrase appropriate to their work. In the 1960s and 1970s there was a convergence of such thinkers who attacked the notion of truth and objectivity, a constellation that gained powerful influence in the academic community. The relevant authors included Thomas Kuhn, Michel Foucault, and members of the “Strong Programme” in the sociology of science. These writers sought to replace “rational” approaches to science and other intellectual activities with political, military, and/or other arational models of cognitive affairs. Many of them challenged the intelligibility of the truth concept, or challenged the feasibility of truth acquisition. In the social studies of science, practitioners such as Bruno Latour and Steve Woolgar (1986) rejected the ideas of truth or fact as traditionally understood. So-called “facts”, they argued, are not discovered or revealed by science, but rather are “constructed”, “constituted”, or “fabricated” when scientific statements come to be accepted, or are no longer contested. “There is no object beyond discourse … the organization of discourse is the object” (1986: 73). Discourse being a social phenomenon, what they were saying, in effect, is that facts were to be eliminated in favor of social phenomena.
(Whether social facts should also be eliminated is a question they didn’t address very clearly.)
Although few practicing philosophers of the period endorsed these ideas, at least one influential philosopher, Richard Rorty (1979), seemed to be a camp follower. He contrasted the conception of knowledge as “accuracy of representation” (which he rejected) with a conception of knowledge as the “social justification of belief” (1979: 170). His notion of “social justification”, it appears, simply amounted to the practice of “keeping the conversation going” (whatever this meant) rather than the classical project of pursuing “objective truth” or rationality (1979: 377).
Sharply departing from these debunking themes, contemporary social epistemology is substantially continuous with classical epistemology. It sees no need to reject or distance itself from the epistemological projects of the past. Even social practices, after all, can be—and often are—aimed at finding the truth. Such social practices have a hit-or-miss record; but the same could be said of individual practices. At any rate, epistemologists can engage in their traditional enterprise of appraising alternative methods in terms of their capacities or propensities to achieve this kind of goal. In social epistemology, however, the relevant “methods” might be social practices. Thus, the type of social epistemology one finds in today’s philosophical literature does not call for any large-scale debunking of classical epistemology. Such epistemology can survive, and even thrive, with an expanded conception of how the truth-goal (and the justification and rationality goals) can be served, namely, with the help of well-designed social and interpersonal practices and institutions.
Initial moves toward a positive form of social epistemology (as opposed to a debunking form) were begun in the mid-1980s, largely in response to the debunkers. Alvin Goldman offered promissory notes (1978: 509–510; 1986: 1, 5–6, 136–138) toward such a conception and then developed a more detailed objectivist program in a contribution to a special issue of Synthese edited by Frederick Schmitt (Goldman 1987). In that latter contribution, Goldman advocated an avowedly truth-oriented (“veritistic”) approach to social epistemic evaluation. In the same issue Steve Fuller (1987) pursued a line more akin to the debunkers, and elaborated his position the following year with a monograph (Fuller 1988). That same year Fuller launched a journal entitled Social Epistemology, which became a prime venue for science studies work. This path, however, does not express what most philosophers now pursue under the heading of social epistemology. The science-and-technology-studies model, as it is now called, continues to have much in common with the history and sociology of science (à la Kuhn especially) and rather little in common with traditional epistemology.
The decade of the 1990s (and its run-up) saw the publication of several monographs and chapter-length treatments of various branches of social epistemology, followed by a wide-angle depiction of the field as a whole. C.A.J. Coady’s (1992) book-length treatment of testimony was a core contribution, especially given testimony’s centrality to social epistemology. The same may be said of Edward Craig’s monograph, Knowledge and the State of Nature (1990). The final chapter of Philip Kitcher’s (1993) book The Advancement of Science was devoted to the organization of cognitive labor in science, building on a journal article on the same theme (Kitcher 1990). Kitcher highlighted diversity within scientific communities as an important tool in the pursuit of truth. Margaret Gilbert’s On Social Facts (1989) made a forceful case for the existence of “plural subjects”, a crucial metaphysical thesis that provides one possible foundation for group-oriented, or collective, social epistemology. Alvin Goldman published a series of papers applying social epistemology to a number of topics, including argumentation (Goldman 1994), freedom of speech (Goldman and Cox 1996), legal procedure (Talbott and Goldman 1998), and scientific inquiry (Goldman and Shaked 1991). His book Knowledge in a Social World (1999) showed how classical epistemology, with its focus on the values of truth possession and error avoidance, could be applied to the social domain without abandoning its traditional rigor. Among the domains covered were testimony, argumentation, the Internet, science, law, and democracy. It sought to show how epistemology can have real-world applications even within a “veritistic” framework.
The years since 2000 have witnessed a surge of activity in social epistemology. This surge was encouraged by the 2004 launch of the journal Episteme, which is heavily dedicated to work in the field. The present entry will explore many of these developments in some detail; but it begins with a taxonomy of the field (based on Goldman 2010/2011) to help organize and distinguish the multifarious enterprises found under the umbrella of social epistemology.
Traditional epistemology focuses on individual agents and their doxastic states or attitudes. Doxastic attitudes are a sub-species of propositional attitudes, ones that make categorical or graded judgments concerning the truth or falsity of their propositional contents. A doxastic attitude is right or wrong—accurate or inaccurate—as a function of the genuine truth-value of its propositional content. In addition to assessing beliefs as accurate or inaccurate, token attitudes (e.g., George’s believing Q at time t) can be evaluated along various epistemic dimensions such as justified or unjustified, rational or irrational, and knowledge-qualifying or not knowledge-qualifying.
Traditional epistemology has primarily concerned itself with formulating criteria for the epistemic evaluation of individuals’ doxastic states. Such evaluations may be based on whether the attitude token comports with the agent’s evidence or whether it is produced by a reliable belief-forming process. Given that justification evaluation is the paradigm of individual epistemology, what is (or are) the paradigm task(s) for social epistemology?
There are different ways in which an epistemic activity can count as “social”. One such way is for an individual agent to base a doxastic decision on what we may dub “social evidence”, where by social evidence we shall understand evidence concerning the utterances, messages, deeds, or thoughts of other people. A great deal of evidence that epistemic agents possess, of course, does not involve others at all. Consider the proposition, “A poodle pooped on Sylvia’s doorstep”. If Sylvia’s evidence for this proposition is purely perceptual, social epistemology may have no occasion to weigh in on the matter. But if Sylvia doesn’t witness any such canine action (because she isn’t home at the relevant time), she might still believe the proposition—and believe it justifiedly—based on different evidence. Her next-door neighbor might relate the incident to her when she comes home. The justifiedness of Sylvia’s belief will then hinge on criteria for justified trust in testimony, a staple problem of social epistemology.
Here we can introduce a first branch of social epistemology:
- Social Epistemology I:
- Assessing the epistemic quality of individuals’ doxastic attitudes where social evidence is used.
The first branch of social epistemology, so characterized, subsumes two of the most intensively debated topics in the field: (A) the problem of testimony-based justification, and (B) the problem of peer disagreement. These topics will be addressed in due course.
Obviously, what makes the first branch of social epistemology social is not the character of the doxastic agents who are studied. Rather, it is the social character of the evidence (relative to the agent). The second branch of social epistemology, by contrast, is social in an altogether different way. It is social because the doxastic agent is a social, or collective, entity. This branch of social epistemology starts by assuming that there are group entities that possess doxastic attitudes analogous to those possessed by individual humans. That there are such agents is a question of ontology (or perhaps philosophy of mind). It is undeniable, however, that we often acknowledge such group subjects in everyday thought and speech. If we are philosophically content with this practice, then the adoption of such doxastic attitudes by various groups gives rise to epistemological questions. Under what conditions are these entities justified in adopting a specified doxastic attitude, or making such a judgment? How does it depend—assuming it does so depend—on the various doxastic attitudes of the group’s members? Here, then, is a core problem for the second branch of social epistemology:
- Social Epistemology II:
- Assessing the epistemic quality of group doxastic attitudes (whatever their provenance may be).
A third branch of social epistemology has a wider assortment of manifestations. Within this branch the locus of activity ranges from social systems to social practices, institutions, or patterns of interaction. For example, a social system might formally select a pattern of rewards or punishments to motivate its actors or agents to engage in certain activities rather than others. Science as a social institution, for example, has adopted a reward system that confers honors, prizes, or credit to scientists and mathematicians who make important discoveries or prove major theorems. Legal systems adopt trial procedures designed to issue in judgments of defendants’ guilt or innocence. Choices among alternative procedures can be assessed in terms of how the chosen procedures “perform” in yielding judgments with high truth ratios. How often does a given trial system generate accurate judgments? How does it (or would it, if adopted) compare with alternative systems? Turning to science, how well does a given reward system function to motivate scientists to engage in fruitful inquiry that ultimately produces new knowledge?
Instead of deliberately adopted institutional arrangements, the same questions can be asked about alternative patterns of social interaction, which can also generate truth-linked consequences. Different patterns of communication, for example, and different choices of participants in a collective activity can vary in their degree of epistemic success. What are the best kinds of systems or practices? To what extent does the deployment of experts, for example, enhance a group’s accuracy, and how should relevant experts be identified? Some authors contend that diversity trumps expertise when it comes to group problem-solving. Is this correct? With these questions in mind, we can formulate the third branch of social epistemology as follows:
- Social Epistemology III:
- Assessing the epistemic consequences of adopting certain institutional arrangements or systemic relations as opposed to alternatives.
Under the aegis of this third branch of social epistemology, philosophers (and other professionals) can weigh the epistemic value of choosing one kind of institution or system rather than others. In the real world, of course, epistemic properties of an institution or system may not be the paramount properties to consider; they are certainly not the only ones of interest. But this doesn’t mean they should be neglected. Veritistic properties of a trial system, for example, are surely a major factor to consider when assessing a trial system’s level of success. It is generally conceded that we don’t want a system that commonly yields convictions of the innocent.
Epistemologists often speak of epistemic “sources”, which refers roughly to ways we can get knowledge or justified belief. Standard examples of such sources in traditional (individual) epistemology are perception, introspection, memory, deductive and inductive reasoning, and so forth. When turning to social epistemology, we quickly encounter an ostensibly new kind of source, viz., testimony. Knowledge or justification can be acquired, it seems, by hearing what others say or reading what they write (and believing it).
In the realm of epistemic sources, a distinction can be drawn between basic and non-basic (derived) sources. Vision is presumably a basic source of justification, but not all sources are basic. If testimony is also a source, is it a basic or non-basic source? David Hume argued for non-basicness. Although we are generally entitled to trust what others tell us, we are so entitled only in virtue of what we have learned from other (basic) sources. Here’s how the story goes, more fully. Each of us can remember many occasions on which people told us things that we independently verified (by perception) and found to be true. This reliable track record from the past—which we remember—warrants us in inferring (via induction) that testimony is generally reliable. From this we can conclude that any new instance of testimony we encounter is also likely to be true (assuming we have no defeaters). As James Van Cleve formulates the view,
testimony gives us justified belief … not because it shines by its own light, but because it has often enough been revealed true by our other lights. (Van Cleve 2006: 69)
This sort of view is called reductionism about testimony, because it “reduces” the justificational force of testimony to the combined justificational forces of perception, memory, and inductive inference.
More precisely, this view is usually called global reductionism, because it allows hearers of testimony to be justified in believing particular instances of testimony by inferential appeal to testimony’s general reliability. However, global reductionism has come under fire. C.A.J. Coady argues that the observational base of ordinary epistemic agents is much too small and limited to allow an induction to the general reliability of testimony. Coady writes:
[I]t seems absurd to suggest that, individually, we have done anything like the amount of field-work that [reductionism] requires … many of us have never seen a baby born, nor have most of us examined the circulation of the blood nor the actual geography of the world … nor a vast number of other observations that [reductionism] would seem to require. (Coady 1992: 82)
An alternative to global reductionism is local reductionism (Fricker 1994). Local reductionism does not require a hearer to be justified in believing that testimony is generally reliable. It only requires a hearer to be justified in believing that the particular speaker whose current testimony is the target is reliable (or reliable—and sincere—about the specific topic she was addressing). This is a much weaker and more easily satisfied requirement than that of global reductionism.
Local reductionism may still be too strong, however, but for a different reason. Is a speaker S trustworthy for hearer H only if H has positive evidence or justification for the reliability of this particular speaker S? This is far from clear. If I am at an airport or a train station and hear a public announcement of the gate or track for my departure, am I justified in believing that testimony only if I have evidence for the announcer’s general reliability (or even her reliability about departures)? I do not normally gather such evidence for a given public address announcer, but surely I am justified in trusting such announcements.
Given these problems for both kinds of reductionism, some epistemologists embrace testimonial anti-reductionism (Coady 1992; Burge 1993; Foley 1994). Anti-reductionism holds that testimony is itself a basic source of evidence or justifiedness. No matter how little positive evidence a hearer has about the reliability and sincerity of a given speaker, or of speakers in general, she has default or prima facie warrant in believing what the speaker says. This thesis is endorsed, for example, by Tyler Burge, who writes:
[A] person is entitled to accept as true something that is presented as true and that is intelligible to him, unless there are stronger reasons not to do so. (Burge 1993: 457)
Experience, of course, can provide defeaters for such prima facie justification, so that, on balance the receiver may not be justified. Absent such defeaters, however, justification arrives “for free”. The hearer needs no positive reason for believing the speaker’s report.
According to anti-reductionism, then, a hearer doesn’t need positive support for testimonial reliability, or the speaker’s sincerity, to justifiedly believe what the speaker says. Only a weaker condition is imposed: that the hearer not have evidence that defeats the speaker’s being reliable and sincere. Since this negative requirement is extremely weak, many anti-reductionists add a further requirement: that the speaker actually be competent and sincere. However, Jennifer Lackey (2008: 168 ff) argues that these conditions do not suffice for hearer justifiedness. Suppose Sam sees an alien creature in the woods drop something that seems to be a diary, written in a language that appears to be English. Sam has no evidence for or against the sincerity and reliability of aliens as testifiers, so he lacks both positive reasons for trusting the diary’s contents and negative reasons against trusting them. Anti-reductionism implies that if the alien is both reliable and sincere, Sam is justified in believing the diary’s contents. Intuitively, however, this is dubious.
Reductionism and anti-reductionism both assume that testimonial beliefs can be justified because testimony provides evidence for the truth of what is asserted. Proponents of the assurance or interpersonal view of testimony (Ross 1986; Hinchman 2005; Moran 2006; Faulkner 2007; Fricker 2012; Zagzebski 2012) reject this assumption. On their view, testimonial beliefs are justified not (or not only) because testimony is evidence, but because testimony is assurance. More precisely, testimonial justification has its roots in the fact that the speaker takes responsibility for the truth of her assertion (Moran 2006) or invites the hearer to trust her (Hinchman 2005). The assurance view is motivated by the following line of argument. As Moran points out,
when the hearer believes the speaker, he not only believes what is said but does so on the basis of taking the speaker’s word for it. (2006: 274)
We believe what the speaker says on the ground of her assurance that what she says is true. “Evidential” accounts of testimonial justification have difficulties explaining this phenomenon. If all that matters for testimonial justification is that the speaker be a reliable indicator of the truth, the fact that she is inviting us to trust her should be epistemically superfluous. Proponents of the assurance view conclude that the speaker’s assurance provides a distinctive epistemic but non-evidential kind of reason for believing her assertion.
Lackey (2008) and Schmitt (2010) raise an important problem for the assurance view. Perhaps the fact that a speaker invites us to trust her provides a distinctive kind of reason to accept her testimony. But it is not clear at all that the kind of reason in question is epistemic (rather than ethical or prudential). Lackey (2008) makes the point through the following example. Ben tells Kate that their boss is having an affair with an intern. Earl is eavesdropping on their conversation. On the basis of Ben’s testimony, both Kate and Earl form the belief that their boss is having an affair. On the assurance view, their respective beliefs should have different epistemic statuses. Since Ben was addressing Kate but not Earl, Kate has a distinctive epistemic reason to believe Ben that Earl lacks. However, if both Kate and Earl are functioning properly, have the same background information, etc. the claim that their respective beliefs have different epistemic values or statuses is implausible. To be sure, the fact that Ben was inviting Kate but not Earl to trust him does give rise to certain asymmetries between Kate and Earl. For instance, if it is revealed that Ben was lying, Kate is entitled to feel betrayed, while Earl isn’t. But it is dubious that these asymmetries have any epistemic significance.
Another important question in the epistemology of testimony is whether testimony can generate rather than merely transmit knowledge. It is tempting to regard testimony as transmission of information from speaker to hearer. This may be analogous to what transpires in (an individual’s) memory when justifiedness or knowledge possession is passed from an earlier to a later time within a particular subject. If a person retains a belief in memory from an earlier to a later time, the belief’s justificational status at the later time will normally be the same as its justificational status at the earlier time, so long as no new evidence is encountered during the interval. In other words, memory serves as a device for preserving justifiedness from one time to another. Testimony might have an analogous property. It might be the transmission of justifiedness and/or knowledge across individuals. If this were right, however, it would imply that a hearer cannot know p or be justified in believing p as a result of testimony unless that same proposition is known by (or justified for) the speaker. Is this correct?
Lackey (2008: 48–53, 2011: 83–86) argues to the contrary with several examples, one of which features a creationist teacher. A certain teacher is a devout creationist who does not believe in evolutionary theory. Nonetheless, she has the responsibility to teach her students that Homo sapiens evolved from Homo erectus, and she does so teach them although she doesn’t believe it herself. Because she doesn’t believe it, it isn’t something she knows, because knowledge requires belief (in the knower). Nonetheless, the students come to believe the proposition that Homo sapiens evolved from Homo erectus from the evidence the teacher presents, and they are justified by the evidence, so they thereby come to know this proposition. But the scenario cuts against the transmission thesis, because the hearers acquire knowledge despite the fact that the teacher doesn’t know.
The previous sub-section addressed the basic or generic problem of testimony. In this sub-section and the next one we examine two “spin-offs” of the generic problem of testimony.
In every society there are topics on which some people have substantially greater expertise than do others. When it comes to medical matters or financial investments, some people have special training and/or experience that most people lack. An expert in any domain will know more truths and have more evidence than an average layperson, and these things can be used to form true beliefs about new questions concerning the domain. In addition, laypersons will commonly recognize that they know less than experts. Indeed, they may start out having no opinion about the correct answer to many important questions, and feel hesitant in trying to form such opinions. They are therefore motivated to consult with a suitable expert to whom they can pose the relevant question and thereby learn the correct answer. In all such cases, one seeks an expert whose statements or opinions are likely to be true.
But are laypersons in a position to recognize who is a (relevant) expert? Even genuine experts often disagree with one another. That is why a wise layperson won’t necessarily accept the first piece of testimony he receives from a putative expert, but will often seek a “second opinion”. But what should he do if the second opinion conflicts with the first? Can a layperson justifiedly identify which (professed) expert to trust? This is called the novice/two experts problem (Goldman 2001/2011).
A fundamental problem facing the layperson is that genuine expertise often arises from knowledge of esoteric matters, matters of which most people are ignorant. Thus, even when a layperson listens carefully to someone professing great expertise, the layperson may be at a loss to decide whether the self-professed expert merits much trust. Goldman considers several ways by which a layperson might try to choose (justifiedly) between two or more disagreeing experts. Let us review three of these ways. One is to arrange a “debate” between the self-professed experts. The layperson would hear the criticisms and responses by each participant and try to decide who has the better argument. It is not obvious, however, how the layperson can do this. Many premises asserted by the experts are likely to be esoteric and therefore difficult—if not impossible—for the layperson to assess. Both the truth-values of the asserted premises and the strength of evidential support they confer on the conclusions will be difficult for the layperson to assess.
Another way for a layperson to choose among experts is to inquire which position endorsed by one of them is most common among all (professed) experts. But how significant is it that one expert belongs to a more numerous camp? This is a function of the dependence or independence relations between the consulted experts and other experts in the field. If one expert, for example, belongs to a large following behind a “guru”, who charismatically persuades many people to agree with him uncritically, it may not matter how numerous they are. The sameness of their opinions does not add much unless the followers have formed their views with enough independence to be at least partially conditionally independent of one another (Goldman 2001/2011: 121–124).
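The epistemic weight of independence can be illustrated with a toy simulation (not from the text; the five-expert panel and the 70% individual accuracy figure are illustrative assumptions). A majority verdict among genuinely independent experts is markedly more reliable than the unanimous verdict of the same number of guru-followers, whose agreement adds no information beyond the guru's own opinion.

```python
import random

random.seed(0)

def majority_correct(n_experts, accuracy, follow_guru, trials=100_000):
    """Estimate how often a majority of experts answers a binary
    question correctly. If follow_guru is True, every expert simply
    copies one expert's answer, so their agreement is redundant."""
    correct = 0
    for _ in range(trials):
        if follow_guru:
            # All votes duplicate a single draw: no added information.
            votes = [random.random() < accuracy] * n_experts
        else:
            # Each expert errs independently.
            votes = [random.random() < accuracy for _ in range(n_experts)]
        if sum(votes) > n_experts / 2:
            correct += 1
    return correct / trials

# Five independent experts, each right 70% of the time, versus
# five who all echo a single "guru" of the same accuracy.
print(majority_correct(5, 0.7, follow_guru=False))  # close to 0.84
print(majority_correct(5, 0.7, follow_guru=True))   # close to 0.70
```

The analytic value for the independent case is the binomial probability of at least three correct answers out of five, about 0.837, which is why agreement among independent experts should move a layperson more than the size of a guru's following.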
A third way to assess the comparative trustworthiness of rival experts is by comparing their respective track-records: how often has each expert correctly answered past questions in the domain? The problem here is how a layperson could assess an expert’s past track-record. Laypersons will typically lack knowledge and justification about whether the past answers were correct. Arguably, the situation is not quite as bleak as it seems initially. Suppose that a putative expert in astronomy predicts a solar eclipse at a certain time and place fifteen years hence. At the time of prediction, a layperson cannot tell whether the prediction is correct, so this is no help for estimating the putative expert’s track record. But if the layperson, fifteen years later, is in the right place at the right time, he can observe whether or not a solar eclipse occurs then. So expert statements are not inevitably beyond the verificational capacities of laypersons.
The previous two sub-sections provide examples in which a doxastic agent decides whether to believe a proposition based on another person’s assertion. This sub-section introduces an epistemic situation in which a doxastic agent receives no testimony from any source but regards the very absence of testimony as evidence in its own right. This situation is described by Sanford Goldberg (2010, 2011).
As a point of comparison, some theorists of testimony (e.g., Lipton 1998) hold that a hearer is justified in believing P in virtue of somebody’s testimony-that-P just in case P’s being true is part of the best explanation of the person’s testifying-that-P. Analogously, the absence of testimony that P might serve as a “negative” piece of evidence for the truth of not-P in special circumstances in which not-P is part of the best explanation of the silence that is “heard”.
Goldberg characterizes the type of inference in question as having the following form: “P must not be true, because if it were true, I would have heard it by now”. This type of inference is reasonable in a special type of social situation. If you are a regular consumer of online news, occurrences like catastrophes and media sensations will be broadcast widely and you will rapidly become apprised of them. If you haven’t gotten wind of any such event in the last twelve hours, it is reasonable to infer that no such events have occurred. In such circumstances, silence can be as informative as a verbal message.
A precise description of the kind of standing communication system that warrants this kind of inference by a suitably positioned agent is a delicate matter, which Goldberg explores (2011: 96–105). First, he explains there must exist a “source” that regularly reports to a community about a certain class of events. Second, the doxastic subject of interest must be attuned to this source, so that s/he will receive such a report in a timely fashion. This and other similar conditions are proposed as sufficient conditions to confer warrant on the subject in believing that no such event has occurred if there has been silence from the source. Details aside, this does seem to be a legitimate type of “testimony”-based belief where the so-called testimony is really an absence of testimony. Nonetheless, this fits our general characterization of the first branch of social epistemology as a branch that studies warrant based on social evidence. Given the features of the social communication system sketched above, silence qualifies as a kind of social evidence.
The cases surveyed thus far in this section involve substantial epistemic asymmetry between the agent and her source of information. Interesting questions also arise when we turn to situations involving epistemic symmetry between agents. Suppose that two people form conflicting beliefs about a given question: one believes p while the other believes not-p. Suppose moreover that they share all their evidence relevant to the question. Finally, suppose that each believes that they are epistemic peers: that they have equally good perceptual abilities, reasoning skills, and so on. Obviously they cannot both be correct in their beliefs; the two propositions believed are contradictory. But can they be rational to hold fast to their initial beliefs, now that they know they have the same evidence and respect one another as equally good reasoners? How (if at all) should they proceed to revise their initial assessments in light of their disagreement? This is the problem of peer disagreement.
Responses to this problem have tended to fall on either side of the following spectrum. At one end are “conciliationist” or “equal weight” views, according to which two peers who disagree about p should subsequently become substantially less confident in their opinions regarding p. At the other end of the spectrum are “non-conciliationist” or “steadfast” views, on which one is not rationally required to change one’s view in the face of peer disagreement. In the middle of the spectrum, one finds views that agree with conciliationism in certain cases, and with steadfastness in others.
Conciliatory views are motivated by cases like the following one, adapted from Christensen (2007). (Similar examples are used by Feldman (2007) and Elga (2007) in defense of conciliationism.) You and your friend have been going out to dinner together for several years. Each time you add a 20% tip and split the check; upon receiving the check you each do the calculation in your head. Over the years, you and your friend have been right equally often, so that you regard each other as epistemic peers when it comes to determining your share. Tonight, after doing the math in your head, you conclude that your share is $43, and become confident of this. But your friend announces that she is quite confident that your share is $45. Here it seems quite obvious that upon learning of your disagreement, you should become substantially less confident in your belief that the share is $43; in fact, you should become equally confident that the share is $45. After all, your disagreement is evidence that one of you has made a mistake; and you have no particular reason to suppose that your friend is the one who made a mistake. Under these circumstances, lowering your confidence that the share is $43 seems the only reasonable attitude. And of course the same holds for your friend, mutatis mutandis.
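The equal-weight idea can be given a simple formal gloss in terms of credences: upon learning of the disagreement, each peer adopts the average of the two prior credences. Here is a minimal sketch; the specific numbers are illustrative, not drawn from the literature.

```python
def equal_weight(my_credence: float, peer_credence: float) -> float:
    """Conciliationist 'splitting the difference': adopt the average
    of the two peers' prior credences in the proposition."""
    return (my_credence + peer_credence) / 2

# Restaurant case (numbers illustrative): I give "the share is $43"
# credence 0.9; my peer, confident the share is $45, gives it 0.1.
print(equal_weight(0.9, 0.1))  # → 0.5
```

On this gloss, the symmetric verdict in the restaurant case falls out immediately: since neither peer has a reason to privilege her own calculation, each ends up maximally uncertain between the two answers.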
Feldman (2007) offers an influential, more abstract argument for conciliationism on the basis of the “uniqueness thesis”. This is the view that for any proposition p and any body of evidence E, exactly one doxastic attitude is the rational attitude to have toward p on the basis of E, where the possible attitudes include believing p, disbelieving p and suspending judgment. Feldman’s argument seems to be the following. If the uniqueness thesis is true, it follows that in cases where I believe p and my peer believes not-p at least one of us must have formed an irrational opinion. Since I have no good reason to believe that I am not the one who responded improperly to the evidence, the only rational option seems to be to suspend judgment about p. Christensen (2007) offers a similar argument formulated in terms of credences rather than all-or-nothing doxastic attitudes. (Note that the uniqueness thesis by itself doesn’t entail conciliationism. One might endorse uniqueness but hold that if my initial belief that p was the proper response to the evidence, the evidential impact of peer disagreement on my belief is nullified, so that I am rational in holding fast to my initial belief. See Kelly 2010.)
One problem with this argument is that the uniqueness thesis is a very strong and controversial thesis. In particular, when the evidence bearing on p is meager, it seems implausible to hold that only one doxastic attitude toward p is permitted. Kelly (2010) turns this into an argument against conciliationism, by arguing that conciliationism entails uniqueness. Whether it does so, however, is a matter of debate (see Ballantyne and Coffman 2012).
Proponents of steadfast views often motivate their position by pointing to alleged problems for conciliationism. One issue is the degree of skepticism to which conciliationism seems to lead. For every political, philosophical or religious view one may endorse, there are competent and well-informed people who disagree. The implication that one should become an agnostic in all these areas of controversy is worrisome.
A second issue for conciliationism is that it seems to demand “throwing away evidence” (Kelly 2005). Conciliationism, the objection goes, is too insensitive to the possibility that one of the parties initially reasoned well while the other didn’t. If one peer was epistemically more virtuous than the other in arriving at her initial opinion, this should be relevant to what attitudes are rationally required of each peer upon learning of the disagreement. Conciliationism, however, seems to imply that the epistemic quality of the process by which each peer arrived at her initial opinion has no bearing on the correct attitudes for them to adopt after learning of their disagreement. Each one is rationally required to move her views in the other’s direction, regardless of whether or not they initially reasoned correctly.
A third problem for conciliationism is that it seems self-undermining. Since the epistemic significance of disagreement is itself a matter of controversy, it seems that a proponent of conciliationism should become much less convinced of its truth upon learning about this disagreement. One may worry that there is something wrong with a principle that tells you not to believe in its own truth.
Conciliationists have offered responses to each of these three objections. Elga (2007) responds to the charge that conciliationism leads to widespread skepticism in the following way. For many controversial topics, he argues, disagreement involves a large number of interconnected issues. If two people disagree about the morality of abortion, they will likely disagree on many connected normative and factual matters as well. Under these circumstances, he argues, neither is in a position to regard the other as an epistemic peer. Christensen (2007) discusses the charge that conciliationism requires throwing away evidence. He contends that when conciliationists advocate “splitting the difference”, they do not mean that such revision of credences guarantees full rationality. Instead, conciliationism is a view about the bearing of one kind of evidence: evidence regarding what epistemic peers believe on a certain subject matter. This sort of evidence should indeed be taken into account, but it isn’t the whole story, and conciliationism doesn’t advocate ignoring the epistemic quality of the steps taken by agents in forming their initial beliefs. Elga (2010) discusses the problem of self-undermining and argues for a view on which conciliationism is the right response to cases of peer disagreement, except when the controversy is about how to respond to disagreement. This restriction, he claims, is not ad hoc, because any fundamental epistemic policy or rule must be dogmatic about its correctness on pain of incoherence.
A second, more positive motivation for steadfastness is the thought that, contrary to what the uniqueness thesis says, a single body of evidence can rationalize more than one doxastic attitude. Thus Gideon Rosen (2001: 71) writes:
It should be obvious that reasonable people can disagree, even when confronted with a single body of evidence. When a jury or a court is divided in a difficult case, the mere fact of disagreement does not mean that someone is being unreasonable.
Rosen expands on this view by arguing that epistemic norms are permissive norms, not obligating or coercive norms. Thus, even when two people share the same evidence, it is permissible for one to adopt one doxastic attitude toward a proposition and for the other to adopt a different attitude (also see Pettit 2006).
A third motivation for steadfastness is the idea that from the first-person perspective, there is an important epistemic asymmetry between me and my peer. As Wedgwood (2007) points out, when forming opinions I am guided directly by my experiences and other beliefs, and only indirectly (if at all) by other people’s epistemic states. According to Wedgwood, this asymmetry makes it rational for me to be epistemically biased in my favor. In cases of peer disagreement, I am therefore justified in sticking to my guns, even if I have no independent reason for thinking that I (rather than my peer) got things right.
A fourth motivation for steadfast views is that in certain cases they give intuitively more plausible results than conciliationism. Consider Christensen’s restaurant case, but suppose this time that after doing the math in your head, you double-check the result with pen and paper and with a reliable calculator. Each time the result is $43, so that you become extremely confident that this is your share. Your peer, who has done the same thing, then announces that she believes the share is $45. Intuitively, under these circumstances you are rational in discounting your peer’s opinion and holding fast to your initial belief. (This variation on the restaurant case is due to Sosa 2010.)
Recently, treatments of peer disagreement have emerged that are neither strictly conciliatory nor steadfast. These views agree with conciliationism in certain cases, and with steadfastness in others. The two main approaches in this vein are Kelly’s (2010) total evidence view and Lackey’s (2010) justificationist view. According to the total evidence view, what reaction to peer disagreement is reasonable depends both on the quality of one’s original evidence and on the amount of evidence provided by the fact that one’s peer disagrees. When the original evidence is relatively weak, it is swamped by the evidence provided by the disagreement. In such cases, the total evidence view gives the same verdicts as conciliationism. Conversely, the more substantial the original evidence, the less substantial the epistemic impact of peer disagreement, and the more rational it is to stick to one’s guns. On Lackey’s justificationist view, how one should respond to peer disagreement depends on one’s degree of justified confidence before learning of the disagreement. In cases where this initial degree is relatively low, Lackey’s view agrees with conciliationism. In cases where one’s degree of justified confidence is high, such as Sosa’s restaurant case mentioned above, it is rational to remain very confident in the truth of one’s original belief.
It is extremely common to ascribe actions, intentions, and representational states to collections or groups of people. We might describe an army battalion as setting out on a mission that was chosen because of what the unit thought its enemy was planning to do. A government might be described as refusing to recognize a foreign dictator because, it believes, his recent “election” was fraudulent. In short, we ascribe representational states to collective entities, including motivational and informational states, despite the fact that they are not individual human beings. We make similar ascriptions to colonies, swarms, hives, flocks, and packs of animals. In all of these cases, it is debatable whether our ascriptions are predicated on genuine convictions that the collective entities literally have representational states, or whether we merely speak metaphorically. For present purposes it will be assumed that such talk should be taken seriously rather than metaphorically. For social epistemological purposes, it is the ascription of group doxastic states in particular that is essential to the second branch of the enterprise.
The rest of this section, then, presupposes that human groups exist and enjoy “intellectual” attitudes such as belief, disbelief, and suspension of judgment. Social epistemology is especially interested in how their epistemologies work. Under what conditions do collective beliefs attain such statuses as knowledge or justifiedness? We shall focus on the latter.
We begin, however, with questions about social metaphysics. A major sub-question here is how group entities relate to their members. One approach to this relationship is a so-called “summative” account (a term that is used rather variously by different authors). Here is one articulation of the summative approach.
- (S) A group G believes that P if and only if all or most of its members believe P.
As Margaret Gilbert (1989) points out, however, this is too weak a condition. Two committees might have the very same membership, for example, the Library Committee and the Food Committee. Every member of the Library Committee might believe that the library has a million volumes, and so might the Library Committee itself. Every member of the Food Committee will also have the same belief. But the Food Committee does not have this belief because it doesn’t make judgments on that subject.
Gilbert (1989: 306) therefore formulates and embraces another theory, seconded by Frederick Schmitt (1994a: 262), called the Joint Acceptance account:
- (JAA) A group G believes that P just in case the members of G jointly accept that P, where the latter happens just in case each member has openly expressed a willingness to let P stand as the view of G, or openly expressed a commitment jointly to accept that P, conditional on a like open expression of commitment of other members of G.
As Alexander Bird (2014) points out, on this model of group belief the members of a group will be mutually aware of one another as members of the group and aware of the group’s modus operandi. Hence, it might be called the mutual awareness model (following Philip Pettit 2003).
Bird contrasts the mutual awareness model with a distributed model. A distributed model deals with systems that feature information-intensive tasks which cannot be processed by a single individual. Several individuals must gather different pieces of information while others coordinate this information and use it to complete the task. A famous example is provided by Edwin Hutchins (1995), who described the distribution of tasks on a large ship, where different crew members take different bearings so that a plotter can determine the ship’s position and course. The key feature of such examples is that the task is broken into components that are assigned to different members of the group. Members in such distributed systems will not ordinarily satisfy the conditions of the commitment, or mutual awareness, model. In particular, Bird argues, science instantiates a distributed system. This is what makes it legitimate to represent science as a social agent, a subject that possesses (inter alia) knowledge.
Clearly, there are multiple conceptions of how sets of individuals might “compose” a group agent, and each of these conceptions can legitimately apply to real cases. A dichotomy between “summativism” and “non-summativism” may be inadequate to capture the multiplicity of the phenomena. This may complicate the social epistemologist’s task of providing an account of social epistemic statuses. But so be it; life is complicated. In discussing summativism (as we will in section 4.3) we must be careful to distinguish between summativism about belief versus summativism about justification. In this section and the next we discuss how beliefs of a group agent are determined or constituted by beliefs of all, most, or some of its members. This topic is usually pursued under the heading of belief aggregation.
Christian List and Philip Pettit (2011) explore complications of belief aggregation under the heading of “judgment” aggregation. They preface their discussion with the following depiction of the metaphysical relation between group attitudes (and actions), on the one hand, and members’ attitudes (and actions), on the other:
The things a group agent does are clearly determined by the things its members do; they cannot emerge independently. In particular, no group agent can form propositional attitudes without these being determined, in one way or another, by certain contributions of its members, and no group agent can act without one or more of its members acting. (2011: 64)
As indicated earlier, the “attitudes” of special interest to social epistemology are doxastic attitudes, principally belief. List and Pettit reflect on what might be a suitable belief aggregation function, a mapping from profiles of members’ beliefs into group beliefs. Are there plausible functions that a social epistemologist should be happy to endorse?
A sticky problem arises in this territory, illustrated by the so-called “doctrinal paradox” (Kornhauser and Sager 1986). Suppose that a three-membered court has to render a judgment in a breach-of-contract case. The court needs to make a judgment on each of three (related) propositions, where the first two are premises and the third is a conclusion.
- The defendant was legally obliged not to do a certain action.
- The defendant did that action.
- The defendant is liable for breach of contract.
Legal doctrine entails that obligation and action are jointly necessary and sufficient for liability. So conclusion (3) is true if and only if the two premises are both true. Suppose, as shown in the table below, that the three judges form the indicated beliefs, vote accordingly, and that the judgment aggregation function is guided by majority rule, so that the group (or court) believes (and votes) as shown below:
| | Obligation? | Action? | Liable? |
| --- | --- | --- | --- |
| Judge 1 | True | True | True |
| Judge 2 | True | False | False |
| Judge 3 | False | True | False |
| Group | True | True | False |
As is apparent, although each of the three judges has consistent beliefs, and the aggregation proceeds by an ostensibly unproblematic majority rule provision, the upshot is that the court’s beliefs are inconsistent. Given the legal doctrine, it is impossible for the defendant to have had the obligation and done the action yet not be liable.
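The paradox can be reproduced mechanically. The following sketch (illustrative code, not from the text) applies proposition-wise majority voting to the three judges’ verdicts and then checks the resulting group view against the legal doctrine that liability holds if and only if obligation and action both hold.

```python
# Each judge's verdicts on (obligation, action, liability),
# matching the table above.
judges = [
    {"obligation": True,  "action": True,  "liable": True},   # Judge 1
    {"obligation": True,  "action": False, "liable": False},  # Judge 2
    {"obligation": False, "action": True,  "liable": False},  # Judge 3
]

def majority(prop):
    """Proposition-wise majority rule over the judges' verdicts."""
    yes = sum(j[prop] for j in judges)
    return yes > len(judges) / 2

group = {p: majority(p) for p in ("obligation", "action", "liable")}
print(group)  # {'obligation': True, 'action': True, 'liable': False}

# Doctrine: liable iff (obligation and action). The group view violates it.
consistent = group["liable"] == (group["obligation"] and group["action"])
print(consistent)  # False
```

Each individual profile satisfies the doctrine, yet the majority-aggregated profile does not; the inconsistency arises purely from the aggregation step.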
This kind of problem can easily occur whenever collective judgments are made on a set of connected issues. No plausible, simple principle like majority rule can generate a function in which the group attitude unproblematically reflects its members’ attitudes (List and Pettit 2011: 8). Indeed, List and Pettit (and others) have proved several impossibility theorems in which reasonable-seeming combinations of constraints are shown to be jointly unsatisfiable (List and Pettit 2011: 50; List and Pettit 2002).
No such impossibility results, however, have been produced for cases where the question is how a group attitude with respect to a single proposition depends on the members’ attitudes with respect to the same proposition (and nothing else). In the remainder of the discussion, therefore, we shall concentrate on that class of cases. It is also worth stressing that many groups operate in a highly “executive driven” style, where a chairperson, CEO, or “leader” of the group makes most of the decisions and takes most of the actions on behalf of the group, and where his/her (individual) belief essentially fixes or comprises that of the group. This kind of case can still be treated under the heading of “aggregation”, where the leader’s opinion simply outweighs that of (all of the) other members. The social epistemologist renders no normative judgment about how these belief relations between members and groups play out. At this stage, at any rate, she only wishes to describe and take account of the varieties of belief aggregation. That is, she just studies the alternative attitudinal “psychologies” of organizations, collectives, or groups. The next stage of social epistemology would then consist of epistemic evaluations of group beliefs. These evaluations belong to a separate phase of the enterprise.
We now turn to this next stage of social epistemology, where the primary question is not what determines group belief but what determines group justification. In the newly emerging literature on collective epistemology there are relatively few developed accounts of group justification. We shall examine three such approaches, beginning with the dialectical approach.
Many formulations of a dialectical approach to individual justification are found in the literature, including those of Annis (1978), Brandom (1983), Williams (1999), Kusch (2002), and Lammenranta (2004). It should not be surprising, therefore, to find dialectical approaches to group justification patterned on this work. One such proposal is made by Raul Hakli (2011).
… [A] collectively accepted group view that p [is] justified if and only if the group can successfully defend p against reasonable challenges by providing reasons or evidence that are collectively acceptable to the group and that support p according to the epistemic principles collectively accepted in the epistemic community of the group. (2011: 150)
One salient feature of this approach is its relativization of group justifiedness to the epistemic principles of the group’s own community. If a group’s community harbors a highly dubious set of principles—for example, principles that embrace reliance on oracles or astrology—this would authorize (in justificational terms) any so-called “reasons” or “evidence” that employ these methods. This passage in Hakli clearly implies that there is no higher epistemic standard than that of the local community. This extreme kind of relativism will not appeal to epistemologists who hanker for greater objectivity. (In addition, what does the approach say if a given group has multiple sub-communities with conflicting epistemic principles?)
In other parts of his discussion, Hakli offers a further requirement for dialectically based justification:
[I]n order for a group to form an epistemically justified view it should first collect all the evidence available to the group members and openly discuss the arguments for and against the candidate views before voting or otherwise making the decision concerning the group view. (2011: 136)
This is an extremely restrictive condition. What if a single member fails to volunteer a certain marginal item of evidence available to him, so that it goes undebated by the group as a whole? This should not prevent the group from becoming justified with respect to a proposition p based on a large and weighty body of further evidence possessed by other group members and discussed by the group.
In contrast with Hakli’s relativistic account of group justifiedness, consider an account in the reliabilist tradition, which makes truth conduciveness (hence objectivity) a key element of the theory. This style of approach is exemplified by Alvin Goldman’s (2014) “social process reliabilist” approach to collective justifiedness, patterned on his earlier account of individual justifiedness (Goldman 1979). Goldman starts by distinguishing two ways in which a group might form beliefs. First, it might use processes of belief aggregation (see section 4.2 above), in which member beliefs vis-à-vis proposition p are somehow transmuted into a collective belief of the group vis-à-vis the same proposition. He calls this “vertical” belief formation. Second, the group might use an inferential process in which its own (collective) beliefs in other propositions q, r, and s lead it to form a new belief in p. This is called “horizontal” belief formation. Goldman focuses his attention on vertical, or aggregative, belief formation, since it is more fundamental and a more distinctive aspect of collective epistemology. In metaphysical terms, however, no group belief is ever (token) identical with any member belief (or set of member beliefs). However, it will be common for group beliefs to supervene on, or to be “grounded” in, beliefs of its members.
Given that groups engage in belief-formation of the vertical, or aggregative kind, a central question for group epistemology is how the justificational status of such group beliefs is fixed or determined. It is natural to assume that the justificational status of a group belief—at least when it is fixed in a vertical, or aggregative fashion—is a function of the justificational statuses of its members’ belief states (with respect to the same proposition). But what exactly is the functional relationship?
According to process reliabilism for individuals, justificational status for a belief is determined by the reliability of the psychological process(es) used by the agent in forming (or retaining) the belief. In process reliabilism for groups (in Goldman’s proposal), the justificational status of a group belief is, most directly, determined by the reliability (more precisely, the conditional reliability) of the aggregation process used, where aggregation processes take member belief states as inputs and group beliefs as outputs (all with respect to the same proposition). An example of such a process would be a majoritarian one: if more than 50% of the members believe p, the process generates a group belief in p as well.
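The reliability of such a majoritarian process can be illustrated with a Condorcet-style simulation, under the idealized assumption (ours, not Goldman’s) that each member’s belief is independently correct with a fixed probability above one half. The parameter values below are illustrative.

```python
import random

def majority_correct(n_members: int, member_reliability: float,
                     trials: int = 20_000, seed: int = 0) -> float:
    """Estimate how often majority-rule aggregation yields a true group
    belief, assuming each member is independently right with the given
    probability (a Condorcet-style idealization)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < member_reliability
                            for _ in range(n_members))
        if correct_votes > n_members / 2:
            hits += 1
    return hits / trials

# With 0.7-reliable members, majority rule beats any single member,
# and its reliability grows with group size.
print(majority_correct(3, 0.7))   # ≈ 0.78
print(majority_correct(11, 0.7))  # ≈ 0.92
```

On these assumptions the aggregation process is more reliable than its inputs, which is one way to motivate treating the reliability of the aggregation function as a distinct determinant of group justifiedness.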
However, this should not settle the matter if one wants to preserve a firm analogy between individual process reliabilism and group process reliabilism. In individual reliabilism, it isn’t sufficient for an output belief to be justified that it result from a (conditionally) reliable process. It is also necessary that the inputs to the process themselves be justified. In the case of inference, it isn’t sufficient that an agent use a reliable inferential process, i.e., one that usually carries true premises into true conclusions. The agent’s operative premises must also be justified. Applied to the collective belief case, this might mean that some appropriately high proportion of those members whose beliefs in p are causally responsible for the group’s formation of a belief in p must themselves be justified. In other words, sufficiently many input beliefs (in p) must have been justified in order for the group’s output belief (in p) to be justified. Thus, members’ J-statuses with respect to their beliefs in p are determined by a history of belief-forming processes (some reliable and some unreliable). The J-status of the group belief in p is determined, in turn, by what proportion of its members believe p (as opposed to disbelieve or suspend judgment vis-à-vis p), and what proportion of them hold their doxastic states justifiedly.
Goldman winds up with two specific proposals, which seek to accommodate degrees of justifiedness. The first principle is:
- (1) If a group belief in p is aggregated based on a profile of member attitudes toward p, then (ceteris paribus) the greater the proportion of members who justifiedly believe p and the smaller the proportion of members who justifiedly reject p, the greater the group’s level, or grade, of justifiedness in believing p.
The second principle is:
- (2) A group belief G that is produced by an operation of a belief-aggregation process π is justified only if (and to the degree that) π has high conditional reliability.
These principles are the core of the process reliabilist approach, though many details, of course, are omitted here.
A third approach to group justifiedness is presented by Jennifer Lackey (forthcoming). It is motivated and partly defended by reference to two generic competitors. One competitor is deflationary summativism, and the second is inflationary non-summativism. Lackey explains her terminology as follows. Inflationary non-summativism is a view that understands group justifiedness as a status that “floats freely” from the epistemic statuses of its members’ beliefs. By contrast, deflationary summativism is an approach that treats group justifiedness as nothing more than an aggregation of the justified beliefs of its members. Summativism is the thesis that a group G’s justifiedly believing p is to be understood only in terms of some or all of G’s members’ justifiedly believing p.
An example of inflationary non-summativism is the “joint acceptance account” (JAA) defended by Frederick Schmitt (1994a). According to this approach,
A group G justifiedly believes that p if and only if G has good reason to believe that p and believes that p for this reason.
G has reason r to believe p if and only if G would properly express openly a willingness to accept r jointly as the group’s reason to believe p. (Schmitt 1994a: 265)
Lackey first objects to JAA on the grounds that it is too strong: not all members of a group need express willingness to accept a reason jointly in order for it to qualify as a group reason, and even weakening the requirement to some group members leaves it too strong. Second, consider the tobacco company Philip Morris and its board members, each of whom was aware of scientific evidence of the addictiveness of smoking and its links with lung cancer and heart disease. Each of these members—as well as the company as a whole—had a reason to believe that the company should put warning labels on its cigarette packages. Yet none of these members was willing to accept jointly (i.e., verbally and publicly) that labels should be put on its cigarette packages. So this “joint acceptance” test is not a proper criterion for having a reason, either for members or for a group.
Turning now to the deflationary summativist approach, Lackey views Goldman’s process reliabilist account as the most detailed version of this approach and therefore concentrates her criticisms of deflationary summativism on his proposals. There are three main criticisms. First, she points out that an adequate account of group justifiedness cannot be attained without attending to the evidential relations that exist between members’ beliefs, as well as which of these beliefs are operative in generating the group belief. Second, she contends that group justification is constrained by certain epistemic obligations that arise from professional relationships among group members, a complication that the aggregative account lacks the ability to accommodate. Third, there can be cases of “defeating” evidence against group belief that the aggregative account does not accommodate. Detailed examples are given to illustrate these points.
Building on these problems confronting the previous two views, Lackey advances her own view, which she calls the “group epistemic agent account” (GEAA). It is expressed in two principles:
- (1) A group, G, justifiedly believes that p if and only if a significant percentage of the operative members of G (a) justifiedly believe that p, and (b) are such that adding together the bases of their justified beliefs that p yields a belief set that is coherent.
- (2) Full disclosure of the evidence relevant to the proposition that p, accompanied by rational deliberation among the members of G in accordance with their individual and group epistemic normative requirements, would not result in further evidence that, when added to the bases of G’s members’ beliefs that p, yields a total belief set that fails to make probable that p.
All of these proposals are significant and will doubtless be studied by everyone interested in collective epistemology. Some of the proposals, however, are perfectly compatible with the spirit of (some of) the rival views. For example, it is surely right that a group belief’s justifiedness depends not only on the percentage of members who are justified in believing p but on whether those members are operative in producing the group’s belief. This factor, however, could cheerfully be incorporated into a process reliabilist account, especially because causation of belief is at the core of process reliabilism. Its omission from Goldman’s treatment seems more of an oversight than a weakness in the theory’s fundamental character.
Since science is the paradigm of a knowledge-seeking enterprise, epistemology and philosophy of science are intimately connected. Up to the 1960s, epistemology of science was conducted in a largely individualistic fashion. It focused on individual agents rather than teams and communities of scientists, and paid little attention to the social norms and arrangements governing scientific activity. However, at least since the publication of Kuhn’s hugely influential The Structure of Scientific Revolutions (1962), the scientific enterprise has been studied from a more social point of view. Scientists, after all, are influenced by their colleagues; they work in teams competing and collaborating with each other; they follow social norms governing methodology, presentation of results, allocation of prestige, and so on. Social epistemology of science investigates how these social dimensions influence the epistemic outcomes of scientific activity.
Historically, the first social epistemological studies of science were conducted by sociologists, not philosophers. Post-Kuhnian sociology of science (a tradition often called “social studies of science” or “science and technology studies”) departs sharply from the concerns and convictions of traditional epistemology and philosophy of science by rejecting the classical epistemological notions of objective truth, justification and knowledge, and/or by attempting to debunk the epistemic authority of science.
The question whether social studies of science really count as social epistemology is a subtle one. Many researchers in this tradition simply ignore traditional epistemological concerns with truth, justification and rationality. Consider for instance the symmetry thesis, according to which true and false beliefs should be given the same kind of causal explanation (Barnes and Bloor 1982). This is a central idea of the “strong program” developed in the 1970s by the Edinburgh school, the most influential group in the social studies of science. Proponents of the symmetry thesis claim that whether or not a belief is true should play no role in explaining why people hold it. Thus they officially decline to make any judgment about the epistemic properties of a belief in giving a causal explanation for it. They claim that epistemic concepts like truth or justification are not useful for their purposes.
Nevertheless, researchers in the social studies of science can be regarded as social epistemologists of science because they often endorse or suggest debunking or skeptical views about the epistemic authority of science. That is, they make epistemologically significant pronouncements (in the classical sense of “epistemology”) that cast doubt on science’s status as a privileged source of truth, justified belief and knowledge.
First, researchers in the social studies of science tend to embrace a form of relativism about the traditional concepts of epistemic justification and rationality, by rejecting the idea of universal and objective epistemic norms. As Barry Barnes and David Bloor put it, “there are no context-free or super-cultural norms of rationality” (1982: 27). (Researchers in the social studies of science usually try to defend this view by appealing to the Duhem-Quine thesis and Kuhnian considerations about incommensurability.) One consequence of this form of relativism is that science has no special universal or objective epistemic authority. The claim that science is a better source of justified belief or knowledge about the world than tea-leaf reading holds only relative to our local, socially situated norms of justification. There are familiar problems with relativism about epistemic justification, however (see Boghossian 2006).
Second, historical case studies undertaken by members of the Edinburgh school attempt to show that scientists are heavily influenced by social factors “external” to the proper business of science. Thus Mackenzie (1981) argues that the development of statistics in the 19th century was heavily influenced by the interests of the ruling classes of the time (for similar studies, see Forman 1971 and Shapin 1975). Other social analyses of science try to show how the game of scientific persuasion is essentially a battle for political power, where the outcome depends on the number or strength of one’s allies as contrasted with, say, genuine epistemic worth. If either of these claims were right, the epistemic status of science as an objective and authoritative source of information would be greatly reduced. However, there is an obvious theoretical problem here. How can these studies establish the debunking conclusions unless the studies themselves have epistemic authority? The studies themselves use the very empirical, scientific procedures they purport to debunk. If such procedures are epistemically questionable, the studies’ own results should be in question. Members of the Edinburgh School sometimes deny that they are trying to debunk or undermine science. Bloor, Barnes and Henry (1996), for example, say that they “honour science by imitation” (1996: viii). However, as James Robert Brown (2001) points out, this claim is disingenuous. They cannot intelligibly propose a revolution and then deny that it would change anything (2001: 143).
Third, some sociological approaches to science claim to show that scientific “facts” are not “out-there” entities, but are mere “fabrications” resulting from social interactions. This metaphysical thesis is a form of social constructivism. This is a view suggested by Latour and Woolgar in their influential book Laboratory Life: The Construction of Scientific Facts (1986). In discussing social constructivism, it is essential to distinguish between weak and strong versions. Weak social constructivism is the view that human representations of reality—either linguistic or mental representations—are social constructs. For example, to say that gender is socially constructed, in this weak version of social constructivism, is to say that people’s representations or conceptions of gender are socially constructed. Strong social constructivism claims further that the entities themselves to which these representations refer are socially constructed. In other words, not only are scientific representations of certain biochemical substances socially constructed, but the substances themselves are socially constructed. The weak version of social constructivism is quite innocuous. Only the thesis of strong social constructivism is metaphysically (and, by implication, epistemologically) interesting. However, there are severe problems with this metaphysical thesis, as Andre Kukla (2000) explains.
Although the debunking aspects of social studies of science have left analytic philosophers by and large unmoved, post-Kuhnian sociology of knowledge has convinced many philosophers of science that close attention to the actual social practices of scientists is required. As a result, a growing body of work in analytic philosophy of science investigates the epistemic effects of these social practices. By contrast to social studies of science, philosophers in this tradition stand in continuity with traditional epistemology, and make no attempt at debunking science’s epistemic authority on the basis of social considerations. In fact, they tend to argue that what makes scientific activity epistemically special is in part the fact that its social structure is particularly well-attuned to science’s epistemic goals. In particular, they stress the epistemic benefits of the reward system and division of labor peculiar to science.
The ground-breaking work here is due to Philip Kitcher (1990, 1993). The starting point of his work is the thought that there is a tension between individual and collective rationality in science. Consider a situation in which there are several available methods or “research programs” for tackling a scientific problem (for instance the structure of DNA). And suppose in addition that method I has a better chance of succeeding than method II. Then if every scientist is motivated purely by doing the best science, she will choose to work on method I. However, Kitcher points out, it may be in the community’s best interest to “hedge its bets” and have a number of scientists working on the less promising method II. Kitcher points out that one can achieve the desired division of labor by adopting a certain reward scheme. On the relevant scheme, the reward of each scientist working on a successful program decreases as the number of people working on the program increases. (You may think of the reward as a fixed amount of prestige allocated equally among successful scientists.) Then if many people are already working on method I, new scientists will have an incentive to work on method II. Although this method has a lower chance of success, the reward will be larger if it does succeed. Kitcher argues that the actual reward system of science works in pretty much this way.
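Kitcher’s point can be illustrated with a small back-of-the-envelope calculation. The success probabilities and the equal-split prestige prize below are hypothetical numbers chosen for illustration, not Kitcher’s own:

```python
# Hypothetical numbers: program I succeeds with probability 0.7,
# program II with probability 0.3. A fixed amount of prestige (1.0)
# is split equally among those who worked on whichever program succeeds.
def expected_reward(p_success, current_workers):
    # Expected prestige for a newcomer who joins this program.
    return p_success * 1.0 / (current_workers + 1)

# With 9 scientists already on the promising program I and only 1 on
# program II, a newcomer does better joining the less promising one:
reward_I = expected_reward(0.7, 9)   # 0.07
reward_II = expected_reward(0.3, 1)  # 0.15
```

Under these assumptions the newcomer’s expected reward from the crowded, promising program (0.07) falls below that of the neglected long shot (0.15), so individual self-interest yields the division of labor the community wants.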
Michael Strevens (2003) develops a formal model of scientific activity similar to Kitcher’s but argues that Kitcher’s reward system won’t produce the best division of labor. Another reward system is both better and closer to the actual practice of science. This is the priority rule, according to which the first research program that discovers a certain result gets all the reward (in this case, prestige). The fact that the actual reward system of science follows the priority rule was first discovered by the sociologist Robert Merton (1957), who pointed out that the history of science is littered with severe priority disputes. Merton took the priority rule to be a pathology of scientific activity. On Strevens’s view, by contrast, the priority rule works as an incentive for scientists to adopt the division of labor most epistemically beneficial to society.
Kitcher and Strevens point out the epistemic effects of diversity in “methods” or “research programs”, which encompass model-building strategies, ways of conducting experiments, and so on. More recently, Weisberg and Muldoon (2009) have pointed out the benefits of another kind of cognitive diversity, namely variation in patterns and strategies of intellectual engagement with other research teams.
Weisberg and Muldoon consider three strategies of engagement with the activity of other scientists. Scientists who follow the “control” research strategy simply ignore what other scientists are doing: they do not take into account the results of others in deciding which research program to explore. “Followers” (as the name suggests) follow the methods of research adopted by their predecessors: if a research program has already been explored and yielded significant results, they will tend to adopt this program. “Mavericks” also take into account the results of others in their exploration strategy, but in the opposite way: if a method has already been explored, they will adopt a different one. The question is whether and how fast these various groups can discover significant scientific results.
To investigate this question, Weisberg and Muldoon build a model in which a topic of scientific inquiry is represented by a 3-dimensional “epistemic landscape”. The x and y axes represent research programs (in Kitcher’s and Strevens’s sense). The vertical z axis represents the scientific importance of the results attainable by the research program corresponding to the (x, y) position. Scientists discover the comparative epistemic significance of research programs by visiting patches of the landscape, i.e., by working within the research program represented by the patch.
Algorithmizing the three strategies allows Weisberg and Muldoon to run computer simulations to discover whether and how fast followers of these strategies can “climb the peak” of the landscape, i.e., discover significant results. Thus their work is an instance of an increasingly popular way to do social epistemology, namely using computer simulations of social-intellectual activities.
Through their simulations, Weisberg and Muldoon find that large populations of controls are good at finding patches with high degrees of epistemic significance, but this takes considerable time. Populations of followers fare worse than controls: they find peaks less frequently, and cover only a small portion of the regions of high epistemic significance. Mavericks fare better than controls: they find peaks more often and more quickly. The most interesting finding, however, concerns mixed populations of mavericks and followers: adding even a small number of mavericks to a population of followers boosts epistemic productivity. This is because when they interact, each strategy has a fruitful role to play. As a result, mixed populations not only discover peaks quickly but cover a lot of significant ground. Weisberg and Muldoon suggest that this situation (a few mavericks stimulating many followers) is close to what we observe in science. Thus, they provide further support for Kitcher’s insight that the cognitive division of labor in science is epistemically beneficial.
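A heavily simplified sketch in the spirit of such simulations (not Weisberg and Muldoon’s actual model; the landscape shape, grid size, and movement rules are illustrative assumptions) might look like this:

```python
import random

random.seed(0)
SIZE = 20  # a 20x20 toroidal grid of research "patches"

def significance(x, y):
    # Hypothetical epistemic landscape: significance peaks at the centre.
    cx = cy = SIZE // 2
    return max(0.0, 10.0 - ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5)

def neighbors(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def run(strategies, steps=200):
    visited = {}  # patch -> significance; shared record of explored patches
    agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in strategies]
    for _ in range(steps):
        for i, strat in enumerate(strategies):
            pos = agents[i]
            visited[pos] = significance(*pos)
            nbrs = neighbors(*pos)
            if strat == "follower":
                # Followers move to the best already-explored neighbouring patch.
                explored = [n for n in nbrs if n in visited]
                pool = explored or nbrs
                agents[i] = max(pool, key=lambda n: visited.get(n, 0.0))
            else:  # "maverick": prefer patches nobody has explored yet
                unexplored = [n for n in nbrs if n not in visited]
                agents[i] = random.choice(unexplored or nbrs)
    # Best significance found, and how much of the landscape was covered.
    return max(visited.values()), len(visited)

best_f, seen_f = run(["follower"] * 5)
best_m, seen_m = run(["maverick"] * 5)
```

In runs of this toy model the mavericks cover far more of the landscape than a population of pure followers, which tends to keep revisiting the few patches it already knows.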
The work of Kevin Zollman (2007, 2010) is another example of the use of computer simulations in the social epistemology of science. Zollman investigates the following issue. Even though scientific diversity has epistemic benefits, the aim of scientific communities is eventually to arrive at a consensus on the right theory. But scientific consensus may occur prematurely. Suppose there are two competing theories T1 and T2 in a given scientific field. Even if T2 is the correct one, the initial experiments may very well favor T1. If all scientists come to accept T1 as a result, the wrong hypothesis will become the consensus view. Zollman illustrates this with an example from physiology. At the end of the 19th century, there were two proposed treatments for peptic ulcer, one relying on the hypothesis that ulcer is caused by bacteria, the other on the hypothesis that it is caused by peptic acid. Initial results favored the latter hypothesis, so a scientific consensus formed around it, and the bacterial hypothesis was abandoned for a long time. We now know that the bacterial hypothesis is correct. So in this case the scientific community reached consensus too quickly.
Zollman uses computer simulations to explore how diversity can ensure that scientific consensus isn’t reached in this premature fashion. His computer simulations reveal interesting correlations between the structure of the communication network and whether scientists’ beliefs converge on the right hypothesis. Surprisingly, structures with less communication between scientists are correlated with scientists converging on the right hypothesis. In strongly connected networks, initial results that favor the wrong theory become quickly known by everybody, which increases the risk that a consensus will form against the right hypothesis. Structures with less communication make for a wider diversity in scientists’ beliefs by ensuring that even if initial results disfavor the right theory many agents will not become aware of them. Zollman mentions another feature that can ensure that consensus isn’t reached too quickly. Suppose that some of the scientists are dogmatic—they are strongly biased in favor of what is in fact the right theory. Then even if initial results favor the other hypothesis, these scientists will be less responsive to this piece of evidence and will continue investigating the correct theory. This is another illustration of the idea that prima facie detrimental features of the practice of science (reduced communication and dogmatism) may in fact be epistemically beneficial.
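Zollman’s own models treat theory choice as a two-armed bandit problem played on a communication network. The following is only a toy sketch in that spirit: the success rates, network sizes, and crude frequency-based update rule are illustrative assumptions, not Zollman’s specification:

```python
import random

random.seed(1)
P_TRUE = {"A": 0.5, "B": 0.6}  # hypothetical success rates; theory B is correct

def simulate(edges, n_agents, rounds=300):
    """One run: each agent tests the theory it currently believes better,
    then shares the outcome with itself and its network neighbours."""
    # Per-agent [successes, trials] counts for each theory; mild uniform prior.
    counts = [{t: [1, 2] for t in P_TRUE} for _ in range(n_agents)]
    for _ in range(rounds):
        results = []
        for i in range(n_agents):
            est = {t: s / n for t, (s, n) in counts[i].items()}
            theory = max(est, key=est.get)
            outcome = 1 if random.random() < P_TRUE[theory] else 0
            results.append((i, theory, outcome))
        for i, theory, outcome in results:
            for j in range(n_agents):
                if i == j or (i, j) in edges or (j, i) in edges:
                    counts[j][theory][0] += outcome
                    counts[j][theory][1] += 1
    # Fraction of agents who end up favouring the correct theory B.
    return sum(1 for c in counts
               if c["B"][0] / c["B"][1] > c["A"][0] / c["A"][1]) / n_agents

n = 8
complete = {(i, j) for i in range(n) for j in range(i + 1, n)}
cycle = {(i, (i + 1) % n) for i in range(n)}
runs = 20
dense = sum(simulate(complete, n) for _ in range(runs)) / runs
sparse = sum(simulate(cycle, n) for _ in range(runs)) / runs
```

On Zollman’s findings, sparse structures like the cycle should succumb to premature lock-in less often than the complete network; this sketch only illustrates the mechanism, and its outcomes depend on the assumed parameters.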
Democracy is a widely touted institution, but what is its connection to epistemology? In recent decades an influential movement has arisen in political theory called the “epistemic” approach to democracy. Its general claim is that what makes democracy a superior form of government has something to do with its epistemic properties. Aristotle referred to democracy as the “rule of the many”, as contrasted with “the rule of the few” (oligarchy) and “the rule of the one” (monarchy, or autocracy). What is better about the rule of the many? Aristotle writes:
[T]he many, who are not as individuals excellent men, nevertheless can, when they have come together, be better than the few best people … just as feasts to which many contribute are better than feasts provided at one person’s expense. (Politics III, 11, 1281a41–1281b, trans. Reeve 1998: 83)
At best this only hints at a possible answer. For many current theorists, however, a core feature of democracy is majoritarian rule, which consists of granting votes to the citizenry at large and letting the majority opinion expressed in such a vote prevail. Furthermore, according to the Condorcet Jury Theorem (CJT), established by the French Enlightenment figure Marquis de Condorcet, majority rule can greatly enhance a polity’s prospects for getting a true answer on a binary (yes/no) question. Omitting appropriate qualifications for the moment, CJT says that if all voters in the electorate are individually more likely than not to hold a true opinion in a two-option choice, then aligning the group judgment with the majority judgment makes the group more likely to be right than any individual voter is. As the size of the electorate increases, moreover, the likelihood of the majority being right rapidly approaches 1.0 as an asymptote.
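The theorem’s quantitative force can be checked with an exact binomial computation; the competence level 0.52 below is just an illustrative value:

```python
from math import comb

def majority_correct(n, p):
    """Exact probability that a simple majority of n independent voters,
    each individually correct with probability p, is correct (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Even a slight individual edge (p = 0.52) compounds rapidly with group size:
probs = [majority_correct(n, 0.52) for n in (1, 101, 1001)]
# ...while the "reverse" theorem bites when voters lean toward error:
reverse = majority_correct(101, 0.48)
```

A single voter is right 52% of the time, but the majority of a thousand such voters is right the large majority of the time; with individual competence below one half, the majority does worse than chance.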
Another way of expressing a similar core idea is to speak of the power of information pooling, or “The Wisdom of Crowds” (Surowiecki 2004). A striking illustration is due to Francis Galton, who performed a little experiment at an agricultural fair in rural England. About 800 participants were invited to estimate the weight of a displayed ox. Few participants had accurate individual estimates, but the average estimate, 1197 pounds, was almost identical to the true weight of the ox, 1198 pounds.
Let us examine the CJT more carefully to see if it lives up to its billing. The above roll-out is actually rather misleading. It hints at the notion that majority voting is an unconditionally reliable (truth-conducive) method, whereas in fact it is only conditionally reliable. That is, the group tends to be right only if all voters are individually biased in the direction of truth, for example, have a probability of 0.52 of being correct. But there is no a priori guarantee that each voter will be individually biased toward the truth. To put it another way, CJT does not imply unconditional reliability, because such reliability does not follow when, for example, all individual voters are more likely than not to be wrong. Indeed, for this circumstance a “reverse” form of CJT implies that the group’s likelihood of being right (when following the majority) approaches zero as group size increases. A further constraint built into CJT is that voters must form their opinions independently of one another, where independence is not an easy condition to satisfy. There are different ways to define independence, however (Dietrich and Spiekermann 2013), and these details are not pursued here.
Even if these crucial, and rather restrictive, constraints are met, it still cannot be said that majority voting is at the top of the veritistic (truth-conducive) heap as compared with rival voting methods. Shapley and Grofman (1984) and Nitzan and Paroush (1982) proved that the optimal voting scheme from the perspective of maximizing the group’s chance of getting the truth is a weighted scheme. A maximally truth-conducive weighting scheme would assign to each voter a weight wi that is proportional to log(pi / (1−pi)), where pi is the probability that voter i gets the correct answer. To illustrate, suppose that a local weather bureau wants the best method for predicting the weather, and can exploit the accuracy likelihoods of five independent experts, whose probabilities of correctly predicting rain versus non-rain are .90 for two of them and .60 for the other three. The optimal scheme for the bureau is not to give equal weight to all five forecasters, but instead to give weights of .392 to each of the two superior experts and weights of .072 to each of the three lesser experts. This weighting scheme gives the bureau a correctness probability of .927 as compared with a correctness probability of .877 for an equal weighting scheme. Since democracy is standardly associated with equal (i.e., unweighted) voting, democracy is not the best scheme from a purely epistemic standpoint. This does not necessarily show that democracy is an inferior political system, only that one might hesitate to make purely epistemic considerations the be-all and end-all of political desirability (a conception that Estlund 2008 dubs “epistocracy”).
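The figures in this example can be verified by brute-force enumeration of all 2^5 patterns of correct and incorrect answers, using the log-odds weights just described:

```python
from itertools import product
from math import log

# Individual accuracies of the five forecasters in the example:
p = [0.9, 0.9, 0.6, 0.6, 0.6]

# Optimal weights are proportional to the log-odds of each voter's accuracy:
raw = [log(q / (1 - q)) for q in p]
weights = [w / sum(raw) for w in raw]   # approx. [.392, .392, .072, .072, .072]

def group_accuracy(w):
    """Exact probability that the weighted majority is correct,
    enumerating all 2^5 patterns of who answers correctly."""
    total = 0.0
    for pattern in product([0, 1], repeat=len(p)):
        prob = 1.0
        for correct, q in zip(pattern, p):
            prob *= q if correct else 1 - q
        if sum(wi for wi, c in zip(w, pattern) if c) > 0.5:
            total += prob
    return total

optimal = group_accuracy(weights)      # approx. 0.927
equal = group_accuracy([0.2] * 5)      # approx. 0.877
```

Notice that under the optimal weights a lesser expert’s vote matters only as a tie-breaker: the two superior experts together (weight .784) outvote all three lesser experts combined (weight .216).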
Another perspective aimed at achieving the highest group competence emphasizes the value of a diverse set of voices or points of view (Hong and Page 2004; Sunstein 2006; Landemore 2011). Diversity expands the problem-solving approaches employed by the community and gathers a wider range of relevant evidence. Hong and Page (2004) produce evidence alleged to show that group success at problem-solving is less a function of its members’ abilities than their diversity of methods.
In political matters, group deliberation can be seen as occupying a stage prior to that of voting, a stage at which voters form their personal opinions by conversing or otherwise exchanging perspectives and arguments with other voters. This is thought by many theorists to be of fundamental importance to democracy.
However, there are two rather different conceptions of, or rationales for, the deliberative approach to democracy (Freeman 2000: 375–379). The first conception sees public deliberation as essential to the discovery of truth(s) about how best to promote the common good. In brief, it holds that deliberation is the best epistemic means to what is truly the common good—the presumed end of political association. The second conception, associated with Rawls and his followers, holds that deliberation is required to legitimate political institutions. To be legitimate, political institutions should be justifiable to all on the basis of reasons that all can reasonably accept. Notice that “justification” and “reasonableness” are both epistemic notions, or at least can be understood in epistemic senses. So this justification for deliberative democracy may also be epistemic, although it is also easily understood in non-epistemic senses (e.g., as directed to collective planning).
Each of the two deliberative approaches to democracy faces serious challenges, albeit of different kinds. Starting with the truth-oriented approach, what is the proof, or even substantial evidence, that public deliberation is conducive to truth, i.e., accuracy of judgment? And does such truth-conduciveness hold for political discourse in particular? If we consider actual deliberation as it occurs in existing democracies, there is much room for skepticism. In the United States (which may be a particularly problematic example) public political “deliberation” seems to consist substantially in communications deliberately crafted to distort important facts and appeal to narrow interests and biases rather than a concern for the common good. Even setting such distortions aside, social scientific studies reveal surprising failures of deliberating groups, such as cascades and group polarization (Sunstein 2006). Finally, although public deliberation might hold promise to be the ingredient needed to complement the Condorcet Jury Theorem in the pursuit of truth, a little reflection indicates that it fails this test. Our discussion of CJT indicated that majoritarian voting has excellent truth-conducive prospects when the inputs to such voting reflect pre-existing voter competences (likelihoods greater than .50). Such competences must be acquired at a pre-voting stage, and this might be achieved by public deliberation. We have seen, however, that CJT is not guaranteed to deliver its hoped-for goods unless individual competences are independent of one another, and public deliberation seems directly inconsistent with the independence requirement. This is not a categorical refutation, however. Claims that a certain mechanism or practice tends to have good (collective) epistemic consequences need not proceed by probabilistic considerations that hinge on independence assumptions; independence is required only if collective competence is to arise through the CJT.
But nothing dictates that it must arise in this fashion.
What of the second approach to deliberative democracy, the legitimizing and/or justifying approach? A main problem here is what kind of deliberation, justification, or reasoning is supposed to be at work. A traditional distinction is between theoretical and practical reasoning (and justification). There is no consensus, however, about how to draw this distinction. Roughly, practical reasoning aims at forming a decision, choice, or intention about what to do, whereas theoretical reasoning aims at deciding (or considering) what to believe. The attitudes directed at the “premises” are also different in the two cases. In theoretical reasoning the premise attitudes are all beliefs or credences (cognitive states of affairs), whereas in practical reasoning some of the premise attitudes are prior intentions, or related states such as desires and preferences. In collective practical deliberation, there may be “we-intentions” to endorse as a group member. Engaging in public deliberation of a political kind is often, and perhaps characteristically, a matter of practical rather than theoretical deliberation. Much of what political speakers say to their audiences is exhortative—to do this, or to plan or commit to doing that. It is doubtful whether such arguments, discourses, or reasonings are purely epistemic (though many of their premises will be epistemic). Of course, we speak about such inter-personal or collective discussion as “deliberation”. And the discourses are aimed at what we call “justification”. But the type of justification in question does not seem to be purely epistemic, hence not the proper province of epistemology (even social epistemology). A successful rationale for democracy along these lines would not be firmly epistemological, even if the term “justification” could legitimately be used.
Even if the most central problems of political theory cannot be construed in fundamentally epistemic terms, perhaps selected problems of political theory should be so construed. Consider, for example, a variety of issues surrounding speech and communication. Begin with the pivotal doctrine of freedom of speech. What is its fundamental rationale? Might this rationale be epistemic, or have a significant epistemic component?
Historically, a number of prominent philosophers and social theorists have embraced precisely this idea. John Milton (1644/1959) and John Stuart Mill (1859/1960) both contended that free, unregulated speech would promote the discovery and acceptance of truth better than the restriction or suppression of speech. Milton wrote, for example, “Let [Truth] and Falshood grapple; who ever knew Truth put to the wors, in a free and open encounter” (1959: 561). In the twentieth century similar statements were made by authoritative individuals, though frequently expressed in economic terms. Justice Holmes (dissenting) wrote:
the ultimate good desired is better reached by free trade in ideas—… the best test of truth is the power of the thought to get itself accepted in the competition of the market …. (Abrams v. United States, 1919: 630)
This Holmesian dictum became very influential in legal circles. In 1969, for example, the Supreme Court wrote:
It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail…. (Red Lion Broadcasting v. FCC, 1969: 390)
This “argument from truth” for freedom of speech is obviously an epistemic one.
Holmes’s economic version of the argument from truth has been influential. Frederick Schauer formulates the thesis as follows:
Just as Adam Smith’s “invisible hand” will ensure that the best products emerge from free competition, so too will an invisible hand ensure that the best ideas emerge when all opinions are permitted freely to compete. (1982: 161)
Of course, an idea’s “emergence” is not the same as its “prevailing” in the marketplace of ideas. The relevant kind of “prevailing” is presumably acceptance by a majority or super-majority of relevant people. Thus, a better formulation of the idea might be the following:
- (MMTP) More total truth possession will be achieved if speech is regulated only by free-market mechanisms rather than by other forms of regulation.
This is how the thesis is formulated (though not embraced) by Goldman and Cox (1996), who examine the claim with close attention to economic theory. The most favorable interpretation of the argument, they suggest, is that economic theory implies that competitive markets are the most efficient modes of social organization, and lead to the production and consumption of superior products. The realm of speech is a marketplace in which the products are messages, and superior messages will presumably be the true ones. Having competitive markets should lead to a maximization of the production and consumption of truths. This would comport with Schauer’s formulation (above) that Adam Smith’s “invisible hand” will ensure that the best products emerge from free competition. Goldman and Cox go on to argue that it’s a mistake to interpret economic theory as having this implication. It does not imply that competition inevitably generates the best products (however exactly “best” might be defined). Of course, it may still be correct that a “free market for speech” is the most truth-promoting type of communication institution even if it doesn’t flow from any “theorem” of economics. This leaves it open whether the thesis is true or not.
Even if large-scale generalizations about the veritistic consequences of assorted speech institutions are difficult to establish, this should not deter us from examining such matters for particular speech policies. Two widely discussed Supreme Court decisions—Citizens United v. FEC and McCutcheon v. FEC—have drawn on free speech considerations to bar limits on corporate and individual contributions to political campaigns, with major consequences for electoral politics in the United States. And commentators are rightfully wary of (if not outraged by) the epistemic consequences. The practice of granting corporations the same unlimited speech rights as individuals in electoral contexts must surely be assessed—at least partly if not primarily—by its epistemic consequences. The majority of the Court in the earlier Red Lion Broadcasting decision seemed to see this clearly when it recognized that the interests of the hearers or receivers of messages are at least as important as the interests of the “speakers”.
It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount. It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail rather than to countenance monopolization of that market, whether it be by the Government itself or a private licensee. It is the right of the public to receive suitable access to social, political, esthetic, moral, and other ideas and experiences which is crucial here. (Red Lion Broadcasting Co. v. FCC 1969: 390)
Moving from the electoral realm to the media environment more generally is highly appropriate in the era of the digital revolution. The impact of the Internet and the practices that affect its accessibility is a source of intense analysis, of course. And much of this analysis adopts an epistemic standpoint.
The Internet gives rise to online collaborative tools for aggregating information distributed among a large number of individuals who may not be experts on the topics they treat. A notable example is the free online encyclopedia Wikipedia. Wikipedia’s goal of making existing knowledge widely available is distinctively epistemic, so the question naturally arises as to how well it can achieve its aim. The question is of great practical importance, for on the one hand Wikipedia is one of the most widely used sources of information, while on the other hand there are pressing concerns about its epistemic quality. Since anyone can contribute anonymously to Wikipedia, there is no guarantee that writers of an entry are experts (or even know anything) about the topic at hand. Indeed, Wikipedia’s culture is sometimes said to openly deter experts from contributing. In addition, since contributions are anonymous or at any rate not easily trackable, contributors may vandalize pages or actively try to deceive readers by spreading false information. For three years Wikipedia contained an article on an alleged abolitionist named “Léon-Robert de l’Astran” who in fact never existed. Other cases involve false, self-serving edits by corporations, politicians and government agencies. Thus there are reasons to be pessimistic about Wikipedia’s epistemic prospects. Fallis (2011) offers a more optimistic assessment. As he points out, to give a fair assessment of Wikipedia we need to focus less on its absolute reliability than on its reliability compared to other sources of information. Here there are grounds for optimism. For instance, a 2005 article in Nature (Giles 2005) found that Wikipedia was only slightly less reliable than the venerable Encyclopaedia Britannica. Moreover, whether a source of information meets acceptable standards of reliability depends on the purposes for which it is used.
Many readers use Wikipedia only to satisfy their curiosity or as a mere starting point for in-depth research, and Wikipedia may well be reliable enough for these purposes. Worries about vandalism and deception are in part alleviated by the fact that Wikipedia has several built-in features (such as disclaimers and discussion pages) to help readers assess the reliability of an entry. Finally, Fallis points out that reliability is not the only virtue that matters when it comes to sources of information. In addition, we care about power (how much information can be acquired from a source), speed (how fast the information can be acquired) and fecundity (how many people have access to the source). Wikipedia may well be less reliable than traditional encyclopedias but it is certainly a more powerful, speedy and fecund source of information.
Law is a core institution in almost all societies. Here we focus not on the making of laws but on their enforcement; in other words, on the adjudicatory arm of the law. Legal adjudication characteristically features an assortment of agents and role-players who handle cases of alleged criminal offenses or civil harms and determine what treatment is in order. The process of making such determinations is therefore a social process. Since a principal aim of such undertakings is the determination of the truth, or the facts of the case, such undertakings are also plausibly understood as (at least partly) epistemological in nature. For any institutional arrangement or legal adjudication system, a theorist can ask how well this arrangement succeeds in its pursuit of truth. This real-world question is seriously on the table whenever a body of findings emerges (based on later DNA evidence, for example) showing that substantial numbers of innocent people have been convicted under the aegis of an existing system. In addition to considering the track record of an existing system, one wants to ask whether an alternative system might do better. This is a task for framers and modifiers of legal adjudication systems, an undertaking that should include social epistemology as a contributor. The system isn’t usually treated as a collective entity with beliefs of its own, but as an entity the operations of which influence the beliefs of certain components, which may be either individuals or collective entities (e.g., juries).
Getting at the truth isn’t the only goal of judicial systems. Another aim is premised on the recognition that any judicial system will make errors from time to time. However, since it is worse to convict an innocent person than to acquit a guilty person, the distribution of errors by a judicial system is important. When errors occur, it is better for them to be false acquittals than false convictions. The American judicial system includes a vast and complex body of rules geared towards achieving this objective. Many of these rules pertain to the kind of evidence that juries may consider while making decisions, and the degree of confidence they should have when returning guilty verdicts. For instance, juries may convict a person only if they are confident of her guilt “beyond a reasonable doubt”. Since evidence, confidence, reasonable doubt and so on are epistemological notions, these rules are of interest to the social epistemologist.
One philosopher, Larry Laudan (2006), sharply criticizes these rules on the ground that they are unclear, unjustified, and often impossible to apply. One example he discusses is the admissibility requirement for evidence. For evidence to be presented to a jury, it must not only be epistemically relevant to the legal status of the defendant, but also satisfy a variety of other demands. In particular, the evidence should not be “unfairly prejudicial” to the defendant. “Unfairly prejudicial” evidence may be evidence that the defendant has a bad character or graphic and gruesome pictures of the crime, etc. The justification for excluding evidence of this kind is that presenting it at trial may lead the jury to make decisions on purely visceral grounds and thus increase the rate of false conviction. Thus, more abstractly characterized, a piece of evidence e is unfairly prejudicial if the probability of false conviction given that e is presented at trial is higher than the probability of false acquittal if e is excluded. Laudan points out that it is often very hard or even impossible for a judge to determine whether a piece of evidence is unfairly prejudicial. We have very little empirical information about what sort of evidence can distort a jury’s ability to give such evidence its proper weight. Thus, judges’ decisions to exclude evidence on the basis that it is prejudicial are often arbitrary, as Laudan shows by looking at actual cases. Moreover, the justification for the rule displays a bizarre sort of epistemic paternalism toward jurors. Laudan argues that similar criticisms apply to other rules such as the “beyond reasonable doubt” principle.
Social moral epistemology studies two kinds of questions. First, just as moral epistemology investigates the acquisition of true moral beliefs, social moral epistemology studies how social institutions and practices foster or impede the acquisition of true moral beliefs (and true factual beliefs insofar as they bear on moral questions). Allen Buchanan (2002) argues that social moral epistemology understood in this sense is a crucial component of applied ethics: for applied ethics to achieve its goal of increasing morally correct behavior, it must identify the social mechanisms impeding the formation of true beliefs that play a role in right action. One example that Buchanan uses to illustrate this claim is medical paternalism, the practice (widespread among physicians up to the mid-1970s) of withholding information from a patient for her own good. Bioethicists have shown that arguments for this practice are patently unsound, relying either on a basic misunderstanding of the patient-physician relationship or a gross confusion of an individual’s medical good with her overall best interest. However, these criticisms have had little effect on the practice of physicians and leave us with a puzzle: why do highly educated individuals embrace such transparently wrong arguments to justify their paternalistic behavior? For Buchanan, the answer has to do with the institutional mechanisms and social norms that support the socially privileged status of the medical profession, which tend to disable physicians’ capacity for self-criticism and insulate the medical profession from outsiders’ criticisms. Buchanan concludes that to effectively decrease the incidence of paternalism in the medical community, it is not enough to criticize the arguments in its favor; one must also identify the social mechanisms insulating physicians from criticism so as to devise social interventions to correct these mechanisms.
Buchanan (2002, 2004) also argues that social moral epistemology provides a novel argument for political liberalism. Liberal institutions and attitudes, he claims, provide the best conditions for the formation of true beliefs that underlie right action. For instance, freedom of the press can help prevent the emergence of conflicts in ethnically divided societies: a free press can make it more difficult for racist propagandists to foster the belief that certain ethnic groups are unworthy of respect.
Elizabeth Anderson (2014) also gives an example of how social mechanisms and practices can foster the acquisition of true moral beliefs. In contrast to Buchanan, she focuses specifically on collective moral beliefs, and on how groups and societies come to learn new moral principles. Her specific example is the emergence of a consensus against the permissibility of slavery in 19th century Western societies. Anderson stresses the importance of “contention” in moral group learning. (For instance, episodes of rebellion and disobedience among slaves in the American South played a crucial role in advancing the cause of emancipation.) When faced with large-scale rebellion, the dominant group cannot simply impose its will, but must use other strategies that appeal to the subordinate group’s interests and thereby go some way toward recognizing its moral status. Like Buchanan, Anderson emphasizes the fact that social hierarchies are a source of moral bias. As she puts it,
it is extraordinarily difficult for social groups that exercise unaccountable power over other groups to distinguish what they want subordinate groups to do for them from what those groups are obligated to do.
Moral progress benefits from being organized in an egalitarian fashion, which occurs when all sides to a moral dispute are able to participate in the moral inquiry and to have their interests recognized.
Social moral epistemology also investigates the morality of our social practices involving truth, justification and knowledge. Miranda Fricker (2007) provides perhaps the most influential example of recent work in this vein. Fricker introduces the idea of epistemic injustice, which arises when somebody is wronged in her capacity as a knower. An easily recognizable form of such injustice is when a person or a social group is unfairly deprived of knowledge because of their lack of access to education or other epistemic resources. Fricker’s work focuses on two less obvious forms of epistemic injustice. The first is testimonial injustice, which occurs when a speaker is given less credibility than she deserves because the hearer has prejudices about a social group to which the speaker belongs. An example discussed at length by Fricker is Harper Lee’s To Kill a Mockingbird, in which an all-white jury refuses to believe the black defendant’s testimony because of racial prejudices. The second kind is hermeneutical injustice. This occurs when as a result of a group being socially powerless, members of the group lack the conceptual resources to make sense of certain distinctive social experiences. For instance, before the 1970s victims of sexual harassment had trouble understanding and describing the behavior of which they were the victims, because (in part as a result of women’s social powerlessness) the concept had not yet been articulated. Hookway (2010) builds on Fricker’s work and argues that there are other forms of epistemic injustice that do not involve testimony or conceptual resources. For instance, a teacher may refuse to consider a student’s question or objection as worthy of serious consideration because of a prejudice about the social group to which the student belongs.
Victims of epistemic injustice can suffer in practical terms. If a defendant is given less credibility than she deserves, she may end up being wrongly judged guilty. But, Fricker argues, the wrong done to somebody in her capacity as a knower is also an intrinsic harm. Our abilities as knowers are instances of our capacity for rationality, which is part of what makes us human beings intrinsically valuable. Epistemic injustice can also be harmful to the perpetrator herself: by giving less credibility to a speaker than she deserves, one may fail to acquire important knowledge.
To remedy epistemic injustice, Fricker stresses the importance of individual virtues to correct the effects of prejudices. For instance, an individual who possesses the virtue of testimonial justice will be attentive to the possibility that biases and prejudice affect her judgments about a speaker’s credibility, and will learn to distrust her credibility judgments when such biases may be operative. Linda Alcoff (2010) raises the worry that since cognitive biases are deeply entrenched and unconscious mental features, it may be difficult or even impossible to consciously correct the operation of these biases. Because of similar worries, Anderson (2012) argues that we need epistemically virtuous social institutions, not only individuals. For instance, egalitarian educational systems promote an equal distribution of markers of credibility (e.g., using standard grammar) and thus help prevent members of marginalized groups from experiencing testimonial injustice.
- Alcoff, Linda M., 2010, “Epistemic Identities”, Episteme, 7(2): 128–137.
- Anderson, Elizabeth, 2012, “Epistemic Justice as a Virtue of Social Institutions”, Social Epistemology, 26(2): 163–173.
- –––, 2014, “The Social Epistemology of Morality: Learning from the Forgotten History of the Abolition of Slavery”, in The Epistemic Life of Groups: Essays in the Epistemology of Collectives, M. Brady and M. Fricker (eds.), Oxford: Oxford University Press.
- Annis, D.B., 1978, “A Contextualist Theory of Epistemic Justification”, American Philosophical Quarterly, 15(3): 213–219.
- Aristotle, 1998, Politics, (trans.) C.D.C. Reeve. Indianapolis, IN: Hackett.
- Ballantyne, Nathan and E.J. Coffman, 2012, “Conciliationism and Uniqueness”, Australasian Journal of Philosophy, 90(4): 657–670.
- Barnes, Barry and David Bloor, 1982, “Relativism, Rationalism, and the Sociology of Knowledge”, in Rationality and Relativism, M. Hollis and S. Lukes (eds.), Cambridge, MA: MIT Press.
- Bird, Alexander, 2014, “When Is There a Group that Knows? Distributed Cognition, Scientific Knowledge, and the Social Epistemic Subject”, in Lackey 2014: 42–63.
- Bloor, David, Barry Barnes, and John Henry, 1996, Scientific Knowledge: A Sociological Analysis, Chicago: University of Chicago Press.
- Boghossian, Paul, 2006, Fear of Knowledge: Against Relativism and Constructionism, New York: Oxford University Press.
- Brandom, R., 1983, “Assertion”, Nous, 17(4): 637–650.
- Brown, James Robert, 2001, Who Rules in Science? An Opinionated Guide to the Wars, Cambridge, MA: Harvard University Press.
- Buchanan, Allen, 2002, “Social Moral Epistemology”, Social Philosophy and Policy, 19(2): 126–152.
- –––, 2004, “Political Liberalism and Social Epistemology”, Philosophy and Public Affairs, 32(2): 95–130.
- Burge, Tyler, 1993, “Content Preservation”, Philosophical Review, 102: 457–488.
- Christensen, David, 2007, “Epistemology of Disagreement: The Good News”, Philosophical Review, 116(2): 187–217.
- Coady, C.A.J., 1992, Testimony, Oxford: Oxford University Press.
- Craig, Edward, 1990, Knowledge and the State of Nature, Oxford: Oxford University Press.
- Dietrich, Franz and Kai Spiekermann, 2013, “Independent Opinions? On the Causal Foundations of Belief Formation and Jury Theorems”, Mind, 122(487).
- Durkheim, Emile, 1997, The Division of Labor in Society, New York: Free Press.
- Elga, Adam, 2007, “Reflection and Disagreement”, Noûs, 41(3): 478–502.
- –––, 2010, “How to Disagree about how to Disagree”, in Feldman and Warfield 2010: 175–186.
- Estlund, David M., 2008, Democratic Authority: A Philosophical Framework, Princeton, NJ: Princeton University Press.
- Fallis, Don, 2011, “Wikipistemology”, in Goldman and Whitcomb 2011: 297–313.
- Faulkner, Paul, 2007, “What is Wrong with Lying?”, Philosophy and Phenomenological Research, 75: 535–57.
- Feldman, Richard, 2007, “Reasonable Religious Disagreements”, in L. Antony (ed.), Philosophers without Gods, (pp. 194–214), Oxford: Oxford University Press.
- Feldman, Richard and Ted A. Warfield (eds.), 2010, Disagreement, Oxford: Oxford University Press.
- Foley, Richard, 1994, “Egoism in Epistemology”, in Schmitt 1994b: 53–73.
- Forman, Paul, 1971, “Weimar Culture, Causality and Quantum Theory, 1918–1927: Adaptation by German Physicists and Mathematicians to a Hostile Intellectual Environment”, in Historical Studies in the Physical Sciences 3, R. McCormmach (ed.), Philadelphia: University of Pennsylvania Press.
- Freeman, Samuel, 2000, “Deliberative Democracy: A Sympathetic Comment”, Philosophy and Public Affairs, 29(4): 371–418.
- Fricker, Elizabeth, 1994, “Against Gullibility”, in B.K. Matilal and A. Chakrabarti (eds.), Knowing from Words, (pp. 125–161), Dordrecht: Kluwer Academic Publishers.
- Fricker, Miranda, 2007, Epistemic Injustice, Oxford: Oxford University Press.
- –––, 2012, “Group Testimony: The Making of a Good Informant”, Philosophy and Phenomenological Research, 84: 249–276.
- Fuller, Steve, 1987, “On Regulating What is Known: A Way to Social Epistemology”, Synthese, 73(1): 145–184.
- –––, 1988, Social Epistemology, Bloomington, IN: Indiana University Press.
- Gilbert, Margaret, 1989, On Social Facts, New York: Routledge.
- Giles, Jim, 2005, “Internet Encyclopaedias Go Head to Head: Jimmy Wales' Wikipedia Comes Close to Britannica in Terms of the Accuracy of its Science Entries”, Nature, 438(7070): 900–1.
- Goldberg, Sanford C., 2010, Relying on Others: An Essay in Epistemology, Oxford: Oxford University Press.
- –––, 2011, “‘If That Were True I Would Have Heard It by Now’”, in Goldman and Whitcomb 2011: 92–108.
- Goldman, Alvin I., 1978, “Epistemics: The Regulative Theory of Cognition”, Journal of Philosophy, 75(10): 509–523.
- –––, 1979, “What Is Justified Belief?”, in G. Pappas (ed.), Knowledge and Justification, (pp. 1–23), Dordrecht: Reidel.
- –––, 1986, Epistemology and Cognition, Cambridge, MA: Harvard University Press.
- –––, 1987, “Foundations of Social Epistemics”, Synthese, 73(1): 109–144.
- –––, 1994, “Argumentation and Social Epistemology”, Journal of Philosophy, 91: 27–49.
- –––, 1999, Knowledge in a Social World, Oxford: Oxford University Press.
- –––, 2001/2011, “Experts: Which Ones Should You Trust?”, Philosophy and Phenomenological Research, 63(1): 85–110. Reprinted in Goldman and Whitcomb 2011: 109–133.
- –––, 2010/2011, “Systems-Oriented Social Epistemology”, in T.S. Gendler and J. Hawthorne (eds.), Oxford Studies in Epistemology, vol. 3 (pp. 189–214). Reprinted as “A Guide to Social Epistemology”, in Goldman and Whitcomb 2011: 11–37.
- –––, 2014, “Social Process Reliabilism: Solving Justification Problems in Collective Epistemology”, in Lackey 2014: 11–41.
- Goldman, Alvin I. and James Cox, 1996, “Speech, Truth, and the Free Market for Ideas”, Legal Theory, 2: 1–32.
- Goldman, Alvin I. and Moshe Shaked, 1991, “An Economic Model of Scientific Activity and Truth Acquisition”, Philosophical Studies, 63: 31–55.
- Goldman, Alvin I. and Dennis Whitcomb (eds), 2011, Social Epistemology: Essential Readings, New York: Oxford University Press.
- Grofman, Bernard, Guillermo Owen, and Scott L. Feld, 1983, “Thirteen Theorems in Search of Truth”, Theory and Decision, 13: 261–278.
- Haddock, Adrian, Alan Millar, and Duncan Pritchard (eds.), 2010, Social Epistemology, Oxford: Oxford University Press.
- Hakli, Raul, 2011, “On Dialectical Justification of Group Beliefs”, in H. B. Schmid, D. Sirtes, and M. Weber (eds.), Collective Epistemology, (pp. 119–153), Frankfurt: Ontos Verlag.
- Hinchman, Edward S., 2005, “Telling as Inviting to Trust”, Philosophy and Phenomenological Research, 70: 562–87.
- Holmes, Oliver Wendell, 1919, Abrams v. United States (dissenting).
- Hong, Lu and Scott Page, 2004, “Groups of Diverse Problem Solvers Can Outperform Groups of High-Ability Problem Solvers”, Proceedings of the National Academy of Sciences of the United States, 101(46): 16385–16389.
- Hookway, Christopher, 2010, “Some Varieties of Epistemic Injustice: Reflections on Fricker”, Episteme, 7(2): 151–163.
- Hutchins, Edwin, 1995, Cognition in the Wild, Cambridge, MA: MIT Press.
- Kelly, Thomas, 2005, “The Epistemic Significance of Disagreement”, in Oxford Studies in Epistemology, Volume 1, T.S. Gendler and J. Hawthorne (eds.), Oxford: Oxford University Press.
- –––, 2010, “Peer Disagreement and Higher-Order Evidence”, in Feldman and Warfield 2010: 111–174.
- Kitcher, Philip, 1990, “The Division of Cognitive Labor”, Journal of Philosophy, 87: 5–22.
- –––, 1993, The Advancement of Science, New York: Oxford University Press.
- Kornhauser, L.A. and L.G. Sager, 1986, “Unpacking the Court”, Yale Law Journal, 96: 82–117.
- –––, 2004, “The Many as One: Integrity and Group Choice in Paradoxical Cases”, Philosophy and Public Affairs, 32: 249–276.
- Kuhn, Thomas, 1962, The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
- Kukla, Andre, 2000, Social Construction and the Philosophy of Science, London: Routledge.
- Kusch, Martin, 2002, Knowledge by Agreement, Oxford: Oxford University Press.
- Lackey, Jennifer, 2008, Learning from Words: Testimony as a Source of Knowledge, Oxford: Oxford University Press.
- –––, 2010, “A Justificationist View of Disagreement’s Epistemic Significance” in Haddock, Millar, and Pritchard 2010: 298–325.
- –––, 2011, “Testimony: Acquiring Knowledge from Others”, in Goldman and Whitcomb 2011: 71–91.
- ––– (ed.), 2014, Essays in Collective Epistemology, New York: Oxford University Press.
- –––, forthcoming, “What Is Justified Group Belief?”
- Lackey, Jennifer and Ernest Sosa (eds.), 2006, The Epistemology of Testimony, Oxford: Oxford University Press.
- Lammenranta, Markus, 2004, “Theories of Justification”, in I. Niiniluoto, M. Sintonen, and J. Wolenski (eds.), Handbook of Epistemology, (467–495), Dordrecht: Kluwer Academic Publishers.
- Landemore, Helene, 2011, Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many, Princeton, NJ: Princeton University Press.
- Latour, Bruno and Steve Woolgar, 1986, Laboratory Life: The Construction of Scientific Facts, Princeton: Princeton University Press.
- Laudan, Larry, 2006, Truth, Error, and Criminal Law: An Essay in Legal Epistemology, New York: Cambridge University Press.
- Lipton, Peter, 1998, “The Epistemology of Testimony”, Studies in the History and Philosophy of Science, 29: 1–31.
- List, Christian and Philip Pettit, 2002, “Aggregating Sets of Judgments: An Impossibility Result”, Synthese, 140: 207–235.
- –––, 2011, Group Agency: The Possibility, Design, and Status of Corporate Agents, Oxford: Oxford University Press.
- Mackenzie, Donald, 1981, Statistics in Britain: 1865–1930, The Social Construction of Scientific Knowledge, Edinburgh: Edinburgh University Press.
- Merton, Robert K., 1957, “Priorities in Scientific Discovery”, American Sociological Review, 22: 635–659.
- Mill, John Stuart, 1859/1960, “On Liberty”, in On Liberty, Representative Government, The Subjection of Women, New York: Oxford University Press.
- Milton, John, 1644/1959, “Areopagitica, A Speech for the Liberty of Unlicensed Printing”, in E. Sirluck (ed.), Complete Prose Works of John Milton, New Haven: Yale University Press.
- Moran, Richard, 2006, “Getting Told and Being Believed”, in Lackey and Sosa 2006: 272–306.
- Nitzan, Shmuel and Jacob Paroush, 1982, “Optimal Decision Rules in Uncertain Dichotomous Choice Situations”, International Economic Review, 23: 289–297.
- Pettit, Philip, 2003, “Groups with Minds of Their Own”, in F. Schmitt (ed.) Socializing Metaphysics, (pp. 167–193). Lanham, MD: Rowman and Littlefield. Reprinted in Goldman and Whitcomb 2011: 242–268.
- –––, 2006, “When to Defer to Majority Testimony—and When Not”, Analysis, 66 (3): 179–187.
- Red Lion Broadcasting Co. v. FCC, 395 U.S. 367, 1969.
- Rorty, Richard, 1979, Philosophy and the Mirror of Nature, Princeton, NJ: Princeton University Press.
- Rosen, Gideon, 2001, “Nominalism, Naturalism, Philosophical Relativism”, Philosophical Perspectives, 15: 69–91.
- Ross, Angus, 1986, “Why Do We Believe What We Are Told?”, Ratio, 28: 69–88.
- Schauer, Frederick, 1982, Free Speech: A Philosophical Enquiry, New York: Cambridge University Press.
- Schmitt, Frederick, 1994a, “The Justification of Group Beliefs”, in Schmitt 1994b: 257–287.
- ––– (ed.), 1994b, Socializing Epistemology: The Social Dimensions of Knowledge, Lanham, MD: Rowman & Littlefield.
- –––, 2010, “The Assurance View of Testimony”, in Haddock, Millar, and Pritchard 2010: 216–242.
- Shapin, Steven, 1975, “Phrenological Knowledge and the Social Structure of Early Nineteenth-Century Edinburgh”, Annals of Science, 32: 219–243.
- Shapley, Lloyd and Bernard Grofman, 1984, “Optimizing Group Judgmental Accuracy in the Presence of Interdependence”, Public Choice, 43: 329–343.
- Sosa, Ernest, 2010, “The Epistemology of Disagreement”, in Haddock, Millar, and Pritchard 2010: 278–297.
- Strevens, Michael, 2003, “The Role of the Priority Rule in Science”, Journal of Philosophy, 100(2): 55–79.
- Sunstein, Cass, 2006, Infotopia: How Many Minds Produce Knowledge, New York: Oxford University Press.
- Surowiecki, James, 2004, The Wisdom of Crowds: Why the Many are Smarter than the Few and How Collective Wisdom Shapes Business, Economics, Societies, and Nations, New York: Doubleday.
- Talbott, William and Alvin I. Goldman, 1998, “Games Lawyers Play: Legal Discovery and Social Epistemology”, Legal Theory, 4: 93–163.
- Van Cleve, James, 2006, “Reid on the Credit of Human Testimony”, in Lackey and Sosa 2006: 50–74.
- Wedgwood, Ralph, 2007, The Nature of Normativity, Oxford: Oxford University Press.
- Weisberg, Michael and Ryan Muldoon, 2009, “Epistemic Landscapes and the Division of Cognitive Labor”, Philosophy of Science, 76(2): 225–252.
- Williams, Michael, 1999, Groundless Belief: An Essay on the Possibility of Epistemology (2nd edition), Princeton, NJ: Princeton University Press.
- Zagzebski, Linda, 2012, Epistemic Authority: A Theory of Trust, Authority and Autonomy in Belief, Oxford: Oxford University Press.
- Zollman, Kevin, 2007, “The Communication Structure of Epistemic Communities”, Philosophy of Science, 74(5): 574–587.
- –––, 2010, “The Epistemic Benefit of Transient Diversity”, Erkenntnis, 72(1): 17–35.