This is a file in the archives of the Stanford Encyclopedia of Philosophy. 
In §1 we will examine the basic assumptions which generate the problem of truthlikeness, and which in part explain why the problem emerged when it did. Attempted solutions to the problem quickly proliferated, but they can all be gathered together into two broad lines of attack. The first, the content approach (§2), was initiated by Popper in his groundbreaking work. However, because it treats truthlikeness as a function of just two variables, neither Popper's original proposals, nor subsequent attempts to elaborate them, can fully capture the richness of the concept. The second, the similarity approach (§3), takes the likeness in truthlikeness seriously. Although it promises to catch more of the complexity of the concept than does the content approach, it faces two serious problems: whether the approach can be suitably generalized to complex examples (§5), and whether it can be developed in a way that is translation invariant (§6).
Truthlikeness is a relative latecomer to the philosophical scene largely because it wasn't until the latter half of the twentieth century that mainstream philosophers gave up on the Cartesian goal of infallible knowledge. The idea that we are quite possibly, even probably, mistaken in our most cherished beliefs, that they might well be just false, was mostly considered tantamount to capitulation to the skeptic. By the middle of the twentieth century, however, it was clear that natural science postulated a very odd world behind the phenomena, one rather remote from our everyday experience, one which renders many of our commonsense beliefs, as well as previous scientific theories, strictly speaking, false. Further, the increasingly rapid turnover of scientific theories suggested that, far from being established as certain, they are ever vulnerable to refutation, and typically are eventually refuted, to be replaced by some new theory. Taking the dismal view, the history of inquiry is a history of theories shown to be false, replaced by other theories awaiting their turn at the guillotine.
Realism affirms that the primary aim of inquiry is the truth of some matter. Epistemic optimism affirms that the history of inquiry is one of progress with respect to its primary aim. But fallibilism affirms that, typically, our theories are false or very likely to be false, and when shown to be false they are replaced by other false theories. To combine all three ideas, we must affirm that some false propositions better realize the goal of truth (are closer to the truth) than others. So the optimistic realist who has discarded infallibilism has a problem: the problem of truthlikeness.
While a multitude of apparently different solutions to the problem have been proposed, it is now standard to classify them into two main approaches: the content approach and the likeness approach.
According to Popper, Hume had shown not only that we can't verify an interesting theory, we can't even render it more probable. Luckily, there is an asymmetry between verification and falsification. While no finite amount of data can verify or probabilify an interesting scientific theory, it can falsify the theory. According to Popper, it is the falsifiability of a theory which makes it scientific. In his early work, he implied that the only kind of progress an inquiry can make consists in the falsification of theories. This is a little depressing, to say the least. What it lacks is the idea that a succession of falsehoods can constitute genuine cognitive progress. Perhaps this is why, for many years after first publishing these ideas in his 1934 Logik der Forschung, Popper received pretty short shrift from the philosophers. If all we can say with confidence is “Missed again!” and “A miss is as good as a mile!”, and the history of inquiry is a sequence of such misses, then epistemic pessimism follows. Popper eventually realized that this naive falsificationism is compatible with optimism provided we have an acceptable notion of verisimilitude (or truthlikeness). If some false hypotheses are closer to the truth than others, if verisimilitude admits of degrees, then the history of inquiry may turn out to be one of steady progress towards the goal of truth. Moreover, it may be reasonable, on the basis of the evidence, to conjecture that our theories are indeed making such progress even though it would be unreasonable to conjecture that they are true simpliciter.
Popper saw very clearly that the concept of truthlikeness is easily confused with the concept of epistemic probability, and that it has often been so confused. (See Popper 1963 for a history of the confusion.) Popper's insight here was undoubtedly facilitated by his deep, and largely unjustified, antipathy to epistemic probability. His starkly falsificationist account favors bold, contentful theories. Degree of informative content varies inversely with probability: the greater the content, the less likely a theory is to be true. So if you are after theories which seem, on the evidence, to be true, then you will eschew those which make bold (that is, highly unlikely) predictions. On this picture the quest for theories with high probability must be quite wrongheaded. Certainly we want inquiry to yield true propositions, but not any old truths will do. A tautology is a truth, and as certain as anything can be, but it is never the answer to any interesting inquiry outside mathematics and logic. What we want are deep truths, truths which capture more, rather than less, of the whole truth.
What, then, is the source of the widespread confusion between probability and truthlikeness? Epistemic probability measures the degree of seeming to be true, while truthlikeness measures degree of being like the truth. Seeming and being like might at first strike one as closely related, but of course they are rather different. Seeming concerns the appearances whereas being like concerns the objective facts, facts about similarity or likeness. Even more important, there is a difference between being true and being the truth. The truth, of course, has the property of being true, but not every proposition that is true is the truth in the sense required by the aim of inquiry. The truth of a matter at which an inquiry aims has to be the complete, true answer. Thus there are two dimensions along which probability (seeming to be true) and truthlikeness (being like the truth) differ radically.
To see this distinction clearly, and to articulate it, was one of Popper's most significant contributions, not only to the debate about truthlikeness, but to philosophy of science and logic in general. As we will see, however, his deep antagonism to probability combined with his great love affair with boldness was both a blessing and a curse. The blessing: it led him to produce not only the first interesting and important account of truthlikeness, but to initiate a whole approach to the problem: the content approach (Oddie 1978, 1981 and 1986, Zwart 2000). The curse, as is now almost universally recognized: content alone is insufficient to characterize truthlikeness.
Popper made the first assay on the problem in his famous collection Conjectures and Refutations. First, let a matter for investigation be circumscribed by a language L adequate for discussing it. (Popper was a great admirer of Tarski's assay on the concept of truth and strove to model his theory of truthlikeness on Tarski's theory.) The world induces a partition of sentences of L into those that are true and those that are false. The set of all true sentences is thus a complete true account of the world, as far as that investigation goes. It is aptly called the Truth, T. T is the target of the investigation couched in L. It is the theory that we are seeking, and, if truthlikeness is to make sense, theories other than T, even false theories, come more or less close to capturing T.
T, the Truth, is a theory only in the technical Tarskian sense, not in the ordinary everyday sense of that term. It is a set of sentences closed under the consequence relation: a consequence of some sentences in the set is also a sentence in the set. T may not be finitely axiomatisable, or even axiomatisable at all. Where the language involves elementary arithmetic it follows (from Gödel's theorem) that T won't be axiomatisable. However, it is a perfectly good set of sentences all the same. In general we will follow the Tarski-Popper usage here and call any set of sentences closed under consequence a theory, and we will assume that each proposition we deal with is identified with the theory it generates in this sense. (Note that when theories are classes of sentences, theory A logically entails theory B just in case B is a subset of A.)
The complement of T, the set of false sentences F, is not a theory even in this technical sense. Since falsehoods always entail truths, F is not closed under the consequence relation. (This is part of the reason we have no complementary expression like “the Falsth”. The set of false sentences does not describe a possible alternative to the actual world.) But F too is a perfectly good set of sentences. Any theory A that can be formulated in L will thus divide its consequences between T and F. Popper called the intersection of A and T the truth content of A (A_{T}), and the intersection of A and F the falsity content of A (A_{F}). Any theory A is thus the union of its nonoverlapping truth content and falsity content. Note that since every theory entails all logical truths, these will constitute a special set, at the center of T, which will be included in every theory, whether true or false.
A false theory will cover some of F, but because every false theory has true consequences, it will also overlap with some of T (Diagram 1).
Diagram 1: Truth and falsity contents of false theory A
A true theory, however, will only cover T (Diagram 2):
Diagram 2: True theory A is identical to its own truth content
Amongst true theories, then, it seems that the more true sentences a theory entails, the closer it gets to T, and hence the more truthlike it is. Set-theoretically that simply means that, where A and B are both true, A will be more truthlike than B just in case B is a subset of A (which for true theories means that the truth content of B is a subset of the truth content of A).
Diagram 3: True theory A has more truth content than true theory B
This account has some nice features. It induces a partial ordering of truths, with the whole Truth T at the top of the ordering: T is closer to the Truth than any other true theory. The set of logical truths is at the bottom: further from the Truth than any other true theory. In between these two extremes, true theories are ordered simply by logical strength: the more logical content, the closer to the Truth. Since epistemic probability varies inversely with logical strength, amongst truths the theory with the greatest truthlikeness (T) must have the smallest probability, and the theory with the largest probability (the logical truth) is the furthest from the Truth. Popper's love affair with logical strength is thus consummated in his first sketch of an account of truthlikeness.
Popper made a bold and simple generalization of this. Just as truth content (coverage of T) counts in favour of truthlikeness, falsity content (coverage of F) counts against. In general then, a theory A is closer to the truth if it has more truth content without engendering more falsity content, or has less falsity content without sacrificing truth content (Diagram 4):
Diagram 4: False theory A closer to the Truth than false theory B
This account also has some nice features. It preserves the comparisons of true theories mentioned above. The truth content A_{T} of a false theory A (itself a theory) will clearly be closer to the truth than A (Diagram 1). More generally, a true theory A will be closer to the truth than a false theory B provided A's truth content exceeds B's.
Despite these nice features the account suffers the following fatal flaw: it entails that no false theory is closer to the truth than any other. This was shown independently by Pavel Tichý and David Miller (in 1973, reported in Miller 1974 and Tichý 1974). It is instructive to see why. Let us suppose that A and B are both false, and that A's truth content exceeds B's. Let a be a true sentence entailed by A but not by B. Let f be any falsehood entailed by A. Since A entails both a and f, the conjunction a&f is a falsehood entailed by A, and so part of A's falsity content. If a&f were also part of B's falsity content, B would entail both a and f. But then it would entail a, contrary to the assumption. Hence a&f is in A's falsity content and not in B's. So A's truth content cannot exceed B's without A's falsity content also exceeding B's. Suppose now that B's falsity content exceeds A's. Let g be some falsehood entailed by B but not by A, and let f, as before, be some falsehood entailed by A. The sentence f→g is a truth, and since it is entailed by g, it is in B's truth content. If it were also in A's then both f and f→g would be consequences of A, and hence so would g, contrary to the assumption. Thus A's truth content lacks a sentence, f→g, which is in B's. So B's falsity content cannot exceed A's without B's truth content also exceeding A's. The relationship depicted in Diagram 4 simply cannot obtain.
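The Tichý-Miller result can also be checked by brute force. The following sketch (my own illustration, not part of the original article) identifies each sentence with its range over a small eight-world space, one sentence per equivalence class, defines truth and falsity contents à la Popper, and confirms that Popper's definition never ranks one consistent false theory closer to the truth than another.

```python
from itertools import product, combinations

# Worlds: truth-value assignments to three atomic states (say h, r, w).
WORLDS = list(product([True, False], repeat=3))
ACTUAL = (True, True, True)

# Identify each sentence with its range (one sentence per equivalence class):
# every subset of WORLDS is the range of some sentence of the language.
SENTENCES = [frozenset(c)
             for k in range(len(WORLDS) + 1)
             for c in combinations(WORLDS, k)]

def truth_content(theory):
    """True sentences entailed by the theory (entailment = range inclusion)."""
    return frozenset(s for s in SENTENCES if theory <= s and ACTUAL in s)

def falsity_content(theory):
    """False sentences entailed by the theory."""
    return frozenset(s for s in SENTENCES if theory <= s and ACTUAL not in s)

# Consistent false theories: nonempty ranges excluding the actual world.
false_theories = [t for t in SENTENCES if t and ACTUAL not in t]
tc = {t: truth_content(t) for t in false_theories}
fc = {t: falsity_content(t) for t in false_theories}

def popper_closer(a, b):
    """Popper's comparison: a is closer to the truth than b iff a has at least
    as much truth content and at most as much falsity content, and is strictly
    better on one of the two counts."""
    return (tc[b] <= tc[a] and fc[a] <= fc[b]
            and (tc[b] < tc[a] or fc[a] < fc[b]))

# No false theory is closer to the truth than any other false theory.
assert not any(popper_closer(a, b)
               for a in false_theories for b in false_theories)
```

The check exhausts all pairs of false theories in this space, so it confirms the theorem for the finite case rather than merely exhibiting one counterexample.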
It is tempting to retreat to something like the comparison of truth contents alone. But then we get a result which is almost as bad: that a false theory is the closer to the truth the stronger it is. So, for example, since the false proposition that all heavenly bodies are made of green cheese is logically stronger than the false proposition that all heavenly bodies orbiting the earth are made of green cheese, the former is closer to the truth. And once we know a theory is false we can be confident that tacking on any arbitrary proposition will lead us inexorably closer to the truth. Amongst false theories, brute strength becomes the only criterion of a theory's likeness to truth.
After the failure of Popper's proposal there have been two main variations on the content approach. One stays within Popper's essentially syntactic paradigm, comparing classes of true and false sentences (e.g. Schurz and Weingartner 1987, Newton-Smith 1981). The other makes the switch to a more semantic paradigm, searching for a plausible theory of distance between propositions, construing these not as classes of sentences, but rather as classes of possibilities. One main variant takes the class of models of a language as a surrogate for possible states of affairs (Miller 1978a). The other utilizes a semantics of incomplete possible states like those favored by structuralist accounts of scientific theories (Kuipers 1987). The idea which these share in common is that the distance between two propositions is measured by the symmetric difference of the two classes of associated states. Roughly speaking, the larger the symmetric difference, the greater the distance between the two propositions.
If the truth is taken to be represented by a unique model, or complete possible world, then we end up with results very close to Popper's truth content account (Oddie 1978). In particular, false propositions are closer to the truth the stronger they are. However, if we take the structuralist approach then we will take the relevant states of affairs to be “small” states of affairs (chunks of the world rather than the entire world) and then the possibility of more fine-grained distinctions between theories opens up. The most promising recent developments exploring this idea are to be found in Volpe 1995.
The fundamental problem with the original pure content approach lies not with the particular proposals but with the underlying strength assumption: that verisimilitude is a function of just two variables, logical strength and truth value. This assumption has a number of somewhat counterintuitive consequences.
Firstly, it is clear that whatever function of strength and truth value one selects, a given theory A can have only two degrees of verisimilitude: one in case it is false and the other in case it is true. This is obviously wrong. A theory can be false in very many different ways. The proposition that there are eight planets is false whether there are nine planets or a thousand planets, but the degree of truthlikeness is much higher in the first case than in the second. We will see later that the degree of verisimilitude of a true theory may also vary according to where the truth lies.
Secondly, the brute strength assumption entails that if we fix truth value, verisimilitude will vary only with strength. So, for example, two equally strong false theories will have to have the same degree of verisimilitude. That's pretty farfetched. That there are eight planets and that there are a thousand planets are (intuitively) equally strong, and both are false in fact (assuming that Pluto is indeed a planet), but the latter is much further from the truth.
Finally, how does strength determine verisimilitude amongst false theories? There are really only two plausible candidates: that verisimilitude increases with increasing strength, or that it decreases with increasing strength. Both proposals are at odds with attractive judgements and principles. One does not necessarily make a step toward the truth by reducing the content of a false proposition. The proposition that the moon is the only heavenly body made of green cheese is logically stronger than the proposition that the moon is made of green cheese, but the latter hardly seems a step towards the truth. Nor does one necessarily make a step toward the truth by increasing the content of a false theory. The false proposition that all heavenly bodies are made of green cheese is logically stronger than the false proposition that all heavenly bodies orbiting the earth are made of green cheese, but it doesn't seem to constitute progress towards the truth.
A possible world is a complete possible way for things to be. It is a complete distribution of properties, relations and magnitudes over the appropriate kinds of items. Naturally, these distributions are relativized to a certain collection of features. A proposition carves the class of possibilities into two: those in which the proposition is true and those in which it is false. Call the class of worlds in which the proposition is true its range. Some have proposed that propositions simply be identified with their ranges, but whether or not that identification is plausible, certainly the range of a proposition is an important aspect of it. It is the proposition's truth condition. Normal logical relations and operations correspond to well-understood set-theoretic relations and operations on ranges. The range of the conjunction of two propositions is the intersection of the ranges of the two conjuncts. Entailment corresponds to the subset relation on ranges. The actual world is a single point in logical space, a complete specification of every matter of fact, and a proposition is true if its range contains the actual world, false otherwise. The Truth is the complete true proposition: that proposition which entails all true propositions. It is none other than the singleton of the actual world. That singleton is the target, the bullseye, the thing at which the most comprehensive inquiry is aiming.
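The range picture can be made concrete in a few lines of code. In this sketch (an illustration using a hypothetical three-atom space, not machinery from the article itself), worlds are truth-value assignments, a proposition is the set of worlds at which it holds, conjunction is intersection, and entailment is the subset relation.

```python
from itertools import product

# Worlds: assignments of truth values to three atoms, indexed 0..7.
ATOMS = ("h", "r", "w")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=3)]
ACTUAL_INDEX = 0  # the world where all three atoms are true

def range_of(condition):
    """A proposition's range: the (indices of) worlds at which it is true."""
    return frozenset(i for i, world in enumerate(WORLDS) if condition(world))

h = range_of(lambda world: world["h"])
r = range_of(lambda world: world["r"])

h_and_r = h & r                       # conjunction = intersection of ranges
def entails(a, b): return a <= b      # entailment = subset relation on ranges

# The Truth: the singleton of the actual world; it entails every truth.
the_truth = frozenset({ACTUAL_INDEX})

assert entails(h_and_r, h)            # h&r entails h
assert entails(the_truth, h_and_r)    # the Truth entails every true proposition
assert ACTUAL_INDEX in h              # h's range contains the actual world: h is true
```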
In addition to the set-theoretic structures which underlie the familiar logical relations, the logical space might be structured by similarity or likeness. For example, worlds might be more or less like other worlds. There might be a betweenness relation amongst worlds, or even a fully-fledged distance metric. If that's the case we can start to see how one proposition might be closer to the Truth, the target world, than another. Suppose, for example, that worlds are arranged in similarity spheres nested around the actual world, familiar from the Stalnaker-Lewis approach to counterfactuals. Consider Diagram 5:
Diagram 5: Verisimilitude by similarity circles
The bullseye is the actual world and the small sphere which includes it is T, the Truth. The nested spheres represent likeness to the actual world. A world is less like the actual world the larger the first sphere of which it is a member. Propositions A and B are false, C and D are true. A carves out a class of worlds which are rather close to the actual world (all within spheres two to four) whereas B carves out a class rather far from the actual world (all within spheres five to seven). Intuitively A is closer to the bullseye than is B.
The largest sphere which does not overlap at all with a proposition is plausibly a measure of how close the proposition is to being true. Call that the truth factor. A proposition X is closer to being true than Y if the truth factor of X is included in the truth factor of Y. The truth factor of A, for example, is the smallest nonempty sphere, T itself, whereas the truth factor of B is the fourth sphere, of which T is a proper subset.
If a proposition includes the bullseye then of course it is true simpliciter; it has the maximal truth factor (the empty set). So all true propositions are equally close to being true. But truthlikeness is not just a matter of being close to being true. The tautology, D, C and the Truth itself are equally true, but in that order they increase in their closeness to the whole truth. Taking a leaf out of Popper's book, we can regard closeness to the whole truth as in part a matter of the degree of informativeness of a proposition. In the case of true propositions, this correlates roughly with the smallest sphere which totally includes the proposition. The further out the outermost sphere, the less informative the proposition is, because the larger the area of the logical space which it covers. So, in a way which echoes Popper's account, we could take truthlikeness to be a combination of the truth factor and the content factor.
X is closer to the truth than Y if and only if X does as well as Y on both truth factor and content factor, and better on at least one of those.
Applying this definition we capture two judgements, in addition to those already mentioned, that seem intuitively acceptable: that C is closer to the truth than A, and that D is closer than B. (Note, however, that we have here a partial ordering: A and D, for example, are not ranked). We can derive various apparently desirable features of the relation closer to the truth: for example, that the relation is transitive, asymmetric and irreflexive; that the Truth is closer to the Truth than any other theory; that the tautology is at least as far from the Truth as any other truth; that one cannot make a true theory worse by strengthening it by a truth; that a falsehood is not necessarily improved by adding another falsehood. But there are also some worrying features here. No falsehood can be closer to the truth than any truth, for example. So Newton's theory is no closer to the Truth than the tautology. That's bad.
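The two-factor comparison can be tried out on a toy model. In the sketch below (my own reconstruction; the propositions are hypothetical stand-ins for A-D of Diagram 5), each world is represented by the rank of the innermost sphere containing it, so a proposition's truth factor corresponds to its minimum rank and its content factor to its maximum rank.

```python
# Worlds are identified with sphere ranks 1..7: rank 1 is the innermost
# sphere T around the bullseye. A proposition is the set of ranks of the
# worlds it contains.
A = {2, 3, 4}           # false: misses rank 1, but stays close to the bullseye
B = {5, 6, 7}           # false: far from the bullseye
C = {1, 2, 3}           # true and fairly strong
D = {1, 2, 3, 4, 5, 6}  # true but weak
TAUTOLOGY = {1, 2, 3, 4, 5, 6, 7}

def closer(X, Y):
    """X is closer to the truth than Y: at least as good on both the truth
    factor (smaller minimum rank) and the content factor (smaller maximum
    rank), and strictly better on at least one of them."""
    return (min(X) <= min(Y) and max(X) <= max(Y)
            and (min(X) < min(Y) or max(X) < max(Y)))

assert closer(C, A) and closer(D, B) and closer(A, B)
assert not closer(A, D) and not closer(D, A)   # A and D are not ranked
assert not closer(A, TAUTOLOGY)  # no falsehood beats even the tautology
```

The last assertion illustrates the worrying feature noted above: on this definition a falsehood, however accurate, never comes out closer to the truth than any truth.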
Stating Hilpinen's account in the above fashion masks its departure from Popper's account. The incorporation of similarity spheres marks a fundamental break with the pure content approach, and opens up a range of possible new accounts.
One objection to Hilpinen's proposal is that it simply takes as given the similarity relation. Tichý anticipated this objection, and at the end of his 1974 paper he not only suggested the use of similarity rankings on worlds, but also provided a ranking in simple cases and indicated how to generalize this to more complex cases.
Examples and counterexamples in Tichý 1974 are very simple, framed in a language with three primitives: h (for the state hot), r (for rainy) and w (for windy). The sentences of this language are taken to express propositions over a dinky little eight-membered logical space. Tichý took judgements of truthlikeness like the following to be self-evident: Suppose that in fact it is hot, raining and windy. Then the proposition that it is cold, dry and still (expressed by the sentence ~h&~r&~w) is further from the truth than the proposition that it is cold, rainy and windy (expressed by the sentence ~h&r&w). And the proposition that it is cold, dry and windy (expressed by the sentence ~h&~r&w) is somewhere between the two. These kinds of judgements are taken to be core intuitions which any adequate account of truthlikeness would have to deliver, and which Popper's theory patently cannot handle. Unlike Popper, Tichý is not trying to find the missing theoretical bridge to epistemic optimism in a fallibilist philosophy of science. Rather, he takes the intuitive concept of truthlikeness to be as much a standard component of the intellectual armory of the folk as is the concept of truth. Doubtless, like the concept of truth, it needs tidying up and trimming down, but he assumes that it is basically sound, and that the job of the philosopher-logician is to explicate it: to give a precise, logically perspicuous, consistent account which captures the core intuitions and excludes core counterintuitions. In the grey areas, where our intuitions are not clear, it is a case of “spoils to the victor”: the best account of the core intuitions can legislate where the intuitions are fuzzy or contradictory.
Worlds differ in the distributions of these traits, and a natural, albeit simple, suggestion is to measure the likeness between two worlds by the number of agreements on traits. (This is tantamount to taking distance to be measured by the size of the symmetric difference of generating states. As is well known, this will generate a genuine metric, in particular one satisfying the triangle inequality.) If w1 is the actual world, this immediately induces a system of nested spheres, but one in which the spheres come with numbers attached:
(Representative worlds labeled in the diagram: w1 = h&r&w, w2 = h&r&~w, w5 = ~h&r&w, w8 = ~h&~r&~w.)
Diagram 6: Similarity circles for the weather space
Those worlds orbiting on sphere n are of distance n from the actual world. Now that we have numbers attached, we can do something a little more ambitious than the partial ordering induced by Hilpinen's proposal. A numerical measure can be defined as some function of distances, from the actual world, of worlds in the range of a proposition. One particularly simple proposal is to take the average distance of worlds from the actual world. This is tantamount to measuring the distance from actuality of the “center of gravity” of the proposition.
This idea of averaging delivers all of the particular judgements we used above to motivate Hilpinen's proposal, but in conjunction with the metric it delivers more comparisons. For example, we have the following sample propositions in descending order of truthlikeness:
Truth Value    Proposition    Distance
true           h&r&w          0
true           h&r            0.5
false          h&r&~w         1.0
true           h              1.3
false          h&~r           1.5
false          ~h             1.7
false          ~h&~r&w        2.0
false          ~h&~r          2.5
false          ~h&~r&~w       3.0
So far these results look quite pleasing. Propositions are closer to the truth the more they get the basic weather traits right, further away the more mistakes they make. A false proposition may be made either worse or better by strengthening (~w is the same distance from the Truth as ~h; h&r&~w is better than ~w while ~h&~r&~w is worse). A false proposition (like h&r&~w) can be closer to the truth than some true propositions (like h).
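The metric and the averaging proposal are straightforward to implement. The sketch below is my own illustration: it uses plain, unweighted averaging, which reproduces the table's entries for conjunctions of two or more literals (the article's values for single-literal propositions may reflect a different convention).

```python
from itertools import product

WORLDS = list(product([True, False], repeat=3))  # (h, r, w) assignments
ACTUAL = (True, True, True)                       # hot, rainy and windy

def world_distance(u, v):
    """Number of traits on which two worlds disagree (symmetric-difference size)."""
    return sum(a != b for a, b in zip(u, v))

def truthlikeness_distance(proposition):
    """Average distance from actuality of the worlds in the proposition's range."""
    worlds = [w for w in WORLDS if proposition(w)]
    return sum(world_distance(w, ACTUAL) for w in worlds) / len(worlds)

h_and_r = lambda w: w[0] and w[1]                 # hot and rainy
cold_dry_still = lambda w: not (w[0] or w[1] or w[2])
tautology = lambda w: True

assert truthlikeness_distance(h_and_r) == 0.5
assert truthlikeness_distance(cold_dry_still) == 3.0
assert truthlikeness_distance(tautology) == 1.5
```

Note that the tautology's "center of gravity" sits at distance 1.5, halfway out, which already hints at the complications discussed next.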
But a number of problems immediately arise. First, the proposal embodies presuppositions of equal weight: both for basic states (in the distance metric on worlds) and for worlds (in the averaging procedure). These simplifications can be easily relaxed. We can weight the different factors according to their importance for the purposes of similarity, and we can clearly take a weighted average rather than a straight average. More importantly, some of the apparently pleasing general features of Hilpinen's account are violated. We can find pairs of true propositions such that the stronger is further from the truth. The tautology is not the true proposition furthest from the Truth (Popper 1976).
Truth Value    Proposition    Distance
true           h∨~r∨w         1.4
true           h∨~r           1.5
true           h∨~h           1.5
In deciding how to proceed here we confront a methodological problem. The methodology exemplified by Tichý is very much bottom-up. For the purposes of deciding between rival accounts it takes the intuitive data very seriously. Popper, and Popperians like Miller, take a far more top-down approach. They are suspicious of folk intuitions, and consider themselves to be in the business of constructing a new concept rather than explicating an existing one. They do place enormous weight on certain plausible general principles, largely those that fit in with other principles of their overall theory of science: like the principle that strength is a virtue and that the stronger of two true theories is the closer to the Truth. A third approach, one which lies between these two extremes, is that of reflective equilibrium. This recognizes the claims of both intuitive judgements on low-level cases, and plausible high-level principles, and enjoins us to bring principle and judgement into equilibrium, possibly by tinkering with both. Neither intuitive low-level judgements nor plausible high-level principles are given advance priority. The protagonist in the truthlikeness debate who argues most consistently for this approach is Niiniluoto.
How does this impact on the current dispute? Consider a different space of possibilities, generated by a single magnitude like the number of the planets (N). For the sake of the argument let's agree that N is in fact 9 and that the further n is from 9, the further the proposition that N=n is from the Truth. Consider three sets of propositions. In the left-hand column we have a sequence of false propositions which, intuitively, decrease in truthlikeness while increasing in strength. In the middle column we have a sequence of corresponding true propositions, in each case the strongest true consequence of its false counterpart on the left (Popper's “truth content”). Again members of this sequence steadily increase in strength. Finally, on the right we have another column of falsehoods. These are also steadily increasing in strength, and like the left-hand falsehoods, seem also to be decreasing in truthlikeness.
Falsehood (1)    Strongest True Consequence    Falsehood (2)
11≤N≤20          N=9 or 11≤N≤20                N=10 or 11≤N≤20
12≤N≤20          N=9 or 12≤N≤20                N=10 or 12≤N≤20
……               ……                            ……
19≤N≤20          N=9 or 19≤N≤20                N=10 or 19≤N≤20
N=20             N=9 or N=20                   N=10 or N=20
Judgements about the closeness of the true propositions to the truth may be less clear than are intuitions about their left-hand counterparts. However, it would seem highly incongruous to judge the truths to be steadily increasing in truthlikeness, while the falsehoods on the right, minimally different in content, steadily decrease in truthlikeness. So both the bottom-up approach and reflective equilibrium suggest that all three are sequences of steadily increasing strength combined with steadily decreasing truthlikeness. And that is enough to overturn Popper's claim that amongst true theories strength and truthlikeness covary. This removes an objection to averaging (or weighted averaging), but does not settle the issue in its favor, for there may still be other more plausible counterexamples to averaging that we have not considered.
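Under the simple averaging measure, the left-hand column's verdicts can be computed directly. This sketch is my own illustration, assuming the distance between number-hypotheses is just |n − 9|; it confirms that the falsehoods 11≤N≤20, 12≤N≤20, …, N=20 grow steadily more distant from the truth as they grow stronger.

```python
TRUE_N = 9   # assume, with the article, that the number of planets is 9

def avg_distance(hypothesis):
    """Average distance |n - 9| over the numbers compatible with the hypothesis."""
    return sum(abs(n - TRUE_N) for n in hypothesis) / len(hypothesis)

# Falsehood (1): 11<=N<=20, then 12<=N<=20, ..., 19<=N<=20, and finally N=20.
falsehoods = [range(k, 21) for k in range(11, 21)]
distances = [avg_distance(h) for h in falsehoods]

# Each strengthening step moves the hypothesis further from the truth.
assert distances == sorted(distances)
assert distances[0] == 6.5 and distances[-1] == 11.0
```

Whether the middle and right-hand columns behave the same way under straight averaging depends on the details of the measure, which is part of what is at issue between the rival approaches.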
One fruitful way of generalizing the simple idea to complex spaces involves cutting such spaces down into finite chunks. This can be done in various ways, but one promising idea (Tichý 1974, Niiniluoto 1976) is to make use of a certain kind of normal form: Hintikka's distributive normal forms (Hintikka 1963). Hintikka defined what he called constituents, the first-order analogues of maximal conjunctions in propositional logic: like maximal conjunctions, they are jointly exhaustive and mutually exclusive. Constituents lay out, in a very perspicuous manner, all the different ways individuals can be related. Every sentence in a first-order language comes with a certain depth: the number of embedded quantifiers required to formulate it. So, for example, (1) is a depth-1 sentence, (2) is depth-2, and (3) is depth-3.

(1) Everyone loves himself.
(2) Everyone loves another.
(3) Everyone who loves another loves the other's lovers.

We could call a proposition depth-d if the shallowest depth at which it can be expressed is d. What Hintikka showed is that every depth-d proposition can be expressed by a disjunction of depth-d constituents. Constituents can be represented as finite tree-structures, the nodes of which are like straightforward conjunctions of atomic states. Consequently, if we can measure distance between such trees we will be well down the path of measuring the truthlikeness of depth-d propositions: it will be some function of the distance of the constituents in its normal form from the true depth-d constituent.
This program has proved flexible and fruitful, delivering a wide range of intuitively appealing results in simple first-order cases. Further, the general idea can be extended in a number of different directions: to higher-order languages and to spaces based on functions rather than properties.
Take our simple weather framework above. This traffics in three primitives: hot, rainy, and windy. Suppose, however, that we define the following two new weather conditions:

minnesotan =_{df} hot if and only if rainy
arizonan =_{df} hot if and only if windy

Now it appears as though we can describe the same sets of weather states in hmaese, a language based on these conditions.
     hrwese       hmaese
T    h&r&w        h&m&a
A    ~h&r&w       ~h&~m&~a
B    ~h&~r&w      ~h&m&~a
C    ~h&~r&~w     ~h&m&a
If T is the truth about the weather, then theory A, in hrwese, seems to make just one error concerning the original weather states, while B makes two and C makes three. However, if we express these three theories in hmaese, then this is reversed: A appears to make three errors, B still makes two, and C makes only one. But that means the account makes truthlikeness, unlike truth, radically language-relative.
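The reversal can be checked mechanically. The sketch below (an illustration, not part of the original text) encodes a weather state as a bit-triple over (h, r, w), re-expresses it via the definitions of minnesotan and arizonan, and counts disagreements with the truth in each framework:

```python
def hamming(x, y):
    """Number of atomic conditions on which two states disagree."""
    return sum(a != b for a, b in zip(x, y))

def to_hma(state):
    """Re-express an (h, r, w) state as (h, m, a),
    with m = (h iff r) and a = (h iff w)."""
    h, r, w = state
    return (h, int(h == r), int(h == w))

T = (1, 1, 1)   # h & r & w
A = (0, 1, 1)   # ~h & r & w
B = (0, 0, 1)   # ~h & ~r & w
C = (0, 0, 0)   # ~h & ~r & ~w

for name, s in [("A", A), ("B", B), ("C", C)]:
    print(name, hamming(T, s), hamming(to_hma(T), to_hma(s)))
# In hrwese A, B, C make 1, 2, 3 errors; in hmaese the ordering reverses: 3, 2, 1.
```

The same four states, under the defined re-description, come out at exactly the reversed distances from the truth, which is the language-relativity worry in miniature.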
There are two live responses to this criticism. But before detailing them, note a dead one: the similarity theorist cannot object that hmaese is somehow logically different from hrwese, on the grounds that the primitives of the former are essentially biconditional whereas the primitives of the latter are not. This is because there is a perfect symmetry between the two. Starting within hmaese we can arrive at the original primitives by exactly analogous definitions:
rainy =_{df} hot if and only if minnesotan
windy =_{df} hot if and only if arizonan

Thus if we are going to object to hmaese it will have to be on other than purely logical grounds.
Firstly, then, the similarity theorist could maintain that certain predicates (presumably “hot”, “rainy” and “windy”) are primitive in some absolute, realist, sense. Such predicates “carve reality at the joints” whereas others (like “minnesotan” and “arizonan”) are gerrymandered affairs. With the demise of predicate nominalism as a viable account of properties and relations this approach is not as unattractive as it might have seemed in the middle of the last century. Realism about universals is certainly on the rise. While this version of realism presupposes a sparse theory of properties (that is to say, it is not the case that to every definable predicate there corresponds a genuine universal), such theories have been championed both by those doing traditional a priori metaphysics of properties (e.g. Bealer 1982) as well as those who favor a more empiricist, scientifically informed approach (e.g. Armstrong 1978, Tooley 1977). According to Armstrong, for example, which predicates pick out genuine universals is a matter for developed science. The primitive predicates of our best fundamental physical theory will give us our best guess at what the genuine universals in nature are. They might be predicates like electron or mass, or more likely something even more abstruse and remote from the phenomena, like the primitives of String Theory.
One apparently powerful objection to this realist solution is that it would render the task of empirically estimating degree of truthlikeness completely hopeless. Even if we knew a priori which primitives should be used in the computation of distances between theories, it would be difficult to estimate truthlikeness, but not impossible. For example, we might compute the distance of a theory from the various possibilities for the truth, and then take a weighted average, weighting each possible true theory by its probability on the evidence. That would be the credence-mean estimate of truthlikeness. However, if we don't know which features should count towards the computation of similarities and distances then we cannot even get off first base.
To see this, consider our simple weather frameworks. Suppose that all I learn is that it is rainy. Do I thereby have some grounds for thinking A is closer to the truth than B? I would if I also knew that hrwese is the language for calculating distances. For then, whatever the truth is, A makes one fewer mistake than B makes. A gets it right on the rain factor, while B doesn't, and they must score the same on the other two factors whatever the truth of the matter. But if we switch to hmaese then A's epistemic superiority is no longer guaranteed. If, for example, T is the truth, then B will be closer to the truth than A. That's because in the hma framework raininess as such doesn't count in favor of or against the truthlikeness of a proposition.
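The credence-mean idea can be sketched as follows. The uniform credences over the four rainy states are an assumption for illustration, as is the use of simple Hamming distance between states; neither is mandated by the text:

```python
def hamming(x, y):
    """Number of atomic (h, r, w) conditions on which two states disagree."""
    return sum(a != b for a, b in zip(x, y))

def expected_distance(theory, credences):
    """Credence-mean estimate: probability-weighted average distance of a
    theory from each candidate for the truth.
    credences: dict mapping candidate true states to probabilities."""
    return sum(p * hamming(theory, t) for t, p in credences.items())

# Suppose the evidence 'it is rainy' leaves the four rainy states equally likely:
credences = {(1, 1, 1): 0.25, (1, 1, 0): 0.25,
             (0, 1, 1): 0.25, (0, 1, 0): 0.25}

A = (0, 1, 1)   # ~h & r & w
B = (0, 0, 1)   # ~h & ~r & w
print(expected_distance(A, credences), expected_distance(B, credences))
# A's expected distance is exactly one less than B's, whatever rainy state obtains.
```

Calculated in hrwese, A's expected distance from the truth beats B's by exactly one mistake, mirroring the argument in the paragraph above; recomputing in hma coordinates would destroy that guarantee.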
However, this objection fails if there can be empirical indicators of which conditions are the genuine ones, the ones that carve reality at the joints. Obviously the framework would have to contain more than just h, m and a. It would have to contain resources for describing the states that indicate whether these were genuine primitives. Maybe whether they enter into genuine causal relations will be important, for example. Once we can distribute probabilities over the various candidates for the real universals, then we can use those probabilities to weight the various possible distances which a hypothesis might be from any given theory.
The second live response is both more modest and more radical. It is more modest in that it is not hostage to the objective priority of a particular conceptual scheme, whether that priority is accessed a priori or a posteriori. It is more radical in that it denies a premise of the invariance argument that at first blush is apparently obvious. It denies the equivalence of the two conceptual schemes. It denies that h&r&w, for example, expresses the very same proposition as h&m&a expresses. If we deny translatability then we can grant the invariance principle, and grant the judgements of distance in both cases, but remain untroubled. There is no contradiction. (Tichý 1978, Oddie 1986).
At first blush this seems truly desperate. Haven't the respective conditions been defined in such a way that they are simply equivalent by fiat? That would, of course, be the case if m and a had been introduced as defined terms into hrw. But if that were the intention then the similarity theorist could retort that the calculation of distances should proceed in terms of the primitives, not the introduced terms. However, that is not the only way the argument can be read. We are asked to contemplate two partially overlapping sequences of conditions, and two spaces of possibilities generated by those two sequences. We can thus think of each possibility as a point in a simple three-dimensional space. These points are ordered triples of 0s and 1s, the nth entry being a 1 if the nth condition is satisfied and 0 if it isn't. Thinking of possibilities in this way, we already have rudimentary geometrical features generated simply by the selection of generating conditions. Points are adjacent if they differ on only one dimension. A path is a sequence of adjacent points. A point q is between two points p and r if q lies on a shortest path from p to r. A region of possibility space is convex if it is closed under the betweenness relation: anything between two points in the region is also in the region.
Evidently we have two spaces of possibilities, S1 and S2, and the question now arises whether a sentence interpreted over one of these spaces expresses the very same thing as any sentence interpreted over the other. Does h&r&w express the same thing as h&m&a? h&r&w expresses (the singleton of) u1 (which is the entity <1,1,1> in S1 or <1,1,1>_{S1}) and h&m&a expresses v1 (the entity <1,1,1>_{S2}). ~h&r&w expresses u2 (<0,1,1>_{S1}), a point adjacent to that expressed by h&r&w. However ~h&~m&~a expresses v8 (<0,0,0>_{S2}), which is not adjacent to v1 (<1,1,1>_{S2}). So now we can construct a simple proof that the two sentences do not express the same thing.
u1 is adjacent to u2
v1 is not adjacent to v8
therefore
either u1 is not identical to v1, or u2 is not identical to v8
therefore
either h&r&w and h&m&a do not express the same thing, or ~h&r&w and ~h&~m&~a do not express the same thing.
Thus at least one of the two required intertranslatability claims fails, and hrwese is not intertranslatable with hmaese. The important point here is that a space of possibilities already comes with a structure, and the points in such a space cannot be individuated without reference to the rest of the space and its structure. The identity of a possibility is bound up with its geometrical relations to other possibilities. Different relations, different possibilities.
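The proof's two premises can be verified directly. The coordinates below follow the text's convention that <1,1,1> is the point satisfying all three generating conditions of its space:

```python
def adjacent(p, q):
    """Two points of a possibility space are adjacent iff they
    differ on exactly one dimension."""
    return sum(a != b for a, b in zip(p, q)) == 1

u1, u2 = (1, 1, 1), (0, 1, 1)   # expressed by h&r&w and ~h&r&w in S1
v1, v8 = (1, 1, 1), (0, 0, 0)   # expressed by h&m&a and ~h&~m&~a in S2

print(adjacent(u1, u2), adjacent(v1, v8))  # True False
```

Since adjacency is part of the geometry of each space, identifying u1 with v1 and u2 with v8 simultaneously would require a point to be both adjacent and not adjacent to another, which is the contradiction the proof exploits.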
This idea meshes well with recent work on conceptual spaces in Gärdenfors (2000). Gärdenfors is concerned both with the semantics and the nature of genuine properties, and his bold and simple hypothesis is that properties carve out convex regions of an n-dimensional quality space. He supports this hypothesis with an impressive array of logical, linguistic and empirical data. (Looking back at our little spaces above it is not hard to see that the convex regions are those that correspond to the generating (or atomic) conditions and conjunctions of those. See Oddie 1987a.) While Gärdenfors is dealing with properties, it is not hard to see that similar considerations apply to propositions, since propositions can be regarded as 0-ary properties.
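The convexity hypothesis is easy to test in the little weather space. The sketch below (an illustration under the definitions of the preceding paragraphs) checks that an atomic condition like h carves out a convex region while a biconditional such as h-iff-r does not; the betweenness test relies on the standard fact that, in a hypercube of 0/1 points, q lies on a shortest path from p to r iff q agrees with p or with r on every dimension:

```python
from itertools import product

def between(q, p, r):
    """q lies on a shortest path from p to r iff, on every dimension,
    q agrees with p or with r."""
    return all(b in (a, c) for a, b, c in zip(p, q, r))

def convex(region):
    """A region is convex iff anything between two of its points is in it."""
    points = list(product((0, 1), repeat=3))
    return all(q in region
               for p in region for r in region
               for q in points if between(q, p, r))

space = set(product((0, 1), repeat=3))
h_region = {p for p in space if p[0] == 1}       # the atomic condition h
h_iff_r = {p for p in space if p[0] == p[1]}     # a biconditional condition
print(convex(h_region), convex(h_iff_r))  # True False
```

That the gerrymandered, biconditionally defined regions fail convexity is one way of cashing out the intuition that conditions like minnesotan do not carve reality at the joints.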
Ultimately, however, this response seems less than entirely satisfactory by itself. If the choice of a conceptual space is just a matter of taste then we may be forced to embrace a radical kind of incommensurability. Those who talk hrwese might conjecture ~h&r&w on the basis of the available evidence. Those who talk hmaese while exposed to the “same” circumstances would presumably conjecture ~h&~m&~a on the basis of the “same” evidence (or the corresponding evidence that they gather). If in fact h&r&w is the truth (in hrwese) then the hrw weather researchers will be close to the truth, but the hma researchers will be very far from it. This may not be an explicit contradiction, but it should be worrying. Realists started out with the ambition of defending a concept of truthlikeness which would enable them to embrace both fallibilism and optimism. But what they have ended up with is something that smacks of rather too radical a version of the incommensurability of competing conceptual frameworks. Presumably the realist will need to add that some conceptual schemes really are better than others. Some “carve reality at the joints” and others don't. But is that something the realist will be reluctant to affirm?
Graham Oddie oddie@spot.colorado.edu 