This is a file in the archives of the Stanford Encyclopedia of Philosophy.

- 1. Introduction
- 2. Truth Conditions for Indicative Conditionals
- 3. The Suppositional Theory
- 4. Truth Conditions Revisited: Stalnaker and Jackson
- 5. Other Conditional Speech Acts and Propositional Attitudes
- Bibliography
- Other Internet Resources
- Related Entries

It is controversial how best to classify conditionals. According to some theorists, the forward-looking "indicatives" (those with a "will" in the main clause) belong with the "subjunctives" (those with a "would" in the main clause), and not with the other "indicatives". (See Gibbard (1981, pp. 222-6), Dudman (1984, 1988), Bennett (1988). Bennett (1995) changed his mind. Jackson (1990) defends the traditional view.) The easy transition from typical "wills" to "woulds" is indeed a datum to be explained. Still, straightforward statements about the past, present or future, to which a conditional clause is attached -- the traditional class of indicative conditionals -- do (in my view) constitute a single semantic kind. The theories to be discussed do not fare better or worse when restricted to a particular subspecies.

As well as conditional statements, there are conditional commands, promises, offers, questions, etc. As well as conditional beliefs, there are conditional desires, hopes, fears, etc. Our focus will be on conditional statements and what they express -- conditional beliefs; but we will consider which of the theories we have examined extends most naturally to these other kinds of conditional.

Three kinds of theory will be discussed. In §2 we compare truth-functional and non-truth-functional accounts of the truth conditions of conditionals. In §3 we examine what I call the suppositional theory: that conditional judgements essentially involve suppositions. On development, it turns out to be incompatible with construing conditionals as statements with truth conditions. §4 looks at some responses from advocates of truth conditions. In §5 we consider a wider variety of conditional speech acts and propositional attitudes.

Where I need to distinguish between different interpretations, I
write "*A* ⊃ *B*" for the
truth-functional conditional, "*A* → *B*" for a non-truth-functional conditional and "*A* ⇒ *B*" for the conditional as
interpreted by the suppositional theory; and for brevity I call
protagonists of the three theories Hook, Arrow and Supp,
respectively. I use "~" for negation.

The truth-functional theory of the conditional was integral to
Frege's new logic (1879). It was taken up enthusiastically by
Russell (who called it "material implication"), Wittgenstein in the
*Tractatus*, and the logical positivists, and it is now found in
every logic text. It is the first theory of conditionals which
students encounter. Typically, it does not strike students as
*obviously* correct. It is logic's first surprise. Yet, as
the textbooks testify, it does a creditable job in many
circumstances. And it has many defenders. It is a strikingly simple
theory: "If *A*, *B*" is false when *A* is true and
*B* is false. In all other cases, "If *A*, *B*"
is true. It is thus equivalent to "~(*A*&~*B*)" and
to "~*A* or *B*". "*A* ⊃ *B*" has, by stipulation, these truth conditions.

*If* "if" is truth-functional, this is the right truth function
to assign to it: of the sixteen possible truth-functions of
*A* and *B*, it is the only serious candidate. First,
it is uncontroversial that when *A* is true and *B* is
false, "If *A*, *B*" is false. A basic rule of
inference is modus ponens: from "If *A*, *B*" and
*A*, we can infer *B*. If it were possible to have
*A* true, *B* false and "If *A*, *B*"
true, this inference would be invalid. Second, it is uncontroversial
that "If *A*, *B*" is *sometimes* true when
*A* and *B* are respectively (true, true), or (false,
true), or (false, false). "If it's a square, it has four sides",
said of an unseen geometric figure, is true, whether the figure is a
square, a rectangle or a triangle. Assuming truth-functionality --
that the truth value of the conditional is *determined* by the
truth values of its parts -- it follows that a conditional is
*always* true when its components have these combinations of
truth values.
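This elimination argument can be checked by brute force. The sketch below (an illustration, not part of the entry's text) enumerates all sixteen binary truth-functions and applies the two constraints just given:

```python
from itertools import product

# The four rows of the truth table for (A, B).
rows = [(True, True), (True, False), (False, True), (False, False)]

# A binary truth-function assigns a truth value to each row: 2**4 = 16 in all.
functions = [dict(zip(rows, values))
             for values in product([True, False], repeat=4)]

# Constraint 1: "If A, B" is false when A is true and B is false
# (otherwise modus ponens could lead from truths to a falsehood).
# Constraint 2: "If A, B" is sometimes true on each of the other three rows;
# given truth-functionality, "sometimes true" there forces "always true" there.
candidates = [f for f in functions
              if not f[(True, False)]
              and f[(True, True)] and f[(False, True)] and f[(False, False)]]

assert len(candidates) == 1
# The unique survivor is the material conditional, i.e. ~A or B.
assert all(candidates[0][(a, b)] == ((not a) or b) for a, b in rows)
```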

Non-truth-functional accounts agree that "If *A*, *B*"
is false when *A* is true and *B* is false; and they
agree that the conditional is sometimes true for the other three
combinations of truth-values for the components; but they deny that
the conditional is always true in each of these three cases. Some
agree with the truth-functionalist that when *A* and
*B* are both true, "If *A*, *B*" must be
true. Some do not, demanding a further relation between the facts
that *A* and that *B* (see Read (1995)). This dispute
need not concern us, as the arguments which follow depend only on the
feature on which non-truth-functionalists agree: that when *A*
is false, "If *A*, *B*" may be either true or
false. For instance, I say (*) "If you touch that wire, you will get
an electric shock". You don't touch it. Was my remark true or
false? According to the non-truth-functionalist, it depends on
whether the wire is live or dead, on whether you are insulated, and
so forth. Robert Stalnaker's (1968) account is of this type:
consider a possible situation in which you touch the wire, and which
otherwise differs minimally from the actual situation. (*) is true
(false) according to whether or not you get a shock in that possible
situation.

Let *A* and *B* be two logically independent
propositions. The four lines below represent the four incompatible
logical possibilities for the truth values of *A* and
*B*. "If *A*, *B*", "If ~*A*, *B*"
and "If *A*, ~*B*" are interpreted truth-functionally
in columns (i)-(iii), and non-truth-functionally (when their
antecedents are false) in columns (iv)-(vi). The
non-truth-functional interpretation we write
"*A* → *B*". "T/F" means both truth
values are possible for the corresponding assignment of truth values
to *A* and *B*. For instance, line 4, column (iv),
represents two possibilities for (*A*, *B*, If
*A*, *B*): (F, F, T) and (F, F, F).

Truth-Functional Interpretation

|    | *A* | *B* | (i) *A* ⊃ *B* | (ii) ~*A* ⊃ *B* | (iii) *A* ⊃ ~*B* |
|----|-----|-----|---------------|-----------------|------------------|
| 1. | T   | T   | T             | T               | F                |
| 2. | T   | F   | F             | T               | T                |
| 3. | F   | T   | T             | T               | T                |
| 4. | F   | F   | T             | F               | T                |

Non-Truth-Functional Interpretation

|    | *A* | *B* | (iv) *A* → *B* | (v) ~*A* → *B* | (vi) *A* → ~*B* |
|----|-----|-----|----------------|----------------|-----------------|
| 1. | T   | T   | T              | T/F            | F               |
| 2. | T   | F   | F              | T/F            | T               |
| 3. | F   | T   | T/F            | T              | T/F             |
| 4. | F   | F   | T/F            | F              | T/F             |

Suppose you start off with no information about which of the four
possible combinations of truth values for *A* and *B*
obtains. You then acquire compelling reason to think that either
*A* or *B* is true. You don't have any stronger
belief about the matter. In particular, you have no firm belief as to
whether *A* is true or not. You have ruled out line 4. The
other possibilities remain open. Then, intuitively, you are justified
in inferring that if ~*A*, *B*. Look at the
possibilities for *A* and *B* on the left. You have
eliminated the possibility that both *A* and *B* are
false. So if *A* is false, only one possibility remains:
*B* is true.

The truth-functionalist (call him Hook) gets this right. Look at
column (ii). Eliminate line 4 and line 4 only, and you have
eliminated the only possibility in which
"~*A* ⊃ *B*" is false. You
know enough to conclude that
"~*A* ⊃ *B*" is true.

The non-truth-functionalist (call her Arrow) gets this wrong. Look at
column (v). Eliminate line 4 and line 4 only, and some possibility of
falsity remains in other cases which have not been ruled out. By
eliminating just line 4, you do not thereby eliminate these further
possibilities, incompatible with line 4, in which
"~*A* → *B*" is false.

The same point can be made with negated conjunctions. You discover
for sure that ~(*A*&*B*), but nothing stronger than
that. In particular, you don't know whether *A*. You rule
out line 1, nothing more. You may justifiably infer that if
*A*, ~*B*. Hook gets this right. In column (iii), if we
eliminate line 1, we are left only with cases in which
"*A* ⊃ ~*B*" is true. Arrow gets
this wrong. In column (vi), eliminating line 1 leaves open the
possibility that
"*A* → ~*B*" is false.

The same argument renders compelling the thought that if we eliminate
*just* *A*&~*B*, nothing stronger, i.e., we
don't eliminate *A*, then we have sufficient reason to
conclude that if *A*, *B*.
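All three inferences are truth-functionally valid, and can be verified by truth table. A small Python sketch of the check:

```python
from itertools import product

def hook(p, q):
    """The truth-functional conditional p ⊃ q, i.e. ~p or q."""
    return (not p) or q

def entails(premise, conclusion):
    # Truth-functional entailment: no row makes the premise true
    # and the conclusion false.
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if premise(a, b))

# Eliminating line 4 only (learning "A or B") licenses "if ~A, B":
assert entails(lambda a, b: a or b, lambda a, b: hook(not a, b))
# Eliminating line 1 only (learning "~(A & B)") licenses "if A, ~B":
assert entails(lambda a, b: not (a and b), lambda a, b: hook(a, not b))
# Eliminating just A&~B (learning "~(A & ~B)") licenses "if A, B":
assert entails(lambda a, b: not (a and not b), lambda a, b: hook(a, b))
```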

Here is a second argument in favour of Hook, in the style of Natural
Deduction. The rule of Conditional Proof (CP) says that if *Z* follows
from premises *X* and *Y*, then "If *Y*,
*Z*" follows from premise *X*. Now the three premises
~(*A*&*B*), *A* and *B* entail a
contradiction. So, by Reductio Ad Absurdum, from
~(*A*&*B*) and *A*, we can conclude
~*B*. So by CP, ~(*A*&*B*) entails "If
*A*, ~*B*". Substitute "~*C*" for *B*,
and we have a proof of "If *A*, then ~~*C*" from
"~(*A*&~*C*)". And provided we also accept Double
Negation Elimination, we can derive "If *A*, then *C*"
from "~(*A*&~*C*)".

Conditional Proof seems sound: "From *X* and *Y*, it
follows that *Z*. So from *X* it follows that if
*Y*, *Z*". Yet *for no reading of "if" which is
stronger than the truth-functional reading is CP valid* -- at
least this is so if we treat "&" and "~" in the classical way and
accept the validity of the inference: (I)
~(*A*&~*B*); *A*; therefore
*B*. Suppose CP is valid for some interpretation of "If
*A*, *B*". Apply CP to (I), and we get
~(*A*&~*B*); therefore if *A*, *B*,
i.e., *A* ⊃ *B* entails if
*A*, *B*.

Hook might respond as follows. How do we test our intuitions about
the validity of an inference? The direct way is to imagine that we
know for sure that the premise is true, and to consider what we would
then think about the conclusion. Now when we know for sure that
~*A*, we have no use for thoughts beginning "If *A*,
...". When you know for sure that Harry didn't do it, you
don't go in for "If Harry did it ..." thoughts or remarks. In
this circumstance conditionals have no role to play, and we have no
practice in assessing them. The direct intuitive test is, therefore,
silent on whether "If *A*, *B*" follows from
~*A*. If our smoothest, simplest, generally satisfactory
theory has the consequence that it does follow, perhaps we should
learn to live with that consequence.

There may, of course, be further consequences of this feature of Hook's theory which jar with intuition. That needs investigating. But, Hook may add, even if we come to the conclusion that "⊃" does not match perfectly our natural-language "if", it comes close, and it has the virtues of simplicity and clarity. We have seen that rival theories also have counterintuitive consequences. Natural language is a fluid affair, and we cannot expect our theories to achieve better than approximate fit. Perhaps, in the interests of precision and clarity, in serious reasoning we should replace the elusive "if" with its neat, close relative, ⊃.

This was no doubt Frege's attitude. Frege's primary concern
was to construct a system of logic, formulated in an idealized
language, which was adequate for mathematical reasoning. If
"*A* ⊃ *B*" doesn't
translate perfectly our natural-language "If *A*, *B*",
but plays its intended role, so much the worse for natural
language.

For the purpose of doing mathematics, Frege's judgement was probably correct. The main defects of ⊃ don't show up in mathematics. There are some peculiarities, but as long as we are aware of them, they can be lived with. And arguably, the gain in simplicity and clarity more than offsets the oddities.

The oddities are harder to tolerate when we consider conditional
judgements about empirical matters. The difference is this: in
thinking about the empirical world, we often accept and reject
propositions with degrees of confidence less than certainty. "I
think, but am not sure, that *A*" plays no central role in
mathematical thinking. We can, perhaps, ignore as unimportant the use
of indicative conditionals in circumstances in which we are
*certain* that the antecedent is false. But we cannot ignore our
use of conditionals whose antecedent we think is likely to be
false. We use them often, accepting some, rejecting others. "I think
I won't need to get in touch, but if I do, I shall need a phone
number", you say as your partner is about to go away; not "If I do
I'll manage by telepathy". "I think John spoke to Mary; if he
didn't he wrote to her"; not "If he didn't he shot
her". Hook's theory has the unhappy consequence that *all*
conditionals with unlikely antecedents are likely to be true. To
think it likely that ~*A* is to think it likely that a
sufficient condition for the truth of "*A* ⊃ *B*" obtains. Take someone who
thinks that the Republicans won't win the election
(~*R*), and who rejects the thought that if they do win, they
will double income tax (*D*). According to Hook, this person
has grossly inconsistent opinions. For if she thinks it's likely
that ~*R*, she must think it likely that at least one of the
propositions, {~*R*, *D*} is true. But that is just to
think it likely that *R* ⊃ *D*. (Put the other way round, to
reject *R* ⊃ *D* is to accept
*R*&~*D*; for this is the only case in which
*R* ⊃ *D* is false. How can
someone accept *R*&~*D* yet reject *R*?) Not
only does Hook's theory fit badly the patterns of thought of
competent, intelligent people. It cannot be claimed that we would be
better off with ⊃. On the contrary, we would
be intellectually disabled: we would not have the power to
discriminate between believable and unbelievable conditionals whose
antecedent we think is likely to be false.
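The arithmetic behind this charge of inconsistency can be made explicit. In the sketch below the particular percentages are assumed for illustration, though the pattern (high confidence in ~*R*, low conditional confidence in *D*) is the one described above:

```python
# Assumed figures: she is 90% sure the Republicans won't win, and only
# 10% confident that they'd double income tax if they did win.
p_not_R = 0.90
p_D_given_R = 0.10
p_R = 1 - p_not_R

# Hook's "R ⊃ D" is true iff ~R or (R & D), so its probability is:
p_hook = p_not_R + p_R * p_D_given_R
assert abs(p_hook - 0.91) < 1e-9   # she "must" rate R ⊃ D at 91%
assert p_hook >= p_not_R           # p(R ⊃ D) can never fall below p(~R)
```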

Arrow does not have this problem. Her theory is designed to avoid it,
by allowing that
"*A* → *B*" may be
false when *A* is false.

The other paradox of material implication is that according to Hook
all conditionals with true consequents are true: from *B* it
follows that
*A* ⊃ *B*. This is perhaps
less obviously unacceptable: if I'm sure that *B*, and
treat *A* as an epistemic possibility, I must be sure that if
*A*, *B*. Again the problem becomes vivid when we
consider the case when I'm only nearly sure, but not quite sure,
that *B*. I think *B* *may* be false, and will be
false if certain, in my view unlikely, circumstances obtain. For
example, I think Sue is giving a lecture right now. I don't
think that if she was seriously injured on her way to work, she is
giving a lecture right now. I reject that conditional. But on
Hook's account, the conditional is false only if the consequent
is false. I think the consequent is true: I think a sufficient
condition for the truth of the conditional obtains.

Another example, from David Lewis (1976, p. 143): "You won't eat those and live", I say of some wholesome and delicious mushrooms -- knowing that you will now leave them alone, deferring to my expertise. I told no lie -- for indeed you don't eat them -- but of course I misled you.

Grice drew attention, then, to situations in which a person is
*justified in believing* a proposition, which would nevertheless
be an unreasonable thing for the person to *say*, in normal
circumstances. His lesson was salutary and important. He is right, I
think, about disjunctions and negated conjunctions. Believing that
John is in the pub, I can't consistently *disbelieve*
"He's either in the pub or the library"; if I have any epistemic
attitude to this proposition, it should be one of belief, however
inappropriate for me to assert it. Similarly for "You won't eat
those and live" when I know you won't eat them. But it is
implausible that the difficulties with the truth-functional
conditional can be explained away in terms of what is an
inappropriate conversational remark. They arise at the level of
belief. Thinking that John is in the pub, I may without irrationality
disbelieve "If he's not in the pub he's in the
library". Thinking you won't eat the mushrooms, I may without
irrationality reject "If you eat them you will die". As facts about
the norms to which people defer, these claims can be tested. A good
enough test is to take a co-operative person, who understands that
you are merely interested in her opinions about the propositions you
put to her, as opposed to what would be a reasonable remark to make,
and note which conditionals she assents to. Are we really to brand as
illogical someone who dissents from both "The Republicans will win"
and "If the Republicans win, income tax will double"?

The Gricean phenomenon is a real one. On anyone's account of conditionals, there will be circumstances when a conditional is justifiably believed, but is liable to mislead if stated. For instance, I believe that the match will be cancelled, because all the players have ’flu. I believe that whether or not it rains, the match will be cancelled: if it rains, the match will be cancelled, and if it doesn't rain, the match will be cancelled. Someone asks me whether the match will go ahead. I say, "If it rains, the match will be cancelled". I say something I believe, but I mislead my audience -- why should I say that, when I think it will be cancelled whether or not it rains? This does not demonstrate that Hook is correct. Although I believe that the match will be cancelled, I don't believe that if all the players make a very speedy recovery, the match will be cancelled.

Another example, due to Gibbard (1981, pp. 235-6): of a glass that had been held a foot above the floor, you say (having left the scene) "If it broke if it was dropped, it was fragile". Intuitively this seems reasonable. But by Hook's lights, if the glass was not dropped, and was not fragile, the conditional has a true (conditional) antecedent and false consequent, and is hence false.

Grice's strategy was to explain why we don't assert certain conditionals which (by Hook's lights) we have reason to believe true. In the above two cases, the problem is reversed: there are compounds of conditionals which we confidently assert and accept which, by Hook's lights, we do not have reason to believe true.

The above examples are not a problem for Arrow. But other cases of embedded conditionals count in the opposite direction. Here are two sentence forms which are, intuitively, equivalent:

(i) If (*A*&*B*), *C*.

(ii) If *A*, then if *B*, *C*.

(Following Vann McGee (1985) I'll call the principle that (i) and (ii) are equivalent the Import-Export Principle, or "Import-Export" for short.) Try any example: "If Mary comes then if John doesn't have to leave early we will play Bridge"; "If Mary comes and John doesn't have to leave early we will play Bridge". "If they were outside and it rained, they got wet"; "If they were outside, then if it rained, they got wet".

For Hook, Import-Export holds. (Exercise: do a truth table, or construct a proof.) Gibbard (1981, pp. 234-5) has proved that for no conditional with truth conditions stronger than ⊃ does Import-Export hold. Assume Import-Export holds for some reading of "if". The key to the proof is to consider the formula

(1) If (*A* ⊃ *B*) then (if *A*, *B*).

By Import-Export, (1) is equivalent to

(2) If ((*A* ⊃ *B*) & *A*) then *B*.

The antecedent of (2) entails its consequent. So (2) is a logical truth. So by Import-Export, (1) is a logical truth. On any reading of "if", a conditional is false when its antecedent is true and its consequent false; so (1) entails

(3) (*A* ⊃ *B*) ⊃ (if *A*, *B*).

So (3) is a logical truth. That is, there is no possible situation in which its antecedent (*A* ⊃ *B*) is true and its consequent (if *A*, *B*) false.
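Both of Hook's commitments here, that Import-Export holds for the material conditional and that (2) is a logical truth for it, can be checked mechanically; a Python sketch:

```python
from itertools import product

def hook(p, q):
    """The material conditional p ⊃ q."""
    return (not p) or q

# Import-Export for ⊃: "If (A & B), C" agrees with "If A, then if B, C"
# on every row of the truth table.
for a, b, c in product([True, False], repeat=3):
    assert hook(a and b, c) == hook(a, hook(b, c))

# "If ((A ⊃ B) & A) then B" is a logical truth for ⊃:
assert all(hook(hook(a, b) and a, b)
           for a, b in product([True, False], repeat=2))
```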

Neither kind of truth condition has proved entirely satisfactory. We still have to consider Jackson's defence of Hook, and Stalnaker's response to the problem about non-truth-functional truth conditions raised in §2.2. These are deferred to §4, because they depend on the considerations developed in §3.

If two people are arguing "If *p*, will *q*?" and are both in doubt as to *p*, they are adding *p* hypothetically to their stock of knowledge, and arguing on that basis about *q*; ... they are fixing their degrees of belief in *q* given *p* (1929, p. 247).

A suppositional theory was advanced by J. L. Mackie (1973, chapter 4). Peter Gärdenfors's work (1986, 1988) could also come under this heading. But the most fruitful development of the idea (in my view) takes seriously the last part of the above quote from Ramsey, and emphasises the fact that conditionals can be accepted with different degrees of closeness to certainty. Ernest Adams (1965, 1966, 1975) has developed such a theory.

When we are neither certain that *B* nor certain that
~*B*, there remains a range of epistemic attitudes we may have
to *B*: we may be nearly certain that *B*, think
*B* more likely than not, etc. Similarly, we may be certain,
nearly certain, etc. that *B* given the supposition that
*A*. Make the idealizing assumption that degrees of closeness
to certainty can be quantified: 100% certain, 90% certain, etc.; and
we can turn to probability theory for what Ramsey called the "logic
of partial belief". There we find a well-established, indispensable
concept, "the conditional probability of *B* given
*A*". It is to this notion that Ramsey refers by the phrase
"degrees of belief in *q* given *p*".

It is, at first sight, rather curious that the best-developed and most illuminating suppositional theory should place emphasis on uncertain conditional judgements. If we knew the truth conditions of conditionals, we would handle uncertainty about conditionals in terms of a general theory of what it is to be uncertain of the truth of a proposition. But there is no consensus about the truth conditions of conditionals. It happens that when we turn to the theory of uncertain judgements, we find a concept of conditionality in use. It is worth seeing what we can learn from it.

The notion of conditional probability entered probability theory at an early stage because it was needed to compute the probability of a conjunction. Thomas Bayes (1763) wrote:

The probability that two ... events will both happen is ... the probability of the first [multiplied by] the probability of the second *on the supposition that* the first happens [my emphasis].

A simple example: a ball is picked at random. 70% of the balls are red (so the probability that a red ball is picked is 70%). 60% of the red balls have a black spot (so the probability that a ball with a black spot is picked, on the supposition that a red ball is picked, is 60%). The probability that a red ball with a black spot is picked is 60% of 70%, i.e. 42%.

Ramsey, arguing that "degrees of belief" should conform to probability theory, stated the same "fundamental law of partial belief":

Degree of belief in (*p* and *q*) = degree of belief in *p* × degree of belief in *q* given *p*. (1926, p. 77)

For example, you are about 50% certain that the test will be on conditionals, and about 80% certain that you will pass, on the supposition that it is on conditionals. So you are about 40% certain that the test will be on conditionals and you will pass.

Accepting Ramsey's suggestion that "if", "given that", "on the
supposition that" come to the same thing, writing
"**p**(*B*)" for "degree of belief in
*B*", and
"**p**_{A}(*B*)" for "degree of
belief in *B* given *A*", and rearranging the basic
law, we have:

**p**(*B* if *A*) = **p**_{A}(*B*) = **p**(*A*&*B*)/**p**(*A*), provided **p**(*A*) is not 0.
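The fundamental law and its rearrangement can be illustrated with Bayes's ball example; a minimal Python sketch:

```python
# Bayes's ball example: 70% of the balls are red; 60% of the red balls
# have a black spot.
p_red = 0.70
p_spot_given_red = 0.60

# The fundamental law: p(A & B) = p(A) * p(B given A).
p_red_and_spot = p_red * p_spot_given_red
assert abs(p_red_and_spot - 0.42) < 1e-9

# Rearranged: p_A(B) = p(A & B) / p(A), provided p(A) is not 0.
def conditional_probability(p_a_and_b, p_a):
    if p_a == 0:
        raise ValueError("undefined when p(A) = 0")
    return p_a_and_b / p_a

assert abs(conditional_probability(p_red_and_spot, p_red) - 0.60) < 1e-9
```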

Call a set of mutually exclusive and jointly exhaustive propositions
a partition. The lines of a truth table constitute a
partition. One's degrees of belief in the members of a
partition, idealized as precise, should sum to 100%. That is all
there is to the claim that degrees of belief should have the
structure of probabilities. Consider a partition of the form
{*A*&*B*, *A*&~*B*,
~*A*}. Suppose someone X thinks it 50% likely that ~*A*
(hence 50% likely that *A*), 40% likely that
*A*&*B*, and 10% likely that
*A*&~*B*. Think of this distribution as displayed
geometrically, as follows. Draw a long narrow horizontal
rectangle. Divide it in half by a vertical line. Write "~*A*"
in the right-hand half. Divide the left-hand half with another
vertical line, in the ratio 4:1, with the larger part on the
left. Write "*A*&*B*" and
"*A*&~*B*" in the larger and smaller cells
respectively.

| *A*&*B* (40%) | *A*&~*B* (10%) | ~*A* (50%) |

(Note that as {*A*&*B*,
*A*&~*B*, ~*A*} and {*A*,
~*A*} are both partitions, it follows that
**p**(*A*) =
**p**(*A*&*B*) +
**p**(*A*&~*B*).)

How does X evaluate "If *A*, *B*"? She assumes that
*A*, that is, hypothetically eliminates ~*A*. In the
part of the partition that remains, in which *A* is true,
*B* is four times as likely as ~*B*; that is, on the
assumption that *A*, it is four to one that *B*:
**p**(*B* if *A*) is 80%,
**p**(~*B* if *A*) is 20%. Equivalently,
as *A*&*B* is four times as likely as
*A*&~*B*, **p**(*B* if
*A*) is 4/5, or 80%. Equivalently,
**p**(*A*&*B*) is 4/5 of
**p**(*A*). In non-numerical terms: you believe
that if *A*, *B* to the extent that you think that
*A*&*B* is nearly as likely as *A*; or, to
the extent that you think *A*&*B* is much more
likely than *A*&~*B*. If you think
*A*&*B* is as likely as *A*, you are certain
that if *A*, *B*. In this case, your
**p**(*A*&~*B*) = 0.

Go back to the truth table. You are wondering whether if *A*,
*B*. Assume *A*. That is, ignore lines 3 and 4 in which
*A* is false. Ask yourself about the relative probabilities of
lines 1 and 2. Suppose you think line 1 is about 100 times more
likely than line 2. Then you think it is about 100 to 1 that *B* if
*A*.
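X's evaluation can be reproduced numerically; the figures are those of the example above:

```python
# X's degrees of belief over the partition {A&B, A&~B, ~A}.
p = {"A&B": 0.40, "A&~B": 0.10, "~A": 0.50}
assert abs(sum(p.values()) - 1.0) < 1e-9   # partition probabilities sum to 100%

# Supposing A: hypothetically eliminate ~A and renormalise what remains.
p_A = p["A&B"] + p["A&~B"]
p_B_if_A = p["A&B"] / p_A
assert abs(p_B_if_A - 0.80) < 1e-9         # p(B if A) = 80%
assert abs((1 - p_B_if_A) - 0.20) < 1e-9   # p(~B if A) = 20%
```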

Note: these thought-experiments can only be performed when
**p**(*A*) is not 0. On this approach, indicative
conditionals only have a role when the thinker takes *A* to be
an epistemic possibility. If you take yourself to know for sure that
Ann is in Paris, you don't go in for "If Ann is not in Paris
..." thoughts (though of course you can think "If Ann had not been in
Paris ..."). In conversation, you can pretend to take something as an
epistemic possibility, temporarily, to comply with the epistemic
state of the hearer. When playing the sceptic, there are not many
limits on what you *can*, at a pinch, take as an epistemic
possibility -- as not already ruled out. But there are some limits,
as Descartes found. Is there a conditional thought that begins "If I
don't exist now ..."?

On Hook's account, to be close to certain that if *A*,
*B* is to give a high value to
**p**(*A* ⊃ *B*). How does
**p**(*A* ⊃ *B*)
compare with **p**_{A}(*B*)? In
two special cases, they are equal: first, if
**p**(*A*&~*B*) = 0 (and
**p**(*A*) is not 0),
**p**(*A* ⊃ *B*)
= **p**_{A}(*B*) = 1
(i.e. 100%). Second, if **p**(*A*) = 100%,
**p**(*A* ⊃ *B*)
= **p**_{A}(*B*) =
**p**(*B*). In all other cases,
**p**(*A* ⊃ *B*)
is greater than
**p**_{A}(*B*). To see this we
need to compare **p**(*A*&~*B*) and
**p**(*A*&~*B*)/**p**(*A*).
Consider again the partition {*A*&*B*,
*A*&~*B*,
~*A*}. **p**(*A*&~*B*) is a
smaller proportion of the whole space than it is of the
*A*-part -- the part of the space in which *A* is true
-- except in the special cases in which
**p**(*A*&~*B*) = 0, or
**p**(~*A*) = 0. So, except in these special
cases, **p**_{A}(~*B*) is
greater than **p**(*A*&~*B*). Now
**p**(*A* ⊃ *B*)
= **p**(~(*A*&~*B*)); and
**p**(*A*&~*B*) +
**p**(~(*A*&~*B*)) = 1. Also
**p**_{A}(*B*) +
**p**_{A}(~*B*) = 1. So from
**p**_{A}(~*B*) >
**p**(*A*&~*B*) it follows that
**p**(*A* ⊃ *B*)
> **p**_{A}(*B*).

Hook and the suppositional theorist (call her Supp) come
spectacularly apart when **p**(~*A*) is high and
**p**(*A*&*B*) is much smaller than
**p**(*A*&~*B*). Let
**p**(~*A*) = 90%,
**p**(*A*&*B*) = 1%,
**p**(*A*&~*B*) =
9%. Then **p**_{A}(*B*) = 10%, while
**p**(*A* ⊃ *B*)
= 91%. For instance, I am 90% certain that Sue won't be offered
the job (~*O*), and think it only 10% likely that she will
decline the offer (*D*) if it is made, that is
**p**_{O}(*D*) = 10%.
**p**(*O* ⊃ *D*)
= **p**(~*O* or (*O*&*D*)) =
91%.
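The divergence can be computed directly from the job-offer figures; a minimal sketch:

```python
# The job-offer figures: p(~O) = 90%, p(O & D) = 1%, p(O & ~D) = 9%.
p_not_O, p_O_and_D, p_O_and_not_D = 0.90, 0.01, 0.09
p_O = p_O_and_D + p_O_and_not_D

p_supp = p_O_and_D / p_O            # Supp: p(D given O)
p_hook = p_not_O + p_O_and_D        # Hook: p(~O or (O & D)), i.e. p(O ⊃ D)

assert abs(p_supp - 0.10) < 1e-9
assert abs(p_hook - 0.91) < 1e-9
assert p_hook > p_supp              # strict, outside the two special cases
```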

Now let us compare Hook, Arrow, and Supp with respect to two questions raised in §2.

- Question 1. You are certain that ~(*A*&~*B*), but not certain that ~*A*. Should you be certain that if *A*, *B*?

  Hook: yes. Because "*A* ⊃ *B*" is true whenever *A*&~*B* is false.

  Supp: yes. Because *A*&*B* is as likely as *A*: **p**_{A}(*B*) = 1.

  Arrow: no, not necessarily. For "*A* → *B*" may be false when *A*&~*B* is false. With just the information that *A*&~*B* is false, I should not be certain that if *A*, *B*.

- Question 2. If you think it likely that ~*A*, might you still think it unlikely that if *A*, *B*?

  Hook: no. "*A* ⊃ *B*" is true in all the possible situations in which ~*A* is true. If I think it likely that ~*A*, I think it likely that a sufficient condition for the truth of "*A* ⊃ *B*" obtains. I must, therefore, think it likely that if *A*, *B*.

  Supp: yes. We had an example above. That most of my probability goes to ~*A* leaves open the question whether or not *A*&*B* is more probable than *A*&~*B*. If **p**(*A*&~*B*) is greater than **p**(*A*&*B*), I think it's unlikely that if *A*, *B*. That's compatible with thinking it likely that ~*A*.

  Arrow: yes. "If *A*, *B*" may be false when *A* is false. And I might well think it likely that that possibility obtains, i.e. unlikely that "If *A*, *B*" is true.

To make the point in a slightly different way, let me adopt the
following as an expository, heuristic device, a harmless
fiction. Imagine a partition as carved into a large finite number of
equally-probable chunks, such that the propositions with which we are
concerned are true in an exact number of them. The probability of any
proposition is the proportion of chunks in which it is true. The
probability of *B* on the supposition that *A* is the
proportion *of the A-chunks* (the chunks in which *A* is true) which are also *B*-chunks.

Although Supp and Hook give the same answer to Question 1, their
reasons are different. Supp answers "yes" *not* because a
proposition, *A* ⊃ *B*, is true whenever
*A*&~*B* is false; but because *B* is true
in all the "worlds" which matter for the assessment of "If
*A*, *B*": the *A*-worlds. Although Supp and
Arrow give the same answer to Question 2, their reasons are
different. Supp answers "yes", not because a proposition
*A* → *B* may be false when *A* is false; but
because the fact that most worlds are ~*A*-worlds is
irrelevant to whether most *of the A-worlds* are *B*-worlds.

By a different argument, David Lewis (1976) was the first to prove
this remarkable result: there is no proposition *A*∗*B*
such that, in all probability distributions,
**p**(*A*∗*B*) =
**p**_{A}(*B*). A conditional
probability does not measure the probability of the truth of any
proposition. If a conditional has truth conditions, one should
believe it to the extent that one thinks it is probably true. If Supp
is correct, that one believes "If *A*, *B*" to the
extent that one thinks it probable that *B* on the supposition
that *A*, then this is not equivalent to believing some
proposition to be probably true. Hence, if Supp is right,
conditionals shouldn't be construed as having truth conditions
at all. A conditional judgement involves two propositions, which play
different roles. One is the content of a supposition. The other is
the content of a judgement made under that supposition. They do not
combine to yield a single proposition which is judged to be likely to
be true just when the second is judged likely to be true on the
supposition of the first.

Note: ways of restoring truth conditions, compatible with Supp's thesis, are considered in §4.

First consider classically valid (that is, necessarily truth-preserving) arguments which don't involve conditionals. We use them in arguing from contingent premises about which we are often less than completely certain. The question arises: how certain can we be of the conclusion of the argument, given that we think, but are not sure, that the premises are true? Call the improbability of a statement one minus its probability. Adams showed this: if (and only if) an argument is valid, then in no probability distribution does the improbability of its conclusion exceed the sum of the improbabilities of its premises. Call this the Probability Preservation Principle (PPP).

The proof of PPP rests on the Partition Principle -- that the
probabilities of the members of a partition sum to 100% -- nothing
else, beyond the fact that if *A* entails *B*,
**p**(*A*&~*B*) = 0. Here are three
consequences:

- if *A* entails *B*, **p**(*A*) ≤ **p**(*B*)
- **p**(*A* or *B*) = **p**(*A*) + **p**(*B*) − **p**(*A*&*B*) ≤ **p**(*A*) + **p**(*B*)
- For all *n*, **p**(*A*_{1} or ... or *A*_{n}) ≤ **p**(*A*_{1}) + ... + **p**(*A*_{n})

The result is useful to know: if you have two premises of which you are at least 99% certain, they entitle you to be at least 98% certain of a conclusion validly drawn from them. Of course, if you have 100 premises each at least 99% certain, your conclusion may have zero probability. That is the lesson of the "Lottery Paradox". Still, Adams's result vindicates deductive reasoning from uncertain premises, provided that they are not too uncertain, and there are not too many of them.
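Adams's result can be spot-checked numerically. Below is a minimal sketch (the sample argument, the random search, and all variable names are my own choices): for the classically valid, conditional-free argument "A; ~A or B; so B", no sampled distribution lets the conclusion's improbability exceed the premises' summed improbabilities.

```python
# Numerical spot-check of the Probability Preservation Principle (PPP) for a
# classically valid argument containing no conditionals: A; ~A or B; so B.
# (Illustrative sketch -- the argument and the random search are my choices.)
import itertools
import random

WORLDS = list(itertools.product([True, False], repeat=2))  # (A, B) truth values

def prob(event, p):
    """Probability of an event (a predicate of A, B) under distribution p."""
    return sum(p[w] for w in WORLDS if event(*w))

random.seed(0)
for _ in range(1000):
    weights = [random.random() for _ in WORLDS]
    p = {w: x / sum(weights) for w, x in zip(WORLDS, weights)}
    improb_premises = ((1 - prob(lambda a, b: a, p)) +
                       (1 - prob(lambda a, b: (not a) or b, p)))
    improb_conclusion = 1 - prob(lambda a, b: b, p)
    # PPP: the conclusion's improbability never exceeds the premises' total.
    assert improb_conclusion <= improb_premises + 1e-9
```

In particular, two premises each at least 99% probable cap the conclusion's improbability at 2%, matching the text's example.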

So far, we have a very useful consequence of the classical notion of
validity. Now Adams extends this consequence to arguments involving
conditionals. Take a language with "and", "or", "not" and "if" -- but
with "if" occurring only as the main connective in a sentence. (We
put aside compounds of conditionals.) Take any argument formulated in
this language. Consider any probability function over the sentences
of this argument which assigns non-zero probability to the
antecedents of all conditionals -- that is, any assignment of numbers
to the non-conditional sentences which conforms to the Partition
Principle, and to the conditional sentences which conforms to
Supp's thesis: **p**(*B* if *A*) =
**p**_{A}(*B*) =
**p**(*A*&*B*)/**p**(*A*).
Let the improbability of the conditional "If *A*, *B*"
be 1 − **p**_{A}(*B*). *Define* a
valid argument as one such that there is no probability function in
which the improbability of the conclusion exceeds the sum of the
improbabilities of the premises. And a nice logic emerges, which is
now well known. It is the same as Stalnaker's logic over this
domain (see §4.1). There are rules of proof, a decision
procedure, consistency and completeness can be proved. See Adams
(1998 and 1975).

I shall write the conditional which satisfies Adams's criterion
of validity
"*A* ⇒ *B*". We have already
seen that in all distributions,
**p**_{A}(*B*) ≤
**p**(*A* ⊃ *B*). Therefore,
*A* ⇒ *B* entails
*A* ⊃ *B*: the former cannot
be probable and the latter improbable. Call a non-conditional
sentence a factual sentence. If an argument has a factual conclusion,
and is classically valid with the conditional interpreted as ⊃,
it is valid with the conditional interpreted as the stronger
⇒. The following patterns of inference are
therefore valid:

- *A*; *A* ⇒ *B*; so *B* (modus ponens)
- *A* ⇒ *B*; ~*B*; so ~*A* (modus tollens)
- *A* or *B*; *A* ⇒ *C*; *B* ⇒ *C*; so *C*.

We cannot consistently have their premises highly probable and their conclusion highly improbable.

Arguments with conditional conclusions, however, may be valid when
the conditional is interpreted as the weaker
*A* ⊃ *B*, but invalid when it
is interpreted as the stronger
*A* ⇒ *B*. Here are some
examples.

*B*; so *A* ⇒ *B*. I can consistently be close to certain that Sue is lecturing right now, while thinking it highly unlikely that if she had a heart attack on her way to work, she is lecturing just now.

~*A*; so *A* ⇒ *B*. You can consistently be close to certain that the Republicans won't win, while thinking it highly unlikely that if they win they will double income tax.

~(*A*&*B*); so *A* ⇒ ~*B*. I can consistently be close to certain that it's not the case that I will be hit by a bomb and injured today, while thinking it highly unlikely that if I am hit by a bomb, I won't be injured.

*A* or *B*; so ~*A* ⇒ *B*. As I think it is very likely to rain tomorrow, I think it's very likely to be true that it will rain or snow tomorrow. But I think it's very unlikely that if it doesn't rain, it will snow.

*A* ⇒ *B*; so (*C*&*A*) ⇒ *B* (strengthening of the antecedent). I can think it's highly likely that if you strike the match, it will light; but highly unlikely that if you dip it in water and strike it, it will light.
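The match example can be given explicit numbers. Below is a sketch (the joint distribution is my own illustrative choice) of a distribution on which the premise of strengthening is highly probable while its conclusion has probability zero:

```python
# A joint distribution on which strengthening the antecedent fails:
# A = "you strike the match", C = "you dip it in water", B = "it lights".
# (The numbers are my own illustrative choices.)
joint = {
    ("A", "~C", "B"): 0.090,    # struck while dry: lights
    ("A", "~C", "~B"): 0.005,   # struck while dry: fails anyway
    ("A", "C", "B"): 0.000,     # struck after dipping: never lights
    ("A", "C", "~B"): 0.005,    # struck after dipping: fails
    ("~A", "~C", "~B"): 0.900,  # not struck at all
}

def p(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(v for w, v in joint.items() if pred(w))

# p(B | A) is high ...
p_B_given_A = p(lambda w: w[0] == "A" and w[2] == "B") / p(lambda w: w[0] == "A")
# ... but p(B | C & A) is zero.
p_B_given_CA = (p(lambda w: w[0] == "A" and w[1] == "C" and w[2] == "B")
                / p(lambda w: w[0] == "A" and w[1] == "C"))
```

Here p_B_given_A comes out at 0.9 while p_B_given_CA is exactly 0.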

Strengthening is a special case of transitivity, in which the missing
premise is a tautology: if *C*&*A* then *A*;
if *A*, *B*; so if *C*&*A*,
*B*. So transitivity also fails:

*A* ⇒ *B*; *B* ⇒ *C*; so *A* ⇒ *C*. Adams gave this example (1966): I can think it highly likely that if Jones is elected, Brown will resign immediately afterwards; I can also think it highly likely that if Brown dies before the election, Jones will be elected; but I do not think it at all likely that if Brown dies before the election, Brown will resign immediately after the election!

We saw in §2.2 that Conditional Proof (CP) is invalid for any
conditional stronger than ⊃. It is invalid
in Adams's logic. For instance, "~(*A*&*B*);
*A*; so ~*B*" is valid. It contains no
*A*; so ~*B*" is valid. It contains no
conditionals. Any necessarily truth-preserving argument satisfies
PPP. If I'm close to certain that I won't be hit by a bomb
and injured, *and close to certain that I will be hit by a
bomb*, then I must be close to certain that I won't be
injured. But, as we saw, "~(*A*&*B*); so
*A* ⇒ ~*B*" is invalid. Yet
we can get the latter from the former by CP.

Why does CP fail on this conception of conditionals? After all,
Supp's idea is to treat the antecedent of a conditional as an
*assumption*. What is the difference between the roles of a
premise, and of the antecedent of a conditional in the
conclusion?

The antecedent of the conditional is indeed treated as an assumption. On this conception of validity, the premises are not treated as assumptions. Indeed, it is not immediately clear what it would be to treat a conditional, construed according to Supp, as an assumption: to assume something, as ordinarily understood, is to assume that it is true; and conditionals are not being construed as ordinary statements of fact. But we could approximate the idea of taking the premises as assumptions: so doing is, in most contexts, tantamount to treating them, hypothetically, as certainties. So treating the premises would be to require of a valid argument that it preserve certainty: that there must be no probability distributions in which all the premises (conditional or otherwise) are assigned 1 and the conclusion is assigned less than 1. Call this the certainty-preservation principle (CPP).

The conception of validity we have been using (PPP) takes as central the fact that premises are accepted with degrees of confidence less than certainty. Now, anything which satisfies PPP satisfies CPP. And for arguments involving only factual propositions, the converse is also true: the same class of arguments necessarily preserves truth, necessarily preserves certainty and necessarily preserves probability in the sense of PPP. But arguments involving conditionals can satisfy CPP without satisfying PPP. The invalid argument forms above do preserve certainty: if you assign probability 1 to the premises, then you are constrained to assign probability 1 to the conclusion (in all probability distributions in which the antecedent of any conditional gets non-zero probability). But they do not preserve high probability. They do not satisfy PPP. If at least one premise falls short of certainty by however small an amount, the conclusion can plummet to zero.
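The contrast between certainty-preservation and probability-preservation can be checked with explicit numbers. A minimal sketch (both toy distributions are my own choices) for the inference from "A or B" to "if ~A, B":

```python
# "A or B; so if ~A, B" preserves certainty but not high probability.
# Worlds are (A-value, B-value) labels; both distributions are my choices.

def p_or(j):             # p(A or B)
    return sum(v for (a, b), v in j.items() if a == "A" or b == "B")

def p_b_given_not_a(j):  # p(B | ~A)
    not_a = sum(v for (a, b), v in j.items() if a == "~A")
    return sum(v for (a, b), v in j.items() if a == "~A" and b == "B") / not_a

# Case 1: the premise is certain; every open ~A-possibility is a B-world,
# so the conclusion is certain too.
j1 = {("A", "~B"): 0.96875, ("~A", "B"): 0.03125}
premise1, conclusion1 = p_or(j1), p_b_given_not_a(j1)   # 1.0 and 1.0

# Case 2: the premise falls just short of certainty, and the conclusion
# plummets to zero.
j2 = {("A", "~B"): 0.96875, ("~A", "~B"): 0.03125}
premise2, conclusion2 = p_or(j2), p_b_given_not_a(j2)   # 0.96875 and 0.0
```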

The logico-mathematical fact behind this is the difference in logical
powers between "All" and "Almost all". If all *A*-worlds are
*B*-worlds (and there are some
*C*&*A*-worlds) then all
*C*&*A*-worlds are *B*-worlds. But we can
have: almost all *A*-worlds are *B*-worlds but no
*C*&*A*-world is a *B*-world. If all
*A*-worlds are *B*-worlds and all *B*-worlds are
*C*-worlds, then all *A*-worlds are
*C*-worlds. But we can have: all *A*-worlds are
*B*-worlds, almost all *B*-worlds are
*C*-worlds, yet no *A*-world is a *C*-world;
just as we can have, all kiwis are birds, almost all birds fly, but
no kiwi flies.
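The all/almost-all contrast can be rendered directly in sets (a toy model; the population sizes are made up):

```python
# All kiwis are birds; almost all birds fly; yet no kiwi flies.
birds = set(range(1000))
kiwis = set(range(10))        # the first 10 "birds" are kiwis
fliers = birds - kiwis        # every bird except the kiwis flies

assert kiwis <= birds                     # all kiwis are birds
assert len(fliers) / len(birds) == 0.99   # almost all (99%) birds fly
assert kiwis & fliers == set()            # no kiwi flies
```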

Someone might react as follows: "All I want of a valid argument is that it preserve certainty. I'm not bothered if an argument can have premises close to certain and a conclusion far from certain, as long as the conclusion is certain when the premises are certain".

We *could* use the word "valid" in such a way that an argument
is valid provided it preserves certainty. If our interest in logic is
confined to its application to mathematics or other a priori matters,
that is fine. Further, when our arguments do not contain
conditionals, if we have certainty-preservation,
probability-preservation comes free. But if we use conditionals when
arguing about contingent matters, then great caution will be
required. Unless we are 100% certain of the premises, the arguments
above which are invalid on Adams's criterion guarantee nothing
about what you are entitled to think about the conclusion. The line
between 100% certainty and something very close is hard to make out:
it's not clear how you tell which side of it you are on. The
epistemically cautious might admit that they are never, or only very
rarely, 100% certain of contingent conditionals. So it would be
useful to have another category of argument, the "super-valid", which
preserves high probability as well as certainty. Adams has shown us
which arguments (on Supp's reading of "if") are super-valid.

Now that we have found an answer to the question, "How do we decide whether or not we believe a conditional statement?" [Ramsey's and Adams's answer] the problem is to make the transition from belief conditions to truth conditions; ... . The concept of a *possible world* is just what we need to make the transition, since a possible world is the ontological analogue of a stock of hypothetical beliefs. The following ... is a first approximation to the account I shall propose: Consider a possible world in which *A* is true and otherwise differs minimally from the actual world. "If *A*, then *B*" is true (false) just in case *B* is true (false) in that possible world. (1968, pp. 33-4)

If an argument is necessarily truth-preserving, the improbability of its conclusion cannot exceed the sum of the improbabilities of the premises. The latter was the criterion Adams used in constructing his logic. So Stalnaker's logic for conditionals must agree with Adams's over their common domain. And it does. The argument forms we showed to be invalid in Adams's logic (§3.2) are invalid on Stalnaker's semantics. For instance, the following is possible: in the nearest possible world in which you strike the match, it lights; in the nearest world in which you dip the match in water and strike it, it doesn't light. So Strengthening fails. (By "nearest world in which ..." I mean the possible world which is minimally different from the actual world in which ... .)

Conditional Proof fails for Stalnaker's semantics. "*A*
or *B*; ~*A*; so *B*" is of course valid. But
(*) "*A* or *B*, therefore ~*A*>*B*"
is not: it can be true that Ann or Mary cooked the dinner (for Ann
cooked it); yet false that in the nearest world to the actual world
in which Ann did not cook it, Mary cooked it.

Stalnaker (1975) tried to show that although the above argument form
(*) is invalid, it is nevertheless a "reasonable inference" when
"*A* or *B*" is assertable, that is, when the speaker
has ruled out ~*A*&~*B*, but
~*A*&*B* and *A*&~*B* remain open
possibilities. Indicative conditionals, he claims, are used only when
their antecedents are epistemically possible for the speaker (here he
agrees with Supp). Then comes the crucial claim: *worlds which are
epistemically possible for the speaker count as closer to the actual
world than those which are not*. All
~*A*&~*B*-worlds have been eliminated. Not all
~*A*&*B*-worlds have been eliminated. All the
speaker's epistemically possible ~*A*-worlds are
*B*-worlds. So the nearest ~*A*-world is a
*B*-world. "*A*>*B*" is true.

This makes the truth conditions of a conditional, e.g. "If Ann
didn't cook the dinner, Bob cooked it" *dependent on what the
speaker believes*. All that is common to different utterances of
"*A*>*B*" is that they say that a certain
*A*-world is a *B*-world. That is not news: provided
that *A* and *B* are compatible, some *A*-world
is a *B*-world. Which world is being said to be a
*B*-world depends on the speaker's beliefs. With fixed
meanings for *A* and *B*, there is no single
proposition *A*>*B*, but a different one for each
belief state: we might write *A*>_{p}*B*,
where "p" is a probability function indexed to a person and a time.

This enables Stalnaker to avoid the argument against
non-truth-functional truth conditions given in §2.2. The
argument was as follows. There are six incompatible logically
possible combinations of truth values for *A*, *B* and
~*A* → *B*. You start off with no
firm beliefs about which obtains. Now you eliminate
~*A*&~*B*, i.e. establish *A* or
*B*. That leaves five remaining possibilities, including two
in which
"~*A* → *B*" is false. So you
can't be certain that
~*A* → *B*. Stalnaker replies:
you can't, indeed, be certain that the proposition you were
wondering about earlier is true. But in your new epistemic state, you
express a new proposition by
"~*A* → *B*", with different truth
conditions, governed by a new nearness relation, and you know that
that new proposition is true.

Disagreement and change of mind give way to equivocation. Suppose you
and I start off knowing *A* or *B* or *C*. You
then eliminate *C*. You accept "If ~*A*, *B*"
and reject "If ~*A*, *C*". I eliminate *B*. I
accept "If ~*A*, *C*", and reject "If ~*A*,
*B*". I assent to a sentence from which you dissent, and vice
versa. We do not disagree. We express different propositions, with
different truth conditions, governed by our different epistemic
states. Worlds which are near for me are far for you.

Are belief-relative truth conditions better than no truth conditions?
They account for the validity of arguments; but Adams's logic
has its own rationale, without them. They account for sentences with
conditional constituents. But we saw (§2.4) that they sometimes give
counterintuitive results. Are we able to escape Lewis's result
that a conditional probability is not the probability of the truth of
a proposition, by making the proposition dependent on the
believer's epistemic state? Lewis showed that there is no
proposition *A*∗*B* such that in every belief state
**p**(*A*∗*B*) =
**p**_{A}(*B*). He did not rule
out that in every belief state there is some proposition or other,
*A*∗*B*, such that
**p**(*A*∗*B*) =
**p**_{A}(*B*). However, in the
wake of Lewis, Stalnaker himself proved this stronger result, for his
conditional connective: the equation
**p**(*A*>*B*) =
**p**_{A}(*B*) cannot hold for
all propositions in a single belief state. If it holds for *A*
and *B*, we can find two other propositions, *C* and
*D* (truth-functional compounds of *A*, *B* and
*A*>*B*) for which, demonstrably, it does not
hold. (See Stalnaker's letter to van Fraassen published in van
Fraassen (1976, pp. 303-4), Gibbard (1981, pp. 219-20), and Edgington
(1995, pp. 276-8).)
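Lewis's argument can be rehearsed numerically. Below is a sketch (the four-world model and its numbers are mine): if one proposition X had p(X) = **p**_{A}(*B*) in every belief state reachable by conditioning, then conditioning on *B* would force p(X) = 1, conditioning on ~*B* would force p(X) = 0, and the law of total probability would collapse **p**_{A}(*B*) into **p**(*B*).

```python
# Rehearsing Lewis's triviality argument on a four-world model.
# Worlds are (A, B) truth-value pairs; the numbers are illustrative.
p = {(True, True): 0.3, (True, False): 0.1,
     (False, True): 0.2, (False, False): 0.4}

def pr(pred, dist):
    return sum(v for w, v in dist.items() if pred(*w))

def conditioned_on(pred):
    """The belief state reached from p by conditioning on pred."""
    z = pr(pred, p)
    return {w: (v / z if pred(*w) else 0.0) for w, v in p.items()}

def b_given_a(dist):
    return pr(lambda a, b: a and b, dist) / pr(lambda a, b: a, dist)

# If p(X) = p(B|A) held in every belief state, conditioning on B would give
# p(X) = 1, and conditioning on ~B would give p(X) = 0:
after_B = b_given_a(conditioned_on(lambda a, b: b))          # 1.0
after_notB = b_given_a(conditioned_on(lambda a, b: not b))   # 0.0
# The law of total probability would then force p(X) to equal p(B) ...
forced = (after_B * pr(lambda a, b: b, p)
          + after_notB * pr(lambda a, b: not b, p))
# ... but p(B|A) here is 0.75, while p(B) = 0.5: no single X can do the job.
actual = b_given_a(p)
```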

It was Gibbard (1981, pp. 231-4) who showed just how belief-sensitive Stalnaker's truth conditions would be. Later (1984), Stalnaker abandoned the claim that conditionals express belief-relative propositions, writing "It follows that the conditional ... expresses one proposition when it is asserted, and a different one when it is denied" (p. 110).

According to Jackson, in asserting "If *A*, *B*" the
speaker expresses his belief that
*A* ⊃ *B*, and also indicates
that this belief is "robust" with respect to the antecedent
*A*. In Jackson's early work (1979, 1980) "robustness"
was explained thus: the speaker would not abandon his belief that
*A* ⊃ *B* if he were to learn
that *A*. This, it was claimed, amounted to the speaker's
having a high probability for
*A* ⊃ *B* given *A*, i.e.
for (~*A* or *B*) given *A*, which is just to
have a high probability for *B* given *A*. Thus,
assertability goes by conditional probability. Robustness was meant
to ensure that an assertable conditional is fit for modus
ponens. Robustness is not satisfied if you believe
*A* ⊃ *B* solely on the
grounds that ~*A*. Then, if you discover that *A*, you
will abandon your belief in
*A* ⊃ *B* rather than conclude
that *B*.

Jackson came to realise, however, that there are assertable
conditionals which one would not continue to believe if one learned
the antecedent. I say "If Reagan worked for the KGB, I'll never
find out" (Lewis's example (1986, p. 155)). My conditional
probability for consequent given antecedent is high. But if I were to
discover that the antecedent is true, I would abandon the conditional
belief, rather than conclude that I will never find out that the
antecedent is true. So, in Jackson's later work (1987),
robustness with respect to *A* is simply defined as
**p**_{A}(*A* ⊃ *B*) being high, which is trivially
equivalent to **p**_{A}(*B*)
being high. In most cases, though, the earlier explanation will hold
good.
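The claimed equivalence can be confirmed on an arbitrary distribution: given *A*, the material conditional "~*A* or *B*" holds exactly when *B* does. A minimal sketch (the randomly generated distribution is my own device):

```python
# Given A, the material conditional (~A or B) is true exactly when B is,
# so p(A ⊃ B | A) = p(B | A) in any distribution. Random spot-check:
import random

random.seed(2)
worlds = [(a, b) for a in (True, False) for b in (True, False)]
weights = [random.random() for _ in worlds]
joint = {w: x / sum(weights) for w, x in zip(worlds, weights)}

def pr(pred):
    return sum(v for w, v in joint.items() if pred(*w))

p_A = pr(lambda a, b: a)
hook_given_A = pr(lambda a, b: a and ((not a) or b)) / p_A  # p(A ⊃ B | A)
B_given_A = pr(lambda a, b: a and b) / p_A                  # p(B | A)
```

The two conditional probabilities coincide because the predicates pick out exactly the same worlds, namely the *A*&*B*-worlds.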

What do we need the truth-functional truth conditions for? Do they
explain the meaning of compounds of conditionals? According to
Jackson, they do not (1987, p. 129). We know what
"*A* ⊃ *B*" means, as a
constituent in complex sentences. But
"*A* ⊃ *B*" does not mean the
same as "If *A*, *B*". The latter has a special
assertability condition. And his theory has no implications about
what, if anything, "if *A*, *B*" means when it occurs,
unasserted, as a constituent in a longer sentence.

(Here his analogy with "but" etc. fails. "But" can occur in unasserted clauses: "Either he arrived on time but didn't wait for us, or he never arrived at all" (see Woods (1997, p. 61)). It also occurs in questions and commands: "Shut the door but leave the window open". "Does anyone want eggs but no ham?". "But" means "and in contrast". Its meaning is not given by an "assertability condition".)

Do the truth-functional truth conditions explain the validity of arguments involving conditionals? Not in a way that accords well with intuition, we have seen. Jackson claims that our intuitions are at fault here: we confuse preservation of truth and preservation of assertability (1987, pp. 50-1).

Nor is there any direct evidence for Jackson's theory. Nobody
who thinks the Republicans won't win treats "If the Republicans
win, they will double income tax" as inappropriate but probably true,
in the same category as "Even Gödel understood truth-functional
logic". Jackson is aware of this. He seems to advocate an error
theory of conditionals: ordinary linguistic behaviour fits the false
theory that there is a proposition *A*∗*B* such that
**p**(*A*∗*B*) =
**p**_{A}(*B*) (1987,
pp. 39-40). If this is his view, he cannot hold that his own theory
is a psychologically accurate account of what people do when they use
conditionals. Perhaps it is an account of how we *should* use
conditionals, and would if we were free from error: we
*should* accept that "If the Republicans win they will double income tax"
is probably true when it is probable that the Republicans won't
win. Would we gain anything from following this prescription? It is
hard to see that we would: we would deprive ourselves of the ability
to discriminate between believable and unbelievable conditionals
whose antecedents we think false.

A common complaint against Supp's theory is that if
conditionals do not express propositions with truth conditions, we
have no account of the behaviour of compound sentences with
conditionals as parts (see e.g. Lewis (1976, p. 142)). However, no
theory has an intuitively adequate account of compounds of
conditionals: we saw in §2.4 that there are compounds which Hook
gets wrong; and compounds which Arrow gets wrong. Grice's and
Jackson's defences of Hook focus on what more is needed to
justify the *assertion* of a conditional, beyond belief that
it is true. This is no help when it occurs, unasserted, as a
constituent of a longer sentence, as Jackson accepts. And with
negations of conditionals and conditionals in antecedents, we saw,
the problem is reversed: we assert conditionals which we would not
believe if we construed them truth-functionally.

There have been ambitious attempts to construct a general theory of
compounds of conditionals, compatible with Supp's thesis. They
are based on a partial restoration of truth values, which has some
merit. Note that the difficulties for Hook and Arrow in §§2
and 3 were focused on the last two lines of the truth table -- the
cases in which the antecedent is false. No problems arose in virtue
of the cases in which the antecedent is true. Perhaps we can say that
"If *A*, *B*" is true when *A* and *B*
are both true, is false when *A* is true and *B* is
false, and has no truth value when *A* is false. We must
immediately add that to believe (or assert) that if *A*,
*B*, is not to believe (assert) that it is true; for it is
true only if *A*&*B*; and one might believe that if
*A*, *B*, and properly assert it, without believing
that *A*&*B* -- indeed, while thinking that it is
very likely not true. If I say "If you press that button, there will
be an explosion", I hope and expect that you will not press it, and
hence that my remark is not true.

Instead, one must say that to believe "If *A*, *B*" is
to believe that it is true *rather than false*; it is to
believe that *A*&*B* is much more likely than
*A*&~*B*; i.e., to believe that it is true given
that it is true or false. This is just to say that one's
confidence in a conditional is measured by
**p**_{A}(*B*). Note that for a
bivalent proposition, belief that it is true coincides with belief
that it is true rather than false. But the latter, not the former,
generalizes to conditionals.

This has some minor advantages. It allows one to be right by luck,
and wrong by bad luck: however strong my grounds for thinking that
*B* if *A*, if it turns out that
*A*&~*B*, I was wrong. However poor my grounds, if
it turns out that *A*&*B*, I was vindicated.

Now in principle one could handle negations, conjunctions and
disjunctions of conditionals by three-valued truth tables; and
continue to say that a complex statement is believable to the extent
that it is judged probably true given that it is true or false. For a
conjunction,
((*A*→*B*)&(*C*→*D*)),
the most natural truth table would seem to be: the conjunction is
true iff both conjuncts are true; false iff at least one conjunct is
false; otherwise it lacks a truth value. This has unappetizing
consequences. Consider a conjunction of two conditionals whose
antecedents are *A* and ~*A* respectively, such that
the first conditional is 100% certain and the second 99% certain, for
instance,
((*A*→*A*)&(~*A*→*B*))
where **p**_{~A}(*B*) =
0.99. This looks like something about which you should be close to
certain. But it cannot be true (for one of the antecedents is false),
and it may be false, in the unlucky event that it turns out that
~*A*&~*B*. So the probability of its truth, given
that it has a truth value, is 0. One can try other truth tables: make
the conjunction true provided that it has at least one true conjunct
and no false conjunct, false if it has at least one false conjunct,
lacking truth value otherwise. And one can come up with equally
unappetizing consequences. For work in this tradition and valuable
surveys of related work, see McDermott (1996) and Milne (1997).
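The unappetizing consequence is easy to compute. Below is a sketch of the "most natural" three-valued tables described above, applied to a conjunction of the two conditionals, with **p**(*A*) = 0.5 and **p**_{~A}(*B*) = 0.99 (the joint numbers are my own):

```python
# Three-valued conditional and conjunction, per the most natural tables:
# "If A, B" takes B's value when A holds, and has no value (None) otherwise;
# a conjunction is false if a conjunct is false, true if both are true,
# and otherwise has no value.
def cond3(a, b):
    return b if a else None

def conj3(x, y):
    if x is False or y is False:
        return False
    if x is True and y is True:
        return True
    return None

# p(A) = 0.5 and p(B|~A) = 0.99 (illustrative numbers).
joint = {(True, True): 0.25, (True, False): 0.25,
         (False, True): 0.495, (False, False): 0.005}

def value(a, b):  # the conjunction ((A -> A) & (~A -> B))
    return conj3(cond3(a, a), cond3(not a, b))

p_true = sum(v for (a, b), v in joint.items() if value(a, b) is True)
p_false = sum(v for (a, b), v in joint.items() if value(a, b) is False)
# p_true is 0 (one conjunct always lacks a value), p_false is p(~A & ~B):
# the probability of truth, given a truth value, is 0.
```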

A variant of this approach gives "semantic values" to conditionals
as follows: 1 (= true) if *A*&*B*; 0 (= false) if
*A*&~*B*;
**p**_{A}(*B*) if
~*A*. Thus we have a belief-relative three-valued entity. Its
probability is its "expected value". For instance, I'm to pick a
ball from a bag. 50% of the balls are red. 80% of the red balls have
black spots. Consider "If I pick a red ball (*R*) it will have
a black spot
(*B*)". **p**_{R}(*B*) =
80%. If *R*&*B*, the conditional gets semantic
value 1, if *R*&~*B*, it gets semantic value
0. What does it get if ~*R*? One way of motivating this
approach is to treat it as a refinement of Stalnaker's truth
conditions. Is the nearest *R*-world a *B*-world or
not? Well, if I actually don't pick a red ball, there isn't
any difference, in nearness to the actual world, between the worlds
in which I do; but 80% of them are *B*-worlds. Select an
*R*-world at random; then it's 80% likely that it is a
*B*-world. So "If *R*, *B*" gets 80% if
~*R*. You don't divide the ~*R*-worlds into those
in which "If *R*, *B*" is true and those in which it is
false. Instead you make the conditional "80%-true" in all of
them. The expected value of "If *R*, *B*" is
(**p**(*R*&*B*) × 1) + (**p**(*R*&~*B*) × 0) + (**p**(~*R*) × 0.8)
= (0.4 × 1) + (0.1 × 0) + (0.5 × 0.8) = 0.8 =
**p**_{R}(*B*). Ways of handling
compounds of conditionals have been proposed on the basis of these
semantic values. But again, they sometimes give implausible
results. For developments of this approach, see van Fraassen (1976),
McGee (1989), Jeffrey (1991), Stalnaker and Jeffrey (1994). For some
counterintuitive consequences, see Edgington (1991, pp. 200-2), Lance
(1991), McDermott (1996, pp. 25-28).
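The expected-value arithmetic in the ball example can be checked, and the identity in fact holds for any values of **p**(*R*) and **p**_{R}(*B*). A sketch with the text's numbers:

```python
# Expected semantic value of "If R, B": 1 if R&B, 0 if R&~B, p(B|R) if ~R.
p_R = 0.5           # half the balls are red
p_B_given_R = 0.8   # 80% of red balls have black spots

expected = ((p_R * p_B_given_R) * 1          # p(R&B) x 1
            + (p_R * (1 - p_B_given_R)) * 0  # p(R&~B) x 0
            + (1 - p_R) * p_B_given_R)       # p(~R) x p(B|R)
# (0.4 x 1) + (0.1 x 0) + (0.5 x 0.8) = 0.8 = p(B|R)
```

Algebraically, p(R&B)·1 + p(~R)·p(B|R) = p(B|R)·(p(R) + p(~R)) = p(B|R), whatever the two input probabilities are.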

Thus, no general algorithmic approach to complex statements with conditional components has yet met with success. Many followers of Adams take (by default) a more relaxed approach to the problem. They try to show that when a sentence with a conditional subsentence is intelligible, it can be paraphrased, at least in context, by a sentence without a conditional subsentence. As conditionals are not ordinary propositions, in that they essentially involve suppositions, this (it is claimed) is good enough. They also point out that some constructions are rarer, harder to understand, and more peculiar than would be expected if conditionals had truth conditions and embedded in a standard way. See Appiah (1985, pp. 205-10), Gibbard (1981, pp. 234-8), Edgington (1995, pp. 280-4), Woods (1997, pp. 58-68 and 120-4); see also Jackson (1987, pp. 127-37).

For some constructions the paraphrase can be done in a general,
uniform way. For example, "If *A*, then if *B*,
*C*" can be paraphrased "If *A*&*B*,
*C*". "It's not the case that if *A*, *B*"
is probably best paraphrased as "If *A*, it's not the
case that *B*". The alternative would be something like "If
*A*, it might well be the case that ~*B*", expressing
the judgement that the probability of *B* given *A* is
not particularly high. But with a categorical statement, like "It
will rain today", it is when one disagrees strongly that one says "No
it won't" or "It's not the case that it will rain
today". When one disagrees weakly, one says something like "It might
well not" or "I wouldn't be so sure". By analogy, then, it seems
that it is strong disagreement with "If *A*, *B*" that
deserves the negation operator. If someone asserts two or more
conditionals joined by "and", each conditional can be assessed as a
separate assertion.

Disjunctions of conditionals are peculiar. Of course, it is a
sufficient condition for accepting such a disjunction that one
accepts one disjunct. But this is rather uninteresting. "Or" is a
very useful word when it connects things we are uncertain about, for
often we can be confident that *A* or *B*, while not
knowing which. We are often uncertain about conditionals. Yet "Either
(if *A*, *B*) or (if *C*, *D*) -- but I
don't know which" is a form of thought that is rarely if ever
instantiated in real life. If conditionals are ordinary statements of
fact, this is odd. The problem is not merely one of syntactic
complexity: "Either (*A*&*B*) or
(*C*&*D*) -- I don't know which" is just as
syntactically complex, but is quite commonplace. Several agile minds
have risen to the challenge of providing me with examples of the kind
I claim are virtually non-existent: "Either, if I go out I'll
get wet, or, if I turn the television on I'll see tennis -- I
don't know which": for, either it's raining or it
isn't. If it's raining and I go out, I'll get wet. If
it isn't raining and I turn the television on I'll see
tennis. "Either, if you open box *A*, you'll get ten
dollars, or, if you open box *B*, you'll get a button --
I don't know which"; for, if Fred is in a good mood he has put
ten dollars in box *A* and twenty dollars in box *B*;
if Fred is not in a good mood he has put a paper clip in box
*A* and a button in box *B*. All right. But the
disjunction of conditionals is an exceedingly bad way of conveying
the information you have, and once the necessary background is filled
in, we see that the disjunction belongs elsewhere. So we have little
use for them. On the other hand, our genuine need for disjunctions
shows up naturally inside a conditional "If *A*, either
*B* or *C* (I don't know which)". Some apparent
disjunctions of conditionals are really no such thing: "Either
we'll have fish, if John arrives, or we'll have leftovers,
if he doesn't". Note that both disjuncts are asserted. Note that
it doesn't seem to matter whether one uses "or" or "and" in "If
it's fine, we'll have a picnic, or [and] if it isn't,
we'll go to the movies". I conclude that disjunctions proper of
conditionals are of little use -- the best we can do, with my early
examples, is to discern some disjunction of categorical propositions
each disjunct of which supports one or other conditional.

Conditionals in antecedents are also problematic. Gibbard suggests
(1981, pp. 234-8) that we have no general way of decoding them, and
some cannot be deciphered, for example, said of a recent conference,
"If Kripke was there if Strawson was there, then Anscombe was
there". "Do you know what you have been told?", he asks
(p. 235). When we do understand utterances of this form, he suggests,
it is because we can identify some obvious basis, *D*, for an
assertion of "If *A*, *B*" and interpret "If
(*B* if *A*), *C*" as "If *D*,
*C*". For instance, "If the light will go on if you press the
switch, the electrician has come": if the power is on, the
electrician has come.

As said above, "If *A*, then if *B*, *C*" is to
be paraphrased as "If *A*&*B*, then
*C*". For to suppose that *A*, then to suppose that
*B* and make a judgement about *C* under those
suppositions, is the same as to make a judgement about *C*
under the supposition that *A*&*B*. Let's
consider this as applied to a problem raised by McGee (1985) with the
following example. Before Reagan's first election, Reagan was
hot favourite, a second Republican, Anderson, was a complete
outsider, and Carter was lagging well behind Reagan. Consider first

(1) If a Republican wins and Reagan does not win, then Anderson will win.

As these are the only two Republicans in the race, (1) is unassailable. Now consider

(2) If a Republican wins, then if Reagan does not win, Anderson will win.

We read (2) as equivalent to (1), hence also unassailable.

Suppose I'm close to certain (say, 90% certain) that Reagan will win, and hence close to certain that

(3) A Republican will win.

But I don't believe

(4) If Reagan does not win, Anderson will win.

I'm less than 1% certain that (4). On the contrary, I believe that if Reagan doesn't win, Carter will win. As these opinions seem sensible, we have a prima facie counterexample to modus ponens: I accept (2) and (3), but reject (4). Truth conditions or not, valid arguments obey the probability-preservation principle. I'm 100% certain that (2), 90% certain that (3), but less than 1% certain that (4).
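The prima facie counterexample can be modelled with toy numbers (my own; the text's degrees of belief differ slightly, but the structure is the same): (2), read as (1), gets probability 1, (3) is highly probable, yet (4) is improbable, so the probability-preservation principle fails for the surface form of the argument.

```python
# Toy model of McGee's election case (the numbers are illustrative).
p = {"Reagan": 0.90, "Carter": 0.09, "Anderson": 0.01}
republicans = {"Reagan", "Anderson"}

p_3 = sum(v for k, v in p.items() if k in republicans)  # p(a Republican wins)
# (2), read as (1): p(Anderson wins | a Republican wins & Reagan doesn't) = 1
p_2 = p["Anderson"] / sum(v for k, v in p.items()
                          if k in republicans and k != "Reagan")
# (4): p(Anderson wins | Reagan doesn't win)
p_4 = p["Anderson"] / (1 - p["Reagan"])
# Premise improbabilities total (1 - 1) + (1 - 0.91) = 0.09, yet the
# conclusion's improbability is 0.9: probability preservation fails.
```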

Hook saves modus ponens by claiming that I must accept (4). For Hook, (4) is equivalent to "Either Reagan will win or Anderson will win". As I'm 90% certain that Reagan will win, I must accept this disjunction, and hence accept (4). Hook's reading of (4) is, of course, implausible.

Arrow saves modus ponens by claiming that, although (1) is certain, (2) is not equivalent to (1), and (2) is almost certainly false. For Stalnaker,

(5) If a Republican wins, then if Reagan doesn't win, Carter will win

is true. To assess (5), we need to consider the nearest world in which a Republican wins: as Reagan is the hot favourite, that is a world in which Reagan wins. We then consider the nearest world to it in which Reagan doesn't win: as Carter is Reagan's closest rival, that is a world in which Carter wins. So (5) comes out true, and (2) false.
Stalnaker's reading of (2) is implausible; intuitively, we accept (2) as equivalent to (1), and do not accept (5).

Supp saves modus ponens by denying that the argument is really of
that form.
"*A* ⇒ *B*;
*A*; so *B*" is demonstrably valid when *A* and
*B* are propositions. For instance, if
**p**(*A*) = 90% and
**p**_{A}(*B*) = 90% the lowest
possible value for **p**(*B*) is 81%. The
"consequent" of (2), "If Reagan doesn't win, Anderson will win",
is not a proposition. The argument is really of the form "If
*A*&*B*, then *C*; *A*; so if
*B* then *C*". This argument form is invalid (Supp and
Stalnaker agree). Take the case where *C* = *A*, and we
have "If *A*&*B* then *A*; *A*; so if
*B* then *A*". The first premise is a tautology and
falls out as redundant; and we are left with "*A*; so if
*B* then *A*". We have already seen that this is
invalid: I can think it very likely that Sue is lecturing right now,
without thinking that if she was injured on her way to work, she is
lecturing right now.
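The 81% lower bound can be checked directly: **p**(*B*) ≥ **p**(*A*&*B*) = **p**(*A*) · **p**_{A}(*B*), and the worst case puts all the probability of *B* inside *A*. A minimal sketch, using a joint distribution of my own devising that realizes that worst case:

```python
# Worst-case joint distribution over the four state-descriptions,
# chosen (hypothetically) so that p(A) = 0.9 and p(B | A) = 0.9
# while B never holds outside A.
dist = {
    ("A", "B"): 0.81,
    ("A", "not-B"): 0.09,
    ("not-A", "B"): 0.00,
    ("not-A", "not-B"): 0.10,
}

p_A = sum(v for (a, _), v in dist.items() if a == "A")
p_B = sum(v for (_, b), v in dist.items() if b == "B")
p_B_given_A = dist[("A", "B")] / p_A

# p(B) can be no lower than p(A & B) = p(A) * p(B|A) = 0.81.
print(round(p_A, 2), round(p_B_given_A, 2), round(p_B, 2))  # 0.9 0.9 0.81
```

No distribution with **p**(*A*) = 0.9 and **p**_{A}(*B*) = 0.9 can push **p**(*B*) below 0.81, which is why the propositional form of modus ponens is probabilistically safe.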

Compounds of conditionals are a hard problem for everyone. It is difficult to see why it should be so hard if conditionals have truth conditions. Supp is not at a unilateral disadvantage.

One believes that *B* to the extent that one thinks *B*
more likely than not *B*; according to Supp, one believes that
*B* if *A* to the extent that one believes that
*B* under the supposition that *A*, i.e. to the extent
that one thinks *A*&*B* more likely than
*A*&~*B*; and there is no proposition *X*
such that one must believe *X* more likely than ~*X*, just to
the extent that one believes *A*&*B* more likely
than *A*&~*B*. Conditional desires appear to be
like conditional beliefs: to desire that *B* is to prefer
*B* to ~*B*; to desire that *B* if *A* is
to prefer *A*&*B* to *A*&~*B*;
there is no proposition *X* such that one prefers *X*
to ~*X* just to the extent that one prefers
*A*&*B* to *A*&~*B*. I have
entered a competition and have a very small chance of winning. I
express the desire that if I win the prize (*W*), you tell
Fred straight away (*T*). I prefer *W*&*T*
to *W*&~*T*. I do not necessarily prefer
(*W* → *T*) to
~(*W* → *T*), i.e. (~*W*
or *W*&*T*) to *W*&~*T*. For I
also want to win the prize, and much the most likely way for
(~*W* or *W*&*T*) to be true is that I
don't win the prize. Nor is my conditional desire satisfied if I
don't win but in the nearest possible world in which I win, you
tell Fred straight away.
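The point about the material conditional can be put numerically. With an assumed 1% chance of winning and a coin-flip chance of Fred's being told (my numbers, not the text's), the truth of (~*W* or *W*&*T*) is dominated by the not-winning case:

```python
# Illustrative numbers (assumed, not from the text): a 1% chance of
# winning the prize, and an even chance that Fred gets told if I win.
p_win = 0.01
p_tell_given_win = 0.5

# Probability that the material conditional W -> T, i.e. (~W or (W & T)),
# comes out true:
p_hook = (1 - p_win) + p_win * p_tell_given_win

# Share of that probability contributed merely by not winning:
share_not_winning = (1 - p_win) / p_hook
print(round(p_hook, 3), round(share_not_winning, 3))  # 0.995 0.995
```

Being nearly certain that the Hook conditional is true, almost entirely because I expect not to win, plainly does not capture the desire that you tell Fred if I do win.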

If I believe that *B* if *A*, i.e. (according to Supp)
think *A*&*B* much more likely than
*A*&~*B*, this puts me in a position to make a
conditional commitment to *B*: to assert that *B*,
conditionally upon *A*. If *A* is found to be true, my
conditional assertion has the force of an assertion of *B*. If
*A* is false, there is no proposition that I asserted. I did,
however, express my conditional belief -- it is not as though I said
nothing. Suppose I say "If you press that switch, there will be an
explosion", and my hearer takes me to have made a conditional
assertion of the consequent, one which will have the force of an
assertion of the consequent if she presses the button. Provided she
takes me to be trustworthy and reliable, she thinks that if she
presses the switch, the consequent is likely to be true. That is, she
acquires a reason to think that if she presses it, there will be an
explosion; and hence a reason not to press it.

Conditional commands can, likewise, be construed as having the force of a command of the consequent, conditional upon the antecedent's being true. The doctor says to the nurse in the emergency ward, "If the patient is still alive in the morning, change the dressing". Considered as a command to make Hook's conditional true, this is equivalent to "Make it the case that either the patient is not alive in the morning, or you change the dressing". The nurse puts a pillow over the patient's face and kills her. On the truth-functional interpretation, the nurse can claim that he was carrying out the doctor's order. Extending Jackson's account to conditional commands, the doctor said "Make it the case that either the patient is not alive in the morning, or you change the dressing", and indicated that she would still command this if she knew that the patient would be alive. This doesn't help. The nurse who kills the patient still carried out an order. Why should the nurse be concerned with what the doctor would command in a counterfactual situation?

Hook will reply to the above argument about conditional commands that we need to appeal to pragmatics. Typically, for any command, conditional or not, there are tacitly understood reasonable and unreasonable ways of obeying it; and killing the patient is to be tacitly understood as a totally unreasonable way of making the truth-functional conditional true -- as, indeed, would be changing the dressing in such an incompetent way that you almost strangle the patient in the process. The latter clearly is obeying the command, but not in the intended manner. But it is stretching pragmatics rather far to say the same of the former. To take a less dramatic example, at Fred's request, the Head of Department agrees to bring it about that he gives the Kant lectures if his appointment is extended. She then puts every effort into making sure that his appointment is not extended. Is it plausible to say that this is doing what she was asked to do, albeit not in the intended way?

Extending Stalnaker's account to conditional commands, "If it rains, take your umbrella" becomes "In the nearest possible world in which it rains, take your umbrella". Suppose I have forgotten your command or alternatively am inclined to disregard it. However, it doesn't rain. In the nearest world in which it rains, I don't take my umbrella. On Stalnaker's account, I disobeyed you. Similarly for conditional promises: on this analysis I could break my promise to go to the doctor if the pain gets worse, even if the pain gets better. This is wrong: conditional commands and promises are not requirements on my behaviour in other possible worlds.

Among conditional questions we can distinguish those in which the addressee is presumed to know whether the antecedent is true, and those in which he is not. In the latter case, the addressee is being asked to suppose that the antecedent is true, and give his opinion about the consequent: "If it rains, will the match be cancelled?". In the former case -- "If you have been to London, did you like it?" -- he is expected to answer the consequent-question if the antecedent is true. If the antecedent is false, the question lapses: there is no conditional belief for him to express. "Not applicable", as the childless might write on a form which asks "If you have children, how many children do you have?". You are not being asked how many children you have in the nearest possible world in which you have children. Nor is it permissible to answer "17" on the grounds that "I have children → I have 17 children" is true. Nor are you being asked what you would believe about the consequent if you came to believe that you did have children.

Widening our perspective to include these other conditionals tends to confirm Supp's view. Any propositional attitude can be held categorically, or under a supposition. Any speech act can be performed unconditionally, or conditionally upon something else. Our uses of "if", on the whole, seem to be better and more uniformly explained without invoking conditional propositions.

- Edgington, Dorothy 1995: "On Conditionals". *Mind*, 104, pp. 235-329.
- Harper, W. L., Stalnaker, R., and Pearce, G. (eds.) 1981: *Ifs*. Dordrecht: Reidel.
- Jackson, Frank (ed.) 1991: *Conditionals*. Oxford: Clarendon Press.
- Sanford, David H. 1989: *If P, then Q: Conditionals and the Foundations of Reasoning*. London: Routledge.
- Woods, Michael 1997: *Conditionals*. Oxford: Clarendon Press.

- Adams, E. W. 1965: "A Logic of Conditionals". *Inquiry*, 8, pp. 166-97.
- Adams, E. W. 1966: "Probability and the Logic of Conditionals", in Hintikka, J. and Suppes, P. (eds.), *Aspects of Inductive Logic*. Amsterdam: North Holland, pp. 256-316.
- Adams, E. W. 1970: "Subjunctive and Indicative Conditionals". *Foundations of Language*, 6, pp. 89-94.
- Adams, E. W. 1975: *The Logic of Conditionals*. Dordrecht: Reidel.
- Adams, E. W. 1998: *A Primer of Probability Logic*. Stanford: CSLI Publications.
- Appiah, A. 1985: *Assertion and Conditionals*. Cambridge: Cambridge University Press.
- Bayes, Thomas 1763: "An Essay Towards Solving a Problem in the Doctrine of Chances". *Philosophical Transactions of the Royal Society of London*, 53, pp. 370-418.
- Bennett, Jonathan 1988: "Farewell to the Phlogiston Theory of Conditionals". *Mind*, 97, pp. 509-27.
- Bennett, Jonathan 1995: "Classifying Conditionals: the Traditional Way is Right". *Mind*, 104, pp. 331-44.
- Dudman, V. H. 1984: "Parsing ‘If’-sentences". *Analysis*, 44, pp. 145-53.
- Dudman, V. H. 1988: "Indicative and Subjunctive". *Analysis*, 48, pp. 113-22.
- Edgington, Dorothy 1991: "The Mystery of the Missing Matter of Fact". *Proceedings of the Aristotelian Society Supplementary Volume*, 65, pp. 185-209.
- Frege, G. 1879: *Begriffsschrift*, in Geach, Peter and Black, Max 1960: *Translations from the Philosophical Writings of Gottlob Frege*. Oxford: Basil Blackwell.
- Gärdenfors, Peter 1986: "Belief Revisions and the Ramsey Test for Conditionals". *Philosophical Review*, 95, pp. 81-93.
- Gärdenfors, Peter 1988: *Knowledge in Flux*. Cambridge, MA: MIT Press.
- Gibbard, A. 1981: "Two Recent Theories of Conditionals", in Harper, Stalnaker and Pearce (eds.) 1981.
- Grice, H. P. 1989: *Studies in the Way of Words*. Cambridge, MA: Harvard University Press.
- Jackson, Frank 1979: "On Assertion and Indicative Conditionals". *Philosophical Review*, 88, pp. 565-589.
- Jackson, Frank 1981: "Conditionals and Possibilia". *Proceedings of the Aristotelian Society*, 81, pp. 125-137.
- Jackson, Frank 1987: *Conditionals*. Oxford: Basil Blackwell.
- Jackson, Frank 1990: "Classifying Conditionals I". *Analysis*, 50, pp. 134-47. Reprinted in Jackson 1998.
- Jackson, Frank 1998: *Mind, Method and Conditionals*. London: Routledge.
- Jeffrey, Richard 1991: "Matter of Fact Conditionals". *Proceedings of the Aristotelian Society Supplementary Volume*, 65, pp. 161-183.
- Lance, Mark 1991: "Probabilistic Dependence among Conditionals". *Philosophical Review*, 100, pp. 269-76.
- Lewis, David 1973: *Counterfactuals*. Oxford: Basil Blackwell.
- Lewis, David 1976: "Probabilities of Conditionals and Conditional Probabilities". *Philosophical Review*, 85, pp. 297-315. Page references to Lewis 1986.
- Lewis, David 1986: *Philosophical Papers*, Volume 2. Oxford: Oxford University Press.
- Mackie, J. 1973: *Truth, Probability and Paradox*. Oxford: Clarendon Press.
- McDermott, Michael 1996: "On the Truth Conditions of Certain ‘If’-Sentences". *Philosophical Review*, 105, pp. 1-37.
- McGee, Vann 1985: "A Counterexample to Modus Ponens". *Journal of Philosophy*, 82, pp. 462-71.
- McGee, Vann 1989: "Conditional Probabilities and Compounds of Conditionals". *Philosophical Review*, 98, pp. 485-542.
- Milne, Peter 1997: "Bruno de Finetti and the Logic of Conditional Events". *British Journal for the Philosophy of Science*, 48, pp. 195-232.
- Ramsey, F. P. 1926: "Truth and Probability", in Ramsey 1990, pp. 52-94.
- Ramsey, F. P. 1929: "General Propositions and Causality", in Ramsey 1990, pp. 145-63.
- Ramsey, F. P. 1990: *Philosophical Papers*, ed. D. H. Mellor. Cambridge: Cambridge University Press.
- Read, Stephen 1995: "Conditionals and the Ramsey Test". *Proceedings of the Aristotelian Society Supplementary Volume*, 69, pp. 47-65.
- Stalnaker, R. 1968: "A Theory of Conditionals", in *Studies in Logical Theory, American Philosophical Quarterly* Monograph Series, 2. Oxford: Blackwell, pp. 98-112. Reprinted in Jackson, Frank (ed.) 1991. Page references to 1991.
- Stalnaker, R. 1970: "Probability and Conditionals". *Philosophy of Science*, 37, pp. 64-80. Reprinted in Harper, Stalnaker and Pearce (eds.) 1981.
- Stalnaker, R. 1975: "Indicative Conditionals". *Philosophia*, 5, pp. 269-86. Reprinted in Jackson, F. (ed.) 1991.
- Stalnaker, R. 1984: *Inquiry*. Cambridge, MA: MIT Press.
- Stalnaker, R. and Jeffrey, R. 1994: "Conditionals as Random Variables", in Eells, E. and Skyrms, B. (eds.), *Probability and Conditionals*. Cambridge: Cambridge University Press.
- Thomson, James 1990: "In Defense of ‘⊃’". *Journal of Philosophy*, 87, pp. 56-70.
- van Fraassen, Bas 1976: "Probabilities of Conditionals", in Harper, W. and Hooker, C. (eds.), *Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science*, Volume I. Dordrecht: Reidel, pp. 261-308.

Dorothy Edgington

University College, Oxford University

*First published: August 8, 2001*

*Content last modified: August 8, 2001*