# Moral Dilemmas

First published Mon Apr 15, 2002; substantive revision Sat Jun 16, 2018

Moral dilemmas, at the very least, involve conflicts between moral requirements. Consider the cases given below.

## 1. Examples

In Book I of Plato’s Republic, Cephalus defines ‘justice’ as speaking the truth and paying one’s debts. Socrates quickly refutes this account by suggesting that it would be wrong to repay certain debts—for example, to return a borrowed weapon to a friend who is not in his right mind. Socrates’ point is not that repaying debts is without moral import; rather, he wants to show that it is not always right to repay one’s debts, at least not exactly when the one to whom the debt is owed demands repayment. What we have here is a conflict between two moral norms: repaying one’s debts and protecting others from harm. And in this case, Socrates maintains that protecting others from harm is the norm that takes priority.

Nearly twenty-four centuries later, Jean-Paul Sartre described a moral conflict the resolution of which was, to many, less obvious than the resolution to the Platonic conflict. Sartre (1957) tells of a student whose brother had been killed in the German offensive of 1940. The student wanted to avenge his brother and to fight forces that he regarded as evil. But the student’s mother was living with him, and he was her one consolation in life. The student believed that he had conflicting obligations. Sartre describes him as being torn between two kinds of morality: one of limited scope but certain efficacy, personal devotion to his mother; the other of much wider scope but uncertain efficacy, attempting to contribute to the defeat of an unjust aggressor.

While the examples from Plato and Sartre are the ones most commonly cited, there are many others. Literature abounds with such cases. In Aeschylus’s Agamemnon, the protagonist ought to save his daughter and ought to lead the Greek troops to Troy; he ought to do each but he cannot do both. And Antigone, in Sophocles’s play of the same name, ought to arrange for the burial of her brother, Polyneices, and ought to obey the pronouncements of the city’s ruler, Creon; she can do each of these things, but not both. Areas of applied ethics, such as biomedical ethics, business ethics, and legal ethics, are also replete with such cases.

## 2. The Concept of Moral Dilemmas

What is common to the two well-known cases is conflict. In each case, an agent regards herself as having moral reasons to do each of two actions, but doing both actions is not possible. Ethicists have called situations like these moral dilemmas. The crucial features of a moral dilemma are these: the agent is required to do each of two (or more) actions; the agent can do each of the actions; but the agent cannot do both (or all) of the actions. The agent thus seems condemned to moral failure; no matter what she does, she will do something wrong (or fail to do something that she ought to do).

The Platonic case strikes many as too easy to be characterized as a genuine moral dilemma. For the agent’s solution in that case is clear; it is more important to protect people from harm than to return a borrowed weapon. And in any case, the borrowed item can be returned later, when the owner no longer poses a threat to others. Thus in this case we can say that the requirement to protect others from serious harm overrides the requirement to repay one’s debts by returning a borrowed item when its owner so demands. When one of the conflicting requirements overrides the other, we have a conflict but not a genuine moral dilemma. So in addition to the features mentioned above, in order to have a genuine moral dilemma it must also be true that neither of the conflicting requirements is overridden (Sinnott-Armstrong 1988, Chapter 1).

## 3. Problems

It is less obvious in Sartre’s case that one of the requirements overrides the other. Why this is so, however, may not be so obvious. Some will say that our uncertainty about what to do in this case is simply the result of uncertainty about the consequences. If we were certain that the student could make a difference in defeating the Germans, the obligation to join the military would prevail. But if the student made little difference whatsoever in that cause, then his obligation to tend to his mother’s needs would take precedence, since there he is virtually certain to be helpful. Others, though, will say that these obligations are equally weighty, and that uncertainty about the consequences is not at issue here.

Ethicists as diverse as Kant (1971/1797), Mill (1979/1861), and Ross (1930, 1939) have assumed that an adequate moral theory should not allow for the possibility of genuine moral dilemmas. Only recently—in the last sixty years or so—have philosophers begun to challenge that assumption. And the challenge can take at least two different forms. Some will argue that it is not possible to preclude genuine moral dilemmas. Others will argue that even if it were possible, it is not desirable to do so.

To illustrate some of the debate that occurs regarding whether it is possible for any theory to eliminate genuine moral dilemmas, consider the following. The conflicts in Plato’s case and in Sartre’s case arose because there is more than one moral precept (using ‘precept’ to designate rules and principles), more than one precept sometimes applies to the same situation, and in some of these cases the precepts demand conflicting actions. One obvious solution here would be to arrange the precepts, however many there might be, hierarchically. By this scheme, the highest ordered precept always prevails, the second prevails unless it conflicts with the first, and so on. There are at least two glaring problems with this obvious solution, however. First, it just does not seem credible to hold that moral rules and principles should be hierarchically ordered. While the requirements to keep one’s promises and to prevent harm to others clearly can conflict, it is far from clear that one of these requirements should always prevail over the other. In the Platonic case, the obligation to prevent harm is clearly stronger. But there can easily be cases where the harm that can be prevented is relatively mild and the promise that is to be kept is very important. And most other pairs of precepts are like this. This was a point made by Ross in The Right and the Good (1930, Chapter 2).

The second problem with this easy solution is deeper. Even if it were plausible to arrange moral precepts hierarchically, situations can arise in which the same precept gives rise to conflicting obligations. Perhaps the most widely discussed case of this sort is taken from William Styron’s Sophie’s Choice (1980; see Greenspan 1983 and Tessman 2015, 160–163). Sophie and her two children are at a Nazi concentration camp. A guard confronts Sophie and tells her that one of her children will be allowed to live and one will be killed. But it is Sophie who must decide which child will be killed. Sophie can prevent the death of either of her children, but only by condemning the other to be killed. The guard makes the situation even more excruciating by informing Sophie that if she chooses neither, then both will be killed. With this added factor, Sophie has a morally compelling reason to choose one of her children. But for each child, Sophie has an apparently equally strong reason to save him or her. Thus the same moral precept gives rise to conflicting obligations. Some have called such cases symmetrical (Sinnott-Armstrong 1988, Chapter 2).

## 4. Dilemmas and Consistency

We shall return to the issue of whether it is possible to preclude genuine moral dilemmas. But what about the desirability of doing so? Why have ethicists thought that their theories should preclude the possibility of dilemmas? At the intuitive level, the existence of moral dilemmas suggests some sort of inconsistency. An agent caught in a genuine dilemma is required to do each of two acts but cannot do both. And since he cannot do both, not doing one is a condition of doing the other. Thus, it seems that the same act is both required and forbidden. But exposing a logical inconsistency takes some work; for initial inspection reveals that the inconsistency intuitively felt is not present. Allowing $$OA$$ to designate that the agent in question ought to do $$A$$ (or is morally obligated to do $$A$$, or is morally required to do $$A$$), that $$OA$$ and $$OB$$ are both true is not itself inconsistent, even if one adds that it is not possible for the agent to do both $$A$$ and $$B$$. And even if the situation is appropriately described as $$OA$$ and $$O\neg A$$, that is not a contradiction; the contradictory of $$OA$$ is $$\neg OA$$. (See Marcus 1980 and McConnell 1978, 273.)

Similarly, rules that generate moral dilemmas are not inconsistent, at least on the usual understanding of that term. Ruth Marcus suggests plausibly that we “define a set of rules as consistent if there is some possible world in which they are all obeyable in all circumstances in that world.” Thus, “rules are consistent if there are possible circumstances in which no conflict will emerge,” and “a set of rules is inconsistent if there are no circumstances, no possible world, in which all the rules are satisfiable” (Marcus 1980, 128 and 129). Kant, Mill, and Ross were likely aware that a dilemma-generating theory need not be inconsistent. Even so, they would be disturbed if their own theories allowed for such predicaments. If this speculation is correct, it suggests that Kant, Mill, Ross, and others thought that there is an important theoretical feature that dilemma-generating theories lack. And this is understandable. It is certainly no comfort to an agent facing a reputed moral dilemma to be told that at least the rules which generate this predicament are consistent because there is a possible world in which they do not conflict. For a good practical example, consider the situation of the criminal defense attorney. She is said to have an obligation to hold in confidence the disclosures made by a client and to be required to conduct herself with candor before the court (where the latter requires that the attorney inform the court when her client commits perjury) (Freedman 1975, Chapter 3). It is clear that in this world these two obligations often conflict. It is equally clear that in some possible world—for example, one in which clients do not commit perjury—both obligations can be satisfied. Knowing this is of no assistance to defense attorneys who face a conflict between these two requirements in this world.
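Marcus’s satisfiability criterion can be made concrete with a small brute-force check. The encoding below is a hypothetical sketch, not part of Marcus’s text: the atoms `perjury` and `disclose` and the two rule functions are illustrative stand-ins for the defense-attorney example. The two rules come out consistent in her sense, because a no-perjury world obeys both, even though every perjury world violates one of them.

```python
from itertools import product

# Marcus: a set of rules is consistent iff there is some possible world
# in which all of them can be obeyed. Hypothetical encoding of the
# defense-attorney example; the atom names are illustrative only.
atoms = ["perjury", "disclose"]  # client commits perjury / attorney tells court
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=2)]

def confidentiality(w):
    # Hold client disclosures in confidence: never inform the court.
    return not w["disclose"]

def candor(w):
    # Candor before the court: if the client commits perjury, inform it.
    return not w["perjury"] or w["disclose"]

rules = [confidentiality, candor]

# Consistent in Marcus's sense: some world obeys every rule.
consistent = any(all(rule(w) for rule in rules) for w in worlds)

# But in every world where the client commits perjury, the rules conflict.
conflict = all(not all(rule(w) for rule in rules)
               for w in worlds if w["perjury"])

print(consistent, conflict)  # True True
```

The gap between the two printed values is exactly Marcus’s point: consistency is a claim about some possible world, while the conflict defense attorneys face is a claim about this one.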

Ethicists who are concerned that their theories not allow for moral dilemmas have more than consistency in mind. What is troubling is that theories that allow for dilemmas fail to be uniquely action-guiding. A theory can fail to be uniquely action-guiding in either of two ways: by recommending incompatible actions in a situation or by not recommending any action at all. Theories that generate genuine moral dilemmas fail to be uniquely action-guiding in the former way. Theories that have no way, even in principle, of determining what an agent should do in a particular situation have what Thomas E. Hill, Jr. calls “gaps” (Hill 1996, 179–183); they fail to be action-guiding in the latter way. Since one of the main points of moral theories is to provide agents with guidance, that suggests that it is desirable for theories to eliminate dilemmas and gaps, at least if doing so is possible.

But failing to be uniquely action-guiding is not the only reason that the existence of moral dilemmas is thought to be troublesome. Just as important, the existence of dilemmas does lead to inconsistencies if certain other widely held theses are true. Here we shall consider two different arguments, each of which shows that one cannot consistently acknowledge the reality of moral dilemmas while holding selected (and seemingly plausible) principles.

The first argument shows that two standard principles of deontic logic are, when conjoined, incompatible with the existence of moral dilemmas. The first of these is the principle of deontic consistency

$\tag{PC} OA \rightarrow \neg O\neg A.$

Intuitively this principle just says that the same action cannot be both obligatory and forbidden. Note that as initially described, the existence of dilemmas does not conflict with PC. For as described, dilemmas involve a situation in which an agent ought to do $$A$$, ought to do $$B$$, but cannot do both $$A$$ and $$B$$. But if we add a principle of deontic logic, then we obtain a conflict with PC:

$\tag{PD} \Box(A \rightarrow B) \rightarrow(OA \rightarrow OB).$

Intuitively, PD just says that if doing $$A$$ brings about $$B$$, and if $$A$$ is obligatory (morally required), then $$B$$ is obligatory (morally required). The first argument that generates inconsistency can now be stated. Premises (1), (2), and (3) represent the claim that moral dilemmas exist.

1. $$OA$$
2. $$OB$$
3. $$\neg C(A \wedge B)$$ [where ‘$$\neg C$$’ means ‘cannot’]
4. $$\Box(A \rightarrow B) \rightarrow (OA \rightarrow OB)$$ [where ‘$$\Box$$’ means physical necessity]
5. $$\Box\neg(B \wedge A)$$ (from 3)
6. $$\Box(B \rightarrow \neg A)$$ (from 5)
7. $$\Box(B \rightarrow \neg A) \rightarrow (OB \rightarrow O\neg A)$$ (an instantiation of 4)
8. $$OB \rightarrow O\neg A$$ (from 6 and 7)
9. $$O\neg A$$ (from 2 and 8)
10. $$OA \text{ and } O\neg A$$ (from 1 and 9)

Line (10) directly conflicts with PC. And from PC and (1), we can conclude:

11. $$\neg O\neg A$$

And, of course, (9) and (11) are contradictory. So if we assume PC and PD, then the existence of dilemmas generates an inconsistency of the old-fashioned logical sort. (Note: In standard deontic logic, the ‘$$\Box$$’ in PD typically designates logical necessity. Here I take it to indicate physical necessity so that the appropriate connection with premise (3) can be made. And I take it that logical necessity is stronger than physical necessity.)
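The first argument can also be checked semantically. On the standard possible-worlds reading, $$OA$$ holds just in case $$A$$ is true at every deontically ideal world, and principle D amounts to requiring at least one ideal world. The brute-force sketch below is a hypothetical encoding, not taken from any of the cited texts: once premise (3) rules out any world in which both $$A$$ and $$B$$ are done, no nonempty set of ideal worlds makes premises (1) and (2) true together.

```python
from itertools import product

# Worlds are valuations over the two act-atoms A and B.
worlds = [dict(zip("AB", bits)) for bits in product([False, True], repeat=2)]

def obligatory(prop, ideal):
    # OA: prop holds at every deontically ideal world.
    return all(prop(w) for w in ideal)

A = lambda w: w["A"]
B = lambda w: w["B"]

# Premise (3): the agent cannot do both, so no possible world has A and B.
possible = [w for w in worlds if not (w["A"] and w["B"])]

# Search every nonempty set of ideal worlds (nonemptiness reflects D:
# what is obligatory must be jointly satisfiable somewhere).
models = []
for n in range(1, 2 ** len(possible)):
    ideal = [w for i, w in enumerate(possible) if n & (1 << i)]
    if obligatory(A, ideal) and obligatory(B, ideal):  # premises (1), (2)
        models.append(ideal)

print(models)  # [] -- no model satisfies the dilemma's three premises
```

The empty search result is the semantic face of the syntactic contradiction at lines (9) and (11): standard deontic semantics simply has no room for a genuine dilemma.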

Two other principles accepted in most systems of deontic logic entail PC. So if PD holds, then one of these additional two principles must be jettisoned too. The first says that if an action is obligatory, it is also permissible. The second says that an action is permissible if and only if it is not forbidden. These principles may be stated as:

$\tag{OP} OA \rightarrow PA;$

and

$\tag{D} PA \leftrightarrow \neg O\neg A.$
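Spelling out the entailment claimed above: the left-to-right direction of D converts $$PA$$ into $$\neg O\neg A$$, so OP chains directly into PC:

$OA \rightarrow PA \ \text{(by OP)}; \qquad PA \rightarrow \neg O\neg A \ \text{(by D)}; \qquad \text{hence } OA \rightarrow \neg O\neg A \ \text{(PC)}.$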

Principles OP and D are basic; they seem to be conceptual truths (Brink 1994, section IV). The second argument that generates inconsistency, like the first, has as its first three premises a symbolic representation of a moral dilemma.

1. $$OA$$
2. $$OB$$
3. $$\neg C(A \wedge B)$$

And like the first, this second argument shows that the existence of dilemmas leads to a contradiction if we assume two other commonly accepted principles. The first of these principles is that ‘ought’ implies ‘can’. Intuitively this says that if an agent is morally required to do an action, it must be possible for the agent to do it. This principle seems necessary if moral judgments are to be uniquely action-guiding. We may represent this as

4. $$OA \rightarrow CA$$ (for all $$A$$)

The other principle, endorsed by most systems of deontic logic, says that if an agent is required to do each of two actions, she is required to do both. We may represent this as

5. $$(OA \wedge OB) \rightarrow O(A \wedge B)$$ (for all $$A$$ and all $$B$$)

The argument then proceeds:

6. $$O(A \wedge B) \rightarrow C(A \wedge B)$$ (an instance of 4)
7. $$OA \wedge OB$$ (from 1 and 2)
8. $$O(A \wedge B)$$ (from 5 and 7)
9. $$\neg O(A \wedge B)$$ (from 3 and 6)

So if one assumes that ‘ought’ implies ‘can’ and if one assumes the principle represented in (5)—dubbed by some the agglomeration principle (Williams 1965)—then again a contradiction can be derived.
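The second derivation can likewise be replayed mechanically. In the toy encoding below (a hypothetical sketch, not from the cited texts), an obligation is a set of acts to be performed jointly; closing the obligation set under agglomeration produces the joint obligation $$O(A \wedge B)$$, which ‘ought’ implies ‘can’ then rejects, given premise (3).

```python
# Premises (1) and (2): two separate obligations, each a set of acts
# to be performed together.
obligations = {frozenset({"A"}), frozenset({"B"})}

# Premise (3): the one combination of acts the agent cannot perform.
def can(acts):
    return acts != frozenset({"A", "B"})

# Step (5), agglomeration: close the obligation set under conjunction.
for x in list(obligations):
    for y in list(obligations):
        obligations.add(x | y)

# Step (4), 'ought' implies 'can': every obligation must be performable.
violations = [o for o in obligations if not can(o)]
print(len(violations))  # 1 -- the agglomerated O(A & B) of steps (8)/(9)
```

Dropping either closure rule dissolves the contradiction: without agglomeration the joint obligation is never formed, and without ‘ought’ implies ‘can’ the unperformable obligation is no violation, which is exactly the choice the argument forces.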

## 5. Responses to the Arguments

Now obviously the inconsistency in the first argument can be avoided if one denies either PC or PD. And the inconsistency in the second argument can be averted if one gives up either the principle that ‘ought’ implies ‘can’ or the agglomeration principle. There is, of course, another way to avoid these inconsistencies: deny the possibility of genuine moral dilemmas. It is fair to say that much of the debate concerning moral dilemmas in the last sixty years has been about how to avoid the inconsistencies generated by the two arguments above.

Opponents of moral dilemmas have generally held that the crucial principles in the two arguments above are conceptually true, and therefore we must deny the possibility of genuine dilemmas. (See, for example, Conee 1982 and Zimmerman 1996.) Most of the debate, from all sides, has focused on the second argument. There is an oddity about this, however. When one examines the pertinent principles in each argument which, in combination with dilemmas, generates an inconsistency, there is little doubt that those in the first argument have a greater claim to being conceptually true than those in the second. (One who recognizes the salience of the first argument is Brink 1994, section V.) Perhaps the focus on the second argument is due to the impact of Bernard Williams’s influential essay (Williams 1965). But notice that the first argument shows that if there are genuine dilemmas, then either PC or PD must be relinquished. Even most supporters of dilemmas acknowledge that PC is quite basic. E.J. Lemmon, for example, notes that if PC does not hold in a system of deontic logic, then all that remains are truisms and paradoxes (Lemmon 1965, p. 51). And giving up PC also requires denying either OP or D, each of which also seems basic. There has been much debate about PD—in particular, questions generated by the Good Samaritan paradox—but still it seems basic. So those who want to argue against dilemmas purely on conceptual grounds are better off focusing on the first of the two arguments above.

Some opponents of dilemmas also hold that the pertinent principles in the second argument—the principle that ‘ought’ implies ‘can’ and the agglomeration principle—are conceptually true. But foes of dilemmas need not say this. Even if they believe that a conceptual argument against dilemmas can be made by appealing to PC and PD, they have several options regarding the second argument. They may defend ‘ought’ implies ‘can’, but hold that it is a substantive normative principle, not a conceptual truth. Or they may even deny the truth of ‘ought’ implies ‘can’ or the agglomeration principle, though not because of moral dilemmas, of course.

Defenders of dilemmas need not deny all of the pertinent principles. If one thinks that each of the principles at least has some initial plausibility, then one will be inclined to retain as many as possible. Among the earlier contributors to this debate, some took the existence of dilemmas as a counterexample to ‘ought’ implies ‘can’ (for example, Lemmon 1962 and Trigg 1971); others, as a refutation of the agglomeration principle (for example, Williams 1965 and van Fraassen 1973). A common response to the first argument is to deny PD. A more complicated response is to grant that the crucial deontic principles hold, but only in ideal worlds. In the real world, they have heuristic value, bidding agents in conflict cases to look for permissible options, though none may exist (Holbo 2002, especially sections 15–17).

Friends and foes of dilemmas have a burden to bear in responding to the two arguments above. For there is at least a prima facie plausibility to the claim that there are moral dilemmas and to the claim that the relevant principles in the two arguments are true. Thus each side must at least give reasons for denying the pertinent claims in question. Opponents of dilemmas must say something in response to the positive arguments that are given for the reality of such conflicts. One reason in support of dilemmas, as noted above, is simply pointing to examples. The case of Sartre’s student and that from Sophie’s Choice are good ones; and clearly these can be multiplied indefinitely. It will be tempting for supporters of dilemmas to say to opponents, “If this is not a real dilemma, then tell me what the agent ought to do and why.” It is obvious, however, that attempting to answer such questions is fruitless, and for at least two reasons. First, any answer given to the question is likely to be controversial, certainly not always convincing. And second, this is a game that will never end; example after example can be produced. The more appropriate response on the part of foes of dilemmas is to deny that they need to answer the question. Examples as such cannot establish the reality of dilemmas. Surely most will acknowledge that there are situations in which an agent does not know what he ought to do. This may be because of factual uncertainty, uncertainty about the consequences, uncertainty about what principles apply, or a host of other things. So for any given case, the mere fact that one does not know which of two (or more) conflicting obligations prevails does not show that none does.

Another reason in support of dilemmas to which opponents must respond is the point about symmetry. As the cases from Plato and Sartre show, moral rules can conflict. But opponents of dilemmas can argue that in such cases one rule overrides the other. Most will grant this in the Platonic case, and opponents of dilemmas will try to extend this point to all cases. But the hardest case for opponents is the symmetrical one, where the same precept generates the conflicting requirements. The case from Sophie’s Choice is of this sort. It makes no sense to say that a rule or principle overrides itself. So what do opponents of dilemmas say here? They are apt to argue that the pertinent, all-things-considered requirement in such a case is disjunctive: Sophie should act to save one or the other of her children, since that is the best that she can do (for example, Zimmerman 1996, Chapter 7). Such a move need not be ad hoc, since in many cases it is quite natural. If an agent can afford to make a meaningful contribution to only one charity, the fact that there are several worthwhile candidates does not prompt many to say that the agent will fail morally no matter what he does. Nearly all of us think that he should give to one or the other of the worthy candidates. Similarly, if two people are drowning and an agent is situated so that she can save either of the two but only one, few say that she is doing wrong no matter which person she saves. Positing a disjunctive requirement in these cases seems perfectly natural, and so such a move is available to opponents of dilemmas as a response to symmetrical cases.

Supporters of dilemmas have a burden to bear too. They need to cast doubt on the adequacy of the pertinent principles in the two arguments that generate inconsistencies. And most importantly, they need to provide independent reasons for doubting whichever of the principles they reject. If they have no reason other than cases of putative dilemmas for denying the principles in question, then we have a mere standoff. Of the principles in question, the most commonly questioned on independent grounds are the principle that ‘ought’ implies ‘can’ and PD. Among supporters of dilemmas, Walter Sinnott-Armstrong (Sinnott-Armstrong 1988, Chapters 4 and 5) has gone to the greatest lengths to provide independent reasons for questioning some of the relevant principles.

## 6. Moral Residue and Dilemmas

One well-known argument for the reality of moral dilemmas has not been discussed yet. This argument might be called “phenomenological.” It appeals to the emotions that agents facing conflicts experience and our assessment of those emotions.

Return to the case of Sartre’s student. Suppose that he joins the Free French forces. It is likely that he will experience remorse or guilt for having abandoned his mother. And not only will he experience these emotions, this moral residue, but it is appropriate that he does. Yet, had he stayed with his mother and not joined the Free French forces, he also would have appropriately experienced remorse or guilt. But either remorse or guilt is appropriate only if the agent properly believes that he has done something wrong (or failed to do something that he was all-things-considered required to do). Since no matter what the agent does he will appropriately experience remorse or guilt, then no matter what he does he will have done something wrong. Thus, the agent faces a genuine moral dilemma. (The best known proponents of arguments for dilemmas that appeal to moral residue are Williams 1965 and Marcus 1980; for a more recent contribution, see Tessman 2015, especially Chapter 2.)

Many cases of moral conflict are similar to Sartre’s example with regard to the agent’s reaction after acting. Certainly the case from Sophie’s Choice fits here. No matter which of her children Sophie saves, she will experience enormous guilt for the consequences of that choice. Indeed, if Sophie did not experience such guilt, we would think that there was something morally wrong with her. In these cases, proponents of the argument (for dilemmas) from moral residue must claim that four things are true: (1) when the agent acts, she experiences remorse or guilt; (2) that she experiences these emotions is appropriate and called for; (3) had the agent acted on the other of the conflicting requirements, she would also have experienced remorse or guilt; and (4) in the latter case these emotions would have been equally appropriate and called for (McConnell 1996, pp. 37–38). In these situations, then, remorse or guilt will be appropriate no matter what the agent does, and these emotions are appropriate only when the agent has done something wrong. Therefore, these situations are genuinely dilemmatic and moral failure is inevitable for agents who face them.

There is much to say about the moral emotions and situations of moral conflict; the positions are varied and intricate. Without pretending to resolve all of the issues here, we can note that opponents of dilemmas have raised two different objections to the argument from moral residue. The first objection, in effect, suggests that the argument is question-begging (McConnell 1978 and Conee 1982); the second objection challenges the assumption that remorse and guilt are appropriate only when the agent has done wrong.

To explain the first objection, note that it is uncontroversial that some bad feeling or other is called for when an agent is in a situation like that of Sartre’s student or Sophie. But the negative moral emotions are not limited to remorse and guilt. Among these other emotions, consider regret. An agent can appropriately experience regret even when she does not believe that she has done something wrong. For example, a parent may appropriately regret that she must punish her child even though she correctly believes that the punishment is deserved. Her regret is appropriate because a bad state of affairs is brought into existence (say, the child’s discomfort), even when bringing this state of affairs into existence is morally required. Regret can even be appropriate when a person has no causal connection at all with the bad state of affairs. It is appropriate for me to regret the damage that a recent fire has caused to my neighbor’s house, the pain that severe birth defects cause in infants, and the suffering that a starving animal experiences in the wilderness. Not only is it appropriate that I experience regret in these cases, but I would probably be regarded as morally lacking if I did not. (For accounts of moral remainders as they relate specifically to Kantianism and virtue ethics, see, respectively, Hill 1996, 183–187 and Hursthouse 1999, 44–48 and 68–77.)

With remorse or guilt, at least two components are present: the experiential component, namely, the negative feeling that the agent has; and the cognitive component, namely, the belief that the agent has done something wrong and takes responsibility for it. Although this same cognitive component is not part of regret, the negative feeling is. And the experiential component alone cannot serve as a gauge to distinguish regret from remorse, for regret can range from mild to intense, and so can remorse. In part, what distinguishes the two is the cognitive component. But now when we examine the case of an alleged dilemma, such as that of Sartre’s student, it is question-begging to assert that it is appropriate for him to experience remorse no matter what he does. No doubt, it is appropriate for him to experience some negative feeling. To say, however, that it is remorse that is called for is to assume that the agent appropriately believes that he has done something wrong. Since regret is warranted even in the absence of such a belief, to assume that remorse is appropriate is to assume, not argue, that the agent’s situation is genuinely dilemmatic. Opponents of dilemmas can say that one of the requirements overrides the other, or that the agent faces a disjunctive requirement, and that regret is appropriate because even when he does what he ought to do, some bad will ensue. Either side, then, can account for the appropriateness of some negative moral emotion. To get more specific, however, requires more than is warranted by the present argument. This appeal to moral residue, then, does not by itself establish the reality of moral dilemmas.

Matters are even more complicated, though, as the second objection to the argument from moral residue shows. The residues contemplated by proponents of the argument are diverse, ranging from guilt or remorse to a belief that the agent ought to apologize or compensate persons who were negatively impacted by the fact that he did not satisfy one of the conflicting obligations. The argument assumes that experiencing remorse or guilt or believing that one ought to apologize or compensate another are appropriate responses only if the agent believes that he has done something wrong. But this assumption is debatable, for multiple reasons.

First, even when one obligation clearly overrides another in a conflict case, it is often appropriate to apologize to or to explain oneself to any disadvantaged parties. Ross provides such a case (1930, 28): one who breaks a relatively trivial promise in order to assist someone in need should in some way make it up to the promisee. Even though the agent did no wrong, the additional actions promote important moral values (McConnell 1996, 42–44).

Second, as Simon Blackburn argues, compensation or its like may be called for even when there was no moral conflict at all (Blackburn 1996, 135–136). If a coach rightly selected Agnes for the team rather than Belinda, she still is likely to talk to Belinda, encourage her efforts, and offer tips for improving. This kind of “making up” is just basic decency.

Third, the consequences of what one has done may be so horrible as to make guilt inevitable. Consider the case of a middle-aged man, Bill, and a seven-year-old boy, Johnny. It is set in a midwestern village on a snowy December day. Johnny and several of his friends are riding their sleds down a narrow, seldom used street, one that intersects with a busier, although still not heavily traveled, street. Johnny, in his enthusiasm for sledding, is not being very careful. During his final ride he skidded under an automobile passing through the intersection and was killed instantly. The car was driven by Bill. Bill was driving safely, had the right of way, and was not exceeding the speed limit. Moreover, given the physical arrangement, it would have been impossible for Bill to have seen Johnny coming. Bill was not at fault, legally or morally, for Johnny’s death. Yet Bill experienced what can best be described as remorse or guilt about his role in this horrible event (McConnell 1996, 39).

At one level, Bill’s feelings of remorse or guilt are not warranted. Bill did nothing wrong. Certainly Bill does not deserve to feel guilt (Dahl 1996, 95–96). A friend might even recommend that Bill seek therapy. But this is not all there is to say. Most of us understand Bill’s response. From Bill’s point of view, the response is not inappropriate, not irrational, not uncalled-for. To see this, imagine that Bill had had a very different response. Suppose that Bill had said, “I regret Johnny’s death. It is a terrible thing. But it certainly was not my fault. I have nothing to feel guilty about and I don’t owe his parents any apologies.” Even if Bill is correct intellectually, it is hard to imagine someone being able to achieve this sort of objectivity about his own behavior. When human beings have caused great harm, it is natural for them to wonder if they are at fault, even if to outsiders it is obvious that they bear no moral responsibility for the damage. Human beings are not so finely tuned emotionally that when they have been causally responsible for harm, they can easily turn guilt on or off depending on their degree of moral responsibility. (See Zimmerman 1988, 134–135.)

Work in moral psychology can help to explain why self-directed moral emotions like guilt or remorse are natural when an agent has acted contrary to a moral norm, whether justifiably or not. Many moral psychologists describe dual processes in humans for arriving at moral judgments (see, for example, Greene 2013, especially Chapters 4–5, and Haidt 2012, especially Chapter 2). Moral emotions are automatic, the brain’s immediate response to a situation. Reason is more like the brain’s manual mode, employed when automatic settings are insufficient, such as when norms conflict. Moral emotions are likely the product of evolution, reinforcing conduct that promotes social harmony and disapproving actions that thwart that end. If this is correct, then negative moral emotions are apt to be experienced, to some extent, any time an agent’s actions are contrary to what is normally a moral requirement.

So both supporters and opponents of moral dilemmas can give an account of why agents who face moral conflicts appropriately experience negative moral emotions. But there is a complex array of issues concerning the relationship between ethical conflicts and moral emotions, and only book-length discussions can do them justice. (See Greenspan 1995 and Tessman 2015.)

## 7. Types of Moral Dilemmas

In the literature on moral dilemmas, it is common to draw distinctions among various types of dilemmas. Only some of these distinctions will be mentioned here. It is worth noting that both supporters and opponents of dilemmas tend to draw some, if not all, of these distinctions. And in most cases the motivation for doing so is clear. Supporters of dilemmas may draw a distinction between dilemmas of type $$V$$ and $$W$$. The upshot is typically a message to opponents of dilemmas: “You think that all moral conflicts are resolvable. And that is understandable, because conflicts of type $$V$$ are resolvable. But conflicts of type $$W$$ are not resolvable. Thus, contrary to your view, there are some genuine moral dilemmas.” By the same token, opponents of dilemmas may draw a distinction between dilemmas of type $$X$$ and $$Y$$. And their message to supporters of dilemmas is this: “You think that there are genuine moral dilemmas, and given certain facts, it is understandable why this appears to be the case. But if you draw a distinction between conflicts of types $$X$$ and $$Y$$, you can see that appearances can be explained by the existence of type $$X$$ alone, and type $$X$$ conflicts are not genuine dilemmas.” With this in mind, let us note a few of the distinctions.

One distinction is between epistemic conflicts and ontological conflicts. (For different terminology, see Blackburn 1996, 127–128.) The former are conflicts between two (or more) moral requirements in which the agent does not know which of the conflicting requirements takes precedence in her situation. Everyone concedes that there can be situations where one requirement does take priority over the other with which it conflicts, though at the time action is called for it is difficult for the agent to tell which requirement prevails. The latter are conflicts between two (or more) moral requirements in which neither is overridden. This is not simply because the agent does not know which requirement is stronger; neither is. Genuine moral dilemmas, if there are any, are ontological. Both opponents and supporters of dilemmas acknowledge that there are epistemic conflicts.

There can be genuine moral dilemmas only if neither of the conflicting requirements is overridden. Ross (1930, Chapter 2) held that all moral precepts can be overridden in particular circumstances. This provides an inviting framework for opponents of dilemmas to adopt. But if some moral requirements cannot be overridden—if they hold absolutely—then it will be easier for supporters of dilemmas to make their case. Lisa Tessman has distinguished between negotiable and non-negotiable moral requirements (Tessman 2015, especially Chapters 1 and 3). The former, if not satisfied, can be adequately compensated or counterbalanced by some other good. Non-negotiable moral requirements, however, if violated produce a cost that no one should have to bear; such a violation cannot be counterbalanced by any benefits. If non-negotiable moral requirements can conflict—and Tessman argues that they can—then those situations will be genuine dilemmas and agents facing them will inevitably fail morally. It might seem that if there is more than one moral precept that holds absolutely, then moral dilemmas must be possible. Alan Donagan, however, argues against this. He maintains that moral rules hold absolutely, and apparent exceptions are accounted for because tacit conditions are built into each moral rule (Donagan 1977, Chapters 3 and 6, especially 92–93). So even if some moral requirements cannot be overridden, the existence of dilemmas may still be an open question.

Another distinction is between self-imposed moral dilemmas and dilemmas imposed on an agent by the world, as it were. Conflicts of the former sort arise because of the agent’s own wrongdoing (Aquinas; Donagan 1977, 1984; and McConnell 1978). If an agent made two promises that he knew conflicted, then through his own actions he created a situation in which it is not possible for him to discharge both of his requirements. Dilemmas imposed on the agent by the world, by contrast, do not arise because of the agent’s wrongdoing. The case of Sartre’s student is an example, as is the case from Sophie’s Choice. For supporters of dilemmas, this distinction is not all that important. But among opponents of dilemmas, there is a disagreement about whether the distinction is important. Some of these opponents hold that self-imposed dilemmas are possible, but that their existence does not point to any deep flaws in moral theory (Donagan 1977, Chapter 5). Moral theory tells agents how they ought to behave; but if agents violate moral norms, of course things can go askew. Other opponents deny that even self-imposed dilemmas are possible. They argue that an adequate moral theory should tell agents what they ought to do in their current circumstances, regardless of how those circumstances arose. As Hill puts it, “[M]orality acknowledges that human beings are imperfect and often guilty, but it calls upon each at every new moment of moral deliberation to decide conscientiously and to act rightly from that point on” (Hill 1996, 176). Given the prevalence of wrongdoing, if a moral theory did not issue uniquely action-guiding “contrary-to-duty imperatives,” its practical import would be limited.

Yet another distinction is between obligation dilemmas and prohibition dilemmas. The former are situations in which more than one feasible action is obligatory. The latter involve cases in which all feasible actions are forbidden. Some (especially Vallentyne 1987 and 1989) argue that plausible principles of deontic logic may well render obligation dilemmas impossible; but they do not preclude the possibility of prohibition dilemmas. The case of Sartre’s student, if genuinely dilemmatic, is an obligation dilemma; Sophie’s case is a prohibition dilemma. There is another reason that friends of dilemmas emphasize this distinction. Some think that the “disjunctive solution” used by opponents of dilemmas—when equally strong precepts conflict, the agent is required to act on one or the other—is more plausible when applied to obligation dilemmas than when applied to prohibition dilemmas.
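The contrast can be made explicit in the notation of standard deontic logic, with $$O$$ read as ‘it is obligatory that’ and $$\Diamond$$ as ‘it is possible that’; the symbolization below is a schematic sketch, not drawn from Vallentyne or the other works cited. An obligation dilemma has the form

$$OA \wedge OB \wedge \neg\Diamond(A \wedge B),$$

where each of $$A$$ and $$B$$ is separately obligatory though they cannot both be performed. A prohibition dilemma, by contrast, has the form

$$O\neg A_{1} \wedge \ldots \wedge O\neg A_{n} \wedge \neg\Diamond\neg(A_{1} \vee \ldots \vee A_{n}),$$

where $$A_{1}, \ldots, A_{n}$$ exhaust the agent’s feasible options: each is forbidden, yet she cannot avoid performing one of them.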

As moral dilemmas are typically described, they involve a single agent. The agent ought, all things considered, to do $$A$$, ought, all things considered, to do $$B$$, and she cannot do both $$A$$ and $$B$$. But we can distinguish multi-person dilemmas from single-agent ones. The two-person case is representative of multi-person dilemmas. The situation is such that one agent, P1, ought to do $$A$$, a second agent, P2, ought to do $$B$$, and though each agent can do what he ought to do, it is not possible both for P1 to do $$A$$ and P2 to do $$B$$. (See Marcus 1980, 122 and McConnell 1988.) Multi-person dilemmas have been called “interpersonal moral conflicts.” Such conflicts are most theoretically worrisome if the same moral system (or theory) generates the conflicting obligations for P1 and P2. A theory that precludes single-agent moral dilemmas remains uniquely action-guiding for each agent. But if that same theory does not preclude the possibility of interpersonal moral conflicts, not all agents will be able to succeed in discharging their obligations, no matter how well-motivated or how hard they try. For supporters of moral dilemmas, this distinction is not all that important. They no doubt welcome (theoretically) more types of dilemmas, since that may make their case more persuasive. But if they establish the reality of single-agent dilemmas, in one sense their work is done. For opponents of dilemmas, however, the distinction may be important. This is because at least some opponents believe that the conceptual argument against dilemmas applies principally to single-agent cases. It does so because the ought-to-do operator of deontic logic and the accompanying principles are properly understood to apply to entities who can make decisions. To be clear, this position does not preclude that collectives (such as businesses or nations) can have obligations.
But a necessary condition for this being the case is that there is (or should be) a central deliberative standpoint from which decisions are made. This condition is not satisfied when two otherwise unrelated agents happen to have obligations both of which cannot be discharged. Put simply, while an individual act involving one agent can be the object of choice, a compound act involving multiple agents is difficult so to conceive. (See Smith 1986 and Thomason 1981.) Erin Taylor (2011) has recently argued that neither universalizability nor the principle that ‘ought’ implies ‘can’ ensures that there will be no interpersonal moral conflicts (what she calls “irreconcilable differences”). These conflicts would raise no difficulties if morality required trying rather than acting, but such a view is not plausible. Still, moral theories should minimize cases of interpersonal conflict (Taylor 2011, 189–190). To the extent that the possibility of interpersonal moral conflicts raises an intramural dispute among opponents of dilemmas, that dispute concerns how to understand the principles of deontic logic and what can reasonably be demanded of moral theories.
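Schematically (again, a sketch in an assumed notation rather than one drawn from the cited works), a single-agent dilemma has the form

$$O_{P}A \wedge O_{P}B \wedge \neg\Diamond(A \wedge B),$$

whereas a two-person interpersonal conflict has the form

$$O_{P1}A \wedge O_{P2}B \wedge \Diamond A \wedge \Diamond B \wedge \neg\Diamond(A \wedge B).$$

In the second case each agent can discharge her own obligation taken singly; what is impossible is the compound act in which both are discharged. Since that compound act is not the object of either agent’s choice, the conceptual argument against dilemmas, which turns on principles governing what a single deliberating agent ought to do, arguably does not apply to it.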

## 8. Multiple Moralities

Another issue raised by the topic of moral dilemmas is the relationship among various parts of morality. Consider this distinction. General obligations are moral requirements that individuals have simply because they are moral agents. That agents are required not to kill, not to steal, and not to assault are examples of general obligations. Agency alone makes these precepts applicable to individuals. By contrast, role-related obligations are moral requirements that agents have in virtue of their role, occupation, or position in society. That lifeguards are required to save swimmers in distress is a role-related obligation. Another example, mentioned earlier, is the obligation of a defense attorney to hold in confidence the disclosures made by a client. These categories need not be exclusive. It is likely that anyone who is in a position to do so ought to save a drowning person. And if a person has particularly sensitive information about another, she should probably not reveal it to third parties regardless of how the information was obtained. But lifeguards have obligations to help swimmers in distress when most others do not because of their abilities and contractual commitments. And lawyers have special obligations of confidentiality to their clients because of implicit promises and the need to maintain trust.

General obligations and role-related obligations can, and sometimes do, conflict. If a defense attorney knows the whereabouts of a deceased body, she may have a general obligation to reveal this information to family members of the deceased. But if she obtained this information from her client, the role-related obligation of confidentiality prohibits her from sharing it with others. Supporters of dilemmas may regard conflicts of this sort as just another confirmation of their thesis. Opponents of dilemmas will have to hold that one of the conflicting obligations takes priority. The latter task could be discharged if it were shown that one of these two types of obligations always prevails over the other. But such a claim is implausible; for it seems that in some cases of conflict general obligations are stronger, while in other cases role-related duties take priority. The case seems to be made even better for supporters of dilemmas, and worse for opponents, when we consider that the same agent can occupy multiple roles that create conflicting requirements. The physician, Harvey Kelekian, in Margaret Edson’s (1999/1993) Pulitzer Prize winning play, Wit, is an oncologist, a medical researcher, and a teacher of residents. The obligations generated by those roles lead Dr. Kelekian to treat his patient, Vivian Bearing, in ways that seem morally questionable (McConnell 2009). At first blush, anyway, it does not seem possible for Kelekian to discharge all of the obligations associated with these various roles.

In the context of issues raised by the possibility of moral dilemmas, the role most frequently discussed is that of the political actor. Michael Walzer (1973) claims that the political ruler, qua political ruler, ought to do what is best for the state; that is his principal role-related obligation. But he also ought to abide by the general obligations incumbent on all. Sometimes the political actor’s role-related obligations require him to do evil—that is, to violate some general obligations. Among the examples given by Walzer are making a deal with a dishonest ward boss (necessary to get elected so that he can do good) and authorizing the torture of a person in order to uncover a plot to bomb a public building. Since each of these requirements is binding, Walzer believes that the politician faces a genuine moral dilemma, though, strangely, he also thinks that the politician should choose the good of the community rather than abide by the general moral norms. (The issue here is whether supporters of dilemmas can meaningfully talk about action-guidance in genuinely dilemmatic situations. For one who answers this in the affirmative, see Tessman 2015, especially Chapter 5.) Such a situation is sometimes called “the dirty hands problem.” The expression, “dirty hands,” is taken from the title of a play by Sartre (1946). The idea is that no one can rule without becoming morally tainted. The role itself is fraught with moral dilemmas. This topic has received much attention recently. John Parrish (2007) has provided a detailed history of how philosophers from Plato to Adam Smith have dealt with the issue. And C.A.J. Coady (2008) has suggested that this reveals a “messy morality.”

For opponents of moral dilemmas, the problem of dirty hands represents both a challenge and an opportunity. The challenge is to show how conflicts between general obligations and role-related obligations, and those among the various role-related obligations, can be resolved in a principled way. The opportunity for theories that purport to have the resources to eliminate dilemmas—such as Kantianism, utilitarianism, and intuitionism—is to show how the many moralities under which people are governed are related.

## 9. Conclusion

Debates about moral dilemmas have been extensive during the last six decades. These debates go to the heart of moral theory. Both supporters and opponents of moral dilemmas have major burdens to bear. Opponents of dilemmas must show why appearances are deceiving. Why are examples of apparent dilemmas misleading? Why are certain moral emotions appropriate if the agent has done no wrong? Supporters must show why several of many apparently plausible principles should be given up—principles such as PC, PD, OP, D, ‘ought’ implies ‘can’, and the agglomeration principle. And each side must provide a general account of obligations, explaining whether none, some, or all can be overridden in particular circumstances. Much progress has been made, but the debate is apt to continue.

## Bibliography

### Cited Works

• Aquinas, St. Thomas, Summa Theologiae, Thomas Gilby et al. (trans.), New York: McGraw-Hill, 1964–1975.
• Blackburn, Simon, 1996, “Dilemmas: Dithering, Plumping, and Grief,” in Mason (1996): 127–139.
• Brink, David, 1994, “Moral Conflict and Its Structure,” The Philosophical Review, 103: 215–247; reprinted in Mason (1996): 102–126.
• Coady, C.A.J., 2008, Messy Morality: The Challenge of Politics, New York: Oxford University Press.
• Conee, Earl, 1982, “Against Moral Dilemmas,” The Philosophical Review, 91: 87–97; reprinted in Gowans (1987): 239–249.
• Dahl, Norman O., 1996, “Morality, Moral Dilemmas, and Moral Requirements,” in Mason (1996): 86–101.
• Donagan, Alan, 1977, The Theory of Morality, Chicago: University of Chicago Press.
• –––, 1984, “Consistency in Rationalist Moral Systems,” The Journal of Philosophy, 81: 291–309; reprinted in Gowans (1987): 271–290.
• Edson, Margaret, 1999/1993, Wit, New York: Faber and Faber.
• Freedman, Monroe, 1975, Lawyers’ Ethics in an Adversary System, Indianapolis: Bobbs-Merrill.
• Gowans, Christopher W. (editor), 1987, Moral Dilemmas, New York: Oxford University Press.
• Greene, Joshua, 2013, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, New York: Penguin Books.
• Greenspan, Patricia S., 1983, “Moral Dilemmas and Guilt,” Philosophical Studies, 43: 117–125.
• –––, 1995, Practical Guilt: Moral Dilemmas, Emotions, and Social Norms, New York: Oxford University Press.
• Haidt, Jonathan, 2012, The Righteous Mind: Why Good People are Divided by Politics and Religion, New York: Pantheon.
• Hill, Thomas E., Jr., 1996, “Moral Dilemmas, Gaps, and Residues: A Kantian Perspective,” in Mason (1996): 167–198.
• Holbo, John, 2002, “Moral Dilemmas and the Logic of Obligation,” American Philosophical Quarterly, 39: 259–274.
• Hursthouse, Rosalind, 1999, On Virtue Ethics, New York: Oxford University Press.
• Kant, Immanuel, 1971/1797, The Doctrine of Virtue: Part II of the Metaphysics of Morals, trans. Mary J. Gregor, Philadelphia: University of Pennsylvania Press.
• Lemmon, E.J., 1962, “Moral Dilemmas,” The Philosophical Review, 70: 139–158; reprinted in Gowans (1987): 101–114.
• –––, 1965, “Deontic Logic and the Logic of Imperatives,” Logique et Analyse, 8: 39–71.
• Marcus, Ruth Barcan, 1980, “Moral Dilemmas and Consistency,” The Journal of Philosophy, 77: 121–136; reprinted in Gowans (1987): 188–204.
• Mason, H.E., (editor), 1996, Moral Dilemmas and Moral Theory, New York: Oxford University Press.
• McConnell, Terrance, 1978, “Moral Dilemmas and Consistency in Ethics,” Canadian Journal of Philosophy, 8: 269–287; reprinted in Gowans (1987): 154–173.
• –––, 1988, “Interpersonal Moral Conflicts,” American Philosophical Quarterly, 25: 25–35.
• –––, 1996, “Moral Residue and Dilemmas,” in Mason (1996): 36–47.
• –––, 2009, “Conflicting Role-Related Obligations in Wit,” in Sandra Shapshay (ed.), Bioethics at the Movies, Baltimore: Johns Hopkins University Press.
• Mill, John Stuart, 1979/1861, Utilitarianism, Indianapolis: Hackett Publishing.
• Parrish, John, 2007, Paradoxes of Political Ethics: From Dirty Hands to Invisible Hands, New York: Cambridge University Press.
• Plato, The Republic, trans. Paul Shorey, in The Collected Dialogues of Plato, E. Hamilton and H. Cairns (eds.), Princeton: Princeton University Press, 1930.
• Ross, W.D., 1930, The Right and the Good, Oxford: Oxford University Press.
• –––, 1939, The Foundations of Ethics, Oxford: Oxford University Press.
• Sartre, Jean-Paul, 1957/1946, “Existentialism is a Humanism,” trans. Philip Mairet, in Walter Kaufmann (ed.), Existentialism from Dostoevsky to Sartre, New York: Meridian, 287–311.
• –––, 1946, “Dirty Hands,” in No Exit and Three Other Plays, New York: Vintage Books.
• Sinnott-Armstrong, Walter, 1988, Moral Dilemmas, Oxford: Basil Blackwell.
• Smith, Holly M., 1986, “Moral Realism, Moral Conflict, and Compound Acts,” The Journal of Philosophy, 83: 341–345.
• Styron, William, 1980, Sophie’s Choice, New York: Bantam Books.
• Taylor, Erin, 2011, “Irreconcilable Differences,” American Philosophical Quarterly, 50: 181–192.
• Tessman, Lisa, 2015, Moral Failure: On the Impossible Demands of Morality, New York: Oxford University Press.
• Thomason, Richmond, 1981, “Deontic Logic as Founded on Tense Logic,” in Risto Hilpinen (ed.), New Studies in Deontic Logic, Dordrecht: Reidel, 165–176.
• Trigg, Roger, 1971, “Moral Conflict,” Mind, 80: 41–55.
• Vallentyne, Peter, 1987, “Prohibition Dilemmas and Deontic Logic,” Logique et Analyse, 30: 113–122.
• –––, 1989, “Two Types of Moral Dilemmas,” Erkenntnis, 30: 301–318.
• Van Fraassen, Bas, 1973, “Values and the Heart’s Command,” The Journal of Philosophy, 70: 5–19; reprinted in Gowans (1987): 138–153.
• Walzer, Michael, 1973, “Political Action: The Problem of Dirty Hands,” Philosophy and Public Affairs, 2: 160–180.
• Williams, Bernard, 1965, “Ethical Consistency,” Proceedings of the Aristotelian Society (Supplement), 39: 103–124; reprinted in Gowans (1987): 115–137.
• Zimmerman, Michael J., 1988, An Essay on Moral Responsibility, Totowa, NJ: Rowman and Littlefield.
• –––, 1996, The Concept of Moral Obligation, New York: Cambridge University Press.

### Other Worthwhile Readings

• Anderson, Lyle V., 1985, “Moral Dilemmas, Deliberation, and Choice,” The Journal of Philosophy, 82: 139–162.
• Atkinson, R.F., 1965, “Consistency in Ethics,” Proceedings of the Aristotelian Society (Supplement), 39: 125–138.
• Baumrin, Bernard H., and Peter Lupu, 1984, “A Common Occurrence: Conflicting Duties,” Metaphilosophy, 15: 77–90.
• Bradley, F. H., 1927, Ethical Studies, 2nd edition, Oxford: Oxford University Press.
• Brink, David, 1989, Moral Realism and the Foundations of Ethics, New York: Cambridge University Press.
• Bronaugh, Richard, 1975, “Utilitarian Alternatives,” Ethics, 85: 175–178.
• Carey, Toni Vogel, 1985, “What Conflict of Duty is Not,” Pacific Philosophical Quarterly, 66: 204–215.
• Castañeda, Hector-Neri, 1974, The Structure of Morality, Springfield, IL: Charles C. Thomas.
• –––, 1978, “Conflicts of Duties and Morality,” Philosophy and Phenomenological Research, 38: 564–574.
• Chisholm, Roderick M., 1963, “Contrary-to-Duty Imperatives and Deontic Logic,” Analysis, 24: 33–36.
• Conee, Earl, 1989, “Why Moral Dilemmas are Impossible,” American Philosophical Quarterly, 26(2): 133–141.
• Dahl, Norman O., 1974, “‘Ought’ Implies ‘Can’ and Deontic Logic,” Philosophia, 4: 485–511.
• DeCew, Judith Wagner, 1990, “Moral Conflicts and Ethical Relativism,” Ethics, 101: 27–41.
• Donagan, Alan, 1993, “Moral Dilemmas, Genuine and Spurious: A Comparative Anatomy,” Ethics, 104: 7–21; reprinted in Mason (1996): 11–22.
• Feldman, Fred, 1986, Doing the Best We Can, Dordrecht: D. Reidel Publishing Co.
• Foot, Philippa, 1983, “Moral Realism and Moral Dilemma,” The Journal of Philosophy, 80: 379–398; reprinted in Gowans (1987): 271–290.
• Gewirth, Alan, 1978, Reason and Morality, Chicago: University of Chicago Press.
• Goldman, Holly Smith, 1976, “Dated Rightness and Moral Imperfection,” The Philosophical Review, 85: 449–487. [See also Holly Smith.]
• Gowans, Christopher W., 1989, “Moral Dilemmas and Prescriptivism,” American Philosophical Quarterly, 26: 187–197.
• –––, 1994, Innocence Lost: An Examination of Inescapable Wrongdoing, New York: Oxford University Press.
• –––, 1996, “Moral Theory, Moral Dilemmas, and Moral Responsibility,” in Mason (1996): 199–215.
• Griffin, James, 1977, “Are There Incommensurable Values?” Philosophy and Public Affairs, 7: 39–59.
• Guttenplan, Samuel, 1979–80, “Moral Realism and Moral Dilemma,” Proceedings of the Aristotelian Society, 80: 61–80.
• Hansson, Sven O., 1998, “Should We Avoid Moral Dilemmas?,” Journal of Value Inquiry, 32: 407–416.
• Hare, R.M., 1952, The Language of Morals, Oxford: Oxford University Press.
• –––, 1963, Freedom and Reason, Oxford: Oxford University Press.
• –––, 1981, Moral Thinking: Its Levels, Methods, and Point, Oxford: Oxford University Press.
• Hill, Thomas E., Jr, 1983, “Moral Purity and the Lesser Evil,” The Monist, 66: 213–232.
• –––, 1992, “A Kantian Perspective on Moral Rules,” Philosophical Perspectives, 6: 285–304.
• Hoag, Robert W., 1983, “Mill on Conflicting Moral Obligations,” Analysis, 43: 49–54.
• Howard, Kenneth W., 1977, “Must Public Hands Be Dirty?” The Journal of Value Inquiry, 11: 29–40.
• Kant, Immanuel, 1965/1797, The Metaphysical Elements of Justice: Part I of the Metaphysics of Morals, trans. John Ladd, Indianapolis: Bobbs-Merrill.
• Kolenda, Konstantin, 1975, “Moral Conflict and Universalizability,” Philosophy, 50: 460–465.
• Ladd, John, 1958, “Remarks on Conflict of Obligations,” The Journal of Philosophy, 55: 811–819.
• Lebus, Bruce, 1990, “Moral Dilemmas: Why They Are Hard to Solve,” Philosophical Investigations, 13: 110–125.
• MacIntyre, Alasdair, 1990, “Moral Dilemmas,” Philosophical and Phenomenological Research, 50: 367–382.
• Mallock, David, 1967, “Moral Dilemmas and Moral Failure,” Australasian Journal of Philosophy, 45: 159–178.
• Mann, William E., 1991, “Jephthah’s Plight: Moral Dilemmas and Theism,” Philosophical Perspectives, 5: 617–647.
• Marcus, Ruth Barcan, 1996, “More about Moral Dilemmas,” in Mason (1996): 23–35.
• Marino, Patricia, 2001, “Moral Dilemmas, Collective Responsibility, and Moral Progress,” Philosophical Studies, 104: 203–225.
• Mason, H.E., 1996, “Responsibilities and Principles: Reflections on the Sources of Moral Dilemmas,” in Mason (1996): 216–235.
• McConnell, Terrance, 1976, “Moral Dilemmas and Requiring the Impossible,” Philosophical Studies, 29: 409–413.
• –––, 1981, “Moral Absolutism and the Problem of Hard Cases,” Journal of Religious Ethics, 9: 286–297.
• –––, 1981, “Moral Blackmail,” Ethics, 91: 544–567.
• –––, 1981, “Utilitarianism and Conflict Resolution,” Logique et Analyse, 24: 245–257.
• –––, 1986, “More on Moral Dilemmas,” The Journal of Philosophy, 82: 345–351.
• –––, 1993, “Dilemmas and Incommensurateness,” The Journal of Value Inquiry, 27: 247–252.
• McDonald, Julie M., 1995, “The Presumption in Favor of Requirement Conflicts,” Journal of Social Philosophy, 26: 49–58.
• Mothersill, Mary, 1996, “The Moral Dilemmas Debate,” in Mason (1996): 66–85.
• Nagel, Thomas, 1972, “War and Massacre,” Philosophy and Public Affairs, 1: 123–144.
• –––, 1979, “The Fragmentation of Value,” in Mortal Questions, New York: Cambridge University Press; reprinted in Gowans (1987): 174–187.
• Nozick, Robert, 1968, “Moral Complications and Moral Structures,” Natural Law Forum, 13: 1–50.
• Paske, Gerald H., 1990, “Genuine Moral Dilemmas and the Containment of Incoherence,” The Journal of Value Inquiry, 24: 315–323.
• Pietroski, Paul, 1993, “Prima Facie Obligations, Ceteris Paribus Laws in Moral Theory,” Ethics, 103: 489–515.
• Price, Richard, 1974/1787, A Review of the Principal Questions of Morals, Oxford: Oxford University Press.
• Prior, A.N., 1954, “The Paradoxes of Derived Obligation,” Mind, 63: 64–65.
• Quinn, Philip, 1978, Divine Commands and Moral Requirements, New York: Oxford University Press.
• –––, 1986, “Moral Obligation, Religious Demand, and Practical Conflict,” in Robert Audi and William Wainwright (eds.), Rationality, Religious Belief, and Moral Commitment, Ithaca, NY: Cornell University Press, 195–212.
• Rabinowicz, Wlodzimierz, 1978, “Utilitarianism and Conflicting Obligations,” Theoria, 44: 19–24.
• Rawls, John, 1971, A Theory of Justice, Cambridge: Harvard University Press.
• Railton, Peter, 1992, “Pluralism, Determinacy, and Dilemma,” Ethics, 102: 720–742.
• –––, 1996, “The Diversity of Moral Dilemma,” in Mason (1996): 140–166.
• Santurri, Edmund N., 1987, Perplexity in the Moral Life: Philosophical and Theological Considerations, Charlottesville, VA: University of Virginia Press.
• Sartorius, Rolf, 1975, Individual Conduct and Social Norms: A Utilitarian Account of Social Union and the Rule of Law, Encino, CA: Dickenson Publishing.
• Sayre-McCord, Geoffrey, 1986, “Deontic Logic and the Priority of Moral Theory,” Noûs, 20: 179–197.
• Sinnott-Armstrong, Walter, 1984, “‘Ought’ Conversationally Implies ‘Can’,” The Philosophical Review, 93: 249–261.
• –––, 1985, “Moral Dilemmas and Incomparability,” American Philosophical Quarterly, 22: 321–329.
• –––, 1987, “Moral Dilemmas and ‘Ought and Ought Not’,” Canadian Journal of Philosophy, 17: 127–139.
• –––, 1987, “Moral Realisms and Moral Dilemmas,” The Journal of Philosophy, 84: 263–276.
• –––, 1996, “Moral Dilemmas and Rights,” in Mason (1996): 48–65.
• Slote, Michael, 1985, “Utilitarianism, Moral Dilemmas, and Moral Cost,” American Philosophical Quarterly, 22: 161–168.
• Statman, Daniel, 1996, “Hard Cases and Moral Dilemmas,” Law and Philosophy, 15: 117–148.
• Steiner, Hillel, 1973, “Moral Conflict and Prescriptivism,” Mind, 82: 586–591.
• Stocker, Michael, 1971, “‘Ought’ and ‘Can’,” Australasian Journal of Philosophy, 49: 303–316.
• –––, 1986, “Dirty Hands and Conflicts of Values and of Desires in Aristotle’s Ethics,” Pacific Philosophical Quarterly, 67: 36–61.
• –––, 1987, “Moral Conflicts: What They Are and What They Show,” Pacific Philosophical Quarterly, 68: 104–123.
• –––, 1990, Plural and Conflicting Values, New York: Oxford University Press.
• Strasser, Mark, 1987, “Guilt, Regret, and Prima Facie Duties,” The Southern Journal of Philosophy, 25: 133–146.
• Swank, Casey, 1985, “Reasons, Dilemmas, and the Logic of ‘Ought’,” Analysis, 45: 111–116.
• Tannsjo, Torbjorn, 1985, “Moral Conflict and Moral Realism,” The Journal of Philosophy, 82: 113–117.
• Thomason, Richmond, 1981, “Deontic Logic and the Role of Freedom in Moral Deliberation,” in Risto Hilpinen (ed.), New Studies in Deontic Logic, Dordrecht: Reidel, 177–186.
• Vallentyne, Peter, 1992, “Moral Dilemmas and Comparative Conceptions of Morality,” The Southern Journal of Philosophy, 30: 117–124.
• Williams, Bernard, 1966, “Consistency and Realism,” Proceedings of the Aristotelian Society (Supplement), 40: 1–22.
• –––, 1972, Morality: An Introduction to Ethics, New York: Harper & Row.
• Zimmerman, Michael J., 1987, “Remote Obligation,” American Philosophical Quarterly, 24: 199–205.
• –––, 1988, “Lapses and Dilemmas,” Philosophical Papers, 17: 103–112.
• –––, 1990, “Where Did I Go Wrong?” Philosophical Studies, 58: 83–106.
• –––, 1992, “Cooperation and Doing the Best One Can,” Philosophical Studies, 65: 283–304.
• –––, 1995, “Prima Facie Obligation and Doing the Best One Can,” Philosophical Studies, 78: 87–123.


## Other Internet Resources

[Please contact the author with suggestions.]

### Acknowledgments

I thank Michael Zimmerman for helpful comments on this essay.

This is a file in the archives of the Stanford Encyclopedia of Philosophy.