This is a file in the archives of the Stanford Encyclopedia of Philosophy.
n    P(n)      Prize    Expected payoff
1    1/2       $2       $1
2    1/4       $4       $1
3    1/8       $8       $1
4    1/16      $16      $1
5    1/32      $32      $1
6    1/64      $64      $1
7    1/128     $128     $1
8    1/256     $256     $1
9    1/512     $512     $1
10   1/1024    $1024    $1

The expected value of the game is the sum of the expected payoffs of all the consequences. Since the expected payoff of each possible consequence is $1, and there are an infinite number of them, this sum is an infinite number of dollars. A rational gambler would enter a game iff the price of entry was less than the expected value. In the St. Petersburg game, any finite price of entry is smaller than the expected value of the game. Thus, the rational gambler would play no matter how large the finite entry price was. But it seems obvious that some prices are too high for a rational agent to pay to play. Many commentators agree with Hacking's (1980) estimation that "few of us would pay even $25 to enter such a game." If this is correct, then something has gone wrong with the standard decision-theory calculations of expected value above. This problem, discovered by the Swiss eighteenth-century mathematician Daniel Bernoulli (1738; English trans. 1954), is the St. Petersburg paradox.
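The divergence can be checked directly. The following is a minimal sketch (the function name is our own, chosen for illustration): each outcome contributes exactly $1 to the expected value, so the partial sum after N terms is $N, and the series grows without bound.

```python
# Partial sums of the St. Petersburg game's expected value.
# Outcome n (game ends on flip n) has probability 2**-n and prize
# $2**n, so each term contributes exactly $1 and the partial sum
# after N terms is $N: the series diverges.

def expected_value_partial_sum(terms):
    """Sum of P(n) * prize(n) over the first `terms` outcomes."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, terms + 1))

for n_terms in (10, 100, 1000):
    print(n_terms, expected_value_partial_sum(n_terms))  # 10.0, 100.0, 1000.0
```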
Suppose that the utility of a dollar amount is measured by its logarithm (base 10), so that money has diminishing marginal utility. The table then becomes:

n    P(n)      Prize    Utiles   Expected utility
1    1/2       $2       0.301    0.1505
2    1/4       $4       0.602    0.1505
3    1/8       $8       0.903    0.1129
4    1/16      $16      1.204    0.0753
5    1/32      $32      1.505    0.0470
6    1/64      $64      1.806    0.0282
7    1/128     $128     2.107    0.0165
8    1/256     $256     2.408    0.0094
9    1/512     $512     2.709    0.0053
10   1/1024    $1024    3.010    0.0029

The sum of expected utilities is not infinite: it reaches a limit of about 0.60206 utiles (worth $4.00). The rational gambler, then, would pay any sum less than $4.00 to play.
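As a check on the convergence claim, here is a short sketch (the function name is ours) summing the expected-utility series under a log10 utility of dollars:

```python
import math

# Expected utility of the St. Petersburg game when the utility of $x
# is log10(x).  Term n is 2**-n * log10(2**n) = n * log10(2) / 2**n;
# the infinite sum is 2 * log10(2), about 0.60206 utiles, which is
# the utility of 10**0.60206 = $4.00.

def expected_utility(terms):
    return sum((0.5 ** n) * math.log10(2 ** n) for n in range(1, terms + 1))

total = expected_utility(60)      # converged to double precision by 60 terms
print(round(total, 5))            # 0.60206
print(round(10 ** total, 2))      # 4.0
```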
This response to the paradox is, however, unsatisfactory. Let us agree that money has a decreasing marginal utility, and accept (for the purposes of argument) that a reasonable calculation of the utility of any dollar amount takes the logarithm of the amount in dollars. The St. Petersburg game as proposed, then, presents no paradox, but it is easy to construct another St. Petersburg game which is paradoxical, merely by altering the dollar prizes. Suppose, for example, that instead of paying $2^n for a run of n, the prize were $10^(2^n). Here is the table for this game:
n    P(n)      Prize       Utiles of Prize   Expected utility
1    1/2       $10^2       2                 1
2    1/4       $10^4       4                 1
3    1/8       $10^8       8                 1
4    1/16      $10^16      16                1
5    1/32      $10^32      32                1
6    1/64      $10^64      64                1
7    1/128     $10^128     128               1
8    1/256     $10^256     256               1
9    1/512     $10^512     512               1
10   1/1024    $10^1024    1024              1

This version contains much larger prizes than the original version, and one would presumably be willing to pay more to play this version than the original. But the expected utility of this game - the sum of the infinite series of numbers in the last column - is infinite, and the paradox returns.
Of course, it is not clear how in fact dollar values relate to utility, but we can imagine a generalized paradoxical St. Petersburg game (suggested by Paul Weirich, 1984, following Menger, 1967) which offers prizes in utiles at the rate of 2^n utiles for a run of n, however that number of utiles is to be translated into dollars or other goods. This game would have infinite expected value, and the rational gambler should pay any amount, however large, to play. For simplicity, we shall ignore the generalized version of the game, and continue to discuss it in terms of the original dollar prizes, recognizing, however, that the diminishing marginal utility of dollars may make some revision of the prizes necessary to produce the paradoxical result.
This sort of reasoning is appealing, and may very well account for intuitions that agree with Hacking's. Many of us are risk-averse, and unwilling to gamble for a very small chance of a very large prize, because the chance is so small. Weirich claims that this sort of consideration in fact solves the St. Petersburg paradox. He offers a complicated way (which we need not go into here) of including a risk-aversion factor in our rational calculations, with the result that there is a finite upper limit to the rational entrance fee for the game.
But there are objections to this approach. For one thing, a factor for risk-aversion is not a generally applicable consideration in making rational decisions, because some people are not risk averse. In fact, some people may enjoy risk. What should we make, for example, of those people who routinely play state lotteries, or who gamble at pure games of chance in casinos? (In these games, the entry fee is greater than the expected utility.) It's possible to dismiss such behaviour as merely irrational, but sometimes these players offer the explanation that they enjoy the excitement of risk. In any case, it's not at all clear that risk-aversion can explain why the St. Petersburg game would be widely intuited to have a fairly small maximum rational entry fee, while so many people at the same time are not averse to the huge risk entailed by the very small expected probability of large prizes in lotteries.
But for the purposes of argument, let's assume that risk-aversion is what's responsible for the rational intuition that the appropriate entrance-fee for the St. Petersburg game is finite and small. But this will not make the paradox go away, for we can again adjust the prizes to take account of this risk-aversion.
Suppose you don't like to gamble, and wouldn't risk an entry fee in a game that offers a small possibility of a large prize, even when the odds were in your favour. For example, imagine that you were offered a lottery ticket costing $1, which gave you a one-in-ten chance at a prize of $20. Playing costs you utility, because you hate risk. But presumably, we could compensate you for this utility-loss by making the prize even bigger. Maybe you would invest $1 for a one-in-ten chance at making $100. If not, how about $1000? It appears that there is some prize large enough to compensate you for your risk aversion.
Now let's imagine that you consider the St. Petersburg game, and suppose that you're willing to pay an entrance fee of only $20 to play. The reason you're not willing to go higher is your risk-aversion. We can imagine that the increasing risk of large payments subtracts from their utility, and the result is that the last column contains numbers that decrease as probabilities shrink, and the sum of the last column reaches a limit - perhaps $25. But now, the game can be reformulated to repay you for the risk inherent in each outcome, by correspondingly increasing the prizes. For example, suppose we square each dollar-prize in compensation for the increasing risk - the lower probability - of the larger prizes. If this doesn't provide sufficient compensation for your risk-aversion, then we can make the prizes even higher. In any case, there seems to be some prize scheme huge enough to compensate you for your risk-aversion - one which makes the dollar utility of each prize minus its risk-factor equal 1 utile. A game with these larger prizes is again paradoxical.
But Weirich argues that offering increased prizes cannot sufficiently compensate for risk-aversion in such a way as to make the sum of the series unlimited. He appears to suggest that increasing the prize for an outcome may increase one's cost in terms of dread of risk. In the lottery example, then, increasing the prize to $1000 would correspondingly increase the risk for you, so you still wouldn't bet. No matter how high a prize you are offered, you still are unwilling to buy the ticket for $1, because the higher prizes raise the risk for you. Putting it picturesquely, he says, there is some number of birds in hand worth more than any number of birds in the bush.
But one might doubt that risk-aversion works this way - or, anyway, that this sort of risk-aversion can be justified as rational. It's highly implausible to claim that an increase in prize-size increases the risk of a game. In the lottery example, the only sort of risk-aversion that would make one refuse to play no matter how high the prize is pathological, not rational. There must be some prize which is so valuable to any rational but risk-averse person that the person would see it as compensating him for the risk of $1 (where that dollar has the usual small utility). If someone prefers $1 worth of birds in hand to any value of birds in the bush, then that person needs psychiatric help; this is not a rational decision strategy.
One might argue, however, for a risk-aversion factor such that increase in prizes makes certain games more attractive, but never attractive enough to override the risk factor. The most obvious way to take risk-aversion into consideration in calculating the utility of a game would be to add the negative utility of its risk to the positive utility of each prize, as if its risk were a negative aspect of the prize. On this way of calculating, it is always possible to compensate for risk by increasing the utility of the payoff. But Weirich's proposal appears to make risk a function of the whole gamble and of one's current utility level, in such a way that no addition to the prizes can make a game desirable to a risk-averse agent. This might seem implausible: if one's risk-aversion to a game is finite, and does not increase merely because of increase in payoffs, and if the utility of the prizes can be increased without limit, it would seem that some prize-increase can always compensate for the risk-aversion, however it is reasonably calculated. But if prize-increase, while increasing the expected utility (ignoring the risk-factor) of the whole game, nevertheless cannot make it sufficient to overcome the finite risk-aversion, then something else (such as the diminishing marginal utility of prizes, or an upper limit on their utility) is operating. These possibilities are discussed elsewhere in this article.
But the St. Petersburg game is supposed to justify even an enormously high entry price, so the lottery example is not precisely germane. Let's consider examples with a high entry price, for example, your entire life-savings. Would it be rational always to refuse to risk this, no matter what the gamble is? It doesn't seem so. When you deposit your life-savings in a solid bank for a year, you are in fact accepting a gamble. There's a very high probability of the consequence that at the end of the year, you can get your savings back with interest, but there's also an extremely low probability that the bank and the deposit insurance will both collapse, and you'll be wiped out. Someone who refused to run this very tiny risk no matter how high the interest and how low the probability of disaster is clearly irrational. Everyone who crosses a street is, in effect, gambling his life, because crossing a street increases, to a small extent, the risk of being run down and killed. But to refuse to cross any street on these grounds is irrational. This sort of risk-aversion, when generally applied, would paralyze anyone. It is central to rationality that one take account of the actual risks, and run suitably small ones.
The counter argument we have been considering is that risk-aversion is irrational when it refuses to gamble a small entry-price for no matter how high a prize, or when it refuses to gamble a large entry-price for no matter how high a probability of prize. But this does not answer all St. Petersburg objections, for here we imagine a gamble with a large entry-price and a small probability of large prizes. The most compelling examples of the rational unacceptability of risk no matter how high the prize, are the ones in which the entry price is high and the prize improbable. Imagine, for example, that you are risk averse, and are offered a gamble in which the entry price is your life-savings of (say) $100,000, and the chances of the prize are one-in-a-million. It seems rational to refuse, no matter how huge the prize. The reason for this is worth considering.
Classical decision theory says that, for this gamble to be rational, the prize must be enormous: in the imagined case, at least one million times the value of your life-savings - more than a hundred billion dollars. Compensating you for your risk-aversion by increasing the size of this prize makes it even larger - two hundred billion? a trillion? - so huge that our intuitions are inadequate to appreciate such a value. What is worth much more than a million times your life-savings? You don't know. Your intuitions boggle when considering this gamble. The diminishing marginal utility of money and of ordinary goods operates here as well. You might suppose that nothing would give you that much utility. But the facts that the world happens not to contain such huge utilities, or that one's intuitions get unreliable when considering them, are not difficulties with classical decision theory per se. More will be said about this below, when the argument will be proposed that these sorts of practical considerations don't show that there's something wrong with classical decision theory.
Let us, then, increase the prizes in the St. Petersburg game to compensate a rational potential player for his risk-aversion, and the game once again has infinite expected value for that person.
Suppose, on the other hand, that there is an upper limit on the utility that dollars can provide - say, that utility equals the dollar amount of a prize, up to a maximum of 100 utiles:

n    P(n)      Prize    Utiles of Prize   Expected utility
1    1/2       $2       2                 1
2    1/4       $4       4                 1
3    1/8       $8       8                 1
4    1/16      $16      16                1
5    1/32      $32      32                1
6    1/64      $64      64                1
7    1/128     $128     100               0.781
8    1/256     $256     100               0.391
9    1/512     $512     100               0.195
10   1/1024    $1024    100               0.098

The sum of the infinite series in the right-hand column reaches a limit of about 7.56, and the rational entry price is anything under $7.56.
The assumption that maximum utility is reached by any dollar-prize over $100 is implausible because it means that the value of $100, $1000, and $1,000,000 are all the same - the maximum. A more plausible point for maximum utility of dollars is much higher. Setting it at 16,000,000 makes the maximum rational entry price of the game close to $25, which is Hacking's guess at what our intuitions would accept.
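The arithmetic behind that figure can be sketched as follows (assuming, with the text, that the utility of a prize is simply its dollar amount up to the cap; the function name is ours):

```python
# Expected value of the game when the utility of a prize is its dollar
# amount capped at CAP.  Terms n = 1..23 each contribute $1 (2**23 is
# under the cap); every later term contributes CAP * 2**-n, so the
# series converges.  With CAP = 16,000,000 the total is close to $25,
# as the text says.

CAP = 16_000_000

def capped_expected_value(terms):
    return sum((0.5 ** n) * min(2 ** n, CAP) for n in range(1, terms + 1))

print(round(capped_expected_value(200), 2))   # about 24.91
```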
Some people think that it is reasonable to set an upper limit on utility. Russell Hardin (1982), for example, calls this assumption "compelling in its own right." William Gustason (1994) suggests that one restrict the expected value concept by stipulating that values of any consequence have an upper bound. Richard Jeffrey (1983) agrees.
But the idea of an upper limit on utility might not be seen to be compelling in its own right. Note that this idea must be distinguished from the diminishing marginal value of money. Perhaps you find it reasonable to think that, once one had (say) $16,000,000 in the bank, you'd be able to buy anything you could possibly want; but this is not to say that that sum of money provides the maximum possible utility. We can readily imagine someone with that amount of money - or any amount of money - still short of maximal utility, due to lack of certain goods that money can't buy. What the idea of an upper limit on utility means is that there is some amount of utility which is so high that no additional utility is possible - that nothing additional adds any value at all. Imagine someone with all the wealth he could use: still he might have unfulfilled desires, for example, that his friends and relations be as fortunate as he. If this desire were fulfilled, then he might still desire that strangers be as fortunate; and that there be more people on earth than there currently were, to share his happiness, and more populated planets full of happy people. How many more? Why, the more the better - indefinitely more. If there is an upper limit on utility, then there is some finite amount of utility which is maximally good, an amount for which one would rationally trade anything else. It doesn't appear plausible to think that there is any such amount.
One might imagine that some people have an upper limit on the utility they can enjoy - people who have a finite number of desires, and whose desires can each be completely satisfied by some finite state. For these people, the utility of prizes does not increase without limit, and the St. Petersburg game has some finite expected utility. Do such people exist? This is an empirical question. In any case, there surely are some people with the-more-the-better desires, and the theory of rational choice ought not to be restricted by the empirical and doubtful propositions that there aren't any, and that value cannot increase without limit. And these propositions are surely insufficiently well-founded to invoke to solve the St. Petersburg paradox.
Gustason says that "the upshot of the paradox is that if there is such a thing as an infinite value, then acts and consequences that involve it are beyond the scope of the expected value concept." Jeffrey states that the evaluation theory we are applying here has "from its inception...been closely tied to the notion that desirabilities are ... bounded." But the fact that the theory wasn't designed with such a result in mind is not a very good reason to try to resist its application in this case. The main reason both authors give for excluding unbounded-desirability games is that otherwise the St. Petersburg game has infinite expected utility. But this ad-hoc rationale is not compelling unless one can't bear this result. The acceptability of this result will be considered later.
Hardin offers the opinion that whether utility is bounded "is more a factual than a logical issue," and that its invocation to resolve the St. Petersburg paradox "is to grant that the paradox is not an antinomy." He may mean that the difficulty posed by the game is a result of a factual assumption that utility is unbounded (and not merely of its logical features), and can be removed by rejecting that assumption. But if one finds no difficulty posed by the game, one is not tempted to reject the assumption.
Gustason suggests that the expected value concept be restricted by requiring that either:

(a) Each act has only finitely many consequences, or

(b) Values must be bounded, i.e., there are numbers n and m such that no value to be assigned a consequence exceeds n or is less than m.

He points out that imposing either restriction will suffice to rule out the St. Petersburg result. If one resists imposing restriction (b), in this case, by setting an upper bound to the value of consequences, perhaps restriction (a) might be found plausible.
One way to impose restriction (a) is merely to insist that the St. Petersburg game fails to meet it, so its value, and fair entrance price, cannot be calculated using standard expected value theory. How then, if at all, can it be calculated? Where does the intuition that $25 is too much to pay come from (if anywhere)? How (if at all) can it be justified?
Another way is to assume that the way the game will take place is not exactly as described, and that there are some possible very long strings that would never be carried out - i.e., that there is only a finite number of prizes to be considered when calculating the expected value of the game. Presumably, this would be applied by setting some upper limit L to the number of flips which would be considered; after a run of L heads in a row, the game would be terminated and payment made for the run so far, despite the fact that tails hadn't yet come up. If L were set at 25, then the game would have an expected value of $25, and that would be the maximum entry price which a rational agent would pay to play (as in Hacking's intuition). Do we, perhaps unconsciously, assume that any run of 25 heads would be truncated, and paid off, at that point?
Many authors have pointed out that, practically speaking, there must be some point at which a run of heads would be truncated without a final tail. For one thing, the patience of the participants of the game would have to end somewhere. If you think that this sets too narrow a limit L, consider the considerably higher limit set by the life-spans of the participants, or the survival of the human race; or the limit imposed by the future time when the sun explodes, vaporizing the earth. Any of these limits produces a finite expected value for the game, but sets an L which is higher than 25; what, then, explains Hacking's $25 intuition?
Another fact that would set a limit on L is the finitude of the bankroll necessary to fund the game. Any casino that offers the game must be prepared to truncate any run that, were it to continue, would cost them more than the total funds they have available for prizes. A run of 25 would require a prize of a mere $33,554,432, possibly within the reach of a larger casino. A run of 40 would require a prize of about 1.1 trillion dollars.
Other facts make an upper limit L plausible, such as the limit on the amount of money available in the world. Perhaps all these financial limits can be overridden if we conceive of the game's being offered by a state capable of printing all the money it wanted to. This state could pay any prize whatever; but printing up and handing out a huge amount of cash would create havoc with any economy, so no rational state would.
Hardin claims that "the slightest bit of realism is sufficient to do in the St. Petersburg Paradox." But is the slightest bit of realism a justifiable consideration? The fact of which we are sure, that some upper limit on L, and thus a finite number of possible consequences of the game, would certainly be imposed, does not really solve the St. Petersburg problem because it does not show that the expected value of the game as described is not infinite. After all, any game with a limit L is not the game we have been talking about. Our question was about the St. Petersburg game, not about its relative.
One might argue: we are considering the St. Petersburg game, but under realistic conditions. Realistically, the game would be truncated, whether this is mentioned in its rules or not. Thus there is a finite and realistic price for entry. But if this is the case, why isn't the game offered, with an entry price somewhat above this (to produce a profit in the long run) by casinos (who after all, are quite realistic)?
Do these realistic considerations show that the genuine St. Petersburg game - exactly as originally described - can never be encountered in real life? Jeffrey says: "Put briefly and crudely, our rebuttal of the St. Petersburg paradox consists in the remark that anyone who offers to let the agent play the St. Petersburg game is a liar, for he is pretending to have an indefinitely large bank."
It can be quibbled that Jeffrey is not exactly right: someone can offer a game even though he is aware of the possibility that the offer will require consequences he cannot fulfill. Compare my offer to drive you to the airport tomorrow. I realise that there's a small possibility my car will break down between now and then, and thus that I'm making an offer I might not be able to fulfill. But the conclusion is not that I'm not really offering what I appear to be. Similarly, if someone invites you to play St. Petersburg, we can't conclude that he's in fact not offering the St. Petersburg game - that he's really offering some other game.
Real casinos right now play games that offer the extremely remote possibility of continuing too long for anyone to complete, or of prizes too large to be managed. Casinos can go ahead and play these games anyway, confident that the risk of running into an impossible situation is very very small. They need not lose any sleep worrying about incurring a debt they can't manage. They live, and prosper, on probabilities, not certainties.
If these considerations are persuasive, then what Jeffrey gives is not a rebuttal of the paradox. In effect, he accepts the fact that the game offers the possibility of indefinitely large payoffs. The reason the game is not offered by casinos is that they realise that sooner or later (probably much later) the game will bankrupt them. This is correct reasoning - but it is done using the ordinary, general theory of choice. When casinos reason about the game, they do not decide that, since ordinary theory shows that the game has infinite value, ordinary theory should be restricted to exclude its consideration.
There are other reasons why we should not restrict theory to exclude consideration of the game. This ruling, in order to be theoretically acceptable, ought not merely rule out the St. Petersburg game in particular, ad hoc; it ought to be general in scope. And if it is, it will also rule out perfectly acceptable calculations. Michael Resnik (1987) notes that utility theory "is easily extended to cover infinite lotteries, and it must be in order to handle more advanced problems in statistical decision theory" but he gives no examples.
Imagine a life insurance policy bought for a child at its birth, which pays to the child's estate, when the child eventually dies, $1000 for each birthday the child has passed, without limit. What price should an insurance company charge for this policy? (For simplicity, we shall ignore possible effects of inflation, and profits from investing the entry price.) Standard empirically-based mortality charts give the chances of living another year at various ages. Of course, they don't give the chances of surviving another year at age 140, because there's no empirical evidence available for this; but a reasonable function can be produced to extend the mortality curve indefinitely beyond what the available empirical evidence provides, and this curve asymptotically approaches zero. On this basis, ordinary mathematical techniques can give the expected value of the policy. But note that it promises to pay off without limit. If we think that, for each age, there is a (large or small) probability of living another year, then there is an indefinitely large number of consequences to be considered when doing this calculation. But mathematics can calculate the limit of this infinite series, and (ignoring other factors) an insurance company will make a profit, in the long run, by charging anything above this amount. There's no problem in calculating its expected value.
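As a sketch of how such a calculation might go (the survival curve below is a Gompertz-style function with invented parameters, chosen only for illustration - it is not real actuarial data):

```python
import math

# Expected value of the policy: $1000 per birthday passed, paid at
# death.  This equals 1000 * sum over k >= 1 of P(alive at birthday k),
# an infinite series.  The hypothetical Gompertz-style survival curve
# below (parameters invented for illustration) approaches zero fast
# enough that the series converges.

def survival_probability(age, a=0.0001, b=0.085):
    """P(alive at `age`) under an assumed Gompertz mortality hazard."""
    return math.exp(-(a / b) * (math.exp(b * age) - 1.0))

def policy_expected_value(payment=1000.0, max_age=300):
    # 300 terms is far past the point where terms become negligible
    return payment * sum(survival_probability(k) for k in range(1, max_age + 1))

print(round(policy_expected_value(), 2))
```

Truncating the sum at age 140 changes the result by a negligible amount, since survival_probability(140) is already astronomically small under any plausible curve.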
This insurance policy (call it Policy 1) offers an indefinite number of outcomes; but consider a different one (call it Policy 2) which would truncate the series at age 140, and offer only 140 outcomes. The probability of reaching age 140 is so tiny that the difference in expected value between the two policies is negligible, a tiny fraction of 1 cent. If you don't like infinite lotteries, you might claim that Policy 1 is ill-formed, and suggest substitution of Policy 2, pointing out that the expected value of this one is, for all practical purposes, equal to that of Policy 1. But note that your judgment that the two are virtually identical in expected value depends on your having calculated the expected value of Policy 1. So your statement presupposes that the expected value of Policy 1 is calculable, after all.
Imagine you were offered the following deal. For a price to be negotiated, you will be given permanent possession of a cash machine with the following unusual property: every time you punch in a dollar amount, that amount is extruded. This is not a withdrawal from your account; neither will you later be billed for it. You can do this as often as you care to. Now, how much would you offer to pay for this machine? Do you find it impossible to perform this thought-experiment, or to come up with an answer? Perhaps you don't, and your answer is: any price at all. Provided that you can defer payment for a suitable time after receiving the machine, you can collect whatever you need to pay for it from the machine itself.
Of course, there are practical considerations: how long would it take you to collect, say, a trillion dollars from the machine, if this were its price? Would you be worn out or dead by then? Any bank would be crazy to offer to sell you an infinite cash machine, and unfortunately I seem to have lost the address of the crazy bank which has made this offer. Anyway, there appears to be nothing wrong with this thought experiment: it imagines an action (buying the machine) with no upper limit on expected value. We easily ignore practical considerations when calculating the expected value (in this case, merely potential withdrawals minus purchase price), which is infinite.
Do your intuitions tell you to offer (say) $25 at most for this machine? I doubt that they do. But the only difference between this machine and a single-play St. Petersburg game is that the machine guarantees an indefinitely large number of payouts, while the game offers a one-time lottery from among an indefinitely large number of possible payouts, each with a certain probability. The difference between them is just the probability factor: the same difference that exists between a game which gives you a guaranteed prize of $5, and one which gives you half a chance of $10, and half a chance of $0. The expected values of the St. Petersburg game and the infinite cash machine are both indefinitely large. You should offer any price at all for either. It appears, then, that the notion of infinite expected value is perfectly reasonable.
In a sense, the counter-intuitiveness of the St. Petersburg result is a special case of a general and familiar objection to classical decision theory. Someone might object that it would be perfectly rational for you to prefer a guaranteed prize of $5 to a gamble which offers half a chance of $10, and half a chance of $0, despite the theory's claim that they have equal value. If you intuit that an infinite cash machine has infinite expected value, but the St. Petersburg game does not, you're probably relying on this more general objection to classical theory. Arguments can be made that your preference of the guaranteed prize to the gamble is irrational; or attempts can be made to "fix" the theory to account for the rationality of this preference (for example, by allowing adjustments for risk-aversion.) However this more general "problem" with classical theory is dealt with, it is not a problem with St. Petersburg alone, and making ad-hoc fixes to theory to rule out St. Petersburg will not help in dealing with this more general problem.
If you see standard theory as normative, you can ignore objections of the first type. People are not always rational, and some people are rarely rational, and an adequate descriptive theory must take into account the various irrational ways people really do make decisions. It's no surprise that the classical rather a-prioristic theory fails to be descriptively adequate, and to criticize it on these grounds rather misses its normative point.
The objections of the second type need to be taken more seriously; and we have been treating the responses to St. Petersburg as cases of this sort. Various sorts of "realistic" considerations have been adduced to show that the result the theory draws in the St. Petersburg game about what a rational agent should do is incorrect. It's concluded that the unrestricted theory must be wrong, and (sometimes) that it must be restricted to exclude consideration of the game as invented. We'll now consider the general plausibility of restricting the theory in these ways.
When considering the plausibility of restricting expected value calculations in various ways that would rule out the St. Petersburg calculation, Amos Nathan (1984) remarks, "it ought, however, to be remembered that important and less frivolous application of such games have nothing to do with gambling and lie in the physical world where practical limitations may assume quite a different dimension." Nathan doesn't mention any physical applications of analogous infinite value calculations. But it's nevertheless plausible to think that imposing restrictions on theory to rule out St. Petersburg bath water would throw out some valuable babies as well.
Any theoretical model is an idealization, leaving aside certain practicalities. "From the mathematical and logical point of view," observes Resnik, "the St. Petersburg paradox is impeccable." But this is the point of view to be taken when evaluating a theory per se (though not the only point of view ever to be taken). By analogy, the aesthetic evaluation of a movie does not take into account the facts that the only local showing of the movie is far away, and that finding a baby sitter will be impossible at this late hour. If aesthetic theory tells you that the movie is wonderful, but other considerations show you that you shouldn't go, this isn't a defect in aesthetic theory. Similarly, the mathematical/logical theory for explaining ordinary casino games is not defective because it ignores practicalities such as a particular limit on a casino's bankroll, or on participants' patience.
There are all sorts of practical considerations which must be considered in making a real gambling decision. For example, in deciding whether to raise, see, fold, or cash in and go home, in a particular poker game, you must consider not only probability and expected value, but also the facts that it's 5 A.M. and you are cross-eyed from fatigue and drink; but it's not expected that classical decision theory has to deal with these.
The St. Petersburg game commits participants to doing what we know they will not do. The casino may have to pay out more than it has. The player may have to flip a coin longer than is physically possible. But this may not show a defect in choice theory. Classical unrestricted theory is still serving its purpose, which is modeling the abstract ideal rational agent. It tells us that no amount is too great to pay as an ideally rational entrance fee, and this may be right. What it's reasonable for real agents - limited in time, patience, bankroll, and imaginative capacity - to do, given the constraints of the real casino, the real economy, and the real earth, is another matter, one that the theoretical core of decision theory can be forgiven for not specifying. From this point of view, the St. Petersburg paradox does not point out any defect in classical decision theory, and is not, after all, a paradox.
First published: November 4, 1998
Content last modified: June 14, 2001