# The St. Petersburg Paradox

*First published Wed Nov 4, 1998; substantive revision Mon Jun 16, 2008*

The St. Petersburg game is played by flipping a fair coin until it
comes up tails, and the total number of flips, *n*, determines
the prize, which equals $2^{n}. Thus if the coin
comes up tails the first time, the prize is $2^{1} = $2, and
the game ends. If the coin comes up heads the first time, it is
flipped again. If it comes up tails the second time, the prize is
$2^{2} = $4, and the game ends. If it comes up heads the
second time, it is flipped again. And so on. There are infinitely
many possible ‘consequences’ (runs of heads followed
by one tail). The probability of a consequence of *n*
flips (P(*n*)) is 1 divided by 2^{n}, and the
‘expected payoff’ of each consequence is the prize times
its probability. The following table lists these figures for the
consequences where *n* = 1 … 10:

| *n* | P(*n*) | Prize | Expected payoff |
|-----|--------|-------|-----------------|
| 1 | 1/2 | $2 | $1 |
| 2 | 1/4 | $4 | $1 |
| 3 | 1/8 | $8 | $1 |
| 4 | 1/16 | $16 | $1 |
| 5 | 1/32 | $32 | $1 |
| 6 | 1/64 | $64 | $1 |
| 7 | 1/128 | $128 | $1 |
| 8 | 1/256 | $256 | $1 |
| 9 | 1/512 | $512 | $1 |
| 10 | 1/1024 | $1024 | $1 |
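The table can be reproduced in a few lines of Python (a sketch; the function name `expected_payoff` is mine): each consequence contributes exactly $1 of expected payoff, so the partial sums grow without bound.

```python
from fractions import Fraction

# St. Petersburg game: P(n) = 1/2**n, prize = 2**n dollars, so every
# consequence contributes exactly $1 of expected payoff.
def expected_payoff(n):
    return Fraction(1, 2**n) * 2**n  # probability times prize

for n in range(1, 11):
    print(f"n={n:2d}  P=1/{2**n}  prize=${2**n}  expected payoff=${expected_payoff(n)}")

# The partial sums of expected payoffs grow without bound:
print(sum(expected_payoff(n) for n in range(1, 101)))  # 100 after 100 terms
```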

The ‘expected value’ of the game is the sum of the
expected payoffs of all the consequences. Since the expected payoff of
each possible consequence is $1, and there are an infinite number of
them, this sum is an infinite number of dollars. A rational gambler
would enter a game iff the price of entry was less than the expected
value. In the St. Petersburg game, any finite price of entry is
smaller than the expected value of the game. Thus, the rational
gambler would play no matter how large the finite entry price was. But
it seems obvious that some prices are too high for a rational agent to
pay to play. Many commentators agree with Hacking's (1980) estimation
that “few of us would pay even $25 to enter such a game.”
If this is correct—and if most of us are rational—then
something has gone wrong with the standard decision-theory
calculations of expected value above. This problem, discovered by the
eighteenth-century Swiss mathematician Daniel Bernoulli, is the
St. Petersburg paradox. It's called that because it was first
published by Bernoulli in the *St. Petersburg Academy
Proceedings* (1738; English trans. 1954).

- 1. Decreasing Marginal Utility
- 2. Risk-Aversion
- 3. An Upper Bound on Utility
- 4. Finitely Many Consequences
- 5. Infinite Value?
- 6. Theory and Practicality
- Bibliography
- Other Internet Resources
- Related Entries

## 1. Decreasing Marginal Utility

Bernoulli argued that the calculations leading to the paradox err by adding expected payoffs in money (dollars, in our version), whereas what should be added are the expected utilities of each consequence. The same paper in which he proposed this problem contains the first published exposition of the Principle of Decreasing Marginal Utility, which he developed to deal with St. Petersburg. This principle, later widely accepted in the theory of economic behavior, states that marginal utility (the extra utility obtained from consuming a good) decreases as the quantity consumed increases; in other words, that each additional good consumed is less satisfying than the previous one. He went on to suggest that a realistic measure of the utility of money might be given by the logarithm of the money amount. Here are the first few lines in the table for this gamble if utiles = log($):

| *n* | P(*n*) | Prize | Utiles | Expected utility |
|-----|--------|-------|--------|------------------|
| 1 | 1/2 | $2 | 0.301 | 0.1505 |
| 2 | 1/4 | $4 | 0.602 | 0.1505 |
| 3 | 1/8 | $8 | 0.903 | 0.1129 |
| 4 | 1/16 | $16 | 1.204 | 0.0753 |
| 5 | 1/32 | $32 | 1.505 | 0.0470 |
| 6 | 1/64 | $64 | 1.806 | 0.0282 |
| 7 | 1/128 | $128 | 2.107 | 0.0165 |
| 8 | 1/256 | $256 | 2.408 | 0.0094 |
| 9 | 1/512 | $512 | 2.709 | 0.0053 |
| 10 | 1/1024 | $1024 | 3.010 | 0.0029 |

The sum of expected utilities is not infinite: it reaches a limit of about 0.602 utiles (worth $4.00). The rational gambler, then, would pay any sum less than $4.00 to play.
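Bernoulli's convergence claim is easy to check numerically (a sketch; the function name is mine). With utiles = log10(dollars), the series sums to 2·log10(2) ≈ 0.602:

```python
import math

# Bernoulli's proposal: utiles = log10(dollars), so the expected utility
# of consequence n is log10(2**n) / 2**n = n * log10(2) / 2**n.
def expected_utility(n):
    return math.log10(2**n) / 2**n

total = sum(expected_utility(n) for n in range(1, 200))
print(total)       # approaches 2 * log10(2), about 0.602 utiles
print(10**total)   # the dollar equivalent, about $4.00
```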

Many have found this response to the paradox unsatisfactory. For one
thing, Bernoulli's association of utility with the logarithm of money
seems way off: $1024 seems clearly worth more than 10 times $2. But
this, it's argued, is not the main problem. Let us agree that money
has a decreasing marginal utility, and accept (for the purposes of
argument) that a reasonable calculation of the utility of any dollar
amount takes the logarithm of the amount in dollars. The St.
Petersburg game as proposed, then, presents no paradox, but it is easy
to construct another St. Petersburg game which is paradoxical, merely
by altering the dollar prizes. Suppose, for example, that instead of
paying $2^{n} for a run of *n*, the prize were
$10 to the power 2^{n}. Here is the table for this
game:

| *n* | P(*n*) | Prize | Utiles of prize | Expected utility |
|-----|--------|-------|-----------------|------------------|
| 1 | 1/2 | $10^{2} | 2 | 1 |
| 2 | 1/4 | $10^{4} | 4 | 1 |
| 3 | 1/8 | $10^{8} | 8 | 1 |
| 4 | 1/16 | $10^{16} | 16 | 1 |
| 5 | 1/32 | $10^{32} | 32 | 1 |
| 6 | 1/64 | $10^{64} | 64 | 1 |
| 7 | 1/128 | $10^{128} | 128 | 1 |
| 8 | 1/256 | $10^{256} | 256 | 1 |
| 9 | 1/512 | $10^{512} | 512 | 1 |
| 10 | 1/1024 | $10^{1024} | 1024 | 1 |

This version contains much larger prizes than the original version, and one would presumably be willing to pay more to play this version than the original. But the expected value of this game—the sum of the infinite series of numbers in the last column—is infinite, and the paradox returns.
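A quick check (illustrative only; the function name is mine) that the altered prizes restore the paradox under log10 utility: a prize of $10^{2^{n}} is worth 2^{n} utiles, so each consequence again contributes exactly 1 utile and the series diverges.

```python
# With prizes of $10**(2**n) and utiles = log10(dollars), consequence n
# is worth 2**n utiles, so its expected utility is 2**n / 2**n = 1.
def expected_utiles(n):
    utiles = 2**n          # log10(10 ** (2**n)) without building the huge prize
    return utiles / 2**n

print(all(expected_utiles(n) == 1 for n in range(1, 11)))  # True
```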

Of course, it is not clear how in fact dollar values relate to
utility, but we can imagine a generalized paradoxical St. Petersburg
game (suggested by Paul Weirich, 1984, following Menger, 1967) which
offers prizes in *utiles* instead, at the rate of
2^{n} utiles for a run of *n* (however that
number of utiles is to be translated into dollars or other
goods). This game would have infinite expected value, and the rational
gambler should pay any amount, however large, to play. For
simplicity, we shall ignore the generalized version of the game, and
continue to discuss it in terms of the original dollar prizes,
recognizing, however, that the diminishing marginal utility of dollars
may make some revision of the prizes necessary to produce the
paradoxical result.

## 2. Risk-Aversion

Consider the following argument. The St. Petersburg game offers the possibility of huge prizes. A run of forty would, for example, pay a whopping $1.1 trillion. Of course, this prize happens rarely: only once in about 1.1 trillion times. Half the time, the game pays only $2, and you're 75% likely to wind up with a payment of $4 or less. Your chances of getting more than $25 (the entry price which Hacking suggests is a reasonable maximum) are only one in sixteen. Very low payments are very probable, and very high ones very rare. It's a foolish risk to invest more than $25 to play.
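These probabilities can be verified with exact fractions (a minimal sketch):

```python
from fractions import Fraction

# Probability that the game ends on flip n is 1/2**n.
p = lambda n: Fraction(1, 2**n)

print(p(1))           # 1/2: half the time you win only $2
print(p(1) + p(2))    # 3/4: a payment of $4 or less
print(float(p(40)))   # about 9.1e-13: a run of forty, paying $2**40
```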

This sort of reasoning is appealing, and may very well account for intuitions that agree with Hacking's. Many of us are risk-averse, and unwilling to gamble for a very small chance of a very large prize. Weirich claims that this sort of consideration in fact solves the St. Petersburg paradox. He offers a complicated way (which we need not go into here) of including a risk-aversion factor in calculations of expected utility, with the result that there is a finite upper limit to the rational entrance fee for the game.

But there are objections to this approach. For one thing, a factor for risk-aversion is not a generally applicable consideration in making rational decisions, because some people are not risk averse. In fact, some people may enjoy risk. What should we make, for example, of those people who routinely play state lotteries, or who gamble at pure games of chance in casinos? (In these games, the entry fee is greater than the expected utility.) It's possible to dismiss such behavior as merely irrational, but sometimes these players offer the explanation that they enjoy the excitement of risk. In any case, it's not at all clear that risk-aversion can explain why the St. Petersburg game would be widely intuited to have a fairly small maximum rational entry fee, while so many people at the same time are not averse to the huge risk entailed by the very small expected probability of large prizes in lotteries.

But for the purposes of argument, let's assume that risk-aversion is what's responsible for the intuition that the appropriate entrance-fee for the St. Petersburg game is finite and small. But this will not make the paradox go away, for we can again adjust the prizes to take account of this risk-aversion.

Suppose you don't like to gamble, and wouldn't risk an entry fee in a game that offers a small possibility of a large prize, even when the odds were in your favor. For example, imagine that you were offered a lottery ticket costing $1, which gave you a one-in-ten chance at a prize of $20. Playing costs you utility, because you hate risk. But presumably, we could compensate you for this utility-loss by making the prize even bigger. Maybe you would invest $1 for a one-in-ten chance at making $100. If not, how about $1000? It appears that there is some prize large enough to compensate you for your risk aversion.

Now let's imagine that you consider the St. Petersburg game, and suppose that you're willing to pay an entrance fee of only $20 to play. The reason you're not willing to go higher is your risk-aversion. We can imagine that the increasing risk of large payments subtracts from their utility, and the result is that the last column contains numbers that decrease as probabilities shrink, and the sum of the last column reaches a limit—perhaps $25. But now, the game can be reformulated to repay you for the risk inherent in each outcome, by correspondingly increasing the prizes. For example, suppose we square each dollar-prize in compensation for the increasing risk—the lower probability—of the larger prizes. If this doesn't provide sufficient compensation for your risk-aversion, then we can make the prizes even higher. In any case, there seems to be some prize scheme huge enough to compensate you for your risk-aversion—one which makes the dollar utility of each prize minus its risk-factor equal 1 utile. A game with these larger prizes is again paradoxical.

The idea that risk-aversion can be compensated for by a larger prize is hardly controversial. The fact that many more people enter lotteries when unusually big prizes are announced, keeping risk more or less constant, is evidence for this.

But Weirich argues that offering increased prizes cannot sufficiently compensate for risk-aversion in such a way as to make the sum of the series unlimited. He appears to suggest that increasing the prize for an outcome may increase one's cost in terms of dread of risk. In the lottery example, then, increasing the prize to $1000 would correspondingly increase the risk for you, so you still wouldn't bet. No matter how high a prize you are offered, you still are unwilling to buy the ticket for $1, because the higher prizes raise the risk for you. Putting it “picturesquely,” he says, “there is some number of birds in hand worth more than any number of birds in the bush.”

But one might doubt that risk-aversion works this way—or, anyway, that this sort of risk-aversion can be justified as rational. It's highly implausible to claim that an increase in prize-size increases the risk of a game. In the lottery example, the only sort of risk-aversion that would make one refuse to play no matter how high the prize is pathological, not rational. There must be some prize which is so valuable to any rational but risk-averse person that the person would see it as compensating him for the risk of $1 (where that dollar has the usual small utility). If someone prefers $1 worth of birds in hand to any value of birds in the bush, then that person needs psychiatric help; this is not a rational decision strategy.

The counter argument we have been considering is that risk-aversion is irrational when it refuses to gamble a small entry-price for no matter how high a prize, or when it refuses to gamble a large entry-price for no matter how high a probability of prize. But this does not answer all St. Petersburg objections, for here we imagine a gamble with a large entry-price and a small probability of large prizes. The most compelling examples of the rational unacceptability of risk no matter how high the prize, are the ones in which the entry price is high and the prize improbable. Imagine, for example, that you are risk averse, and are offered a gamble in which the entry price is your life-savings of (say) $100,000, and the chances of the prize are one-in-a-million. It seems rational to refuse, no matter how huge the prize. The reason for this is worth considering.

Exactly what sort of risk-aversion might explain why people won't bet more than $25 to play the St. Petersburg game? Is it an aversion to risking large sums of money? It's sometimes claimed that this is what's behind the refusal to gamble your life-savings no matter how probable or huge the winning payoff would be. But is this really a general principle of rationality in the face of risk? Doubts about this are raised by the following example. When you deposit your life savings in a solid bank for a year, you are in fact accepting a gamble. There's a very high probability of the consequence that at the end of the year, you can get your savings back with interest, but there's also an extremely low probability of the consequence that the bank and the deposit insurance will both collapse, and you'll be wiped out. Someone who refused to run this very tiny risk no matter how high the interest and how low the probability of disaster is clearly irrational. Everyone who crosses a street is, in effect, gambling his life, because of the risk of being run down and killed. But to refuse to cross any street on these grounds is irrational. This sort of risk-aversion, when generally applied, would paralyze anyone. It is central to rationality that one take risks when the probability of disaster is suitably tiny, even though what would be lost is huge.

Or is it that rational risk-aversion will not gamble when the probability of gain is very low? The so-called “Sure Loss Principle” advises us never to take the risk of any significant loss when this loss is almost certain. But again examples appear to show that this is not a general principle of rational behavior—examples in which it seems rational to risk a highly probable and significant loss when the improbable gain is high enough. Hunting for a job or for a publisher for one's book are often like this: highly improbable that any particular attempt will succeed, and significantly costly, but worth it because of the big enough potential gain when one succeeds.

Despite the difficulty of coming up with a plausible principle of
rational risk aversion, it does appear that rational risk aversion
makes sense: that people not considered crazy avoid paying huge entry
sums to play when the enormous payoff is extremely improbable. Perhaps
that is what explains the unwillingness to make a big investment in
St. Petersburg. But note however that this may not be a sufficient
response to the paradox. This sort of risk-aversion would also provide
a psychological explanation of why (some) people are unwilling to
gamble large sums when the *finite* expected utility is greater
than the initial payment. So, for example, many people would be
unwilling to risk $100 for a one-in-a-hundred chance at winning
$20,000 (expected value $200). If risk is not a disutility that can be
compensated for by prize increase, then maybe their behavior runs
counter to the expected-value theory of rational choice; and if
they're rational, then maybe this shows the theory is wrong. The
paradox raised by St. Petersburg, however, is not thereby fully dealt
with. It is not merely a case—others of which are
well-known—in which apparently rational behavior disobeys the
advice to maximize expected value. The St. Petersburg paradox is that
the expected value is *infinite*.

## 3. An Upper Bound on Utility

The two reformulations of the game proposed so far share the feature that the dollar values of the prizes are increased as compensation (in the first case, for the diminishing marginal value of money, and, in the second case, for their improbability and risk-aversion). In both cases, it is assumed that the utility of each outcome can be increased without limit; but perhaps this assumption is incorrect, and there is an upper limit on the utility of the prizes. Then the sum of the series will reach a limit. In his classical treatment of the problem, Menger argues that the assumption that there is an upper limit to utility is the only way that the paradox can be resolved. Assume, for example, that utility = dollar value, except with an upper limit of 100 utiles. The chart for the game then looks like this:

| *n* | P(*n*) | Prize | Utiles of prize | Expected utility |
|-----|--------|-------|-----------------|------------------|
| 1 | 1/2 | $2 | 2 | 1 |
| 2 | 1/4 | $4 | 4 | 1 |
| 3 | 1/8 | $8 | 8 | 1 |
| 4 | 1/16 | $16 | 16 | 1 |
| 5 | 1/32 | $32 | 32 | 1 |
| 6 | 1/64 | $64 | 64 | 1 |
| 7 | 1/128 | $128 | 100 | 0.781 |
| 8 | 1/256 | $256 | 100 | 0.391 |
| 9 | 1/512 | $512 | 100 | 0.195 |
| 10 | 1/1024 | $1024 | 100 | 0.098 |

The sum of the infinite series in the right-hand column reaches a limit of about 7.56, and the rational entry price is anything under $7.56.

The assumption that maximum utility is reached by any dollar-prize over $100 is implausible because it means that the value of $100, $1000, and $1,000,000 are all the same—the maximum. That can't be: indifference between receiving $100 and $10,000 would be bizarre indeed. A more plausible point for maximum utility of dollars is much higher. Setting it at 16,000,000 makes the maximum rational entry price of the game close to $25, which is Hacking's guess at what our intuitions would accept. Is that the point where utility maximizes out?
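The effect of a utility cap can be computed directly (a sketch; the function name `capped_value` is mine). A cap of 100 utiles yields the 7.5625 limit; a cap of $2^{24} = $16,777,216 yields $25, matching the point mentioned above:

```python
from fractions import Fraction

# Expected value when utility is capped: each consequence is worth
# min(prize, cap), still weighted by its probability 1/2**n.
def capped_value(cap, terms=200):
    return sum(Fraction(min(2**n, cap), 2**n) for n in range(1, terms + 1))

print(float(capped_value(100)))    # 7.5625, the limit of the series
print(float(capped_value(2**24)))  # 25.0: 24 uncapped terms plus $1 of tail
```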

Some people think that it is reasonable to set an upper limit on utility. Russell Hardin (1982), for example, calls this assumption “compelling in its own right.” William Gustason (1994) suggests that one restrict the expected value concept by stipulating that values of any consequence have an upper bound. Richard Jeffrey (1983) agrees.

But the idea of an upper limit on utility might not be seen to be compelling in its own right. Note that this idea must be distinguished from the diminishing marginal value of money. Perhaps you find it reasonable to think that, once one had (say) $16,000,000 in the bank, you'd be able to buy anything you could possibly want; but this is not to say that that sum of money provides the maximum permissible utility. We can readily imagine someone with that amount of money—or any amount of money—still short of utility, due to lack of certain goods that money can't buy. What the idea of an upper limit on utility means is that there is some amount of utility which is so high that no additional utility is possible—that nothing additional adds any value at all. Imagine someone with all the wealth he could use: still he might have unfulfilled desires, for example, that his friends and relations be as fortunate as he. If this desire were fulfilled, then he might still desire that strangers be as fortunate; and that there be more people on earth than there currently were, to share his happiness, and more populated planets full of happy people. How many more? Why, the more the better—indefinitely more. If there is an upper limit on utility, then there is some finite amount of utility which is maximally good, an amount for which one would rationally trade anything else. It doesn't appear plausible to think that there is any such amount.

One might imagine that some people have an upper limit on the utility they can enjoy—people who have a finite number of desires, and whose desires can each be completely satisfied by some finite state. For these people, the utility of prizes does not increase without limit, and the St. Petersburg game has some finite expected utility. Do such people exist? This is an empirical question. In any case, there surely are some people with some ‘the-more-the-better’ desires, and the theory of rational choice ought not to be restricted by the empirical and doubtful propositions that there aren't any, and that value cannot increase without limit. And these propositions are surely insufficiently well-founded to serve as solutions to the St. Petersburg paradox.

Gustason says that “the upshot of the paradox is that if there is such a thing as an infinite value, then acts and consequences that involve it are beyond the scope of the expected value concept.” Jeffrey states that the evaluation theory we are applying here has “from its inception...been closely tied to the notion that desirabilities are ... bounded.” But the fact that the theory wasn't designed with such a result in mind is not a very good reason to try to resist its application in this case. The main reason both authors give for excluding unbounded-desirability games is that otherwise the St. Petersburg game has infinite expected utility. But this ad-hoc rationale is not compelling unless one can't bear this result. The acceptability of this result will be considered later.

Hardin offers the opinion that whether utility is bounded “is more a factual than a logical issue,” and that its invocation to resolve the St. Petersburg paradox “is to grant that the paradox is not an antinomy.” He may mean that the difficulty posed by the game is a result of a factual assumption that utility is unbounded (and not merely by its logical features), and can be removed by rejecting that assumption. But if one finds no difficulty posed by the game, one is not tempted to reject the assumption.

## 4. Finitely Many Consequences

Gustason suggests that, in order to avoid the St. Petersburg problem, one has the choice between two restrictions on the expected value concept:

- (a) Each act has only finitely many consequences, or
- (b) Values must be ‘bounded,’ i.e., there are numbers *n* and *m* such that no value to be assigned a consequence exceeds *n* or is less than *m*.

He points out that imposing either restriction will suffice to rule out the St. Petersburg result. If one resists imposing restriction (b), in this case, by setting an upper bound to the value of consequences, perhaps restriction (a) might be found plausible.

One way to impose restriction (a) is merely to insist that the St. Petersburg game, which fails to meet it, is therefore not an appropriate application for standard expected value theory, which consequently cannot be used to calculate its fair entrance price. How then, if at all, can it be calculated? Where does the intuition that $25 is too much to pay come from (if anywhere)? How (if at all) can it be justified?

Another way is to assume that the way the game will take place is not exactly as described, and that there are some possible very long strings that would never be carried out—i.e., that there is only a finite number of prizes to be considered when calculating the expected value of the game. Presumably, this would be applied by setting some upper limit L to the number of flips which would be considered; after a run of L heads in a row, the game would be terminated and payment made for the run so far, despite the fact that tails hadn't yet come up. If L were set at 25, then the game would have an expected value of $25, and that would be the maximum entry price which a rational agent would pay to play (as in Hacking's intuition). Do we, perhaps unconsciously, assume that any run of 25 heads would be truncated, and paid off, at that point?
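The truncated game's value can be sketched as follows (my reading, not the author's formalization: counting only the first L completed consequences gives exactly $L, which matches the $25 figure; a variant that also pays $2^{L} for an uncompleted run of L heads adds $1).

```python
# Two readings of a game truncated at L flips: count only the first L
# completed consequences, or also pay $2**L for a run of L straight heads.
def truncated_ev(L, pay_uncompleted_run=False):
    ev = sum((1 / 2**n) * 2**n for n in range(1, L + 1))  # each term adds $1
    if pay_uncompleted_run:
        ev += (1 / 2**L) * 2**L  # probability 1/2**L of L straight heads
    return ev

print(truncated_ev(25))        # 25.0
print(truncated_ev(25, True))  # 26.0
```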

Many authors have pointed out that, practically speaking, there must be some point at which a run of heads would be truncated without a final tail. For one thing, the patience of the participants of the game would have to end somewhere. If you think that this sets too narrow a limit L, consider the considerably higher limit set by the life-spans of the participants, or the survival of the human race; or the limit imposed by the future time when the sun explodes, vaporizing the earth. Any of these limits produces a finite expected value for the game, but sets an L which is higher than 25; what, then, explains Hacking's $25 intuition?

Another fact that would set a limit on L is the finitude of the bankroll necessary to fund the game. Any casino that offers the game must be prepared to truncate any run that, were it to continue, would cost them more than the total funds they have available for prizes. A run of 25 would require a prize of a mere $33,554,432, possibly within the reach of a larger casino. A run of 40 would require a prize of about 1.1 trillion dollars. So any casino offering St. Petersburg must truncate very long runs.

Other facts, such as the limit on the amount of money in the world, make the necessity of an upper limit even more obvious. But perhaps all these financial limits can be overridden if we conceive of the game's being offered by a state capable of printing all the money it wanted to. This state could pay any prize whatever; still, printing up and handing out a huge amount of cash would create havoc with any economy, so no rational state would do this. (And anyway, if a foolish state did inject huge amounts of newly-minted currency to cover a stupendous win, the resulting inflation would severely undermine the value of the money won.)

But in any case, it appears that these practical difficulties may be circumvented; for example, as Michael Clark (2002) suggests, the casino might offer an enormous win merely as credit to the winner.

Hardin claims that “the slightest bit of realism is sufficient to do in the St. Petersburg Paradox.” But should we be even slightly realistic? Nowadays one frequently encounters, around philosophy departments, the refusal to take merely hypothetical situations seriously; simplifying thought-experiments, for example, are dismissed because they don't describe any realistic situation. But refusing to think about a problem isn't solving it.

It's of course true that any real game would impose some upper limit
on L, and thus a finite number of possible consequences of the game;
but this does not solve the St. Petersburg puzzle because it does not
show that the expected value of the game *as described* is not
infinite. After all, any game with a limit L is not the game we have
been talking about. Our question was about the St. Petersburg game,
not about its L-limited relative.

Do these realistic considerations show that the genuine St. Petersburg game—exactly as originally described—can never be encountered in real life? Jeffrey says: “Put briefly and crudely, our rebuttal of the St. Petersburg paradox consists in the remark that anyone who offers to let the agent play the St. Petersburg game is a liar, for he is pretending to have an indefinitely large bank.”

It can be quibbled that Jeffrey is not exactly right: someone can offer a game even though he is aware that the offer involves the possibility of consequences he cannot fulfill. Compare my offer to drive you to the airport tomorrow. I realize that there's a small possibility my car will break down between now and then, and thus that I'm making an offer I might not be able to fulfill. But the conclusion is not that I'm not really offering what I appear to be. If someone invites you to play St. Petersburg, we can't conclude that he's in fact not offering the St. Petersburg game, that he's really offering some other game.

Real casinos right now play games that offer the extremely remote possibility of continuing too long for anyone to complete, or of prizes too large to be managed. Casinos can go ahead and play these games anyway, confident that the risk of running into an impossible situation is very very small. They need not lose any sleep worrying about incurring a debt they can't manage. They live, and prosper, on probabilities, not certainties.

If these considerations are persuasive, then what Jeffrey gives is not a rebuttal of the paradox. In effect, he accepts the fact that the game offers the possibility of indefinitely large payoffs. The reason the game is not offered by casinos is that they realize that sooner or later (probably much later) the game will bankrupt them. This is correct reasoning—but it is done using the ordinary, general theory of choice. When casinos reason about the game, they do not decide that, since ordinary theory shows that the game has infinite value, ordinary theory should be restricted to exclude its consideration.

There are other reasons why we should not restrict theory to exclude consideration of the game. This ruling, in order to be theoretically acceptable, ought not merely rule out the St. Petersburg game in particular, ad hoc; it ought to be general in scope. And if it is, it will also rule out perfectly acceptable calculations. Michael Resnik (1987) notes that utility theory “is easily extended to cover infinite lotteries, and it must be in order to handle more advanced problems in statistical decision theory” but he gives no examples.

Imagine a life insurance policy bought for a child at its birth, which pays to the child's estate, when it eventually dies, $1000 for each birthday the child has passed, without limit. What price should an insurance company charge for this policy? (For simplicity, we shall ignore possible effects of inflation, and profits from investing the entry price.) Standard empirically-based mortality charts give the chances of living another year at various ages. Of course, they don't give the chances of surviving another year at age 140, because there's no empirical evidence available for this; but a reasonable function to extend the mortality curve indefinitely beyond what's provided by available empirical evidence can be produced; this curve asymptotically approaches zero. On this basis, ordinary mathematical techniques can give the expected value of the policy. But note that it promises to pay off without limit. If we think that, for each age, there is a (large or small) probability of living another year, then there are an indefinitely large number of consequences to be considered when doing this calculation, but mathematics can calculate the limit of this infinite series; and (ignoring other factors) an insurance company will make a profit, in the long run, by charging anything above this amount. There's no problem in calculating its expected value.

This insurance policy (call it Policy 1) offers an indefinite number of outcomes; but consider a different one (call it Policy 2) which would truncate the series at age 140, and offer only 140 outcomes. The probability of reaching age 140 is so tiny that the difference in expected value between the two policies is negligible, a tiny fraction of 1 cent. If you don't like infinite lotteries, you might claim that Policy 1 is ill-formed, and suggest substitution of Policy 2, pointing out that the expected value of this one is, for all practical purposes, equal to that of Policy 1. But note that your judgment that the two are virtually identical in expected value depends on your having calculated the expected value of Policy 1. So your statement presupposes that the expected value of Policy 1 is calculable, after all.
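The comparison can be sketched with a toy survival curve (the curve and all its parameters are invented for illustration, not actuarial data):

```python
import math

def p_survive_year(age):
    # Hypothetical survival curve (toy numbers, not real mortality data):
    # the chance of living one more year falls asymptotically toward zero.
    return 0.995 * math.exp(-(age / 90) ** 4)

def policy_value(max_age=None):
    # Expected payout of a policy paying $1000 per birthday passed at death.
    value, alive, age = 0.0, 1.0, 0
    while alive > 1e-18 and (max_age is None or age < max_age):
        p = p_survive_year(age)
        value += alive * (1 - p) * 1000 * age  # probability of dying during this year
        alive *= p
        age += 1
    return value

v1 = policy_value()      # Policy 1: no age limit
v2 = policy_value(140)   # Policy 2: truncated at age 140
print(v1, abs(v1 - v2))  # the difference is negligible
```

Under any curve that approaches zero fast enough, the probability of reaching age 140 is so small that the two expected values agree to well below a cent, which is the point made above.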

## 5. Infinite Value?

The St. Petersburg game is sometimes dismissed because it has infinite expected value, which is, it's argued, not merely practically impossible, but theoretically objectionable—beyond the reach even of thought-experiment. But is it?

Imagine you were offered the following deal. For a price to be negotiated, you will be given permanent possession of a cash machine with the following unusual property: every time you punch in a dollar amount, that amount is extruded. This is not a withdrawal from your account; neither will you later be billed for it. You can do this as often as you care to. Now, how much would you offer to pay for this machine? Do you find it impossible to perform this thought-experiment, or to come up with an answer? Perhaps you don't, and your answer is: any price at all. Provided that you can defer payment of the initial price for a suitable time after receiving the machine, you can collect whatever you need to pay for it from the machine itself.

Of course, there are practical considerations: how long would it take you to collect its enormous purchase price from the machine? Would you (or the machine) be worn out or dead before you are finished? Any bank would be crazy to offer to sell you an infinite cash machine (and unfortunately I seem to have lost the address of the crazy bank which has made this offer). But so what? The point is that there appears to be nothing wrong with this thought experiment: it imagines an action (buying the machine) with no upper limit on expected value. We easily ignore practical considerations when calculating the expected value (in this case, merely potential withdrawals minus purchase price), which is infinite.

It seems unlikely that your intuitions tell you to offer (say) $25 at most for this machine. But the only difference between this machine and a single-play St. Petersburg game is that the machine guarantees an indefinitely large number of payouts, while the game offers a one-time lottery from among an indefinitely large number of possible payouts, each with a certain probability. The difference between them, then, is just the probability factor: the same difference that exists between a game which gives you a guaranteed prize of $5, and one which gives you half a chance of $10 and half a chance of $0. The expected values of the St. Petersburg game and the infinite cash machine are both indefinitely large, so you should offer any price at all for either. These arguments appear to show that the notion of infinite expected value is perfectly reasonable.

It's quite true that when infinities show up in certain considerations, nonsense results. Consider this example: I write down an integer, at random, and seal it in an envelope. You open the envelope and observe I've written down 8,830,441. Given the infinity of integers I've had to choose from, the probability of my writing down this one is zero. It's a miracle! (Or maybe: a contradiction!) The problem here is, of course, the incoherence of the idea of choice among literally an infinite number of integers.

Doubts about the metaphysical reality of infinity, and about the proper rational employment of that concept, have been raised throughout the history of philosophy and mathematics. So it's tempting to attribute the paradox raised by St. Petersburg to just another illegitimate use of infinity. But notice that we don't need to invoke infinity in describing the gamble or its consequences. The payoff of any conceivable game is always finite. So is the length of any conceivable game. The paradoxical result can be put this way: no matter what (finite) entry price X is charged, it can be shown that the expected payoff of the game is larger than that, due to the (very small) possibility of the number of flips growing larger than X. (Note similarly that the wonderful cash machine mentioned above is not contingent on the reality of any infinity: every payoff it can make is finite; and at any point, it has been used only a finite number of times.)
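This infinity-free version of the argument is easy to check directly: truncating the game at n flips gives an expected payoff of exactly $n, since each consequence contributes prize times probability, which is exactly $1. A minimal sketch (the helper name is ours):

```python
# Sketch: no actual infinity is needed.  Counting only the first n
# consequences of the St. Petersburg game gives an expected payoff of
# exactly $n, since each consequence contributes prize * probability = $1.

def expected_value_truncated(n):
    """Expected payoff counting only consequences of up to n flips."""
    total, prob, prize = 0.0, 0.5, 2
    for _ in range(n):
        total += prob * prize   # each term is exactly $1
        prob /= 2
        prize *= 2
    return total

print(expected_value_truncated(10))          # 10.0

# For any finite entry price X, the first X + 1 consequences alone
# already give an expected payoff greater than X.
X = 500
print(expected_value_truncated(X + 1) > X)   # True
```

The partial sums grow without bound, so every finite entry price is eventually overtaken by a finite, infinity-free calculation.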

## 6. Theory and Practicality

The St. Petersburg game is one of a large number of examples which have been brought against standard (unrestricted) Bayesian decision theory. Each example is supposed to be a counter-example to the theory because of one or both of these features: (1) the theory, in the application proposed by the example, yields a choice people really do not or would not make; thus it is descriptively inadequate. (2) The theory, in the application proposed by the example, yields a choice people really ought not to make, or which a fully, ideally rational person would not make; thus it is normatively inadequate.

If you see standard theory as normative, you can ignore objections of the first type. People are not always rational, and some people are rarely rational, and an adequate descriptive theory must take into account the various irrational ways people really do make decisions. It's no surprise that the classical, rather aprioristic, theory fails to be descriptively adequate, and criticizing it on these grounds rather misses its normative point.

The objections from standpoint (2) need to be taken more seriously; and we have been treating the responses to St. Petersburg as cases of this sort. Various sorts of “realistic” considerations have been adduced to show that the result the theory draws in the St. Petersburg game about what a rational agent should do is incorrect. It's concluded that the unrestricted theory must be wrong, and that it must be restricted to prevent the paradoxical St. Petersburg result.

When considering the plausibility of restricting expected value calculations in various ways that would take care of the paradox, Amos Nathan (1984) remarks, “it ought, however, to be remembered that important and less frivolous application of such games have nothing to do with gambling and lie in the physical world where practical limitations may assume quite a different dimension.” Nathan doesn't mention any physical applications of analogous infinite value calculations. But it's nevertheless plausible to think that imposing restrictions on the theory to rule out the St. Petersburg bath water would throw out some babies as well.

Any theoretical model is an idealization, leaving aside certain practicalities. “From the mathematical and logical point of view,” observes Resnik, “the St. Petersburg paradox is impeccable.” But this is the point of view to be taken when evaluating a theory per se (though not the only point of view ever to be taken). By analogy, the aesthetic evaluation of a movie does not take into account the facts that the only local showing of the movie is far away, and that finding a baby sitter will be impossible at this late hour. If aesthetic theory tells you that the movie is wonderful, but other considerations show you that you shouldn't go, this isn't a defect in aesthetic theory. Similarly, the mathematical/logical theory for explaining ordinary casino games is not defective because it ignores practicalities such as a particular limit on a casino's bankroll, or on participants' patience.

There are all sorts of practical considerations which must be taken into account in making a real gambling decision. For example, in deciding whether to raise, see, fold, or cash in and go home in a particular poker game, you must consider not only probability and expected value, but also the facts that it's 5 A.M. and you are cross-eyed from fatigue and drink; but classical decision theory is not expected to deal with these.

The St. Petersburg game commits participants to doing what we know they will not. The casino may have to pay out more than it has. The player may have to flip a coin longer than physically possible. But this may not show a defect with choice theory. Classical unrestricted theory is still serving its purpose, which is modeling the abstract ideal rational agent. It tells us that no amount is too great to pay as an ideally rationally acceptable entrance fee, and this may be right. What it's reasonable for real agents, limited in time, patience, bankroll, and imaginative capacity, to do, given the constraints of the real casino, the real economy, and the real earth, is another matter, one that the theoretical core of decision theory can be forgiven for not specifying. From this point of view, the St. Petersburg result is strange, but this does not show that there's a defect in classical decision theory. The appropriate reaction might just be to try to accept the strange result. As Clark says, “This seems to be one of those paradoxes which we have to swallow.”

## Bibliography

### Works Cited

- Bernoulli, Daniel, 1954 [1738], “Exposition of a New Theory on the Measurement of Risk”, *Econometrica*, 22: 23-36.
- Clark, Michael, 2002, “The St. Petersburg Paradox”, in *Paradoxes from A to Z*, London: Routledge, pp. 174-177.
- Gustason, William, 1994, *Reasoning from Evidence*, New York: Macmillan College Publishing Company.
- Hacking, Ian, 1980, “Strange Expectations”, *Philosophy of Science*, 47: 562-567.
- Hardin, Russell, 1982, *Collective Action*, Baltimore: The Johns Hopkins University Press.
- Jeffrey, Richard C., 1983, *The Logic of Decision*, second edition, Chicago: University of Chicago Press.
- Menger, Karl, 1967 [1934], “The Role of Uncertainty in Economics”, in Martin Shubik (ed.), *Essays in Mathematical Economics in Honor of Oskar Morgenstern*, Princeton: Princeton University Press.
- Nathan, Amos, 1984, “False Expectations”, *Philosophy of Science*, 51: 128-136.
- Resnik, Michael D., 1987, *Choices: An Introduction to Decision Theory*, Minneapolis: University of Minnesota Press.
- Weirich, Paul, 1984, “The St. Petersburg Gamble and Risk”, *Theory and Decision*, 17: 193-202.

### Other Discussions

- Ball, W. W. R. and Coxeter, H. S. M., 1987, *Mathematical Recreations and Essays*, 13th edition, New York: Dover.
- Bernstein, Peter, 1996, *Against The Gods: The Remarkable Story of Risk*, New York: John Wiley & Sons.
- Cowen, Tyler, and High, Jack, 1988, “Time, Bounded Utility, and the St Petersburg Paradox”, *Theory and Decision*, 25: 219-223.
- Gardner, Martin, 1959, *The Scientific American Book of Mathematical Puzzles & Diversions*, New York: Simon and Schuster.
- Kamke, E., 1932, *Einführung in die Wahrscheinlichkeitstheorie*, Leipzig: S. Hirzel.
- Keynes, J. M. K., 1988, “The Application of Probability to Conduct”, in K. Newman (ed.), *The World of Mathematics*, Vol. 2, Redmond, WA: Microsoft Press.
- Kraitchik, M., 1942, “The Saint Petersburg Paradox”, in *Mathematical Recreations*, New York: W. W. Norton, pp. 138-139.
- Todhunter, I., 1949 [1865], *A History of the Mathematical Theory of Probability*, New York: Chelsea.

## Other Internet Resources

- ‘Two Lessons from Fractals and Chaos’, by Larry S. Liebovitch and Daniela Scheurle (Florida Atlantic University), a preprint of a paper in *Complexity*, Vol. 5, No. 4, 2000, pp. 34-43.