This is a file in the archives of the Stanford Encyclopedia of Philosophy.

- Philosophical Motivation
- Defining a Game
- Statics and Dynamics
- Applications
- Bibliography
- Other Internet Resources
- Related Entries

Hobbes, of course, pushes his argument further, maintaining that
sovereignty must be indivisible. He thus rejects democracy as a viable
enforcement mechanism. We shall not concern ourselves with this
further argument here, which appears partly to rest on a false
dichotomy: when Hobbes speaks of democracy, he seems to have only the
unregulated democracies of the ancient world before him as examples;
the possibility that a constitution could bind government and governed
alike appears (unsurprisingly) not to have occurred to him. But his
central point continues to carry much force. Indeed, modern economics
and game theory can help us to see that it has more force than Hobbes
supposed. Hobbes's argument involves a questionable psychological
premise to the effect that *most* people are narrowly
self-interested. But this premise is unnecessarily strong. To see why,
let us consider the notion of a *utility function*.

We begin with a preliminary concept, that of a
*preference-ordering*. Following Paul Samuelson, we will regard
preference-orderings as *revealed* by behavior. That is to say,
imagine that each agent were presented with a set of possible states
of the world, or *bundles*, and the opportunity to trade bundles
with other agents. The agent would *reveal* her preferences among
bundles by swapping some for others. Eventually, as evidence
accumulates, we can construct, for each agent, an ordered list,
placing her most preferred bundle at the top, and proceeding downwards
to her least-preferred bundle. Note that she will likely be
indifferent in her preferences amongst many bundles; these will be
ranked together as *indifference sets*. Then the construction of
an *ordinal utility function* is straightforward: we simply
number her indifference-sets, from top to bottom, with real numbers 1
to *n*. We call this function *ordinal* because no
properties of the numbers matter except their order; thus, an ordinal
utility function does not capture relative intensities amongst
preferences. However, we can *cardinalize* these functions, so
that intensities *are* represented, by means of the following
trick. Present the agent with choices of gambles over lotteries
amongst the elements in her preference ordering, where each gamble
must be purchased in a uniform currency (say, money). Then we may
examine the ratios between the probabilities associated with her
maximizing the acquisition of bundles high on her ordinal utility
function and the amounts she is willing to pay for each gamble, and
derive her cardinal utility function from these ratios.
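The first step of this construction can be made concrete. The following is a minimal sketch, with invented names and toy bundles, of building an ordinal utility function from an agent's revealed indifference sets; it does not attempt the cardinalization step.

```python
# A minimal sketch (hypothetical names) of constructing an ordinal
# utility function from revealed preferences over bundles.

def ordinal_utility(indifference_sets):
    """Map each bundle to an integer rank.

    `indifference_sets` is a list of sets of bundles, ordered from the
    agent's most-preferred set down to her least-preferred set. Bundles
    in the same indifference set receive the same number; only the
    order of the numbers is meaningful, not their magnitudes.
    """
    utility = {}
    n = len(indifference_sets)
    for rank, bundles in enumerate(indifference_sets):
        for bundle in bundles:
            utility[bundle] = n - rank  # top set gets the largest number
    return utility

# Two bundles the agent swaps freely between share a rank:
u = ordinal_utility([{"apples"}, {"bread", "rice"}, {"nothing"}])
```

Because only the ordering matters, any order-preserving renumbering would represent the same preferences; that is precisely why intensities require the separate cardinalization trick described above.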

Now let us return to the Hobbesian dilemma. Game Theory permits us to
show, mathematically, that this dilemma can arise even in a population
consisting entirely of altruists. Suppose that everyone's
most-preferred bundle was one in which all of the world's surplus food
went to the starving children of some impoverished country; that all
favoured a different country's children; and that all preferred that
some country should have its children fed rather than none. These
agents are surely altruistic, in any normal sense of the word. But,
having conflicting utility functions, they must bargain with one
another. Suppose that all know that their own contribution alone would
make no significant difference, and that all attach *some* value
to consuming their own food. Then the Hobbesian nightmare is possible
among these saintly agents, and Game Theory can prove it. As Nash
showed in his series of papers (though not, of course, with reference
to this invented scenario), there exists an assignment of possible utility
functions to these agents, consistent with the story as told, such
that the only equilibrium strategy for each agent would be to threaten
to withhold their own food unless others contributed to their
preferred country. In that case, we would have an instance of a
Hobbesian *social dilemma*: the only equilibrium in the game
would be one in which all of the children starve, despite the fact
that all of the agents prefer an outcome in which at least some
children are saved (in the technical parlance, the unique
Nash-equilibrium would be *Pareto-dominated* by another state of
affairs).
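The structure of such a social dilemma can be illustrated in miniature. The following sketch uses an invented two-player game with illustrative payoff numbers (not derived from the food-contribution scenario above): a brute-force search finds the pure-strategy Nash equilibria, and the unique equilibrium turns out to be Pareto-dominated by mutual contribution.

```python
# A minimal sketch: pure-strategy Nash equilibria of a two-player game
# with Prisoner's-Dilemma structure. Payoff numbers are illustrative.

payoffs = {  # (row strategy, column strategy) -> (row payoff, col payoff)
    ("contribute", "contribute"): (3, 3),
    ("contribute", "withhold"):   (0, 4),
    ("withhold",   "contribute"): (4, 0),
    ("withhold",   "withhold"):   (1, 1),
}
strategies = ["contribute", "withhold"]

def nash_equilibria(payoffs, strategies):
    """Return profiles where neither player gains by deviating alone."""
    eqs = []
    for r in strategies:
        for c in strategies:
            u_r, u_c = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
            col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

eqs = nash_equilibria(payoffs, strategies)
# The unique equilibrium ("withhold", "withhold") yields (1, 1), which
# both players rank below ("contribute", "contribute") at (3, 3).
```

This is the technical sense in which the equilibrium is Pareto-dominated: another strategy profile exists that every player prefers, yet each player's unilateral incentive points away from it.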

The fascination of Game Theory emerges from the fact that it shows us
that we cannot simply derive conclusions about outcomes in competitive
settings from psychological facts about the competitors. The intuitive
reason for this is straightforward. Our imagined agents are neither
selfish nor irrational. But the choice of strategy for each agent is
constrained by both scarcity - the basic insight on which all
economics rests - *and* by the utility functions of the other
agents with whom they are competing. The complete set of utility
functions, along with specifications about the extent to which the
agents are privy to one another's utility functions, *determines*
the equilibrium strategies available to them. Our altruists in the
scenario imagined above are trapped by the logic of the game in which
they find themselves; only an external force - say, a UN decree that
food shall go either and only to Mexico or to India - could get them
out of their social dilemma, by changing the game in which they find
themselves.

First, then, the informal definition. Consider an initial allocation
of resources distributed among a finite set of *Dennettian
agents*. A Dennettian agent is a unit that *acts*, where an
act is any move that potentially influences future allocations. The
qualifier *Dennettian* here is in acknowledgement of Daniel
Dennett's careful separation, over a large body of work, of the
concept of *agency*, on the one hand, and the concepts of
deliberation and consciousness, on the other. (See the article by Don
Ross, mentioned in the Bibliography below, which explains this fully.)
A Dennettian agent, then, is an actor that is not necessarily presumed
to be either deliberative or conscious. (This is important in order to
respect the full range of applications of Game Theory; see below.)
Now, then: a *game* is a set of acts by 1 to *n* rational
Dennettian agents (with what is meant by *rational* to be
specified below), and, possibly, an arational Dennettian agent (a
random mechanism) called *nature*, where at least one Dennettian
agent (henceforth, a DA) has control over the outcome of the set of
acts, and where the DAs are potentially in conflict, in the sense that
one DA could rank outcomes differently from the others. A
*strategy* for a particular DA *i* is a vector that
specifies the acts that *i* will take in response to all possible
acts by other agents. A DA *i* is *rational* iff, for given
strategies of other agents, the set of acts specified by *i's*
strategy is such that it secures the available consequence which is
most highly ranked by *i*. *Nature* is a generator of
probabilistic influences on outcomes; technically it is the unique DA
in a game that is not rational. An *outcome* is an allocation of
resources which results from the acts of the DAs. A DA *i* has
*control* if a change in *i*'s acts is sufficient to change
the outcome for at least one vector of strategies for the other DAs. A
*consequence* for *i* is the value for *i* of a
function that maps outcomes onto the real numbers, interpreted as
representing either an ordinal or a cardinal utility function for
*i*.
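The definition of rationality above is, in effect, a best-response condition, and it can be stated operationally. The sketch below uses invented names and a toy coordination game (not from the text): a DA *i* is rational iff, holding the other agents' strategies fixed, no alternative strategy in *i*'s strategy set yields a more highly ranked consequence.

```python
# A minimal sketch (invented names) of the rationality condition above:
# DA i is rational iff i's strategy is a best response to the others'.

def is_best_response(i, profile, strategy_sets, payoff):
    """True iff player i's strategy in `profile` maximizes i's payoff,
    holding the other players' strategies fixed.

    `profile` is a tuple with one strategy per player; `payoff(i, p)`
    returns the real number representing the consequence for i, as in
    the definition of a consequence above.
    """
    current = payoff(i, profile)
    for s in strategy_sets[i]:
        alternative = profile[:i] + (s,) + profile[i + 1:]
        if payoff(i, alternative) > current:
            return False
    return True

# A toy two-player coordination game: each agent gets 1 iff both match.
coord = lambda i, p: 1.0 if p[0] == p[1] else 0.0
sets = [("a", "b"), ("a", "b")]
```

Note that the check quantifies only over *i*'s own deviations; this is also exactly the test each player must pass simultaneously at a Nash equilibrium.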

Games may be represented either in *extensive* form, that is,
using a "tree" structure of the sort that is familiar to
decision theorists, where each player's strategy is a path through the
tree, or in *strategic* form. The significance of this
distinction will be discussed below. A game in strategic form is a
list:

G = {*N*, *S*, Π(*s*)}, where

- *N* = {0, 1, 2, 3 ... *n*} is the set of players, and the index
  *i* designates a particular agent *i* ∈ *N*;
- *S* is the strategy space for the agents: *S* = ×^{n}_{i=0} *S*_{i},
  where *S*_{i} is the set of all possible strategies for *i*;
- Π(*s*) is a vector of payoff
  functions, one for each agent, excluding player 0. Each payoff function
  specifies the consequences, for the agent in question, of the strategies
  specified for all agents:

  Π(*s*) = (Π_{1}(*s*), ..., Π_{n}(*s*)); Π : *S* → ℝ^{n}

Given that game outcomes are determined by the agents' acts, and given
that these acts are specified by their strategies, it follows that
specification of a function *f*(·) together with
strategies implies the existence of the vector of payoff functions
Π(*s*). The payoff
functions provide, for each vector of strategies in *S*, a vector
of *n* real numbers in ℝ^{n} representing the consequences for all
players.
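The strategic form just defined can be written out directly as data. The following is a minimal sketch under invented names, with a toy two-player coordination payoff rule (not from the text), and with player 0 (nature) omitted for simplicity: the game is the triple (*N*, *S*, Π), and Π(*s*) is tabulated over the whole strategy space.

```python
# A minimal sketch of a game in strategic form: the triple (N, S, pi),
# with the payoff vector pi(s) computed for every strategy profile.
# Player 0 (nature) is omitted; the payoff rule is a toy example.

from itertools import product

N = [1, 2]                           # the players
S = {1: ["a", "b"], 2: ["a", "b"]}   # S_i: strategy set for each i

def pi(s):
    """Payoff vector (pi_1(s), ..., pi_n(s)): one real number per
    player. Toy rule: each earns 1 iff both choose the same strategy."""
    return tuple(1.0 if s[0] == s[1] else 0.0 for _ in N)

# The strategy space S is the Cartesian product of the S_i:
strategy_space = list(product(*(S[i] for i in N)))
payoff_table = {s: pi(s) for s in strategy_space}
```

Tabulating Π over all of *S* like this is exactly the familiar payoff matrix of a two-player game; for more players the table simply acquires more dimensions.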

- Binmore, *Fun and Games*, D. C. Heath, 1992
- Binmore, Kirman, and Tani (eds.), *Frontiers of Game Theory*, MIT Press, 1993
- Binmore, *Game Theory and the Social Contract* (v. 1): *Playing Fair*, MIT Press, 1994
- Danielson, *Artificial Morality*, Routledge, 1992
- Danielson (ed.), *Modelling Rational and Moral Agents*, Oxford University Press, forthcoming 1997
- Fudenberg and Tirole, *Game Theory*, MIT Press, 1991
- Gauthier, *Morals By Agreement*, Oxford University Press, 1986
- Maynard Smith, *Evolution and the Theory of Games*, Cambridge University Press, 1982
- Ross, "Dennett's Conceptual Reform", *Behaviour and Philosophy* 22:41-52, 1994
- Skyrms, *The Evolution of the Social Contract*, Cambridge University Press, 1996
- Vallentyne (ed.), *Contractarianism and Rational Choice*, Cambridge University Press, 1991
- von Neumann and Morgenstern, *The Theory of Games and Economic Behavior*, Princeton University Press, 1947, 2nd edition
- Weibull, *Evolutionary Game Theory*, MIT Press, 1995

- Jim Ratliff's List of Game Theory Resources on the Net
- History of Game Theory
- Principia Cybernetica entry: Game Theory
- University of Rochester Economics Department: Game Theory
- Game Theory for War or Peace
- Prisoner's Dilemma
- Game Theory: TU and non-TU games
- TU Wroclaw IMath - Game Theory
- What is Game Theory?
- Al Roth's Game Theory and Experimental Economics Page
- Game Theory and Experimental Economics: Bibliographies

Don Ross

University of Cape Town

*First published: January 25, 1997*

*Content last modified: September 15, 1997*