Notes to Game Theory and Ethics

1. Diana Richards first suggested the term conflictual coordination game. Such games are often referred to by the more colloquial names Battle of the Sexes (Luce & Raiffa 1957: 90–91) or Hawk-Dove (Maynard Smith 1982: 11–12). The “hawk” and “dove” labels we have used are borrowed from Maynard Smith’s model, which was developed to help explain the behavioral strategies animals might employ to protect or acquire resources. The label for the Luce and Raiffa model, by contrast, refers to scenarios in which both partners in a couple want to do something together but have conflicting preferences over which activity they would most prefer.

2. A point \(x=(x_1,\ldots,x_n)\) is weakly Pareto superior to a point \(y=(y_1,\ldots,y_n)\) if \(x_i \ge y_i\) for each \(i \in \{1,\ldots,n\}\) and \(x_i > y_i\) for at least one \(i \in \{1,\ldots,n\}\). \(x\) is Pareto superior to \(y\) if \(x_i > y_i\) for each \(i \in \{1,\ldots,n\}\).
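
These two relations are simple enough to check mechanically. The following Python sketch only illustrates the definitions just given; the function names are ours.

    # Pareto comparisons for two payoff vectors of equal length,
    # following the definitions in this note.

    def weakly_pareto_superior(x, y):
        # x_i >= y_i for every i, and x_i > y_i for at least one i
        return (all(xi >= yi for xi, yi in zip(x, y))
                and any(xi > yi for xi, yi in zip(x, y)))

    def pareto_superior(x, y):
        # x_i > y_i for every i
        return all(xi > yi for xi, yi in zip(x, y))

    print(weakly_pareto_superior((3, 2), (3, 1)))  # True
    print(pareto_superior((3, 2), (3, 1)))         # False: the first coordinates are equal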

3. In his original analysis Nash argued that bargaining problems can be analyzed either axiomatically, by considering which members of the feasible set satisfy certain formal desiderata, or strategically, by examining which outcomes could be arrived at by a set of agents engaged in a bargaining process. However, the alternating offers models introduced by Ståhl (1972) and Rubinstein (1982), which have been the most influential models of strategic bargaining, give determinate results only in 2-agent contexts with infinitely divisible goods. As a result, the axiomatic approach has been more influential, and it is for that reason that we focus on it here.

4. Braithwaite presented his basis game in terms of the ratios of the utilities for each agent over the possible outcomes. To ease exposition Luce and Raiffa (1957: 146–147) translated these ratios into cardinal utilities. The payoffs depicted in Figure 10 follow the Luce and Raiffa presentation.

5. Specifically, for the Braithwaite bargaining problem depicted in Figure 11, which extends the game depicted in Figure 10, at the Nash solution Luke and Matthew are assigned the respective shares \(x^{*}_{1} = \frac{1}{14}\) and \(x^{*}_{2} = \frac{13}{14}\). At the maximin proportionate gain solution, they are assigned shares of \(x'_{1} = \frac{4}{14}\) and \(x'_{2} = \frac{7}{14}\). And at the proportional solution, they are assigned shares of \(\tilde{x}_{1} = \frac{7}{23}\) and \(\tilde{x}_{2} = \frac{16}{23}\).

6. Formally, the proportional solution selects the payoff vector \(\tilde{\bu} = (\tilde{u}_{1},\ldots,\tilde{u}_{n}) \in P_{\Lambda}\) at which the proportionate gain \(\lambda_i(\tilde{u}_i - u_{i0})\) relative to the nonagreement point is equal for all the agents with respect to some scaling vector \(\lambda = (\lambda_{1},\ldots,\lambda_{n})\) where \(\lambda_i > 0\) for each agent i. Here the scaling vector provides a way of weighting the claims of the respective agents. This solution concept was first proposed by Raiffa (1953) and later axiomatized by Kalai (1977). Binmore (1998: 396–401) shows that this solution is equivalent to Aristotle’s principle of distributive justice developed in Nicomachean Ethics Book V. In the special case where \(\lambda_i = 1\) for each agent i, the solution corresponds with the intuitively egalitarian sense in which the surplus gains from agreement are equally divided.
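
To make the construction concrete, consider the special case of dividing one unit of a transferable good with linear utilities, so that the Pareto frontier consists of the payoff vectors summing to 1. On that frontier the equal-weighted-gain condition can be solved in closed form. The Python sketch below is ours and assumes exactly this linear setup; it is not a general-purpose solver.

    # Proportional solution on the linear frontier u_1 + ... + u_n = 1,
    # with nonagreement payoffs d_i and weights lambda_i > 0.
    # Setting u_i = d_i + t / lambda_i makes the weighted gains
    # lambda_i * (u_i - d_i) = t equal; t is chosen so the payoffs sum to 1.

    def proportional_solution(d, lam):
        surplus = 1.0 - sum(d)                        # total gain available over nonagreement
        t = surplus / sum(1.0 / li for li in lam)     # common weighted gain
        return [di + t / li for di, li in zip(d, lam)]

    print(proportional_solution([0.0, 0.0], [1.0, 1.0]))   # [0.5, 0.5]: equal weights split the surplus equally
    print(proportional_solution([0.0, 0.0], [2.0, 1.0]))   # [0.333..., 0.666...]: the more heavily
                                                           # weighted agent needs a smaller gain to match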

7. Gauthier refers to the maximin proportionate gain solution concept by the alternative term minimax relative concession solution (1986: 145). In the 2-agent case this approach is also equivalent to a solution first proposed by Raiffa (1953) and later axiomatized by Kalai and Smorodinsky (1975). To define the Gauthier and the Kalai-Smorodinsky solutions, one first identifies each agent i’s ideal payoff \(\bar{u}_i - u_{i0}\) by taking the difference between the maximum payoff \(\bar{u}_i\) available to agent i in the feasible set and that same agent’s nonagreement point payoff \(u_{i0}\). Then, for each point \(\bu = (u_1,\ldots,u_n)\), one identifies agent i’s settlement payoff \(u_i - u_{i0}\). The Kalai-Smorodinsky solution then selects the payoff vector \(\bu' = (u'_{1},\ldots,u'_{n}) \in P_{\Lambda}\) such that

\[ \frac{u'_1-u_{10}}{\bar{u}_1 - u_{10}} = \ldots = \frac{u'_{n}-u_{n0}}{\bar{u}_{n} - u_{n0}}, \]

that is, where the ratio of each agent i’s settlement payoff to her ideal payoff is equal across all agents. The maximin proportionate gain solution, by contrast, selects the payoff vector \(\bu'' = (u''_{1},\ldots,u''_{n})\) at which

\[ \min_i \left(\frac{u''_i - u_{i0}}{\bar{u}_i - u_{i0}}\right) > \min_j \left(\frac{u_j - u_{j0}}{\bar{u}_j - u_{j0}}\right) \]

for every \(\bu = (u_{1},\ldots,u_{n}) \ne \bu''\) in \(P_{\Lambda}\), that is, it selects the point at which the smallest ratio of any agent’s settlement payoff to her ideal payoff is maximized. Although the Kalai-Smorodinsky and maximin proportionate gain solutions are equivalent in the two agent case, Gauthier shows by example that they can differ in bargaining problems with three or more agents (Gauthier 1985).
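
In the two-agent case both constructions pick out the same point, and one can approximate them numerically by scanning a discretized Pareto frontier for the point whose smallest settlement-to-ideal ratio is largest. The Python sketch below is ours; it assumes a finite grid of frontier points and takes the nonagreement and ideal payoffs as given.

    # Maximin proportionate gain over a finite set of Pareto-frontier points.
    # frontier: list of payoff vectors; d: nonagreement payoffs; ideal: the
    # payoffs bar{u}_i described in this note.

    def maximin_proportionate_gain(frontier, d, ideal):
        def min_ratio(u):
            return min((ui - di) / (bi - di) for ui, di, bi in zip(u, d, ideal))
        return max(frontier, key=min_ratio)

    # Dividing one unit with nonagreement point (0, 0) and ideal payoffs (1, 1):
    frontier = [(k / 100, 1 - k / 100) for k in range(101)]
    print(maximin_proportionate_gain(frontier, (0.0, 0.0), (1.0, 1.0)))   # (0.5, 0.5)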

8. For the n-agent bargaining problem with feasible set \(\Lambda\) and nonagreement point \(\bu_0 = (u_{10},\ldots,u_{n0}),\) the Nash solution is the point \(\bu^{*} = (u^{*}_{1},\ldots,u^{*}_{n}) \in \Lambda\) that maximizes the Nash product \((u_1 - u_{10})\cdots(u_n - u_{n0})\) for \(\bu = (u_1,\ldots,u_n) \in \Lambda\).
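
Numerically, for a finite approximation of the feasible set one can simply search for the point with the largest Nash product. The Python sketch below is ours and assumes a discretized frontier for the division of one unit of a good; it is only an illustration of the definition.

    # Nash solution as the maximizer of the Nash product over a finite set of
    # candidate payoff vectors, given the nonagreement point d.

    def nash_solution(candidates, d):
        def nash_product(u):
            prod = 1.0
            for ui, di in zip(u, d):
                prod *= ui - di
            return prod
        return max(candidates, key=nash_product)

    # Splitting one unit of a good when agent 1's nonagreement payoff is 0.2 and
    # agent 2's is 0: the Nash solution compensates agent 1 for her better outside option.
    candidates = [(k / 100, 1 - k / 100) for k in range(101)]
    print(nash_solution(candidates, (0.2, 0.0)))   # (0.6, 0.4)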

9. In his original analysis Nash (1950) proved that his solution concept is uniquely capable of satisfying Pareto optimality, symmetry, scale invariance, and independence of irrelevant alternatives. Lensberg proved that the Nash solution is the unique solution to satisfy Pareto optimality, symmetry, scale invariance, and reapplication stability (Lensberg 1988). Nash’s paper is collected along with much of his other work in Nash (2002).

10. A payoff vector w is Pareto optimal if it lies on the Pareto frontier \(P_{\Lambda} \subset \Lambda\) of \(\Lambda\), where \((w_1,\ldots,w_n) \in P_{\Lambda}\) if there is no other payoff vector \((w'_1,\ldots,w'_n) \in \Lambda\) such that \(w'_{i} \ge w_{i}\) for each agent i.

11. A solution satisfies the symmetry condition when, if \(\Lambda\) and \(u_0\) are unchanged by interchanging the individual coordinates of the payoff vectors, the solution is also unchanged.

12. More precisely, what scale invariance requires is that the shares that agents receive according to a solution are not changed by positive affine rescalings of their payoffs. That the condition is only concerned with positive affine rescalings, and not more radical ways of changing how payoffs are represented, reflects the fact that von Neumann-Morgenstern utilities are invariant only with respect to positive affine transformations. In other words, scale invariance requires that a solution not be affected by different but equivalent representations of agents’ von Neumann-Morgenstern utility functions.
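
To see what this amounts to in a simple case, the Python sketch below (ours, and only an illustration) applies the positive affine rescaling \(u \mapsto 3u + 2\) to one agent's utilities in a discretized division problem and checks that the Nash solution picks out the same split of the good.

    # Scale invariance illustrated: rescaling agent 1's utilities by u -> 3u + 2
    # changes the utility numbers but not which split the Nash solution selects.

    def best_split(rescale_1=lambda u: u):
        d = (rescale_1(0.0), 0.0)                  # nonagreement point, with agent 1's payoff rescaled
        splits = [k / 100 for k in range(101)]     # agent 1's share of one unit of the good
        def nash_product(s):
            return (rescale_1(s) - d[0]) * ((1 - s) - d[1])
        return max(splits, key=nash_product)

    print(best_split())                        # 0.5
    print(best_split(lambda u: 3 * u + 2))     # 0.5 again: the selected split is unchanged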

13. Reapplication stability is closely related to the independence of irrelevant alternatives property that von Neumann-Morgenstern utilities are required to satisfy. In the context of a bargaining problem this property requires that the payoff that each agent obtains from a given share of a good being divided does not depend on other agents’ shares of the good. The interpretation of this requirement in terms of reapplication of bargaining procedures, however, is more natural in the context of evaluating solutions to bargaining problems. What reapplication stability requires is this: suppose a solution concept is initially applied to a bargaining problem and all the claimants are assigned their shares accordingly. If that solution concept is then reapplied to the subproblem characterized by the fraction of the good originally assigned to some proper subset of the claimants, the reapplication should leave none of the claimants in this subset worse off than they were in the initial allocation determined by applying the solution concept to the larger problem.
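
For a toy illustration of the requirement, suppose a solution divides one unit of a good equally among three relevantly symmetric claimants, assigning each a share of \(\frac{1}{3}\). The subproblem associated with claimants 1 and 2 is the division of the \(\frac{2}{3}\) of the good originally assigned to them, and reapplying an equal-division solution to this subproblem again gives each of them \(\frac{1}{3}\). Neither claimant is left worse off than in the initial allocation, so this pattern of reapplication is consistent with reapplication stability.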

14. The 3-Piece Nash Demand Game is illustrated in Figure N1 below. It is equivalent to the game illustrated in Figure 8 because one obtains that game by applying the transformation \(T(u_{ij}) = 3 \cdot u_{ij} - 1\) to each payoff \(u_{ij}\) in the game depicted here.

Rows give Agent 1’s claim, columns give Agent 2’s claim, and payoffs are listed as (Agent 1, Agent 2):

\[
\begin{array}{c|ccc}
 & \text{D} & \text{M} & \text{H} \\
\hline
\text{D} & (\frac{1}{3}, \frac{1}{3}) & (\frac{1}{3}, \frac{1}{2}) & (\frac{1}{3}, \frac{2}{3}) \\
\text{M} & (\frac{1}{2}, \frac{1}{3}) & (\frac{1}{2}, \frac{1}{2}) & (0, 0) \\
\text{H} & (\frac{2}{3}, \frac{1}{3}) & (0, 0) & (0, 0)
\end{array}
\]

D = claim one third, M = claim one half, H = claim two thirds

Figure N1. 3-Piece Nash Demand Game
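
The claimed equivalence can be checked mechanically by applying \(T\) to every entry of the matrix above. The Python sketch below is ours; it simply encodes the Figure N1 payoffs as fractions and prints the transformed matrix, which according to this note is the payoff matrix of the game in Figure 8.

    from fractions import Fraction as F

    # Payoff matrix of the 3-Piece Nash Demand Game in Figure N1:
    # rows are Agent 1's claim (D, M, H) and columns are Agent 2's claim.
    figure_n1 = [
        [(F(1, 3), F(1, 3)), (F(1, 3), F(1, 2)), (F(1, 3), F(2, 3))],
        [(F(1, 2), F(1, 3)), (F(1, 2), F(1, 2)), (F(0), F(0))],
        [(F(2, 3), F(1, 3)), (F(0), F(0)),       (F(0), F(0))],
    ]

    def T(u):
        # The rescaling described in this note: T(u) = 3u - 1.
        return 3 * u - 1

    transformed = [[tuple(T(u) for u in cell) for cell in row] for row in figure_n1]
    for row in transformed:
        print([tuple(str(u) for u in cell) for cell in row])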

15. For an \(n \times n\) game with a symmetric payoff structure the law of motion of the one-population discrete replicator dynamic is defined by the function:

\[ x_i (t+1) = x_i (t) \cdot \frac{\be^{T}_{i} A\bx(t)} {\bx(t)^{T} A\bx(t)} \]

where A is the payoff matrix, the n pure strategies \(s_1,\ldots,s_n\) are respectively characterized by the vectors \(\be_1 = (1,0,\ldots,0),\) …, \(\be_n = (0,0,\ldots,1),\) and \(\bx(t)^{T} = (x_1(t),\ldots,x_n(t))^{T}\) is the population state representing the fraction of the population following each of the respective strategies at time t. In this dynamic, at each period t the proportion \(x_i(t)\) of \(\be_i\)-followers in the population is updated in proportion to the current expected payoff \(\be_{i}^{T} A \bx (t)\) of \(\be_i\) divided by the current overall average payoff \(\bx(t)^{T} A\bx(t)\) in the population.
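
A direct implementation of this law of motion is straightforward. The Python sketch below is ours; as an illustration it uses the row-player payoffs of the 3-Piece Nash Demand Game of Figure N1 and iterates the dynamic from the barycenter of the simplex.

    # Discrete replicator dynamic: x_i(t+1) = x_i(t) * (e_i^T A x(t)) / (x(t)^T A x(t)).

    A = [
        [1/3, 1/3, 1/3],   # D: claim one third
        [1/2, 1/2, 0.0],   # M: claim one half
        [2/3, 0.0, 0.0],   # H: claim two thirds
    ]

    def replicator_step(x, A):
        n = len(x)
        expected = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]   # e_i^T A x(t)
        average = sum(x[i] * expected[i] for i in range(n))                    # x(t)^T A x(t)
        return [x[i] * expected[i] / average for i in range(n)]

    x = [1/3, 1/3, 1/3]                 # start at the barycenter of the simplex
    for _ in range(1000):
        x = replicator_step(x, A)
    print([round(xi, 4) for xi in x])   # approximately [0.0, 1.0, 0.0]: fair division takes over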

16. For a given replicator dynamic, a basin of attraction \(\Delta_z\) of a point z is the set of points in the simplex from which the orbit of the replicator dynamic converges to z. The size \(m(\Delta_z)\) of a basin of attraction \(\Delta_z\) is the fraction of all of the points in the simplex that lie within this basin. Skyrms reports that over 10,000 trials of the replicator dynamic, each with an initial point chosen at random in the simplex, 62% of the orbits converged to the fair division equilibrium (1996 [2014: 15]), so the size of that equilibrium’s basin of attraction is approximately 0.62.
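
The size of such a basin can be estimated in just the way described: sample initial points uniformly from the simplex, iterate the dynamic from each, and record where the orbits settle. The Python sketch below is ours; it uses the 3-strategy demand game of Figure N1, and the number of trials, the iteration cutoff, and the convergence test are arbitrary choices, so it gives only a rough numerical estimate rather than a reproduction of Skyrms’s reported figure.

    import random

    # Monte Carlo estimate of the size of the fair division strategy's basin of
    # attraction under the discrete replicator dynamic.

    A = [[1/3, 1/3, 1/3], [1/2, 1/2, 0.0], [2/3, 0.0, 0.0]]   # D, M, H row payoffs

    def replicator_step(x):
        expected = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
        average = sum(x[i] * expected[i] for i in range(3))
        return [x[i] * expected[i] / average for i in range(3)]

    def converges_to_fair_division(x, periods=1000):
        for _ in range(periods):
            x = replicator_step(x)
        return x[1] > 0.99                                    # orbit has (effectively) reached the all-M vertex

    def random_simplex_point():
        draws = [random.expovariate(1.0) for _ in range(3)]   # uniform over the simplex
        total = sum(draws)
        return [d / total for d in draws]

    trials = 1000
    hits = sum(converges_to_fair_division(random_simplex_point()) for _ in range(trials))
    print(hits / trials)   # rough estimate of the size of the fair division basin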

17. As a point of comparison, the other notable equilibrium with a large basin of attraction in the Nash Demand Game whose replicator dynamics are depicted in Figure 13 is a polymorphic equilibrium consisting of some agents following D and a relatively larger number of agents following H. In this equilibrium the H-followers sometimes succeed in exploiting D-followers, but also incur the costs of conflict with one another, while the D-followers always receive positive payoffs and never incur the costs of conflict, but are either exploited by H-followers or leave possible claims on the table when paired with other D-followers.

18. Basu (2000: 92) gives the following perspicuous definition of an evolutionarily stable strategy that is equivalent to Maynard Smith’s (1982) definition:

If \(u(\sigma_1,\sigma_2)\) denotes the expected payoff for an agent who follows a strategy \(\sigma_1\) that can be pure or mixed and meets an agent who follows the strategy \(\sigma_{2}\), then a strategy \(\sigma^{*}\) is immune against strategy \(\sigma \ne \sigma^{*}\) if:

  1. \(u(\sigma^{*}, \sigma^{*}) > u(\sigma, \sigma^{*})\) or
  2. \(u(\sigma^{*}, \sigma^{*}) = u(\sigma, \sigma^{*})\) and \(u(\sigma^{*}, \sigma) > u(\sigma, \sigma)\)

And \(\sigma^{*}\) is evolutionarily stable if \(\sigma^{*}\) is immune against every strategy \(\sigma \ne \sigma^{*}\).
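
For a finite symmetric game this immunity condition can be checked directly from a payoff matrix. The Python sketch below is ours; it represents pure or mixed strategies as probability vectors and tests a candidate \(\sigma^{*}\) against a finite list of rivals (rather than against every alternative strategy, as the full definition requires).

    # Check the immunity condition for a candidate strategy sigma_star against a
    # list of rival (pure or mixed) strategies, given the row player's payoff matrix A.

    def u(sigma1, sigma2, A):
        # Expected payoff to a sigma1-follower who meets a sigma2-follower.
        return sum(sigma1[i] * A[i][j] * sigma2[j]
                   for i in range(len(sigma1)) for j in range(len(sigma2)))

    def immune(sigma_star, sigma, A):
        a = u(sigma_star, sigma_star, A)
        b = u(sigma, sigma_star, A)
        return a > b or (a == b and u(sigma_star, sigma, A) > u(sigma, sigma, A))

    # In the game of Figure N1, claiming half (M) is immune against the two
    # rival pure strategies of claiming one third (D) and claiming two thirds (H):
    A = [[1/3, 1/3, 1/3], [1/2, 1/2, 0.0], [2/3, 0.0, 0.0]]
    M = [0.0, 1.0, 0.0]
    rivals = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
    print(all(immune(M, sigma, A) for sigma in rivals))   # True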

19. Eshel and Cavalli-Sforza (1982) first developed the correlated replicator dynamic. In this dynamic, if \(p(s_j)\) is the probability of meeting strategy \(s_j\) at random, and \(\alpha\) is the correlation rate, then with correlation the probability \(p(s_i \mid s_i)\) of a strategy \(s_i\) meeting itself is augmented to become \(p(s_i \mid s_i) = p(s_i) + \alpha p(\text{not-}s_i),\) and the probability \(p(s_j \mid s_i)\) of \(s_i\) meeting a different strategy \(s_j\) is diminished so that \(p(s_j \mid s_i) = p(s_j) - \alpha p(s_j).\)
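
Computationally, correlation only changes how expected payoffs are computed: each strategy is matched against the adjusted meeting probabilities rather than against the raw population frequencies. The Python sketch below is ours; it combines these adjusted probabilities with the discrete replicator update from note 15, again using the game of Figure N1 only as an illustration.

    # Correlated replicator step: with correlation rate alpha, an s_i-follower meets
    # another s_i-follower with probability x_i + alpha * (1 - x_i), and meets a
    # different strategy s_j with probability (1 - alpha) * x_j.

    def correlated_step(x, A, alpha):
        n = len(x)
        expected = []
        for i in range(n):
            payoff = 0.0
            for j in range(n):
                p_meet = x[i] + alpha * (1 - x[i]) if j == i else (1 - alpha) * x[j]
                payoff += p_meet * A[i][j]
            expected.append(payoff)
        average = sum(x[i] * expected[i] for i in range(n))
        return [x[i] * expected[i] / average for i in range(n)]

    # With enough correlation, a population made up mostly of D- and H-followers
    # moves toward fair division in the game of Figure N1:
    A = [[1/3, 1/3, 1/3], [1/2, 1/2, 0.0], [2/3, 0.0, 0.0]]
    x = [0.45, 0.05, 0.50]
    for _ in range(500):
        x = correlated_step(x, A, alpha=0.3)
    print([round(xi, 3) for xi in x])   # approximately [0.0, 1.0, 0.0]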

20. Specifically, what we see in the correlated replicator dynamic depicted in Figure 14 is that initial population distributions consisting mostly of H-followers and D-followers, with very few M-followers, now converge to the fair division equilibrium \(\be_2\), as opposed to converging to the polymorphic equilibrium of D-followers and H-followers that we saw with the original replicator dynamic.

21. These figures were generated using a simulation created by Frank McCown, hosted at: http://nifty.stanford.edu/2014/mccown-schelling-model-segregation/
Readers interested in these issues are encouraged to explore the interactive tool available there.
