Solution Set 14 – Mixed Strategy Equilibrium

1. (Guess the average)

Let k* be the largest number to which any player's strategy assigns positive probability in a mixed strategy equilibrium and assume that player i's strategy does so. We now argue as follows.

The remaining possibility is that k* = 1: every player uses the pure strategy in which he announces the number 1.
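
For small numbers of players and announcements the conclusion can be checked by brute force, at least over pure strategy profiles. The Python sketch below is not part of the exercise: the values of n and K are illustrative, and it enumerates all pure strategy profiles of the game (each player announces an integer between 1 and K, and a prize of 1 is split equally among those whose announcement is closest to 2/3 of the average announcement), reporting the profiles from which no player can profitably deviate.

from fractions import Fraction
from itertools import product

def payoffs(profile):
    # Prize of 1 split equally among the announcements closest to
    # two thirds of the average announcement.
    target = Fraction(2, 3) * Fraction(sum(profile), len(profile))
    dist = [abs(a - target) for a in profile]
    best = min(dist)
    winners = [i for i, d in enumerate(dist) if d == best]
    return [Fraction(1, len(winners)) if i in winners else Fraction(0)
            for i in range(len(profile))]

def is_pure_equilibrium(profile, K):
    for i in range(len(profile)):
        u_i = payoffs(profile)[i]
        for b in range(1, K + 1):
            deviation = profile[:i] + (b,) + profile[i + 1:]
            if payoffs(deviation)[i] > u_i:
                return False
    return True

n, K = 3, 5   # illustrative sizes, not part of the exercise
print([pr for pr in product(range(1, K + 1), repeat=n)
       if is_pure_equilibrium(pr, K)])      # -> [(1, 1, 1)]

This checks only pure strategy profiles; the argument above concerns mixed strategy equilibria.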

2. (Investment race)

The set of actions of each player i is A_i = [0, 1]. The payoff function of player i is

  u_i(a_1, a_2) =  -a_i         if a_i < a_j
                   1/2 - a_i    if a_i = a_j
                   1 - a_i      if a_i > a_j,

where j ∈ {1, 2} \ {i}.

We can represent a mixed strategy of a player i in this game by a probability distribution function F_i on the interval [0, 1], with the interpretation that F_i(v) is the probability that player i chooses an action in the interval [0, v]. Define the support of F_i to be the set of points v for which F_i(v + ε) - F_i(v - ε) > 0 for all ε > 0, and define v to be an atom of F_i if F_i(v) > lim_{ε→0+} F_i(v - ε). Suppose that (F_1*, F_2*) is a mixed strategy Nash equilibrium of the game and let S_i* be the support of F_i* for i = 1, 2.
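
Throughout the argument it is convenient to note the expected payoff of player i from the pure action v against a mixed strategy F_j, computed directly from the payoff function above (the name U_i is introduced here only for this remark):

  U_i(v, F_j) = (1 - v)·Pr(a_j < v) + (1/2 - v)·Pr(a_j = v) - v·Pr(a_j > v)
              = Pr(a_j < v) + (1/2)·Pr(a_j = v) - v.

In particular, if F_j has no atom at v then this payoff equals F_j(v) - v, the expression that appears in Step 5 below.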

Step 1. S_1* ∩ (0, 1] = S_2* ∩ (0, 1].

Proof. If not then there is an open interval, say (v, w), to which F_i* assigns positive probability while F_j* assigns zero probability (for some i, j). But then i can increase his payoff by transferring probability to smaller values within the interval (since this does not affect the probability that he wins or loses, but increases his payoff in both cases).

Step 2. If v is an atom of F_i* then it is not an atom of F_j* and for some ε > 0 the set S_j* contains no point in (v - ε, v].

Proof. If v is an atom of F_i* then for some ε > 0 no action in (v - ε, v] is optimal for player j, since by moving any probability mass that F_j* assigns to this interval to either v + δ for some small δ > 0 (if v < 1) or to 0 (if v = 1), player j increases his payoff.

Step 3. If v > 0 then v is not an atom of F_i* for i = 1, 2.

Proof. If v > 0 is an atom of F_i* then, using Step 2, player i can increase his payoff by transferring the probability attached to the atom to a smaller point in the interval (v - ε, v).

Step 4. S_i* = [0, M] for some M > 0, for i = 1, 2.

Proof. Suppose that v ∉ S_i* and v < sup S_i*, and let w* = inf{w: w ∈ S_i* and w ≥ v} > v. By Step 1 we have w* ∈ S_j*, and hence, given that w* is not an atom of F_i* by Step 3, we require j's payoff at w* to be no less than his payoff at v. Since F_i* assigns no probability to [v, w*), this requires w* ≤ v, contradicting w* > v. Thus S_i* is an interval of the form [0, M]. By Step 2 at most one distribution has an atom at 0, so M > 0.

Step 5. S_i* = [0, 1] and F_i*(v) = v for v ∈ [0, 1] and i = 1, 2.

Proof. By Steps 2 and 3 each equilibrium distribution is atomless, except possibly at 0, where at most one distribution, say F_i*, has an atom. The payoff of j at v > 0 is F_i*(v) - v, where i ≠ j. Thus the constancy of i's payoff on [0, M], together with F_j*(0) = 0, requires that F_j*(v) = v, which implies that M = 1. The constancy of j's payoff then implies that F_i*(v) = v.

We conclude that the game has a unique mixed strategy equilibrium, in which each player's probability distribution is uniform on [0,1].
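
As a quick numerical check (not part of the solution), note that against an opponent who mixes uniformly on [0, 1] the expected payoff from any action v is v·(1 - v) + (1 - v)·(-v) = 0, so each player's equilibrium payoff is 0. The Python snippet below, with an illustrative number of random draws and sample actions and a helper u introduced only for the check, estimates these payoffs by simulation; each estimate is close to 0.

import random

def u(a_i, a_j):
    # Player i's payoff in the investment race: -a_i on a loss,
    # 1/2 - a_i on a tie, 1 - a_i on a win.
    if a_i < a_j:
        return -a_i
    if a_i == a_j:
        return 0.5 - a_i
    return 1 - a_i

random.seed(0)
opponent = [random.random() for _ in range(200000)]   # uniform draws on [0, 1]
for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    estimate = sum(u(v, a) for a in opponent) / len(opponent)
    print("expected payoff from action", v, "is approximately", round(estimate, 3))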

3. (Guessing right)

In the game each player has K actions; u_1(k, k) = 1 for each k ∈ {1, ..., K} and u_1(k, l) = 0 if k ≠ l. The strategy pair ((1/K, ..., 1/K), (1/K, ..., 1/K)) is the unique mixed strategy equilibrium, with an expected payoff to player 1 of 1/K. To see this, let (p*, q*) be a mixed strategy equilibrium. If p_k* > 0 then the optimality of the action k for player 1 implies that q_k* is maximal among all the q_l*, so that in particular q_k* > 0; the optimality of the action k for player 2 then implies that p_k* is minimal among all the p_l*, so that p_k* ≤ 1/K. Since every positive component of p* is thus at most 1/K and the components sum to 1, we have p_k* = 1/K for all k; similarly q_k* = 1/K for all k.
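
The small Python check below (not part of the solution) illustrates the argument: it computes each player's set of best responses to the uniform strategy of the other, under the assumption, implicit in the argument above, that player 2's payoff is the negative of player 1's. Every action is a best response for both players, so the uniform pair is indeed an equilibrium with payoff 1/K to player 1. The value of K and the function names are illustrative only.

K = 4   # illustrative number of actions

def best_responses_of_player1(q):
    # Player 1 wants to choose the same number as player 2, so his best
    # responses to q maximise the probability q[k] of a match.
    m = max(q)
    return [k + 1 for k in range(K) if q[k] == m]

def best_responses_of_player2(p):
    # Assuming u2 = -u1, player 2 wants to avoid a match, so her best
    # responses to p minimise p[k].
    m = min(p)
    return [k + 1 for k in range(K) if p[k] == m]

uniform = [1.0 / K] * K
print(best_responses_of_player1(uniform))   # [1, 2, 3, 4]: every action
print(best_responses_of_player2(uniform))   # [1, 2, 3, 4]: every action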

4. (Air strike)

The payoffs of player 1 are given by the matrix below, in which the rows correspond to the target that player 1 attacks and the columns to the target that player 2 defends:

             defend 1   defend 2   defend 3
  attack 1      0          v_1        v_1
  attack 2      v_2        0          v_2
  attack 3      v_3        v_3        0

Let (p*, q*) be a mixed strategy equilibrium.

Step 1. If p_i* = 0 then q_i* = 0 (otherwise q* is not a best response to p*: player 2 does better by defending a target that is attacked with positive probability); and if q_i* = 0 and i ≤ 2 then p_{i+1}* = 0 (since player 1 can guarantee v_i by attacking target i, while attacking target i+1 yields at most v_{i+1} < v_i). Thus if target i is not attacked for some i ≤ 2 then target i+1 is not attacked either.

Step 2. p* ≠ (1, 0, 0): it is not the case that only target 1 is attacked. (If it were, player 2's best response would be to defend target 1, so player 1 would obtain 0 from attacking target 1 but v_2 > 0 from attacking target 2.)

Step 3. The remaining possibilities are that only targets 1 and 2 are attacked or all three targets are attacked: by Step 1 the set of targets attacked with positive probability must be {1}, {1, 2}, or {1, 2, 3}, and Step 2 rules out {1}.
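
Which of the two possibilities in Step 3 occurs depends on the values of the targets. The Python sketch below is not part of the solution: it uses illustrative values v1 > v2 > v3 > 0 and assumes the game is strictly competitive (player 2's payoff is the negative of player 1's), and it computes the candidate equilibrium in each case from the indifference conditions.

from fractions import Fraction as F

v1, v2, v3 = F(4), F(3), F(2)        # illustrative values with v1 > v2 > v3 > 0
values = (v1, v2, v3)

# Candidate in which all three targets are attacked: player 2 is indifferent
# among the targets she defends only if p_i*v_i is constant (p_i proportional
# to 1/v_i), and player 1 is indifferent only if v_i*(1 - q_i) is a constant c.
s = 1/v1 + 1/v2 + 1/v3
p = [(1/v) / s for v in values]
c = 2 / s                             # from q_1 + q_2 + q_3 = 1
q = [1 - c/v for v in values]

if min(q) < 0:
    # Otherwise only targets 1 and 2 are attacked and defended; the
    # indifference conditions p_1*v_1 = p_2*v_2 and v_1*(1-q_1) = v_2*(1-q_2),
    # with p_3 = q_3 = 0, give the candidate below.  (One must also check that
    # v3 <= v1*v2/(v1 + v2), so that attacking the undefended target 3 is not
    # profitable for player 1.)
    p = [v2/(v1 + v2), v1/(v1 + v2), F(0)]
    q = [v1/(v1 + v2), v2/(v1 + v2), F(0)]

print("p* =", p)
print("q* =", q)
print("player 1's payoff from attacking each target:",
      [v*(1 - qi) for v, qi in zip(values, q)])
print("expected damage when player 2 defends each target:",
      [sum(pi*vi for pi, vi in zip(p, values)) - pj*vj
       for pj, vj in zip(p, values)])

For the example values above all three targets are attacked; if instead v3 is much smaller than v1 and v2, the first candidate fails (it yields q_3 < 0) and the two-target case applies.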

6. (Example of correlated equilibrium)

a. The pure strategy equilibria are (B, L, A), (T, R, A), (B, L, C), and (T, R, C).

b. A correlated equilibrium with the outcome described is given by: Ω = {x, y}, π(x) = π(y) = 1/2; the information partitions are P_1 = P_2 = {{x}, {y}} and P_3 = {Ω}; σ_1({x}) = T, σ_1({y}) = B; σ_2({x}) = L, σ_2({y}) = R; σ_3(Ω) = B. Note that player 3 knows that (T, L) and (B, R) will occur with equal probabilities, so that if she deviates to A or C she obtains 3/2 < 2.

c. If player 3 were to have the same information as players 1 and 2 then the outcome would be one of those predicted by the notion of Nash equilibrium, in all of which she obtains a payoff of zero.