In [2]:
%pylab inline
Populating the interactive namespace from numpy and matplotlib

Elementary notions from game theory

Game theory is a mathematical field dealing with the analysis of strategic interactions, of the kind found in games but also in things like economic behavior, international relations, decision making under uncertainty, and so on. Game theory is a deep and exciting mathematical theory, but some of its elementary notions are easy to grasp and have become standard terminology in discussions of social behavior. They are often used by philosophers. Moreover, games, in the formal sense of game theory, are very often used as a modeling framework for studying social behavior, and are thus used to discuss justice, fairness, cooperation, norms, and so on.

Here I am just going to show some of the most basic notions from game theory and Python code for the simplest of cases.

We start with some definitions. They are given here mostly for review.

A game consists of a set of players, a set of possible moves (actions) each player can make, and a payoff function indicating the payoff each player gets depending on the moves made by every player (including herself, of course). Typically, positive numbers are considered good, the higher the better, while negative payoffs represent unwanted outcomes.

A strategy of a player is, for now, simply an action in the game. For example, playing "Rock" in Rock-Paper-Scissors.

The best response strategy for a player is the action that gives them the highest payoff given the actions taken by all other players.

A Nash equilibrium is a situation in which each player plays the best response strategy to all other players. Thus in a Nash equilibrium, no one has any incentive to change his or her strategy: each player is doing the best they can given how the other players are acting! As we will see, this does not mean everyone is getting the best possible outcome: indeed, everyone might be better off in a different scenario. This is what happens in the Prisoner's Dilemma, for example.
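The notion of best response can be made concrete with a small helper (a sketch in Python 3 syntax; the notebook's own cells below use Python 2 print statements):

```python
# Sketch: best response for the row player in a 2x2 game, given the
# column player's move. The row player's payoff is the first element
# of each payoff pair.

def best_response_row(payoffs, col_move):
    """Return the row index maximizing the row player's payoff
    when the column player plays col_move."""
    return max(range(2), key=lambda row: payoffs[row][col_move][0])

# Prisoner's Dilemma payoffs, as used later in the notebook:
pd = [[(-1, -1), (-3, 0)], [(0, -3), (-2, -2)]]
print(best_response_row(pd, 0))  # 1: defecting is the best response to C
```

Note that in the Prisoner's Dilemma the best response is the same row regardless of what the other player does.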

First, we define some helper functions.

In [21]:
# Helper functions

def other(x):
    return 1-x

def first(x):
    return x[0]

def second(x):
    return x[1]
In [22]:
# Print a 2x2 payoff matrix
def print_mat(payoffs):
    for i in range(2):
        for j in range(2):
            print payoffs[i][j],
        print
In [23]:
# Find Nash equilibria (2x2 matrix)

def Nash(payoffs):
    for i in range(2):
        for j in range(2):
            if first(payoffs[i][j])>=first(payoffs[other(i)][j]) and \
               second(payoffs[i][j])>=second(payoffs[i][other(j)]):
                print moves[i],moves[j],' ===> resulting in:',payoffs[i][j]
In [24]:
# Prisoner's Dilemma

moves=['C','D']
payoffs=[[(-1,-1),(-3,0)],[(0,-3),(-2,-2)]]

print_mat(payoffs)
Nash(payoffs)
(-1, -1) (-3, 0)
(0, -3) (-2, -2)
D D  ===> resulting in: (-2, -2)

Note that both players would have been better off if they had both chosen to play C ("cooperate"). Why isn't (-1,-1) a Nash equilibrium?
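A quick way to see the answer (a sketch in Python 3 syntax, unlike the notebook's Python 2 cells): if player 2 plays C, player 1 can improve by deviating from C to D, so C is not a best response to C.

```python
# If player 2 plays C, compare player 1's payoff for C versus D.
payoffs = [[(-1, -1), (-3, 0)], [(0, -3), (-2, -2)]]
payoff_if_C = payoffs[0][0][0]  # player 1 plays C against C: -1
payoff_if_D = payoffs[1][0][0]  # player 1 deviates to D against C: 0
print(payoff_if_D > payoff_if_C)  # True: deviating to D pays
```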

Coordination games

In coordination games we will find more than one Nash equilibrium.

In [25]:
# Choosing sides

moves=['left','right']
payoffs=[[(10,10),(0,0)],[(0,0),(10,10)]]
print_mat(payoffs)
Nash(payoffs)
(10, 10) (0, 0)
(0, 0) (10, 10)
left left  ===> resulting in: (10, 10)
right right  ===> resulting in: (10, 10)
In [70]:
# Pure coordination game

moves=['Party','Home']
payoffs=[[(10,10),(0,0)],[(0,0),(5,5)]]
print_mat(payoffs)
Nash(payoffs)
(10, 10) (0, 0)
(0, 0) (5, 5)
Party Party  ===> resulting in: (10, 10)
Home Home  ===> resulting in: (5, 5)
In [71]:
# Battle of the sexes

moves=['Party','Home']
payoffs=[[(10,5),(0,0)],[(0,0),(5,10)]]
print_mat(payoffs)
Nash(payoffs)
(10, 5) (0, 0)
(0, 0) (5, 10)
Party Party  ===> resulting in: (10, 5)
Home Home  ===> resulting in: (5, 10)
In [73]:
# Stag hunt

moves=['Stag','Hare']
payoffs=[[(10,10),(0,8)],[(8,0),(7,7)]]
print_mat(payoffs)
Nash(payoffs)
(10, 10) (0, 8)
(8, 0) (7, 7)
Stag Stag  ===> resulting in: (10, 10)
Hare Hare  ===> resulting in: (7, 7)

Mixed strategy for coordination games

Since there is more than a single Nash equilibrium, we look for a mixed strategy, in which we decide which action to take randomly. In other words, the strategy is a probability distribution over the possible actions.

The notation is the same as in Wikipedia.

In [105]:
def mixed(payoffs):
    top,bottom=payoffs
    (A,a),(C,c)=top
    (B,b),(D,d)=bottom
    
    print "Player 1"
    p = float(d-b)/(a+d-b-c)
    print p, "for ",moves[0]
    print 1-p, "for ",moves[1]
    
    print "Player 2"
    q = float(D-C)/(A+D-B-C)
    print q, "for ",moves[0]
    print 1-q, "for ",moves[1]
In [106]:
moves=['Party','Home']
payoffs=[[(10,5),(0,0)],[(0,0),(5,10)]]
print_mat(payoffs)
mixed(payoffs)
(10, 5) (0, 0)
(0, 0) (5, 10)
Player 1
0.666666666667 for  Party
0.333333333333 for  Home
Player 2
0.333333333333 for  Party
0.666666666667 for  Home
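As a sanity check (written in Python 3 syntax), we can verify that at these probabilities each player really is indifferent between their two actions:

```python
# Battle of the Sexes: at q = 1/3, player 1's expected payoff is the
# same whether she plays Party (Up) or Home (Down).
payoffs = [[(10, 5), (0, 0)], [(0, 0), (5, 10)]]
q = 1.0 / 3.0  # probability that player 2 plays Party
up = q * payoffs[0][0][0] + (1 - q) * payoffs[0][1][0]
down = q * payoffs[1][0][0] + (1 - q) * payoffs[1][1][0]
print(up, down)  # both equal 10/3
```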

Aside

Here is how you get the formulas for $p$ and $q$. Suppose you are player 1; your payoff will depend on how player 2 behaves, hence on $q$. So you calculate your expected payoff (which depends on $q$) for each of your possible actions (Up or Down). Your strategy is to be as indifferent about the other player as possible (some call this "randomizing the other player"). Think about why this is the case (hint: think of the idea of "best response"). To be indifferent, we write down formulas for player 1's expected payoff for each of her possible actions and look for the value of $q$ that makes the expected payoff equal in both cases. We do the same for player 2, to calculate the value of $p$.

Note that this may be somewhat confusing: we get the formula for player 2's strategy, $q$, by thinking about player 1. Well, maybe this isn't too surprising...

Modeling question: What does this mean about what the agents are presumed to know about the game?

Here is the algebra:

          Left(q)  Right
    Up(p) (A,a)    (C,c)
    Down  (B,b)    (D,d)

Up payoff: $qA+(1-q)C$

Down payoff: $qB+(1-q)D$

Setting $Up=Down$ gives us:

$qA+(1-q)C=qB+(1-q)D$

Rearranging we get:

$q = \frac{D-C}{A+D-B-C}$

Similarly, for $p$ we get:

$p = \frac{d-b}{a+d-b-c}$

In [18]:
# Here is how you can get the computer to do the algebra for you.

from sympy import *
init_printing()

a,b,c,d,p = symbols("a b c d p")
A,B,C,D,q = symbols("A B C D q")

# to find p where LEFT=RIGHT we solve LEFT-RIGHT=0 for p.
print "p="
pretty_print(solve("p*a + (1-p)*b - p*c - (1-p)*d",p))

# to find q where UP=DOWN we solve UP-DOWN=0 for q.
print "q="
pretty_print(solve("q*A + (1-q)*C - q*B - (1-q)*D",q))
p=
⎡    -b + d   ⎤
⎢─────────────⎥
⎣a - b - c + d⎦
q=
⎡    -C + D   ⎤
⎢─────────────⎥
⎣A - B - C + D⎦

Modeling perspective: interpreting mixed strategies

In some games the idea of resorting to a mixed strategy is quite intuitive. If two (or more) outcomes seem possible when we consider what the other player might do, it might make sense to randomize, so that our cumulative payoff will reflect the relative preference we have for each outcome. It seems this should lead to maximal cumulative payoff (why?). The expected payoff of a mixed strategy also agrees nicely with the notion of subjective expected utility.

A mixed strategy seems to presuppose multiple interactions, rather than a one-shot game. In a sense, then, probability or randomness represents time. Alternatively, we can think of a mixed strategy as representing interactions in a population. According to this interpretation, we can imagine two agents being chosen at random from the population (according to the mixed strategy probabilities) and pitted against each other, each playing one of the non-mixed strategies. It is helpful to reflect on the fact that these interpretations carry with them substantive assumptions about the temporal structure of the situation or the role and structure of the population. As always, such assumptions are typically not neutral with respect to the phenomenon we are studying.

From the perspective of the agents, randomization can be a method of confusing the opponent. Alternatively, it can be used when players are uncertain about the other players' actions and want to remain "indifferent" to their choices (we saw this in coordination games). More fundamentally, mixed strategies may be used not only to represent multiple interactions (repeated games) or interactions with players drawn from a heterogeneous population, but to enable agents to handle these situations appropriately.

Food for thought: In the context of coordination games (in particular) it might be argued that a mixed strategy ensures fairness to some extent since the preferred outcomes of both players will occur, and with the appropriate frequency. Discuss.

Rock-Paper-Scissors

This is a zero-sum game. Let's denote the actions by Rock, Paper, Scissors.

The payoff matrix for the first player (on rows) is:

    R    P   S
R   0   -1   1

P   1    0  -1  

S  -1    1   0

Let $r$, $p$, $s$ be the probabilities that player 2 chooses the corresponding action, and write down the expected payoff of player 1.

Player 1 chooses R: $0r-1p+1s$

Player 1 chooses P: $1r+0p-1s$

Player 1 chooses S: $-1r+1p+0s$

We want R=P, P=S, and $r+p+s=1$, which we represent as R-P=0, P-S=0, and 1-r-p-s=0.

That's three equations in three variables, and the computer can easily solve the system for us.

In [20]:
r,p,s = symbols("r p s")

solve(["-p+s-r+s",
       "r-s+r-p",
       "1-r-p-s"])
Out[20]:
$$\left \{ p : \frac{1}{3}, \quad r : \frac{1}{3}, \quad s : \frac{1}{3}\right \}$$

In other words, play each action with equal probability (could you have guessed this without doing the math?).
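We can confirm this numerically (a sketch in Python 3 syntax): against the uniform mixed strategy, each of player 1's actions has expected payoff 0, so there is no profitable deviation.

```python
# Player 1's payoff matrix for Rock-Paper-Scissors (rows: R, P, S).
M = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
r = p = s = 1.0 / 3.0  # player 2 mixes uniformly
expected = [row[0] * r + row[1] * p + row[2] * s for row in M]
print(expected)  # [0.0, 0.0, 0.0]
```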