$$
\begin{aligned}
\underset{\Large{\text{L.Lessard, Spring 2017}}}{\large{\text{Introduction to Optimization CS 524}}} \\
\end{aligned}
$$$$
\begin{aligned}
\underset{\Large{\text{Due date: 11:00pm on Monday May 8, 2017}}}{\Large{\text{Determining Winning Strategies Using Game Theoretic Optimization}}} \\
\end{aligned}
$$$$
\begin{aligned}
\underset{\large{\text{[email protected]}}}{\large{\text{Tuan Dinh}}} && \underset{\large{\text{[email protected]}}}{\large{\text{Varun Sah}}} \\
\end{aligned}
$$### Table of Contents¶

In team sports, strategy plays a vital role in determining which team emerges victorious since team clashes cannot always be won by relying on individual prowess alone. The importance of strategy gains even more significance when the sport involves a series of head-to-head individual games between team members like in the [Davis Cup](https://en.wikipedia.org/wiki/Davis_Cup), [Fed Cup](https://en.wikipedia.org/wiki/Fed_Cup) or the [Chess Olympiad](https://en.wikipedia.org/wiki/Chess_Olympiad) where a good match-up strategy (who plays whom) can lead to results that defy expectations.

This project aims to solve the problem of determining the best match-up strategy that a team should adopt to maximize chances of winning in a sequence of individual games.

Historically, there have been several instances, where a team rubber has been won purely on the basis of a match-up strategy (who plays whom), despite being at a disadvantage in terms of individual ability or expertise.

The origin of the problem lies in the legendary [Chinese horse race story](http://chinese-story-collection.blogspot.com/2010/10/tian-jis-horse-race-tian-ji-sai-ma.html) involving the King of Qi and General Tianji which took place more than 2000 years ago. In this problem each player owned three horses of different speed classes and had to choose the sequence of horses to compete against each other. Tianji beat the King of Qi despite having horses that were slower than the King's horses of the corresponding speed category. He did so by racing his slowest horse against the King's fastest, his fastest horse against the King's moderate-paced, and his moderate-paced against the King's slowest. Although he lost the first match by a large margin, he won the other two rounds, thereby winning overall. The figure below illustrates this scenario, where K1 > G1 > K2 > G2 > K3 > G3. ![horse][horse]

More recently, at the Chess Olympiad in 2014, the Indian contingent, despite being seeded $19^{th}$, managed to win the bronze medal in the tournament. According to experts, strategic planning was key to India's success, apart from great performances by the members of the team ([citation article](https://www.sportskeeda.com/chess/solid-play-strategic-planning-key-to-indias-success-at-chess-olympiad)). The Indian team had decided to adopt a success strategy of fielding the strongest players at the lower boards and a mix of highly rated and lower-rated but solid players at the top two boards.

The aforementioned event perfectly exemlifies the importance of match-up strategy in team sports involving a series of individual games. It should be noted that solving the problem of determining a team's best one-on-one strategy is not restricted to the realm of chess and is a much more universal challenge. In fact, the task at hand becomes infinitely more engaging and interesting when one thinks about its numerous applications. The techniques described in this project are applicable, without any modification, to other team sports such as tennis, wrestling and boxing, that involve a series of one-on-one games to determine the victorious team. The only prerequisite is the existence of an empirical or theoretical mathematical model to determine realtive chances of winning or expected score obtained in individual matches. Moreover, the models developed within the project can also be used to develop strategies for online battle strategy games like Age of Empires and Clash of Clans as well as for determining squad composition and battle order in card and video games like Pokemon.

Now that we have obtained a bird's eye view of the problem, its origin and potential applications of a solution, in the subsequent subsections, we provide an overview of the specifics of the problem (jargon) as well as our approach to finding a solution (outline).

Here, we define a set of technical terms that form the foundation of the concepts discussed in this report.

**Rubber:** A rubber is a team contest consisting of a sequence of successive games that is won by the side that wins a majority of the individual games. Famous examples of events that involve rubbers are the [Davis Cup](https://en.wikipedia.org/wiki/Davis_Cup), the [Fed Cup](https://en.wikipedia.org/wiki/Fed_Cup) in tennis and the [Chess Olympiad](https://en.wikipedia.org/wiki/Chess_Olympiad), the [World Team Chess Championship](https://en.wikipedia.org/wiki/World_Team_Chess_Championship) in chess.

**Strategy:** In game theory, a player's strategy is any of the options he or she can choose in a setting where the outcome depends not only on his own actions but on the action of others. In the case of a team contest(rubber), we define a strategy as the assignment of each team member to a member of the opposition.

**Pure Strategy:** A [pure strategy](http://www.gametheory.net/dictionary/PureStrategy.html) is an unconditional, defined choice that a player makes in a game. For example, ALWAYS (100% of the time) assigning players 1, 2, 3, 4 to play against opposition's D, A, B, C respectively is a pure strategy.

**Mixed Strategy:** A [mixed strategy](http://www.gametheory.net/dictionary/MixedStrategy.html) is an assignment of a probability to each pure strategy.
For example, assigning team players 1, 2, 3, 4 to play against opposition's D, A, B, C respectively for 50% of the time and against opposition's C, A, D, B respectively for the remaining 50% of the time is an example of a mixed strategy. A mixed strategy can simply be considered as the probability distribution one uses to randomly choose among available actions in order to avoid being predictable.

Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular pure strategy is selected with probability 1 and every other strategy with probability 0.

**Nash Equilibrium:**
[Nash Equilibrium](http://www.investopedia.com/terms/n/nash-equilibrium.asp#ixzz4gSfkX0Wn) is a pair of strategies in which each player’s strategy is the best response to the other player’s strategy and no player has an incentive to deviate from his chosen strategy after considering an opponent's choice. Overall, in a Nash Equilibrium, a team can receive no incremental benefit (only stands to lose) from changing actions, assuming other team remains constant in its strategy.
For example, let us assume one team adopts a strategy of ordering players as 1, 2, 3, 4 and let the other team adopt a strategy of B, A, C, D without knowledge of each others' strategies. Now, the the first team's strategy is revealed to the second team and vice versa. If neither team decides to switch their strategy with this new information, then the strategy pair ({1234}, {BACD}) is said to form a Nash Equilibrium.

A more detailed explanation of the aforementioned concepts with respect to the popular Rock-Paper-Scissor game can be found in the [Appendix](#Game-Theory-Jargon:-Rock-Paper-Scissor). Having understood the game theoretic concepts involved in solving the problem of strategy determination, the next subsection describes the outline of our report particularly the incremental approach of starting from a naive model and building a more generalizd solution.

Our project attempts to use a combination of optimization and the aforementioned game theoretic concepts for determining the optimal match-up strategy for a team to maximize its probability of winning a rubber under a set of conditions that simulate a real world scenario.

We go about this task by building a series of optimization models each more comprehensive than the previous one. We start with the simple case of designing a winning pure strategy for the home team when the opposition's strategy is known. We then eliminate the assumption of knowledge of opposition's strategy in the second model and replace it with the assumption that the opposition will always respond by adopting a strategy to minimize the home team's chances of winning. We also try to check if a pure Nash Equilibrium exists in this case. In the subsequent model, we remove the constraint of being restricted to pure strategies. In this model, we try to find the optimal mixed strategy from the perspective of both teams to determine if a mixed Nash equilibrium exists. Finally, we add the complexity of determining a team's composition (from a set of available players) in addition to a team's strategy while having a budget trade-off in place.

We illustrate the aforementioned scenarios (starting from section 3) by using a case study approach with chess teams being the focus. The next section provides details and building blocks of the chess ecosystem that are necessary for the subsequent optimization modelling tasks.

</div>

Before we proceed to the optimization models, we present a brief overview of the rules, scoring and rating system involved in team chess tournaments. This section is intended to familiarize the reader with the chess ecosystem to fill any gaps in background knowledge that is required for understanding the remainder of the report.

- In tournaments like the [Chess Olympiad](https://en.wikipedia.org/wiki/Chess_Olympiad)) and the [World Team Chess Championship](https://en.wikipedia.org/wiki/World_Team_Chess_Championship), among others, each team clash (rubber) consists of 4 individual games.
- In a tournament match, colors are assigned to the teams for the first board. After that, the colors switch on alternating boards. For instance, if a team's first board plays White, the second board will play Black, and the third board will play White. All of a team's odd numbered boards will play one color, while all of the same team's even numbered boards will play the other color. This is done in order to eliminate the [first-move advantage](https://en.wikipedia.org/wiki/First-move_advantage_in_chess) described in the next section.
- At the end of each game, the winning team gets 1 point. The losing side earns no points and a draw results in both teams being allotted 0.5 points.

- We modeled the winning probability of a player based on the [ELO rating](https://en.wikipedia.org/wiki/Elo_rating_system) of the player and his opponent. The model used was based on a [study](https://chessprogramming.wikispaces.com/Pawn+Advantage%2C+Win+Percentage%2C+and+ELO) by Sune Fischer and Pradu Kannan in Dec. 2007. Details of the model can be found here: [Elo-Rating & Win-Probability](https://chessprogramming.wikispaces.com/Match+Statistics).

Expected score: $Expected Score_{player 1} = \frac{1}{1 + 10^{-\frac{\Delta}{400}}}$ where $\Delta = Elo_{player 1} - Elo_{player 2}$ - It is important to note that the winning probability is also the expected score from a given game. This is because there is no effect of weights since points scored are betwen 0 and 1. This is why the expected score from a match between teams is simply the sum of expected scores (interchangeably probabilities of winning) from each individual game of the match.
- Another important distinction to remember is that the expected scores reported in this report are always with respect to the home team. Even while modelling from the opponent's perspective, the score reported is the home team's score. This is because we assume that the home team tries to maximize its expected score and the opposition tries to lower it.
- The [first-move advantage](https://en.wikipedia.org/wiki/First-move_advantage_in_chess) in chess is the inherent advantage of the player (White) who makes the first move in chess. Chess players and theorists generally agree that White begins the game with some advantage. We have modeled this advantage by adding 35 to the Black side's player's ELO. [citation required]()

First-move advantage: $Elo_{white} = Elo_{black} + 35 $ - To make things more interesting, we decided to add the complexity of making each round (individual game) distinct by adding weights to the maximum points that can be earned from each of the rounds. The maximum points reduce linearly (by a small fraction) as one moves from the first round to the fourth round. Doing this not only makes the rounds distinguishable, but also helps factor in the perception of the initial rounds having more significance. Weighted rounds: $Weight_{round\_k}: 1 + \frac{1}{100}(1 - \frac{k}{N})$ where $N$ is the number of players under consideration (usually team size).

In [1]:

```
# naive score function; discrete points {0, 0.5, 1}; unrealistic
function get_simple_score(player, opponent, order, round, elo)
x = elo[player] - elo[opponent] + 0.75 * (-1)^(order + 1)
if x > 0
return 1
elseif x < 0
return 0
else
return 0.5
end
end;
# expected score of a game (continuous); each game identical; first-move advantage;
function get_chess_score(player, opponent, order, round, elo)
x = elo[player] - elo[opponent] + 35*(-1)^(order + 1)
return 1/(1 + 10^(-x/400))
end;
# expected score of a game; each game being distinct; first-move advantage;
function get_chess_score2(player, opponent, order, round, elo)
x = elo[player] - elo[opponent] + 35*(-1)^(order % 2 + 1)
return 1/(1 + 10^(-x/400)) * (1.01 - 0.01round/N_PLAYERS)
end;
# expected score of a game; each game being distinct; selection from a pool of players
function get_chess_score3(player, opponent, order, round, elo)
x = elo[player] - elo[opponent] + 35*(-1)^(order % 2 + 1)
return 1/(1 + 10^(-x/400)) * (1.01 - 0.01round/N_POOL)
end;
# total expected score of match; each game ientical
function get_match_score(team_player, team_opponent, order, elo)
sum(get_chess_score(team_player[i], team_opponent[i],
i + order + 1, elo) for i=1:length(team_player))
end;
# total expected score of match; each game distinct
function get_match_score2(team_player, team_opponent, order, elo)
sum(get_chess_score2(team_player[i], team_opponent[i],
i + order + 1, i, elo) for i=1:length(team_player))
end;
```

We obtained real world data of the top 200 players in the world (ranked by ELO rating) from online sources including [365chess](https://www.365chess.com/top-chess-players.php) and the world chess federation [FIDE](https://ratings.fide.com/top.phtml?list=men).

For the purpose of this project we consider allplayers from USA to constitute the home contingent. Players from Russia form the opposition pool. The top 4 players from USA and Russia form the home team and visitor team respectively.

In the absence of data about player worth, we generated player worth as a non-linear function of the players' Elo rating and number of matches played (experience).

In [2]:

```
# one is enough!! No of games - 1's score = 2's score
#Pkg.add("Images") # add images package if not already installed
using JuMP, Clp, Cbc, NamedArrays
using PyPlot
import Combinatorics
```

In [3]:

```
raw = readcsv("./data/Chess.csv")
(m,n) = size(raw) # dimensions of data
players = raw[2:end, 2] # vector of players
elos = raw[2:end, 3] # vector of Elo ratings
federations = raw[2:end, 6] # vector of
worth = raw[2:end, 9] # player worth in thousand dolaars
num_players = length(players)# number of total players (200)
player_elo = Dict(zip(players, elos)) # Dictionary player to Elo rating
player_country = Dict(zip(players, federations))# Dictionary player to country
player_worth = Dict(zip(players, worth)) # Dictionary player to worth
player_pool_USA = []
player_pool_Russia = []
# building USA and Russia pools
for i in 2: m
if raw[i, 6] == "United States"
push!(player_pool_USA, raw[i, 2])
elseif raw[i, 6] == "Russia"
push!(player_pool_Russia, raw[i, 2])
end
end
N_PLAYERS = 4 # number of players in both teams
team_USA = player_pool_USA[1:N_PLAYERS] # Best 4 USA players form the home team
team_Russia = player_pool_Russia[1:N_PLAYERS];# Best 4 Russia players form the visitor team
# sticking to descending elo order for fixed strategy
plan_Russia = [1 2 3 4];
pool = players
# N_POOL = length(pool) # top 200 players
N_POOL = 10 # top 10 players
N_CHOSEN = 4; # teams of 4 to be formed
```

Out[3]:

In [4]:

```
function print_match(aopt::Array, team_name, players, int_mode=true)
m = size(aopt)[1]
n = size(aopt)[2]
if int_mode
solution = NamedArray([Int(aopt[i, k])
for i=1:m, k=1:n], (players, [1:n;]),(team_name, "Table"))
else
solution = NamedArray([aopt[i, k]
for i=1:m, k=1:n], (players, [1:n;]),(team_name, "Table"))
end
println(solution)
println()
end
function print_fixed_strategy(xopt, N_PLAYERS,
team_home, team_visitor, plan_visitor, elo)
println("Strategy: ")
for k in 1:N_PLAYERS
for i in 1:N_PLAYERS
if xopt[i, k] != 0
@printf "%20s (%d) vs " team_home[i] elo[team_home[i]]
@printf "(%d) %s\n" elo[team_visitor[plan_visitor[k]]] team_visitor[plan_visitor[k]]
end
end
end
end
;
function print_pure_strategy(N_PLAYERS, team_home, team_visitor, elo, xopt, yopt)
for k in 1:N_PLAYERS # round
print("Round: ", k)
for i in 1:N_PLAYERS
if xopt[i, k] != 0
home_player = string(team_home[i], " (",elo[team_home[i]], ")")
@printf "%30s %5.5s " home_player "vs."
end
end
for j in 1:N_PLAYERS
if yopt[j, k] != 0
visitor_player = string( "(",elo[team_visitor[j]], ") ", team_visitor[j])
@printf " %s\n" visitor_player
end
end
end
end;
# score_matrix function from 2 lists
function get_score_matrix(list1::Array, list2::Array, players_dict,
get_match)
# assume that team 1 plays first
dim1 = length(list1)
dim2 = length(list2)
scores = Matrix(dim1, dim2)
for i in 1:dim1
for j in 1:dim2
scores[i, j] = get_match(list1[i], list2[j], 1, players_dict)
end
end
return scores
end;
# ordered combination: return a list of nplayers obtained from a pool
function get_combinations(list_players::Array, nchosen::Int)
return collect(combinations(list_players, nchosen))
end;
function get_permutations(list_players::Array)
return collect(permutations(list_players, N_PLAYERS))
end;
list_home = get_permutations(team_USA[1:N_PLAYERS])
list_visitor = get_permutations(team_Russia[1:N_PLAYERS])
scores_matrix = get_score_matrix(list_home, list_visitor, player_elo, get_match_score2);
```

We start by modelling the problem as a simple assignment problem where each player needs to be assigned to a board (or round). This model assumes the opposition's (visitor) strategy is already fixed and known to the home team a priori. This is a generally naive scenario which can have realistic applications if the home team knows with certainty what the opposition's strategy is.

We simply chose Elo-rating order as the opposition's fixed pure strategy. This is a reasonable assumption as several teams follow this strategy in reality. In fact, certain chess tournaments mandate that teams stick to the natural Elo-rating order.

Consider each table (fixed) as a position, our problem is to assign each player into exactly a position so that we can get the greatest score. This is a simple assignment problem:

Suppose:

- $x_{i,k}$: binary variable indicating the weight of connection from node i of layer 1 to node k of layer 2
- $score(i, j, j)$: expected score of the match between player i of team home and player j of team visitor at table number j

Our problem becomes:

$$ \begin{aligned} \underset{x \in \mathbb{\{0, 1\}}}{\text{maximize}} \qquad& \sum_{i=1}^N\sum_{j=1}^N x_{i,j} * score(i, j, j) \\ \text{subject to:} \qquad& \sum_{i=1}^{N}x_{i,k} = 1 && k=1,\dots,N\\ \qquad& \sum_{j=1}^{N}x_{k,j} = 1 && k=1,\dots,N \end{aligned} $$ </p>

In [5]:

```
function get_fixed_strategy(N_PLAYERS, team_home,
team_visitor, plan_visitor, get_score, elo)
simpleModel = Model(solver=CbcSolver())
@variable(simpleModel, x[1:N_PLAYERS, 1:N_PLAYERS], Bin) # matrix of main assignment variables
@constraint(simpleModel, supply[k in 1:N_PLAYERS],
sum(x[k, j] for j=1:N_PLAYERS) == 1) # supply constraint
@constraint(simpleModel, demand[k in 1:N_PLAYERS],
sum(x[i, k] for i=1:N_PLAYERS) == 1) # demand constraint
@objective(simpleModel, Max, sum(x[i, k] * get_score(
team_home[i], team_visitor[plan_visitor[k]], k, k, elo)
for i=1:N_PLAYERS, k=1:N_PLAYERS)) # maximization objective function
solve(simpleModel)
println("Score: ", getobjectivevalue(simpleModel))
xopt = getvalue(x)
return xopt
end;
```

We solve three variations of the naive model.

The first variation is unrealistic as it ignores the degree of certainty with which each game can be won by a team.

- It simply assumes that a player scores 1 point if his Elo rank is greater than his opponent's Elo rating by a certain amount and scores 0 if it is lwer than that certain amount. 0.5 is scored in other cases.
- Despite being outrageously unrealistic, it is included in this report to give an intuitive sense of how a team can maximize its score by assigning their lowest ranked player to the opposition's highest rank and losing one single match while their own highest ranked players rake in the points by each play with lower ranked opposition partners.
- The below function call illustrates this exact scenario.

In [6]:

```
xopt = get_fixed_strategy(N_PLAYERS,
team_USA, team_Russia, plan_Russia, get_simple_score, player_elo);
print_fixed_strategy(xopt, N_PLAYERS,
team_USA, team_Russia, plan_Russia, player_elo)
```

The second variation is more realistic, wherein the expected score also serves as a measure of certainty of winning a game.

- We model the first-move advantage wherein the white player (starts the game) has an advantage
- Each round is identical in terms of points that can be earned

In [7]:

```
xopt = get_fixed_strategy(N_PLAYERS,
team_USA, team_Russia, plan_Russia, get_chess_score, player_elo);
print_fixed_strategy(xopt, N_PLAYERS,
team_USA, team_Russia, plan_Russia, player_elo)
```

The third variation is even more interesting as we add a distinction between each round. According to our weights, it is more important to score earlier rounds than later ones.

In [8]:

```
x_opt = get_fixed_strategy(N_PLAYERS,
team_USA, team_Russia, plan_Russia, get_chess_score2, player_elo);
print_fixed_strategy(xopt, N_PLAYERS,
team_USA, team_Russia, plan_Russia, player_elo)
```

Although the above model (along with scoring variations) is a good place to start since it provides a quick overview of different chess strategies, we feel it is not a realistic model due to the assumption that a team knows the opposition's strategy a priori. While it may happen that teams can accurately guess their opposition's strategy based on experience or insider information, the chances of such an occurrence are rather rare. Hence, we now relax the assumption of knowledge of opposition's strategy to obtain a more realistic model.

In this phase, we eliminate the assumption of knowledge of opposition's strategy. We replace it with the assumption that the opposition will always respond by adopting a strategy to minimize the home team's expected score. In other words, the opposition is expected to be rational (takes decisions in its own best interest), which is an incredibly reasonable assumption in the real world.

Here is the intuition behind this model:

- Team 1 expects that, for a given strategy choice x, team 2 will choose the best response (smallest entry) in row x. Team 1 will then decide to choose the row with the maximum smallest entry.

- This is formally known as a Maximin strategy

What if we solved this from the opposition's perspective?

- Similar to the above logic, Team 2 can expect that, for a given strategy choice y, team 1 will respond by choosing the best strategy (largest entry) in column y. This means that team 2 has to choose the column with the minimum largest entry.

- This is formally known as a Minimax strategy.

We reiterate the fact that the objective is always with respect to the home team's expected score, even when the problem is being looked at from the visitors' perspective.

Here we formulate both the Maximin and Minimax versions of this model. Intuitively, we expect that if both the above strategies lead to the same optimal result, then that solution must be a pure Nash Equilibrium. Neither team would have any incentive to change strategies and would continue to always play the same optimal strategy (Maximin for home team and Minimax for opposition team).

Both home team and visitor try to assign their players to each table so that they can maximize their expected return, which means:

- Home team (X side) maximizes expected score.
- Visitor team (Y side) minimizes expected score.

This can be modeled as an 2-layer assignment problem:![maximin][2layer]

Suppose:

- $x_{i,k}$: binary variable indicating the weight of connection from node i of home layer to node k of table layer
- $y_{j,k}$: binary variable indicating the weight of connection from node j of visitor layer to node k of table layer
- $score(i, j, k)$: expected score of the match between player i of team home and player j of team visitor at table number k

Our problem becomes:

$$
\begin{aligned}
\underset{x \in \mathbb{\{0, 1\}}}{\text{max}}\underset{y \in \mathbb{\{0, 1\}}}{\text{min}}\qquad& \sum_{i=1}^N\sum_{j=1}^N\sum_{k=1}^N score(i, j, k) * x_{ik} * y_{jk} \\
\text{subject to:}
\qquad& \sum_{i=1}^{N}x_{i,k} = 1 \qquad k=1,\dots,N \qquad \sum_{k=1}^{N}x_{i,k} = 1 \qquad i=1,\dots,N\\
\qquad& \sum_{j=1}^{N}y_{j,k} = 1 \qquad k=1,\dots,N \qquad \sum_{k=1}^{N}y_{j,k} = 1 \qquad j=1,\dots,N
\end{aligned}
$$

To solve this model, we convert inside model (Min model) into its dual form so that our maximin becomes a max problem.

Consider an assignment of X as a parameter, rewrite the objective as a function of variable y:

$$f(y) = \sum_{j=1}^N\sum_{k=1}^N (\sum_{i=1}^N score(i, j, k) * x_{ik}) * y_{jk}$$
Then, our model is:
$$
\begin{aligned}
\underset{y \in \mathbb{\{0, 1\}}}{\text{min}} \qquad& f(y)\\
\text{subject to:}
\qquad& y_{j, k} \in {0, 1} && j,k=1,\dots,N\\
\qquad& \sum_{j=1}^{N}y_{j,k} = 1 && k=1,\dots,N\\
\qquad& \sum_{k=1}^{N}y_{j,k} = 1 && j=1,\dots,N
\end{aligned}
$$

Notice that this model is an 1-layer assignemnt problem with the incidence matrix being totally unimodular. Hence, a strong duality always hold. Therefore, the converted model has the same optimal solution as the former one.
Dual form of this model is:

$$
\begin{aligned}
\underset{p,q,r \in \mathbb{R}}{\text{maximize}}\qquad& \sum_{j=1}^{N}{p*j} + \sum*{k=1}^{N}{q_k}

- \sum
*{j=1}^{N}\sum*{k=1}^{N}r_{jk}\ \text{subject to:} \qquad& p_j, q*k: free && j,k=1,\dots,N \ \qquad& r*{jk} \leq 0 && j,k=1,\dots,N \ \qquad& p_j + q*k + r*{jk} \leq \sum*{i=1}^N score(i, j, k) * x*{i,k} && i,j,k=1,\dots,N \end{aligned} $$

Replace this dual form to the main model, we have the final model:

$$
\begin{aligned}
\underset{p,q,r \in \mathbb{R}, x \in {0,1}}{\text{Max}} \qquad& \sum_{j=1}^{N}{p*j} + \sum*{k=1}^{N}{q_k}

- \sum
*{j=1}^{N}\sum*{k=1}^{N}r_{jk}\ \text{subject to:} \qquad& p_j, q*k: free && j,k=1,\dots,N \ \qquad& r*{jk} \leq 0 && j,k=1,\dots,N \ \qquad& \sum*{i=1}^{N}x*{i,k} = 1 && k=1,\dots,N\ \qquad& \sum*{k=1}^{N}x*{i,k} = 1 && i=1,\dots,N\ \qquad& p_j + q*k + r*{jk} \leq \sum*{i=1}^N score(i, j, k) * x*{i,k} && i,j,k=1,\dots,N\ \end{aligned} $$

Home Team (Maximin)

In [9]:

```
function get_team_pure_strategy_maximin(N_PLAYERS, team_home, team_visitor, get_score, elo)
maximinModel = Model(solver=CbcSolver())
@variable(maximinModel, x[1:N_PLAYERS, 1:N_PLAYERS], Bin) # matrix of main assignment variables
@variable(maximinModel, p[1:N_PLAYERS]) # vector of inner model dual variables
@variable(maximinModel, q[1:N_PLAYERS]) # vector of inner model dual variables
@variable(maximinModel, r[1:N_PLAYERS, 1:N_PLAYERS] <= 0) # vector of inner model dual variables
@constraint(maximinModel, supply[k in 1:N_PLAYERS],
sum(x[k, j] for j=1:N_PLAYERS) == 1) # supply constraint
@constraint(maximinModel, demand[k in 1:N_PLAYERS],
sum(x[i, k] for i=1:N_PLAYERS) == 1) # demand constraint
@constraint(maximinModel, mincons[j in 1:N_PLAYERS, k in 1:N_PLAYERS],
p[j] + q[k] + r[j, k] <=
sum(get_score(team_home[i], team_visitor[j], k, k, elo) * x[i, k]
for i=1:N_PLAYERS)) # inner dual constraint
@objective(maximinModel, Max, sum(p) + sum(q) + sum(r)) # maximization objective function
solve(maximinModel)
println("Maximin Score: ", getobjectivevalue(maximinModel))
return getvalue(x)
end;
function get_expected_opponent_pure_strategy_maximin(N_PLAYERS, team_home, team_visitor, get_score, elo, xopt)
ym = Model(solver=CbcSolver())
# matrix of assignment variables
@variable(ym, y[1:N_PLAYERS, 1:N_PLAYERS], Bin)
# supply and demand constraints
@constraint(ym, supply[k in 1:N_PLAYERS], sum(y[k, j] for j=1:N_PLAYERS) == 1)
@constraint(ym, demand[k in 1:N_PLAYERS], sum(y[i, k] for i=1:N_PLAYERS) == 1)
# objective expression
@expression(ym, score, sum(get_score(team_home[i],
team_visitor[j], k, k, elo) * xopt[i, k] * y[j, k]
for i=1:N_PLAYERS, j in 1:N_PLAYERS, k in 1:N_PLAYERS))
# minimization of objective expression
@objective(ym, Min, score)
solve(ym)
return getvalue(y)
end;
```

Visitor Team (Minimax)

In [10]:

```
function get_opponent_pure_strategy_minimax(N_PLAYERS, team_home,
team_visitor, get_score, elo)
minimaxModel = Model(solver=CbcSolver())
@variable(minimaxModel, y[1:N_PLAYERS, 1:N_PLAYERS], Bin) # matrix of main assignment variables
@variable(minimaxModel, p[1:N_PLAYERS]) # vector of inner model dual variables
@variable(minimaxModel, q[1:N_PLAYERS]) # vector of inner model dual variables
@variable(minimaxModel, r[1:N_PLAYERS, 1:N_PLAYERS] >= 0) # vector of inner model dual variables
@constraint(minimaxModel, supply[k in 1:N_PLAYERS],
sum(y[k, j] for j=1:N_PLAYERS) == 1) # supply constraint
@constraint(minimaxModel, demand[k in 1:N_PLAYERS],
sum(y[i, k] for i=1:N_PLAYERS) == 1) # demand constraint
@constraint(minimaxModel, mincons[j in 1:N_PLAYERS, k in 1:N_PLAYERS],
p[j] + q[k] + r[j, k] >=
sum(get_score(team_home[i], team_visitor[j], k, k, elo) * y[i, k]
for i=1:N_PLAYERS)) # inner dual constraint
@objective(minimaxModel, Min, sum(p) + sum(q) + sum(r)) # minimization objective function
solve(minimaxModel)
println("Minimax Score: ", getobjectivevalue(minimaxModel))
return getvalue(y)
end;
function get_expected_team_pure_strategy_minimax(N_PLAYERS, team_home, team_visitor, get_score, elo, yopt)
xm = Model(solver=CbcSolver())
# assignment variable
@variable(xm, x[1:N_PLAYERS, 1:N_PLAYERS], Bin)
# supply and demand constraints
@constraint(xm, supply[k in 1:N_PLAYERS], sum(x[k, j] for j=1:N_PLAYERS) == 1)
@constraint(xm, demand[k in 1:N_PLAYERS], sum(x[i, k] for i=1:N_PLAYERS) == 1)
# objective expression
@expression(xm, score, sum(get_score(team_home[i],
team_visitor[j], k, k, elo) * yopt[i, k] * x[j, k]
for i=1:N_PLAYERS, j in 1:N_PLAYERS, k in 1:N_PLAYERS))
# maximization of objective expression
@objective(xm, Max, score)
solve(xm)
return getvalue(x)
end;
```

**Maximin:**

- We first determine the home team's strategy while it assumes the opposition will choose the best response to its strategy.
- Using the optimum home strategy found, we then compute the opposition's optimal strategy.
- As a confirmation for accuracy of the model, we provide an alternate solution to the maximin problem that involves enueration of all strategies. The enumeration verification can be found in the [Appendix](#Maximin-and-Minimax-by-enumeration-of-strategy-set)

In [11]:

```
xopt = get_team_pure_strategy_maximin(N_PLAYERS,
team_USA, team_Russia, get_chess_score, player_elo);
println("Home Strategy")
print_match(xopt, "USA", team_USA)
yopt = get_expected_opponent_pure_strategy_maximin(N_PLAYERS,
team_USA, team_Russia, get_chess_score, player_elo, xopt)
println("Expected Visitor Strategy")
print_match(yopt, "Russia", team_Russia)
println("Maximin Strategy: ")
print_pure_strategy(N_PLAYERS, team_USA, team_Russia, player_elo, xopt, yopt)
```

**Minimax: **

- We now determine the visitor team's strategy while it assumes the home team will choose the best response to its strategy.
- Using the optimum visitor strategy found, we then compute the home team's optimal strategy.
- As a confirmation for accuracy of the model, we provide an alternate solution to the minimax problem that involves enueration of all strategies. The enumeration verification can be found in the [Appendix](#Maximin-and-Minimax-by-enumeration-of-strategy-set)

In [12]:

```
yopt = get_opponent_pure_strategy_minimax(N_PLAYERS,
team_USA, team_Russia, get_chess_score, player_elo);
println("Visitor Strategy")
print_match(yopt, "Russia", team_Russia)
xopt = get_expected_team_pure_strategy_minimax(N_PLAYERS,
team_USA, team_Russia, get_chess_score, player_elo, yopt);
println("Expected Home Strategy")
print_match(xopt, "USA", team_USA)
print_pure_strategy(N_PLAYERS, team_USA, team_Russia, player_elo, xopt, yopt)
```

We present below a heat map to visualize the strategy set for both teams. The strategies along the axis are lexicographically ordered according to rank (highest Elo has rank 1). For instance, strategy at index 0 and 1 of USA are [1,2,3,4] and the indices 22 and 23 are [4,3,1,2] and [4,3,2,1] respectively where 1 is Wesley So (2815), 2 is Fabiano Caruana (2802), 3 is Hikaru Nakamura (2786), and 4 is Alexander Onischuk (2685). The ordering is similar for Russia with 1 being Vladimir Kramnik (2811) , 2 being Sergey Karjakin (2783), 3 being Peter Svidler (2755) and 4 being Ian Nepomniachtchi (2751).

In [13]:

```
function plot_heatmap(scores_matrix::Array, labelx, labely)
imshow(scores_matrix, cmap="jet", aspect="auto", interpolation="nearest")
colorbar()
plt[:ylabel](labely)
plt[:xlabel](labelx)
end;
plot_heatmap(scores_matrix, "Russia", "USA");
```

The darker the shade of red, the higher the expected score of team USA. The higher the shade of blue, the lower the expected score of team USA (better for Russia). We can see that the in thetop right region of the heat map, certain cells are dark red and in the bottom left region cells are predominantly shades of blue. This is interesting because our model lays some stress on the importance of winning earlier rounds. Hence, it is in the best interest of teams to play according to rank wise lexicographic ordering [1 2 3 4]. That is why the lower portions of the heat map are more blue because these strategies are closer to reverse lexicographic ordering for Team USA.

Finally, we compare and analyse the results of the Maximin and Minimax formulation. Since the Maximin (USA) and Minimax (Russia) optimal strategies have different optimal values, a pure Nash equilibrium does not exist. This is intuitive since we can't expect there to be one pure strategy in real life since aticking to one strategy would make the team predictable. If a team is predictable in its actions, a comparable opposition can always come up with a strategy to beat it. In order to find a Nash Equilibrium (if one exists), we must now consider mixed strategies.

Having seen that no pure strategy Nash Equilibrium exists for our problem, we must eliminate the restriction to pure strategies.

Since pure strategy doesn't have Nash equilibrium, in this part, we will solve this problem using mixed strategy and try to model it as a relaxed assignment problem. We start from the formal model of mixed strategy.

** Primal: **

- $p_i$ is the probability of strategy i of home team
- $match_score(h_i, v_j)$ is the expected match score when home team uses strategy $h_i$ and visitor uses strategy $v_j$

(1) Now, suppose $x_{ik}$ is probability of player i of home team playing at table k, we have:

- $x_{ik}$ is independent with all strategies of visitor
- $\sum_{i=1}^{N}x_{i,k} = 1$
- $\sum_{k=1}^{N}x_{i,k} = 1$

(2) Consider a strategy of visitor: $v_j = [v_{j1}, v_{j2}, ..., v_{jM}]$, then total score in $g_j$ to which $v_{jk}$ contributes is: $\sum_{i=1}^N score(i, j, k) * x_{ik}$

From (1) & (2), we see that mixed strategy can be modeled as a relaxed assignment problem. Using the similar approch for pure strategy, we can model this as: $$ \begin{aligned} \underset{p,q,r,x \in \mathbb{R}}{\text{maximize}} \qquad& \sum_{j=1}^{N}{p_j} + \sum_{k=1}^{N}{q_k} + \sum_{j=1}^{N}\sum_{k=1}^{N}r_{jk}\\ \text{subject to:} \qquad& 0 \leq x_{i,k} \leq 1 && i=1,\dots,N (constraint 0) \\ \qquad& p_j, q_k: free && j,k=1,\dots,N \\ \qquad& r_{jk} \leq 0 && j,k=1,\dots,N \\ \qquad& \sum_{i=1}^{N}x_{i,k} = 1 && k=1,\dots,N (constraint 1)\\ \qquad& \sum_{k=1}^{N}x_{i,k} = 1 && i=1,\dots,N (constraint 2)\\ \qquad& p_j + q_k + r_{jk} \leq \sum_{i=1}^N score(i, j, k) * x_{i,k} && j,k=1,\dots,N (constraint 3)\\ \end{aligned} $$

** Dual: **

To prove that Nash equilibrium is obtained, we find the dual form of the model above:

$$ \begin{aligned} \underset{b,c,e,a \in \mathbb{R}}{\text{minimize}} \qquad& \sum_{i=1}^{N}{b_i} + \sum_{k=1}^{N}{c_k} + \sum_{i=1}^{N}\sum_{k=1}^{N}e_{ik}\\ \text{subject to:} \qquad& 0 \leq a_{j,k} \leq 1 && j=1,\dots,N (~ constraint 3)\\ \qquad& b_j, c_k: free && i,k=1,\dots,N (~ constraint 1, 2)\\ \qquad& e_{ik} \geq 0 && i,k=1,\dots,N (~ constraint 0)\\ \qquad& \sum_{j=1}^{N}a_{j,k} = 1 && k=1,\dots,N\\ \qquad& \sum_{k=1}^{N}a_{j,k} = 1 && j=1,\dots,N\\ \qquad& b_i + c_k + e_{ik} \geq \sum_{j=1}^N score(i, j, k) * a_{j,k} && i,k=1,\dots,N\\ \end{aligned} $$ We now formulate the primal form of the visitor team.
** Primal form of visitor team (Minimax): **

On inspection, we notice that the above formulations of the dual form of home team and primal form of visitor team are actually identical ! This is a non-trivial observation, one that we utilize in the results section.

In [14]:

```
function get_MSNE(N_PLAYERS, team_home, team_visitor, get_score, elo)
nashModel = Model(solver=ClpSolver())
@variable(nashModel, 1 >= x[1:N_PLAYERS, 1:N_PLAYERS] >= 0)# matrix of main assignment variables
@variable(nashModel, p[1:N_PLAYERS]) # vector of inner model dual variables
@variable(nashModel, q[1:N_PLAYERS]) # vector of inner model dual variables
@variable(nashModel, r[1:N_PLAYERS, 1:N_PLAYERS] <= 0) # vector of inner model dual variables
@constraint(nashModel, supply[k in 1:N_PLAYERS],
sum(x[k, j] for j=1:N_PLAYERS) == 1) # supply constraint
@constraint(nashModel, demand[k in 1:N_PLAYERS],
sum(x[i, k] for i=1:N_PLAYERS) == 1) # supply constraint
@constraint(nashModel, mincons[j in 1:N_PLAYERS, k in 1:N_PLAYERS],
p[j] + q[k] + r[j, k] <=
sum(get_score(team_home[i], team_visitor[j], k, k, elo)
* x[i, k] for i=1:N_PLAYERS)) # inner model dual constraint
@objective(nashModel, Max, sum(p) + sum(q) + sum(r)) # maximization objective function
solve(nashModel)
println("Optimal Score: ", getobjectivevalue(nashModel))
return getvalue(x)
end;
```

In [15]:

```
function get_MSNE_opponent(N_PLAYERS, team_home, team_visitor, get_score, elo)
nashModel = Model(solver=ClpSolver())
@variable(nashModel, 0 <= y[1:N_PLAYERS, 1:N_PLAYERS] <= 1)# matrix of main assignment variables
@variable(nashModel, p[1:N_PLAYERS]) # vector of inner model dual variables
@variable(nashModel, q[1:N_PLAYERS]) # vector of inner model dual variables
@variable(nashModel, r[1:N_PLAYERS, 1:N_PLAYERS] >= 0) # vector of inner model dual variables
@constraint(nashModel, supply[k in 1:N_PLAYERS],
sum(y[k, j] for j=1:N_PLAYERS) == 1) # supply constraint
@constraint(nashModel, demand[k in 1:N_PLAYERS],
sum(y[i, k] for i=1:N_PLAYERS) == 1) # demand constraint
@constraint(nashModel, maxcons[i in 1:N_PLAYERS, k in 1:N_PLAYERS],
p[i] + q[k] + r[i, k] >=
sum(get_score(team_home[i], team_visitor[j], k%2, k, elo)
* y[j, k] for j=1:N_PLAYERS)) # inner model dual constraint
@objective(nashModel, Min, sum(p) + sum(q) + sum(r)) # minimization objective function
solve(nashModel)
println("Optimal Score: ", getobjectivevalue(nashModel))
return getvalue(y)
end;
```

**Primal:**

We solve the primal version of the problem which is essentially like determining the optimum strategy from the home team's perspective (USA).

In [16]:

```
xopt = get_MSNE(N_PLAYERS,
team_USA, team_Russia, get_chess_score2, player_elo);
println("Home Mixed Strategy")
print_match(xopt, "USA", team_USA, false)
```

The result can be interpreted as follows:

- Player 1 of USA should play on table 2 in 71.4% of the matches and on table 4 for the remaining 28.6% of matches. Similarly, player 2 should play on table 1 an average of 79.5% of the time and on table 3 for 20.5% of the time on average; and so on, for the other players.
- This probability distribution is the optimal mixed strategy for the primal formulation (Team USA).
- To confirm the accuracy of our model, we also find the expected score of the optimal mixed strategy for the primal formulation using enumeration of strategies. This can be found in the [Appendix](#Nash-Equilibrium-by-enumeration-of-strategy-set)

**Dual:**

Solving the dual version of the problem is akin to solving the problem from the visitor team's perspective (Russia) due to the equivalence of home team dual program and visitor team primal program as illustrated in the mathematical model.

In [17]:

```
yopt = get_MSNE_opponent(N_PLAYERS,
team_USA, team_Russia, get_chess_score2, player_elo);
println("Visitor Mixed Strategy")
print_match(yopt, "Russia", team_Russia, false)
```

The result can be interpreted as follows:

- Player 1 of Russia should play on table 1 in 43% of the matches and on table 4 in 57% of matches on average. Similarly, player 2 should, on an average, play on table 1 for 79.5% of the time and on table 3 for 20.5% of the time; and so on, for the other players.
- This probability distribution is the optimal mixed strategy for the dual formulation (Team Russia).
- To confirm the accuracy of our model, we also find the expected score of the optimal mixed strategy for the dual formulation using enumeration of strategies. This can be found in the [Appendix](#Nash-Equilibrium-by-enumeration-of-strategy-set)

The optimum score obtained by solving the primal and dual formulations of the strategy determining problem is identical ! This implies that we have successfully found the mixed strategy Nash Equilibrium for this problem.

We now add the additional complexity of building a chess squad while maximizing the nash optimal expected score. To make things more realistic, we add a requirement to minimize budget of the squad. The next section models this trade-off squad-building problem.

We now concentrate on determining team composition in addition to the optimal strategy. The problem of buying a good team while not using too much money is a universal one in sports like soccer and basketball. We consider a scenario where all chess players of the world are available to play in a Chess League (akin to the [Indian Premier League](https://en.wikipedia.org/wiki/Indian_Premier_League#Tournament_format) in cricket). We maintain a pool of players (N_POOL is the number of players in the pool) that are available for playing. We model the problem to form the best team from the available pool of players. What makes the task particularly interesting is that other players whom we don't buy could become our opponents. This becomes a trade-off problem between increasing expected match score and decreasing the amount of money.

We steadily solve this problem by solving 2 sub problems:

- Selecting a team composition from a pool to maximize expected score.
- Selecting a team composition from a pool to maximize expected score with a budget trade-off.

**(1) Selecting a team composition of N players from a pool of M players**

![tradeoff][pool]

We can think this problem based from the perspective of a 2-team game. Home team selects nChosen players from the pool to maximize its expected score while the visitor team selects a team from the rest players to minimize the score. We modify the Nash equilibrium model above to solve this.

A remarkably different point from Nash equilibrium model is that if a player is selected from home team, he cannot play for the visitor team. To represent this "if-then" relationship, we introduce a binary variable z: $z_i = 1 \equiv$ player i is selected for home team:

- $z_i = 0 \implies x_i = 0: \sum_{k=1}^{N}y_{i,k} \leq 1 - \sum_{k=1}^{N}z_{i,k}$
- $z_i = 1 \implies y_i = 0: \sum_{k=1}^{N}x_{i,k} \leq \sum_{k=1}^{N}z_{i,k}$

Using the similar approach to convert the inside model into its dual form, our model now becomes:

$$
\begin{aligned}
\underset{p,q,r,x,z \in \mathbb{R}}{\text{maximize}}\qquad& \sum_{j=1}^{M} p_j (1-\sum_{k=1}^N z_{j,k})+\sum_{k=1}^N q_k +\sum_{j=1}^M\sum_{k=1}^N r_{jk}\\
\text{subject to:}
\qquad& 0 \leq x_{i,k} \leq 1 \qquad& i=1,\dots,M \qquad k=1,\dots,N \\
\qquad& z_i: binary \qquad& i=1,\dots,M \\
\qquad& p_j \leq 0 \qquad& j=1,\dots,M \\
\qquad& q_k: free \qquad& k=1,\dots,N \\
\qquad& r_{jk} \leq 0 \qquad& j=1,\dots,M \qquad k=1,\dots,N \\
\qquad& \sum_{i=1}^{M}x_{i,k} = 1 \qquad k=1,\dots,M \qquad& \sum_{k=1}^{N}x_{i,k} \leq 1 \qquad i=1,\dots,N\\
\qquad& \sum_{i=1}^{M}z_{i,k} = 1 \qquad k=1,\dots,M \qquad& \sum_{k=1}^{N}z_{i,k} \leq 1 \qquad i=1,\dots,N\\
\qquad& \sum_{k=1}^{N}x_{i,k} \leq \sum_{k=1}^{N}z_{i,k} \qquad& i=1,\dots,N\\
\qquad& p_j + q_k + r_{jk} \leq \sum_{i=1}^M score(i, j, k) * x_{i,k} \qquad& i,j=1,\dots,M \qquad k=1,\dots,N\\
\end{aligned}
$$

Using epigraph, we can convert quadratic term in the objective into a linear function:

$$
\begin{aligned}
\underset{p,z \in \mathbb{R}}{\text{maximize}}\sum_{j=1}^{M} p_j (1-\sum_{k=1}^N z_{j,k}) =
\underset{t \in \mathbb{R}}{\text{maximize}}\sum_{j=1}^{M} t_j \\
\text{subject to:}
\qquad& t_j \leq 0 && j=1,\dots,M \\
\qquad& t_j \geq - G * (1 - \sum_{k=1}^{N}z_{j,k}) && j=1,\dots,M \\
\qquad& t_j \leq p_j + G * \sum_{k=1}^{N}z_{j,k} && j=1,\dots,M \\
\qquad& (G \sim infinity)
\end{aligned}
$$

Then, combine this transformation into the previous model, we have the final solution.

**(2) Selecting with a budget trade-off **

To solve this trade off, we only need to introduce a new value representing the cost of selected players for home team. With $price(i)$ being the price of player $i^{th}$ in the pool and $z_0$ being a selection, we have:

$$
\begin{aligned}
cost(z_0) = \sum_{i=1}^M (\sum_{k=1}^{N}z_{i,k}) * price(i)
\end{aligned}
$$

Then, our objective function becomes: $$ \begin{aligned} \underset{p,q,r,x,z \in \mathbb{R}}{\text{maximize}}\sum_{j=1}^{M} p_j (1-\sum_{k=1}^N z_{j,k})+\sum_{k=1}^N q_k +\sum_{j=1}^M\sum_{k=1}^N r_{jk} - \lambda *cost(z)\\ \end{aligned} $$ where $\lambda$ is the trade-off parameter. The constraints remain the same as part 1 above.

In [18]:

```
### Select a team compsition from a pool
function select_from_pool_mixed(N_POOL, N_CHOSEN, get_score, elo)
inf = -1000
sup = 1000 #
xm = Model(solver=CbcSolver())
# matrix of main probabilistic assignment variables
@variable(xm, 0 <= x[1:N_POOL, 1:N_CHOSEN] <= 1)
# matrix of main assignment variables
@variable(xm, z[1:N_POOL, 1:N_CHOSEN], Bin)
# vector of inner model dual variables
@variable(xm, p[1:N_POOL] <= 0)
# vector of inner model dual variables
@variable(xm, q[1:N_CHOSEN])
# matrix of inner model dual variables
@variable(xm, r[1:N_POOL, 1:N_CHOSEN] <= 0)
# vector of epigraph variables
@variable(xm, t[1:N_POOL] <= 0)
# supply constraint of z
@constraint(xm, supply[i in 1:N_POOL],
sum(z[i, k] for k=1:N_CHOSEN) <= 1)
# demand constraint of z
@constraint(xm, demand[k in 1:N_CHOSEN],
sum(z[i, k] for i=1:N_POOL) == 1)
# supply constraint of x
@constraint(xm, supplyProb[i in 1:N_POOL],
sum(x[i, k] for k=1:N_CHOSEN) <= 1)
# demand constraint of x
@constraint(xm, demandProb[k in 1:N_CHOSEN],
sum(x[i, k] for i=1:N_POOL) == 1)
# z = 0 => x = 0
@constraint(xm, node[i in 1:N_POOL],
sum(x[i, k] for k=1:N_CHOSEN) <= sum(z[i, k] for k=1:N_CHOSEN))
# epigraph constraint 1
@constraint(xm, icons[j in 1:N_POOL], t[j] >=
(1 - sum(z[j, k] for k=1:N_CHOSEN))*inf)
# epigraph constraint 2
@constraint(xm, ucons[j in 1:N_POOL], t[j] <=
p[j] + sup*sum(z[j, k] for k=1:N_CHOSEN))
# dual constraint
@constraint(xm, mincons[j in 1:N_POOL, k in 1:N_CHOSEN],
p[j] + q[k] + r[j, k] <= sum(get_score(i, j, k, k, elo)
* x[i, k] for i=1:N_POOL))
# objective
@objective(xm, Max, sum(t) + sum(q) + sum(r))
solve(xm)
println("Optimal Score: ", getobjectivevalue(xm))
return getvalue(x)
end;
```

In [19]:

```
### Select a team compsition from a pool with a budget trade-off
function select_with_budget_mixed(lambda, N_POOL, N_CHOSEN,
get_score, elo, worth)
inf = -1000
sup = 1000
xm = Model(solver=CbcSolver())
# matrix of main probabilistic assignment variables
@variable(xm, 0 <= x[1:N_POOL, 1:N_CHOSEN] <= 1)
# matrix of main assignment variables
@variable(xm, z[1:N_POOL, 1:N_CHOSEN], Bin)
# vector of inner model dual variables
@variable(xm, p[1:N_POOL] <= 0)
# vector of inner model dual variables
@variable(xm, q[1:N_CHOSEN])
# matrix of inner model dual variables
@variable(xm, r[1:N_POOL, 1:N_CHOSEN] <= 0)
# vector of epigraph variables
@variable(xm, t[1:N_POOL] <= 0)
# supply constraint of z
@constraint(xm, supply[i in 1:N_POOL],
sum(z[i, k] for k=1:N_CHOSEN) <= 1)
# demand constraint of z
@constraint(xm, demand[k in 1:N_CHOSEN],
sum(z[i, k] for i=1:N_POOL) == 1)
# supply constraint of x
@constraint(xm, supplyProb[i in 1:N_POOL],
sum(x[i, k] for k=1:N_CHOSEN) <= 1)
# demand constraint of x
@constraint(xm, demandProb[k in 1:N_CHOSEN],
sum(x[i, k] for i=1:N_POOL) == 1)
# z = 0 => x = 0
@constraint(xm, node[i in 1:N_POOL],
sum(x[i, k] for k=1:N_CHOSEN) <= sum(z[i, k] for k=1:N_CHOSEN))
# epigraph constraint 1
@constraint(xm, icons[j in 1:N_POOL], t[j] >=
(1 - sum(z[j, k] for k=1:N_CHOSEN))*inf)
# epigraph constraint 2
@constraint(xm, ucons[j in 1:N_POOL], t[j] <=
p[j] + sup*sum(z[j, k] for k=1:N_CHOSEN))
# dual constraint
@constraint(xm, mincons[j in 1:N_POOL, k in 1:N_CHOSEN],
p[j] + q[k] + r[j, k] <= sum(get_score(i, j, k, k, elo)
* x[i, k] for i=1:N_POOL))
# Player cost expression
@expression(xm, playercost, lambda *
sum(sum(z[i, k] for k=1:N_CHOSEN)
* worth[i] for i=1:N_POOL))
# Objective function
@objective(xm, Max, sum(t) + sum(q) + sum(r) - playercost)
solve(xm)
# println("Optimal Score: ", getobjectivevalue(xm))
return [getvalue(x), getobjectivevalue(xm), getvalue(playercost)]
end;
```

We solve the model for estimating team composition (team of 4) from a pool of 10 players. The goal is to maximize expected score in any match played against any of the remaining players in the pool.

In [20]:

```
xopt = select_from_pool_mixed(N_POOL, N_CHOSEN, get_chess_score3, elos);
print_match(xopt, "Name", players[1:N_POOL], false)
```

As expected, the model simply chooses the best 4 players for the team in the absence of any budget constraints. What's interesting, though is that the model predicts a pure strategy in this case. This is not really surprising because no matter what the opposition's strategy is, if the best 4 players belong to our team, the expected score will always be in the favor of our team.

The above result illustrates why we need to include a budget minimization object in the model. We study the impact of monetary contraints by checking the optimal score expected score values as well as the players chosen for different values of $\lambda$.

In [21]:

```
function explore_tradeoff(lambda)
result = select_with_budget_mixed(lambda, N_POOL, N_CHOSEN,
get_chess_score3, elos, worth * 0.001)
xopt = result[1]
cost = result[3]
score = result[2] + cost
println("Expected score: ", score)
println("Cost: ", cost)
print_match(xopt, "Name", players[1:N_POOL], false)
end;
explore_tradeoff(0.1)
```

In [22]:

```
explore_tradeoff(0.2)
```

In [23]:

```
explore_tradeoff(1)
```

Observing the different strategies for different $\lambda$, we can see that in general, the trend is such that expected match score decreases as we increase $\lambda$.

In [24]:

```
# compute optimal tradeoff curve (this may take a few seconds
N = 30
lambda_values = linspace(0.1,1,N)
# lambda_values = [0.001, 0.05, 0.01, 0.1, 1, 10]
N = length(lambda_values)
cost = zeros(N)
score = zeros(N)
opt = zeros(N)
for (i, lambda) in enumerate(lambda_values)
result = select_with_budget_mixed(lambda, N_POOL, N_CHOSEN,
get_chess_score3, elos, worth * 0.001)
cost[i] = result[3]
score[i] = result[2] + result[3]
opt[i] = result[2]
end
```

In [25]:

```
using PyPlot
plot(cost, score,"b-")
xlabel("cost (k)")
ylabel("expected score")
title("Pareto Curve")
tight_layout()
```

In [26]:

```
title("Expected Score vs. Lambda")
plot(lambda_values, score,"g-")
xlabel("lambda")
ylabel("expected score");
```

The pareto curve shows a non-traditional form in the sense that it has step reduction instead of a smooth curve. This is because our problem is essentially an assignment problem and assignments do not change for every variation in $\lambda$. Only at certain thresholds of $\lambda$ does the strategy change and hence its associated expected cost also changes.

The value of expected score also flattens out and does not change at all beyond a point (when $\lambda$ \geq 1). This is because the optimal strategy found alsready constitutes the cheapest players and hence increase in $\lambda$ stops having any effect on the expected score (the objective values go down but expected match score does not). Similarly, for smaller values of $\lambda$, (when $\lambda$ \leq 0.1), the expected cost has a lower bound corresponding to a team that already possesses the top-ranked players and hence reduction in $\lambda$ stops having any effect on the expected score (the objective values increase but expected match score does not).

We have solved the most generalized form of the team strategy problem. We started with the most naive and restricted model of determining a team strategy given the team composition as well as the opposition's pure strategy. Second, we eliminated the assumption of knowledge of opposition's strategy to find a team's optimal pure strategy in the Maximin model given its composition. The third model no longer restricted itself to just pure strategies. It found an optimal mixed strategy for a particular team composition and also introduced the concept of Nash Equilibrium in the strategy determination problem. Finally, the last model considered variable team compositions to find the optimal mixed strategy with and without budget constraints.

What follows is a summary of the results from each of the models and a discussion of some of the interesting aspects that were observed:- The Naive model found the optimal strategy under the assumption that the opposition's strategy was fixed and known to the team a priori. An interesting result was the identification of the fact that the head-to-head score estimation plays a vital role in determining the optimal strategy and an unrealistic score function can lead to an overly-optimistic estimate of expected score as seen in the first variation of the [Naive model's results](#3.C.-Results-and-Discussion). The assumption of knowledge of opponent's strategy was then relaxed in the Maximin model.
- The [Maximin model's results](#4.C.-Results-and-Discussion) illustrated that no pure nash equilibrium exists for determining optimal team strategy. This is in agreement with our intuition that sticking to a particular pure strategy in repeated instances would make the team predictable; something which the opposition can take advantage of. We can therefore conclude that, such a format of gameplay should not typically have a pure Nash Equilibrium. Hence, the results of the Maximin model serve to confirm this logical intuition.
- Another interesting observation from the Maximin-Minimax formulation was that the Maximin optimal value (1.984448793236806) was lesser than the Minimax optimal value (1.9844607584620062). The fact that Maximin and Minimax values can be equal at Nash equilibrium leads us to wonder if, in general 2 player games, the Maximin optimal $\leq$ Minimax optimal. As it turns out, this result is in consonance with the lemma that $$\begin{aligned} \underset{x \in \mathbb{(0, 1)}}{\text{max}}\underset{y \in \mathbb{(0, 1)}}{\text{min}} P_{xy} \leq \underset{y \in \mathbb{(0, 1)}}{\text{min}}\underset{x \in \mathbb{(0, 1)}}{\text{max}} P_{xy} \end{aligned} $$ A quick proof of the lemma can be formulated as follows: For a particular row $x_0$, we know the following inequality holds: $$ \begin{aligned} g(x_0) = \underset{y'}{\text{min }} P(x_0, y') \leq P(x_0, y) \leq \underset{x'}{\text{max }} P(x', y) && \forall x, y \end{aligned} $$ In other words,, $$ \begin{aligned} g(x) = \underset{y'}{\text{min}} P(x, y') \leq \underset{x'}{\text{max}} P(x', y) = h(y) \forall x,y g(x) \leq h(y) \forall x,y \end{aligned} $$ Therefore, $$ \begin{aligned} \underset{x}{\text{max}} g(x) \leq \underset{y}{\text{min}} h(y) \forall x,y \end{aligned} $$ Hence, we have: $$\begin{aligned} \underset{x}{\text{max}}\underset{y}{\text{min}} P_{xy} \leq \underset{y}{\text{min}}\underset{x}{\text{max}} P_{xy} \end{aligned} $$
- The [Nash model results](#5.C.-Results-and-Discussion) were arguably the most significant in terms of confirming theoretical concepts. We confirmed the existence of a mixed strategy Nash Equilibrium and found the optimal expected score. We also found the corresponding Nash Equilibrium strategies for both the home and opposition teams.
- In the [Budget-tradeoff results](#6.C.-Results-and-Discussion), we explored the impact that monetary restrictions can have in the process of decision making while building a team. As anticipated, the expected score consistently decreased as more emphasis (higher $\lambda$) was laid on the budget minimization objective.

In this project, we have illustrated how optimization techniques can be leveraged to determine the best strategy for a team in order to maximize winning chances based on game theoretic concepts. We also illustrated how to determine the best chess team composition and strategy to maximize probability of winning under budget constraints.

Through the course of the project, we used practical results of optimization models to introduce and exemplify theoretical game theory concepts. Specifically, we applied optimization concepts seen in class such as primal modeling, dual transformation, trade-off and strong duality to illustrate game theoretic concepts like Nash Equilibrium as well as Maximin and Minimax strategies.

We provided both empirical and theoretical evidence that, in general, the Maximin optimal $\leq$ Minimax optimal. A pure strategy Nash equilibrium would exist when the equality holds (Maximin = Minimax). We also illustrated, both mathematically as well as in practice, how the dual form of one team's mixed strategy model is the primal form of the other team's mixed strategy model. We used this to show that a mixed strategy Nash Equilibrium (if one exists) can be found by solving the primal and dual forms of a game and comparing . If the optimal values of primal and dual forms are equal to each other (strong duality holds) then that is a Nash Equilibrium of the game.

The optimization models built as part of this project project can be applied, without modification, to determine optimal strategies for any team sports with rubbers that are non-cooperative in nature. The techniques described in this project are applicable, without any modification, to other team sports such as tennis, wrestling and boxing, that involve a series of one-on-one games to determine the victorious team. The [Davis Cup](https://en.wikipedia.org/wiki/Davis_Cup) and the [Fed Cup](https://en.wikipedia.org/wiki/Fed_Cup) are examples of tournaments in the realm of tennis where these models can potentially prove to be extremely effective for maximizing winning probabilities. The models presented can also be modified slightly to develop strategies for online battle strategy games like Age of Empires and Clash of Clans as well as for determining squad composition and battle order in card and video games like Pokemon.

The models built in this project, however, are not trivially extendable to all team sports. Since they do not account for the element of cooperation between team members, they would require major modification in order to determine strategies for team sports like soccer and basketball. An interesting direction for further extension of this project would be to extend the idea to include cooperative team sports as well. This would require us to possess a mathematical model that accurately takes into account each team member's contribution to the strength of other team members' strength while estimating relative probability of winning against an opposition.

**Pure Strategy:** A pure strategy is an unconditional, defined choice that a person makes in a situation or game. For example, in the game of Rock-Paper-Scissors,if a player would choose to only play scissors for each and every independent trial, regardless of the other player’s strategy, choosing scissors would be the player’s pure strategy. The set of all options (i.e. rock, paper, and scissors) available in this game is known as the strategy set.

**Mixed Strategy:** A mixed strategy is an assignment of a probability to each pure strategy in the strategy set. Using the example of Rock-Paper-Scissors, if a person’s probability of employing each pure strategy is equal, then the probability distribution of the strategy set would be 1/3 for each option, or approximately 33%. In other words, a person using a mixed strategy incorporates more than one pure strategy into a game. This allows for a player to randomly select a pure strategy. Since probabilities are continuous, there are infinitely many mixed strategies available to a player. A mixed strategy can simply be considered as the probability distribution one uses to randomly choose among available actions in order to avoid being predictable.
Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular pure strategy is selected with probability 1 and every other strategy with probability 0. If a pure strategy in Rock-Paper-Scissors is to play scissors, thenit can be considered a mixed strategy where probability for choosing scissors equal to 1 and all other options (paper and rock) are chosen with the probability of 0.

**Nash Equilibrium:**
[Nash Equilibrium](http://www.investopedia.com/terms/n/nash-equilibrium.asp#ixzz4gSfkX0Wn) is a pair of strategies in which each player’s strategy is the best response to the other player’s strategy and no player has an incentive to deviate from his chosen strategy after considering an opponent's choice. Overall, in a Nash Equilibrium, an individual can receive no incremental benefit (only stands to lose) from changing actions, assuming other player remains constant in his strategy.

It is important to note that games can have

- Only one pure Nash Equilibrium
- Only one mixed Nash Equilibrium and no pure Nash Equilibrium
- Multiple pure Nash Equilibrium
- Pure and mixed Nash Equilibrium

It can be seen that Rock-Paper-Scissors belongs to the second category.The reason why there isn’t a pure Nash Equilibrium is that there is no way a player can hope to win if he makes the same choice 100% of the time. For example, let’s take player 1. If he consistently plays rock, then player 2 will always choose paper. Player 1 will never win. Thus, there is no pure equilibrium – it just doesn’t make sense for one player to ALWAYS pick one choice for the whole game – it’s just too predictable. There is, however, a mixed Nash Equilibrium wherein each player plays each of rock, paper and scissors approximately 33% of the time. Even knowing that player 1 is adopting this mixed strategy of (1/3, 1/3, 1/3) does not give player 2 any reason to change his strategy of (1/3, 1/3, 1/3). If player 1 plays any other strategy, say (1/4, 1/4, 1/2), player 2 can exploit that information and win. Therefore, (1/3, 1/3, 1/3) is the mixed Nash Equilibrium for Rock-Paper-Scissors.

In [27]:

```
function get_maximin_pure_by_enumeration(score_matrix::Array, maxPlayer=true)
n = size(scores_matrix)[1]
t = zeros(n)
if maxPlayer
for k in 1:n
t[k] = findmin(score_matrix[k, 1:n])[1]
end
return findmax(t)
else
for k in 1:n
t[k] = findmax(score_matrix[1:n, k])[1]
end
return findmin(t)
end
end
;
println("Maximin Score: ",get_maximin_pure_by_enumeration(scores_matrix, true)) # true for primal
println("Minimax Score: ",get_maximin_pure_by_enumeration(scores_matrix, false)) # false for primal
```

In [28]:

```
function get_MSNE_by_enumeration(score_matrix::Array)
n = size(scores_matrix)[1]
# for primal - P person
m = Model(solver=ClpSolver())
@variable(m, 1>= p[1:n] >=0)
@variable(m, t)
@constraint(m, sum(p) == 1)
@constraint(m, min_const[k in 1:n],
sum(p[i] * score_matrix[i, k] for i=1:n) >= t)
# Max min (g1, g2, ..., gn)
@objective(m, Max, t)
solve(m)
popt = getvalue(p)
# println(popt)
println(getobjectivevalue(m))
return popt
end;
function get_MSNE_dual_by_enumeration(score_matrix::Array)
n = size(scores_matrix)[2]
# for primal - P person
m = Model(solver=ClpSolver())
@variable(m, 1>= q[1:n] >=0)
@variable(m, t)
@constraint(m, sum(q) == 1)
@constraint(m, max_const[k in 1:n],
sum(q[i] * score_matrix[k, i] for i=1:n) <= t)
# Max min (g1, g2, ..., gn)
@objective(m, Min, t)
solve(m)
qopt = getvalue(q)
# println(qopt)
println(getobjectivevalue(m))
return qopt
end;
print("Primal optimal score: ")
popt = get_MSNE_by_enumeration(scores_matrix);
print("Dual optimal score: ")
popt =get_MSNE_dual_by_enumeration(scores_matrix);;
```