**Description:** This notebook is an introduction to JuMP. The topics described are as follows:

- Installing Julia and JuMP
- Representing vectors in Julia
- Structure of a JuMP model
- Solving a general-purpose linear programming problem
- Solving a general-purpose integer programming problem

**Author:** Shuvomoy Das Gupta

**License:**

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Julia is a new programming language.

- Free and open-source
- Syntax similar to MATLAB, speed similar to C

JuMP is a modelling language for mathematical optimization [1]. It is embedded in Julia.

- Very user-friendly
- Speed similar to special-purpose commercial modelling languages such as AMPL
- Solver-independent code: the same code will run with both commercial and open-source solvers
- Very easy to implement solver callbacks and problem modification

Go to the download page of Julia, download the appropriate installer and run it. Now you can start an interactive Julia session!

IJulia will allow us to create powerful graphical notebooks, which is very convenient for the tutorial. We can insert code, text, mathematical formulas, multimedia, etc. in the same notebook. To install IJulia:

Install Anaconda from http://continuum.io/downloads. If you are on Windows, then while running the Anaconda installer please check the options *Add Anaconda to the System Path* and *Register Anaconda as default Python version of the system*. Now start a Julia interactive session and type the following:

`Pkg.add("IJulia")`

To open a new notebook:

- In the Julia interactive session, run `using IJulia` and then `notebook()`. It will open the IJulia home page in your web browser.

- The current working directory can be checked by the command `pwd()`. If you want to change the directory to something else, then before running `using IJulia` and `notebook()`, run `cd(path to your preferred directory)`, e.g., `cd("E:\\Dropbox\\Julia_Workspaces")`.

- Click on *New Notebook* in the top right corner. In the new notebook, you can execute any Julia command by pressing SHIFT+ENTER.

First, add the JuMP package by running the following code in the notebook:

In [1]:

```
Pkg.add("JuMP")
```

We need to install a solver package. Let's install the open-source solvers GLPK, Cbc and Clp by typing in `Pkg.add("GLPKMathProgInterface")`, `Pkg.add("Cbc")` and `Pkg.add("Clp")` respectively. Let's also add the Julia package associated with CPLEX by typing in `Pkg.add("CPLEX")`. Other solver package choices include `"CPLEX"`, `"Cbc"`, `"Clp"`, `"Gurobi"`, `"Xpress"` and `"MOSEK"`.

It should be noted that, in order to use commercial solvers such as CPLEX, Gurobi, Xpress and Mosek in JuMP, we will require working installations of them with appropriate licences. Both Gurobi and Mosek are free for academic use. CPLEX is free for faculty members and graduate teaching assistants.

In [2]:

```
Pkg.add("GLPKMathProgInterface")
```

In [3]:

```
Pkg.add("Cbc")
```

In [4]:

```
Pkg.add("Clp")
```

In [5]:

```
Pkg.add("CPLEX") # Working installation of CPLEX is needed in advance
```

In [6]:

```
Pkg.add("Gurobi") # Working installation of Gurobi is needed in advance
```

If you have not updated your Julia packages in a while, it is a good idea to update them.

In [7]:

```
Pkg.update()
```

In [8]:

```
println("Hello World!")
```

First, let us try to solve a very simple optimization problem using JuMP to check if everything is working properly.

$$ \begin{align} & \text{minimize} && x+y \\ & \text{subject to} && x+y \leq 1 \\ & && x \geq 0, \; y \geq 0 \\ & && x,y \in \mathbb{R} \end{align} $$

Here is the JuMP code to solve the mentioned problem:

In [9]:

```
using JuMP # Need to say it whenever we use JuMP
using GLPKMathProgInterface # Loading the GLPK module for using its solver
#MODEL CONSTRUCTION
#--------------------
myModel = Model(solver=GLPKSolverLP())
# Name of the model object. All constraints and variables of an optimization problem are associated
# with a particular model object. The name of the model object does not have to be myModel; it can
# be yourModel too! The argument solver=GLPKSolverLP() means that we will use the GLPK solver.
#VARIABLES
#---------
# A variable is modelled using @variable(name of the model object, variable name and bound, variable type)
# Bound can be lower bound, upper bound or both. If no variable type is defined, then it is treated as
#real. For binary variable write Bin and for integer use Int.
@variable(myModel, x >= 0) # Models x >=0
# Some possible variations:
# @variable(myModel, x, Bin) # No bound on x present, but x is a binary variable now
# @variable(myModel, x <= 10) # This one defines a variable with upper bound x <= 10
# @variable(myModel, 0 <= x <= 10, Int) # This one has both lower and upper bounds, and x is an integer
@variable(myModel, y >= 0) # Models y >= 0
#OBJECTIVE
#---------
@objective(myModel, Min, x + y) # Sets the objective to be minimized. For maximization use Max
#CONSTRAINTS
#-----------
@constraint(myModel, x + y <= 1) # Adds the constraint x + y <= 1
#THE MODEL IN A HUMAN-READABLE FORMAT
#------------------------------------
println("The optimization problem to be solved is:")
print(myModel) # Shows the model constructed in a human-readable form
#SOLVE IT AND DISPLAY THE RESULTS
#--------------------------------
status = solve(myModel) # solves the model
println("Objective value: ", getObjectiveValue(myModel)) # getObjectiveValue(model_name) gives the optimum objective value
println("x = ", getValue(x)) # getValue(decision_variable) will give the optimum value of the associated decision variable
println("y = ", getValue(y))
```

This was certainly not the most exciting optimization problem to solve; it was for testing purposes only. However, before going into the structure of a JuMP model, let us learn how to represent vectors in Julia.

A column vector, $y=(y_1, y_2, \ldots, y_n)= \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} \in \mathbb{R}^n$, is written in Julia as `[y[1]; y[2]; ...; y[n]]`. For example, to create the column vector $\begin{pmatrix} 3 \\ 2.4 \\ 9.1 \end{pmatrix}$, use `[3; 2.4; 9.1]`.

In [10]:

```
[3; 2.4; 9.1] # Column vector
```

Out[10]:

A row vector, $z=(z_1 \; z_2 \; \ldots \; z_n) \in \mathbb{R}^{1 \times n}$, is written in Julia as `[z[1] z[2] ... z[n]]`. For example, to create the row vector $(1.2 \; 3.5 \; 8.21)$, use `[1.2 3.5 8.21]`.

In [11]:

```
[1.2 3.5 8.21] # Row vector
```

Out[11]:

- To create an $m \times n$ matrix

$$ A = \begin{pmatrix} A_{11} & A_{12} & A_{13} & \ldots &A_{1n} \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ A_{m1} & A_{m2} & A_{m3} & \ldots & A_{mn} \end{pmatrix} $$

write `[A[1,1] A[1,2] A[1,3] ... A[1,n]; ... ; A[m,1] A[m,2] ... A[m,n]]`.

So the matrix

$$ A = \begin{pmatrix} 1 & 1 & 9 & 5 \\ 3 & 5 & 0 & 8 \\ 2 & 0 & 6 & 13 \end{pmatrix} $$

is represented in Julia as:

```
A= [
1 1 9 5;
3 5 0 8;
2 0 6 13
]
```

In [12]:

```
# Generating a matrix
A= [
1 1 9 5;
3 5 0 8;
2 0 6 13
]
```

Out[12]:

$A_{ij}$ can be accessed by `A[i,j]`, the $i$th row of the matrix $A$ is represented by `A[i,:]`, and the $j$th column of the matrix $A$ is represented by `A[:,j]`.

The size of a matrix $A$ can be determined by running the command `size(A)`. If we write `numRows, numCols = size(A)`, then `numRows` and `numCols` will contain the total number of rows and columns of $A$, respectively.

In [13]:

```
numRows, numCols = size(A)
println(
"A has ", numRows, " rows and ", numCols, " columns \n",
"A[3,3] is ", A[3,3], "\n",
"The 3rd column of A is ", A[:,3], "\n",
"The 2nd row of A is ", A[2,:]
)
```

Suppose $x,y \in \mathbb{R}^n$. Then $x^T y =\sum_{i=1}^{n} {x_i y_i}$ is written as `dot(x,y)`.

In [14]:

```
y=[1; 2; 3; 4]
x=[5; 6; 7; 8]
xTy=dot(x,y)
```

Out[14]:

Any JuMP model that describes an optimization problem must have four parts:

**Model Object**, **Variables**, **Objective**, **Constraints**.

Any instance of an optimization problem corresponds to a model object. This model object is associated with all the variables, constraints and objective of the instance. It is constructed using `modelName = Model(solver=`*solver of our preference*`)`. If no solver is specified, then `ClpSolver()` and/or `CbcSolver()` will be used by default. Here `modelName` is any valid name. We will limit ourselves to the open-source solvers such as:

- Linear programming solvers: `ClpSolver()`, `GLPKSolverLP()`
- Mixed-integer programming solvers: `GLPKSolverMIP()`, `CbcSolver()`

In [15]:

```
using JuMP
myModel = Model() # ClpSolver() and/or CbcSolver() will be used based on the problem
```

Out[15]:

Variables are defined using the `@variable` macro, which takes up to three input arguments. The *first* argument is the model object. The *second* argument contains the name of the variable and a bound on the variable if it exists. The *third* argument is not needed if the variable is real. When the variable is binary or integer, `Bin` or `Int`, respectively, is used as the third argument.

Suppose the model object is `myModel`.

- To describe a variable $z \in \mathbb{R}$ such that $0 \leq z \leq 10$ write

In [16]:

```
@variable(myModel, 0 <= z <= 10)
```

Out[16]:

- Now consider a decision variable $x \in \mathbb{R}^n$ with bounds $l \preceq x \preceq u$, where naturally $l, u \in \mathbb{R}^n$. For that we write:

In [17]:

```
# INPUT DATA, CHANGE THEM TO YOUR REQUIREMENT
#-------------------------------------------
n = 10
l = [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]
u = [10; 11; 12; 13; 14; 15; 16; 17; 18; 19]
```

Out[17]:

In [18]:

```
# VARIABLE DEFINITION
# -------------------
@variable(myModel, l[i] <= x[i=1:n] <= u[i])
```

Out[18]:

- Suppose we have decision variables $x \in \mathbb{R}^n$, $y \in \mathbb{Z}^m$ and $z \in \{0,1\}^p$ such that $x \succeq 0$ and $a \preceq y \preceq b$, where $a, b \in \mathbb{Z}^m$. To express this in JuMP we write:

In [19]:

```
# INPUT DATA, CHANGE THEM TO YOUR REQUIREMENT
#-------------------------------------------
n = 4 # dimension of x
m = 3 # dimension of y
p = 2 # dimension of z
a = [0; 1; 2]
b = [3; 4; 7]
```

Out[19]:

In [20]:

```
# VARIABLE DEFINITION
# -------------------
@variable(myModel, x[i=1:n] >= 0)
```

Out[20]:

In [21]:

```
@variable(myModel, a[i] <= y[i=1:m] <= b[i], Int)
```

Out[21]:

In [22]:

```
@variable(myModel, z[i=1:p], Bin)
```

Out[22]:

Constraints are added using the `@constraint` macro. The first argument is the model object the constraint is associated with, the second argument is the reference to that constraint, and the third argument is the constraint description. The constraint reference comes in handy when we want to manipulate the constraint later or access the dual variables associated with it. If no constraint reference is needed, then the second argument is the constraint description.

Let's give some examples of writing constraints in JuMP. Suppose the model name is `yourModel`.

In [23]:

```
yourModel = Model()
```

Out[23]:

- Consider variables $x, y \in \mathbb{R}$ which are coupled by the constraint $5 x +3 y \leq 5$. We write this as `@constraint(yourModel, 5*x + 3*y <= 5)`. Naturally, `x` and `y` have to be defined first using the `@variable` macro.

In [24]:

```
@variable(yourModel, x)
@variable(yourModel, y)
@constraint(yourModel, 5*x + 3*y <= 5)
```

Out[24]:

Here no constraint reference is given. Now suppose we want to get the dual value of some constraint after solving the problem; then we need to assign a constraint reference to the constraint first. Let's call the constraint reference `conRef1` (it could be any valid name). A constraint with this reference is written as:

`@constraint(yourModel, conRef1, 6*x + 4*y >= 5)`

When we need the dual value after solving the problem, we just write `println(getDual(conRef1))`.
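As a minimal, self-contained sketch of this workflow (using the same JuMP 0.x and GLPK API as the rest of this notebook; the model name `dualModel` and its data are made up for illustration), retrieving a dual value might look like:

```julia
using JuMP # JuMP 0.x syntax, as in the rest of this notebook
using GLPKMathProgInterface

dualModel = Model(solver=GLPKSolverLP())
@variable(dualModel, x >= 0)
@variable(dualModel, y >= 0)
@constraint(dualModel, conRef, x + y >= 1) # constraint with reference conRef
@objective(dualModel, Min, 2x + 3y)
solve(dualModel)
println("Dual of conRef: ", getDual(conRef)) # shadow price of x + y >= 1
```

Here the optimum is attained at $x = 1, y = 0$, so by LP duality the dual of the binding constraint equals 2. Note that in later JuMP 0.x releases this function is spelled `getdual`.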

In [25]:

```
@constraint(yourModel, conRef1, 6*x + 4*y >= 5)
```

Out[25]:

- Consider a variable $w \in \mathbb{R}^4$ and a coefficient vector $a=(1, -3, 5, 7)$. We want to write a constraint of the form $\sum_{i=1}^4{a_i w_i} \leq 3$. In JuMP we write:

In [26]:

```
a = [1; -3; 5; 7]
@variable(yourModel, w[1:4])
@constraint(yourModel, sum(a[i]*w[i] for i in 1:4) <= 3)
```

Out[26]:

The objective is set using the `@objective` macro. It has three arguments. The first argument is, as usual, the model object. The second one is either `Max` if we want to maximize the objective function, or `Min` when we want to minimize. The last argument is the description of the objective, which has similar syntax to that of a constraint definition.

For the previous model, consider the decision variable $w \in \mathbb{R}^4$ and cost vector $c = (2, 3, 4, 5)$. We want to minimize $c^T w$. In JuMP we would write:

In [27]:

```
c = [2; 3; 4; 5]
@objective(yourModel, Min, sum(c[i]*w[i] for i in 1:4))
```

Out[27]:

Let us try to write the JuMP code for the following standard form optimization problem:

$$ \begin{align} & \text{minimize} && c^T x \\ & \text{subject to} && A x = b \\ & && x \succeq 0 \\ & && x \in \mathbb{R}^n \end{align} $$

where, $n = 4$, $c=(1, 3, 5, 2)$, $A = \begin{pmatrix} 1 & 1 & 9 & 5 \\ 3 & 5 & 0 & 8 \\ 2 & 0 & 6 & 13 \end{pmatrix}$ and $b=(7, 3, 5)$. The symbol $\succeq$ ($\preceq$) stands for element-wise greater (less) than or equal to.

Let us input different parts of the JuMP code one by one and see the corresponding outputs to check that everything is okay. Of course, we could input the whole code at once.

In [28]:

```
using JuMP # Need to say it whenever we use JuMP
using GLPKMathProgInterface # Loading the package for using the GLPK solver
```

In [29]:

```
#MODEL CONSTRUCTION
#------------------
sfLpModel = Model(solver=GLPKSolverLP()) # Name of the model object
```

Out[29]:

In [30]:

```
#INPUT DATA
#----------
c = [1; 3; 5; 2]
A= [
1 1 9 5;
3 5 0 8;
2 0 6 13
]
b = [7; 3; 5]
m, n = size(A) # m = number of rows of A, n = number of columns of A
```

Out[30]:

In [31]:

```
#VARIABLES
#---------
@variable(sfLpModel, x[1:n] >= 0) # Models x >=0
```

Out[31]:

In [32]:

```
#CONSTRAINTS
#-----------
for i in 1:m # for all rows do the following
@constraint(sfLpModel, sum(A[i,j]*x[j] for j in 1:n) == b[i]) # the ith row
# of A*x is equal to the ith component of b
end # end of the for loop
```

In [33]:

```
#OBJECTIVE
#---------
@objective(sfLpModel, Min, sum(c[j]*x[j] for j in 1:n)) # minimize c'x
```

Out[33]:

In [34]:

```
#THE MODEL IN A HUMAN-READABLE FORMAT
#------------------------------------
println("The optimization problem to be solved is:")
print(sfLpModel) # Shows the model constructed in a human-readable form
```

In [35]:

```
status = solve(sfLpModel) # solves the model
```

Out[35]:

In [36]:

```
#SOLVE IT AND DISPLAY THE RESULTS
#--------------------------------
println("Objective value: ", getObjectiveValue(sfLpModel)) # getObjectiveValue(model_name) gives the optimum objective value
println("Optimal solution is x = \n", getValue(x)) # getValue(decision_variable) will give the optimum value
# of the associated decision variable
```

In [37]:

```
using JuMP
using GLPKMathProgInterface
#MODEL CONSTRUCTION
#------------------
sfLpModel = Model(solver=GLPKSolverLP())
#INPUT DATA
#----------
c = [1; 3; 5; 2]
A= [
1 1 9 5;
3 5 0 8;
2 0 6 13
]
b = [7; 3; 5]
m, n = size(A) # m = number of rows of A, n = number of columns of A
#VARIABLES
#---------
@variable(sfLpModel, x[1:n] >= 0) # Models x >=0
#CONSTRAINTS
#-----------
for i in 1:m # for all rows do the following
@constraint(sfLpModel, sum(A[i,j]*x[j] for j in 1:n) == b[i]) # the ith row
# of A*x is equal to the ith component of b
end # end of the for loop
#OBJECTIVE
#---------
@objective(sfLpModel, Min, sum(c[j]*x[j] for j in 1:n)) # minimize c'x
#THE MODEL IN A HUMAN-READABLE FORMAT
#------------------------------------
println("The optimization problem to be solved is:")
print(sfLpModel) # Shows the model constructed in a human-readable form
@time begin
status = solve(sfLpModel) # solves the model
end
#SOLVE IT AND DISPLAY THE RESULTS
#--------------------------------
println("Objective value: ", getObjectiveValue(sfLpModel)) # getObjectiveValue(model_name) gives the optimum objective value
println("Optimal solution is x = \n", getValue(x)) # getValue(decision_variable) will give the optimum value
# of the associated decision variable
```

Let us try to write the JuMP code for the following standard form optimization problem:

$$ \begin{align} & \text{minimize} && c^T x + d^T y\\ & \text{subject to} && A x + B y= f \\ & && x \succeq 0, y \succeq 0 \\ & && x \in \mathbb{R}^n, y \in \mathbb{Z}^p \end{align} $$

Here, $A \in \mathbb{R}^{m \times n}, B \in \mathbb{R}^{m \times p}, c \in \mathbb{R}^n, d \in \mathbb{R}^p, f \in \mathbb{R}^m$. The data were randomly generated. The symbol $\succeq$ ($\preceq$) stands for element-wise greater (less) than or equal to.

In [38]:

```
n = 5
p = 4
m = 3
A = [0.7511 -0.1357 0.7955 -0.4567 0.1356;
     -0.6670 -0.3326 0.1657 -0.5519 -0.9367;
     1.5894 -0.1302 -0.4313 -0.4875 0.4179]
B = [-0.09520 -0.28056 -1.33978 0.6506;
     -0.8581 -0.3518 1.2788 1.5114;
     -0.5925 1.3477 0.1589 0.03495]
c=[0.3468,0.8687,0.1200,0.5024,0.2884]
d=[0.2017,0.2712,0.4997,0.9238]
f = [0.1716,0.3610,0.0705]
```

Out[38]:

In [39]:

```
using JuMP
using GLPKMathProgInterface
sfMipModel = Model(solver = GLPKSolverMIP())
@variable(sfMipModel, x[1:n] >=0)
@variable(sfMipModel, y[1:p] >= 0, Int)
@objective(sfMipModel, Min, sum(c[i] * x[i] for i in 1:n)+sum(d[i]*y[i] for i in 1:p))
for i in 1:m
@constraint(sfMipModel, sum(A[i,j]*x[j] for j in 1:n)+ sum(B[i,j]*y[j] for j in 1:p) == f[i])
end
print(sfMipModel, "\n")
statusMipModel = solve(sfMipModel)
print("Status of the problem is ", statusMipModel, "\n")
if statusMipModel == :Optimal
print("Optimal objective value = ", getObjectiveValue(sfMipModel), "\nOptimal x = ", getValue(x), "\nOptimal y = ", getValue(y))
end
```

[1] M. Lubin and I. Dunning, “Computing in Operations Research using Julia”, INFORMS Journal on Computing, to appear, 2014. arXiv:1312.1431