Description: Shows how to implement sensitivity analysis in JuMP.
Author: Jack Dunn
License:
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
In this notebook, we will see how to use JuMP to conduct sensitivity analysis after solving a linear program and replicate the output offered by other packages (e.g. the Sensitivity Report in Excel Solver).
We will consider a production planning problem by J. E. Beasley, made available on OR-Notes.
(This section is reproduced exactly from the above link)
A company manufactures four variants of the same product and in the final part of the manufacturing process there are assembly, polishing and packing operations. For each variant the time required for these operations is shown below (in minutes) as is the profit per unit sold.
Variant | Assembly | Polish | Pack | Profit |
---|---|---|---|---|
1 | 2 | 3 | 2 | 1.50 |
2 | 4 | 2 | 3 | 2.50 |
3 | 3 | 3 | 2 | 3.00 |
4 | 7 | 4 | 5 | 4.50 |
Given the current state of the labour force the company estimate that, each year, they have 100000 minutes of assembly time, 50000 minutes of polishing time and 60000 minutes of packing time available. How many of each variant should the company make per year and what is the associated profit?
Let $x_i \geq 0$ be the number of units of variant $i$ ($i=1,2,3,4$) made per year.
The operation time limits give the following constraints:
$$\begin{aligned} 2x_1 + 4x_2 + 3x_3 + 7x_4 &\leq 100000 &\text{(assembly)} \\ 3x_1 + 2x_2 + 3x_3 + 4x_4 &\leq 50000 &\text{(polish)} \\ 2x_1 + 3x_2 + 2x_3 + 5x_4 &\leq 60000 &\text{(pack)} \end{aligned}$$
Presumably the company wishes to maximise profit, hence the objective is
$$\max \; 1.5x_1 + 2.5x_2 + 3x_3 + 4.5x_4$$
Now we can formulate and solve this model in JuMP:
# Define the data
m = 3
n = 4
c = [1.5; 2.5; 3.0; 4.5]
A = [2 4 3 7;
3 2 3 4;
2 3 2 5]
b = [100000; 50000; 60000]
# Import necessary packages and define model
using JuMP
using Gurobi # We need Gurobi for Sensitivity Analysis later
model = Model(solver=GurobiSolver())
# Define the variables
@variable(model, x[i=1:n] >= 0)
# Add the objective
@objective(model, Max, sum(c[i] * x[i] for i in 1:n))
# Add the constraints row-by-row, naming them according to each resource
# The names will allow us to refer to the constraints during sensitivity analysis
@constraint(model, assembly, dot(vec(A[1,:]), x) <= b[1])
@constraint(model, polish, dot(vec(A[2,:]), x) <= b[2])
@constraint(model, pack, dot(vec(A[3,:]), x) <= b[3])
# Solve the model and show the optimal solution and objective value
solve(model)
@show getvalue(x)
@show getobjectivevalue(model);
Optimize a model with 3 rows, 4 columns and 12 nonzeros
Coefficient statistics:
  Matrix range    [2e+00, 7e+00]
  Objective range [2e+00, 4e+00]
  Bounds range    [0e+00, 0e+00]
  RHS range       [5e+04, 1e+05]
Presolve time: 0.00s
Presolved: 3 rows, 4 columns, 12 nonzeros
Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    1.4000000e+31   7.875000e+30   1.400000e+01      0s
       2    5.8000000e+04   0.000000e+00   0.000000e+00      0s
Solved in 2 iterations and 0.00 seconds
Optimal objective  5.800000000e+04
getvalue(x) = [0.0, 16000.000000000002, 5999.999999999999, 0.0]
getobjectivevalue(model) = 58000.0
We see that the optimal production plan is to make 16000 units of variant 2 and 6000 units of variant 3, for an optimal profit of $58000.
Once we have solved a model, it is often useful to analyze the sensitivity of the solution to the model parameters. Other modeling tools like Excel Solver can produce a Sensitivity Report, which summarizes all of the sensitivity information in one table.
The Sensitivity Report produced for the production planning solution above is as follows (image from OR-Notes):
The table contains information on the shadow prices and reduced costs in the model, as well as the ranges on the cost coefficients and right-hand side values for which the current basis is optimal.
We will now reproduce this table using our JuMP model.
We can get the final values of the variables with `getvalue()`, as before:
x_final = getvalue(x)
4-element Array{Float64,1}: 0.0 16000.0 6000.0 0.0
We can get the final values of the constraints by calculating $Ax$:
con_final = A * x_final
3-element Array{Float64,1}: 82000.0 50000.0 60000.0
We can extract the reduced costs by calling `getdual()` on the variables:
red_costs = getdual(x)
4-element Array{Float64,1}: -1.5 0.0 0.0 -0.2
Similarly, we can extract the shadow prices by calling `getdual()` on our constraint references:
getdual(assembly)
0.0
getdual(polish)
0.8
getdual(pack)
0.3
We can put these together into a vector as well:
shadow_prices = getdual([assembly; polish; pack])
3-element Array{Float64,1}: 0.0 0.8 0.3
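As a quick sanity check on these numbers, strong duality says the optimal profit must equal the shadow prices multiplied by the resource capacities, and each shadow price gives the marginal profit of one extra minute of the corresponding resource (within its allowable range, which we compute later). A minimal sketch using the values obtained above:

```julia
using LinearAlgebra  # for dot (part of Base on Julia <= 0.6)

shadow_prices = [0.0, 0.8, 0.3]    # from getdual on the three constraints
b = [100000, 50000, 60000]         # resource capacities

# Strong duality: b'y equals the optimal primal profit
dual_obj = dot(b, shadow_prices)
@show dual_obj

# Marginal value: 1000 extra polishing minutes are worth about 0.8 * 1000
extra_profit = shadow_prices[2] * 1000
@show extra_profit
```

The dual objective reproduces the optimal profit of 58000, which confirms the shadow prices are consistent with the primal solution.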
The "Objective Coefficient" and "Constraint R. H. Side" columns simply contain the values of $c$ and $b$, respectively:
obj_coeff = c
4-element Array{Float64,1}: 1.5 2.5 3.0 4.5
con_rhs = b
3-element Array{Int64,1}: 100000 50000 60000
We now want to retrieve the range of parameter values for which the optimal basis does not change. There are multiple ways to do this.
One approach is to retrieve the optimal basis with the MathProgBase.getbasis()
function and then compute the allowable increases and decreases manually, using the standard sensitivity analysis results from linear programming theory.
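To illustrate this first approach, here is a self-contained sketch of the textbook calculations. For brevity, the optimal basis for this problem is hard-coded rather than queried from the solver (so the snippet runs without MathProgBase or Gurobi): after adding slack variables, the basic variables at the optimum are $x_2$, $x_3$, and the assembly slack. The shadow prices are $y^\top = c_B^\top B^{-1}$, the reduced costs are $c_j - y^\top A_j$, and a RHS value $b_i$ may move by $\Delta$ as long as $B^{-1}(b + \Delta e_i) \geq 0$:

```julia
using LinearAlgebra  # for inv (part of Base on Julia <= 0.6)

A = [2 4 3 7; 3 2 3 4; 2 3 2 5]
b = [100000, 50000, 60000]
c = [1.5, 2.5, 3.0, 4.5]

# Basis at the optimum: columns of [A I] for x2, x3 and the assembly slack,
# together with the matching objective coefficients
B    = [A[:, 2] A[:, 3] [1, 0, 0]]
c_B  = [c[2], c[3], 0.0]
Binv = inv(B)

y = Binv' * c_B          # shadow prices: y' = c_B' * B^{-1}
red_costs = c .- A' * y  # reduced costs c_j - y'A_j (zero for basic variables)
@show y
@show red_costs

# RHS ranging for the polish constraint (i = 2): the basic solution moves
# along d = B^{-1} e_i, and must stay nonnegative
x_B = Binv * b
d = Binv[:, 2]
allow_dec = minimum(x_B[k] / d[k] for k in 1:3 if d[k] > 0)
allow_inc = minimum(-x_B[k] / d[k] for k in 1:3 if d[k] < 0)
@show allow_dec, allow_inc
```

These formulas reproduce the shadow prices $(0, 0.8, 0.3)$, the reduced costs $(-1.5, 0, 0, -0.2)$, and an allowable decrease/increase of 10000 and 40000 on the polish RHS, matching the Gurobi figures below.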
Alternatively, some solvers (like Gurobi) provide this information directly without the need for us to compute it manually. In this case, we can access the data through the Gurobi API directly, which is what we will do in the rest of this example.
# Import Gurobi so we can access the API
import Gurobi
# Get a reference to the internal Gurobi model so we can use the API
g = getrawsolver(model)
Gurobi Model:
    type                            : LP
    sense                           : maximize
    number of variables             = 4
    number of linear constraints    = 3
    number of quadratic constraints = 0
    number of sos constraints       = 0
    number of non-zero coeffs       = 12
    number of non-zero qp objective terms  = 0
    number of non-zero qp constraint terms = 0
We can now access the sensitivity information directly. To do this, we make use of the get_dblattrarray()
function from Gurobi.jl, which allows us to retrieve attributes from the Gurobi API that are arrays of floating point values.
When calling get_dblattrarray()
, we have to specify the internal Gurobi model, the name of the attribute we want to retrieve, the index at which to start reading from the array, and the number of values to read.
In our case, we use the SARHSLow
and SARHSUp
attributes to get the lower and upper bounds on the RHS values, and in each case we start at the first value and read in a total of $m$ values. Similarly, we use SAObjLow
and SAObjUp
to get the lower and upper bounds for the objective coefficients, and in this case we read in all $n$ values.
# RHS value lower and upper bounds
rhs_lb = Gurobi.get_dblattrarray(g, "SARHSLow", 1, m)
rhs_ub = Gurobi.get_dblattrarray(g, "SARHSUp", 1, m)
@show rhs_lb
@show rhs_ub
# Objective coefficient lower and upper bounds
obj_lb = Gurobi.get_dblattrarray(g, "SAObjLow", 1, n)
obj_ub = Gurobi.get_dblattrarray(g, "SAObjUp", 1, n)
@show obj_lb
@show obj_ub;
rhs_lb = [82000.0, 40000.0, 33333.33333333333]
rhs_ub = [1.0e100, 90000.0, 75000.0]
obj_lb = [-1.0e100, 2.357142857142857, 2.499999999999999, -1.0e100]
obj_ub = [3.0000000000000004, 4.5, 3.75, 4.7]
The order of values in these arrays is not necessarily obvious in larger problems, and in general we do not know the order in which Gurobi returns the information. We can use the linearindex()
function on our variables and constraints to find their positions in these arrays:
x_order = map(linearindex, x)
4-element Array{Int64,1}: 1 2 3 4
con_order = map(linearindex, [assembly, polish, pack])
3-element Array{Int64,1}: 1 2 3
We see that the variables and constraints are already in order here, but this isn't always the case, so it pays to rearrange the arrays according to this ordering in case the order differs:
rhs_lb_sorted = rhs_lb[con_order];
rhs_ub_sorted = rhs_ub[con_order];
obj_lb_sorted = obj_lb[x_order];
obj_ub_sorted = obj_ub[x_order];
Now, we can use these lower and upper bounds to obtain the allowable increase and decrease on each objective coefficient and RHS value:
@show rhs_dec = con_rhs - rhs_lb_sorted;
@show rhs_inc = rhs_ub_sorted - con_rhs;
@show obj_dec = obj_coeff - obj_lb_sorted;
@show obj_inc = obj_ub_sorted - obj_coeff;
rhs_dec = con_rhs - rhs_lb_sorted = [18000.0, 10000.0, 26666.66666666667]
rhs_inc = rhs_ub_sorted - con_rhs = [1.0e100, 40000.0, 15000.0]
obj_dec = obj_coeff - obj_lb_sorted = [1.0e100, 0.1428571428571428, 0.5000000000000009, 1.0e100]
obj_inc = obj_ub_sorted - obj_coeff = [1.5000000000000004, 2.0, 0.75, 0.20000000000000018]
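The values of 1.0e100 here are Gurobi's sentinel for an effectively infinite bound (its internal representation of infinity). For cleaner reporting we can map these to Inf; a small sketch using the values computed above:

```julia
# Map Gurobi's 1e100 infinity sentinel to Julia's Inf for display purposes
clean(v) = map(x -> abs(x) >= 1e100 ? sign(x) * Inf : x, v)

rhs_inc = [1.0e100, 40000.0, 15000.0]  # allowable RHS increases from above
@show clean(rhs_inc)
```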
We can put all of this together to form the final Sensitivity Report tables.
First, the variables:
var_sensitivity = [x_final red_costs obj_coeff obj_inc obj_dec]
4×5 Array{Float64,2}:
     0.0  -1.5  1.5  1.5   1.0e100
 16000.0   0.0  2.5  2.0   0.142857
  6000.0   0.0  3.0  0.75  0.5
     0.0  -0.2  4.5  0.2   1.0e100
And similarly for the constraints:
con_sensitivity = [con_final shadow_prices con_rhs rhs_inc rhs_dec]
3×5 Array{Float64,2}:
 82000.0  0.0  100000.0  1.0e100  18000.0
 50000.0  0.8   50000.0  40000.0  10000.0
 60000.0  0.3   60000.0  15000.0  26666.7
This leads to the following tables, which we see are identical to those produced by Excel Solver:
Variables:
Name | Final Value | Red. Cost | Obj. Coeff | Allow. Inc. | Allow. Dec. |
---|---|---|---|---|---|
$x_1$ | 0 | -1.5 | 1.5 | 1.50 | 1E+100 |
$x_2$ | 16000 | 0.0 | 2.5 | 2.00 | 0.1429 |
$x_3$ | 6000 | 0.0 | 3.0 | 0.75 | 0.5 |
$x_4$ | 0 | -0.2 | 4.5 | 0.20 | 1E+100 |
Constraints:
Name | Final Value | Shadow Price | RHS | Allow. Inc. | Allow. Dec. |
---|---|---|---|---|---|
Assembly | 82000 | 0.0 | 100000 | 1E+100 | 18000 |
Polish | 50000 | 0.8 | 50000 | 40000 | 10000 |
Pack | 60000 | 0.3 | 60000 | 15000 | 26667 |
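If we want the report as formatted text rather than a raw matrix, one option is to print it with @printf. A sketch for the variables table, with the values hard-coded from the arrays built above (the column widths and formats are illustrative choices, not part of the original report):

```julia
using Printf  # standard library (@printf is in Base on Julia <= 0.6)

names     = ["x1", "x2", "x3", "x4"]
x_final   = [0.0, 16000.0, 6000.0, 0.0]
red_costs = [-1.5, 0.0, 0.0, -0.2]
obj_coeff = [1.5, 2.5, 3.0, 4.5]
obj_inc   = [1.5, 2.0, 0.75, 0.2]
obj_dec   = [1.0e100, 0.142857, 0.5, 1.0e100]

# Header row followed by one formatted row per variable
@printf("%-5s %12s %10s %11s %12s %12s\n",
        "Name", "Final Value", "Red. Cost", "Obj. Coeff", "Allow. Inc.", "Allow. Dec.")
for i in 1:4
    @printf("%-5s %12.1f %10.2f %11.2f %12.2f %12.4g\n",
            names[i], x_final[i], red_costs[i], obj_coeff[i], obj_inc[i], obj_dec[i])
end
```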