# Fusion: Object-Oriented API for Conic Optimization

Fusion is an Object-Oriented API specifically designed for Conic Optimization. Its main features are:

• (almost) Seamlessly multi-language (C#, Java, Python, Matlab, C++)

• Minimalistic and Safe

• Extremely Disciplined Conic Optimization

• MOSEK gets what you write!

• Fusion provides only a (large) subset of the functionality of the low-level Optimizer API.

# An introduction to building expressions in MOSEK Fusion for Python 3

Fusion solves optimization problems of the form $$\begin{array}{lccl} \mbox{minimize} & c^T x & & \\ \mbox{subject to} & A^i x + b^i & \in & \mathcal{K}^i, \quad \forall i. \\ \end{array}$$

The cones $\mathcal{K}^i$ are

• $\mathbb{R}_+$ - positive orthant,

• $\mathcal{Q}$ - quadratic cone,

• $\mathcal{Q_r}$ - rotated quadratic cone,

• $\mathbb{S}_+$ - PSD matrices.

By a proper choice of dimensions, cones and coefficients, we obtain LPs, CQPs and SDPs.

The objective $c^T x$ and the constraint functions $A^i x + b^i$ are linear expressions because they are linear functions of the variables. This notebook is an introduction to building such expressions in MOSEK Fusion for Python 3.

Let us use the problem $$\begin{array}{lcl} \mbox{minimize} & \left \Vert A x + b \right \Vert & \\ \end{array}$$ to illustrate the basic ideas.

The problem is rewritten as $$\begin{array}{lccl} \mbox{minimize} & t & \\ \mbox{subject to} & \left ( \begin{array}{c} t \\ A x + b \end{array} \right ) & \in & K_q \\ \end{array}$$ where $$K_q = \{x \mid \, x_1 \geq \Vert x_{2:n} \Vert \}$$ is the quadratic cone.
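Since the unregularized problem is plain linear least squares, the epigraph reformulation can be sanity-checked with numpy alone. This is an illustrative sketch, not Fusion code; the matrix `A` and vector `b` below are made up:

```python
import numpy as np

# Illustrative data (made up for this sketch).
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, -1.0, 0.5])

# min ||Ax + b|| is ordinary linear least squares: solve Ax ~ -b.
x_opt, *_ = np.linalg.lstsq(A, -b, rcond=None)

# The optimal t in the conic reformulation is the residual norm,
# since (t; Ax+b) in K_q means exactly t >= ||Ax + b||.
t_opt = np.linalg.norm(A @ x_opt + b)

print(t_opt)
```

The check relies only on the fact that membership $(t; Ax+b) \in K_q$ is equivalent to $t \geq \Vert Ax+b \Vert$, so at the optimum $t$ equals the residual norm.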

In [156]:
import mosek
from mosek.fusion import *

def buildmodel(A,b):
    M = Model('The model')

    (m,n) = A.shape

    print('m: %d n: %d' % (m,n))

    # Define the variables
    t     = M.variable('t', 1, Domain.unbounded())
    x     = M.variable('x', n, Domain.unbounded())

    # Build the linear expression e = (t; Ax+b)
    e     = Expr.vstack(t, Expr.add(Expr.mul(A,x), b))

    # Define the constraint (t;Ax+b) in quadratic cone
    M.constraint('con', Expr.transpose(e), Domain.inQCone())

    # Objective
    M.objective('obj', ObjectiveSense.Minimize, t)

    return M,t,x

Let us investigate the line

e = Expr.vstack(t, Expr.add(Expr.mul(A,x), b))

which implements a linear expression. The explanation of the linear expression is as follows:

Expr.mul(A,x)                            # Expr.mul does matrix multiplication, i.e. Ax.
Expr.add(Expr.mul(A,x),b)                # Expr.add adds the vector b, i.e. Ax+b.
Expr.vstack(t,Expr.add(Expr.mul(A,x),b)) # Expr.vstack stacks t on top of the expression Ax+b.

so it builds the expression
$$\left ( \begin{array}{c} t \\ A x + b \end{array} \right ).$$
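As a plain-numpy analogue (not Fusion code, with made-up data), the stacked expression is simply a vector of length $m+1$ with $t$ on top of $Ax+b$:

```python
import numpy as np

# Made-up data for illustration.
A = np.ones((3, 2))
b = np.zeros(3)
x = np.array([1.0, 2.0])
t = 5.0

# numpy analogue of Expr.vstack(t, Expr.add(Expr.mul(A, x), b)):
# the scalar t stacked on top of the vector Ax + b.
e = np.concatenate(([t], A @ x + b))
print(e)  # a vector of length m + 1 = 4
```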

Next we can define the constraint

M.constraint('con', Expr.transpose(e), Domain.inQCone())

and one may ask why the transpose, i.e. the

Expr.transpose

is needed. The reason is that a list of linear expressions in a quadratic cone would naturally mean that each linear expression in the list should belong to the cone. And

Expr.transpose(e)

is a list containing one linear expression, namely the expression that should be a member of the cone.

In [157]:
import numpy

numpy.random.seed(379) # Fix the seed

m       = 10           # Number of rows in A
n       = 3            # Number of columns in A
mu      = 100
sigma   = 30

# Generate random test data
A       =  numpy.random.normal(mu, sigma, (m,n))
b       =  numpy.random.normal(mu, sigma, m)

M,t,x   = buildmodel(A,b)

M.solve()

print('Primal objective value: %e Optimal t value: %e' % (M.primalObjValue(),t.level()[0]))

m: 10 n: 3
Primal objective value: 1.299406e+02 Optimal t value: 1.299406e+02

Note that writing

e = Expr.vstack(t, A*x + b)   # INVALID: A*x+b is not valid Fusion code

would be invalid: Fusion expressions must be built with operators such as Expr.mul and Expr.add, not with the Python operators * and +.

# A regularized problem

Assume now we want to solve $$\begin{array}{lcl} \mbox{minimize} & \left \Vert A x + b \right \Vert + \lambda \sum_{j=1}^n \vert x_j \vert & \\ \end{array}$$ where $\lambda$ is a positive parameter.

The problem can be rewritten as $$\begin{array}{lccl} \mbox{minimize} & t + \lambda \sum_{j=1}^n z_j & & \\ \mbox{subject to} & \left ( \begin{array}{c} t \\ A x + b \\ \end{array} \right ) & \in & K_q, \\ & \left ( \begin{array}{c} z_j \\ x_j \\ \end{array} \right ) & \in & K_q, \quad j=1,\ldots,n. \\ \end{array}$$ (We leave it to the reader to verify that this reformulation is correct.)
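The reformulation can be checked numerically with plain numpy (an illustrative sketch with made-up data, not Fusion code): for any fixed $x$, the tightest feasible $t$ is $\Vert Ax+b \Vert$ and the tightest feasible $z_j$ is $\vert x_j \vert$, so the reformulated objective equals the original one.

```python
import numpy as np

# Made-up instance for the check.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([0.5, -0.5])
lam = 2.0
x = np.array([1.0, -3.0])

original = np.linalg.norm(A @ x + b) + lam * np.sum(np.abs(x))

t = np.linalg.norm(A @ x + b)  # tightest t with (t; Ax+b) in K_q
z = np.abs(x)                  # tightest z_j with (z_j; x_j) in K_q
reformulated = t + lam * np.sum(z)

print(original, reformulated)  # the two values agree
```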

This model can be implemented as follows.

In [158]:
def buildregumodel(A,b,lam):
    M = Model('The regularized problem')

    (m,n) = A.shape

    # Define the variables
    t     = M.variable('t', 1, Domain.unbounded())
    x     = M.variable('x', n, Domain.unbounded())
    z     = M.variable('z', n, Domain.unbounded())

    # Build the linear expression e = (t; Ax+b)
    e     = Expr.vstack(t, Expr.add(Expr.mul(A,x), b))

    # Define the constraint (t;Ax+b) in quadratic cone
    M.constraint('con', Expr.transpose(e), Domain.inQCone())

    # Define the constraints (z_j;x_j) in quadratic cone, i.e. z_j >= |x_j|
    for j in range(n):
        M.constraint('z_%d' % j, Expr.hstack(z.index(j), x.index(j)), Domain.inQCone())

    # Objective: t + lam*sum(z)
    M.objective('obj', ObjectiveSense.Minimize, Expr.add(t, Expr.mul(lam, Expr.sum(z))))

    return M,t,x

The loop

for j in range(n):
    M.constraint('z_%d' % j, Expr.hstack(z.index(j), x.index(j)), Domain.inQCone())

implements the constraints
$$\left ( \begin{array}{c} z_j \\ x_j \\ \end{array} \right ) \in K_q$$

for all $j$.

Finally, in the objective the reduction-type expression

Expr.sum()

is used, which computes the sum of all the elements in its argument. Then $\lambda$ (called lam in the code) times the sum is added to $t$.

In [159]:
lam   = 2.0 # Lambda
M,t,x = buildregumodel(A,b,lam)

M.solve()

print('Primal objective value: %e Optimal t value: %e' % (M.primalObjValue(),t.level()[0]))
Primal objective value: 1.315865e+02 Optimal t value: 1.299428e+02

# A second regularized problem

Assume now we want to solve $$\begin{array}{lcl} \mbox{minimize} & \left \Vert A x + b \right \Vert + \sum_{j=1}^n \lambda_j \vert x_j \vert & \\ \end{array}$$ where $\lambda$ is a vector of positive parameters.

The problem can be rewritten as $$\begin{array}{lccl} \mbox{minimize} & t + \lambda^T z & \\ \mbox{subject to} & \left ( \begin{array}{c} t \\ A x + b \\ \end{array} \right ) & \in & K_q \\ & \left ( \begin{array}{c} z_j \\ x_j \\ \end{array} \right ) & \in & K_q, \, j=1,\ldots,n.\\ \end{array}$$

In [160]:
def buildregumodel2(A,b,lam):
    M = Model('The regularized problem')

    (m,n) = A.shape

    # Define the variables
    t     = M.variable('t', 1, Domain.unbounded())
    x     = M.variable('x', n, Domain.unbounded())
    z     = M.variable('z', n, Domain.unbounded())

    # Build the linear expression e = (t; Ax+b)
    e     = Expr.vstack(t, Expr.add(Expr.mul(A,x), b))

    # Define the constraint (t;Ax+b) in quadratic cone
    M.constraint('con', Expr.transpose(e), Domain.inQCone())

    # Each row (z_j, x_j) of [z x] belongs to a quadratic cone
    M.constraint('z_j >= |x_j|', Expr.hstack(z,x), Domain.inQCone())

    # Objective: t + lam'*z
    M.objective('obj', ObjectiveSense.Minimize, Expr.add(t, Expr.dot(lam, z)))

    return M,t,x

The idea in

M.constraint('z_j >= |x_j|',Expr.hstack(z,x),Domain.inQCone())

is to build the matrix $$\left [ \begin{array}{cc} z & x \\ \end{array} \right ]$$ which is viewed as a list of rows. The $j$th row has the form $$\left [ \begin{array}{cc} z_j & x_j \\ \end{array} \right ]$$ and we want each row to belong to a quadratic cone. This is exactly what the constraint specifies.
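A plain-numpy analogue (not Fusion code, with made-up values) of this row-wise stacking:

```python
import numpy as np

# Made-up values for z and x.
z = np.array([1.0, 2.0])
x = np.array([0.5, -1.5])

# numpy analogue of Expr.hstack(z, x): an n-by-2 matrix whose j-th
# row is (z_j, x_j). Domain.inQCone() applied to such a matrix puts
# every row in a quadratic cone, i.e. z_j >= |x_j| for each j.
rows = np.column_stack((z, x))
print(rows)
```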

The objective uses the dot operator Expr.dot to compute the inner product of $\lambda$ and $z$, i.e. $$\sum_{j=1}^n \lambda_j z_j = \lambda^T z.$$
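In numpy terms (illustrative only, made-up vectors), Expr.dot(lam, z) corresponds to:

```python
import numpy as np

# Made-up vectors; Expr.dot(lam, z) is the inner product lambda^T z.
lam = np.array([1.0, 2.0, 3.0])
z = np.array([4.0, 5.0, 6.0])
print(lam @ z)  # 1*4 + 2*5 + 3*6 = 32.0
```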

In [161]:
lam   = [2.0]*n # Lambda
M,t,x = buildregumodel2(A,b,lam)

M.solve()

print('Primal objective value: %e Optimal t value: %e' % (M.primalObjValue(),t.level()[0]))
Primal objective value: 1.315865e+02 Optimal t value: 1.299428e+02

# A cheatsheet to building expressions

In [162]:
n   = 2

M   = Model('demo model')

a   = [3.0]*n

x   = M.variable('x',n,Domain.unbounded())
y   = M.variable('y',n,Domain.unbounded())
z   = M.variable('z',n,Domain.unbounded())

# Constant addition
e0  = Expr.add(x, 1.0)          # x+1.0, the constant is added to each element

# Binary version
e1  = Expr.add(x,y)             # x+y
e2  = Expr.add(y,a)             # y+a
e3  = Expr.sub(x,y)             # x-y
e4  = Expr.add(Expr.add(x,y),z) # x+y+z, nested binary version

# List version
e5  = Expr.add([x, y, z])       # x+y+z

# Multiplication
e6  = Expr.mul(7.0,x)           # 7.0*x
e7  = Expr.mulElm(a,x)          # Diag(a)*x, element-wise multiplication

# Inner and outer products
e8  = Expr.dot(a,x)             # a'*x
e9  = Expr.outer(a,x)           # a*x', outer product

# Reduction type operations
e10 = Expr.sum(x)               # sum of all elements of x

print('e0')
print(e0.toString())
print('e1')
print(e1.toString())
print('e2')
print(e2.toString())
print('e3')
print(e3.toString())
print('e4')
print(e4.toString())
print('e5')
print(e5.toString())
print('e6')
print(e6.toString())
print('e7')
print(e7.toString())
print('e8')
print(e8.toString())
print('e9')
print(e9.toString())
print('e10')
print(e10.toString())
e0
Expr(ndim=(2),
[  +  x[0] + 1.0,
+  x[1] + 1.0 ])
e1
Expr(ndim=(2),
[  +  x[0] +  y[0],
+  x[1] +  y[1] ])
e2
Expr(ndim=(2),
[  +  y[0] + 3.0,
+  y[1] + 3.0 ])
e3
Expr(ndim=(2),
[  +  x[0] -  y[0],
+  x[1] -  y[1] ])
e4
Expr(ndim=(2),
[  +  x[0] +  y[0] +  z[0],
+  x[1] +  y[1] +  z[1] ])
e5
Expr(ndim=(2),
[  +  x[0] +  y[0] +  z[0],
+  x[1] +  y[1] +  z[1] ])
e6
Expr(ndim=(2),
[  + 7.0 x[0],
+ 7.0 x[1] ])
e7
Expr(ndim=(2),
[ ([0]) ->  + 3.0 x[0],
([1]) ->  + 3.0 x[1] ])
e8
Expr(ndim=(1),
[  + 3.0 x[0] + 3.0 x[1] ])
e9
Expr(ndim=(2,2),
[  + 3.0 x[0],
+ 3.0 x[1],
+ 3.0 x[0],
+ 3.0 x[1] ])