CasADi live Python3

Try out CasADi in your browser.

Hello World

To run a cell, click in it and hit Shift+Enter. You may need to do 'Kernel->Restart' if the kernel has been idle for a while.

In [ ]:
from casadi import *

x = SX.sym("x")
print(jacobian(sin(x),x))

Tutorial examples from paper.casadi.org

Let us start out by finding the minimum of Rosenbrock's banana-valley function:

\begin{equation} \underset{\begin{array}{c}x, y\end{array}}{\text{minimize}} \quad x^2 + 100 \, (y-(1-x)^2)^2 \label{eq:rosenbrock} \end{equation}

By inspection, we can see that its unique solution is $(0,1)$: both terms are nonnegative and vanish simultaneously only for $x=0$, $y=1$. The problem can be formulated and solved with CasADi as follows:

In [ ]:
from casadi import *

# Symbolic representation
x=SX.sym('x')
y=SX.sym('y')
z=y-(1-x)**2
f=x**2+100*z**2
P=dict(x=vertcat(x,y),f=f)

# Create solver instance
F=nlpsol('F','ipopt',P)

# Solve the problem
r=F(x0=[2.5,3.0])
print(r['x'])

Let us reformulate the above as a constrained optimization problem, introducing a decision variable corresponding to the intermediate expression z above:

\begin{equation} \begin{array}{cl} \underset{\begin{array}{c}x, y, z\end{array}}{\text{minimize}} \quad & x^2 + 100 \, z^2 \\ \text{subject to} \quad & z+(1-x)^2 - y = 0. \end{array} \label{eq:rosenbrock_nlp} \end{equation}

The problem can be formulated and solved with CasADi as follows:

In [ ]:
from casadi import *

# Formulate the NLP
x=SX.sym('x')
y=SX.sym('y')
z=SX.sym('z')
f=x**2+100*z**2
g=z+(1-x)**2-y
P=dict(x=vertcat(x,y,z),f=f,g=g)

# Create solver instance
F=nlpsol('F','ipopt',P)

# Solve the problem
r=F(x0=[2.5,3.0,0.75],ubg=0,lbg=0)
print(r['x'])

Since the paper came out, a higher-level abstraction for modeling NLPs, the Opti stack, has been introduced. Here is the above example remodeled with it:

In [ ]:
from casadi import *

opti = Opti()

x = opti.variable()
y = opti.variable()
z = opti.variable()

opti.minimize(x**2+100*z**2)
opti.subject_to(z+(1-x)**2-y==0)

opti.set_initial(x,2.5)
opti.set_initial(y,3.0)
opti.set_initial(z,0.75)

opti.solver('ipopt')
sol = opti.solve()

print(sol.value(x),sol.value(y))

We now shift our attention to simulation and sensitivity analysis using CasADi's integrator objects. Consider the following initial-value problem in ODE form, corresponding to a Van der Pol oscillator:

\begin{align} \left\{ \begin{array}{ccll} \dot{x}_1 &=& (1-x_2^2) \, x_1 - x_2 + p, \quad &x_1(0)=0 \\ \dot{x}_2 &=& x_1, \quad &x_2(0)=1 \end{array} \right. \label{eq:vdp_dae} \end{align}

With $p$ fixed to 0.1, we wish to solve for $x_{\text{f}} := x(1)$. This can be done as follows:

In [ ]:
from casadi import *

# Formulate the ODE
x=SX.sym('x',2)
p=SX.sym('p')
z=1-x[1]**2
f=vertcat(z*x[0]-x[1]+p,x[0])
dae=dict(x=x,p=p,ode=f)

# Create solver instance
options=dict(t0=0,tf=1)
F=integrator('F','cvodes',dae,options)

# Solve the problem
r=F(x0=[0,1],p=0.1)
print(r['xf'])

# Create Jacobian function
D=F.factory('D',['x0','p'],['jac:xf:x0'])

# Solve the problem
r=D(x0=[0,1],p=0.1)
print(r['jac_xf_x0'])
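
As a quick sanity check (a sketch, not part of the original example), the first column of this Jacobian can be compared against a forward finite difference in $x_1(0)$; the step size 1e-6 below is an arbitrary choice and the integrator F from the cell above is reused.

In [ ]:
# Sketch: finite-difference check of the first column of jac_xf_x0
# (reuses the integrator F from the cell above; eps is an arbitrary step size)
eps = 1e-6
r0 = F(x0=[0,1],p=0.1)
r1 = F(x0=[eps,1],p=0.1)
print((r1['xf']-r0['xf'])/eps)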

By combining the nonlinear programming example with the embeddable integrator, we can implement the direct multiple shooting method by Bock and Plitt. We will consider a simple OCP, with the dynamics reformulated as a DAE and with $p$ replaced by a time-varying control $u$:

\begin{align} \displaystyle \underset{\begin{array}{c}x(\cdot), z(\cdot), u(\cdot)\end{array}} {\text{minimize}}\quad &\displaystyle \int_{0}^{T}{ \left( x_1(t)^2 + x_2(t)^2 + u(t)^2 \right) \, dt} \\ \text{subject to} \, \quad & \left\{ \begin{array}{l} \dot{x}_1(t) = z(t) \, x_1(t) - x_2(t) + u(t) \\ \dot{x}_2(t) = x_1(t) \\ 0 = x_2(t)^2 + z(t) - 1\\ -1.0 \le u(t) \le 1.0, \quad x_1(t) \ge -0.25 \end{array} \right. \quad t \in [0,T] \\ & x_1(0)=0, \quad x_2(0)=1 \label{eq:vdp} \end{align} where $x(\cdot) \in \mathbb{R}^2$ is the (differential) state, $z(\cdot) \in \mathbb{R}$ is the algebraic variable and $u(\cdot) \in \mathbb{R}$ is the control. We let $T=10$.

Our goal is to transcribe the OCP to NLP form. In the direct approach, the first step in this process is a parameterization of the control trajectory. For simplicity, we assume a uniformly spaced, piecewise constant control trajectory: $$ u(t) := u_k \quad \text{for $t \in [t_k, t_{k+1}), \quad k=0, \ldots, N-1$} \quad \text{with $t_k := k \, T / N$.} $$
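
As a small illustration (assuming the values $T=10$ and $N=20$ that are also used in the multiple shooting cell below), the grid points of this parameterization can be written out explicitly:

In [ ]:
# Sketch: uniform time grid for the piecewise constant control parameterization
# (T and N match the values used in the multiple shooting cell below)
T = 10.0   # time horizon
N = 20     # number of control intervals
tgrid = [k*T/N for k in range(N+1)]   # t_0, ..., t_N
print(tgrid)
# u(t) equals the constant u_k on each interval [t_k, t_{k+1}), k = 0, ..., N-1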

With the control fixed over one interval, we can use an integrator to reformulate the problem from continuous time to discrete time. Since we now have a DAE, we introduce an algebraic variable $z$ and the corresponding algebraic equation $g$ and use IDAS instead of CVODES. We also introduce a quadrature for calculating the contributions to the cost function.

In [ ]:
from casadi import *

# Formulate the DAE
x=SX.sym('x',2)
z=SX.sym('z')
u=SX.sym('u')
f=vertcat(z*x[0]-x[1]+u,x[0])
g=x[1]**2+z-1
h=x[0]**2+x[1]**2+u**2
dae=dict(x=x,p=u,ode=f,z=z,alg=g,quad=h)

# Create solver instance
T = 10. # end time
N = 20 # discretization
options=dict(t0=0,tf=T/N)
F=integrator('F','idas',dae,options)

# Empty NLP
w=[]; lbw=[]; ubw=[]
G=[]; J=0

# Initial conditions
Xk=MX.sym('X0',2)
w+=[Xk]
lbw+=[0,1]
ubw+=[0,1]

for k in range(1,N+1):
  # Local control
  name='U'+str(k-1)
  Uk=MX.sym(name)
  w+=[Uk]
  lbw+=[-1]
  ubw+=[ 1]

  # Call integrator
  Fk=F(x0=Xk,p=Uk)
  J+=Fk['qf']

  # Local state
  name='X'+str(k)
  Xk=MX.sym(name,2)
  w+=[Xk]
  lbw+=[-.25,-inf]
  ubw+=[ inf, inf]
  G+=[Fk['xf']-Xk]

# Create NLP solver
nlp=dict(f=J,g=vertcat(*G),x=vertcat(*w))
S=nlpsol('S','ipopt',nlp,dict(ipopt=dict(tol=1e-6)))

# Solve NLP
r=S(lbx=lbw,ubx=ubw,x0=0,lbg=0,ubg=0)
print(r['x'])
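
The decision vector is stacked in the order $X_0, U_0, X_1, U_1, \ldots, X_N$, so the state and control trajectories can be recovered by striding through the solution. The following sketch (using numpy, which is not required by CasADi itself) assumes that layout:

In [ ]:
# Sketch: unpack the stacked solution vector (layout X0, U0, X1, U1, ..., XN as built above)
import numpy as np
w_opt = np.array(r['x']).flatten()
x1_opt = w_opt[0::3]   # x_1 at t_0, ..., t_N
x2_opt = w_opt[1::3]   # x_2 at t_0, ..., t_N
u_opt  = w_opt[2::3]   # u_k on each control interval
print(x1_opt[-1], x2_opt[-1])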

Again, we note that the same can be achieved using the higher-level abstraction for modeling NLPs:

In [ ]:
from casadi import *

# Formulate the DAE
x=SX.sym('x',2)
z=SX.sym('z')
u=SX.sym('u')
f=vertcat(z*x[0]-x[1]+u,x[0])
g=x[1]**2+z-1
h=x[0]**2+x[1]**2+u**2
dae=dict(x=x,p=u,ode=f,z=z,alg=g,quad=h)

# Create solver instance
T = 10. # end time
N = 20 # discretization
options=dict(t0=0,tf=T/N)
F=integrator('F','idas',dae,options)

opti = Opti()

# Build up objective
J=0

# Initial conditions
Xk = opti.variable(2)
opti.subject_to(Xk==vertcat(0,1))

for k in range(1,N+1):
  # Local control
  Uk = opti.variable()
  opti.subject_to(opti.bounded(-1,Uk,1))

  # Call integrator
  Fk=F(x0=Xk,p=Uk)
  J+=Fk['qf']

  # Local state
  Xk = opti.variable(2)
  opti.subject_to(Xk[0]>=-0.25)

  opti.subject_to(Fk['xf']==Xk)

opti.minimize(J)

# Create NLP solver
opti.solver('ipopt',{},dict(tol=1e-6))

# Solve NLP
sol = opti.solve()
print(sol.value(opti.x))