An ODE of order $n$
$$ F(t, y^{(0)}, y^{(1)}, \dots, y^{(n)}) = 0 $$
contains derivatives $y^{(k)}(t) \equiv y^{(k)} \equiv \frac{d^{k}y(t)}{dt^{k}}$ up to the $n$-th derivative (and $y^{(0)} \equiv y$).
A linear ODE contains no higher powers than 1 of any of the $y^{(k)}$.
Superposition principle: Linear combinations of solutions are also solutions.
Non-linear ODEs can contain arbitrary powers of the dependent variable and its derivatives.
There is no superposition of solutions, and such ODEs are often impossible to solve analytically.
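As a quick numerical illustration of the superposition principle (my own example, not part of the notes): for the linear ODE $\ddot{y} = -y$, both $\sin t$ and $\cos t$ are solutions, and so is any linear combination. We can check this with a central finite-difference second derivative:

```python
import numpy as np

t = np.linspace(0, 2*np.pi, 2001)
h = t[1] - t[0]

def residual(y):
    """Max |y'' + y| on the interior grid, using a central difference
    for the second derivative (should be ~0 for a solution of y'' = -y)."""
    d2 = (y[2:] - 2*y[1:-1] + y[:-2]) / h**2
    return np.abs(d2 + y[1:-1]).max()

y1 = np.sin(t)
y2 = np.cos(t)
combo = 2.0*y1 - 3.0*y2      # arbitrary linear combination

print(residual(y1), residual(y2), residual(combo))   # all close to zero
```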
(Force is often derived from a potential energy $U(x)$ and may contain non-linear terms such as $x^{-2}$ or $x^3$.)
import numpy as np

def U1(x, k=1):
    """Harmonic potential"""
    return 0.5 * k * x*x

def U2(x, k=1, alpha=0.5):
    """Anharmonic potential"""
    return 0.5 * k * x*x * (1 - (2/3)*alpha*x)

def U3(x, k=1, p=6):
    """Power-law potential"""
    return (k/p) * np.power(x, p)
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('seaborn-talk')
%matplotlib inline
X = np.linspace(-3, 3, 100)
ax = plt.subplot(1,1,1)
ax.plot(X, U1(X), label=r"$U_1$")
ax.plot(X, U2(X), label=r"$U_2$")
ax.plot(X, U3(X), label=r"$U_3$")
ax.set_ylim(-0.5, 10)
ax.legend(loc="upper center");
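The corresponding forces follow from $F = -\frac{dU}{dx}$; for instance, differentiating `U2` gives a force with a non-linear $x^2$ term. A small sketch (the analytic derivative `F2` is worked out here, not taken from the notes), checked against a numerical derivative of `U2`:

```python
import numpy as np

def U2(x, k=1, alpha=0.5):
    """Anharmonic potential (as defined above)"""
    return 0.5 * k * x*x * (1 - (2/3)*alpha*x)

def F2(x, k=1, alpha=0.5):
    """Force from U2: F = -dU2/dx = -k x (1 - alpha x),
    which contains a non-linear x**2 term."""
    return -k * x * (1 - alpha*x)

# check the analytic force against a central-difference derivative of U2
x = np.linspace(-2, 2, 11)
eps = 1e-6
F_numeric = -(U2(x + eps) - U2(x - eps)) / (2*eps)
print(np.abs(F2(x) - F_numeric).max())   # close to zero
```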
Basic idea: discretize the derivative with a finite difference and step the solution forward in time.
Possible issues: accuracy and stability of the scheme.
Simple: forward difference
\begin{align} f(t, y) = \frac{dy(t)}{dt} &\approx \frac{y(t_{n+1}) - y(t_n)}{h}\\ y_{n+1} &\approx y_n + h f(t_n, y_n) \quad \text{with} \quad y_n := y(t_n) \end{align}
The local error per step is $\mathcal{O}(h^2)$ (bad!).
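The $\mathcal{O}(h^2)$ local error can be checked with a one-step experiment (a sketch using the test problem $dy/dt = y$, $y(0)=1$, which is not from the notes): halving $h$ should reduce the single-step error by a factor of about 4.

```python
import numpy as np

def euler_step(y, f, t, h):
    """One forward-Euler step."""
    return y + h * f(t, y)

f = lambda t, y: y          # test ODE dy/dt = y, exact solution exp(t)

def step_error(h):
    """Error of a single Euler step starting from t=0, y=1."""
    return abs(np.exp(h) - euler_step(1.0, f, 0.0, h))

h = 0.1
ratio = step_error(h) / step_error(h/2)
print(ratio)   # close to 4, consistent with a local error ~ h**2
```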
Also: what if we have a second-order ODE?! We only used $dy/dt$.
The 2nd order ODE is $$ \frac{d^2 y}{dt^2} = f(t, y) $$
Introduce "dummy" dependent variables $y_i$ with $y_0 \equiv y$ and
\begin{alignat}{1} \frac{dy}{dt} &= \frac{dy_0}{dt} &= y_1\\ \frac{d^2y}{dt^2} &= \frac{dy_1}{dt} &= {} f(t, y_0). \end{alignat}The first equation defines the velocity $y_1 = v$ and the second one is the original ODE.
The $n$-th order ODE is $$ \frac{d^n y}{dt^n} = f(t, y, \frac{d y}{dt}, \frac{d^2 y}{dt^2}, \dots, \frac{d^{n-1} y}{dt^{n-1}}) $$
Introduce "dummy" dependent variables $y^{(i)}$ with $y^{(0)} \equiv y$ and
\begin{align} \frac{dy^{(0)}}{dt} &= y^{(1)}\\ \frac{dy^{(1)}}{dt} &= y^{(2)}\\ \dots & \\ \frac{dy^{(n-1)}}{dt} &= f(t, y^{(0)}, y^{(1)}, y^{(2)}, \dots, y^{(n-1)}). \end{align}1 ODE of any order $n$ $\rightarrow$ $n$ coupled simultaneous first-order ODEs in $n$ unknowns $y^{(0)}, \dots, y^{(n-1)}$:
\begin{align} \frac{dy^{(0)}}{dt} &= f^{(0)}(t, y^{(0)}, \dots, y^{(n-1)})\\ \frac{dy^{(1)}}{dt} &= f^{(1)}(t, y^{(0)}, \dots, y^{(n-1)})\\ \vdots & \\ \frac{dy^{(n-1)}}{dt} &= f^{(n-1)}(t, y^{(0)}, \dots, y^{(n-1)})\\ \end{align}In $n$-dimensional vector notation:
\begin{align} \frac{d\mathbf{y}(t)}{dt} &= \mathbf{f}(t, \mathbf{y})\\ \mathbf{y} &= \left(\begin{array}{c} y^{(0)}(t) \\ y^{(1)}(t) \\ \vdots \\ y^{(n-1)}(t) \end{array}\right), \quad \mathbf{f} = \left(\begin{array}{c} f^{(0)}(t, \mathbf{y}) \\ f^{(1)}(t, \mathbf{y}) \\ \vdots \\ f^{(n-1)}(t, \mathbf{y}) \end{array}\right) \end{align}
The RHS $\mathbf{f}$ must not contain any explicit derivatives, but components of $\mathbf{y}$ can represent derivatives.
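As a concrete illustration (my own example, not in the notes): the third-order ODE $d^3y/dt^3 = -y$ becomes a system of three first-order ODEs with $\mathbf{f}(t, \mathbf{y}) = (y^{(1)}, y^{(2)}, -y^{(0)})$:

```python
import numpy as np

def f(t, y):
    """RHS of the standard form for d^3 y/dt^3 = -y:
    y[0] = y, y[1] = dy/dt, y[2] = d^2 y/dt^2."""
    return np.array([y[1], y[2], -y[0]])

# one Euler step from y(0)=1, y'(0)=0, y''(0)=0
h = 0.1
y = np.array([1.0, 0.0, 0.0])
y = y + h * f(0.0, y)
print(y)    # [ 1.   0.  -0.1]
```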
One 2nd order ODE
$$ \frac{d^2 x}{dt^2} = m^{-1} F\Big(t, x, \frac{dx}{dt}\Big) $$to two simultaneous 1st order ODEs:
\begin{align} \frac{dy^{(0)}}{dt} &= y^{(1)}(t)\\ \frac{dy^{(1)}}{dt} &= m^{-1} F\Big(t, y^{(0)}, y^{(1)}\Big) \end{align}With $F_1 = -k x$: $$ \frac{d^2 x}{dt^2} = -m^{-1}k x $$ convert to
\begin{align} \frac{dy^{(0)}}{dt} &= y^{(1)}(t) \\ \frac{dy^{(1)}}{dt} &= -m^{-1}k y^{(0)} \end{align}Force (or derivative) function $\mathbf{f}$ and initial conditions:
\begin{alignat}{3} f^{(0)}(t, \mathbf{y}) &= y^{(1)}, &\quad y^{(0)}(0) &= x_0,\\ f^{(1)}(t, \mathbf{y}) &= -m^{-1} k y^{(0)}, &\quad y^{(1)}(0) &= v_0. \end{alignat}Given the $n$-dimensional vectors from the ODE standard form
$$ \frac{d\mathbf{y}}{dt} = \mathbf{f}(t, \mathbf{y}) $$the Euler rule amounts to
\begin{align} \mathbf{f}(t, \mathbf{y}) = \frac{d\mathbf{y}(t)}{dt} &\approx \frac{\mathbf{y}(t_{n+1}) - \mathbf{y}(t_n)}{\Delta t}\\ \mathbf{y}_{n+1} &\approx \mathbf{y}_n + \Delta t\, \mathbf{f}(t_n, \mathbf{y}_n) \quad \text{with} \quad \mathbf{y}_n := \mathbf{y}(t_n) \end{align}
We now integrate the harmonic oscillator with $k=1$, $m=1$, $x_0 = 0$, and $v_0 = +1$.
In `f_harmonic` we construct the force vector of the standard ODE representation. `y` is the vector of dependents in the standard representation; we pre-allocate it and then assign to its individual elements with `y[:] = ...`.
import numpy as np

def F1(x, k=1):
    """Harmonic force"""
    return -k*x

def f_harmonic(t, y, k=1, m=1):
    """Force vector in standard ODE form (n=2)"""
    return np.array([y[1], F1(y[0], k=k)/m])

t_max = 100
h = 0.01
Nsteps = int(t_max/h)
t_range = h * np.arange(Nsteps)
x = np.empty_like(t_range)
y = np.zeros(2)

# initial conditions
x0, v0 = 0.0, 1.0
y[:] = x0, v0

for i, t in enumerate(t_range):
    # store position that corresponds to time t_i
    x[i] = y[0]
    # Euler integrator
    y[:] = y + h * f_harmonic(t, y)
Plot the position $x(t)$ (which is $y_0$) against time:
plt.plot(t_range, x)
plt.plot(t_range, np.sin(t_range), color="red")
We can make the Euler integrator a function, which makes the code more readable and modular, and we can make the whole integration a function, too. This will allow us to easily run the integration with different initial values or step sizes $h$.
import numpy as np

def F1(x, k=1):
    """Harmonic force"""
    return -k*x

def f_harmonic(t, y, k=1, m=1):
    """Force vector in standard ODE form (n=2)"""
    return np.array([y[1], F1(y[0], k=k)/m])

def euler(y, f, t, h):
    """Euler integrator.

    Returns new y at t+h.
    """
    return y + h * f(t, y)

def integrate(x0=0, v0=1, t_max=100, h=0.001):
    """Integrate the harmonic oscillator with force F1.

    Note that the spring constant k and particle mass m are currently
    pre-defined.

    Arguments
    ---------
    x0 : float
        initial position
    v0 : float
        initial velocity
    t_max : float
        time to integrate out to
    h : float, default 0.001
        integration time step

    Returns
    -------
    Tuple ``(t, x)`` with times and positions.
    """
    Nsteps = int(t_max/h)
    t_range = h * np.arange(Nsteps)
    x = np.empty_like(t_range)
    y = np.zeros(2)
    # initial conditions
    y[:] = x0, v0
    for i, t in enumerate(t_range):
        # store position that corresponds to time t_i
        x[i] = y[0]
        # Euler integrator
        y[:] = euler(y, f_harmonic, t, h)
    return t_range, x
Plot the position as a function of time, $x(t)$.
t, x = integrate(t_max=300, h=0.001)
plt.plot(t, x)
plt.plot(t, np.sin(t), color="red")
Analytical solution: $$ \omega = \sqrt{\frac{k}{m}} $$ and $$ x(t) = A \sin(\omega t), $$ here with $A = v_0/\omega = 1$ and $\omega = 1$ (the red curve in the plots).
Note the increase in amplitude: the Euler integrator does not conserve the energy of the oscillator. Explore whether a smaller $h$ fixes this obvious problem.
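The growing amplitude corresponds to a growing total energy $E = \frac{1}{2}mv^2 + \frac{1}{2}kx^2$, which should be conserved. A quick check of this (a sketch re-using the Euler loop from above, with $k=m=1$):

```python
import numpy as np

def f_harmonic(t, y, k=1, m=1):
    """Force vector in standard ODE form (n=2), as above"""
    return np.array([y[1], -k*y[0]/m])

def energy(y, k=1, m=1):
    """Total energy of the harmonic oscillator for y = [x, v]."""
    return 0.5*m*y[1]**2 + 0.5*k*y[0]**2

h = 0.01
nsteps = 10000               # integrate out to t_max = 100
y = np.array([0.0, 1.0])     # x0 = 0, v0 = 1
E0 = energy(y)
for i in range(nsteps):
    y = y + h * f_harmonic(i*h, y)
print(E0, energy(y))         # the energy grows instead of staying at 0.5
```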
t, x = integrate(h=0.001)
plt.plot(t, x)
plt.plot(t, np.sin(t), color="red")
Smaller $h$ improves the integration (but Euler is still a bad algorithm... just run out for longer, i.e., higher `t_max`).
How does the global error after time $t$ behave, i.e., what is its dependency on step size $h$?
We assume that the global error accumulates linearly from the local errors; with $N = t/h \propto h^{-1}$ steps,
$$ \mathcal{O}(h^2) \times \mathcal{O}(h^{-1}) = \mathcal{O}(h). $$
Thus, halving $h$ should halve the global error.
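This scaling can be verified numerically (a sketch using the test problem $dy/dt = -y$, $y(0)=1$, with exact solution $e^{-t}$; the test problem is my choice, not from the notes):

```python
import numpy as np

def global_error(h, t_end=1.0):
    """Global error of forward Euler for dy/dt = -y, y(0) = 1, at t_end."""
    y = 1.0
    for _ in range(int(round(t_end/h))):
        y = y + h * (-y)
    return abs(y - np.exp(-t_end))

ratio = global_error(0.01) / global_error(0.005)
print(ratio)   # close to 2: halving h halves the global error
```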
We can apply a simple trick to turn standard (forward) Euler into a much better algorithm but this only works nicely when we solve Hamilton's equations of motion (Newton's equation without velocity-dependent potentials) and it also does not work nicely with the standard form.
In the following, $x$ is the position, $v = \dot{x} = \frac{dx}{dt}$ the velocity, and $f = F/m$ is the acceleration (but keeping the suggestive letter "f" for the force). Note that the force only depends on positions, $f(x) = F(x)/m$.
In terms of these variables, the standard (forward) Euler scheme reads
\begin{align} x_{n+1} &= x_n + h v_n\\ v_{n+1} &= v_n + h f(x_n) \end{align}
This is the same algorithm that we discussed in a more general form above.
The trick is to use the updated positions $x_{n+1}$ for the velocity $v_{n+1}$ update, which gives the semi-implicit Euler method:
\begin{align} x_{n+1} &= x_n + h v_n\\ v_{n+1} &= v_n + h f(x_{n+1}) \end{align}
This algorithm conserves energy over the long term (the energy error stays bounded instead of growing) because it is a symplectic algorithm: it maintains the deeper mathematical structures and invariants of Hamiltonian dynamics, although its error per step is still the same as for standard Euler.
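A side-by-side sketch for the harmonic oscillator ($k=m=1$, so $f(x) = -x$; the comparison is mine, not from the notes): the semi-implicit update keeps the energy bounded near its initial value, while standard Euler's energy grows steadily.

```python
def energy(x, v):
    """Total energy for k = m = 1."""
    return 0.5*v*v + 0.5*x*x

h, nsteps = 0.01, 10000      # integrate out to t = 100

# standard (forward) Euler: both updates use the OLD position/velocity
x, v = 0.0, 1.0
for _ in range(nsteps):
    x, v = x + h*v, v - h*x

# semi-implicit Euler: the velocity update uses the NEW position
xs, vs = 0.0, 1.0
for _ in range(nsteps):
    xs = xs + h*vs
    vs = vs - h*xs

print(energy(x, v), energy(xs, vs))   # Euler drifts up; symplectic stays near 0.5
```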