We consider a one-dimensional cart position tracking problem; see Faragher (2012).
The hidden states are the position $z_t$ and velocity $\dot z_t$. We can apply an external acceleration/braking force $u_t$. The noisy observations are denoted by $x_t$.
The equations of motion are given by
$$\begin{align*}
z_t &= \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} z_{t-1} + \begin{bmatrix} \Delta t^2/2 \\ \Delta t \end{bmatrix} u_t + w_t\,, \qquad w_t \sim \mathcal{N}(0,\Sigma_z) \\
x_t &= z_t + v_t\,, \qquad v_t \sim \mathcal{N}(0,\Sigma_x)
\end{align*}$$
where $w_t$ and $v_t$ are the process and observation noise, respectively. Given the observations $x^T \triangleq (x_1,\dots,x_T)$, the goal is to infer a latent state sequence $z^T$ that 'explains' the time series $x^T$.
Although the generative model factorizes as
$$\begin{align*}
p(x^T,z^T) &= p(z_1) \prod_{t=2}^T p(z_t\,|\,z_{t-1}) \prod_{t=1}^T p(x_t\,|\,z_t)\,,
\end{align*}$$
the following statement holds: $$p(x_t\,|\,x_{t-1},x_{t-2}) \neq p(x_t\,|\,x_{t-1})\,.$$ In other words, the latent variables $z_t$ represent a memory bank for past observations beyond $t-1$.
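The factorization above suggests ancestral sampling: draw $z_1$, then each $z_t$ given $z_{t-1}$, then each $x_t$ given $z_t$. A minimal sketch for a scalar linear Gaussian state-space model (parameter values are purely illustrative):

```julia
using Random
Random.seed!(1)

a, c = 0.9, 1.0        # state transition and observation coefficients
σz, σx = 0.5, 0.2      # process and observation noise std. deviations
T = 100

z = zeros(T); x = zeros(T)
z[1] = randn()                       # sample z₁ from p(z₁)
x[1] = c * z[1] + σx * randn()       # sample x₁ from p(x₁|z₁)
for t = 2:T
    z[t] = a * z[t-1] + σz * randn() # sample z_t from p(z_t|z_{t-1})
    x[t] = c * z[t] + σx * randn()   # sample x_t from p(x_t|z_t)
end
```

Even though each draw only conditions on the previous step, the marginal observation sequence $x^T$ is not Markov, because the latent $z_t$ carries information from the whole past.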
In a hidden Markov model, the state transition model $p(z_t\,|\,z_{t-1})$ is usually accompanied by an initial state distribution $\pi_k \triangleq p(z_{1k}=1)$.
For a scalar model, the filtering posterior is Gaussian, $p(z_t\,|\,x^t) = \mathcal{N}(z_t\,|\,\mu_t,\sigma_t^2)$, with $$\begin{align*} \rho_t^2 &= \sigma_z^2 + a^2 \sigma_{t-1}^2 \tag{auxiliary variable}\\ K_t &= \frac{c \rho_t^2}{c^2 \rho_t^2 + \sigma_x^2} \tag{'Kalman gain'} \\ \mu_t &= a \mu_{t-1} + K_t \cdot \left( x_t - c a \mu_{t-1}\right) \tag{posterior mean}\\ \sigma_t^2 &= \left( 1 - K_t c \right) \rho_t^2 \tag{posterior variance} \end{align*}$$
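A single scalar filtering step can be transcribed almost symbol-for-symbol into Julia. The sketch below uses illustrative parameter values (all names are ours, not from the helper scripts):

```julia
a, c = 1.0, 1.0            # state transition and observation coefficients
σz2, σx2 = 0.5, 1.0        # process and observation noise variances
μ_prev, σ2_prev = 0.0, 1.0 # previous posterior mean and variance
x_t = 1.2                  # new observation

ρ2 = σz2 + a^2 * σ2_prev              # auxiliary variable
K  = c * ρ2 / (c^2 * ρ2 + σx2)        # Kalman gain
μ  = a * μ_prev + K * (x_t - c * a * μ_prev)  # posterior mean
σ2 = (1 - K * c) * ρ2                 # posterior variance
```

With these numbers, $\rho_t^2 = 1.5$, $K_t = 0.6$, $\mu_t = 0.72$, and $\sigma_t^2 = 0.6$; note the posterior variance is smaller than the predictive variance $\rho_t^2$, as the observation adds information.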
In the multivariate case, the Kalman filter update equations are given by (see Bishop, pg. 639) $$\begin{align*} P_t &= A V_{t-1} A^T + \Gamma &&\text{auxiliary variable}\\ K_t &= P_t C^T \cdot \left( \Sigma + C P_t C^T\right)^{-1} &&\text{Kalman gain matrix} \\ \mu_t &= A \mu_{t-1} + K_t\cdot\left(x_t - C A \mu_{t-1} \right) &&\text{posterior state mean}\\ V_t &= \left(I-K_t C \right) P_t &&\text{posterior state variance} \end{align*}$$
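The multivariate updates can be wrapped in a small helper function (the name `kalman_step` and its argument order are ours). With $1\times 1$ matrices it reproduces the scalar equations above, which is a useful sanity check:

```julia
using LinearAlgebra

# One Kalman filter step for the model z_t = A z_{t-1} + w_t, x_t = C z_t + v_t,
# with process noise covariance Γ and observation noise covariance Σ.
function kalman_step(μ_prev, V_prev, x, A, C, Γ, Σ)
    P = A * V_prev * A' + Γ               # auxiliary (predictive) covariance
    K = P * C' * inv(Σ + C * P * C')      # Kalman gain matrix
    μ = A * μ_prev + K * (x - C * A * μ_prev)  # posterior state mean
    V = (I - K * C) * P                   # posterior state covariance
    return μ, V
end
```

Note that `I` is Julia's `UniformScaling`, so the same code works for any state dimension without constructing an explicit identity matrix.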
using LinearAlgebra, PyPlot
include("scripts/cart_tracking_helpers.jl")
# Specify the model parameters
Δt = 1.0 # assume the time steps to be equal in size
A = [1.0 Δt;
     0.0 1.0]
b = [0.5*Δt^2; Δt]
Σz = convert(Matrix,Diagonal([0.2*Δt; 0.1*Δt])) # process noise covariance
Σx = convert(Matrix,Diagonal([1.0; 2.0])) # observation noise covariance;
# Generate noisy observations
n = 10 # perform 10 timesteps
z_start = [10.0; 2.0] # initial state
u = 0.2 * ones(n) # constant input u
noisy_x = generateNoisyMeasurements(z_start, u, A, b, Σz, Σx);
m_z = noisy_x[1] # initial predictive mean
V_z = A * (1e8 * Matrix(I, 2, 2)) * A' + Σz # initial predictive covariance (very uncertain prior)
for t = 2:n
    global m_z, V_z, m_pred_z, V_pred_z
    # predict
    m_pred_z = A * m_z + b * u[t]        # predictive mean
    V_pred_z = A * V_z * A' + Σz         # predictive covariance
    # update (observation model C = I)
    gain = V_pred_z * inv(V_pred_z + Σx) # Kalman gain
    m_z = m_pred_z + gain * (noisy_x[t] - m_pred_z) # posterior mean update
    V_z = (I - gain) * V_pred_z          # posterior covariance update
end
plotCartPrediction2(m_pred_z[1], V_pred_z[1,1], m_z[1], V_z[1,1], noisy_x[n][1], Σx[1,1]);