# Kalman Filter in 1 Dimension (Distance)

In [40]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
from scipy import stats  # stats.norm.pdf replaces the removed matplotlib.mlab.normpdf
import time
sb.set()  # Reset Style
#sb.set(style="white")
sb.set_context("talk")

In [41]:
%matplotlib inline
fw = 10 # figure width

In [42]:
# Plot the Distributions in this range:
x = np.linspace(-100,100,1000)


## In the beginning

In [43]:
mean0 = 0.0   # e.g. meters or miles
var0  = 20.0

In [44]:
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, mean0, np.sqrt(var0)), label='Normal Distribution')  # norm.pdf takes the standard deviation, not the variance
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.xlabel('Position');


You are at position 0, but you are quite uncertain about it (a wide, flat normal distribution).

## Now we have something that estimates the moved distance

In [45]:
meanMove = 25.0  # e.g. meters, calculated from velocity*dt or step counter or wheel encoder ...
varMove  = 10.0   # Estimated or determined with static measurements

In [46]:
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, meanMove, np.sqrt(varMove)), label='Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.xlabel('Distance moved');


## Both Distributions have to be merged together

$\mu_\text{new}=\mu_\text{0}+\mu_\text{move}$ is the new mean and $\sigma^2_\text{new}=\sigma^2_\text{0}+\sigma^2_\text{move}$ is the new variance.

In [47]:
def predict(var, mean, varMove, meanMove):
    new_var = var + varMove
    new_mean = mean + meanMove
    return new_var, new_mean

In [48]:
new_var, new_mean = predict(var0, mean0, varMove, meanMove)

In [49]:
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, mean0, np.sqrt(var0)), label='Beginning Normal Distribution')
plt.plot(x, stats.norm.pdf(x, meanMove, np.sqrt(varMove)), label='Movement Normal Distribution')
plt.plot(x, stats.norm.pdf(x, new_mean, np.sqrt(new_var)), label='Resulting Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.title('Normal Distributions of 1st Kalman Filter Prediction Step');
plt.savefig('Kalman-Filter-1D-Step.png', dpi=150)


What you see: the resulting distribution is flatter, i.e. more uncertain.

The more often you run the predict step, the flatter the distribution gets.

# First Sensor Measurement (Position) is coming in...

### Sensor Defaults for Position Measurements

(Estimated or determined with static measurements)

In [50]:
meanSensor = 25.0
varSensor  = 12.0

In [51]:
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, meanSensor, np.sqrt(varSensor)))
plt.ylim(0, 0.1);


## Now both Distributions have to be merged together

$\sigma^2_\text{new}=\cfrac{1}{\cfrac{1}{\sigma^2_\text{old}}+\cfrac{1}{\sigma^2_\text{Sensor}}}$ is the new variance and the new mean value is $\mu_\text{new}=\cfrac{\sigma^2_\text{Sensor} \cdot \mu_\text{old} + \sigma^2_\text{old} \cdot \mu_\text{Sensor}}{\sigma^2_\text{old}+\sigma^2_\text{Sensor}}$

In [52]:
def correct(var, mean, varSensor, meanSensor):
    new_mean = (varSensor*mean + var*meanSensor) / (var + varSensor)
    new_var = 1/(1/var + 1/varSensor)
    return new_var, new_mean

In [53]:
var, mean = correct(new_var, new_mean, varSensor, meanSensor)

In [54]:
plt.figure(figsize=(fw,5))
plt.plot(x, stats.norm.pdf(x, new_mean, np.sqrt(new_var)), label='Beginning (after Predict)')
plt.plot(x, stats.norm.pdf(x, meanSensor, np.sqrt(varSensor)), label='Position Sensor Normal Distribution')
plt.plot(x, stats.norm.pdf(x, mean, np.sqrt(var)), label='New Position Normal Distribution')
plt.ylim(0, 0.1);
plt.legend(loc='best');
plt.title('Normal Distributions of 1st Kalman Filter Update Step');


You see: Sensor readings increase the certainty!
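As a quick sanity check (a sketch, not one of the notebook's cells): because precisions (inverse variances) add in the correction step, the fused variance is always smaller than either input variance.

```python
# Sketch: the fused variance 1/(1/var + 1/varSensor) is smaller than both inputs.
def fused_var(var, var_sensor):
    return 1.0 / (1.0 / var + 1.0 / var_sensor)

v = fused_var(8.0, 12.0)
print(v)  # 4.8
assert v < 8.0 and v < 12.0
```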

## Let's put everything together: The 1D Kalman Filter

"Kalman-Filter: Predicting the Future since 1960"

Let's say we have some measurements for position and for the distance traveled. Both have to be fused with the 1D Kalman filter.

In [55]:
positions = (10, 20, 30, 40, 50)+np.random.randn(5)
distances = (10, 10, 10, 10, 10)+np.random.randn(5)

In [56]:
positions

Out[56]:
array([ 10.05476138,  20.28458674,  29.49295574,  38.62625017,  49.53275068])
In [57]:
distances

Out[57]:
array([ 10.98636654,  10.00344322,  10.93369829,   9.85055967,  10.33031138])
In [60]:
mean = mean0
var = var0

plt.figure(figsize=(fw,5))
for m in range(len(positions)):

    # Predict
    var, mean = predict(var, mean, varMove, distances[m])
    #print('mean: %.2f\tvar:%.2f' % (mean, var))
    plt.plot(x, stats.norm.pdf(x, mean, np.sqrt(var)), label='%i. step (Prediction)' % (m+1))

    # Correct
    var, mean = correct(var, mean, varSensor, positions[m])
    print('After correction:  mean= %.2f\tvar= %.2f' % (mean, var))
    plt.plot(x, stats.norm.pdf(x, mean, np.sqrt(var)), label='%i. step (Correction)' % (m+1))

plt.ylim(0, 0.1);
plt.xlim(-20, 120)
plt.legend();

After correction:  mean= 10.32	var= 8.57
After correction:  mean= 20.30	var= 7.29
After correction:  mean= 30.21	var= 7.08
After correction:  mean= 39.22	var= 7.05
After correction:  mean= 49.54	var= 7.04
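Notice that the printed variance converges to about 7.04 regardless of the measurements: it is the fixed point of applying one predict step followed by one correct step. A small sketch to verify this (the iteration count of 50 is an arbitrary choice):

```python
import numpy as np

# Sketch: the corrected variance converges to a steady state that does not
# depend on the measurements -- it is the fixed point of
#   v = 1 / (1/(v + varMove) + 1/varSensor)
varMove, varSensor = 10.0, 12.0

v = 20.0  # start from var0
for _ in range(50):
    v = 1.0 / (1.0 / (v + varMove) + 1.0 / varSensor)

# Closed form: the positive root of v**2 + varMove*v - varMove*varSensor = 0
v_closed = (-varMove + np.sqrt(varMove**2 + 4.0 * varMove * varSensor)) / 2.0

print(round(v, 2), round(v_closed, 2))  # 7.04 7.04
```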


# Conclusion

The sensors are represented as normal distributions with their parameters ($\mu$ and $\sigma^2$). The prediction step combines them by adding means and variances (a convolution of the densities), while the correction step multiplies the densities. The prediction decreases the certainty about the state, the correction increases it.

• Prediction: Certainty $\downarrow$
• Correction: Certainty $\uparrow$

If you have more than one state (here: position), then you have to use the multidimensional Kalman filter, which works in essentially the same way, but with matrices:

In order to use the Kalman filter to estimate the internal state $x$ of a process given only a sequence of noisy observations $z$, one must model the process in accordance with the framework of the Kalman filter. This means specifying the following matrices:

• $A$, the state-transition model, which is applied to the previous state $x$
• $H$, the observation model, which maps the true state space into the observed space
• $Q$, the covariance of the process noise
• $R$, the covariance of the observation noise
• $P$, the error covariance matrix (a measure of the estimated accuracy of the state estimate)
• $K$, the Kalman-Gain, which weights between the model and the observation
• and sometimes $B$, the control-input model, which is applied to the control vector $u$

for each time-step, $k$.

Source: Wikipedia
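A minimal sketch of one predict/correct cycle with these matrices, using an assumed constant-velocity model with states position and velocity (all numeric values here are illustrative, not from the text above):

```python
import numpy as np

# Sketch of one multidimensional Kalman step (constant-velocity model;
# all matrix values are illustrative assumptions).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])              # observation model: we measure position only
Q = np.diag([0.1, 0.1])                 # process noise covariance
R = np.array([[12.0]])                  # observation noise covariance

x = np.array([[0.0], [1.0]])            # state estimate
P = np.diag([20.0, 20.0])               # error covariance

def kalman_step(x, P, z):
    # Predict
    x = A @ x
    P = A @ P @ A.T + Q
    # Correct
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kalman_step(x, P, np.array([[0.5]]))
```

As in the 1D case, the correction shrinks the position uncertainty `P[0,0]` below its predicted value.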

You can also run the prediction at a high update rate in open-loop mode (i.e. without correction) and apply a correction only when a measurement is available.

Typical example: you have a high update rate acceleration sensor (typically 100 Hz) and a low update rate GPS sensor (typically 1 Hz). Then you run the prediction at 100 Hz (i.e. 100 times per second) and only correct every 100th filter step.
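This scheme can be sketched with the 1D functions from above (the noise values and the noise-free "GPS" readings here are assumptions for illustration):

```python
# Sketch: open-loop prediction at 100 Hz with a correction only at 1 Hz.
# varMove (per 10 ms step) and varSensor are assumed values.

def predict(var, mean, varMove, meanMove):
    return var + varMove, mean + meanMove

def correct(var, mean, varSensor, meanSensor):
    new_mean = (varSensor * mean + var * meanSensor) / (var + varSensor)
    new_var = 1.0 / (1.0 / var + 1.0 / varSensor)
    return new_var, new_mean

mean, var = 0.0, 20.0
varMove, varSensor = 0.01, 12.0        # small process noise per step at 100 Hz
for step in range(1, 301):             # 3 seconds at 100 Hz
    var, mean = predict(var, mean, varMove, 0.1)       # 100 Hz, open loop
    if step % 100 == 0:                                # 1 Hz "GPS" correction
        var, mean = correct(var, mean, varSensor, 0.1 * step)
```

The uncertainty grows steadily during each open-loop second and drops at every correction.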