Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\si}{\sigma}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ 
$\newcommand{\Om}{\Omega}$
This numerical tour introduces basic linear signal denoising methods.
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import nt_toolbox as nt
from nt_toolbox.general import *
from nt_toolbox.signal import *
from nt_solutions import denoisingsimp_2_linear as solutions
%matplotlib inline
%load_ext autoreload
%autoreload 2
In this numerical tour, we simulate a noisy acquisition by adding white noise: each sample is corrupted by an independent Gaussian variable.
This is useful to test, in an oracle manner, the performance of our methods.
Length $N$ of the signal.
N = 1024
We load a clean signal $x_0 \in \RR^N$.
name = 'piece-regular'
x0 = rescale(load_signal(name, N))
Standard deviation $\si$ of the noise.
sigma = .04
We add some noise to it to obtain the noisy signal $y = x_0 + w$. Here $w$ is a realization of a Gaussian white noise of variance $\si^2$.
y = x0 + sigma*np.random.randn(N)
Display the clean and the noisy signals.
plt.subplot(2, 1, 1)
plt.plot(x0); plt.axis([0, N, -.05, 1.05])
plt.subplot(2, 1, 2)
plt.plot(y); plt.axis([0, N, -.05, 1.05])
We consider a denoising estimator $x \in \RR^N$ of $x_0$ that depends only on the observation $y$. Mathematically speaking, it is thus a random vector that depends on the noise $w$.
A translation-invariant linear denoising is necessarily a convolution with a kernel $h$ $$ x = y \star h $$ where the periodic convolution between two vectors is defined as $$ (a \star b)_i = \sum_j a(j) b(i-j). $$
It can be computed over the Fourier domain as $$ \forall \om, \quad \hat x(\om) = \hat y(\om) \hat h(\om). $$
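As a quick standalone sanity check (plain NumPy, independent of the toolbox), the FFT-based product indeed matches the direct periodic sum definition:

```python
import numpy as np

# Direct periodic convolution: (a * b)_i = sum_j a_j b_{(i-j) mod N}
def cconv_direct(a, b):
    N = len(a)
    return np.array([sum(a[j] * b[(i - j) % N] for j in range(N))
                     for i in range(N)])

# FFT-based circular convolution, as used throughout this tour
def cconv_fft(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, 0.25, 0.0, 0.25])
print(np.allclose(cconv_direct(a, b), cconv_fft(a, b)))  # True
```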
cconv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a)*np.fft.fft(b)))
We use here a Gaussian filter $h$ parameterized by the bandwidth $\mu$.
normalize = lambda h: h/np.sum(h)
t = np.concatenate((np.arange(0, N//2), np.arange(-N//2, 0)))
h = lambda mu: normalize(np.exp(-t**2/(2*mu**2)))
Display the filter $h$ and its Fourier transform.
mu = 10
plt.subplot(2, 1, 1)
plt.plot(t, h(mu)); plt.axis('tight')
plt.title('h')
plt.subplot(2, 1, 2)
plt.plot(t, np.real(np.fft.fft(h(mu)))); plt.axis('tight')
plt.title('fft(h)')
Shortcut for the convolution with $h$.
denoise = lambda x, mu: cconv(h(mu), x)
Display a denoised signal.
plt.plot(denoise(y, mu))
plt.axis([0, N, -.05, 1.05])
Exercise 1
Display a denoised signal for several values of $\mu$.
solutions.exo1()
## Insert your code here.
Exercise 2
Display the evolution of the oracle denoising error $ \norm{\text{denoise}(y,\mu) - x_0} $ as a function of $\mu$. Set $\mu$ to the value of the optimal parameter and retrieve the best denoising result.
solutions.exo2()
## Insert your code here.
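A possible self-contained sketch of this oracle parameter search (it uses a hypothetical synthetic piecewise signal standing in for 'piece-regular', so it does not assume nt_toolbox is installed):

```python
import numpy as np

N = 1024
sigma = .04

# Hypothetical piecewise-regular test signal (a step plus a slow sine)
x0 = np.where(np.arange(N) < N // 2, 0.2, 0.8) \
     + 0.1 * np.sin(2 * np.pi * np.arange(N) / N)
np.random.seed(0)
y = x0 + sigma * np.random.randn(N)

# Gaussian filter and FFT-based denoiser, as in the tour
t = np.concatenate((np.arange(0, N // 2), np.arange(-N // 2, 0)))
def h(mu):
    g = np.exp(-t**2 / (2 * mu**2))
    return g / np.sum(g)
denoise = lambda x, mu: np.real(np.fft.ifft(np.fft.fft(h(mu)) * np.fft.fft(x)))

# Oracle sweep: pick the mu minimizing ||denoise(y, mu) - x0||
mu_list = np.arange(0.5, 10, 0.5)
errors = [np.linalg.norm(denoise(y, mu) - x0) for mu in mu_list]
mu_opt = mu_list[int(np.argmin(errors))]
print(mu_opt, min(errors) < np.linalg.norm(y - x0))
```

The best error is below the raw noise level $\norm{y - x_0}$, since a small amount of smoothing removes noise faster than it blurs the signal.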
Display the results.
plt.plot(denoise(y, mu))
plt.axis([0, N, -.05, 1.05])
We suppose here that $x_0$ is a realization of a random vector $X_0$, whose distribution is Gaussian with a stationary covariance $c$, and we denote by $P_{X_0}(\om) = \hat c(\om)$ the power spectrum of $X_0$.
Recall that $w$ is a realization of a random vector $W$ distributed according to $\Nn(0,\si^2 \text{Id})$.
The (oracle) optimal filter minimizes the risk $$ R(h) = \EE_{W,X_0}( \norm{ X_0 - h \star (X_0 + W) }^2 ). $$
One can show that the solution of this problem, the so-called Wiener filter, is defined as $$ \forall \om, \quad \hat h(\om) = \frac{ P_{X_0}(\om) }{ P_{X_0}(\om) + \si^2 }. $$
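To see why, note that in the Fourier domain the risk decouples into scalar shrinkage problems: estimating a coefficient of power $p$ from its noisy version with a gain $\la$ gives risk $(1-\la)^2 p + \la^2 \si^2$, which is minimized at $\la = p/(p+\si^2)$. A quick numerical check (standalone sketch, illustrative values):

```python
import numpy as np

p, sigma2 = 2.0, 0.5  # signal power and noise variance at one frequency
# Per-frequency risk of the shrinkage gain lam: bias term + noise term
risk = lambda lam: (1 - lam)**2 * p + lam**2 * sigma2

lams = np.linspace(0, 1, 1001)
lam_best = lams[np.argmin(risk(lams))]
print(lam_best, p / (p + sigma2))  # both equal 0.8
```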
We estimate $ P_{X_0} $ using the periodogram associated to the realization $x_0$, i.e. $$ P_{X_0} \approx \frac{1}{N} \abs{\hat x_0}^2. $$
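To see why the periodogram is a reasonable power-spectrum estimate, one can check (standalone sketch) that averaging it over many realizations of white noise recovers the flat spectrum $\si^2$:

```python
import numpy as np

N, sigma = 256, 0.5
np.random.seed(1)
# Average the periodogram |fft(w)|^2 / N over many white-noise realizations
P_avg = np.mean([np.abs(np.fft.fft(sigma * np.random.randn(N)))**2 / N
                 for _ in range(2000)], axis=0)
print(P_avg.mean())  # close to sigma**2 = 0.25
```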
P = 1/N * np.abs(np.fft.fft(x0))**2
Compute the approximate Wiener filter.
h_w = np.real(np.fft.ifft(P / (P + sigma**2)))
Note that this is a theoretical filter, because in practice one does not have access to $x_0$.
Display it.
plt.plot(np.fft.fftshift(h_w)); plt.axis('tight')
Display the denoising result.
plt.plot(cconv(y, h_w))
plt.axis([0, N, -.05, 1.05])
Note that this denoising is not very efficient, because the stationarity hypothesis on $X_0$ is not realistic for such a piecewise-regular signal.