Linear Image Denoising

This numerical tour introduces basic image denoising methods.

Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\si}{\sigma}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ 
$\newcommand{\Om}{\Omega}$

In [2]:
using PyPlot
using NtToolBox
using Distributions

Noisy Image Formation

In this numerical tour, we simulate a noisy acquisition by adding white noise: each pixel is corrupted by an independent Gaussian variable.

This is useful to test, in an oracle manner, the performance of our methods.

Size $N = n \times n$ of the image.

In [3]:
n = 256
N = n^2;

We load a clean image $x_0 \in \RR^N$.

In [5]:
name = "NtToolBox/src/data/flowers.png"
x0 = load_image(name, n);

Display the clean image.

In [7]:
imageplot(x0)

Standard deviation $\si$ of the noise (so that its variance is $\si^2$).

In [8]:
sigma = .08
Out[8]:
0.08

We add some noise to it to obtain the noisy signal $y = x_0 + w$. Here $w$ is a realization of a Gaussian white noise of variance $\si^2$.

In [9]:
y = x0 .+ sigma.*rand(Normal(), n, n);
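As a quick sanity check (a hypothetical snippet, not part of the original tour, using a random stand-in for the clean image), the empirical per-pixel RMS of the added noise should concentrate around $\si$:

```julia
# Hypothetical check: measure the per-pixel RMS of the noise realization.
# With N = n^2 pixels, the empirical RMS concentrates tightly around sigma.
n = 256
sigma = 0.08
x0 = rand(n, n)                  # stand-in for the clean image
y  = x0 .+ sigma .* randn(n, n)  # noisy observation y = x0 + w
rms = sqrt(sum((y .- x0).^2) / n^2)
# rms ≈ sigma, up to fluctuations of order sigma / n
```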

Display the noisy image.

In [10]:
imageplot(clamP(y))

Linear Image Denoising

We consider a denoising estimator $x \in \RR^N$ of $x_0$ that depends only on the observation $y$. Mathematically speaking, it is thus a random vector that depends on the noise $w$.

A translation-invariant linear denoiser is necessarily a convolution with a kernel $h$ $$ x = y \star h $$ where the periodic convolution between two 2-D arrays is defined as $$ (a \star b)_i = \sum_j a_j b_{i-j}. $$

It can be computed over the Fourier domain as $$ \forall \om, \quad \hat x(\om) = \hat y(\om) \hat h(\om). $$

In [11]:
cconv = (a, b) -> real(ifft(fft(a).*fft(b)))  # periodic convolution via the FFT
Out[11]:
(::#1) (generic function with 1 method)
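As a sanity check (a hypothetical snippet, not part of the original tour, using the plain `fft`/`ifft` interface), one can verify on a small random example that this FFT-based product matches the direct periodic-convolution sum:

```julia
using FFTW  # provides fft/ifft in recent Julia versions

# FFT-based periodic convolution, as defined above
cconv = (a, b) -> real(ifft(fft(a) .* fft(b)))

m = 8
a = rand(m, m)
b = rand(m, m)

# Direct evaluation of (a ⋆ b)_i = Σ_j a_j b_{(i-j) mod m}
c = zeros(m, m)
for i1 in 0:m-1, i2 in 0:m-1, j1 in 0:m-1, j2 in 0:m-1
    c[i1+1, i2+1] += a[j1+1, j2+1] * b[mod(i1 - j1, m) + 1, mod(i2 - j2, m) + 1]
end

maximum(abs.(c .- cconv(a, b)))  # tiny, at the level of floating-point error
```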

We use here a Gaussian filter $h$ parameterized by the bandwidth $\mu$.

In [12]:
normalize = h -> h/sum(h)

X = [0:n/2; -n/2:-2]'
Y = [0:n/2; -n/2:-2]

h = mu -> normalize(exp.(-(X.^2 .+ Y.^2)/(2*mu^2)))
Out[12]:
(::#5) (generic function with 1 method)

Display the filter $h$ and its Fourier transform.

In [13]:
mu = 10
subplot(1,2, 1)
imageplot(fftshift(h(mu)))
title("h")
subplot(1,2, 2)
imageplot(fftshift(real(fft(h(mu)))))
title(L"$\hat h$")
Out[13]:
PyObject <matplotlib.text.Text object at 0x334959f90>
In [14]:
imageplot(h(mu))

Shortcut for the convolution with $h$.

In [15]:
denoise = (x, mu) -> cconv(h(mu), x)
Out[15]:
(::#7) (generic function with 1 method)

Display a denoised signal.

In [16]:
imageplot(denoise(y, mu))

Exercise 1: Display a denoised signal for several values of $\mu$.

In [17]:
include("NtSolutions/denoisingsimp_2b_linear_image/exo1.jl")

Exercise 2: Display the evolution of the oracle denoising error $ \norm{x - x_0} $, where $x = \text{denoise}(y, \mu)$, as a function of $\mu$. Set $\mu$ to the value of the optimal parameter.

In [49]:
include("NtSolutions/denoisingsimp_2b_linear_image/exo2.jl")
# mulist = linspace(.1, 3.5, 31)
# err = zeros(length(mulist))
# for i in 1 : length(mulist)
# 	mu = mulist[i]
# 	err[i] = norm(x0 - denoise(y, mu))
# end

# clf
# h1, = plot(mulist,err); axis("tight");
# # retrieve the best denoising result
# i = mapslices(indmin, err, 1)[1]
# mu = mulist[i]
Out[49]:
0.5533333333333333

Display the results.

In [55]:
imageplot(denoise(y, mu))

Wiener Filtering

We suppose here that $x_0$ is a realization of a random vector $X_0$, whose distribution is Gaussian with a stationary covariance $c$, and we denote $P_{X_0}(\om) = \hat c(\om)$ the power spectrum of $X_0$.

Recall that $w$ is a realization of a random vector $W$ distributed according to $\Nn(0,\si^2 \text{Id})$.

The (oracle) optimal filter minimizes the risk $$ R(h) = \EE_{W,X_0}( \norm{ X_0 - h \star (X_0 + W) }^2 ). $$

One can show that the solution of this problem, the so-called Wiener filter, is defined as $$ \forall \om, \quad \hat h(\om) = \frac{ P_{X_0}(\om) }{ P_{X_0}(\om) + \si^2 }. $$
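To see where this formula comes from (a short sketch, not spelled out in the original tour): by Parseval's formula and the independence of $X_0$ and $W$, the risk decouples over frequencies, up to a constant factor, $$ R(h) \propto \sum_{\om} \abs{1-\hat h(\om)}^2 P_{X_0}(\om) + \abs{\hat h(\om)}^2 \si^2. $$ Minimizing each term separately over $\hat h(\om)$ (setting the derivative to zero) gives $ (P_{X_0}(\om)+\si^2) \, \hat h(\om) = P_{X_0}(\om) $, which is exactly the Wiener filter.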

We estimate $ P_{X_0} $ using the periodogram associated to the realization $x_0$, i.e. $$ P_{X_0} \approx \frac{1}{N}\abs{\hat x_0}^2. $$

In [121]:
P = 1/N .* ( abs(plan_fft(x0)*x0).^2 );

Compute the approximate Wiener filter.

In [81]:
h_w = real(ifft(P ./ (P .+ sigma^2)));

Note that this is a theoretical filter, because in practice one does not have access to $x_0$. Display it.

In [82]:
u = fftshift(h_w)
imageplot( u[Int(n/2 - 10) : Int(n/2 + 10), Int(n/2 - 10) : Int(n/2 + 10)] )

Display the denoising result.

In [122]:
imageplot( cconv(y, h_w) )

Note that this denoising is not very efficient, because the hypothesis of stationarity of $X_0$ is not realistic for such a piecewise-regular image.