Rachel Sowa rsowa@umail.ucsb.edu
The goal of this project is to apply 2D correlation to images with repeating patterns and then use the correlation data to try to reconstruct the original image via 2D convolution. The same method is then applied to a photograph that is not composed of a repeating pattern.
%pylab inline
rcParams['figure.figsize'] = (10, 4) #wide graphs by default
from __future__ import print_function
from __future__ import division
pattern1 = imread('pattern1.jpg') #importing our first image
imshow(pattern1)
pattern1 = sum(pattern1[:,:,:3], axis=2)/3.0 #converting the image from RGB to grayscale by averaging all three color channels
imshow(pattern1, cmap=cm.gray)
colorbar()
Let's extract one single instance of the repeated part of the pattern.
shape1 = pattern1[21:43, 16:38]
imshow(shape1, cmap=cm.gray)
Now we can perform the 2D correlation function on the original pattern and the single instance.
from scipy.signal import correlate2d
cc1 = correlate2d(pattern1, shape1, mode='same')
imshow(cc1)
colorbar()
gcf().set_figheight(8)
Above we can see the result of the 2D correlation. Notice how for every black diamond in the original image, there is a corresponding dark red dot in the correlation output. Each dark red dot represents a place in the original image where the black diamond matches up well with that part of the pattern.
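For anyone who wants to pull those peak locations out programmatically, here is a minimal sketch on a toy pattern (the toy data and the within-99%-of-maximum cutoff are my own choices, not part of the original analysis):

```python
import numpy as np
from scipy.signal import correlate2d

# Toy pattern: a 3x3 bright square stamped at two known locations
pattern = np.zeros((20, 20))
kernel = np.ones((3, 3))
pattern[2:5, 2:5] = 1.0
pattern[12:15, 12:15] = 1.0

cc = correlate2d(pattern, kernel, mode='same')

# Peak locations: pixels within 99% of the global maximum
peaks = np.argwhere(cc >= 0.99 * cc.max())
print(peaks)  # the centers of the two squares: (3, 3) and (13, 13)
```

Each row of `peaks` is the center of one match, which is exactly what the dark red dots in the plot above mark.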
Let's apply a threshold to our correlation output to obtain a more discrete representation of the results.
cc_threshold1 = where(cc1 > 7.5e6, 1, 0)
imshow(cc_threshold1, cmap=cm.gray, interpolation='nearest')
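The cutoff of 7.5e6 was presumably tuned by eye for this particular image. A more portable habit (my suggestion, not from the notebook) is to threshold at a fixed fraction of the correlation maximum, which is robust to changes in image brightness or kernel size:

```python
import numpy as np

# Stand-in for a correlation surface; in the notebook this would be cc1
cc = np.array([[1.0, 4.0],
               [9.0, 10.0]])

# Keep every pixel within 90% of the strongest response
mask = np.where(cc >= 0.9 * cc.max(), 1, 0)
print(mask)  # only the 9.0 and 10.0 entries survive
```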
Now we can perform convolution on this thresholded result using the black diamond as a kernel.
from scipy.signal import convolve2d
c1 = convolve2d(cc_threshold1, shape1)
imshow(c1, cmap=cm.gray, interpolation='nearest')
colorbar()
What we get back definitely resembles the original image pattern, but it now looks blurred and has a translucent, ghostly effect.
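The blur comes from the convolution step: every white pixel in the thresholded map stamps a full copy of the kernel into the output, and copies overlap wherever detections sit close together. The pipeline itself is sound, which we can check end to end on a synthetic pattern where the stamps are far enough apart that recovery should be exact (a sketch with made-up data, not the notebook's images):

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

# Build a pattern by stamping a tiny "diamond" kernel at known offsets
kernel = np.array([[0., 1., 0.],
                   [1., 1., 1.],
                   [0., 1., 0.]])
pattern = np.zeros((15, 15))
for r, c in [(2, 2), (2, 9), (9, 5)]:
    pattern[r:r+3, c:c+3] += kernel

# Correlate, threshold at the peak value, convolve back
cc = correlate2d(pattern, kernel, mode='same')
hits = np.where(cc >= cc.max(), 1, 0)
recon = convolve2d(hits, kernel, mode='same')

print(np.array_equal(recon, pattern))  # exact recovery: True
```

With three isolated single-pixel detections the reconstruction is perfect; the ghostly look above appears when thresholded detections are several pixels wide, so neighboring stamps add on top of each other.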
Let's try the same approach on a different, more complicated pattern.
pattern2 = imread('pattern2.gif')
imshow(pattern2)
pattern2 = sum(pattern2[:,:,:3], axis=2)/3.0 #converting the image to grayscale by averaging the three color channels
imshow(pattern2, cmap=cm.gray)
colorbar()
shape2 = pattern2[230:530,250:550] #extracting a smaller subset of the pattern
imshow(shape2, cmap=cm.gray)
cc2 = correlate2d(pattern2, shape2, mode='same') #performing the 2D correlation function on the pattern and its subset
imshow(cc2)
colorbar()
gcf().set_figheight(8)
Again, notice the dark red dots corresponding to the parts of the pattern where the smaller shape lines up. Since this shape is bigger and more complicated than in the previous case, the algorithm doesn't match it as well near the edges, where the pattern is cut off partway through.
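The weak response near the borders is partly a consequence of how `correlate2d` treats them: with `mode='same'` and the default `boundary='fill'`, everything outside the image counts as zero, so any alignment where the 300×300 subset hangs off the edge sums over fewer real pixels. A minimal demonstration on a flat image (toy sizes of my choosing):

```python
import numpy as np
from scipy.signal import correlate2d

img = np.ones((8, 8))
k = np.ones((5, 5))

cc = correlate2d(img, k, mode='same')  # default boundary='fill' zero-pads
print(cc[4, 4], cc[0, 0])  # 25.0 in the interior, only 9.0 in the corner
```

Even on a perfectly uniform image the corner response is a fraction of the interior one, which is why a fixed threshold tends to miss matches near the edges.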
Now let's apply a threshold to our correlation output and convolve it with the shape to try to recreate the original image.
cc_threshold2 = where(cc2 > 3.8e9, 1, 0) #applying threshold to correlation output
imshow(cc_threshold2, cmap=cm.gray, interpolation='nearest')
c2 = convolve2d(cc_threshold2, shape2) #convolving threshold output with the shape
imshow(c2, cmap=cm.gray, interpolation='nearest')
colorbar()
Again, we get a weird, ghostly version of the original image. Note that it is cut off around the edges compared to the original, since our correlation output wasn't able to pick up a strong correlation around the edges.
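One more detail worth flagging: `convolve2d` defaults to `mode='full'`, so these reconstructions are actually larger than the original image by the kernel size minus one in each direction; passing `mode='same'` would keep the original dimensions. A quick illustration with toy sizes of my choosing:

```python
import numpy as np
from scipy.signal import convolve2d

a = np.zeros((10, 10))
k = np.ones((3, 3))

print(convolve2d(a, k).shape)               # (12, 12): 'full' grows by kernel - 1
print(convolve2d(a, k, mode='same').shape)  # (10, 10): matches the input
```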
What happens if we compute the correlation between a photograph and one of our shapes and then convolve them?
wall = imread('wall.jpg')
wall = sum(wall[:,:,:3], axis=2)/3.0 #converting to grayscale by averaging the three color channels
imshow(wall, cmap=cm.gray)
colorbar()
Let's first try correlating the wall photograph with the black diamond from the first example.
cc3 = correlate2d(wall, shape1, mode='same')
imshow(cc3)
colorbar()
gcf().set_figheight(8)
Next we apply a threshold to prepare it for convolution.
cc_threshold3 = where(cc3 > 5.8e6, 1, 0)
imshow(cc_threshold3, cmap=cm.gray, interpolation='nearest')
And finally let's convolve this thresholded output using the black diamond as a kernel.
c3 = convolve2d(cc_threshold3, shape1)
imshow(c3, cmap=cm.gray, interpolation='nearest')
colorbar()
Since the black diamond is a rather simple shape and small in terms of number of pixels, the final result still somewhat resembles the original photograph.
But what if we perform correlation and convolution with the more complicated shape from the second example?
cc4 = correlate2d(wall, shape2, mode='same')
imshow(cc4)
colorbar()
gcf().set_figheight(8)
The result of the correlation is much less defined than in the previous example.
Because of this loss of definition, thresholding becomes a much harder balancing act: set the threshold too high and only a few scattered dots survive; set it too low and the output loses all detail and turns into a white blob.
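One way to make that balancing act concrete (my own sketch, with random data standing in for the flat correlation surface) is to sweep the cutoff and watch the fraction of surviving white pixels:

```python
import numpy as np

rng = np.random.default_rng(0)
cc = rng.random((100, 100))  # stand-in for a poorly-defined correlation surface

for frac in (0.5, 0.9, 0.99):
    mask = cc >= frac * cc.max()
    print(frac, mask.mean())  # fraction of white pixels at each cutoff
```

On a surface with no sharp peaks the white-pixel fraction slides smoothly from "blob" toward "a few dots" as the cutoff rises, with no obvious sweet spot in between.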
cc_threshold4 = where(cc4 > 1.74e9, 1, 0)
imshow(cc_threshold4, cmap=cm.gray, interpolation='nearest')
Similarly, when we perform convolution on the thresholded output using the more complicated shape, the result lacks definition and doesn't really resemble the original photograph at all.
c4 = convolve2d(cc_threshold4, shape2)
imshow(c4, cmap=cm.gray, interpolation='nearest')
colorbar()