# Pedestrian Classification¶

The goal of this notebook is to train a classifier that detects whether or not a given image shows a pedestrian.

In [ ]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

def wissrech2_imshow(im_vec, ax=None):
    im = np.reshape(im_vec, (25, 50))
    if ax is None:
        plt.imshow(im.T / np.max(im), cmap=plt.get_cmap('gray'))
        plt.axis('off')
    else:
        ax.imshow(im.T / np.max(im), cmap=plt.get_cmap('gray'))
        ax.axis('off')


## The image data¶

You can download the data from the lecture's homepage; it is contained in the file notebook11.zip. For training, 3000 samples (1500 pedestrian and 1500 non-pedestrian) are provided. For testing, 1000 samples (500 pedestrian and 500 non-pedestrian) are provided. Pedestrian and non-pedestrian samples are provided in terms of three feature sets:

• data_hog.mat: Histograms of Oriented Gradients (HOG) features
• data_lrf.mat: Local Receptive Field (LRF) features
• data_intensity_25x50.mat: gray-level pixel intensities

All data is provided in MATLAB .mat format, which can be loaded using scipy.io.loadmat. Every .mat file contains four datasets (train and test data for both pedestrian and non-pedestrian samples), each an n × (m+1) matrix, where n is the number of samples and m is the dimensionality of the corresponding feature set. The first column holds a label identifying the object class of the sample in that row: +1 = pedestrian, -1 = non-pedestrian

<label (-1 or +1)> <m-dimensional data vector, example 1>
<label (-1 or +1)> <m-dimensional data vector, example 2>
<label (-1 or +1)> <m-dimensional data vector, example 3>
...
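As a sketch, loading one feature set and splitting off the label column could look like this. The key name `'train_ped'` below is an assumption — inspect the dictionary returned by `loadmat` to find the real dataset names:

```python
import numpy as np
from scipy.io import loadmat  # loads MATLAB .mat files as a dict of arrays

def split_labels(data):
    """Split an (n, m+1) array into a label vector and an (n, m) feature matrix."""
    labels = data[:, 0]      # first column: +1 = pedestrian, -1 = non-pedestrian
    features = data[:, 1:]   # remaining m columns: the feature vector
    return labels, features

# Hypothetical usage -- 'train_ped' is an assumed key name:
# mat = loadmat('data_hog.mat')
# y_train_ped, X_train_ped = split_labels(mat['train_ped'])

# Demonstration on synthetic data with the documented layout:
demo = np.hstack([np.ones((5, 1)), np.zeros((5, 3))])
y, X = split_labels(demo)
```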

## Task 1: Principal component analysis¶

The first task has an exploratory character. Use sklearn.decomposition.PCA to compute the principal components and corresponding eigenvectors for the given training data. Note that sklearn.decomposition.PCA automatically centers the data and orders the eigenvalues in descending order, so the first eigenvalue is the largest and the first eigenvector corresponds to the largest eigenvalue.
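A minimal sketch of this step, using random stand-in data in place of the real intensity features (the shapes merely mirror the 25×50 images):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 1250))  # stand-in for the real 3000 x 1250 intensity data

pca = PCA(n_components=100)
pca.fit(X_train)  # centers the data internally

mean_image = pca.mean_                 # mean of the training images, shape (1250,)
eigenpedestrians = pca.components_     # rows sorted by descending eigenvalue
eigenvalues = pca.explained_variance_  # corresponding eigenvalues, largest first
```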

In the context of pedestrian detection, the eigenvectors of the PCA are called eigenpedestrians. Generate nice plots of the mean image of the given training images, the first 10 eigenpedestrians, the eigenpedestrians 11-20, 51-60, 101-110, 201-210, and 601-610. You can use the provided wissrech2_imshow to display the images and plt.subplots to arrange the images. Use these pictures to explain how the images in the training set are "generated" from the mean image and the eigenpedestrians.
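The grids of images could be arranged as sketched below, again on random stand-in data; with the real features, wissrech2_imshow from above can replace the inlined imshow call (the sketch normalizes by the absolute maximum since eigenvector entries can be negative):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 1250))  # stand-in for the real intensity data

pca = PCA(n_components=110).fit(X_train)

def show_row(vectors, title):
    """Display a row of 25x50 image vectors side by side."""
    fig, axes = plt.subplots(1, len(vectors), figsize=(12, 2), squeeze=False)
    for ax, v in zip(axes[0], vectors):
        im = np.reshape(v, (25, 50)).T
        ax.imshow(im / np.abs(im).max(), cmap='gray')
        ax.axis('off')
    fig.suptitle(title)

show_row([pca.mean_], 'mean image')
show_row(pca.components_[:10], 'eigenpedestrians 1-10')
show_row(pca.components_[100:110], 'eigenpedestrians 101-110')
```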

To complete your explanation, take the top-$k$ principal components ($k=20$ and $k = 100$) and project some (3-4) images onto the corresponding linear subspace. Use wissrech2_imshow to display the resulting image. Compare it with the original image.
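The projection onto the top-$k$ subspace can be sketched with transform/inverse_transform (again on random stand-in data; a larger $k$ yields a smaller reconstruction error):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 1250))  # stand-in for the real training images

errors = {}
for k in (20, 100):
    pca = PCA(n_components=k).fit(X)
    Z = pca.transform(X[:4])           # coordinates of 4 images in the top-k subspace
    X_rec = pca.inverse_transform(Z)   # back to pixel space: mean + Z @ components
    # each row of X_rec could now be displayed with wissrech2_imshow(X_rec[i])
    errors[k] = np.linalg.norm(X[:4] - X_rec)
```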

In [ ]:
# your code goes here

In [ ]:
# your code goes here

In [ ]:
# your code goes here


## Task 2: Explanatory power of top-k principal components¶

To complete the exploratory part, make plots of the percentage of explained variance (y-axis) vs. the number of principal components (x-axis) for all provided feature sets (gray-level intensities, LRF, HOG). Discuss the three curves with respect to the explanatory power of the principal components.
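A sketch of the explained-variance plot, with three random stand-in matrices in place of the real feature sets (the dimensionalities below are placeholders, not the real feature dimensions):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# placeholder dimensionalities -- substitute the real loaded feature matrices
feature_sets = {
    'intensity': rng.normal(size=(300, 200)),
    'lrf': rng.normal(size=(300, 150)),
    'hog': rng.normal(size=(300, 100)),
}

curves = {}
for name, X in feature_sets.items():
    pca = PCA().fit(X)  # keep all components
    curves[name] = np.cumsum(pca.explained_variance_ratio_) * 100
    plt.plot(np.arange(1, len(curves[name]) + 1), curves[name], label=name)
plt.xlabel('number of principal components')
plt.ylabel('explained variance [%]')
plt.legend()
```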

In [ ]:
# your code goes here


## Task 3: Linear SVM classification¶

Now you train linear support vector machines for all three feature sets. You can use sklearn.svm.LinearSVC for this. Do this both with and without PCA as a preprocessing step for dimensionality reduction. In the cases where you use PCA, test the following embedding dimensions (number of top-$k$ principal components used): $$k = 10, 20, \dots, 200.$$ For each tested combination of feature set and $k$, record the fraction of correctly classified test data points. You can use LinearSVC.score to easily obtain these numbers. For later comparison, set $k = \text{original number of features}$ in the cases where you do not use PCA.
Plot the recorded scores for all three feature sets in one diagram, with $k$ on the $x$-axis and the scores on the $y$-axis. Explain what you observe in the diagram. Further, explain why your findings here do not contradict what you have observed in Task 2.
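The training loop can be sketched as follows, with a synthetic binary classification problem standing in for the real feature sets (and fewer $k$ values than the task asks for, to keep the sketch short):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
# synthetic stand-in: 200-dimensional features, label depends on one direction
X_train = rng.normal(size=(400, 200))
y_train = np.where(X_train[:, 0] > 0, 1, -1)
X_test = rng.normal(size=(100, 200))
y_test = np.where(X_test[:, 0] > 0, 1, -1)

scores = {}
for k in (10, 50, 100):  # the task uses k = 10, 20, ..., 200
    pca = PCA(n_components=k).fit(X_train)
    clf = LinearSVC(max_iter=10000).fit(pca.transform(X_train), y_train)
    scores[k] = clf.score(pca.transform(X_test), y_test)

# without PCA: record the score under k = original number of features
clf = LinearSVC(max_iter=10000).fit(X_train, y_train)
scores[X_train.shape[1]] = clf.score(X_test, y_test)
```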
In [ ]:
# your code goes here