Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. The dataset serves as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
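For reference, the 10 class labels correspond to the following article categories (per the Fashion-MNIST documentation); I list them here since the rest of this post refers to classes by index:
# Fashion-MNIST label map, indices 0-9
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']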
In this work, I train a convolutional neural network classifier with three convolution layers using the Keras deep learning library. The model is compiled with the categorical_crossentropy loss function and the Adam optimizer, and first trained for 10 epochs with a batch size of 256. Then I add data augmentation, which generates new training samples by rotating, shifting and zooming the existing ones, and train for another 50 epochs.
I first split the original training data (60,000 images) into 80% training (48,000 images) and 20% validation (12,000 images) to optimize the classifier, while holding out the test data (10,000 images) to finally evaluate the model's accuracy on data it has never seen. This lets me see whether I'm over-fitting on the training data: if validation accuracy stays above training accuracy, I can lower the learning rate and train for more epochs; if training accuracy drifts above validation accuracy, it's time to stop over-training.
import numpy as np
import pandas as pd
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
# Load training and test data into dataframes
data_train = pd.read_csv('data/fashion-mnist_train.csv')
data_test = pd.read_csv('data/fashion-mnist_test.csv')
# X forms the training images, and y forms the training labels
X = np.array(data_train.iloc[:, 1:])
y = to_categorical(np.array(data_train.iloc[:, 0]))
# Here I split original training data to sub-training (80%) and validation data (20%)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=13)
# X_test forms the test images, and y_test forms the test labels
X_test = np.array(data_test.iloc[:, 1:])
y_test = to_categorical(np.array(data_test.iloc[:, 0]))
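As an aside, the monitoring strategy described above can be automated with Keras callbacks. A minimal sketch (these are standard Keras callbacks, but they are not used in the runs below, and the patience values are arbitrary choices of mine):
from keras.callbacks import EarlyStopping, ReduceLROnPlateau
# Illustrative only: stop training once validation loss stops improving,
# and halve the learning rate when it plateaus. Not used in this notebook.
callbacks = [EarlyStopping(monitor='val_loss', patience=5),
             ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2)]
# These would be passed to the fit call, e.g. model.fit(..., callbacks=callbacks)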
After loading and splitting the data, I preprocess them by reshaping them into the shape the network expects and scaling them so that all values are in the [0, 1] interval. Previously, each image was stored as a flat row of 784 integer pixel values in the [0, 255] interval. I transform the data into float32 arrays of shape (n_images, 28, 28, 1) with values between 0 and 1.
# Each image's dimension is 28 x 28
img_rows, img_cols = 28, 28
input_shape = (img_rows, img_cols, 1)
# Prepare the training images
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_train /= 255
# Prepare the test images
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
X_test = X_test.astype('float32')
X_test /= 255
# Prepare the validation images
X_val = X_val.reshape(X_val.shape[0], img_rows, img_cols, 1)
X_val = X_val.astype('float32')
X_val /= 255
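A quick sanity check on the resulting shapes and value ranges (my own addition, not part of the original run):
print(X_train.shape, X_val.shape, X_test.shape)  # expect (48000, 28, 28, 1) (12000, 28, 28, 1) (10000, 28, 28, 1)
print(X_train.min(), X_train.max())              # expect 0.0 1.0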
This CNN takes as input tensors of shape (image_height, image_width, image_channels). In this case, I configure the CNN to process inputs of size (28, 28, 1), which is the format of the Fashion-MNIST images. I do this by passing the argument input_shape=(28, 28, 1) to the first layer.
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
cnn3 = Sequential()
cnn3.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
cnn3.add(MaxPooling2D((2, 2)))
cnn3.add(Dropout(0.25))
cnn3.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
cnn3.add(MaxPooling2D(pool_size=(2, 2)))
cnn3.add(Dropout(0.25))
cnn3.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
cnn3.add(Dropout(0.4))
cnn3.add(Flatten())
cnn3.add(Dense(128, activation='relu'))
cnn3.add(Dropout(0.3))
cnn3.add(Dense(10, activation='softmax'))
When compiling the model, I choose categorical_crossentropy as the loss function (which is appropriate for a multiclass, single-label classification problem) and the Adam optimizer.
cnn3.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
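As a side note, the labels could instead be kept as integers and the model compiled with the sparse variant of the loss, which skips the to_categorical step entirely. A sketch of the alternative (not the configuration used here):
# Alternative (not used in this notebook): with integer labels of shape (n,),
# the sparse loss computes the same cross-entropy without one-hot encoding.
# cnn3.compile(loss='sparse_categorical_crossentropy',
#              optimizer=keras.optimizers.Adam(),
#              metrics=['accuracy'])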
Let’s look at how the dimensions of the feature maps change with every successive layer:
cnn3.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32)        0
_________________________________________________________________
dropout_1 (Dropout)          (None, 13, 13, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 11, 11, 64)        18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64)          0
_________________________________________________________________
dropout_2 (Dropout)          (None, 5, 5, 64)          0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 3, 3, 128)         73856
_________________________________________________________________
dropout_3 (Dropout)          (None, 3, 3, 128)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 1152)              0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               147584
_________________________________________________________________
dropout_4 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290
=================================================================
Total params: 241,546
Trainable params: 241,546
Non-trainable params: 0
_________________________________________________________________
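As a quick check on the parameter counts: a Conv2D layer has (kernel_height × kernel_width × input_channels) × filters weights plus one bias per filter. So conv2d_1 has (3 × 3 × 1) × 32 + 32 = 320 parameters and conv2d_2 has (3 × 3 × 32) × 64 + 64 = 18,496, while dense_1 has 1152 × 128 + 128 = 147,584.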
As previously mentioned, I train the model with a batch size of 256 for 10 epochs, tracking loss and accuracy on both the training and validation data.
history3 = cnn3.fit(X_train, y_train,
batch_size=256,
epochs=10,
verbose=1,
validation_data=(X_val, y_val))
Train on 48000 samples, validate on 12000 samples
Epoch 1/10 - 65s 1ms/step - loss: 0.8479 - acc: 0.6865 - val_loss: 0.5098 - val_acc: 0.8076
Epoch 2/10 - 69s 1ms/step - loss: 0.5232 - acc: 0.8047 - val_loss: 0.4146 - val_acc: 0.8526
Epoch 3/10 - 79s 2ms/step - loss: 0.4510 - acc: 0.8366 - val_loss: 0.3688 - val_acc: 0.8669
Epoch 4/10 - 65s 1ms/step - loss: 0.4039 - acc: 0.8529 - val_loss: 0.3481 - val_acc: 0.8741
Epoch 5/10 - 66s 1ms/step - loss: 0.3762 - acc: 0.8612 - val_loss: 0.3221 - val_acc: 0.8810
Epoch 6/10 - 68s 1ms/step - loss: 0.3594 - acc: 0.8696 - val_loss: 0.3105 - val_acc: 0.8869
Epoch 7/10 - 78s 2ms/step - loss: 0.3397 - acc: 0.8778 - val_loss: 0.2960 - val_acc: 0.8923
Epoch 8/10 - 63s 1ms/step - loss: 0.3266 - acc: 0.8810 - val_loss: 0.2847 - val_acc: 0.8977
Epoch 9/10 - 75s 2ms/step - loss: 0.3162 - acc: 0.8836 - val_loss: 0.2884 - val_acc: 0.8947
Epoch 10/10 - 61s 1ms/step - loss: 0.3074 - acc: 0.8878 - val_loss: 0.2700 - val_acc: 0.9028
score3 = cnn3.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score3[0])
print('Test accuracy:', score3[1])
Test loss: 0.24964626643657684
Test accuracy: 0.9079
My test accuracy is 90.79%, a pretty strong result for a first pass!
Overfitting can be caused by having too few samples to learn from, making it impossible to train a model that generalizes to new data. Given infinite data, my model would be exposed to every possible aspect of the data distribution at hand and would never overfit.
Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples via a number of random transformations that yield believable-looking images. The goal is that at training time, my model will never see the exact same picture twice. This helps expose the model to more aspects of the data and generalize better.
In Keras, this can be done by configuring an ImageDataGenerator instance with a number of random transformations to be performed on the images it reads.
from keras.preprocessing.image import ImageDataGenerator
gen = ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
                         height_shift_range=0.08, zoom_range=0.08)
batches = gen.flow(X_train, y_train, batch_size=256)
# Note: this augments the validation images as well; validating on
# un-augmented data is the more common practice.
val_batches = gen.flow(X_val, y_val, batch_size=256)
Let's train the network using data augmentation.
history3 = cnn3.fit_generator(batches, steps_per_epoch=48000//256, epochs=50,
validation_data=val_batches, validation_steps=12000//256, use_multiprocessing=True)
Epoch 1/50 - 66s 355ms/step - loss: 0.4831 - acc: 0.8195 - val_loss: 0.4110 - val_acc: 0.8404
Epoch 2/50 - 71s 378ms/step - loss: 0.4413 - acc: 0.8350 - val_loss: 0.3684 - val_acc: 0.8633
Epoch 3/50 - 78s 416ms/step - loss: 0.4205 - acc: 0.8437 - val_loss: 0.3511 - val_acc: 0.8684
Epoch 4/50 - 69s 370ms/step - loss: 0.4098 - acc: 0.8478 - val_loss: 0.3550 - val_acc: 0.8614
Epoch 5/50 - 65s 348ms/step - loss: 0.3997 - acc: 0.8510 - val_loss: 0.3362 - val_acc: 0.8744
Epoch 6/50 - 67s 361ms/step - loss: 0.3943 - acc: 0.8524 - val_loss: 0.3537 - val_acc: 0.8675
Epoch 7/50 - 71s 377ms/step - loss: 0.3892 - acc: 0.8560 - val_loss: 0.3249 - val_acc: 0.8750
Epoch 8/50 - 72s 384ms/step - loss: 0.3793 - acc: 0.8593 - val_loss: 0.3259 - val_acc: 0.8770
Epoch 9/50 - 71s 382ms/step - loss: 0.3739 - acc: 0.8601 - val_loss: 0.3197 - val_acc: 0.8802
Epoch 10/50 - 75s 402ms/step - loss: 0.3700 - acc: 0.8618 - val_loss: 0.3248 - val_acc: 0.8796
Epoch 11/50 - 73s 390ms/step - loss: 0.3657 - acc: 0.8648 - val_loss: 0.3177 - val_acc: 0.8790
Epoch 12/50 - 63s 337ms/step - loss: 0.3607 - acc: 0.8649 - val_loss: 0.3151 - val_acc: 0.8823
Epoch 13/50 - 64s 340ms/step - loss: 0.3581 - acc: 0.8665 - val_loss: 0.3046 - val_acc: 0.8869
Epoch 14/50 - 66s 352ms/step - loss: 0.3577 - acc: 0.8649 - val_loss: 0.2992 - val_acc: 0.8876
Epoch 15/50 - 64s 340ms/step - loss: 0.3500 - acc: 0.8686 - val_loss: 0.3014 - val_acc: 0.8867
Epoch 16/50 - 63s 337ms/step - loss: 0.3497 - acc: 0.8711 - val_loss: 0.3065 - val_acc: 0.8849
Epoch 17/50 - 66s 351ms/step - loss: 0.3547 - acc: 0.8696 - val_loss: 0.3068 - val_acc: 0.8861
Epoch 18/50 - 62s 333ms/step - loss: 0.3439 - acc: 0.8707 - val_loss: 0.2992 - val_acc: 0.8887
Epoch 19/50 - 65s 349ms/step - loss: 0.3445 - acc: 0.8727 - val_loss: 0.2916 - val_acc: 0.8931
Epoch 20/50 - 75s 402ms/step - loss: 0.3384 - acc: 0.8734 - val_loss: 0.3072 - val_acc: 0.8845
Epoch 21/50 - 63s 336ms/step - loss: 0.3402 - acc: 0.8731 - val_loss: 0.2955 - val_acc: 0.8875
Epoch 22/50 - 61s 324ms/step - loss: 0.3391 - acc: 0.8756 - val_loss: 0.2951 - val_acc: 0.8912
Epoch 23/50 - 64s 343ms/step - loss: 0.3352 - acc: 0.8755 - val_loss: 0.2813 - val_acc: 0.8937
Epoch 24/50 - 61s 327ms/step - loss: 0.3328 - acc: 0.8750 - val_loss: 0.2912 - val_acc: 0.8902
Epoch 25/50 - 64s 343ms/step - loss: 0.3273 - acc: 0.8774 - val_loss: 0.2873 - val_acc: 0.8952
Epoch 26/50 - 66s 353ms/step - loss: 0.3306 - acc: 0.8775 - val_loss: 0.2816 - val_acc: 0.8913
Epoch 27/50 - 64s 341ms/step - loss: 0.3221 - acc: 0.8790 - val_loss: 0.2978 - val_acc: 0.8876
Epoch 28/50 - 64s 343ms/step - loss: 0.3290 - acc: 0.8784 - val_loss: 0.2906 - val_acc: 0.8890
Epoch 29/50 - 77s 410ms/step - loss: 0.3232 - acc: 0.8812 - val_loss: 0.2892 - val_acc: 0.8894
Epoch 30/50 - 69s 371ms/step - loss: 0.3232 - acc: 0.8794 - val_loss: 0.2750 - val_acc: 0.8971
Epoch 31/50 - 64s 344ms/step - loss: 0.3213 - acc: 0.8821 - val_loss: 0.3017 - val_acc: 0.8852
Epoch 32/50 - 63s 338ms/step - loss: 0.3244 - acc: 0.8792 - val_loss: 0.2788 - val_acc: 0.8952
Epoch 33/50 - 67s 356ms/step - loss: 0.3173 - acc: 0.8820 - val_loss: 0.2817 - val_acc: 0.8931
Epoch 34/50 - 67s 357ms/step - loss: 0.3161 - acc: 0.8842 - val_loss: 0.2762 - val_acc: 0.8952
Epoch 35/50 - 68s 365ms/step - loss: 0.3169 - acc: 0.8837 - val_loss: 0.2794 - val_acc: 0.8969
Epoch 36/50 - 78s 415ms/step - loss: 0.3197 - acc: 0.8800 - val_loss: 0.2869 - val_acc: 0.8882
Epoch 37/50 - 76s 404ms/step - loss: 0.3168 - acc: 0.8822 - val_loss: 0.2767 - val_acc: 0.8928
Epoch 38/50 - 79s 423ms/step - loss: 0.3066 - acc: 0.8846 - val_loss: 0.2743 - val_acc: 0.8975
Epoch 39/50 - 66s 356ms/step - loss: 0.3132 - acc: 0.8825 - val_loss: 0.2677 - val_acc: 0.9027
Epoch 40/50 - 64s 340ms/step - loss: 0.3093 - acc: 0.8850 - val_loss: 0.2735 - val_acc: 0.8946
Epoch 41/50 - 66s 351ms/step - loss: 0.3074 - acc: 0.8862 - val_loss: 0.2695 - val_acc: 0.8980
Epoch 42/50 - 64s 340ms/step - loss: 0.3089 - acc: 0.8860 - val_loss: 0.2713 - val_acc: 0.8992
Epoch 43/50 - 68s 362ms/step - loss: 0.3082 - acc: 0.8840 - val_loss: 0.2751 - val_acc: 0.8970
Epoch 44/50 - 66s 354ms/step - loss: 0.3063 - acc: 0.8849 - val_loss: 0.2619 - val_acc: 0.9019
Epoch 45/50 - 64s 343ms/step - loss: 0.3051 - acc: 0.8873 - val_loss: 0.2639 - val_acc: 0.9031
Epoch 46/50 - 67s 358ms/step - loss: 0.3063 - acc: 0.8856 - val_loss: 0.2689 - val_acc: 0.8987
Epoch 47/50 - 79s 424ms/step - loss: 0.3087 - acc: 0.8849 - val_loss: 0.2760 - val_acc: 0.8954
Epoch 48/50 - 66s 356ms/step - loss: 0.3028 - acc: 0.8869 - val_loss: 0.2690 - val_acc: 0.8967
Epoch 49/50 - 65s 347ms/step - loss: 0.3028 - acc: 0.8858 - val_loss: 0.2712 - val_acc: 0.8959
Epoch 50/50 - 67s 358ms/step - loss: 0.3042 - acc: 0.8880 - val_loss: 0.2675 - val_acc: 0.8990
score3 = cnn3.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score3[0])
print('Test accuracy:', score3[1])
Test loss: 0.22910109297037123
Test accuracy: 0.9117
Okay, I improved the accuracy to 91.17%!
Let's plot training and validation accuracy as well as training and validation loss.
import matplotlib.pyplot as plt
%matplotlib inline
accuracy = history3.history['acc']
val_accuracy = history3.history['val_acc']
loss = history3.history['loss']
val_loss = history3.history['val_loss']
epochs = range(len(accuracy))
plt.plot(epochs, accuracy, 'bo', label='Training accuracy')
plt.plot(epochs, val_accuracy, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
These plots look decent: the training curves closely track the validation curves.
I can summarize the performance of my classifier as follows:
# get the predictions for the test data
predicted_classes = cnn3.predict_classes(X_test)
# get the indices to be plotted
y_true = data_test.iloc[:, 0]
correct = np.nonzero(predicted_classes==y_true)[0]
incorrect = np.nonzero(predicted_classes!=y_true)[0]
from sklearn.metrics import classification_report
target_names = ["Class {}".format(i) for i in range(10)]
print(classification_report(y_true, predicted_classes, target_names=target_names))
             precision    recall  f1-score   support

    Class 0       0.85      0.86      0.86      1000
    Class 1       0.99      0.99      0.99      1000
    Class 2       0.92      0.83      0.87      1000
    Class 3       0.93      0.94      0.93      1000
    Class 4       0.88      0.83      0.85      1000
    Class 5       0.98      0.98      0.98      1000
    Class 6       0.68      0.78      0.73      1000
    Class 7       0.95      0.96      0.96      1000
    Class 8       0.99      0.99      0.99      1000
    Class 9       0.98      0.96      0.97      1000

avg / total       0.91      0.91      0.91     10000
It's apparent that the classifier is underperforming on class 6 (Shirt) in terms of both precision and recall. For class 0 (T-shirt/top), the classifier is slightly lacking in precision, whereas for classes 2 (Pullover) and 4 (Coat) it is slightly lacking in recall. This is intuitive: these are the most visually similar categories at 28x28 resolution.
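To pin down exactly which classes get mixed up, a confusion matrix helps. Here is a minimal sketch using scikit-learn (my own addition, reusing the y_true and predicted_classes arrays from above):
from sklearn.metrics import confusion_matrix
# Rows are true classes, columns are predicted classes; large off-diagonal
# entries reveal which class pairs the model confuses, e.g. how many true
# class-6 (Shirt) images are predicted as class 0, 2 or 4.
cm = confusion_matrix(y_true, predicted_classes)
print(cm)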
Perhaps I would gain more insight after visualizing the correct and incorrect predictions.
Here is a subset of correctly predicted classes.
# Plot the first nine correctly classified test images
# (use idx as the loop variable so the `correct` index array is not clobbered)
for i, idx in enumerate(correct[:9]):
    plt.subplot(3, 3, i + 1)
    plt.imshow(X_test[idx].reshape(28, 28), cmap='gray', interpolation='none')
    plt.title("Predicted {}, Class {}".format(predicted_classes[idx], y_true[idx]))
plt.tight_layout()
And here is a subset of incorrectly predicted classes:
# Plot the first nine misclassified test images
for i, idx in enumerate(incorrect[:9]):
    plt.subplot(3, 3, i + 1)
    plt.imshow(X_test[idx].reshape(28, 28), cmap='gray', interpolation='none')
    plt.title("Predicted {}, Class {}".format(predicted_classes[idx], y_true[idx]))
plt.tight_layout()
It’s often said that deep-learning models are “black boxes”, learning representations that are difficult to extract and present in a human-readable form. Although this is partially true for certain types of deep-learning models, it’s definitely not true for convnets. The representations learned by convnets are highly amenable to visualization, in large part because they’re representations of visual concepts.
Here I attempt to visualize the intermediate CNN outputs (intermediate activations). Visualizing intermediate activations consists of displaying the feature maps that are output by various convolution and pooling layers in a network, given a certain input (the output of a layer is often called its activation, the output of the activation function). This gives a view into how an input is decomposed into the different filters learned by the network.
I want to visualize feature maps with three dimensions: width, height, and depth (channels). Each channel encodes relatively independent features, so the proper way to visualize these feature maps is by independently plotting the contents of every channel as a 2D image.
I first pick an input image: sample #1994 from the training set (despite the variable name test_im, it is drawn from X_train).
test_im = X_train[1994]
plt.imshow(test_im.reshape(28,28), cmap='viridis', interpolation='none')
plt.show()
In order to extract the feature maps I want to look at, I create a Keras model that takes batches of images as input, and outputs the activations of all convolution and pooling layers. To do this, I use the Keras class Model. A model is instantiated using two arguments: an input tensor (or list of input tensors) and an output tensor (or list of output tensors). The resulting class is a Keras model, mapping the specified inputs to the specified outputs. When fed an image input, this model returns the values of the layer activations in the original model.
from keras import models
# extract the outputs of the first 8 layers
layer_outputs = [layer.output for layer in cnn3.layers[:8]]
# create a model that will return these outputs, given the model input
# (Keras 2 API: the arguments are `inputs` and `outputs`)
activation_model = models.Model(inputs=cnn3.input, outputs=layer_outputs)
# returns a list of Numpy arrays: one array per layer activation
activations = activation_model.predict(test_im.reshape(1, 28, 28, 1))
# activation of the 1st convolution layer
first_layer_activation = activations[0]
# display channel 3 of the activation of the 1st layer of the original model
plt.matshow(first_layer_activation[0, :, :, 3], cmap='viridis')
# display channel 6 of the activation of the 1st layer of the original model
plt.matshow(first_layer_activation[0, :, :, 6], cmap='viridis')
Let's plot a more complete visualization of the activations in the network. I take the eight extracted activations and, for each convolution layer, plot every channel side by side in one big image grid.
# Names of the first eight layers, matching the activations extracted above
layer_names = [layer.name for layer in cnn3.layers[:8]]
images_per_row = 16
for layer_name, layer_activation in zip(layer_names, activations):
    # Only the convolution layers' feature maps are plotted
    if layer_name.startswith('conv'):
        n_features = layer_activation.shape[-1]  # number of channels
        size = layer_activation.shape[1]         # each feature map is (size, size)
        n_cols = n_features // images_per_row
        display_grid = np.zeros((size * n_cols, images_per_row * size))
        for col in range(n_cols):
            for row in range(images_per_row):
                # copy() so the normalization below doesn't mutate `activations`
                channel_image = layer_activation[0, :, :, col * images_per_row + row].copy()
                # Normalize each channel into a displayable [0, 255] range;
                # the small epsilon avoids division by zero for all-zero channels
                channel_image -= channel_image.mean()
                channel_image /= (channel_image.std() + 1e-5)
                channel_image *= 64
                channel_image += 128
                channel_image = np.clip(channel_image, 0, 255).astype('uint8')
                display_grid[col * size : (col + 1) * size,
                             row * size : (row + 1) * size] = channel_image
        scale = 1. / size
        plt.figure(figsize=(scale * display_grid.shape[1],
                            scale * display_grid.shape[0]))
        plt.title(layer_name)
        plt.grid(False)
        plt.imshow(display_grid, aspect='auto', cmap='viridis')