This notebook shows how to define and train a simple neural network with PyTorch and how to use it via skorch with scikit-learn.
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split
import numpy as np
Using scikit-learn's fetch_mldata to load the MNIST data.
mnist = fetch_mldata('MNIST original')
mnist
{'COL_NAMES': ['label', 'data'],
 'DESCR': 'mldata.org dataset: mnist-original',
 'data': array([[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
 'target': array([ 0.,  0.,  0., ...,  9.,  9.,  9.])}
mnist.data.shape
(70000, 784)
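Note: mldata.org has since been shut down and fetch_mldata was removed from later scikit-learn releases. A minimal sketch of the equivalent load via fetch_openml (the as_frame flag assumes scikit-learn >= 0.23):

from sklearn.datasets import fetch_openml

# fetch_openml serves the same 70000 x 784 data from openml.org;
# note that it returns the labels as strings rather than floats
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X = mnist.data.astype('float32')
y = mnist.target.astype('int64')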
Each image of the MNIST dataset is encoded in a 784 dimensional vector, representing a 28 x 28 pixel image. Each pixel has a value between 0 and 255, corresponding to the grey-value of a pixel.
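To make the encoding concrete, a quick sketch (assuming matplotlib is installed) that reshapes one row back into its 28 x 28 image:

import matplotlib.pyplot as plt

# reshape the first example from a flat 784-vector to a 28 x 28 image
img = mnist.data[0].reshape(28, 28)
plt.imshow(img, cmap='gray')
plt.title('label: {}'.format(mnist.target[0]))
plt.show()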
The fetch_mldata call above returns data and target as uint8, which we convert to float32 and int64 respectively.
X = mnist.data.astype('float32')
y = mnist.target.astype('int64')
Since we will use ReLU as the activation in combination with softmax over the output layer, we need to scale X down. An often used range is [0, 1].
X /= 255.0
X.min(), X.max()
(0.0, 1.0)
Note: the data is only scaled to [0, 1], not normalized (zero mean, unit variance).
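If normalization were desired, a sketch with scikit-learn's StandardScaler (illustrative only; this step is not applied in this notebook):

from sklearn.preprocessing import StandardScaler

# zero mean / unit variance per pixel; cast back to float32 for PyTorch
X_normalized = StandardScaler().fit_transform(X).astype('float32')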
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
assert X_train.shape[0] + X_test.shape[0] == mnist.data.shape[0]
X_train.shape, y_train.shape
((52500, 784), (52500,))
A simple, fully connected neural network with one hidden layer. The input layer has 784 dimensions (28x28), the hidden layer 98 (= 784 / 8) and the output layer 10 neurons, representing the digits 0-9.
import torch
from torch import nn
import torch.nn.functional as F
torch.manual_seed(0);
mnist_dim = X.shape[1]
hidden_dim = int(mnist_dim/8)
output_dim = len(np.unique(mnist.target))
mnist_dim, hidden_dim, output_dim
(784, 98, 10)
The neural network defined as a PyTorch module.
class ClassifierModule(nn.Module):
    def __init__(
            self,
            input_dim=mnist_dim,
            hidden_dim=hidden_dim,
            output_dim=output_dim,
            dropout=0.5,
    ):
        super(ClassifierModule, self).__init__()
        self.dropout = nn.Dropout(dropout)
        self.hidden = nn.Linear(input_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, output_dim)

    def forward(self, X, **kwargs):
        X = F.relu(self.hidden(X))
        X = self.dropout(X)
        X = F.softmax(self.output(X), dim=-1)
        return X
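A quick sanity check of the module on a dummy batch (illustrative only): the forward pass should return one probability distribution per example.

module = ClassifierModule()
dummy = torch.zeros(4, mnist_dim)   # batch of 4 flattened images
probs = module(dummy)
probs.shape, float(probs[0].sum())  # (4, 10); each row sums to ~1.0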
skorch allows you to use PyTorch's networks in the scikit-learn setting.
from skorch import NeuralNetClassifier
net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=20,
    lr=0.1,
    # device='cuda',  # uncomment this to train with CUDA
)
net.fit(X_train, y_train);
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.8343       0.8983        0.3821  1.4131
      2        0.4338       0.9193        0.2961  1.4066
      3        0.3625       0.9319        0.2424  1.3839
      4        0.3275       0.9382        0.2199  1.3872
      5        0.2967       0.9435        0.1989  1.4129
      6        0.2800       0.9467        0.1835  1.2378
      7        0.2615       0.9513        0.1695  1.1722
      8        0.2467       0.9534        0.1612  1.4182
      9        0.2385       0.9551        0.1533  1.4431
     10        0.2276       0.9560        0.1471  1.3997
     11        0.2201       0.9573        0.1434  1.3977
     12        0.2118       0.9579        0.1380  1.4252
     13        0.2070       0.9613        0.1335  1.4033
     14        0.1994       0.9609        0.1316  1.4070
     15        0.1979       0.9618        0.1257  1.4061
     16        0.1915       0.9638        0.1228  1.4022
     17        0.1881       0.9651        0.1189  1.4358
     18        0.1835       0.9651        0.1167  1.4076
     19        0.1769       0.9654        0.1150  1.4259
     20        0.1717       0.9665        0.1129  1.4119
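skorch records these metrics on the net's history object; a short sketch pulling out the curves using skorch's history indexing:

# validation accuracy of the last epoch and the full training-loss curve
net.history[-1, 'valid_acc']
train_losses = net.history[:, 'train_loss']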
predicted = net.predict(X_test)
np.mean(predicted == y_test)
0.96325714285714281
An accuracy of nearly 96% for a network with only one hidden layer is not too bad.
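Because the net is a scikit-learn compatible estimator, it also plugs directly into utilities like GridSearchCV; a minimal sketch (the grid values here are illustrative):

from sklearn.model_selection import GridSearchCV

# module__ parameters are routed by skorch to ClassifierModule
params = {
    'lr': [0.05, 0.1],
    'module__dropout': [0.25, 0.5],
}
gs = GridSearchCV(net, params, cv=3, scoring='accuracy')
# gs.fit(X_train, y_train)  # commented out: refits the net several times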
PyTorch expects a 4-dimensional tensor as input to its 2D convolution layers. The dimensions represent batch size, number of channels, image height and image width. The batch size is simply the number of examples, and MNIST data has only one channel. As stated above, each MNIST vector represents a 28x28 pixel image. Hence, the resulting shape of the PyTorch tensor needs to be (x, 1, 28, 28).
XCnn = X.reshape(-1, 1, 28, 28)
XCnn.shape
(70000, 1, 28, 28)
XCnn_train, XCnn_test, y_train, y_test = train_test_split(XCnn, y, test_size=0.25, random_state=42)
XCnn_train.shape, y_train.shape
((52500, 1, 28, 28), (52500,))
class Cnn(nn.Module):
    def __init__(self):
        super(Cnn, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(1600, 128)  # 1600 = channels * height * width = 64 * 5 * 5
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, x.size(1) * x.size(2) * x.size(3))  # flatten over channels, height and width = 1600
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        x = F.softmax(x, dim=-1)
        return x
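The 1600 in fc1 follows from the feature-map shapes: 28 -> 26 (conv1, kernel 3) -> 13 (pool) -> 11 (conv2) -> 5 (pool), with 64 channels, i.e. 64 * 5 * 5 = 1600. A quick check on a dummy input:

# trace the feature-map shape that feeds fc1: should be (1, 64, 5, 5)
cnn_module = Cnn()
x = torch.zeros(1, 1, 28, 28)
x = F.max_pool2d(cnn_module.conv1(x), 2)  # -> (1, 32, 13, 13)
x = F.max_pool2d(cnn_module.conv2(x), 2)  # -> (1, 64, 5, 5)
x.shape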
cnn = NeuralNetClassifier(
    Cnn,
    max_epochs=15,
    lr=1,
    optimizer=torch.optim.Adadelta,
    # device='cuda',  # uncomment this to train with CUDA
)
cnn.fit(XCnn_train, y_train);
  epoch    train_loss    valid_acc    valid_loss      dur
-------  ------------  -----------  ------------  -------
      1        0.4168       0.9748        0.0836  18.2292
      2        0.1455       0.9823        0.0594  18.0943
      3        0.1129       0.9849        0.0503  19.4160
      4        0.0940       0.9856        0.0433  17.4486
      5        0.0836       0.9855        0.0460  19.5266
      6        0.0788       0.9869        0.0379  19.8825
      7        0.0681       0.9881        0.0369  18.6277
      8        0.0662       0.9891        0.0356  19.2907
      9        0.0630       0.9879        0.0340  18.7650
     10        0.0575       0.9890        0.0324  17.2828
     11        0.0563       0.9886        0.0333  17.8509
     12        0.0523       0.9881        0.0357  17.3866
     13        0.0516       0.9903        0.0326  17.8570
     14        0.0462       0.9901        0.0320  18.2953
     15        0.0464       0.9897        0.0313  18.2699
cnn_pred = cnn.predict(XCnn_test)
np.mean(cnn_pred == y_test)
0.99102857142857148
An accuracy of 99.1% should suffice for this example!