Advanced usage

This notebook shows some more advanced features of skorch. More examples will be added over time.

In [1]:
! [ ! -z "$COLAB_GPU" ] && pip install torch skorch
In [2]:
import torch
from torch import nn
import torch.nn.functional as F
torch.manual_seed(0);

Setup

A toy binary classification task

We load a toy classification task from sklearn.

In [3]:
import numpy as np
from sklearn.datasets import make_classification
In [4]:
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X = X.astype(np.float32)
In [5]:
X.shape, y.shape, y.mean()
Out[5]:
((1000, 20), (1000,), 0.5)

Definition of the pytorch classification module

We define a vanilla neural network with two hidden layers. The output layer must have 2 output units since there are two classes, and it must apply a softmax nonlinearity, because the output of the forward call is used directly when predict_proba is called later.

In [6]:
from skorch import NeuralNetClassifier
In [7]:
class ClassifierModule(nn.Module):
    def __init__(
            self,
            num_units=10,
            nonlin=F.relu,
            dropout=0.5,
    ):
        super(ClassifierModule, self).__init__()
        self.num_units = num_units
        self.nonlin = nonlin

        self.dense0 = nn.Linear(20, num_units)
        self.dropout = nn.Dropout(dropout)
        self.dense1 = nn.Linear(num_units, 10)
        self.output = nn.Linear(10, 2)

    def forward(self, X, **kwargs):
        X = self.nonlin(self.dense0(X))
        X = self.dropout(X)
        X = F.relu(self.dense1(X))
        X = F.softmax(self.output(X), dim=-1)
        return X
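
As a quick sanity check (this cell is not part of the original notebook), we can instantiate the module and verify that the forward pass returns one probability per class:

module = ClassifierModule()
module.eval()  # disable dropout for a deterministic forward pass
with torch.no_grad():
    probs = module(torch.from_numpy(X[:5]))
print(probs.shape)        # torch.Size([5, 2])
print(probs.sum(dim=-1))  # each row sums to 1 thanks to the softmax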

Callbacks

Callbacks are a powerful and flexible way to customize the behavior of your neural network. They are all called at specific points during the model training, e.g. when training starts, or after each batch. Have a look at the skorch.callbacks module to see the callbacks that are already implemented.
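
For instance, the built-in EpochScoring callback computes an additional metric at the end of each epoch. A minimal sketch of attaching it (the scoring string follows sklearn's scorer conventions):

from skorch.callbacks import EpochScoring

net = NeuralNetClassifier(
    ClassifierModule,
    callbacks=[
        # compute ROC AUC on the validation set after each epoch
        EpochScoring(scoring='roc_auc', lower_is_better=False),
    ],
)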

Writing a custom callback

Although skorch comes with a handful of useful callbacks, you may find that you would like to write your own. Doing so is straightforward; just remember these rules (a bare skeleton follows the list):

  • They should inherit from skorch.callbacks.Callback.
  • They should implement at least one of the on_-methods provided by the parent class (e.g. on_batch_begin or on_epoch_end).
  • The on_-methods receive the NeuralNet instance as first argument and, where appropriate, the local data (e.g. the data from the current batch). The method should also have **kwargs in the signature to absorb potentially unused arguments.
  • Optional: If you have attributes that should be reset when the model is re-initialized, those attributes should be set in the initialize method.
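
Put together, the bare skeleton these rules imply looks like this (a sketch with placeholder names; my_param and state_ are purely illustrative):

from skorch.callbacks import Callback

class MyCallback(Callback):
    def __init__(self, my_param=1):
        self.my_param = my_param  # hyperparameters are set in __init__

    def initialize(self):
        # attributes that must be reset on re-initialization go here
        self.state_ = None
        return self

    def on_epoch_end(self, net, **kwargs):
        # first argument is the NeuralNet instance, **kwargs absorbs the rest
        pass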

Here is an example of a callback that remembers at which epoch the validation accuracy reached a certain value. Then, when training is finished, it calls a mock Twitter API and tweets that epoch. We proceed as follows:

  • We set the desired minimum accuracy during __init__.
  • We set the critical epoch during initialize.
  • After each epoch, if the critical epoch has not been set yet, we check whether the minimum accuracy was reached.
  • When training finishes, we send a tweet informing us whether our training was successful or not.
In [8]:
from skorch.callbacks import Callback


def tweet(msg):
    print("~" * 60)
    print("*tweet*", msg, "#skorch #pytorch")
    print("~" * 60)


class AccuracyTweet(Callback):
    def __init__(self, min_accuracy):
        self.min_accuracy = min_accuracy

    def initialize(self):
        self.critical_epoch_ = -1

    def on_epoch_end(self, net, **kwargs):
        if self.critical_epoch_ > -1:
            return
        # look at the validation accuracy of the last epoch
        if net.history[-1, 'valid_acc'] >= self.min_accuracy:
            self.critical_epoch_ = len(net.history)

    def on_train_end(self, net, **kwargs):
        if self.critical_epoch_ < 0:
            msg = "Accuracy never reached {} :(".format(self.min_accuracy)
        else:
            msg = "Accuracy reached {} at epoch {}!!!".format(
                self.min_accuracy, self.critical_epoch_)

        tweet(msg)

Now we initialize a NeuralNetClassifier and pass our new callback in a list to the callbacks argument. After that, we train the model and see what happens.

In [9]:
net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=10,
    lr=0.02,
    warm_start=True,
    callbacks=[AccuracyTweet(min_accuracy=0.7)],
)
In [10]:
net.fit(X, y)
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.6954       0.6000        0.6844  0.0561
      2        0.6802       0.5950        0.6817  0.0182
      3        0.6839       0.6000        0.6792  0.0182
      4        0.6753       0.5900        0.6767  0.0189
      5        0.6769       0.5950        0.6742  0.0184
      6        0.6774       0.6050        0.6720  0.0176
      7        0.6693       0.6250        0.6695  0.0183
      8        0.6694       0.6300        0.6672  0.0180
      9        0.6703       0.6400        0.6652  0.0192
     10        0.6523       0.6550        0.6623  0.0189
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*tweet* Accuracy never reached 0.7 :( #skorch #pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Out[10]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

Oh no, our model never reached a validation accuracy of 0.7. Let's train some more (this is possible because we set warm_start=True):

In [11]:
net.fit(X, y)
     11        0.6641       0.6650        0.6603  0.0176
     12        0.6524       0.6650        0.6582  0.0196
     13        0.6506       0.6700        0.6553  0.0184
     14        0.6489       0.6650        0.6527  0.0192
     15        0.6505       0.6750        0.6502  0.0175
     16        0.6473       0.6750        0.6474  0.0185
     17        0.6431       0.6800        0.6443  0.0171
     18        0.6461       0.6900        0.6418  0.0182
     19        0.6430       0.6850        0.6392  0.0200
     20        0.6364       0.6950        0.6366  0.0187
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*tweet* Accuracy never reached 0.7 :( #skorch #pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Out[11]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

Alas, the validation accuracy still fell just short of 0.7 (0.695 at epoch 20), so our callback tweeted its disappointment once more. With warm_start=True, we could keep training until it gets there.

Accessing callback parameters

Say you would like to use a learning rate schedule with your neural net, but you don't know what parameters are best for that schedule. Wouldn't it be nice if you could find those parameters with a grid search? With skorch, this is possible. Below, we show how to access the parameters of your callbacks.

To simplify the access to your callback parameters, it is best to give your callback a name. This is achieved by passing the callbacks parameter a list of (name, callback) tuples, such as:

callbacks=[
    ('scheduler', LearningRateScheduler),
    ...
],

This way, you can access your callbacks using the double underscore semantics (as, for instance, in an sklearn Pipeline):

callbacks__scheduler__epoch=50,

So if you would like to perform a grid search on, say, the number of units in the hidden layer and the learning rate schedule, it could look something like this:

param_grid = {
    'module__num_units': [50, 100, 150],
    'callbacks__scheduler__epoch': [10, 50, 100],
}
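
Putting these pieces together, a runnable sketch could look as follows. Note that skorch's scheduler callback is actually called LRScheduler; we assume here that the 'StepLR' policy and its step_size argument (both forwarded to torch.optim.lr_scheduler.StepLR) are what we want to tune:

from sklearn.model_selection import GridSearchCV
from skorch.callbacks import LRScheduler

net = NeuralNetClassifier(
    ClassifierModule,
    callbacks=[
        ('scheduler', LRScheduler(policy='StepLR', step_size=10)),
    ],
    verbose=0,
)

param_grid = {
    'module__num_units': [10, 20],
    # extra keyword arguments of LRScheduler are forwarded to the policy,
    # so they are reachable with the same double-underscore notation
    'callbacks__scheduler__step_size': [5, 10],
}

gs = GridSearchCV(net, param_grid, scoring='accuracy', cv=3)
gs.fit(X, y)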

Note: If you would like to refresh your knowledge on grid search, have a look at the sklearn documentation or the Basic_Usage notebook.

Below, we show how accessing the callback parameters works with our AccuracyTweet callback:

In [12]:
net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=10,
    lr=0.1,
    warm_start=True,
    callbacks=[
        ('tweet', AccuracyTweet(min_accuracy=0.7)),
    ],
    callbacks__tweet__min_accuracy=0.6,
)
In [13]:
net.fit(X, y)
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.7114       0.5150        0.6923  0.0180
      2        0.6959       0.5300        0.6831  0.0189
      3        0.6856       0.6000        0.6752  0.0181
      4        0.6768       0.6300        0.6686  0.0199
      5        0.6670       0.6500        0.6585  0.0173
      6        0.6607       0.6300        0.6496  0.0185
      7        0.6375       0.6650        0.6377  0.0193
      8        0.6333       0.6800        0.6263  0.0178
      9        0.6160       0.6850        0.6117  0.0187
     10        0.6109       0.6850        0.5983  0.0176
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*tweet* Accuracy reached 0.6 at epoch 3!!! #skorch #pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Out[13]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

As you can see, by passing callbacks__tweet__min_accuracy=0.6, we changed that parameter. The same can be achieved by calling the set_params method with the corresponding arguments:

In [14]:
net.set_params(callbacks__tweet__min_accuracy=0.75)
Out[14]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)
In [15]:
net.fit(X, y)
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
     11        0.5926       0.6900        0.5844  0.0206
     12        0.5778       0.7000        0.5719  0.0210
     13        0.5824       0.7300        0.5597  0.0198
     14        0.5639       0.7350        0.5541  0.0177
     15        0.5580       0.7400        0.5451  0.0187
     16        0.5415       0.7500        0.5334  0.0183
     17        0.5508       0.7550        0.5333  0.0190
     18        0.5317       0.7400        0.5338  0.0174
     19        0.5209       0.7400        0.5225  0.0187
     20        0.5275       0.7350        0.5227  0.0182
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*tweet* Accuracy reached 0.75 at epoch 16!!! #skorch #pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Out[15]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

Working with different data types

Working with Datasets

We encourage you not to pass Datasets to net.fit but to let skorch handle Datasets internally. Nonetheless, there are situations where passing Datasets to net.fit is hard to avoid (e.g. if you want to load the data lazily during training). This is supported by skorch but may have some unwanted side effects relating to sklearn. For instance, Datasets cannot be split into train and validation in a stratified fashion without explicit knowledge of the classification targets.

Below we show what happens when you try to fit with a Dataset and the stratified split fails:

In [16]:
class MyDataset(torch.utils.data.Dataset):
    def __init__(self, X, y):
        self.X = X
        self.y = y
        
        assert len(X) == len(y)

    def __len__(self):
        return len(self.X)

    def __getitem__(self, i):
        return self.X[i], self.y[i]
In [17]:
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X = X.astype(np.float32)
dataset = MyDataset(X, y)
In [18]:
net = NeuralNetClassifier(ClassifierModule)
In [19]:
try:
    net.fit(dataset, y=None)
except ValueError as e:
    print("Error:", e)
Error: Stratified CV requires explicitely passing a suitable y.
In [20]:
net.train_split.stratified
Out[20]:
True

As you can see, the stratified split fails since y is not known. There are two solutions to this:

  • turn off stratified splitting (net.train_split.stratified = False); a sketch follows below
  • pass y explicitly (if possible), even if it is implicitly contained in the Dataset
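
A minimal sketch of the first solution (CVSplit is the class behind skorch's default train_split; later skorch releases renamed it to ValidSplit):

from skorch.dataset import CVSplit

net = NeuralNetClassifier(
    ClassifierModule,
    # non-stratified 5-fold split, so no explicit y is required
    train_split=CVSplit(5, stratified=False),
)
net.fit(dataset, y=None)  # no ValueError this time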

The second solution is shown below:

In [21]:
net.fit(dataset, y=y)
Re-initializing module!
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.7869       0.5000        0.7649  0.0164
      2        0.7700       0.5000        0.7526  0.0170
      3        0.7594       0.5000        0.7424  0.0172
      4        0.7443       0.5000        0.7336  0.0188
      5        0.7388       0.5000        0.7261  0.0159
      6        0.7265       0.5000        0.7201  0.0170
      7        0.7229       0.5000        0.7148  0.0155
      8        0.7103       0.5000        0.7105  0.0165
      9        0.7095       0.5000        0.7067  0.0166
     10        0.7053       0.5000        0.7035  0.0166
Out[21]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

Working with dicts

The standard case

skorch has built-in support for dictionaries as data containers. Here we show a somewhat contrived example of how to use dicts, but it should get the point across. First we create data and put it into a dictionary X_dict with two keys X0 and X1:

In [22]:
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X0, X1 = X[:, :10].astype(np.float32), X[:, 10:].astype(np.float32)
X_dict = {'X0': X0, 'X1': X1}

When skorch passes the dict to the pytorch module, it passes the data as keyword arguments to the forward call. That means the forward method should accept the two keys X0 and X1 as arguments, as shown below:

In [23]:
class ClassifierWithDict(nn.Module):
    def __init__(
            self,
            num_units0=50,
            num_units1=50,
            nonlin=F.relu,
            dropout=0.5,
    ):
        super(ClassifierWithDict, self).__init__()
        self.num_units0 = num_units0
        self.num_units1 = num_units1
        self.nonlin = nonlin

        self.dense0 = nn.Linear(10, num_units0)
        self.dense1 = nn.Linear(10, num_units1)
        self.dropout = nn.Dropout(dropout)
        self.output = nn.Linear(num_units0 + num_units1, 2)

    # NOTE: We accept X0 and X1, the keys from the dict, as arguments
    def forward(self, X0, X1, **kwargs):
        X0 = self.nonlin(self.dense0(X0))
        X0 = self.dropout(X0)

        X1 = self.nonlin(self.dense1(X1))
        X1 = self.dropout(X1)

        X = torch.cat((X0, X1), dim=1)
        X = F.relu(X)
        X = F.softmax(self.output(X), dim=-1)
        return X

As long as we keep this in mind, we are good to go.

In [24]:
net = NeuralNetClassifier(ClassifierWithDict, verbose=0)
In [25]:
net.fit(X_dict, y)
Out[25]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierWithDict(
    (dense0): Linear(in_features=10, out_features=50, bias=True)
    (dense1): Linear(in_features=10, out_features=50, bias=True)
    (dropout): Dropout(p=0.5)
    (output): Linear(in_features=100, out_features=2, bias=True)
  ),
)

Working with sklearn FunctionTransformer and GridSearch

In [26]:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.model_selection import GridSearchCV

sklearn makes the assumption that incoming data should be numpy/sparse arrays or something similar. This clashes with the use of dictionaries. Unfortunately, it is sometimes impossible to work around that for now (for instance, when using skorch with BaggingClassifier). Other times, there are possibilities.

When we have a preprocessing pipeline that involves FunctionTransformer, we have to pass the parameter validate=False so that sklearn allows the dictionary to pass through:

In [27]:
pipe = Pipeline([
    ('do-nothing', FunctionTransformer(validate=False)),
    ('net', net),
])
In [28]:
pipe.fit(X_dict, y)
Out[28]:
Pipeline(memory=None,
     steps=[('do-nothing', FunctionTransformer(accept_sparse=False, check_inverse=True, func=None,
          inv_kw_args=None, inverse_func=None, kw_args=None,
          pass_y='deprecated', validate=False)), ('net', <class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierWithDi... (dropout): Dropout(p=0.5)
    (output): Linear(in_features=100, out_features=2, bias=True)
  ),
))])

When trying a grid or randomized search, it is not that easy to pass a dict. If we try, we will get an error:

In [29]:
param_grid = {
    'net__module__num_units0': [10, 25, 50], 
    'net__module__num_units1': [10, 25, 50],
    'net__lr': [0.01, 0.1],
}
In [30]:
grid_search = GridSearchCV(pipe, param_grid, scoring='accuracy', verbose=1, cv=3)
In [31]:
try:
    grid_search.fit(X_dict, y)
except Exception as e:
    print(e)
Found input variables with inconsistent numbers of samples: [2, 1000]

The error above occurs because sklearn gets the length of the input data, which is 2 for the dict, and believes that is inconsistent with the length of the target (1000).

To get around that, skorch provides a helper class called SliceDict. It allows us to wrap our dictionaries so that they also behave like a numpy array:

In [32]:
from skorch.helper import SliceDict
In [33]:
X_slice_dict = SliceDict(X0=X0, X1=X1)  # X_slice_dict = SliceDict(**X_dict) would also work

The SliceDict has the correct length and shape, and it can be sliced across its values:

In [34]:
print("Length of dict: {}, length of SliceDict: {}".format(len(X_dict), len(X_slice_dict)))
print("Shape of SliceDict: {}".format(X_slice_dict.shape))
Length of dict: 2, length of SliceDict: 1000
Shape of SliceDict: (1000,)
In [35]:
print("Slicing the SliceDict slices across values: {}".format(X_slice_dict[:2]))
Slicing the SliceDict slices across values: SliceDict(**{'X0': array([[-0.9658346 , -2.1890705 ,  0.16985609,  0.8138456 , -3.375209  ,
        -2.1430597 , -0.39585084,  2.9419577 , -2.1910605 ,  1.2443967 ],
       [-0.454767  ,  4.339768  , -0.48572844, -4.88433   , -2.8836503 ,
         2.6097205 , -1.952876  , -0.09192174,  0.07970932, -0.08938338]],
      dtype=float32), 'X1': array([[ 0.04351204, -0.5150961 , -0.86073655, -1.1097169 ,  0.31839254,
        -0.8231973 , -1.056304  , -0.89645284,  0.3759244 , -1.0849651 ],
       [-0.60726726, -1.0674309 ,  0.48804346, -0.50230557,  0.55743027,
         1.01592   , -1.9953582 ,  2.9030426 , -0.9739298 ,  2.1753323 ]],
      dtype=float32)})

With this, we can run the grid search just as expected:

In [36]:
grid_search.fit(X_slice_dict, y)
Fitting 3 folds for each of 18 candidates, totalling 54 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done  54 out of  54 | elapsed:   14.1s finished
Out[36]:
GridSearchCV(cv=3, error_score='raise-deprecating',
       estimator=Pipeline(memory=None,
     steps=[('do-nothing', FunctionTransformer(accept_sparse=False, check_inverse=True, func=None,
          inv_kw_args=None, inverse_func=None, kw_args=None,
          pass_y='deprecated', validate=False)), ('net', <class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierWithDi... (dropout): Dropout(p=0.5)
    (output): Linear(in_features=100, out_features=2, bias=True)
  ),
))]),
       fit_params=None, iid='warn', n_jobs=None,
       param_grid={'net__module__num_units0': [10, 25, 50], 'net__module__num_units1': [10, 25, 50], 'net__lr': [0.01, 0.1]},
       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',
       scoring='accuracy', verbose=1)
In [37]:
grid_search.best_score_, grid_search.best_params_
Out[37]:
(0.756,
 {'net__lr': 0.1,
  'net__module__num_units0': 50,
  'net__module__num_units1': 25})
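
Since GridSearchCV refits the best pipeline on the whole data by default (refit=True), we can predict with the fitted grid search object directly. A short usage sketch (keep in mind that this scores on the training data, so it is an optimistic estimate):

y_pred = grid_search.predict(X_slice_dict)
print((y_pred == y).mean())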