Advanced usage

This notebook shows some more advanced features of skorch. More examples will be added with time.

In [1]:
! [ ! -z "$COLAB_GPU" ] && pip install torch skorch
In [2]:
import torch
from torch import nn
import torch.nn.functional as F
torch.manual_seed(0);

Setup

A toy binary classification task

We load a toy classification task from sklearn.

In [3]:
import numpy as np
from sklearn.datasets import make_classification
In [4]:
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X = X.astype(np.float32)
In [5]:
X.shape, y.shape, y.mean()
Out[5]:
((1000, 20), (1000,), 0.5)

Definition of the pytorch classification module

We define a vanilla neural network with two hidden layers. The output layer should have 2 output units since there are two classes. In addition, it should have a softmax nonlinearity, because the output of the forward call is used as the probability estimate when predict_proba is called later.

In [6]:
from skorch import NeuralNetClassifier
In [7]:
class ClassifierModule(nn.Module):
    def __init__(
            self,
            num_units=10,
            nonlin=F.relu,
            dropout=0.5,
    ):
        super(ClassifierModule, self).__init__()
        self.num_units = num_units
        self.nonlin = nonlin
        self.dropout = dropout

        self.dense0 = nn.Linear(20, num_units)
        self.nonlin = nonlin
        self.dropout = nn.Dropout(dropout)
        self.dense1 = nn.Linear(num_units, 10)
        self.output = nn.Linear(10, 2)

    def forward(self, X, **kwargs):
        X = self.nonlin(self.dense0(X))
        X = self.dropout(X)
        X = F.relu(self.dense1(X))
        X = F.softmax(self.output(X), dim=-1)
        return X

Callbacks

Callbacks are a powerful and flexible way to customize the behavior of your neural network. They are all called at specific points during the model training, e.g. when training starts, or after each batch. Have a look at the skorch.callbacks module to see the callbacks that are already implemented.
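
For instance, one of the built-in callbacks, EpochScoring, can be attached to record an additional metric after every epoch. The following short sketch is only for illustration and is not part of the original notebook (net_auc is a made-up name):

from skorch.callbacks import EpochScoring

net_auc = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=5,
    # EpochScoring computes the given sklearn scoring on the validation set
    # at the end of every epoch and records it in the history
    callbacks=[EpochScoring(scoring='roc_auc', lower_is_better=False, name='valid_auc')],
)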

Writing a custom callback

Although skorch comes with a handful of useful callbacks, you may find that you would like to write your own. Doing so is straightforward; just remember these rules:

  • They should inherit from skorch.callbacks.Callback.
  • They should implement at least one of the on_-methods provided by the parent class (e.g. on_batch_begin or on_epoch_end).
  • As their first argument, the on_-methods receive the NeuralNet instance and, where appropriate, the local data (e.g. the data from the current batch). The methods should also have **kwargs in their signature to absorb potentially unused arguments.
  • Optional: If you have attributes that should be reset when the model is re-initialized, those attributes should be set in the initialize method.

Here is an example of a callback that remembers at which epoch the validation accuracy reached a certain value. Then, when training is finished, it calls a mock Twitter API and tweets that epoch. We proceed as follows:

  • We set the desired minimum accuracy during __init__.
  • We set the critical epoch during initialize.
  • After each epoch, if the critical accuracy has not yet been reached, we check whether it was reached in that epoch; if so, we remember the epoch.
  • When training finishes, we send a tweet informing us whether our training was successful or not.
In [8]:
from skorch.callbacks import Callback


def tweet(msg):
    print("~" * 60)
    print("*tweet*", msg, "#skorch #pytorch")
    print("~" * 60)


class AccuracyTweet(Callback):
    def __init__(self, min_accuracy):
        self.min_accuracy = min_accuracy

    def initialize(self):
        self.critical_epoch_ = -1

    def on_epoch_end(self, net, **kwargs):
        if self.critical_epoch_ > -1:
            return
        # look at the validation accuracy of the last epoch
        if net.history[-1, 'valid_acc'] >= self.min_accuracy:
            self.critical_epoch_ = len(net.history)

    def on_train_end(self, net, **kwargs):
        if self.critical_epoch_ < 0:
            msg = "Accuracy never reached {} :(".format(self.min_accuracy)
        else:
            msg = "Accuracy reached {} at epoch {}!!!".format(
                self.min_accuracy, self.critical_epoch_)

        tweet(msg)

Now we initialize a NeuralNetClassifier and pass our new callback in a list to the callbacks argument. After that, we train the model and see what happens.

In [9]:
net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=15,
    lr=0.02,
    warm_start=True,
    callbacks=[AccuracyTweet(min_accuracy=0.7)],
)
In [10]:
net.fit(X, y)
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.6954       0.6000        0.6844  0.0176
      2        0.6802       0.5950        0.6817  0.0150
      3        0.6839       0.6000        0.6792  0.0178
      4        0.6753       0.5900        0.6767  0.0140
      5        0.6769       0.5950        0.6742  0.0172
      6        0.6774       0.6050        0.6720  0.0166
      7        0.6693       0.6250        0.6695  0.0134
      8        0.6694       0.6300        0.6672  0.0168
      9        0.6703       0.6400        0.6652  0.0177
     10        0.6523       0.6550        0.6623  0.0151
     11        0.6641       0.6650        0.6603  0.0134
     12        0.6524       0.6650        0.6582  0.0138
     13        0.6506       0.6700        0.6553  0.0126
     14        0.6489       0.6650        0.6527  0.0132
     15        0.6505       0.6750        0.6502  0.0133
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*tweet* Accuracy never reached 0.7 :( #skorch #pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Out[10]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

Oh no, our model never reached a validation accuracy of 0.7. Let's train some more (this is possible because we set warm_start=True):

In [11]:
net.fit(X, y)
     16        0.6473       0.6750        0.6474  0.0175
     17        0.6431       0.6800        0.6443  0.0185
     18        0.6461       0.6900        0.6418  0.0162
     19        0.6430       0.6850        0.6392  0.0131
     20        0.6364       0.6950        0.6366  0.0146
     21        0.6266       0.7000        0.6334  0.0149
     22        0.6316       0.7000        0.6308  0.0151
     23        0.6231       0.7000        0.6277  0.0128
     24        0.6094       0.7000        0.6242  0.0160
     25        0.6250       0.7050        0.6215  0.0130
     26        0.6180       0.7150        0.6187  0.0139
     27        0.6186       0.7150        0.6159  0.0169
     28        0.6144       0.7150        0.6134  0.0171
     29        0.5993       0.7150        0.6100  0.0147
     30        0.5976       0.7150        0.6071  0.0138
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*tweet* Accuracy reached 0.7 at epoch 21!!! #skorch #pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Out[11]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)
In [12]:
assert net.history[-1, 'valid_acc'] >= 0.7

Finally, the validation score exceeded 0.7. Hooray!
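
As an aside (this check is not in the original notebook): because the module ends in a softmax, predict_proba simply returns the output of the forward call as class probabilities:

proba = net.predict_proba(X)
# each row is the softmax output of forward, so the rows sum to 1
print(proba.shape)            # (1000, 2)
print(proba[:2].sum(axis=1))  # approximately [1., 1.]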

Accessing callback parameters

Say you would like to use a learning rate schedule with your neural net, but you don't know what parameters are best for that schedule. Wouldn't it be nice if you could find those parameters with a grid search? With skorch, this is possible. Below, we show how to access the parameters of your callbacks.

To simplify the access to your callback parameters, it is best if you give your callback a name. This is achieved by passing the callbacks parameter a list of (name, callback) tuples, such as:

callbacks=[
    ('scheduler', LearningRateScheduler()),
    ...
],

This way, you can access your callbacks using the double underscore semantics (as, for instance, in an sklearn Pipeline):

callbacks__scheduler__epoch=50,

So if you would like to perform a grid search on, say, the number of units in the hidden layer and the learning rate schedule, it could look something like this:

param_grid = {
    'module__num_units': [50, 100, 150],
    'callbacks__scheduler__epoch': [10, 50, 100],
}
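
Putting it together, a hypothetical sketch of such a search could look roughly like the following, here using skorch's built-in LRScheduler callback (wrapping torch's StepLR) in place of the placeholder name above; the tunable parameter names depend on the chosen scheduler policy, and net_sched / param_grid_sched are made-up names:

from sklearn.model_selection import GridSearchCV
from skorch.callbacks import LRScheduler

# give the callback a name so its parameters are reachable via
# callbacks__scheduler__...
net_sched = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=20,
    callbacks=[('scheduler', LRScheduler(policy='StepLR', step_size=10))],
    verbose=0,
)

param_grid_sched = {
    'module__num_units': [10, 50],
    'callbacks__scheduler__step_size': [5, 10],  # forwarded to StepLR
}

search = GridSearchCV(net_sched, param_grid_sched, scoring='accuracy', cv=3)
# search.fit(X, y)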

Note: If you would like to refresh your knowledge on grid search, have a look at the Basic_Usage notebook.

Below, we show how accessing the callback parameters works with our AccuracyTweet callback:

In [13]:
net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=10,
    lr=0.1,
    warm_start=True,
    callbacks=[
        ('tweet', AccuracyTweet(min_accuracy=0.7)),
    ],
    callbacks__tweet__min_accuracy=0.6,
)
In [14]:
net.fit(X, y)
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.6932       0.5950        0.6749  0.0161
      2        0.6686       0.6550        0.6613  0.0160
      3        0.6641       0.6450        0.6487  0.0168
      4        0.6438       0.6600        0.6354  0.0171
      5        0.6293       0.7000        0.6190  0.0171
      6        0.6091       0.7300        0.6040  0.0147
      7        0.5872       0.7500        0.5868  0.0130
      8        0.5820       0.7600        0.5736  0.0138
      9        0.5778       0.7850        0.5595  0.0129
     10        0.5626       0.7750        0.5484  0.0131
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*tweet* Accuracy reached 0.6 at epoch 2!!! #skorch #pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Out[14]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

As you can see, by passing callbacks__tweet__min_accuracy=0.6, we changed that parameter. The same can be achieved by calling the set_params method with the corresponding arguments:

In [15]:
net.set_params(callbacks__tweet__min_accuracy=0.75)
Out[15]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)
In [16]:
net.fit(X, y)
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
     11        0.5513       0.7750        0.5405  0.0136
     12        0.5612       0.7800        0.5361  0.0133
     13        0.5473       0.7950        0.5303  0.0159
     14        0.5304       0.7900        0.5241  0.0162
     15        0.5088       0.7850        0.5198  0.0170
     16        0.5373       0.7800        0.5168  0.0170
     17        0.5377       0.7750        0.5179  0.0169
     18        0.5257       0.7700        0.5177  0.0171
     19        0.5150       0.7700        0.5132  0.0169
     20        0.5136       0.7450        0.5116  0.0167
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*tweet* Accuracy reached 0.75 at epoch 11!!! #skorch #pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Out[16]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

Working with different data types

Working with Datasets

We encourage you not to pass Datasets to net.fit but to let skorch handle Datasets internally. Nonetheless, there are situations where passing Datasets to net.fit is hard to avoid (e.g. if you want to load the data lazily during training). This is supported by skorch but may have some unwanted side-effects relating to sklearn. For instance, a Dataset cannot be split into train and validation sets in a stratified fashion without explicit knowledge of the classification targets.

Below we show what happens when you try to fit with a Dataset and the stratified split fails:

In [17]:
class MyDataset(torch.utils.data.Dataset):
    def __init__(self, X, y):
        self.X = X
        self.y = y
        
        assert len(X) == len(y)

    def __len__(self):
        return len(self.X)

    def __getitem__(self, i):
        return self.X[i], self.y[i]
In [18]:
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X = X.astype(np.float32)
dataset = MyDataset(X, y)
In [19]:
net = NeuralNetClassifier(ClassifierModule)
In [20]:
try:
    net.fit(dataset, y=None)
except ValueError as e:
    print("Error:", e)
Error: Stratified CV requires explicitely passing a suitable y.
In [21]:
net.train_split.stratified
Out[21]:
True

As you can see, the stratified split fails since y is not known. There are two solutions to this:

  • turn off stratified splitting (net.train_split.stratified = False), as sketched below
  • pass y explicitly (if possible), even if it is implicitly contained in the Dataset
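
A minimal sketch of the first option (not run in this notebook, net_no_strat is a made-up name) would be to disable stratification on the net's train_split before fitting, after which y is no longer required for the split:

net_no_strat = NeuralNetClassifier(ClassifierModule)
# turn off stratified splitting so the Dataset can be split without knowing y
net_no_strat.train_split.stratified = False
net_no_strat.fit(dataset, y=None)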

The second solution is shown below:

In [22]:
net.fit(dataset, y=y)
Re-initializing module.
Re-initializing optimizer.
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.6938       0.4650        0.6984  0.0154
      2        0.6975       0.4650        0.6977  0.0141
      3        0.6938       0.4600        0.6970  0.0130
      4        0.6923       0.4700        0.6964  0.0137
      5        0.6921       0.4800        0.6959  0.0135
      6        0.6878       0.5000        0.6954  0.0138
      7        0.6901       0.4950        0.6948  0.0130
      8        0.6884       0.4900        0.6944  0.0137
      9        0.6896       0.4900        0.6940  0.0130
     10        0.6870       0.4850        0.6936  0.0130
Out[22]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

Working with dicts

The standard case

skorch has built-in support for dictionaries as data containers. Here we show a somewhat contrived example of how to use dicts, but it should get the point across. First we create data and put it into a dictionary X_dict with two keys X0 and X1:

In [23]:
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X = X.astype(np.float32)
X0, X1 = X[:, :10], X[:, 10:]
X_dict = {'X0': X0, 'X1': X1}

When skorch passes the dict to the pytorch module, it will pass the data as keyword arguments to the forward call. That means that the forward method should accept the two keys, X0 and X1, as arguments, as shown below:

In [24]:
class ClassifierWithDict(nn.Module):
    def __init__(
            self,
            num_units0=50,
            num_units1=50,
            nonlin=F.relu,
            dropout=0.5,
    ):
        super(ClassifierWithDict, self).__init__()
        self.num_units0 = num_units0
        self.num_units1 = num_units1
        self.nonlin = nonlin
        self.dropout = dropout

        self.dense0 = nn.Linear(10, num_units0)
        self.dense1 = nn.Linear(10, num_units1)
        self.nonlin = nonlin
        self.dropout = nn.Dropout(dropout)
        self.output = nn.Linear(num_units0 + num_units1, 2)

    # NOTE: We accept X0 and X1, the keys from the dict, as arguments
    def forward(self, X0, X1, **kwargs):
        X0 = self.nonlin(self.dense0(X0))
        X0 = self.dropout(X0)

        X1 = self.nonlin(self.dense1(X1))
        X1 = self.dropout(X1)

        X = torch.cat((X0, X1), dim=1)
        X = F.relu(X)
        X = F.softmax(self.output(X), dim=-1)
        return X

As long as we keep this in mind, we are good to go.

In [25]:
net = NeuralNetClassifier(ClassifierWithDict, verbose=0)
In [26]:
net.fit(X_dict, y)
Out[26]:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierWithDict(
    (dense0): Linear(in_features=10, out_features=50, bias=True)
    (dense1): Linear(in_features=10, out_features=50, bias=True)
    (dropout): Dropout(p=0.5)
    (output): Linear(in_features=100, out_features=2, bias=True)
  ),
)

Working with sklearn FunctionTransformer and GridSearch

In [27]:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.model_selection import GridSearchCV

sklearn assumes that incoming data is a numpy array, a sparse matrix, or something similar, which clashes with the use of dictionaries. Unfortunately, this is sometimes impossible to work around for now (for instance when using skorch with BaggingClassifier). Other times, there are workarounds.

When we have a preprocessing pipeline that involves FunctionTransformer, we have to pass the parameter validate=False so that sklearn allows the dictionary to pass through:

In [28]:
pipe = Pipeline([
    ('do-nothing', FunctionTransformer(validate=False)),
    ('net', net),
])
In [29]:
pipe.fit(X_dict, y)
Out[29]:
Pipeline(memory=None,
     steps=[('do-nothing', FunctionTransformer(accept_sparse=False, check_inverse=True, func=None,
          inv_kw_args=None, inverse_func=None, kw_args=None,
          pass_y='deprecated', validate=False)), ('net', <class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierWithDi... (dropout): Dropout(p=0.5)
    (output): Linear(in_features=100, out_features=2, bias=True)
  ),
))])

When trying a grid or randomized search, it is not that easy to pass a dict. If we try, we will get an error:

In [30]:
param_grid = {
    'net__module__num_units0': [10, 25, 50], 
    'net__module__num_units1': [10, 25, 50],
    'net__lr': [0.01, 0.1],
}
In [31]:
grid_search = GridSearchCV(pipe, param_grid, scoring='accuracy', verbose=1, cv=3)
In [32]:
try:
    grid_search.fit(X_dict, y)
except Exception as e:
    print(e)
Found input variables with inconsistent numbers of samples: [2, 1000]

The error above occurs because sklearn gets the length of the input data, which is 2 for the dict, and believes that is inconsistent with the length of the target (1000).

To get around that, skorch provides a helper class called SliceDict. It allows us to wrap our dictionaries so that they also behave like a numpy array:

In [33]:
from skorch.helper import SliceDict
In [34]:
X_slice_dict = SliceDict(X0=X0, X1=X1)  # X_slice_dict = SliceDict(**X_dict) would also work

The SliceDict has the correct length and shape, and is sliceable across values:

In [35]:
print("Length of dict: {}, length of SliceDict: {}".format(len(X_dict), len(X_slice_dict)))
print("Shape of SliceDict: {}".format(X_slice_dict.shape))
Length of dict: 2, length of SliceDict: 1000
Shape of SliceDict: (1000,)
In [36]:
print("Slicing the SliceDict slices across values: {}".format(X_slice_dict[:2]))
Slicing the SliceDict slices across values: SliceDict(**{'X0': array([[-0.9658346 , -2.1890705 ,  0.16985609,  0.8138456 , -3.375209  ,
        -2.1430597 , -0.39585084,  2.9419577 , -2.1910605 ,  1.2443967 ],
       [-0.454767  ,  4.339768  , -0.48572844, -4.88433   , -2.8836503 ,
         2.6097205 , -1.952876  , -0.09192174,  0.07970932, -0.08938338]],
      dtype=float32), 'X1': array([[ 0.04351204, -0.5150961 , -0.86073655, -1.1097169 ,  0.31839254,
        -0.8231973 , -1.056304  , -0.89645284,  0.3759244 , -1.0849651 ],
       [-0.60726726, -1.0674309 ,  0.48804346, -0.50230557,  0.55743027,
         1.01592   , -1.9953582 ,  2.9030426 , -0.9739298 ,  2.1753323 ]],
      dtype=float32)})

With this, we can run the grid search just as expected:

In [37]:
grid_search.fit(X_slice_dict, y)
Fitting 3 folds for each of 18 candidates, totalling 54 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done  54 out of  54 | elapsed:   11.3s finished
Out[37]:
GridSearchCV(cv=3, error_score='raise-deprecating',
       estimator=Pipeline(memory=None,
     steps=[('do-nothing', FunctionTransformer(accept_sparse=False, check_inverse=True, func=None,
          inv_kw_args=None, inverse_func=None, kw_args=None,
          pass_y='deprecated', validate=False)), ('net', <class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=ClassifierWithDi... (dropout): Dropout(p=0.5)
    (output): Linear(in_features=100, out_features=2, bias=True)
  ),
))]),
       fit_params=None, iid='warn', n_jobs=None,
       param_grid={'net__module__num_units0': [10, 25, 50], 'net__module__num_units1': [10, 25, 50], 'net__lr': [0.01, 0.1]},
       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',
       scoring='accuracy', verbose=1)
In [38]:
grid_search.best_score_, grid_search.best_params_
Out[38]:
(0.754,
 {'net__lr': 0.1,
  'net__module__num_units0': 50,
  'net__module__num_units1': 50})

Multiple return values from forward

Often, we want our Module.forward method to return more than just one value. There can be several reasons for this. Maybe the criterion requires not one but several outputs. Or perhaps we want to inspect intermediate values to learn more about our model (say, the attention weights in a sequence-to-sequence model). Fortunately, skorch makes it easy to achieve this. In the following, we demonstrate how to handle multiple outputs from the Module.

To demonstrate this, we implement a very simple autoencoder. It consists of an encoder that reduces our input of 20 units to 5 units using two linear layers, and a decoder that tries to reconstruct the original input, again using two linear layers.

Implementing a simple autoencoder

In [39]:
from skorch import NeuralNetRegressor
In [40]:
class Encoder(nn.Module):
    def __init__(self, num_units=5):
        super().__init__()
        self.num_units = num_units
        
        self.encode = nn.Sequential(
            nn.Linear(20, 10),
            nn.ReLU(),
            nn.Linear(10, self.num_units),
            nn.ReLU(),
        )
        
    def forward(self, X):
        encoded = self.encode(X)
        return encoded
In [41]:
class Decoder(nn.Module):
    def __init__(self, num_units):
        super().__init__()
        self.num_units = num_units
        
        self.decode = nn.Sequential(
            nn.Linear(self.num_units, 10),
            nn.ReLU(),
            nn.Linear(10, 20),
        )
        
    def forward(self, X):
        decoded = self.decode(X)
        return decoded

The autoencoder module below returns a tuple of two values, the decoded input and the encoded state. This way, we can not only use the decoded input to calculate the normal loss but also have access to the encoded state.

In [42]:
class AutoEncoder(nn.Module):
    def __init__(self, num_units):
        super().__init__()
        self.num_units = num_units

        self.encoder = Encoder(num_units=self.num_units)
        self.decoder = Decoder(num_units=self.num_units)
        
    def forward(self, X):
        encoded = self.encoder(X)
        decoded = self.decoder(encoded)
        return decoded, encoded  # <- return a tuple of two values

Since the module's forward method returns two values, we have to adjust our objective to do the right thing with those values. If we didn't, the criterion wouldn't know what to do with the two values and would raise an error.

One strategy would be to only use the decoded state for the loss and discard the encoded state. For this demonstration, we have a different plan: we would like the encoded state to be sparse. Therefore, we add an L1 penalty on the encoded state to the reconstruction loss. This way, the net will try to reconstruct the input as accurately as possible while keeping the encoded state as sparse as possible.

To implement this, the right method to override is called get_loss, which is where skorch computes and returns the loss. It gets the prediction (our tuple) and the target as input, as well as other arguments and keywords that we pass through. We create a subclass of NeuralNetRegressor that overrides said method and implements our idea for the loss.

In [43]:
class AutoEncoderNet(NeuralNetRegressor):
    def get_loss(self, y_pred, y_true, *args, **kwargs):
        decoded, encoded = y_pred  # <- unpack the tuple that was returned by `forward`
        loss_reconstruction = super().get_loss(decoded, y_true, *args, **kwargs)
        loss_l1 = 1e-3 * torch.abs(encoded).sum()
        return loss_reconstruction + loss_l1

Note: Alternatively, we could have used an unaltered NeuralNetRegressor but implement a custom criterion that is responsible for unpacking the tuple and computing the loss.
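
For illustration, such a custom criterion could look roughly like the sketch below; the class name and the l1_weight parameter are made up for this example and are not part of skorch:

class ReconstructionL1Criterion(nn.Module):
    # MSE on the decoded output plus an L1 penalty on the encoded state
    def __init__(self, l1_weight=1e-3):
        super().__init__()
        self.l1_weight = l1_weight
        self.mse = nn.MSELoss()

    def forward(self, y_pred, y_true):
        decoded, encoded = y_pred  # unpack the tuple returned by AutoEncoder.forward
        return self.mse(decoded, y_true) + self.l1_weight * torch.abs(encoded).sum()


# an unaltered NeuralNetRegressor can then be used with this criterion
net_alt = NeuralNetRegressor(
    AutoEncoder,
    module__num_units=5,
    criterion=ReconstructionL1Criterion,
)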

Training the autoencoder

Now that everything is ready, we train the model as usual. We initialize our net subclass with the AutoEncoder module and call the fit method with X both as input and as target (since we want to reconstruct the original data):

In [44]:
net = AutoEncoderNet(
    AutoEncoder,
    module__num_units=5,
    lr=0.3,
)
In [45]:
net.fit(X, X)
  epoch    train_loss    valid_loss     dur
-------  ------------  ------------  ------
      1        3.8328        3.7855  0.0233
      2        3.6989        3.7111  0.0244
      3        3.6417        3.6707  0.0259
      4        3.6101        3.6463  0.0209
      5        3.5914        3.6310  0.0226
      6        3.5799        3.6212  0.0242
      7        3.5725        3.6144  0.0307
      8        3.5672        3.6090  0.0347
      9        3.5627        3.6036  0.0239
     10        3.5570        3.5963  0.0264
Out[45]:
<class '__main__.AutoEncoderNet'>[initialized](
  module_=AutoEncoder(
    (encoder): Encoder(
      (encode): Sequential(
        (0): Linear(in_features=20, out_features=10, bias=True)
        (1): ReLU()
        (2): Linear(in_features=10, out_features=5, bias=True)
        (3): ReLU()
      )
    )
    (decoder): Decoder(
      (decode): Sequential(
        (0): Linear(in_features=5, out_features=10, bias=True)
        (1): ReLU()
        (2): Linear(in_features=10, out_features=20, bias=True)
      )
    )
  ),
)

Voilà, the model was trained using our custom loss function that makes use of both predicted values.

Extracting the decoder and the encoder output

Sometimes, we may wish to inspect all the values returned by the forward method of the module. There are several ways to achieve this. In theory, we can always access the module directly by using the net.module_ attribute. However, this is unwieldy, since it completely bypasses the prediction loop, which takes care of important steps like casting numpy arrays to pytorch tensors and batching.
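
For illustration (this snippet is not in the original notebook), the direct approach would look roughly like this:

# we have to take care of eval mode and tensor conversion ourselves,
# and there is no batching
net.module_.eval()
with torch.no_grad():
    decoded_direct, encoded_direct = net.module_(torch.as_tensor(X))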

Also, we cannot simply use the predict method on the net: it only returns the first output of the forward method, in this case the decoded state. The reason for this is that predict is part of the sklearn API, which requires there to be only one output. This is shown below:

In [46]:
y_pred = net.predict(X)
y_pred.shape  # only the decoded state is returned
Out[46]:
(1000, 20)

However, the net itself provides two methods to retrieve all outputs. The first one is the net.forward method, which retrieves all the predicted batches from the Module.forward and concatenates them. Use this to retrieve the complete decoded and encoded state:

In [47]:
decoded_pred, encoded_pred = net.forward(X)
decoded_pred.shape, encoded_pred.shape
Out[47]:
(torch.Size([1000, 20]), torch.Size([1000, 5]))

The other method is called net.forward_iter. It is similar to net.forward but instead of collecting all the batches, this method is lazy and only yields one batch at a time. This can be especially useful if the output doesn't fit into memory:

In [48]:
for decoded_pred, encoded_pred in net.forward_iter(X):
    # do something with each batch
    break
decoded_pred.shape, encoded_pred.shape
Out[48]:
(torch.Size([128, 20]), torch.Size([128, 5]))

Finally, let's make sure that our initial goal of having a sparse encoded state was met. We check how many activations are close to zero:

In [49]:
torch.isclose(encoded_pred, torch.zeros_like(encoded_pred)).float().mean()
Out[49]:
tensor(0.8781)

As we had hoped, the encoded state is quite sparse, with the majority of outputs being close to 0.