Basic usage

skorch is designed to maximize interoperability between sklearn and pytorch. The aim is to keep 99% of the flexibility of pytorch while being able to leverage most features of sklearn. Below, we show the basic usage of skorch and how it can be combined with sklearn.

This notebook shows you how to use the basic functionality of skorch.

In [1]:
import torch
from torch import nn
import torch.nn.functional as F
In [2]:
torch.manual_seed(0);

Training a classifier and making predictions

A toy binary classification task

We load a toy classification task from sklearn.

In [3]:
import numpy as np
from sklearn.datasets import make_classification
In [4]:
X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X = X.astype(np.float32)
In [5]:
X.shape, y.shape, y.mean()
Out[5]:
((1000, 20), (1000,), 0.5)

Definition of the pytorch classification module

We define a vanilla neural network with two hidden layers. The output layer should have 2 output units since there are two classes. In addition, it should apply a softmax nonlinearity, because the output of the forward call is later used by predict_proba.

In [6]:
class ClassifierModule(nn.Module):
    def __init__(
            self,
            num_units=10,
            nonlin=F.relu,
            dropout=0.5,
    ):
        super(ClassifierModule, self).__init__()
        self.num_units = num_units
        self.nonlin = nonlin

        self.dense0 = nn.Linear(20, num_units)
        self.dropout = nn.Dropout(dropout)
        self.dense1 = nn.Linear(num_units, 10)
        self.output = nn.Linear(10, 2)

    def forward(self, X, **kwargs):
        X = self.nonlin(self.dense0(X))
        X = self.dropout(X)
        X = F.relu(self.dense1(X))
        X = F.softmax(self.output(X), dim=-1)
        return X

Defining and training the neural net classifier

We use NeuralNetClassifier because we're dealing with a classification task. The first argument should be the pytorch module. As additional arguments, we pass the number of epochs and the learning rate (lr), but those are optional.

Note: To use the CUDA backend, pass device='cuda' as an additional argument.

In [7]:
from skorch.net import NeuralNetClassifier
In [8]:
net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=20,
    lr=0.1,
    # device='cuda',  # uncomment this to train with CUDA
)

As in sklearn, we call fit, passing the input data X and the targets y. By default, NeuralNetClassifier performs an internal stratified 80/20 train/validation split to track the validation loss. The table below shows the train loss, the validation loss, and the accuracy on the validation set for each epoch.

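If you prefer a different validation split, the default can be overridden through the train_split argument. A minimal sketch (not used below), assuming CVSplit can be imported from skorch.dataset in this skorch version:

from skorch.dataset import CVSplit

net_custom_split = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=20,
    lr=0.1,
    # hold out 30% of the data for validation instead of the default 20%
    train_split=CVSplit(0.3, stratified=True),
)
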
In [10]:
net.fit(X, y)
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.6868       0.6000        0.6740  0.0861
      2        0.6706       0.6400        0.6617  0.0365
      3        0.6637       0.6650        0.6504  0.0418
      4        0.6548       0.7000        0.6418  0.0460
      5        0.6340       0.7100        0.6272  0.0393
      6        0.6219       0.7150        0.6124  0.0387
      7        0.6058       0.7100        0.5980  0.0407
      8        0.5964       0.7200        0.5875  0.0407
      9        0.5901       0.7100        0.5760  0.0380
     10        0.5716       0.7250        0.5651  0.0378
     11        0.5633       0.7250        0.5580  0.0387
     12        0.5652       0.7300        0.5529  0.0387
     13        0.5462       0.7350        0.5426  0.0397
     14        0.5407       0.7300        0.5407  0.0395
     15        0.5360       0.7300        0.5373  0.0373
     16        0.5517       0.7400        0.5328  0.0372
     17        0.5351       0.7450        0.5277  0.0380
     18        0.5280       0.7400        0.5260  0.0410
     19        0.5148       0.7450        0.5264  0.0388
     20        0.5309       0.7400        0.5210  0.0374
Out[10]:
<class 'skorch.net.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

Also, as in sklearn, you may call predict or predict_proba on the fitted model.

Making predictions, classification

In [11]:
y_pred = net.predict(X[:5])
y_pred
Out[11]:
array([0, 0, 0, 0, 0])
In [12]:
y_proba = net.predict_proba(X[:5])
y_proba
Out[12]:
array([[ 0.54967165,  0.45032835],
       [ 0.78423566,  0.21576437],
       [ 0.67652142,  0.32347855],
       [ 0.88522649,  0.1147735 ],
       [ 0.68577135,  0.31422862]], dtype=float32)

Training a regressor

A toy regression task

In [13]:
from sklearn.datasets import make_regression
In [14]:
X_regr, y_regr = make_regression(1000, 20, n_informative=10, random_state=0)
X_regr = X_regr.astype(np.float32)
y_regr = y_regr.astype(np.float32) / 100
y_regr = y_regr.reshape(-1, 1)
In [15]:
X_regr.shape, y_regr.shape, y_regr.min(), y_regr.max()
Out[15]:
((1000, 20), (1000, 1), -6.4901485, 6.1545048)

Note: Regression currently requires the target to be 2-dimensional, hence the need to reshape. This should be fixed with an upcoming version of pytorch.

Definition of the pytorch regression module

Again, define a vanilla neural network with two hidden layers. The main difference is that the output layer only has one unit and does not apply a softmax nonlinearity.

In [16]:
class RegressorModule(nn.Module):
    def __init__(
            self,
            num_units=10,
            nonlin=F.relu,
    ):
        super(RegressorModule, self).__init__()
        self.num_units = num_units
        self.nonlin = nonlin

        self.dense0 = nn.Linear(20, num_units)
        self.dense1 = nn.Linear(num_units, 10)
        self.output = nn.Linear(10, 1)

    def forward(self, X, **kwargs):
        X = self.nonlin(self.dense0(X))
        X = F.relu(self.dense1(X))
        X = self.output(X)
        return X

Defining and training the neural net regressor

Training a regressor is almost the same as training a classifier. Mainly, we use NeuralNetRegressor instead of NeuralNetClassifier (this is the same terminology as in sklearn).

In [17]:
from skorch.net import NeuralNetRegressor
In [18]:
net_regr = NeuralNetRegressor(
    RegressorModule,
    max_epochs=20,
    lr=0.1,
    # device='cuda',  # uncomment this to train with CUDA
)
In [19]:
net_regr.fit(X_regr, y_regr)
  epoch    train_loss    valid_loss     dur
-------  ------------  ------------  ------
      1        4.6059        3.5860  0.0337
      2        3.5021        1.3814  0.0251
      3        1.1019        0.5334  0.0253
      4        0.7071        0.2994  0.0388
      5        0.5654        0.4141  0.0299
      6        0.3179        0.1574  0.0272
      7        0.2476        0.1906  0.0289
      8        0.1302        0.1049  0.0274
      9        0.1373        0.1124  0.0274
     10        0.0728        0.0737  0.0294
     11        0.0839        0.0727  0.0362
     12        0.0435        0.0513  0.0335
     13        0.0508        0.0483  0.0300
     14        0.0279        0.0371  0.0306
     15        0.0322        0.0335  0.0291
     16        0.0193        0.0282  0.0300
     17        0.0224        0.0247  0.0306
     18        0.0148        0.0221  0.0294
     19        0.0167        0.0198  0.0354
     20        0.0122        0.0182  0.0328
Out[19]:
<class 'skorch.net.NeuralNetRegressor'>[initialized](
  module_=RegressorModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=1, bias=True)
  ),
)

Making predictions, regression

You may call predict or predict_proba on the fitted model. For regressors, both methods return the same values.

In [20]:
y_pred = net_regr.predict(X_regr[:5])
y_pred
Out[20]:
array([[ 0.52162164],
       [-1.50998151],
       [-0.90007448],
       [-0.08845913],
       [-0.52214217]], dtype=float32)
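
As a quick sanity check of the claim above, we can verify that both methods agree (a minimal sketch):

y_proba = net_regr.predict_proba(X_regr[:5])
assert np.allclose(y_pred, y_proba)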

Saving and loading a model

Save and load either the whole model by using pickle or just the learned model parameters by calling save_params and load_params.

Saving the whole model

In [21]:
import pickle
In [22]:
file_name = '/tmp/mymodel.pkl'
In [23]:
with open(file_name, 'wb') as f:
    pickle.dump(net, f)
/home/marian/anaconda3/envs/skorch/lib/python3.6/site-packages/torch/serialization.py:193: UserWarning: Couldn't retrieve source code for container of type ClassifierModule. It won't be checked for correctness upon loading.
  "type " + obj.__name__ + ". It won't be checked "
In [24]:
with open(file_name, 'rb') as f:
    new_net = pickle.load(f)

Saving only the model parameters

This only saves and loads the learned module parameters, meaning that hyperparameters such as lr and max_epochs are not saved. Therefore, to load the model, we have to re-initialize the net beforehand.

In [25]:
net.save_params(file_name)  # a file handler also works
In [26]:
# first initialize the model
new_net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=20,
    lr=0.1,
).initialize()
In [27]:
new_net.load_params(file_name)
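
To convince ourselves that the parameters were restored correctly, we can compare the predictions of the original and the re-loaded net (a quick sketch; the outputs are deterministic here because the module is put into evaluation mode for prediction):

assert np.allclose(net.predict_proba(X[:5]), new_net.predict_proba(X[:5]))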

Usage with an sklearn Pipeline

It is possible to put the NeuralNetClassifier inside an sklearn Pipeline, as you would with any sklearn classifier.

In [28]:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
In [29]:
pipe = Pipeline([
    ('scale', StandardScaler()),
    ('net', net),
])
In [30]:
pipe.fit(X, y)
Re-initializing module!
  epoch    train_loss    valid_acc    valid_loss     dur
-------  ------------  -----------  ------------  ------
      1        0.6891       0.5550        0.6853  0.0454
      2        0.6826       0.5600        0.6825  0.0426
      3        0.6873       0.5900        0.6801  0.0389
      4        0.6797       0.6000        0.6776  0.0355
      5        0.6772       0.6150        0.6751  0.0352
      6        0.6748       0.6200        0.6723  0.0452
      7        0.6682       0.6200        0.6691  0.0392
      8        0.6645       0.6200        0.6654  0.0352
      9        0.6623       0.6300        0.6613  0.0351
     10        0.6464       0.6200        0.6555  0.0355
     11        0.6471       0.6300        0.6491  0.0360
     12        0.6449       0.6600        0.6424  0.0377
     13        0.6285       0.6500        0.6341  0.0367
     14        0.6265       0.6500        0.6261  0.0352
     15        0.6252       0.6600        0.6193  0.0376
     16        0.6148       0.6750        0.6102  0.0375
     17        0.6039       0.6850        0.6017  0.0375
     18        0.5979       0.6900        0.5949  0.0382
     19        0.5794       0.7000        0.5849  0.0385
     20        0.5596       0.7050        0.5758  0.0379
Out[30]:
Pipeline(memory=None,
     steps=[('scale', StandardScaler(copy=True, with_mean=True, with_std=True)), ('net', <class 'skorch.net.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
))])
In [31]:
y_proba = pipe.predict_proba(X[:5])
y_proba
Out[31]:
array([[ 0.39650354,  0.60349649],
       [ 0.73950189,  0.26049814],
       [ 0.72104084,  0.27895918],
       [ 0.71111423,  0.2888858 ],
       [ 0.66332668,  0.33667329]], dtype=float32)

To save the whole pipeline, including the pytorch module, use pickle.
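
For example (a short sketch, reusing the pickle approach from above; the file name is arbitrary):

with open('/tmp/mypipeline.pkl', 'wb') as f:
    pickle.dump(pipe, f)

with open('/tmp/mypipeline.pkl', 'rb') as f:
    new_pipe = pickle.load(f)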

Callbacks

Adding a new callback to the model is straightforward. Below we show how to add a callback that computes the area under the ROC curve (AUC).

In [32]:
from skorch.callbacks import EpochScoring

There is a scoring callback in skorch, EpochScoring, which we use for this. We have to specify which score to calculate. We have 3 choices:

  • Passing a string: This should be a valid sklearn metric. For a list of all existing scores, see the sklearn documentation on model evaluation.
  • Passing None: If you implement your own .score method on your neural net, passing scoring=None will tell skorch to use that.
  • Passing a function or callable: If we want to define our own scoring function, we pass a function with the signature func(model, X, y) -> score, which is then used (see the sketch below).

Note that this works exactly the same as scoring in sklearn does.
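
For illustration, a custom scoring callable could look like this (a minimal sketch using sklearn's f1_score; the names f1_scorer and f1_cb are just for illustration):

from sklearn.metrics import f1_score

def f1_scorer(model, X, y):
    # model is the (partially) fitted net; we can use any of its methods
    y_pred = model.predict(X)
    return f1_score(y, y_pred)

f1_cb = EpochScoring(f1_scorer, lower_is_better=False, name='valid_f1')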

For our case here, since sklearn already implements AUC, we just pass the correct string 'roc_auc'. We should also tell the callback that higher scores are better (to get the correct colors printed below -- by default, lower scores are assumed to be better). Furthermore, we may specify a name argument for EpochScoring, and whether to use training data (by setting on_train=True) or validation data (which is the default).

In [33]:
auc = EpochScoring(scoring='roc_auc', lower_is_better=False)
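
For instance, to additionally track the AUC on the training data under a custom name, a sketch could look like this:

auc_train = EpochScoring(
    scoring='roc_auc',
    lower_is_better=False,
    on_train=True,  # score on training rather than validation data
    name='train_auc',
)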

Finally, we pass the scoring callback to the callbacks parameter as a list and then call fit. Notice that we get the printed scores and color highlighting for free.

In [34]:
net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=20,
    lr=0.1,
    callbacks=[auc],
)
In [35]:
net.fit(X, y)
  epoch    roc_auc    train_loss    valid_acc    valid_loss     dur
-------  ---------  ------------  -----------  ------------  ------
      1     0.5911        0.7204       0.5000        0.6948  0.0438
      2     0.6524        0.6925       0.5300        0.6881  0.0484
      3     0.6700        0.6867       0.6000        0.6857  0.0415
      4     0.6854        0.6820       0.6400        0.6832  0.0460
      5     0.6829        0.6801       0.6050        0.6812  0.0404
      6     0.6757        0.6742       0.6100        0.6796  0.0382
      7     0.6808        0.6762       0.6100        0.6776  0.0354
      8     0.6759        0.6576       0.6350        0.6747  0.0344
      9     0.6813        0.6661       0.6350        0.6707  0.0352
     10     0.6903        0.6548       0.6450        0.6655  0.0352
     11     0.6929        0.6500       0.6400        0.6611  0.0370
     12     0.6920        0.6445       0.6500        0.6571  0.0369
     13     0.7095        0.6372       0.6650        0.6509  0.0364
     14     0.7155        0.6288       0.6700        0.6446  0.0404
     15     0.7265        0.6268       0.6700        0.6390  0.0343
     16     0.7398        0.6150       0.6900        0.6308  0.0379
     17     0.7487        0.6221       0.7000        0.6246  0.0412
     18     0.7473        0.6168       0.7250        0.6187  0.0442
     19     0.7588        0.5945       0.7400        0.6100  0.0449
     20     0.7664        0.6000       0.7650        0.6026  0.0458
Out[35]:
<class 'skorch.net.NeuralNetClassifier'>[initialized](
  module_=ClassifierModule(
    (dense0): Linear(in_features=20, out_features=10, bias=True)
    (dropout): Dropout(p=0.5)
    (dense1): Linear(in_features=10, out_features=10, bias=True)
    (output): Linear(in_features=10, out_features=2, bias=True)
  ),
)

For information on how to write custom callbacks, have a look at the Advanced_Usage notebook.

Usage with sklearn GridSearchCV

Special prefixes

The NeuralNet class allows you to directly access parameters of the pytorch module by using the module__ prefix. So, e.g., if you defined the module to have a num_units parameter, you can set it via the module__num_units argument. This is exactly the same logic that allows you to access estimator parameters in sklearn Pipelines and FeatureUnions.

This feature is useful in several ways. For one, it allows you to set those parameters when the net is defined. Furthermore, it allows you to set parameters in an sklearn GridSearchCV, as shown below.

In addition to the parameters prefixed by module__, you may access a couple of other parameters, such as those of the optimizer, by using the optimizer__ prefix (again, see below). All those special prefixes are stored in the prefixes_ attribute:

In [36]:
print(', '.join(net.prefixes_))
module, iterator_train, iterator_valid, optimizer, criterion, callbacks, dataset
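
Since these prefixed parameters behave like regular sklearn parameters, they can also be changed after the net has been defined, using set_params (a quick sketch):

# e.g. change the number of hidden units and the optimizer's momentum
net.set_params(module__num_units=20, optimizer__momentum=0.9)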

Below we show how to perform a grid search over the learning rate (lr), the module's number of hidden units (module__num_units), the module's dropout rate (module__dropout), and whether the SGD optimizer should use Nesterov momentum or not (optimizer__nesterov).

In [37]:
from sklearn.model_selection import GridSearchCV
In [38]:
net = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=20,
    lr=0.1,
    verbose=0,
    optimizer__momentum=0.9,
)
In [39]:
params = {
    'lr': [0.05, 0.1],
    'module__num_units': [10, 20],
    'module__dropout': [0, 0.5],
    'optimizer__nesterov': [False, True],
}
In [40]:
gs = GridSearchCV(net, params, refit=False, cv=3, scoring='accuracy', verbose=2)
In [41]:
gs.fit(X, y)
Fitting 3 folds for each of 16 candidates, totalling 48 fits
[CV] lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=False, total=   0.7s
[CV] lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=False 
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:    0.7s remaining:    0.0s
[CV]  lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False, total=   0.7s
[CV] lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False, total=   0.7s
[CV] lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True, total=   0.7s
[CV] lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True, total=   0.8s
[CV] lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.05, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=True, total=   0.7s
[CV] lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=10, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=False, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[CV] lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True 
[CV]  lr=0.1, module__dropout=0.5, module__num_units=20, optimizer__nesterov=True, total=   0.6s
[Parallel(n_jobs=1)]: Done  48 out of  48 | elapsed:   30.3s finished
Out[41]:
GridSearchCV(cv=3, error_score='raise',
       estimator=<class 'skorch.net.NeuralNetClassifier'>[uninitialized](
  module=<class '__main__.ClassifierModule'>,
),
       fit_params=None, iid=True, n_jobs=1,
       param_grid={'lr': [0.05, 0.1], 'module__num_units': [10, 20], 'module__dropout': [0, 0.5], 'optimizer__nesterov': [False, True]},
       pre_dispatch='2*n_jobs', refit=False, return_train_score='warn',
       scoring='accuracy', verbose=2)
In [42]:
print(gs.best_score_, gs.best_params_)
0.856 {'lr': 0.1, 'module__dropout': 0, 'module__num_units': 20, 'optimizer__nesterov': False}

Of course, we could further nest the NeuralNetClassifier within an sklearn Pipeline, in which case we just prefix the parameters with the name of the net (e.g. net__module__num_units).
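
A minimal sketch of what that could look like (the step name 'net' is arbitrary):

pipe = Pipeline([
    ('scale', StandardScaler()),
    ('net', net),
])

pipe_params = {
    'net__lr': [0.05, 0.1],
    'net__module__num_units': [10, 20],
}

gs = GridSearchCV(pipe, pipe_params, refit=False, cv=3, scoring='accuracy')
gs.fit(X, y)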
