Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.

Transfer Learning Example (VGG16 pre-trained on ImageNet for Cifar-10)

In [1]:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch,torchvision
Sebastian Raschka 

CPython 3.7.3
IPython 7.9.0

torch 1.3.0
torchvision 0.4.1a0+d94043a
In [2]:
import torch
import time
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader

#######################################
### PRE-TRAINED MODELS AVAILABLE HERE
## https://pytorch.org/docs/stable/torchvision/models.html
from torchvision import models
#######################################

if torch.cuda.is_available():
    torch.backends.cudnn.deterministic = True

Loading an Example Dataset

In this example, we are going to work with CIFAR-10, because it's small (smaller) than ImageNet and fast to downliad. However, note that in a "real-world application", images with dimension > 224x224 are recommended when working with modelsthat have been trained on ImageNet images with > 224x224 size. Here, we resize the images as a workaround.

  • Note that due to the average pooling in the final layer, it is also possible to feed in 32x32-pixel images directly. However, I noticed that the performance is rather low (~65% test accuracy after 10 and 100 epochs).

  • Also note that we we normalize the images with the following parameters

    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                          std=[0.229, 0.224, 0.225])

which have been used for training the model originally on ImageNet.

In [3]:
##########################
### SETTINGS
##########################

# Device
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('Device:', DEVICE)

NUM_CLASSES = 10

# Hyperparameters
random_seed = 1
learning_rate = 0.0001
num_epochs = 10
batch_size = 128


##########################
### MNIST DATASET
##########################

custom_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                          std=[0.229, 0.224, 0.225])
])

## Note that this particular normalization scheme is
## necessary since it was used for pre-training
## the network on ImageNet.
## These are the channel-means and standard deviations
## for z-score normalization.


train_dataset = datasets.CIFAR10(root='data', 
                                 train=True, 
                                 transform=custom_transform,
                                 download=True)

test_dataset = datasets.CIFAR10(root='data', 
                                train=False, 
                                transform=custom_transform)


train_loader = DataLoader(dataset=train_dataset, 
                          batch_size=batch_size, 
                          num_workers=8,
                          shuffle=True)

test_loader = DataLoader(dataset=test_dataset, 
                         batch_size=batch_size, 
                         num_workers=8,
                         shuffle=False)

# Checking the dataset
for images, labels in train_loader:  
    print('Image batch dimensions:', images.shape)
    print('Image label dimensions:', labels.shape)
    break
Device: cuda:0
Files already downloaded and verified
Image batch dimensions: torch.Size([128, 3, 224, 224])
Image label dimensions: torch.Size([128])

Loading the Pre-Trained Model

Here, we are going to use VGG16 as an example for transfer learning from torchvision. A list of all pre-trained models is available at

In [4]:
model = models.vgg16(pretrained=True)
model
Out[4]:
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

Freezing the Model

First, we are going to freeze the whole model:

In [5]:
for param in model.parameters():
    param.requires_grad = False

Next, Assume we want to train the penultimate layer (here, model.classifier[3] as we can see from the model structure above, which I am pasting as a reference below:

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
...
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
->    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
In [6]:
model.classifier[3].requires_grad = True

Now, replace the output layer with your own output layer (here, we actually add two more output layers):

In [7]:
model.classifier[6] = nn.Sequential(
                      nn.Linear(4096, 512), 
                      nn.ReLU(), 
                      nn.Dropout(0.5),
                      nn.Linear(512, NUM_CLASSES))

Training (as usual)

In [8]:
model = model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters())
In [9]:
def compute_accuracy(model, data_loader):
    model.eval()
    correct_pred, num_examples = 0, 0
    for i, (features, targets) in enumerate(data_loader):
            
        features = features.to(DEVICE)
        targets = targets.to(DEVICE)

        logits = model(features)
        _, predicted_labels = torch.max(logits, 1)
        num_examples += targets.size(0)
        correct_pred += (predicted_labels == targets).sum()
    return correct_pred.float()/num_examples * 100


def compute_epoch_loss(model, data_loader):
    model.eval()
    curr_loss, num_examples = 0., 0
    with torch.no_grad():
        for features, targets in data_loader:
            features = features.to(DEVICE)
            targets = targets.to(DEVICE)
            logits = model(features)
            loss = F.cross_entropy(logits, targets, reduction='sum')
            num_examples += targets.size(0)
            curr_loss += loss

        curr_loss = curr_loss / num_examples
        return curr_loss
    
    

start_time = time.time()
for epoch in range(num_epochs):
    
    model.train()
    for batch_idx, (features, targets) in enumerate(train_loader):
        
        features = features.to(DEVICE)
        targets = targets.to(DEVICE)
            
        ### FORWARD AND BACK PROP
        logits = model(features)
        cost = F.cross_entropy(logits, targets)
        optimizer.zero_grad()
        
        cost.backward()
        
        ### UPDATE MODEL PARAMETERS
        optimizer.step()
        
        ### LOGGING
        if not batch_idx % 50:
            print ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f' 
                   %(epoch+1, num_epochs, batch_idx, 
                     len(train_loader), cost))

    model.eval()
    with torch.set_grad_enabled(False): # save memory during inference
        print('Epoch: %03d/%03d | Train: %.3f%% | Loss: %.3f' % (
              epoch+1, num_epochs, 
              compute_accuracy(model, train_loader),
              compute_epoch_loss(model, train_loader)))


    print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
    
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
Epoch: 001/010 | Batch 0000/0391 | Cost: 2.3803
Epoch: 001/010 | Batch 0050/0391 | Cost: 0.8682
Epoch: 001/010 | Batch 0100/0391 | Cost: 0.7765
Epoch: 001/010 | Batch 0150/0391 | Cost: 0.5888
Epoch: 001/010 | Batch 0200/0391 | Cost: 0.6363
Epoch: 001/010 | Batch 0250/0391 | Cost: 0.5042
Epoch: 001/010 | Batch 0300/0391 | Cost: 0.7212
Epoch: 001/010 | Batch 0350/0391 | Cost: 0.5531
Epoch: 001/010 | Train: 83.768% | Loss: 0.470
Time elapsed: 10.51 min
Epoch: 002/010 | Batch 0000/0391 | Cost: 0.4309
Epoch: 002/010 | Batch 0050/0391 | Cost: 0.5423
Epoch: 002/010 | Batch 0100/0391 | Cost: 0.6057
Epoch: 002/010 | Batch 0150/0391 | Cost: 0.7861
Epoch: 002/010 | Batch 0200/0391 | Cost: 0.5859
Epoch: 002/010 | Batch 0250/0391 | Cost: 0.6265
Epoch: 002/010 | Batch 0300/0391 | Cost: 0.5713
Epoch: 002/010 | Batch 0350/0391 | Cost: 0.4664
Epoch: 002/010 | Train: 84.770% | Loss: 0.435
Time elapsed: 21.05 min
Epoch: 003/010 | Batch 0000/0391 | Cost: 0.5218
Epoch: 003/010 | Batch 0050/0391 | Cost: 0.4995
Epoch: 003/010 | Batch 0100/0391 | Cost: 0.5690
Epoch: 003/010 | Batch 0150/0391 | Cost: 0.6084
Epoch: 003/010 | Batch 0200/0391 | Cost: 0.6712
Epoch: 003/010 | Batch 0250/0391 | Cost: 0.7230
Epoch: 003/010 | Batch 0300/0391 | Cost: 0.6850
Epoch: 003/010 | Batch 0350/0391 | Cost: 0.5648
Epoch: 003/010 | Train: 85.626% | Loss: 0.418
Time elapsed: 31.59 min
Epoch: 004/010 | Batch 0000/0391 | Cost: 0.5770
Epoch: 004/010 | Batch 0050/0391 | Cost: 0.5119
Epoch: 004/010 | Batch 0100/0391 | Cost: 0.5196
Epoch: 004/010 | Batch 0150/0391 | Cost: 0.6272
Epoch: 004/010 | Batch 0200/0391 | Cost: 0.5175
Epoch: 004/010 | Batch 0250/0391 | Cost: 0.5380
Epoch: 004/010 | Batch 0300/0391 | Cost: 0.5041
Epoch: 004/010 | Batch 0350/0391 | Cost: 0.6165
Epoch: 004/010 | Train: 87.010% | Loss: 0.386
Time elapsed: 42.13 min
Epoch: 005/010 | Batch 0000/0391 | Cost: 0.6082
Epoch: 005/010 | Batch 0050/0391 | Cost: 0.6508
Epoch: 005/010 | Batch 0100/0391 | Cost: 0.5656
Epoch: 005/010 | Batch 0150/0391 | Cost: 0.5483
Epoch: 005/010 | Batch 0200/0391 | Cost: 0.5408
Epoch: 005/010 | Batch 0250/0391 | Cost: 0.7091
Epoch: 005/010 | Batch 0300/0391 | Cost: 0.5846
Epoch: 005/010 | Batch 0350/0391 | Cost: 0.4931
Epoch: 005/010 | Train: 87.088% | Loss: 0.372
Time elapsed: 52.66 min
Epoch: 006/010 | Batch 0000/0391 | Cost: 0.5629
Epoch: 006/010 | Batch 0050/0391 | Cost: 0.4118
Epoch: 006/010 | Batch 0100/0391 | Cost: 0.4184
Epoch: 006/010 | Batch 0150/0391 | Cost: 0.5407
Epoch: 006/010 | Batch 0200/0391 | Cost: 0.5839
Epoch: 006/010 | Batch 0250/0391 | Cost: 0.5171
Epoch: 006/010 | Batch 0300/0391 | Cost: 0.4679
Epoch: 006/010 | Batch 0350/0391 | Cost: 0.5208
Epoch: 006/010 | Train: 87.710% | Loss: 0.368
Time elapsed: 63.20 min
Epoch: 007/010 | Batch 0000/0391 | Cost: 0.4737
Epoch: 007/010 | Batch 0050/0391 | Cost: 0.7670
Epoch: 007/010 | Batch 0100/0391 | Cost: 0.4890
Epoch: 007/010 | Batch 0150/0391 | Cost: 0.5645
Epoch: 007/010 | Batch 0200/0391 | Cost: 0.6673
Epoch: 007/010 | Batch 0250/0391 | Cost: 0.5325
Epoch: 007/010 | Batch 0300/0391 | Cost: 0.6377
Epoch: 007/010 | Batch 0350/0391 | Cost: 0.5301
Epoch: 007/010 | Train: 87.692% | Loss: 0.354
Time elapsed: 73.73 min
Epoch: 008/010 | Batch 0000/0391 | Cost: 0.7276
Epoch: 008/010 | Batch 0050/0391 | Cost: 0.5233
Epoch: 008/010 | Batch 0100/0391 | Cost: 0.7512
Epoch: 008/010 | Batch 0150/0391 | Cost: 0.5838
Epoch: 008/010 | Batch 0200/0391 | Cost: 0.4164
Epoch: 008/010 | Batch 0250/0391 | Cost: 0.6005
Epoch: 008/010 | Batch 0300/0391 | Cost: 0.5340
Epoch: 008/010 | Batch 0350/0391 | Cost: 0.4254
Epoch: 008/010 | Train: 87.604% | Loss: 0.359
Time elapsed: 84.28 min
Epoch: 009/010 | Batch 0000/0391 | Cost: 0.7138
Epoch: 009/010 | Batch 0050/0391 | Cost: 0.7279
Epoch: 009/010 | Batch 0100/0391 | Cost: 0.3387
Epoch: 009/010 | Batch 0150/0391 | Cost: 0.4552
Epoch: 009/010 | Batch 0200/0391 | Cost: 0.3744
Epoch: 009/010 | Batch 0250/0391 | Cost: 0.6198
Epoch: 009/010 | Batch 0300/0391 | Cost: 0.5379
Epoch: 009/010 | Batch 0350/0391 | Cost: 0.5648
Epoch: 009/010 | Train: 88.338% | Loss: 0.341
Time elapsed: 94.82 min
Epoch: 010/010 | Batch 0000/0391 | Cost: 0.5407
Epoch: 010/010 | Batch 0050/0391 | Cost: 0.4377
Epoch: 010/010 | Batch 0100/0391 | Cost: 0.4832
Epoch: 010/010 | Batch 0150/0391 | Cost: 0.4002
Epoch: 010/010 | Batch 0200/0391 | Cost: 0.4990
Epoch: 010/010 | Batch 0250/0391 | Cost: 0.3890
Epoch: 010/010 | Batch 0300/0391 | Cost: 0.4749
Epoch: 010/010 | Batch 0350/0391 | Cost: 0.7142
Epoch: 010/010 | Train: 88.696% | Loss: 0.329
Time elapsed: 105.35 min
Total Training Time: 105.35 min
In [10]:
with torch.set_grad_enabled(False): # save memory during inference
    print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
Test accuracy: 84.25%
In [11]:
%matplotlib inline
import matplotlib.pyplot as plt
In [12]:
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

for batch_idx, (features, targets) in enumerate(test_loader):

    features = features
    targets = targets
    break

logits = model(features.to(DEVICE))
_, predicted_labels = torch.max(logits, 1)
    
In [13]:
def unnormalize(tensor, mean, std):
    for t, m, s in zip(tensor, mean, std):
        t.mul_(s).add_(m)
    return tensor

n_images = 10

fig, axes = plt.subplots(nrows=1, ncols=n_images, 
                         sharex=True, sharey=True, figsize=(20, 2.5))
orig_images = features[:n_images]

for i in range(n_images):
    curr_img = orig_images[i].detach().to(torch.device('cpu'))
    curr_img = unnormalize(curr_img,
                           torch.tensor([0.485, 0.456, 0.406]),
                           torch.tensor([0.229, 0.224, 0.225])) 
    curr_img = curr_img.permute((1, 2, 0))
    axes[i].imshow(curr_img)
    axes[i].set_title(classes[predicted_labels[i]])