Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.

In [1]:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
Sebastian Raschka 

CPython 3.6.8
IPython 7.2.0

torch 1.0.1.post2
  • Runs on CPU (not recommended here) or GPU (if available)

Model Zoo -- Convolutional Neural Network (VGG-19 Architecture)

Implementation of the VGG-19 architecture on CIFAR-10.

Reference for VGG-19:

  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.

The following overview (based on Table 1 in Simonyan & Zisserman, referenced above, where VGG-19 corresponds to configuration E) summarizes the VGG-19 architecture:
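
Since the table image from the paper is not reproduced here, the layer sequence can be summarized with a minimal plain-Python sketch of configuration E; the list notation is borrowed from common VGG implementations, where an integer denotes the number of output channels of a 3x3 convolution and 'M' denotes a 2x2 max-pooling layer:

# VGG-19 layer sequence ("configuration E" in Simonyan & Zisserman, 2014):
# 16 convolutional layers followed by 3 fully connected layers
vgg19_layers = [64, 64, 'M',
                128, 128, 'M',
                256, 256, 256, 256, 'M',
                512, 512, 512, 512, 'M',
                512, 512, 512, 512, 'M']
# classifier head: FC-4096 -> FC-4096 -> FC-1000 for ImageNet;
# in this notebook the final layer has 10 outputs for CIFAR-10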

Imports

In [2]:
import numpy as np
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader

Settings and Dataset

In [3]:
##########################
### SETTINGS
##########################

# Device
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('Device:', DEVICE)

# Hyperparameters
random_seed = 1
learning_rate = 0.001
num_epochs = 20
batch_size = 128

# Architecture
num_features = 3072  # 3*32*32 (kept for the constructor signature; not used by the conv net below)
num_classes = 10


##########################
### CIFAR-10 DATASET
##########################

# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.CIFAR10(root='data', 
                                 train=True, 
                                 transform=transforms.ToTensor(),
                                 download=True)

test_dataset = datasets.CIFAR10(root='data', 
                                train=False, 
                                transform=transforms.ToTensor())


train_loader = DataLoader(dataset=train_dataset, 
                          batch_size=batch_size, 
                          shuffle=True)

test_loader = DataLoader(dataset=test_dataset, 
                         batch_size=batch_size, 
                         shuffle=False)

# Checking the dataset
for images, labels in train_loader:  
    print('Image batch dimensions:', images.shape)
    print('Image label dimensions:', labels.shape)
    break
Device: cuda:0
Files already downloaded and verified
Image batch dimensions: torch.Size([128, 3, 32, 32])
Image label dimensions: torch.Size([128])
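
As a quick sanity check (a hypothetical extra cell, not part of the original notebook), the pixel range of one batch can be inspected to confirm that transforms.ToTensor() indeed scales the images to the 0-1 range:

# Hypothetical check: ToTensor() maps uint8 pixels in [0, 255] to floats in [0, 1]
images, labels = next(iter(train_loader))
print('Pixel range: %.3f to %.3f' % (images.min().item(), images.max().item()))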

Model

In [4]:
##########################
### MODEL
##########################


class VGG19(torch.nn.Module):

    def __init__(self, num_features, num_classes):
        super(VGG19, self).__init__()
        
        # calculate same padding:
        # (w - k + 2*p)/s + 1 = o
        # => p = (s(o-1) - w + k)/2
        
        self.block_1 = nn.Sequential(
                nn.Conv2d(in_channels=3,
                          out_channels=64,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          # (1(32-1)- 32 + 3)/2 = 1
                          padding=1), 
                nn.ReLU(),
                nn.Conv2d(in_channels=64,
                          out_channels=64,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(2, 2),
                             stride=(2, 2))
        )
        
        self.block_2 = nn.Sequential(
                nn.Conv2d(in_channels=64,
                          out_channels=128,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=128,
                          out_channels=128,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(2, 2),
                             stride=(2, 2))
        )
        
        self.block_3 = nn.Sequential(        
                nn.Conv2d(in_channels=128,
                          out_channels=256,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=256,
                          out_channels=256,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),        
                nn.Conv2d(in_channels=256,
                          out_channels=256,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=256,
                          out_channels=256,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(2, 2),
                             stride=(2, 2))
        )
        
          
        self.block_4 = nn.Sequential(   
                nn.Conv2d(in_channels=256,
                          out_channels=512,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),        
                nn.Conv2d(in_channels=512,
                          out_channels=512,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),        
                nn.Conv2d(in_channels=512,
                          out_channels=512,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=512,
                          out_channels=512,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),   
                nn.MaxPool2d(kernel_size=(2, 2),
                             stride=(2, 2))
        )
        
        self.block_5 = nn.Sequential(
                nn.Conv2d(in_channels=512,
                          out_channels=512,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),            
                nn.Conv2d(in_channels=512,
                          out_channels=512,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),            
                nn.Conv2d(in_channels=512,
                          out_channels=512,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),
                nn.Conv2d(in_channels=512,
                          out_channels=512,
                          kernel_size=(3, 3),
                          stride=(1, 1),
                          padding=1),
                nn.ReLU(),   
                nn.MaxPool2d(kernel_size=(2, 2),
                             stride=(2, 2))             
        )
        
        # after five 2x2 max-pooling steps the 32x32 input is reduced to 1x1,
        # so the flattened feature vector has 512*1*1 = 512 entries
        self.classifier = nn.Sequential(
                nn.Linear(512, 4096),
                nn.ReLU(True),
                nn.Linear(4096, 4096),
                nn.ReLU(True),
                nn.Linear(4096, num_classes)
        )
            
        
        for m in self.modules():
            if isinstance(m, torch.nn.Conv2d):
                #n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                #m.weight.data.normal_(0, np.sqrt(2. / n))
                m.weight.detach().normal_(0, 0.05)
                if m.bias is not None:
                    m.bias.detach().zero_()
            elif isinstance(m, torch.nn.Linear):
                m.weight.detach().normal_(0, 0.05)
                m.bias.detach().zero_()
        
        
    def forward(self, x):

        x = self.block_1(x)
        x = self.block_2(x)
        x = self.block_3(x)
        x = self.block_4(x)
        x = self.block_5(x)
        logits = self.classifier(x.view(-1, 512))
        probas = F.softmax(logits, dim=1)

        return logits, probas

    
torch.manual_seed(random_seed)
model = VGG19(num_features=num_features,
              num_classes=num_classes)

model = model.to(DEVICE)

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  
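
A quick shape check (hypothetical, not in the original notebook) with a random CIFAR-10-sized batch helps confirm that the five pooling stages reduce the 32x32 input to 1x1 and that the classifier returns one logit per class:

# Hypothetical shape check with a random batch of 4 images (3 x 32 x 32)
dummy = torch.randn(4, 3, 32, 32).to(DEVICE)
with torch.no_grad():
    logits, probas = model(dummy)
print(logits.shape)  # expected: torch.Size([4, 10])
print(probas.shape)  # expected: torch.Size([4, 10])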

Training

In [5]:
def compute_accuracy(model, data_loader):
    model.eval()
    correct_pred, num_examples = 0, 0
    for i, (features, targets) in enumerate(data_loader):
            
        features = features.to(DEVICE)
        targets = targets.to(DEVICE)

        logits, probas = model(features)
        _, predicted_labels = torch.max(probas, 1)
        num_examples += targets.size(0)
        correct_pred += (predicted_labels == targets).sum()
    return correct_pred.float()/num_examples * 100


def compute_epoch_loss(model, data_loader):
    model.eval()
    curr_loss, num_examples = 0., 0
    with torch.no_grad():
        for features, targets in data_loader:
            features = features.to(DEVICE)
            targets = targets.to(DEVICE)
            logits, probas = model(features)
            loss = F.cross_entropy(logits, targets, reduction='sum')
            num_examples += targets.size(0)
            curr_loss += loss

        curr_loss = curr_loss / num_examples
        return curr_loss
    
    

start_time = time.time()
for epoch in range(num_epochs):
    
    model.train()
    for batch_idx, (features, targets) in enumerate(train_loader):
        
        features = features.to(DEVICE)
        targets = targets.to(DEVICE)
            
        ### FORWARD AND BACK PROP
        logits, probas = model(features)
        cost = F.cross_entropy(logits, targets)
        optimizer.zero_grad()
        
        cost.backward()
        
        ### UPDATE MODEL PARAMETERS
        optimizer.step()
        
        ### LOGGING
        if not batch_idx % 50:
            print('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f' 
                  % (epoch+1, num_epochs, batch_idx, 
                     len(train_loader), cost))

    model.eval()
    with torch.set_grad_enabled(False): # save memory during inference
        print('Epoch: %03d/%03d | Train: %.3f%% | Loss: %.3f' % (
              epoch+1, num_epochs, 
              compute_accuracy(model, train_loader),
              compute_epoch_loss(model, train_loader)))


    print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
    
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
Epoch: 001/020 | Batch 0000/0391 | Cost: 1061.4152
Epoch: 001/020 | Batch 0050/0391 | Cost: 2.3018
Epoch: 001/020 | Batch 0100/0391 | Cost: 2.0600
Epoch: 001/020 | Batch 0150/0391 | Cost: 1.9973
Epoch: 001/020 | Batch 0200/0391 | Cost: 1.8176
Epoch: 001/020 | Batch 0250/0391 | Cost: 1.8368
Epoch: 001/020 | Batch 0300/0391 | Cost: 1.7213
Epoch: 001/020 | Batch 0350/0391 | Cost: 1.7154
Epoch: 001/020 | Train: 35.478% | Loss: 1.685
Time elapsed: 1.02 min
Epoch: 002/020 | Batch 0000/0391 | Cost: 1.7648
Epoch: 002/020 | Batch 0050/0391 | Cost: 1.7050
Epoch: 002/020 | Batch 0100/0391 | Cost: 1.5464
Epoch: 002/020 | Batch 0150/0391 | Cost: 1.6054
Epoch: 002/020 | Batch 0200/0391 | Cost: 1.4430
Epoch: 002/020 | Batch 0250/0391 | Cost: 1.4253
Epoch: 002/020 | Batch 0300/0391 | Cost: 1.5701
Epoch: 002/020 | Batch 0350/0391 | Cost: 1.4163
Epoch: 002/020 | Train: 44.042% | Loss: 1.531
Time elapsed: 2.07 min
Epoch: 003/020 | Batch 0000/0391 | Cost: 1.5172
Epoch: 003/020 | Batch 0050/0391 | Cost: 1.1992
Epoch: 003/020 | Batch 0100/0391 | Cost: 1.2846
Epoch: 003/020 | Batch 0150/0391 | Cost: 1.4088
Epoch: 003/020 | Batch 0200/0391 | Cost: 1.4853
Epoch: 003/020 | Batch 0250/0391 | Cost: 1.3923
Epoch: 003/020 | Batch 0300/0391 | Cost: 1.3268
Epoch: 003/020 | Batch 0350/0391 | Cost: 1.3162
Epoch: 003/020 | Train: 55.596% | Loss: 1.223
Time elapsed: 3.10 min
Epoch: 004/020 | Batch 0000/0391 | Cost: 1.2210
Epoch: 004/020 | Batch 0050/0391 | Cost: 1.2594
Epoch: 004/020 | Batch 0100/0391 | Cost: 1.2881
Epoch: 004/020 | Batch 0150/0391 | Cost: 1.0182
Epoch: 004/020 | Batch 0200/0391 | Cost: 1.1256
Epoch: 004/020 | Batch 0250/0391 | Cost: 1.1048
Epoch: 004/020 | Batch 0300/0391 | Cost: 1.1812
Epoch: 004/020 | Batch 0350/0391 | Cost: 1.1685
Epoch: 004/020 | Train: 57.594% | Loss: 1.178
Time elapsed: 4.13 min
Epoch: 005/020 | Batch 0000/0391 | Cost: 1.1298
Epoch: 005/020 | Batch 0050/0391 | Cost: 0.9705
Epoch: 005/020 | Batch 0100/0391 | Cost: 0.9255
Epoch: 005/020 | Batch 0150/0391 | Cost: 1.3610
Epoch: 005/020 | Batch 0200/0391 | Cost: 0.9720
Epoch: 005/020 | Batch 0250/0391 | Cost: 1.0088
Epoch: 005/020 | Batch 0300/0391 | Cost: 0.9998
Epoch: 005/020 | Batch 0350/0391 | Cost: 1.1961
Epoch: 005/020 | Train: 63.570% | Loss: 1.003
Time elapsed: 5.17 min
Epoch: 006/020 | Batch 0000/0391 | Cost: 0.8837
Epoch: 006/020 | Batch 0050/0391 | Cost: 0.9184
Epoch: 006/020 | Batch 0100/0391 | Cost: 0.8568
Epoch: 006/020 | Batch 0150/0391 | Cost: 1.0788
Epoch: 006/020 | Batch 0200/0391 | Cost: 1.0365
Epoch: 006/020 | Batch 0250/0391 | Cost: 0.8714
Epoch: 006/020 | Batch 0300/0391 | Cost: 1.0370
Epoch: 006/020 | Batch 0350/0391 | Cost: 1.0536
Epoch: 006/020 | Train: 68.390% | Loss: 0.880
Time elapsed: 6.20 min
Epoch: 007/020 | Batch 0000/0391 | Cost: 1.0297
Epoch: 007/020 | Batch 0050/0391 | Cost: 0.8801
Epoch: 007/020 | Batch 0100/0391 | Cost: 0.9652
Epoch: 007/020 | Batch 0150/0391 | Cost: 1.1417
Epoch: 007/020 | Batch 0200/0391 | Cost: 0.8851
Epoch: 007/020 | Batch 0250/0391 | Cost: 0.9499
Epoch: 007/020 | Batch 0300/0391 | Cost: 0.9416
Epoch: 007/020 | Batch 0350/0391 | Cost: 0.9220
Epoch: 007/020 | Train: 68.740% | Loss: 0.872
Time elapsed: 7.24 min
Epoch: 008/020 | Batch 0000/0391 | Cost: 1.0054
Epoch: 008/020 | Batch 0050/0391 | Cost: 0.8184
Epoch: 008/020 | Batch 0100/0391 | Cost: 0.8955
Epoch: 008/020 | Batch 0150/0391 | Cost: 0.9319
Epoch: 008/020 | Batch 0200/0391 | Cost: 1.0566
Epoch: 008/020 | Batch 0250/0391 | Cost: 1.0591
Epoch: 008/020 | Batch 0300/0391 | Cost: 0.7914
Epoch: 008/020 | Batch 0350/0391 | Cost: 0.9090
Epoch: 008/020 | Train: 72.846% | Loss: 0.770
Time elapsed: 8.27 min
Epoch: 009/020 | Batch 0000/0391 | Cost: 0.6672
Epoch: 009/020 | Batch 0050/0391 | Cost: 0.7192
Epoch: 009/020 | Batch 0100/0391 | Cost: 0.8586
Epoch: 009/020 | Batch 0150/0391 | Cost: 0.7310
Epoch: 009/020 | Batch 0200/0391 | Cost: 0.8406
Epoch: 009/020 | Batch 0250/0391 | Cost: 0.7620
Epoch: 009/020 | Batch 0300/0391 | Cost: 0.6692
Epoch: 009/020 | Batch 0350/0391 | Cost: 0.6407
Epoch: 009/020 | Train: 73.702% | Loss: 0.748
Time elapsed: 9.30 min
Epoch: 010/020 | Batch 0000/0391 | Cost: 0.6539
Epoch: 010/020 | Batch 0050/0391 | Cost: 1.0382
Epoch: 010/020 | Batch 0100/0391 | Cost: 0.5921
Epoch: 010/020 | Batch 0150/0391 | Cost: 0.4933
Epoch: 010/020 | Batch 0200/0391 | Cost: 0.7485
Epoch: 010/020 | Batch 0250/0391 | Cost: 0.6779
Epoch: 010/020 | Batch 0300/0391 | Cost: 0.6787
Epoch: 010/020 | Batch 0350/0391 | Cost: 0.6977
Epoch: 010/020 | Train: 75.708% | Loss: 0.703
Time elapsed: 10.34 min
Epoch: 011/020 | Batch 0000/0391 | Cost: 0.6866
Epoch: 011/020 | Batch 0050/0391 | Cost: 0.7203
Epoch: 011/020 | Batch 0100/0391 | Cost: 0.5730
Epoch: 011/020 | Batch 0150/0391 | Cost: 0.5762
Epoch: 011/020 | Batch 0200/0391 | Cost: 0.6571
Epoch: 011/020 | Batch 0250/0391 | Cost: 0.7582
Epoch: 011/020 | Batch 0300/0391 | Cost: 0.7366
Epoch: 011/020 | Batch 0350/0391 | Cost: 0.6810
Epoch: 011/020 | Train: 79.044% | Loss: 0.606
Time elapsed: 11.37 min
Epoch: 012/020 | Batch 0000/0391 | Cost: 0.5665
Epoch: 012/020 | Batch 0050/0391 | Cost: 0.7081
Epoch: 012/020 | Batch 0100/0391 | Cost: 0.6823
Epoch: 012/020 | Batch 0150/0391 | Cost: 0.8297
Epoch: 012/020 | Batch 0200/0391 | Cost: 0.6470
Epoch: 012/020 | Batch 0250/0391 | Cost: 0.7293
Epoch: 012/020 | Batch 0300/0391 | Cost: 0.9127
Epoch: 012/020 | Batch 0350/0391 | Cost: 0.8419
Epoch: 012/020 | Train: 79.474% | Loss: 0.585
Time elapsed: 12.40 min
Epoch: 013/020 | Batch 0000/0391 | Cost: 0.4087
Epoch: 013/020 | Batch 0050/0391 | Cost: 0.4224
Epoch: 013/020 | Batch 0100/0391 | Cost: 0.4336
Epoch: 013/020 | Batch 0150/0391 | Cost: 0.6586
Epoch: 013/020 | Batch 0200/0391 | Cost: 0.7107
Epoch: 013/020 | Batch 0250/0391 | Cost: 0.7359
Epoch: 013/020 | Batch 0300/0391 | Cost: 0.4860
Epoch: 013/020 | Batch 0350/0391 | Cost: 0.7271
Epoch: 013/020 | Train: 80.746% | Loss: 0.549
Time elapsed: 13.44 min
Epoch: 014/020 | Batch 0000/0391 | Cost: 0.5500
Epoch: 014/020 | Batch 0050/0391 | Cost: 0.5108
Epoch: 014/020 | Batch 0100/0391 | Cost: 0.5186
Epoch: 014/020 | Batch 0150/0391 | Cost: 0.4737
Epoch: 014/020 | Batch 0200/0391 | Cost: 0.7015
Epoch: 014/020 | Batch 0250/0391 | Cost: 0.6069
Epoch: 014/020 | Batch 0300/0391 | Cost: 0.7080
Epoch: 014/020 | Batch 0350/0391 | Cost: 0.6460
Epoch: 014/020 | Train: 81.596% | Loss: 0.553
Time elapsed: 14.47 min
Epoch: 015/020 | Batch 0000/0391 | Cost: 0.5398
Epoch: 015/020 | Batch 0050/0391 | Cost: 0.5269
Epoch: 015/020 | Batch 0100/0391 | Cost: 0.5048
Epoch: 015/020 | Batch 0150/0391 | Cost: 0.5873
Epoch: 015/020 | Batch 0200/0391 | Cost: 0.5320
Epoch: 015/020 | Batch 0250/0391 | Cost: 0.4743
Epoch: 015/020 | Batch 0300/0391 | Cost: 0.6124
Epoch: 015/020 | Batch 0350/0391 | Cost: 0.7204
Epoch: 015/020 | Train: 85.276% | Loss: 0.439
Time elapsed: 15.51 min
Epoch: 016/020 | Batch 0000/0391 | Cost: 0.4387
Epoch: 016/020 | Batch 0050/0391 | Cost: 0.3777
Epoch: 016/020 | Batch 0100/0391 | Cost: 0.3430
Epoch: 016/020 | Batch 0150/0391 | Cost: 0.5901
Epoch: 016/020 | Batch 0200/0391 | Cost: 0.6303
Epoch: 016/020 | Batch 0250/0391 | Cost: 0.4983
Epoch: 016/020 | Batch 0300/0391 | Cost: 0.6507
Epoch: 016/020 | Batch 0350/0391 | Cost: 0.4663
Epoch: 016/020 | Train: 86.440% | Loss: 0.406
Time elapsed: 16.55 min
Epoch: 017/020 | Batch 0000/0391 | Cost: 0.4675
Epoch: 017/020 | Batch 0050/0391 | Cost: 0.6440
Epoch: 017/020 | Batch 0100/0391 | Cost: 0.3536
Epoch: 017/020 | Batch 0150/0391 | Cost: 0.5421
Epoch: 017/020 | Batch 0200/0391 | Cost: 0.4504
Epoch: 017/020 | Batch 0250/0391 | Cost: 0.4169
Epoch: 017/020 | Batch 0300/0391 | Cost: 0.4617
Epoch: 017/020 | Batch 0350/0391 | Cost: 0.4092
Epoch: 017/020 | Train: 84.636% | Loss: 0.459
Time elapsed: 17.59 min
Epoch: 018/020 | Batch 0000/0391 | Cost: 0.4267
Epoch: 018/020 | Batch 0050/0391 | Cost: 0.6478
Epoch: 018/020 | Batch 0100/0391 | Cost: 0.5806
Epoch: 018/020 | Batch 0150/0391 | Cost: 0.5453
Epoch: 018/020 | Batch 0200/0391 | Cost: 0.4984
Epoch: 018/020 | Batch 0250/0391 | Cost: 0.2517
Epoch: 018/020 | Batch 0300/0391 | Cost: 0.5219
Epoch: 018/020 | Batch 0350/0391 | Cost: 0.5217
Epoch: 018/020 | Train: 86.094% | Loss: 0.413
Time elapsed: 18.63 min
Epoch: 019/020 | Batch 0000/0391 | Cost: 0.3849
Epoch: 019/020 | Batch 0050/0391 | Cost: 0.2890
Epoch: 019/020 | Batch 0100/0391 | Cost: 0.5058
Epoch: 019/020 | Batch 0150/0391 | Cost: 0.5718
Epoch: 019/020 | Batch 0200/0391 | Cost: 0.4053
Epoch: 019/020 | Batch 0250/0391 | Cost: 0.5241
Epoch: 019/020 | Batch 0300/0391 | Cost: 0.7110
Epoch: 019/020 | Batch 0350/0391 | Cost: 0.4572
Epoch: 019/020 | Train: 87.586% | Loss: 0.365
Time elapsed: 19.67 min
Epoch: 020/020 | Batch 0000/0391 | Cost: 0.3576
Epoch: 020/020 | Batch 0050/0391 | Cost: 0.3466
Epoch: 020/020 | Batch 0100/0391 | Cost: 0.3427
Epoch: 020/020 | Batch 0150/0391 | Cost: 0.3117
Epoch: 020/020 | Batch 0200/0391 | Cost: 0.4912
Epoch: 020/020 | Batch 0250/0391 | Cost: 0.4481
Epoch: 020/020 | Batch 0300/0391 | Cost: 0.6303
Epoch: 020/020 | Batch 0350/0391 | Cost: 0.4274
Epoch: 020/020 | Train: 88.024% | Loss: 0.361
Time elapsed: 20.71 min
Total Training Time: 20.71 min
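
To reuse the trained weights without rerunning the 20-minute training loop, the model parameters can be saved and restored. A minimal sketch, assuming an arbitrary file name vgg19_cifar10.pt (not part of the original notebook):

# Hypothetical checkpointing sketch; the file name is arbitrary
torch.save(model.state_dict(), 'vgg19_cifar10.pt')

# restoring later:
model = VGG19(num_features=num_features, num_classes=num_classes)
model.load_state_dict(torch.load('vgg19_cifar10.pt', map_location=DEVICE))
model = model.to(DEVICE)
model.eval()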

Evaluation

In [6]:
with torch.set_grad_enabled(False): # save memory during inference
    print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
Test accuracy: 74.56%
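
For a qualitative check (a hypothetical example, not in the original notebook), a single test image can be classified; the class names below follow the standard CIFAR-10 label ordering:

# Hypothetical single-image prediction
classes = ('airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')

image, label = test_dataset[0]
with torch.no_grad():
    logits, probas = model(image.unsqueeze(0).to(DEVICE))
pred = torch.argmax(probas, dim=1).item()
print('Predicted: %s | True: %s' % (classes[pred], classes[label]))
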
In [7]:
%watermark -iv
numpy       1.15.4
torch       1.0.1.post2