Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.

Model Zoo -- RNN with GRU

Demo of a simple RNN for sentiment classification (here: a binary classification problem with two labels, positive and negative) using GRU (Gated Recurrent Unit) cells.
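
For reference, a minimal standalone sketch of how an nn.GRU layer maps an embedded sequence batch to per-time-step outputs and a final hidden state; the tensor sizes below are made up purely for illustration and are unrelated to the settings used later in this notebook:

import torch
import torch.nn as nn

# hypothetical toy dimensions, for illustration only
seq_len, batch_size, embedding_dim, hidden_dim = 7, 3, 32, 64

gru = nn.GRU(embedding_dim, hidden_dim)  # single-layer, unidirectional GRU
x = torch.randn(seq_len, batch_size, embedding_dim)  # [seq len, batch size, embedding size]

output, hidden = gru(x)
print(output.size())  # torch.Size([7, 3, 64]) -> one hidden state per time step
print(hidden.size())  # torch.Size([1, 3, 64]) -> final hidden state per sequence

The model defined further below uses exactly this output convention: only the final hidden state is fed into the classification layer.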

In [1]:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch

import torch
import torch.nn.functional as F
from torchtext import data
from torchtext import datasets
import time
import random

torch.backends.cudnn.deterministic = True
Sebastian Raschka 

CPython 3.7.1
IPython 7.4.0

torch 1.0.1.post2

General Settings

In [2]:
RANDOM_SEED = 123
torch.manual_seed(RANDOM_SEED)

VOCABULARY_SIZE = 20000
LEARNING_RATE = 1e-4
BATCH_SIZE = 128
NUM_EPOCHS = 15
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

EMBEDDING_DIM = 128
HIDDEN_DIM = 256
OUTPUT_DIM = 1

Dataset

Load the IMDB Movie Review dataset:

In [3]:
TEXT = data.Field(tokenize='spacy',
                  include_lengths=True) # necessary for pack_padded_sequence
LABEL = data.LabelField(dtype=torch.float)
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state=random.seed(RANDOM_SEED),
                                          split_ratio=0.8)

print(f'Num Train: {len(train_data)}')
print(f'Num Valid: {len(valid_data)}')
print(f'Num Test: {len(test_data)}')
downloading aclImdb_v1.tar.gz
aclImdb_v1.tar.gz: 100%|██████████| 84.1M/84.1M [00:08<00:00, 9.50MB/s]
Num Train: 20000
Num Valid: 5000
Num Test: 25000

Build the vocabulary based on the top "VOCABULARY_SIZE" words:

In [4]:
TEXT.build_vocab(train_data, max_size=VOCABULARY_SIZE)
LABEL.build_vocab(train_data)

print(f'Vocabulary size: {len(TEXT.vocab)}')
print(f'Number of classes: {len(LABEL.vocab)}')
Vocabulary size: 20002
Number of classes: 2

The TEXT.vocab dictionary will contain the word counts and indices. The reason why the number of words is VOCABULARY_SIZE + 2 is that it contains two special tokens for padding and unknown words: <unk> and <pad>.
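
To double-check this, one could inspect the vocabulary directly; a minimal sketch (assuming the cells above have been executed -- the example output is only indicative):

# the two special tokens occupy the first indices of the vocabulary
print(TEXT.vocab.itos[:5])              # e.g., ['<unk>', '<pad>', 'the', ',', '.']
print(TEXT.vocab.stoi['<pad>'])         # integer index used for padding
print(TEXT.vocab.freqs.most_common(3))  # most frequent words with their counts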

Make dataset iterators:

In [ ]:
train_loader, valid_loader, test_loader = data.BucketIterator.splits(
    (train_data, valid_data, test_data), 
    batch_size=BATCH_SIZE,
    sort_within_batch=True, # necessary for pack_padded_sequence
    device=DEVICE)

Testing the iterators (note that the number of rows depends on the longest document in the respective batch):

In [6]:
print('Train')
for batch in train_loader:
    print(f'Text matrix size: {batch.text[0].size()}')
    print(f'Target vector size: {batch.label.size()}')
    break
    
print('\nValid:')
for batch in valid_loader:
    print(f'Text matrix size: {batch.text[0].size()}')
    print(f'Target vector size: {batch.label.size()}')
    break
    
print('\nTest:')
for batch in test_loader:
    print(f'Text matrix size: {batch.text[0].size()}')
    print(f'Target vector size: {batch.label.size()}')
    break
Train
Text matrix size: torch.Size([132, 128])
Target vector size: torch.Size([128])

Valid:
Text matrix size: torch.Size([61, 128])
Target vector size: torch.Size([128])

Test:
Text matrix size: torch.Size([42, 128])
Target vector size: torch.Size([128])

Model

In [ ]:
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
        
        super().__init__()
        
        self.embedding = nn.Embedding(input_dim, embedding_dim)
        self.rnn = nn.GRU(embedding_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, output_dim)
        
    def forward(self, text, text_length):

        #[sentence len, batch size] => [sentence len, batch size, embedding size]
        embedded = self.embedding(text)
        
        packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, text_length)
        
        #[sentence len, batch size, embedding size] => 
        #  output: [sentence len, batch size, hidden size]
        #  hidden: [1, batch size, hidden size]
        packed_output, hidden = self.rnn(packed)
        
        return self.fc(hidden.squeeze(0)).view(-1)
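
As an aside, here is a small self-contained sketch of what pack_padded_sequence does with a toy padded batch (the numbers are made up): it flattens the non-padded time steps so the GRU never has to process the padding positions.

import torch
from torch.nn.utils.rnn import pack_padded_sequence

# toy batch: two sequences of lengths 3 and 2, padded to length 3
# layout is [seq len, batch size, feature size], as in the model above
padded = torch.tensor([[1., 4.],
                       [2., 5.],
                       [3., 0.]]).unsqueeze(-1)
lengths = torch.tensor([3, 2])  # lengths must be sorted in decreasing order

packed = pack_padded_sequence(padded, lengths)
print(packed.data.squeeze(-1))  # tensor([1., 4., 2., 5., 3.]) -- padding removed
print(packed.batch_sizes)       # tensor([2, 2, 1]) -- active sequences per time step

This is also why the BucketIterator above uses sort_within_batch=True: pack_padded_sequence expects the sequences in each batch to be sorted by length.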
In [ ]:
INPUT_DIM = len(TEXT.vocab)

torch.manual_seed(RANDOM_SEED)
model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)
model = model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

Training

In [ ]:
def compute_binary_accuracy(model, data_loader, device):
    model.eval()
    correct_pred, num_examples = 0, 0
    with torch.no_grad():
        for batch_idx, batch_data in enumerate(data_loader):
            text, text_lengths = batch_data.text
            logits = model(text, text_lengths)
            predicted_labels = (torch.sigmoid(logits) > 0.5).long()
            num_examples += batch_data.label.size(0)
            correct_pred += (predicted_labels == batch_data.label.long()).sum()
        return correct_pred.float()/num_examples * 100
In [14]:
start_time = time.time()

for epoch in range(NUM_EPOCHS):
    model.train()
    for batch_idx, batch_data in enumerate(train_loader):
        
        text, text_lengths = batch_data.text
        
        ### FORWARD AND BACK PROP
        logits = model(text, text_lengths)
        cost = F.binary_cross_entropy_with_logits(logits, batch_data.label)
        optimizer.zero_grad()
        
        cost.backward()
        
        ### UPDATE MODEL PARAMETERS
        optimizer.step()
        
        ### LOGGING
        if not batch_idx % 50:
            print (f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '
                   f'Batch {batch_idx:03d}/{len(train_loader):03d} | '
                   f'Cost: {cost:.4f}')

    with torch.set_grad_enabled(False):
        print(f'training accuracy: '
              f'{compute_binary_accuracy(model, train_loader, DEVICE):.2f}%'
              f'\nvalid accuracy: '
              f'{compute_binary_accuracy(model, valid_loader, DEVICE):.2f}%')
        
    print(f'Time elapsed: {(time.time() - start_time)/60:.2f} min')
    
print(f'Total Training Time: {(time.time() - start_time)/60:.2f} min')
print(f'Test accuracy: {compute_binary_accuracy(model, test_loader, DEVICE):.2f}%')
Epoch: 001/015 | Batch 000/157 | Cost: 0.6869
Epoch: 001/015 | Batch 050/157 | Cost: 0.6909
Epoch: 001/015 | Batch 100/157 | Cost: 0.6929
Epoch: 001/015 | Batch 150/157 | Cost: 0.6667
training accuracy: 58.70%
valid accuracy: 57.30%
Time elapsed: 0.18 min
Epoch: 002/015 | Batch 000/157 | Cost: 0.6831
Epoch: 002/015 | Batch 050/157 | Cost: 0.6460
Epoch: 002/015 | Batch 100/157 | Cost: 0.6443
Epoch: 002/015 | Batch 150/157 | Cost: 0.6239
training accuracy: 68.70%
valid accuracy: 67.54%
Time elapsed: 0.36 min
Epoch: 003/015 | Batch 000/157 | Cost: 0.4887
Epoch: 003/015 | Batch 050/157 | Cost: 0.5954
Epoch: 003/015 | Batch 100/157 | Cost: 0.6105
Epoch: 003/015 | Batch 150/157 | Cost: 0.5285
training accuracy: 75.35%
valid accuracy: 73.62%
Time elapsed: 0.55 min
Epoch: 004/015 | Batch 000/157 | Cost: 0.4711
Epoch: 004/015 | Batch 050/157 | Cost: 0.5610
Epoch: 004/015 | Batch 100/157 | Cost: 0.4648
Epoch: 004/015 | Batch 150/157 | Cost: 0.4983
training accuracy: 79.68%
valid accuracy: 77.00%
Time elapsed: 0.73 min
Epoch: 005/015 | Batch 000/157 | Cost: 0.4718
Epoch: 005/015 | Batch 050/157 | Cost: 0.4375
Epoch: 005/015 | Batch 100/157 | Cost: 0.4393
Epoch: 005/015 | Batch 150/157 | Cost: 0.4138
training accuracy: 81.71%
valid accuracy: 78.18%
Time elapsed: 0.92 min
Epoch: 006/015 | Batch 000/157 | Cost: 0.3452
Epoch: 006/015 | Batch 050/157 | Cost: 0.3552
Epoch: 006/015 | Batch 100/157 | Cost: 0.4116
Epoch: 006/015 | Batch 150/157 | Cost: 0.4030
training accuracy: 82.78%
valid accuracy: 79.52%
Time elapsed: 1.11 min
Epoch: 007/015 | Batch 000/157 | Cost: 0.3604
Epoch: 007/015 | Batch 050/157 | Cost: 0.3680
Epoch: 007/015 | Batch 100/157 | Cost: 0.3132
Epoch: 007/015 | Batch 150/157 | Cost: 0.3442
training accuracy: 85.72%
valid accuracy: 81.90%
Time elapsed: 1.30 min
Epoch: 008/015 | Batch 000/157 | Cost: 0.3696
Epoch: 008/015 | Batch 050/157 | Cost: 0.2850
Epoch: 008/015 | Batch 100/157 | Cost: 0.3538
Epoch: 008/015 | Batch 150/157 | Cost: 0.4393
training accuracy: 86.21%
valid accuracy: 81.56%
Time elapsed: 1.48 min
Epoch: 009/015 | Batch 000/157 | Cost: 0.3638
Epoch: 009/015 | Batch 050/157 | Cost: 0.2887
Epoch: 009/015 | Batch 100/157 | Cost: 0.3294
Epoch: 009/015 | Batch 150/157 | Cost: 0.2515
training accuracy: 86.36%
valid accuracy: 82.18%
Time elapsed: 1.67 min
Epoch: 010/015 | Batch 000/157 | Cost: 0.2781
Epoch: 010/015 | Batch 050/157 | Cost: 0.3547
Epoch: 010/015 | Batch 100/157 | Cost: 0.2762
Epoch: 010/015 | Batch 150/157 | Cost: 0.3104
training accuracy: 87.39%
valid accuracy: 82.92%
Time elapsed: 1.86 min
Epoch: 011/015 | Batch 000/157 | Cost: 0.3024
Epoch: 011/015 | Batch 050/157 | Cost: 0.2901
Epoch: 011/015 | Batch 100/157 | Cost: 0.1955
Epoch: 011/015 | Batch 150/157 | Cost: 0.2581
training accuracy: 89.20%
valid accuracy: 83.66%
Time elapsed: 2.05 min
Epoch: 012/015 | Batch 000/157 | Cost: 0.1964
Epoch: 012/015 | Batch 050/157 | Cost: 0.3578
Epoch: 012/015 | Batch 100/157 | Cost: 0.2177
Epoch: 012/015 | Batch 150/157 | Cost: 0.3732
training accuracy: 88.12%
valid accuracy: 82.82%
Time elapsed: 2.24 min
Epoch: 013/015 | Batch 000/157 | Cost: 0.2964
Epoch: 013/015 | Batch 050/157 | Cost: 0.2757
Epoch: 013/015 | Batch 100/157 | Cost: 0.4130
Epoch: 013/015 | Batch 150/157 | Cost: 0.2817
training accuracy: 90.71%
valid accuracy: 84.54%
Time elapsed: 2.43 min
Epoch: 014/015 | Batch 000/157 | Cost: 0.2700
Epoch: 014/015 | Batch 050/157 | Cost: 0.2832
Epoch: 014/015 | Batch 100/157 | Cost: 0.3164
Epoch: 014/015 | Batch 150/157 | Cost: 0.2610
training accuracy: 90.95%
valid accuracy: 84.68%
Time elapsed: 2.61 min
Epoch: 015/015 | Batch 000/157 | Cost: 0.2593
Epoch: 015/015 | Batch 050/157 | Cost: 0.2185
Epoch: 015/015 | Batch 100/157 | Cost: 0.3066
Epoch: 015/015 | Batch 150/157 | Cost: 0.3229
training accuracy: 90.88%
valid accuracy: 84.88%
Time elapsed: 2.79 min
Total Training Time: 2.79 min
Test accuracy: 85.09%
In [ ]:
import spacy
nlp = spacy.load('en')

def predict_sentiment(model, sentence):
    # based on:
    # https://github.com/bentrevett/pytorch-sentiment-analysis/blob/
    # master/2%20-%20Upgraded%20Sentiment%20Analysis.ipynb
    model.eval()
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    length = [len(indexed)]
    tensor = torch.LongTensor(indexed).to(DEVICE)
    tensor = tensor.unsqueeze(1)
    length_tensor = torch.LongTensor(length)
    prediction = torch.sigmoid(model(tensor, length_tensor))
    return prediction.item()
In [16]:
print('Probability positive:')
predict_sentiment(model, "I really love this movie. This movie is so great!")
Probability positive:
Out[16]:
0.8322937488555908
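
Analogously, a clearly negative review should yield a probability well below 0.5; a quick sanity check (the example sentence is made up, so the exact value will vary):

print('Probability positive:')
predict_sentiment(model, "This movie was a complete waste of time. Terrible acting.")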
In [ ]: