Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
Example notebook showing how to use your own CSV text dataset to train a simple RNN for sentiment classification (here: a binary classification problem with the labels positive and negative) using LSTM (Long Short-Term Memory) cells.
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
import torch
import torch.nn.functional as F
from torchtext import data
from torchtext import datasets
import time
import random
import pandas as pd
torch.backends.cudnn.deterministic = True
Sebastian Raschka
CPython 3.6.8
IPython 7.2.0
torch 1.0.1.post2
RANDOM_SEED = 123
torch.manual_seed(RANDOM_SEED)
VOCABULARY_SIZE = 20000
LEARNING_RATE = 1e-4
BATCH_SIZE = 128
NUM_EPOCHS = 15
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
EMBEDDING_DIM = 128
HIDDEN_DIM = 256
OUTPUT_DIM = 1
The following cells will download the IMDB movie review dataset (http://ai.stanford.edu/~amaas/data/sentiment/) for positive-negative sentiment classification as a CSV-formatted file:
!wget https://github.com/rasbt/python-machine-learning-book-2nd-edition/raw/master/code/ch08/movie_data.csv.gz
--2019-11-28 19:47:46--  https://github.com/rasbt/python-machine-learning-book-2nd-edition/raw/master/code/ch08/movie_data.csv.gz
Resolving github.com (github.com)... 140.82.113.3
Connecting to github.com (github.com)|140.82.113.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/rasbt/python-machine-learning-book-2nd-edition/master/code/ch08/movie_data.csv.gz [following]
--2019-11-28 19:47:46--  https://raw.githubusercontent.com/rasbt/python-machine-learning-book-2nd-edition/master/code/ch08/movie_data.csv.gz
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.184.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.184.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 26521894 (25M) [application/octet-stream]
Saving to: ‘movie_data.csv.gz’

movie_data.csv.gz   100%[===================>]  25.29M  10.5MB/s    in 2.4s

2019-11-28 19:47:49 (10.5 MB/s) - ‘movie_data.csv.gz’ saved [26521894/26521894]
!gunzip -f movie_data.csv.gz
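If wget and gunzip are not available on your system, the same file can be fetched and decompressed with the Python standard library. This is only a minimal, optional sketch; it downloads the same URL as the wget call above:

import gzip
import shutil
import urllib.request

url = ('https://github.com/rasbt/python-machine-learning-book-2nd-edition/'
       'raw/master/code/ch08/movie_data.csv.gz')
urllib.request.urlretrieve(url, 'movie_data.csv.gz')  # follows the redirect to raw.githubusercontent.com
with gzip.open('movie_data.csv.gz', 'rb') as f_in, open('movie_data.csv', 'wb') as f_out:
    shutil.copyfileobj(f_in, f_out)  # write the decompressed CSV next to the archive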
Check that the dataset looks okay:
df = pd.read_csv('movie_data.csv')
df.head()
| | review | sentiment |
|---|---|---|
| 0 | In 1974, the teenager Martha Moxley (Maggie Gr... | 1 |
| 1 | OK... so... I really like Kris Kristofferson a... | 0 |
| 2 | ***SPOILER*** Do not read this, if you think a... | 0 |
| 3 | hi for all the people who have seen this wonde... | 1 |
| 4 | I recently bought the DVD, forgetting just how... | 0 |
del df
Define the Label and Text field formatters:
TEXT = data.Field(sequential=True,
                  tokenize='spacy',
                  include_lengths=True)  # necessary for packed_padded_sequence
LABEL = data.LabelField(dtype=torch.float)
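Note that tokenize='spacy' requires spaCy and its English model to be installed. If the model is missing, it can (for spaCy 2.x) be downloaded with:

!python -m spacy download en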
Process the dataset:
fields = [('review', TEXT), ('sentiment', LABEL)]
dataset = data.TabularDataset(
    path="movie_data.csv", format='csv',
    skip_header=True, fields=fields)
Split the dataset into training, validation, and test partitions:
train_data, valid_data, test_data = dataset.split(
    split_ratio=[0.75, 0.05, 0.2],
    random_state=random.seed(RANDOM_SEED))
print(f'Num Train: {len(train_data)}')
print(f'Num Valid: {len(valid_data)}')
print(f'Num Test: {len(test_data)}')
Num Train: 37500
Num Valid: 10000
Num Test: 2500
Build the vocabulary based on the top "VOCABULARY_SIZE" words:
TEXT.build_vocab(train_data, max_size=VOCABULARY_SIZE)
LABEL.build_vocab(train_data)
print(f'Vocabulary size: {len(TEXT.vocab)}')
print(f'Number of classes: {len(LABEL.vocab)}')
Vocabulary size: 20002
Number of classes: 2
LABEL.vocab.freqs
Counter({'0': 18742, '1': 18758})
The TEXT.vocab dictionary contains the word counts and indices. The reason why the number of words is VOCABULARY_SIZE + 2 is that it contains two special tokens for unknown words and padding: <unk> and <pad>.
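For a quick sanity check, the vocabulary objects can also be inspected directly (the values in the comments are only what one would typically expect for this dataset, not guaranteed outputs):

print(TEXT.vocab.itos[:5])              # e.g., ['<unk>', '<pad>', 'the', ',', '.']
print(TEXT.vocab.stoi['the'])           # integer index assigned to a frequent word
print(TEXT.vocab.freqs.most_common(3))  # most frequent tokens with their raw counts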
Make dataset iterators:
train_loader, valid_loader, test_loader = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size=BATCH_SIZE,
    sort_within_batch=True,  # necessary for packed_padded_sequence
    sort_key=lambda x: len(x.review),
    device=DEVICE)
Testing the iterators (note that the number of rows depends on the longest document in the respective batch):
print('Train')
for batch in train_loader:
    print(f'Text matrix size: {batch.review[0].size()}')
    print(f'Target vector size: {batch.sentiment.size()}')
    break

print('\nValid:')
for batch in valid_loader:
    print(f'Text matrix size: {batch.review[0].size()}')
    print(f'Target vector size: {batch.sentiment.size()}')
    break

print('\nTest:')
for batch in test_loader:
    print(f'Text matrix size: {batch.review[0].size()}')
    print(f'Target vector size: {batch.sentiment.size()}')
    break
Train
Text matrix size: torch.Size([512, 128])
Target vector size: torch.Size([128])

Valid:
Text matrix size: torch.Size([52, 128])
Target vector size: torch.Size([128])

Test:
Text matrix size: torch.Size([75, 128])
Target vector size: torch.Size([128])
import torch.nn as nn
class RNN(nn.Module):

    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
        super().__init__()
        self.embedding = nn.Embedding(input_dim, embedding_dim)
        self.rnn = nn.LSTM(embedding_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, text, text_length):
        # [sentence len, batch size] => [sentence len, batch size, embedding size]
        embedded = self.embedding(text)
        packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, text_length)
        # [sentence len, batch size, embedding size] =>
        #   output: [sentence len, batch size, hidden size]
        #   hidden: [1, batch size, hidden size]
        packed_output, (hidden, cell) = self.rnn(packed)
        return self.fc(hidden.squeeze(0)).view(-1)
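The two "necessary for packed_padded_sequence" settings above exist because pack_padded_sequence strips the padding positions from a padded batch before the LSTM processes it. A minimal toy sketch of what packing does (not part of the training pipeline; the tensor values are made up):

# toy batch of 2 sequences, padded to length 3 (0 = padding index)
toy = torch.tensor([[1, 4],
                    [2, 5],
                    [3, 0]])                    # [sentence len, batch size]
lengths = torch.tensor([3, 2])                  # true lengths, sorted in decreasing order
packed = torch.nn.utils.rnn.pack_padded_sequence(
    toy.unsqueeze(-1).float(), lengths)         # expects [sentence len, batch size, features]
print(packed.data.shape)                        # torch.Size([5, 1]) -- only the 5 real tokens remain
print(packed.batch_sizes)                       # tensor([2, 2, 1]) -- active sequences per time step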
INPUT_DIM = len(TEXT.vocab)
torch.manual_seed(RANDOM_SEED)
model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM)
model = model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
def compute_binary_accuracy(model, data_loader, device):
    model.eval()
    correct_pred, num_examples = 0, 0
    with torch.no_grad():
        for batch_idx, batch_data in enumerate(data_loader):
            text, text_lengths = batch_data.review
            logits = model(text, text_lengths)
            predicted_labels = (torch.sigmoid(logits) > 0.5).long()
            num_examples += batch_data.sentiment.size(0)
            correct_pred += (predicted_labels.long() == batch_data.sentiment.long()).sum()
        return correct_pred.float()/num_examples * 100
start_time = time.time()
for epoch in range(NUM_EPOCHS):
    model.train()
    for batch_idx, batch_data in enumerate(train_loader):

        text, text_lengths = batch_data.review

        ### FORWARD AND BACK PROP
        logits = model(text, text_lengths)
        cost = F.binary_cross_entropy_with_logits(logits, batch_data.sentiment)
        optimizer.zero_grad()
        cost.backward()

        ### UPDATE MODEL PARAMETERS
        optimizer.step()

        ### LOGGING
        if not batch_idx % 50:
            print(f'Epoch: {epoch+1:03d}/{NUM_EPOCHS:03d} | '
                  f'Batch {batch_idx:03d}/{len(train_loader):03d} | '
                  f'Cost: {cost:.4f}')

    with torch.set_grad_enabled(False):
        print(f'training accuracy: '
              f'{compute_binary_accuracy(model, train_loader, DEVICE):.2f}%'
              f'\nvalid accuracy: '
              f'{compute_binary_accuracy(model, valid_loader, DEVICE):.2f}%')

    print(f'Time elapsed: {(time.time() - start_time)/60:.2f} min')

print(f'Total Training Time: {(time.time() - start_time)/60:.2f} min')
print(f'Test accuracy: {compute_binary_accuracy(model, test_loader, DEVICE):.2f}%')
Epoch: 001/015 | Batch 000/293 | Cost: 0.6948
Epoch: 001/015 | Batch 050/293 | Cost: 0.6868
Epoch: 001/015 | Batch 100/293 | Cost: 0.6926
Epoch: 001/015 | Batch 150/293 | Cost: 0.6788
Epoch: 001/015 | Batch 200/293 | Cost: 0.6838
Epoch: 001/015 | Batch 250/293 | Cost: 0.6563
training accuracy: 64.83%
valid accuracy: 64.88%
Time elapsed: 0.29 min
Epoch: 002/015 | Batch 000/293 | Cost: 0.5635
Epoch: 002/015 | Batch 050/293 | Cost: 0.6154
Epoch: 002/015 | Batch 100/293 | Cost: 0.5449
Epoch: 002/015 | Batch 150/293 | Cost: 0.6161
Epoch: 002/015 | Batch 200/293 | Cost: 0.5794
Epoch: 002/015 | Batch 250/293 | Cost: 0.5190
training accuracy: 75.81%
valid accuracy: 75.02%
Time elapsed: 0.57 min
Epoch: 003/015 | Batch 000/293 | Cost: 0.5194
Epoch: 003/015 | Batch 050/293 | Cost: 0.4679
Epoch: 003/015 | Batch 100/293 | Cost: 0.5069
Epoch: 003/015 | Batch 150/293 | Cost: 0.4728
Epoch: 003/015 | Batch 200/293 | Cost: 0.4180
Epoch: 003/015 | Batch 250/293 | Cost: 0.3722
training accuracy: 77.14%
valid accuracy: 76.48%
Time elapsed: 0.85 min
Epoch: 004/015 | Batch 000/293 | Cost: 0.4978
Epoch: 004/015 | Batch 050/293 | Cost: 0.4959
Epoch: 004/015 | Batch 100/293 | Cost: 0.4877
Epoch: 004/015 | Batch 150/293 | Cost: 0.4808
Epoch: 004/015 | Batch 200/293 | Cost: 0.4264
Epoch: 004/015 | Batch 250/293 | Cost: 0.3528
training accuracy: 82.63%
valid accuracy: 80.89%
Time elapsed: 1.14 min
Epoch: 005/015 | Batch 000/293 | Cost: 0.3676
Epoch: 005/015 | Batch 050/293 | Cost: 0.3325
Epoch: 005/015 | Batch 100/293 | Cost: 0.4878
Epoch: 005/015 | Batch 150/293 | Cost: 0.4481
Epoch: 005/015 | Batch 200/293 | Cost: 0.4147
Epoch: 005/015 | Batch 250/293 | Cost: 0.4270
training accuracy: 84.73%
valid accuracy: 82.78%
Time elapsed: 1.42 min
Epoch: 006/015 | Batch 000/293 | Cost: 0.4143
Epoch: 006/015 | Batch 050/293 | Cost: 0.4586
Epoch: 006/015 | Batch 100/293 | Cost: 0.3946
Epoch: 006/015 | Batch 150/293 | Cost: 0.3729
Epoch: 006/015 | Batch 200/293 | Cost: 0.3584
Epoch: 006/015 | Batch 250/293 | Cost: 0.4089
training accuracy: 86.25%
valid accuracy: 84.17%
Time elapsed: 1.71 min
Epoch: 007/015 | Batch 000/293 | Cost: 0.3147
Epoch: 007/015 | Batch 050/293 | Cost: 0.3494
Epoch: 007/015 | Batch 100/293 | Cost: 0.2743
Epoch: 007/015 | Batch 150/293 | Cost: 0.3913
Epoch: 007/015 | Batch 200/293 | Cost: 0.2999
Epoch: 007/015 | Batch 250/293 | Cost: 0.2530
training accuracy: 86.61%
valid accuracy: 84.16%
Time elapsed: 1.99 min
Epoch: 008/015 | Batch 000/293 | Cost: 0.3180
Epoch: 008/015 | Batch 050/293 | Cost: 0.3589
Epoch: 008/015 | Batch 100/293 | Cost: 0.3230
Epoch: 008/015 | Batch 150/293 | Cost: 0.3192
Epoch: 008/015 | Batch 200/293 | Cost: 0.3328
Epoch: 008/015 | Batch 250/293 | Cost: 0.2283
training accuracy: 87.09%
valid accuracy: 84.59%
Time elapsed: 2.29 min
Epoch: 009/015 | Batch 000/293 | Cost: 0.3429
Epoch: 009/015 | Batch 050/293 | Cost: 0.3042
Epoch: 009/015 | Batch 100/293 | Cost: 0.2704
Epoch: 009/015 | Batch 150/293 | Cost: 0.2430
Epoch: 009/015 | Batch 200/293 | Cost: 0.4137
Epoch: 009/015 | Batch 250/293 | Cost: 0.1736
training accuracy: 74.11%
valid accuracy: 72.36%
Time elapsed: 2.59 min
Epoch: 010/015 | Batch 000/293 | Cost: 0.5759
Epoch: 010/015 | Batch 050/293 | Cost: 0.4807
Epoch: 010/015 | Batch 100/293 | Cost: 0.2686
Epoch: 010/015 | Batch 150/293 | Cost: 0.3420
Epoch: 010/015 | Batch 200/293 | Cost: 0.2759
Epoch: 010/015 | Batch 250/293 | Cost: 0.3928
training accuracy: 89.58%
valid accuracy: 86.27%
Time elapsed: 2.88 min
Epoch: 011/015 | Batch 000/293 | Cost: 0.2417
Epoch: 011/015 | Batch 050/293 | Cost: 0.3175
Epoch: 011/015 | Batch 100/293 | Cost: 0.2029
Epoch: 011/015 | Batch 150/293 | Cost: 0.2389
Epoch: 011/015 | Batch 200/293 | Cost: 0.3107
Epoch: 011/015 | Batch 250/293 | Cost: 0.3486
training accuracy: 90.21%
valid accuracy: 86.52%
Time elapsed: 3.17 min
Epoch: 012/015 | Batch 000/293 | Cost: 0.2540
Epoch: 012/015 | Batch 050/293 | Cost: 0.2851
Epoch: 012/015 | Batch 100/293 | Cost: 0.1901
Epoch: 012/015 | Batch 150/293 | Cost: 0.2286
Epoch: 012/015 | Batch 200/293 | Cost: 0.3239
Epoch: 012/015 | Batch 250/293 | Cost: 0.2856
training accuracy: 90.72%
valid accuracy: 86.78%
Time elapsed: 3.47 min
Epoch: 013/015 | Batch 000/293 | Cost: 0.1913
Epoch: 013/015 | Batch 050/293 | Cost: 0.2547
Epoch: 013/015 | Batch 100/293 | Cost: 0.3984
Epoch: 013/015 | Batch 150/293 | Cost: 0.2294
Epoch: 013/015 | Batch 200/293 | Cost: 0.2692
Epoch: 013/015 | Batch 250/293 | Cost: 0.2132
training accuracy: 91.51%
valid accuracy: 87.13%
Time elapsed: 3.76 min
Epoch: 014/015 | Batch 000/293 | Cost: 0.1699
Epoch: 014/015 | Batch 050/293 | Cost: 0.2611
Epoch: 014/015 | Batch 100/293 | Cost: 0.2594
Epoch: 014/015 | Batch 150/293 | Cost: 0.2062
Epoch: 014/015 | Batch 200/293 | Cost: 0.2608
Epoch: 014/015 | Batch 250/293 | Cost: 0.2881
training accuracy: 91.43%
valid accuracy: 86.93%
Time elapsed: 4.05 min
Epoch: 015/015 | Batch 000/293 | Cost: 0.2522
Epoch: 015/015 | Batch 050/293 | Cost: 0.2753
Epoch: 015/015 | Batch 100/293 | Cost: 0.2322
Epoch: 015/015 | Batch 150/293 | Cost: 0.2361
Epoch: 015/015 | Batch 200/293 | Cost: 0.3728
Epoch: 015/015 | Batch 250/293 | Cost: 0.2895
training accuracy: 89.71%
valid accuracy: 85.54%
Time elapsed: 4.34 min
Total Training Time: 4.34 min
Test accuracy: 86.88%
import spacy
nlp = spacy.load('en')
def predict_sentiment(model, sentence):
    # based on:
    # https://github.com/bentrevett/pytorch-sentiment-analysis/blob/
    # master/2%20-%20Upgraded%20Sentiment%20Analysis.ipynb
    model.eval()
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    length = [len(indexed)]
    tensor = torch.LongTensor(indexed).to(DEVICE)
    tensor = tensor.unsqueeze(1)
    length_tensor = torch.LongTensor(length)
    prediction = torch.sigmoid(model(tensor, length_tensor))
    return prediction.item()
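A note on the sign of the prediction: LabelField builds its vocabulary by frequency, and given the label counts shown above, the original class '1' receives index 0 and class '0' (negative) receives index 1 in this run. The sigmoid output of the model is therefore the probability of the negative class, which is why the probability of a positive review is computed as 1 - predict_sentiment(...) below. The mapping can be double-checked directly (the value in the comment is what is expected here, not a guaranteed output):

print(LABEL.vocab.stoi)  # expected here: {'1': 0, '0': 1}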
print('Probability positive:')
1-predict_sentiment(model, "This is such an awesome movie, I really love it!")
Probability positive:
0.8258040845394135
print('Probability negative:')
predict_sentiment(model, "I really hate this movie. It is really bad and sucks!")
Probability negative:
0.8462136387825012
%watermark -iv
torch 1.0.1.post2
pandas 0.23.4
spacy 2.0.16