*This notebook is heavily based on the great PyTorch DCGAN tutorial from Nathan Inkawhich and uses the MNIST dataset to illustrate the difference between the saturating and non-saturating generator loss in GAN training.*
*Notebook compiled by Michael M. Pieler while going through the Depthfirstlearning InfoGAN material.*
This tutorial will give an introduction to DCGANs through an example. The original tutorial trains a generative adversarial network (GAN) to generate new celebrity faces after showing it pictures of many real celebrities; in this notebook we train on MNIST digits instead, so we can compare the saturating and non-saturating generator losses. Most of the code here is from the dcgan implementation in pytorch/examples, and this document will give a thorough explanation of the implementation and shed light on how and why this model works. But don't worry, no prior knowledge of GANs is required, though a first-timer may need to spend some time reasoning about what is actually happening under the hood. Also, for the sake of time it will help to have a GPU, or two. Let's start from the beginning.
GANs are a framework for teaching a DL model to capture the training data’s distribution so we can generate new data from that same distribution. GANs were invented by Ian Goodfellow in 2014 and first described in the paper Generative Adversarial Nets. They are made of two distinct models, a generator and a discriminator. The job of the generator is to spawn ‘fake’ images that look like the training images. The job of the discriminator is to look at an image and output whether or not it is a real training image or a fake image from the generator. During training, the generator is constantly trying to outsmart the discriminator by generating better and better fakes, while the discriminator is working to become a better detective and correctly classify the real and fake images. The equilibrium of this game is when the generator is generating perfect fakes that look as if they came directly from the training data, and the discriminator is left to always guess at 50% confidence that the generator output is real or fake.
Now, let's define some notation to be used throughout the tutorial, starting with the discriminator. Let $x$ be data representing an image. $D(x)$ is the discriminator network which outputs the (scalar) probability that $x$ came from training data rather than the generator. Here, since we are dealing with images, the input to $D(x)$ is an image of CHW size 1x28x28. Intuitively, $D(x)$ should be HIGH when $x$ comes from training data and LOW when $x$ comes from the generator. $D(x)$ can also be thought of as a traditional binary classifier.
For the generator’s notation, let $z$ be a latent space vector sampled from a standard normal distribution. $G(z)$ represents the generator function which maps the latent vector $z$ to data-space. The goal of $G$ is to estimate the distribution that the training data comes from ($p_{data}$) so it can generate fake samples from that estimated distribution ($p_g$).
So, $D(G(z))$ is the probability (scalar) that the output of the generator $G$ is a real image. As described in Goodfellow’s paper, $D$ and $G$ play a minimax game in which $D$ tries to maximize the probability it correctly classifies reals and fakes ($logD(x)$), and $G$ tries to minimize the probability that $D$ will predict its outputs are fake ($log(1-D(G(z)))$). From the paper, the GAN loss function is
\begin{align}\underset{G}{\text{min}} \underset{D}{\text{max}}V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\big[logD(x)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[log(1-D(G(z)))\big]\end{align}In theory, the solution to this minimax game is where $p_g = p_{data}$, and the discriminator guesses randomly whether its inputs are real or fake. However, the convergence theory of GANs is still being actively researched, and in practice models do not always train to this point.
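As a quick check of this equilibrium claim (a short derivation following Goodfellow's paper, added here for reference): for a fixed $G$, the pointwise-optimal discriminator is
\begin{align}D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_g(x)},\end{align}
so when $p_g = p_{data}$ we get $D^*(x) = \tfrac{1}{2}$ everywhere, i.e. the discriminator can do no better than a coin flip, and the value of the game becomes $\log\tfrac{1}{2} + \log\tfrac{1}{2} = -\log 4$.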
A DCGAN is a direct extension of the GAN described above, except that it explicitly uses convolutional and convolutional-transpose layers in the discriminator and generator, respectively. It was first described by Radford et al. in the paper Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks. The discriminator is made up of strided convolution layers, batch norm layers, and LeakyReLU activations. The input is a 1x28x28 image and the output is a scalar probability that the input is from the real data distribution. The generator is comprised of convolutional-transpose layers, batch norm layers, and ReLU activations. The input is a latent vector, $z$, that is drawn from a standard normal distribution and the output is a 1x28x28 grayscale image. The strided conv-transpose layers allow the latent vector to be transformed into a volume with the same shape as an image. In the paper, the authors also give some tips about how to set up the optimizers, how to calculate the loss functions, and how to initialize the model weights, all of which will be explained in the coming sections.
%matplotlib inline
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed);
Random Seed: 999
Let’s define some inputs for the run:
# Root directory for dataset
dataroot = 'data/'
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 256
# Spatial size of training images (MNIST digits are natively 28x28, so no resize is applied)
image_size = 28 #64
# Number of channels in the training images. For color images this is 3
nc = 1 #3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 28
# Number of training epochs
num_epochs = 10
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
In this tutorial we will use the MNIST dataset from torchvision.
# Create the MNIST dataset (downloaded to dataroot if not already present)
dataset = dset.MNIST(root=dataroot, transform=transforms.ToTensor(), download=True)
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
shuffle=True, num_workers=workers)
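As a quick sanity check (a small addition, not part of the original tutorial), we can confirm the shape and value range the rest of the notebook relies on: with only transforms.ToTensor() applied, MNIST images are 1x28x28 tensors with values in $[0, 1]$.
# Sanity check (sketch): dataset size, image shape, and pixel range
img, target = dataset[0]
print(len(dataset))                        # 60000 training digits
print(img.shape)                           # torch.Size([1, 28, 28])
print(img.min().item(), img.max().item())  # pixel values lie in [0, 1]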
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)));
With our input parameters set and the dataset prepared, we can now get into the implementation. We will start with the weight initialization strategy, then talk about the generator, discriminator, loss functions, and training loop in detail.
From the DCGAN paper, the authors specify that all model weights shall be randomly initialized from a Normal distribution with mean=0, stdev=0.02. The weights_init function takes an initialized model as input and reinitializes all convolutional, convolutional-transpose, and batch normalization layers to meet these criteria. This function is applied to the models immediately after initialization.
# custom weights initialization called on netG and netD
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
nn.init.constant_(m.bias.data, 0)
The generator, $G$, is designed to map the latent space vector ($z$) to data-space. Since our data are images, converting $z$ to data-space means ultimately creating a grayscale image with the same size as the training images (i.e. 1x28x28). In practice, this is accomplished through a series of strided two-dimensional convolutional-transpose layers, each paired with a 2d batch norm layer and a ReLU activation. The output of the generator is fed through a tanh function to squash it into the range $[-1,1]$. It is worth noting the existence of the batch norm functions after the conv-transpose layers, as this is a critical contribution of the DCGAN paper. These layers help with the flow of gradients during training.
Notice how the inputs we set in the input section (nz, ngf, and nc) influence the generator architecture in code. nz is the length of the z input vector, ngf relates to the size of the feature maps that are propagated through the generator, and nc is the number of channels in the output image (3 for RGB images, in our case 1 for grayscale). Below is the code for the generator.
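As a side note (a standard PyTorch fact, added here because the architecture below differs from the 64x64 tutorial version): ignoring dilation, nn.ConvTranspose2d produces a spatial size of
\begin{align}H_{out} = (H_{in} - 1)\cdot \text{stride} - 2\cdot \text{padding} + \text{kernel\_size} + \text{output\_padding},\end{align}
so the four conv-transpose layers below grow the spatial size as $1 \rightarrow 3 \rightarrow 7 \rightarrow 15 \rightarrow 28$.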
# Generator Code
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d(nz, ngf*4, 3, 2, 0, bias=False),
nn.BatchNorm2d(ngf*4),
nn.ReLU(True),
# state size. (ngf*4) x 3 x 3
nn.ConvTranspose2d(ngf*4, ngf*2, 3, 2, 0, bias=False),
nn.BatchNorm2d(ngf*2),
nn.ReLU(True),
# state size. (ngf*2) x 7 x 7
nn.ConvTranspose2d(ngf*2, ngf, 3, 2, 0, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 15 x 15
nn.ConvTranspose2d(ngf, nc, 3, 2, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 28 x 28
)
def forward(self, input):
return self.main(input)
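To verify that chain of shapes, here is a minimal sketch (it assumes the Generator class and the nz/ngf/nc/ngpu settings defined above): a batch of latent vectors should map to grayscale 28x28 images.
# Shape check (sketch): 4 latent vectors -> 4 grayscale 28x28 images
_netG_check = Generator(ngpu)
_z_check = torch.randn(4, nz, 1, 1)
print(_netG_check(_z_check).shape)  # expected: torch.Size([4, 1, 28, 28])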
As mentioned, the discriminator, $D$, is a binary classification network that takes an image as input and outputs a scalar score for whether the input image is real (as opposed to fake). Here, $D$ takes a 1x28x28 input image and processes it through a series of Conv2d, BatchNorm2d, and LeakyReLU layers. In the original tutorial the final probability is produced by a Sigmoid activation; in this notebook the Sigmoid is omitted and folded into nn.BCEWithLogitsLoss, so $D$ outputs a raw logit. This architecture can be extended with more layers if necessary for the problem, but there is significance to the use of the strided convolution, BatchNorm, and LeakyReLU. The DCGAN paper mentions it is good practice to use strided convolution rather than pooling to downsample because it lets the network learn its own pooling function. Also, batch norm and leaky relu functions promote healthy gradient flow, which is critical for the learning process of both $G$ and $D$.
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 28 x 28
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 14 x 14
nn.Conv2d(ndf, ndf*2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf*2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 7 x 7
nn.Conv2d(ndf*2, ndf*4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf*4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 3 x 3
nn.Conv2d(ndf*4, 1, 4, 2, 1, bias=False),
#nn.Sigmoid() # not needed with nn.BCEWithLogitsLoss()
)
def forward(self, input):
return self.main(input)
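Analogously, a minimal shape check for the discriminator (again a sketch relying on the settings above): each 1x28x28 image is reduced to a single raw logit, since the Sigmoid is folded into the loss.
# Shape check (sketch): 4 images -> 4 raw logits of shape 1x1x1
_netD_check = Discriminator(ngpu)
_x_check = torch.randn(4, nc, 28, 28)
print(_netD_check(_x_check).shape)  # expected: torch.Size([4, 1, 1, 1])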
With $D$ and $G$ set up, we can specify how they learn through the loss functions and optimizers. We will use binary cross entropy, via nn.BCEWithLogitsLoss (which combines a Sigmoid with nn.BCELoss in a numerically more stable way). The underlying BCE loss is defined in PyTorch as:
\begin{align}\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\end{align}Notice how this function provides the calculation of both log components in the objective function (i.e. $log(D(x))$ and $log(1-D(G(z)))$). We can specify what part of the BCE equation to use with the $y$ input. This is accomplished in the training loop which is coming up soon, but it is important to understand how we can choose which component we wish to calculate just by changing $y$ (i.e. GT labels).
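To see how the label $y$ selects the log term, here is a small illustrative sketch using nn.BCEWithLogitsLoss, the criterion actually used below (the logit value 2.0 is arbitrary):
# How the target y picks the log-component for a raw logit o
bce = nn.BCEWithLogitsLoss()
o = torch.tensor([2.0])               # sigmoid(2.0) ~ 0.88
print(bce(o, torch.ones(1)).item())   # y=1 -> -log(sigmoid(o))   ~ 0.127
print(bce(o, torch.zeros(1)).item())  # y=0 -> -log(1-sigmoid(o)) ~ 2.127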
Next, we define our real label as 1 and the fake label as 0. These labels will be used when calculating the losses of $D$ and $G$, and this is also the convention used in the original GAN paper. Finally, we set up two separate optimizers, one for $D$ and one for $G$. As specified in the DCGAN paper, both are Adam optimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping track of the generator's learning progression, we will generate a fixed batch of latent vectors drawn from a Gaussian distribution (i.e. fixed_noise). In the training loop, we will periodically input this fixed_noise into $G$, and over the iterations we will see images form out of the noise.
Finally, now that we have all of the parts of the GAN framework defined, we can train it. Be mindful that training GANs is somewhat of an art form, as incorrect hyperparameter settings lead to mode collapse with little explanation of what went wrong. Here, we will closely follow Algorithm 1 from Goodfellow’s paper, while abiding by some of the best practices shown in ganhacks. Namely, we will “construct different mini-batches for real and fake” images, and also adjust G’s objective function to maximize $logD(G(z))$. Training is split up into two main parts. Part 1 updates the Discriminator and Part 2 updates the Generator.
Part 1 - Train the Discriminator
Recall, the goal of training the discriminator is to maximize the probability of correctly classifying a given input as real or fake. In terms of Goodfellow, we wish to “update the discriminator by ascending its stochastic gradient”. Practically, we want to maximize $log(D(x)) + log(1-D(G(z)))$. Due to the separate mini-batch suggestion from ganhacks, we will calculate this in two steps. First, we will construct a batch of real samples from the training set, forward pass through $D$, calculate the loss ($log(D(x))$), then calculate the gradients in a backward pass. Secondly, we will construct a batch of fake samples with the current generator, forward pass this batch through $D$, calculate the loss ($log(1-D(G(z)))$), and accumulate the gradients with a backward pass. Now, with the gradients accumulated from both the all-real and all-fake batches, we call a step of the Discriminator’s optimizer.
Part 2 - Train the Generator
As stated in the original paper, we want to train the Generator by minimizing $log(1-D(G(z)))$ in an effort to generate better fakes. As mentioned, this was shown by Goodfellow to not provide sufficient gradients, especially early in the learning process. As a fix, we instead wish to maximize $log(D(G(z)))$. In the code we accomplish this by: classifying the Generator output from Part 1 with the Discriminator, computing G’s loss using real labels as GT, computing G’s gradients in a backward pass, and finally updating G’s parameters with an optimizer step. It may seem counter-intuitive to use the real labels as GT labels for the loss function, but this allows us to use the $log(x)$ part of the BCELoss (rather than the $log(1-x)$ part) which is exactly what we want.
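To make the label trick concrete (a short derivation consistent with the BCE definition above): plugging $y=1$ and $x = D(G(z))$ into the loss gives
\begin{align}l = -\big[1\cdot \log D(G(z)) + (1-1)\cdot\log(1-D(G(z)))\big] = -\log D(G(z)),\end{align}
so minimizing this with respect to $G$'s parameters is exactly maximizing $\log D(G(z))$, the non-saturating generator objective.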
Finally, we will do some statistics reporting, and periodically (every 500 iterations, and at the very end of training) we will push our fixed_noise batch through the generator to visually track the progress of G's training. The training statistics reported are:

- Loss_D: the discriminator loss, summed over the all-real and the all-fake batch.
- Loss_G: the generator loss.
- D(x): the mean discriminator output on the all-real batch (a raw logit here, since the Sigmoid is folded into the loss).
- D(G(z)): the mean discriminator output on the all-fake batch, reported before and after the discriminator update.
Note: This step might take a while, depending on how many epochs you run and if you removed some data from the dataset.
Comparison of the saturating and non-saturating Generator loss
If $D$ confidently classifies the images from $G$ as fake (i.e. $D(G(z))$ is close to 0), the saturating loss has a derivative of approximately $-1$, while the derivative of the non-saturating loss tends toward $-\infty$. The larger-magnitude derivative of the non-saturating loss gives $G$ stronger gradients, so it learns faster early in training when its fakes are still easy to spot. See the figure below for a plot of both loss functions and their derivatives.
For more details, see the excellent NIPS 2016 GAN tutorial, especially figure 16 (p.26).
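Explicitly, the two derivatives plotted below are
\begin{align}\frac{\partial}{\partial D(G(z))}\log\big(1-D(G(z))\big) = -\frac{1}{1-D(G(z))}, \qquad \frac{\partial}{\partial D(G(z))}\big[-\log D(G(z))\big] = -\frac{1}{D(G(z))},\end{align}
so at $D(G(z)) \approx 0$ the saturating derivative is about $-1$ while the non-saturating one diverges toward $-\infty$.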
x_data = np.arange(0.001, 1, 0.001)
sat_loss_data = np.log(1 - x_data)             # saturating G loss: log(1 - D(G(z)))
sat_loss_derivative_data = -1 / (1 - x_data)   # its derivative w.r.t. D(G(z))
non_sat_loss_data = -np.log(x_data)            # non-saturating G loss: -log(D(G(z)))
non_sat_loss_derivative_data = -1 / x_data     # its derivative w.r.t. D(G(z))
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(1, 1, 1)
ax.plot(x_data, sat_loss_data, 'r', label='Saturating G loss')
ax.plot(x_data, sat_loss_derivative_data, 'r--', label='Derivative of saturating G loss')
ax.plot(x_data, non_sat_loss_data, 'b', label='Non-saturating G loss')
ax.plot(x_data, non_sat_loss_derivative_data, 'b--', label='Derivative of non-saturating G loss')
ax.set_xlim([0, 1])
ax.set_ylim([-10, 4])
ax.grid(True, which='both')
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
ax.set_title('Saturating and non-saturating G loss functions')
plt.xlabel('D(G(z))')
plt.ylabel('Loss / derivative of loss')
ax.legend()
plt.show()
# Establish convention for real and fake labels during training
# Use floats so that torch.full below creates float tensors for the BCE loss
real_label = 1.
fake_label = 0.
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
def training_loop(num_epochs=num_epochs, saturating=False):
## Create the generator
netG = Generator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netG = nn.DataParallel(netG, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netG.apply(weights_init)
## Create the Discriminator
netD = Discriminator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
netD = nn.DataParallel(netD, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netD.apply(weights_init)
## Initialize BCELoss function
#criterion = nn.BCELoss()
criterion = nn.BCEWithLogitsLoss() # more stable than nn.BCELoss
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
## Training Loop
# Lists to keep track of progress
img_list = []
G_losses = []
G_grads_mean = []
G_grads_std = []
D_losses = []
iters = 0
print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
# For each batch in the dataloader
for i, data in enumerate(dataloader, 0):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
## Train with all-real batch
netD.zero_grad()
# Format batch
real_cpu = data[0].to(device)
b_size = real_cpu.size(0)
label = torch.full((b_size,), real_label, device=device)
# Forward pass real batch through D
output = netD(real_cpu).view(-1)
# Calculate loss on all-real batch
errD_real = criterion(output, label)
# Calculate gradients for D in backward pass
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
# Generate batch of latent vectors
noise = torch.randn(b_size, nz, 1, 1, device=device)
# Generate fake image batch with G
fake = netG(noise)
label.fill_(fake_label)
# Classify all fake batch with D
output = netD(fake.detach()).view(-1)
# Calculate D's loss on the all-fake batch
errD_fake = criterion(output, label)
# Calculate the gradients for this batch
errD_fake.backward()
D_G_z1 = output.mean().item()
# Add the gradients from the all-real and all-fake batches
errD = errD_real + errD_fake
# Update D
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
if saturating:
label.fill_(fake_label) # Saturating loss: Use fake_label y = 0 to get J(G) = log(1−D(G(z)))
else:
label.fill_(real_label) # Non-saturating loss: fake labels are real for generator cost
# Since we just updated D, perform another forward pass of all-fake batch through D
output = netD(fake).view(-1)
# Calculate G's loss based on this output
if saturating:
errG = -criterion(output, label) # Saturating loss: -J(D) = J(G)
else:
errG = criterion(output, label) # Non-saturating loss
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
# Update G
optimizerG.step()
# Save gradients
G_grad = [p.grad.view(-1).cpu().numpy() for p in list(netG.parameters())]
G_grads_mean.append(np.concatenate(G_grad).mean())
G_grads_std.append(np.concatenate(G_grad).std())
# Output training stats
if i % 50 == 0:
print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (epoch+1, num_epochs, i, len(dataloader),
errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
with torch.no_grad():
fake = netG(fixed_noise).detach().cpu()
img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
iters += 1
return G_losses, D_losses, G_grads_mean, G_grads_std, img_list
# Train with saturating G loss
G_losses_sat, D_losses_sat, G_grads_mean_sat, G_grads_std_sat, img_list_sat = training_loop(saturating=True)
Starting Training Loop... [1/10][0/235] Loss_D: 1.3887 Loss_G: -0.4508 D(x): -0.2936 D(G(z)): -0.4256 / -0.6127 [1/10][50/235] Loss_D: 0.3291 Loss_G: -0.0834 D(x): 2.0319 D(G(z)): -2.0001 / -2.5753 [1/10][100/235] Loss_D: 0.1132 Loss_G: -0.0283 D(x): 3.5382 D(G(z)): -3.1206 / -3.7979 [1/10][150/235] Loss_D: 0.2081 Loss_G: -0.0607 D(x): 2.5793 D(G(z)): -2.4521 / -2.9096 [1/10][200/235] Loss_D: 0.2112 Loss_G: -0.0398 D(x): 2.8777 D(G(z)): -2.4249 / -3.5479 [2/10][0/235] Loss_D: 0.0650 Loss_G: -0.0190 D(x): 3.6582 D(G(z)): -3.9902 / -4.2413 [2/10][50/235] Loss_D: 0.0594 Loss_G: -0.0206 D(x): 3.9822 D(G(z)): -3.9017 / -4.2014 [2/10][100/235] Loss_D: 0.0394 Loss_G: -0.0116 D(x): 4.1730 D(G(z)): -4.7400 / -4.8724 [2/10][150/235] Loss_D: 0.0936 Loss_G: -0.0312 D(x): 3.4650 D(G(z)): -3.5113 / -3.7421 [2/10][200/235] Loss_D: 0.0640 Loss_G: -0.0228 D(x): 3.9037 D(G(z)): -3.8585 / -4.0961 [3/10][0/235] Loss_D: 0.0612 Loss_G: -0.0256 D(x): 3.9085 D(G(z)): -3.8678 / -4.0078 [3/10][50/235] Loss_D: 0.0528 Loss_G: -0.0271 D(x): 3.9490 D(G(z)): -4.2961 / -4.0545 [3/10][100/235] Loss_D: 0.1277 Loss_G: -0.0312 D(x): 3.3842 D(G(z)): -3.1278 / -3.8937 [3/10][150/235] Loss_D: 0.0052 Loss_G: -0.0015 D(x): 6.2872 D(G(z)): -7.2038 / -7.2102 [3/10][200/235] Loss_D: 0.0036 Loss_G: -0.0018 D(x): 6.9495 D(G(z)): -7.2417 / -7.2576 [4/10][0/235] Loss_D: 0.0034 Loss_G: -0.0020 D(x): 7.1236 D(G(z)): -7.1970 / -7.2162 [4/10][50/235] Loss_D: 0.0051 Loss_G: -0.0036 D(x): 7.2039 D(G(z)): -6.8757 / -6.9126 [4/10][100/235] Loss_D: 0.0374 Loss_G: -0.0189 D(x): 4.9919 D(G(z)): -4.4543 / -4.6834 [4/10][150/235] Loss_D: 0.1152 Loss_G: -0.0508 D(x): 3.3369 D(G(z)): -2.9284 / -3.2181 [4/10][200/235] Loss_D: 0.1604 Loss_G: -0.0619 D(x): 2.3802 D(G(z)): -3.7964 / -3.1447 [5/10][0/235] Loss_D: 0.1337 Loss_G: -0.0249 D(x): 3.8816 D(G(z)): -2.6651 / -4.1167 [5/10][50/235] Loss_D: 0.0161 Loss_G: -0.0073 D(x): 5.8879 D(G(z)): -5.3289 / -5.4183 [5/10][100/235] Loss_D: 0.0171 Loss_G: -0.0087 D(x): 6.1156 D(G(z)): -5.5586 / -5.6091 [5/10][150/235] Loss_D: 0.0097 Loss_G: -0.0052 D(x): 6.3340 D(G(z)): -5.6881 / -5.7770 [5/10][200/235] Loss_D: 0.0050 Loss_G: -0.0029 D(x): 7.1016 D(G(z)): -6.2738 / -6.3071 [6/10][0/235] Loss_D: 0.0041 Loss_G: -0.0020 D(x): 7.0734 D(G(z)): -6.7138 / -6.7284 [6/10][50/235] Loss_D: 0.0035 Loss_G: -0.0022 D(x): 7.4029 D(G(z)): -6.6583 / -6.6866 [6/10][100/235] Loss_D: 0.0053 Loss_G: -0.0029 D(x): 6.8388 D(G(z)): -6.2711 / -6.3487 [6/10][150/235] Loss_D: 0.0227 Loss_G: -0.0093 D(x): 5.2345 D(G(z)): -4.8389 / -4.9807 [6/10][200/235] Loss_D: 0.0473 Loss_G: -0.0174 D(x): 4.6603 D(G(z)): -4.1599 / -4.6472 [7/10][0/235] Loss_D: 0.0186 Loss_G: -0.0115 D(x): 5.7913 D(G(z)): -5.1837 / -5.2712 [7/10][50/235] Loss_D: 0.0390 Loss_G: -0.0172 D(x): 4.9137 D(G(z)): -4.2505 / -4.5570 [7/10][100/235] Loss_D: 0.2280 Loss_G: -0.0568 D(x): 2.7351 D(G(z)): -2.2266 / -3.1321 [7/10][150/235] Loss_D: 0.1663 Loss_G: -0.0937 D(x): 2.4980 D(G(z)): -3.2400 / -2.6628 [7/10][200/235] Loss_D: 0.1358 Loss_G: -0.0613 D(x): 2.6048 D(G(z)): -3.6537 / -3.1476 [8/10][0/235] Loss_D: 0.1024 Loss_G: -0.0532 D(x): 3.4416 D(G(z)): -3.3524 / -3.3278 [8/10][50/235] Loss_D: 0.0031 Loss_G: -0.0009 D(x): 6.8739 D(G(z)): -8.7508 / -8.7250 [8/10][100/235] Loss_D: 0.0036 Loss_G: -0.0014 D(x): 7.5558 D(G(z)): -8.3003 / -8.3119 [8/10][150/235] Loss_D: 0.0031 Loss_G: -0.0017 D(x): 7.7257 D(G(z)): -7.7145 / -7.7349 [8/10][200/235] Loss_D: 0.1613 Loss_G: -0.0346 D(x): 3.5118 D(G(z)): -3.0895 / -4.0275 [9/10][0/235] Loss_D: 0.1315 Loss_G: -0.0649 D(x): 2.7865 D(G(z)): 
-3.7079 / -3.2155 [9/10][50/235] Loss_D: 0.0010 Loss_G: -0.0003 D(x): 7.6770 D(G(z)): -8.8490 / -8.8476 [9/10][100/235] Loss_D: 0.0011 Loss_G: -0.0005 D(x): 7.8612 D(G(z)): -8.4162 / -8.4179 [9/10][150/235] Loss_D: 0.0012 Loss_G: -0.0006 D(x): 7.9798 D(G(z)): -8.0923 / -8.0970 [9/10][200/235] Loss_D: 0.0015 Loss_G: -0.0009 D(x): 7.9355 D(G(z)): -7.6068 / -7.6184 [10/10][0/235] Loss_D: 0.0027 Loss_G: -0.0020 D(x): 7.8045 D(G(z)): -6.8003 / -6.8429 [10/10][50/235] Loss_D: 0.0100 Loss_G: -0.0041 D(x): 5.5954 D(G(z)): -5.7344 / -5.8612 [10/10][100/235] Loss_D: 0.0114 Loss_G: -0.0054 D(x): 5.7873 D(G(z)): -5.4751 / -5.5821 [10/10][150/235] Loss_D: 0.0107 Loss_G: -0.0048 D(x): 5.7647 D(G(z)): -5.6374 / -5.7180 [10/10][200/235] Loss_D: 0.0142 Loss_G: -0.0067 D(x): 5.6557 D(G(z)): -5.4688 / -5.5581
# Train with non-saturating G loss
G_losses_nonsat, D_losses_nonsat, G_grads_mean_nonsat, G_grads_std_nonsat, img_list_nonsat = training_loop(saturating=False)
Starting Training Loop... [1/10][0/235] Loss_D: 1.4806 Loss_G: 0.8172 D(x): -0.0447 D(G(z)): 0.0793 / -0.2075 [1/10][50/235] Loss_D: 0.7044 Loss_G: 2.1220 D(x): 0.8629 D(G(z)): -1.1168 / -1.9880 [1/10][100/235] Loss_D: 0.7730 Loss_G: 2.8524 D(x): 0.9181 D(G(z)): -0.8912 / -2.7809 [1/10][150/235] Loss_D: 0.5983 Loss_G: 2.5466 D(x): 1.6292 D(G(z)): -0.8499 / -2.4517 [1/10][200/235] Loss_D: 0.5546 Loss_G: 3.4062 D(x): 2.8636 D(G(z)): -0.5865 / -3.3674 [2/10][0/235] Loss_D: 0.3519 Loss_G: 2.7590 D(x): 1.9300 D(G(z)): -1.7959 / -2.6780 [2/10][50/235] Loss_D: 0.3308 Loss_G: 2.5970 D(x): 2.0969 D(G(z)): -1.7170 / -2.5044 [2/10][100/235] Loss_D: 0.2906 Loss_G: 2.2772 D(x): 1.9329 D(G(z)): -2.1988 / -2.1487 [2/10][150/235] Loss_D: 0.3888 Loss_G: 2.7510 D(x): 2.5243 D(G(z)): -1.2754 / -2.6730 [2/10][200/235] Loss_D: 0.3172 Loss_G: 1.6518 D(x): 1.6552 D(G(z)): -2.5283 / -1.3866 [3/10][0/235] Loss_D: 1.6868 Loss_G: 4.0523 D(x): 5.7553 D(G(z)): 1.3798 / -4.0271 [3/10][50/235] Loss_D: 2.3933 Loss_G: 0.9973 D(x): -2.2437 D(G(z)): -6.1110 / -0.3708 [3/10][100/235] Loss_D: 1.0796 Loss_G: 7.2032 D(x): 5.0410 D(G(z)): 0.5031 / -7.2021 [3/10][150/235] Loss_D: 0.2849 Loss_G: 2.1368 D(x): 2.1947 D(G(z)): -2.0871 / -1.9783 [3/10][200/235] Loss_D: 0.2950 Loss_G: 2.7243 D(x): 2.6610 D(G(z)): -1.7325 / -2.6387 [4/10][0/235] Loss_D: 0.2870 Loss_G: 2.3895 D(x): 2.3218 D(G(z)): -1.9871 / -2.2675 [4/10][50/235] Loss_D: 0.3742 Loss_G: 1.8984 D(x): 1.4942 D(G(z)): -2.4912 / -1.6904 [4/10][100/235] Loss_D: 0.3780 Loss_G: 1.3870 D(x): 1.3651 D(G(z)): -2.7237 / -1.0159 [4/10][150/235] Loss_D: 0.3416 Loss_G: 2.8508 D(x): 2.6546 D(G(z)): -1.5420 / -2.7724 [4/10][200/235] Loss_D: 0.3743 Loss_G: 2.3610 D(x): 2.2388 D(G(z)): -1.5798 / -2.2348 [5/10][0/235] Loss_D: 0.8888 Loss_G: 3.8335 D(x): 3.7872 D(G(z)): 0.1543 / -3.8041 [5/10][50/235] Loss_D: 0.5466 Loss_G: 2.7397 D(x): 2.4668 D(G(z)): -0.8038 / -2.6563 [5/10][100/235] Loss_D: 1.3939 Loss_G: 0.2562 D(x): -0.9272 D(G(z)): -4.3648 / 1.4594 [5/10][150/235] Loss_D: 0.4209 Loss_G: 2.0110 D(x): 1.8750 D(G(z)): -1.5667 / -1.8254 [5/10][200/235] Loss_D: 0.5267 Loss_G: 1.5757 D(x): 1.1206 D(G(z)): -1.8176 / -1.2863 [6/10][0/235] Loss_D: 0.5964 Loss_G: 1.2519 D(x): 0.7687 D(G(z)): -2.1497 / -0.8305 [6/10][50/235] Loss_D: 0.8342 Loss_G: 2.7040 D(x): 2.8922 D(G(z)): -0.0021 / -2.6150 [6/10][100/235] Loss_D: 0.8347 Loss_G: 3.2591 D(x): 2.8237 D(G(z)): 0.0038 / -3.2102 [6/10][150/235] Loss_D: 0.7580 Loss_G: 2.2694 D(x): 2.5903 D(G(z)): -0.2334 / -2.1263 [6/10][200/235] Loss_D: 0.6888 Loss_G: 2.3309 D(x): 2.1053 D(G(z)): -0.4848 / -2.1988 [7/10][0/235] Loss_D: 0.6861 Loss_G: 2.2548 D(x): 2.0530 D(G(z)): -0.5077 / -2.1189 [7/10][50/235] Loss_D: 0.7380 Loss_G: 2.2261 D(x): 2.2898 D(G(z)): -0.3398 / -2.0788 [7/10][100/235] Loss_D: 1.0114 Loss_G: 1.0376 D(x): -0.2493 D(G(z)): -2.6130 / -0.4894 [7/10][150/235] Loss_D: 0.6860 Loss_G: 1.1018 D(x): 0.4584 D(G(z)): -2.3075 / -0.6002 [7/10][200/235] Loss_D: 0.6408 Loss_G: 2.2526 D(x): 1.7789 D(G(z)): -0.7521 / -2.1123 [8/10][0/235] Loss_D: 0.6513 Loss_G: 2.1164 D(x): 1.9070 D(G(z)): -0.6742 / -1.9593 [8/10][50/235] Loss_D: 0.6021 Loss_G: 1.5369 D(x): 1.1008 D(G(z)): -1.3862 / -1.2461 [8/10][100/235] Loss_D: 0.7345 Loss_G: 1.0822 D(x): 0.3116 D(G(z)): -2.2611 / -0.5846 [8/10][150/235] Loss_D: 0.5946 Loss_G: 1.5102 D(x): 0.9741 D(G(z)): -1.6062 / -1.2049 [8/10][200/235] Loss_D: 0.6444 Loss_G: 1.3050 D(x): 0.8399 D(G(z)): -1.5512 / -0.9241 [9/10][0/235] Loss_D: 0.7738 Loss_G: 0.8035 D(x): 0.3042 D(G(z)): -2.0040 / -0.0892 [9/10][50/235] Loss_D: 0.6315 
Loss_G: 1.4747 D(x): 1.1098 D(G(z)): -1.3242 / -1.1537 [9/10][100/235] Loss_D: 0.8309 Loss_G: 1.0950 D(x): 0.1284 D(G(z)): -2.5120 / -0.5905 [9/10][150/235] Loss_D: 0.6704 Loss_G: 1.5814 D(x): 0.9386 D(G(z)): -1.3158 / -1.2952 [9/10][200/235] Loss_D: 0.5917 Loss_G: 1.7399 D(x): 1.3321 D(G(z)): -1.2021 / -1.5003 [10/10][0/235] Loss_D: 0.6440 Loss_G: 1.7975 D(x): 1.4672 D(G(z)): -0.9116 / -1.5754 [10/10][50/235] Loss_D: 0.7311 Loss_G: 0.9297 D(x): 0.5071 D(G(z)): -1.8895 / -0.3204 [10/10][100/235] Loss_D: 0.6918 Loss_G: 1.8201 D(x): 1.9210 D(G(z)): -0.5137 / -1.6040 [10/10][150/235] Loss_D: 0.6514 Loss_G: 1.6173 D(x): 1.1796 D(G(z)): -1.0846 / -1.3509 [10/10][200/235] Loss_D: 0.6477 Loss_G: 1.5817 D(x): 1.2871 D(G(z)): -1.0060 / -1.3080
Finally, let's check out how we did. Here, we will look at three different results. First, we will see how D and G's losses changed during training. Second, we will visualize G's output on the fixed_noise batch over the course of training. And third, we will look at a batch of real data next to a batch of fake data from G.
Loss versus training iteration
Below is a plot of D & G’s losses versus training iterations.
plt.figure(figsize=(10,5))
plt.title("Generator and discriminator loss")
plt.plot(G_losses_sat,label="Saturating G loss", alpha=0.75)
plt.plot(D_losses_sat,label="Saturating D loss", alpha=0.75)
plt.plot(G_losses_nonsat,label="Non-saturating G loss", alpha=0.75)
plt.plot(D_losses_nonsat,label="Non-saturating D loss", alpha=0.75)
plt.xlabel("Iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
Generator gradients mean and standard deviation versus training iteration
Below is a plot of the G gradients mean and standard deviation versus training iterations.
plt.figure(figsize=(10,5))
plt.title("Generator gradient means")
plt.plot(G_grads_mean_sat, label="Saturating G loss", alpha=0.75)
plt.plot(G_grads_mean_nonsat, label="Non-saturating G loss", alpha=0.75)
plt.xlabel("Iterations")
plt.ylabel("Gradient mean")
plt.legend()
plt.show()
plt.figure(figsize=(10,5))
plt.title("Generator gradient standard deviations")
plt.plot(G_grads_std_sat,label="Saturating G loss", alpha=0.75)
plt.plot(G_grads_std_nonsat,label="Non-saturating G loss", alpha=0.75)
plt.xlabel("Iterations")
plt.ylabel("Gradient standard deviation")
plt.legend()
plt.show()
Visualization of G’s progression
Remember how we saved the generator's output on the fixed_noise batch periodically during training. Now, we can visualize the training progression of G with an animation. Press the play button to start the animation.
# Visualize results with saturating G loss
fig = plt.figure(figsize=(8,8))
plt.title('Saturating G loss')
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list_sat]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
# Visualize results with non-saturating G loss
fig = plt.figure(figsize=(8,8))
plt.title('Non-saturating G loss')
plt.axis("off")
ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list_nonsat]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
Real Images vs. Fake Images
Finally, let's take a look at some real images and fake images side by side.
# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))
# Plot the real images
plt.figure(figsize=(15,15))
plt.subplot(1,3,1)
plt.axis("off")
plt.title("Real images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,3,2)
plt.axis("off")
plt.title("Fake images - saturating G loss")
plt.imshow(np.transpose(img_list_sat[-1],(1,2,0)))
# Plot the fake images from the last epoch
plt.subplot(1,3,3)
plt.axis("off")
plt.title("Fake images - non-saturating G loss")
plt.imshow(np.transpose(img_list_nonsat[-1],(1,2,0)))
plt.show()