Backpropagation¶

Backpropagation is a method for efficiently computing the gradient of the cost function of a neural network with respect to its parameters. These partial derivatives can then be used to update the network's parameters using, e.g., gradient descent. This may be the most common method for training neural networks. Deriving backpropagation involves numerous clever applications of the chain rule for functions of vectors. For a somewhat more accessible and readable (but slightly less complete) tutorial which uses notation similar to this one, please see http://neuralnetworksanddeeplearning.com/chap2.html

Review: The chain rule¶

The chain rule is a way to compute the derivative of a function whose variables are themselves functions of other variables. If $C$ is a scalar-valued function of a scalar $z$ and $z$ is itself a scalar-valued function of another scalar variable $w$, then the chain rule states that $$\frac{\partial C}{\partial w} = \frac{\partial C}{\partial z}\frac{\partial z}{\partial w}$$ For scalar-valued functions of more than one variable, the chain rule essentially becomes additive. In other words, if $C$ is a scalar-valued function of $N$ variables $z_1, \ldots, z_N$, each of which is a function of some variable $w$, the chain rule states that $$\frac{\partial C}{\partial w} = \sum_{i = 1}^N \frac{\partial C}{\partial z_i}\frac{\partial z_i}{\partial w}$$
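As a quick numerical check of the multivariable chain rule, the following short snippet (a sketch with arbitrarily chosen functions, not anything from the derivation below) compares the chain-rule sum against a finite-difference estimate of $\frac{\partial C}{\partial w}$:

import numpy as np

# C is a scalar function of z_1, z_2; each z_i is in turn a function of w.
# These particular functions are made up purely for illustration.
def z(w):
    return np.array([np.sin(w), w**2])

def C(z):
    return z[0]*z[1] + np.exp(z[0])

w = 0.7
z_val = z(w)
# Chain rule: dC/dw = sum_i (dC/dz_i)(dz_i/dw)
dC_dz = np.array([z_val[1] + np.exp(z_val[0]), z_val[0]])  # [dC/dz_1, dC/dz_2]
dz_dw = np.array([np.cos(w), 2*w])                         # [dz_1/dw, dz_2/dw]
chain_rule = np.dot(dC_dz, dz_dw)

# Finite-difference estimate of dC/dw for comparison
eps = 1e-6
finite_diff = (C(z(w + eps)) - C(z(w - eps)))/(2*eps)
print(chain_rule, finite_diff)  # the two values should agree closely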

Notation¶

In the following derivation, we'll use the following notation:

$L$ - Number of layers in the network.

$N^n$ - Dimensionality of layer $n \in \{0, \ldots, L\}$. $N^0$ is the dimensionality of the input; $N^L$ is the dimensionality of the output.

$W^m \in \mathbb{R}^{N^m \times N^{m - 1}}$ - Weight matrix for layer $m \in \{1, \ldots, L\}$. $W^m_{ij}$ is the weight between the $i^{th}$ unit in layer $m$ and the $j^{th}$ unit in layer $m - 1$.

$b^m \in \mathbb{R}^{N^m}$ - Bias vector for layer $m$.

$\sigma^m$ - Nonlinear activation function of the units in layer $m$, applied elementwise.

$z^m \in \mathbb{R}^{N^m}$ - Linear mix of the inputs to layer $m$, computed by $z^m = W^m a^{m - 1} + b^m$.

$a^m \in \mathbb{R}^{N^m}$ - Activation of units in layer $m$, computed by $a^m = \sigma^m(z^m) = \sigma^m(W^m a^{m - 1} + b^m)$. $a^L$ is the output of the network. We define the special case $a^0$ as the input of the network. (A short shape-checking sketch of this forward computation follows the notation list.)

$y \in \mathbb{R}^{N^L}$ - Target output of the network.

$C$ - Cost/error function of the network, which is a function of $a^L$ (the network output) and $y$ (treated as a constant).
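To make the notation concrete, here is a minimal NumPy sketch of the forward computation for a single layer; the layer sizes and the use of a sigmoid nonlinearity are arbitrary choices for illustration:

import numpy as np

# Illustrative sizes: a layer with N^m = 4 units fed by N^{m-1} = 3 units
N_prev, N_m = 3, 4

W = np.random.randn(N_m, N_prev)      # W^m has shape (N^m, N^{m-1})
b = np.random.randn(N_m, 1)           # b^m has shape (N^m, 1)
a_prev = np.random.randn(N_prev, 1)   # a^{m-1}, the previous layer's activation

z = W.dot(a_prev) + b                 # z^m = W^m a^{m-1} + b^m
a = 1/(1 + np.exp(-z))                # a^m = sigma^m(z^m), with sigma = sigmoid here
print(z.shape, a.shape)               # both (4, 1), i.e. N^m-dimensional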

Backpropagation in general¶

In order to train the network using a gradient descent algorithm, we need to know the gradient of the cost/error function $C$ with respect to each of the parameters; that is, we need to know $\frac{\partial C}{\partial W^m}$ and $\frac{\partial C}{\partial b^m}$. It will be sufficient to derive an expression for these gradients in terms of the following terms, which we can compute based on the neural network's architecture:

• $\frac{\partial C}{\partial a^L}$: The derivative of the cost function with respect to its argument, the output of the network
• $\frac{\partial a^m}{\partial z^m}$: The derivative of the nonlinearity used in layer $m$ with respect to its argument

To compute the gradient of our cost/error function $C$ with respect to $W^m_{ij}$ (a single entry in the weight matrix of layer $m$), we can first note that $C$ is a function of $a^L$, which is itself a function of the linear mix variables $z^m_k$, which are themselves functions of the weight matrices $W^m$ and biases $b^m$. With this in mind, we can use the chain rule as follows:

$$\frac{\partial C}{\partial W^m_{ij}} = \sum_{k = 1}^{N^m} \frac{\partial C}{\partial z^m_k} \frac{\partial z^m_k}{\partial W^m_{ij}}$$

Note that by definition $$z^m_k = \sum_{l = 1}^{N^{m - 1}} W^m_{kl} a_l^{m - 1} + b^m_k$$ It follows that $\frac{\partial z^m_k}{\partial W^m_{ij}}$ will evaluate to zero when $i \ne k$ because $z^m_k$ does not interact with any elements in $W^m$ except for those in the $k$th row, and we are only considering the entry $W^m_{ij}$. When $i = k$, we have

\begin{align*} \frac{\partial z^m_i}{\partial W^m_{ij}} &= \frac{\partial}{\partial W^m_{ij}}\left(\sum_{l = 1}^{N^{m - 1}} W^m_{il} a_l^{m - 1} + b^m_i\right)\\ &= a^{m - 1}_j\\ \rightarrow \frac{\partial z^m_k}{\partial W^m_{ij}} &= \begin{cases} 0 & k \ne i\\ a^{m - 1}_j & k = i \end{cases} \end{align*}

The fact that $\frac{\partial z^m_k}{\partial W^m_{ij}}$ is $0$ unless $k = i$ causes the summation above to collapse, giving

$$\frac{\partial C}{\partial W^m_{ij}} = \frac{\partial C}{\partial z^m_i} a^{m - 1}_j$$

or in vector form

$$\frac{\partial C}{\partial W^m} = \frac{\partial C}{\partial z^m} a^{m - 1 \top}$$

Similarly for the bias variables $b^m$, we have

$$\frac{\partial C}{\partial b^m_i} = \sum_{k = 1}^{N^m} \frac{\partial C}{\partial z^m_k} \frac{\partial z^m_k}{\partial b^m_i}$$

As above, it follows that $\frac{\partial z^m_k}{\partial b^m_i}$ will evaluate to zero when $i \ne k$ because $z^m_k$ does not interact with any element in $b^m$ except $b^m_k$. When $i = k$, we have

\begin{align*} \frac{\partial z^m_i}{\partial b^m_i} &= \frac{\partial}{\partial b^m_i}\left(\sum_{l = 1}^{N^{m - 1}} W^m_{il} a_l^{m - 1} + b^m_i\right)\\ &= 1\\ \rightarrow \frac{\partial z^m_k}{\partial b^m_i} &= \begin{cases} 0 & k \ne i\\ 1 & k = i \end{cases} \end{align*}

The summation also collapses to give

$$\frac{\partial C}{\partial b^m_i} = \frac{\partial C}{\partial z^m_i}$$

or in vector form

$$\frac{\partial C}{\partial b^m} = \frac{\partial C}{\partial z^m}$$
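In NumPy terms, the two vector forms above say that the weight gradient is the outer product of the column vector $\frac{\partial C}{\partial z^m}$ with $a^{m - 1}$, while the bias gradient is $\frac{\partial C}{\partial z^m}$ itself. A minimal sketch with made-up values:

import numpy as np

N_m, N_prev = 4, 3
dC_dz = np.random.randn(N_m, 1)       # stand-in for the column vector dC/dz^m
a_prev = np.random.randn(N_prev, 1)   # stand-in for a^{m-1}

dC_dW = dC_dz.dot(a_prev.T)           # outer product: shape (N^m, N^{m-1}), same as W^m
dC_db = dC_dz                         # same shape as b^m
print(dC_dW.shape, dC_db.shape)       # (4, 3) (4, 1)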

Now, we must compute $\frac{\partial C}{\partial z^m_k}$. For the final layer ($m = L$), this term is straightforward to compute using the chain rule:

$$\frac{\partial C}{\partial z^L_k} = \frac{\partial C}{\partial a^L_k} \frac{\partial a^L_k}{\partial z^L_k}$$

or, in vector form

$$\frac{\partial C}{\partial z^L} = \frac{\partial C}{\partial a^L} \circ \frac{\partial a^L}{\partial z^L}$$

The first term $\frac{\partial C}{\partial a^L}$ is just the derivative of the cost function with respect to its argument, whose form depends on the cost function chosen. Similarly, $\frac{\partial a^m}{\partial z^m}$ (for any layer $m$ including $L$) is the derivative of the layer's nonlinearity with respect to its argument and will depend on the choice of nonlinearity. For other layers, we again invoke the chain rule:

\begin{align*} \frac{\partial C}{\partial z^m_k} &= \frac{\partial C}{\partial a^m_k} \frac{\partial a^m_k}{\partial z^m_k}\\ &= \left(\sum_{l = 1}^{N^{m + 1}}\frac{\partial C}{\partial z^{m + 1}_l}\frac{\partial z^{m + 1}_l}{\partial a^m_k}\right)\frac{\partial a^m_k}{\partial z^m_k}\\ &= \left(\sum_{l = 1}^{N^{m + 1}}\frac{\partial C}{\partial z^{m + 1}_l}\frac{\partial}{\partial a^m_k} \left(\sum_{h = 1}^{N^m} W^{m + 1}_{lh} a_h^m + b_l^{m + 1}\right)\right) \frac{\partial a^m_k}{\partial z^m_k}\\ &= \left(\sum_{l = 1}^{N^{m + 1}}\frac{\partial C}{\partial z^{m + 1}_l} W^{m + 1}_{lk}\right) \frac{\partial a^m_k}{\partial z^m_k}\\ &= \left(\sum_{l = 1}^{N^{m + 1}}W^{m + 1\top}_{kl} \frac{\partial C}{\partial z^{m + 1}_l}\right) \frac{\partial a^m_k}{\partial z^m_k}\\ \end{align*}

where the last simplification was made because by convention $\frac{\partial C}{\partial z^{m + 1}_l}$ is a column vector, allowing us to write the following vector form:

$$\frac{\partial C}{\partial z^m} = \left(W^{m + 1\top} \frac{\partial C}{\partial z^{m + 1}}\right) \circ \frac{\partial a^m}{\partial z^m}$$

Note that we now have the ingredients to efficiently compute the gradient of the cost function with respect to the network's parameters: First, we compute $\frac{\partial C}{\partial z^L}$ based on the choice of cost function and nonlinearity. Then, we can recursively compute $\frac{\partial C}{\partial z^m}$ layer by layer, using the term $\frac{\partial C}{\partial z^{m + 1}}$ computed for layer $m + 1$ in the previous step and the nonlinearity of layer $m$ (this is called the "backward pass"). Finally, plugging each $\frac{\partial C}{\partial z^m}$ into the expressions above yields $\frac{\partial C}{\partial W^m}$ and $\frac{\partial C}{\partial b^m}$.
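As an illustration of the backward pass, the following sketch runs the recursion above on a small network with arbitrary sizes, assuming sigmoid nonlinearities in every layer (so that $\frac{\partial a^m}{\partial z^m} = a^m(1 - a^m)$) and treating $\frac{\partial C}{\partial z^L}$ as given:

import numpy as np

def sigmoid(z):
    return 1/(1 + np.exp(-z))

# Arbitrary layer sizes N^0, ..., N^L for a three-layer network
sizes = [2, 4, 3, 1]
Ws = [np.random.randn(n_out, n_in) for n_in, n_out in zip(sizes[:-1], sizes[1:])]
bs = [np.random.randn(n_out, 1) for n_out in sizes[1:]]

# Forward pass, storing a^0, ..., a^L
a = [np.random.randn(sizes[0], 1)]
for W, b in zip(Ws, bs):
    a.append(sigmoid(W.dot(a[-1]) + b))

# Pretend dC/dz^L has already been computed from the cost function (made-up value here)
dC_dz = [np.random.randn(sizes[-1], 1)]
# Backward pass: dC/dz^m = (W^{m+1 T} dC/dz^{m+1}) elementwise-times a^m (1 - a^m)
for W_next, a_m in zip(reversed(Ws[1:]), reversed(a[1:-1])):
    dC_dz.append(W_next.T.dot(dC_dz[-1]) * a_m * (1 - a_m))
dC_dz.reverse()  # dC_dz[m - 1] now holds dC/dz^m for m = 1, ..., L
print([g.shape for g in dC_dz])  # [(4, 1), (3, 1), (1, 1)]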

Backpropagation in practice¶

As discussed above, the exact form of the updates depends on both the chosen cost function and each layer's chosen nonlinearity. The following table lists some common choices of nonlinearity, along with the partial derivative required for deriving each layer's gradient:

| Nonlinearity | $a^m = \sigma^m(z^m)$ | $\frac{\partial a^m}{\partial z^m}$ | Notes |
|---|---|---|---|
| Sigmoid | $\frac{1}{1 + e^{-z^m}}$ | $\sigma^m(z^m)(1 - \sigma^m(z^m)) = a^m(1 - a^m)$ | "Squashes" any input to the range $[0, 1]$ |
| Tanh | $\frac{e^{z^m} - e^{-z^m}}{e^{z^m} + e^{-z^m}}$ | $1 - (\sigma^m(z^m))^2 = 1 - (a^m)^2$ | Equivalent, up to scaling, to the sigmoid function |
| ReLU | $\max(0, z^m)$ | $0$ for $z^m < 0$; $1$ for $z^m \ge 0$ | Commonly used in neural networks with many layers |
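In code, the entries of this table might be written as follows (a sketch; the function names are illustrative):

import numpy as np

# Each function returns the activation a^m and its elementwise derivative da^m/dz^m
def sigmoid(z):
    a = 1/(1 + np.exp(-z))
    return a, a*(1 - a)

def tanh(z):
    a = np.tanh(z)
    return a, 1 - a**2

def relu(z):
    a = np.maximum(0, z)
    return a, (z >= 0).astype(float)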

Similarly, the following table collects some common cost functions and the partial derivative needed to compute the gradient for the final layer:

| Cost Function | $C$ | $\frac{\partial C}{\partial a^L}$ | Notes |
|---|---|---|---|
| Squared Error | $\frac{1}{2}(y - a^L)^\top(y - a^L)$ | $a^L - y$ | Commonly used when the output is not constrained to a specific range |
| Cross-Entropy | $(y - 1)\log(1 - a^L) - y\log(a^L)$ | $\frac{a^L - y}{a^L(1 - a^L)}$ | Commonly used for binary classification tasks; can yield faster convergence |
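These cost functions and their derivatives with respect to $a^L$ might similarly be sketched as:

import numpy as np

# Each function returns the cost C and its derivative dC/da^L
def squared_error(a, y):
    return 0.5*np.sum((y - a)**2), a - y

def cross_entropy(a, y):
    return np.sum((y - 1)*np.log(1 - a) - y*np.log(a)), (a - y)/(a*(1 - a))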

In practice, backpropagation proceeds in the following manner for each training sample:

1. Forward pass: Given the network input $a^0$, compute $a^m$ recursively by $$a^1 = \sigma^1(W^1 a^0 + b^1), \ldots, a^L = \sigma^L(W^L a^{L - 1} + b^L)$$
2. Backward pass: Compute $$\frac{\partial C}{\partial z^L} = \frac{\partial C}{\partial a^L} \circ \frac{\partial a^L}{\partial z^L}$$ for the final layer based on the tables above, then recursively compute $$\frac{\partial C}{\partial z^m} = \left(W^{m + 1\top} \frac{\partial C}{\partial z^{m + 1}}\right) \circ \frac{\partial a^m}{\partial z^m}$$ for all other layers. Plug these values into $$\frac{\partial C}{\partial W^m} = \frac{\partial C}{\partial z^m} a^{m - 1 \top}$$ and $$\frac{\partial C}{\partial b^m} = \frac{\partial C}{\partial z^m}$$ to obtain the updates. A short numerical check of this procedure is sketched below.
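The following sketch runs this two-step procedure on a tiny, randomly initialized network and checks a single weight gradient against a finite-difference estimate; the sigmoid nonlinearity and squared-error cost are illustrative choices taken from the tables above:

import numpy as np

def sigmoid(z):
    return 1/(1 + np.exp(-z))

# Tiny network with arbitrary sizes N^0 = 2, N^1 = 3, N^2 = 1
sizes = [2, 3, 1]
Ws = [np.random.randn(n_out, n_in) for n_in, n_out in zip(sizes[:-1], sizes[1:])]
bs = [np.random.randn(n_out, 1) for n_out in sizes[1:]]
x = np.random.randn(sizes[0], 1)
y = np.array([[1.0]])

def forward(Ws, bs):
    # Step 1: forward pass, returning the squared-error cost and all activations
    a = [x]
    for W, b in zip(Ws, bs):
        a.append(sigmoid(W.dot(a[-1]) + b))
    return 0.5*np.sum((y - a[-1])**2), a

# Step 2: backward pass
C, a = forward(Ws, bs)
dC_dzs = [(a[-1] - y) * a[-1]*(1 - a[-1])]  # dC/dz^L for squared error + sigmoid
for W_next, a_m in zip(reversed(Ws[1:]), reversed(a[1:-1])):
    dC_dzs.append(W_next.T.dot(dC_dzs[-1]) * a_m * (1 - a_m))
dC_dzs.reverse()
dC_dWs = [d.dot(a_m.T) for d, a_m in zip(dC_dzs, a[:-1])]  # dC/dW^m = dC/dz^m a^{m-1 T}

# Finite-difference check of a single entry, W^1_{00}
eps = 1e-6
Ws[0][0, 0] += eps
C_plus, _ = forward(Ws, bs)
Ws[0][0, 0] -= 2*eps
C_minus, _ = forward(Ws, bs)
Ws[0][0, 0] += eps
print(dC_dWs[0][0, 0], (C_plus - C_minus)/(2*eps))  # should agree closely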

Example: Sigmoid network with cross-entropy loss using gradient descent¶

A common network architecture is one with fully connected layers where each layer's nonlinearity is the sigmoid function $a^m = \frac{1}{1 + e^{-z^m}}$ and the cost function is the cross-entropy loss $(y - 1)\log(1 - a^L) - y\log(a^L)$. To compute the updates for gradient descent, we first compute (based on the tables above) \begin{align*} \frac{\partial C}{\partial z^L} &= \frac{\partial C}{\partial a^L} \circ \frac{\partial a^L}{\partial z^L}\\ &= \left(\frac{a^L - y}{a^L(1 - a^L)}\right) \circ a^L(1 - a^L)\\ &= a^L - y \end{align*} From here, we can compute \begin{align*} \frac{\partial C}{\partial z^{L - 1}} &= \left(W^{L\top} \frac{\partial C}{\partial z^L} \right) \circ \frac{\partial a^{L - 1}}{\partial z^{L - 1}}\\ &= \left(W^{L\top} (a^L - y)\right) \circ a^{L - 1}(1 - a^{L - 1})\\ \frac{\partial C}{\partial z^{L - 2}} &= \left(W^{L - 1\top} \frac{\partial C}{\partial z^{L - 1}} \right) \circ \frac{\partial a^{L - 2}}{\partial z^{L - 2}}\\ &= \left(W^{L - 1\top} \left(\left(W^{L\top} (a^L - y)\right) \circ a^{L - 1}(1 - a^{L - 1})\right)\right) \circ a^{L - 2}(1 - a^{L - 2}) \end{align*} and so on, until we have computed $\frac{\partial C}{\partial z^m}$ for $m \in \{1, \ldots, L\}$. This allows us to compute $\frac{\partial C}{\partial W^m}$ and $\frac{\partial C}{\partial b^m}$, e.g. \begin{align*} \frac{\partial C}{\partial W^L} &= \frac{\partial C}{\partial z^L} a^{L - 1 \top}\\ &= (a^L - y)a^{L - 1\top}\\ \frac{\partial C}{\partial W^{L - 1}} &= \frac{\partial C}{\partial z^{L - 1}} a^{L - 2 \top}\\ &= \left(\left(W^{L\top} (a^L - y)\right) \circ a^{L - 1}(1 - a^{L - 1})\right) a^{L - 2\top} \end{align*} and so on. Standard gradient descent then updates each parameter as follows: $$W^m = W^m - \lambda \frac{\partial C}{\partial W^m}$$ $$b^m = b^m - \lambda \frac{\partial C}{\partial b^m}$$ where $\lambda$ is the learning rate. This process is repeated until some stopping criterion is met.
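The cancellation $\frac{\partial C}{\partial z^L} = a^L - y$ can also be checked numerically; the sketch below differentiates the cross-entropy cost of a sigmoid output with respect to $z$ by finite differences (the scalar values of $z$ and $y$ are arbitrary):

import numpy as np

def sigmoid(z):
    return 1/(1 + np.exp(-z))

def cross_entropy_of_z(z, y):
    # Cross-entropy cost of a sigmoid output, as a function of the pre-activation z
    a = sigmoid(z)
    return (y - 1)*np.log(1 - a) - y*np.log(a)

z, y = 0.3, 1.0   # arbitrary pre-activation and target
eps = 1e-6
finite_diff = (cross_entropy_of_z(z + eps, y) - cross_entropy_of_z(z - eps, y))/(2*eps)
print(finite_diff, sigmoid(z) - y)  # both should be approximately a^L - y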

Toy Python example¶

Due to the recursive nature of the backpropagation algorithm, it lends itself well to software implementations. The following code implements a multi-layer perceptron with user-supplied layer sizes, using sigmoid nonlinearities and the cross-entropy cost (so that $\frac{\partial C}{\partial z^L} = a^L - y$, as derived above), and trains it with backpropagation and gradient descent.

In :
# Ensure python 3 forward compatibility
from __future__ import print_function
import numpy as np

def sigmoid(x):
    return 1/(1 + np.exp(-x))

class SigmoidLayer:
    def __init__(self, n_input, n_output):
        self.W = np.random.randn(n_output, n_input)
        self.b = np.random.randn(n_output, 1)
    def output(self, X):
        if X.ndim == 1:
            X = X.reshape(-1, 1)
        return sigmoid(self.W.dot(X) + self.b)

class SigmoidNetwork:

    def __init__(self, layer_sizes):
        '''
        :parameters:
            - layer_sizes : list of int
                List of layer sizes of length L+1 (including the input dimensionality)
        '''
        self.layers = []
        for n_input, n_output in zip(layer_sizes[:-1], layer_sizes[1:]):
            self.layers.append(SigmoidLayer(n_input, n_output))

    def train(self, X, y, learning_rate=0.2):
        X = np.array(X)
        y = np.array(y)
        if X.ndim == 1:
            X = X.reshape(-1, 1)
        if y.ndim == 1:
            y = y.reshape(1, -1)

        # Forward pass - compute a^n for n in {0, ... L}
        layer_outputs = [X]
        for layer in self.layers:
            layer_outputs.append(layer.output(layer_outputs[-1]))

        # Backward pass - compute \partial C/\partial z^m for m in {L, ..., 1}
        cost_partials = [layer_outputs[-1] - y]
        for layer, layer_output in zip(reversed(self.layers), reversed(layer_outputs[:-1])):
            cost_partials.append(layer.W.T.dot(cost_partials[-1])*layer_output*(1 - layer_output))
        cost_partials.reverse()

        # Compute weight gradient steps: dC/dW^m = (dC/dz^m) a^{m-1 T}, averaged over the batch
        W_updates = []
        for cost_partial, layer_output in zip(cost_partials[1:], layer_outputs[:-1]):
            W_updates.append(cost_partial.dot(layer_output.T)/X.shape[1])
        # and bias gradient steps: dC/db^m = dC/dz^m, averaged over the batch
        b_updates = [cost_partial.mean(axis=1).reshape(-1, 1) for cost_partial in cost_partials[1:]]

        for W_update, b_update, layer in zip(W_updates, b_updates, self.layers):
            layer.W -= W_update*learning_rate
            layer.b -= b_update*learning_rate

    def output(self, X):
        a = np.array(X)
        if a.ndim == 1:
            a = a.reshape(-1, 1)
        for layer in self.layers:
            a = layer.output(a)
        return a
In :
nn = SigmoidNetwork([2, 2, 1])
X = np.array([[0, 1, 0, 1],
              [0, 0, 1, 1]])
y = np.array([0, 1, 1, 0])
for n in range(int(1e3)):
    nn.train(X, y, learning_rate=1.)
print("Input\tOutput\tQuantized")
for i in [[0, 0], [1, 0], [0, 1], [1, 1]]:
    print("{}\t{:.4f}\t{}".format(i, nn.output(i)[0, 0], 1*(nn.output(i) > .5)))
Input	Output	Quantized
[0, 0]	0.0143	[[0]]
[1, 0]	0.9888	[[1]]
[0, 1]	0.9892	[[1]]
[1, 1]	0.0124	[[0]]