import torch
Let's differentiate the function $y = 2\mathbf{x}^{\top}\mathbf{x}$ with respect to the column vector $\mathbf{x}$. First, we create the variable x and assign it an initial value.
x = torch.arange(4.0)
x
tensor([0., 1., 2., 3.])
Before we calculate the gradient of $y$ with respect to $\mathbf{x}$, we need a place to store it.
x.requires_grad_(True)  # same as x = torch.arange(4.0, requires_grad=True)
x.grad  # the gradient is None by default
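As a quick sanity check (a sketch, not part of the original code): before any call to backward, the grad attribute holds no value.

```python
import torch

x = torch.arange(4.0, requires_grad=True)
print(x.grad)  # None: no gradient has been computed yet
```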
We now calculate our function of x and assign the result to y.
y = 2 * torch.dot(x, x)
y
tensor(28., grad_fn=<MulBackward0>)
We can now take the gradient of y with respect to x.
y.backward()
x.grad
tensor([ 0., 4., 8., 12.])
We already know that the gradient of the function $y = 2\mathbf{x}^{\top}\mathbf{x}$ with respect to $\mathbf{x}$ should be $4\mathbf{x}$. Let's verify that the automatically computed gradient matches.
x.grad == 4 * x
tensor([True, True, True, True])
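The expected gradient follows from a short derivation:

```latex
y = 2\,\mathbf{x}^{\top}\mathbf{x} = 2\sum_i x_i^2,
\qquad
\frac{\partial y}{\partial x_i} = 4 x_i,
\qquad
\nabla_{\mathbf{x}}\, y = 4\mathbf{x}.
```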
Now let's calculate another function of x and take its gradient. Note that PyTorch accumulates gradients by default, so we first reset x.grad to zero.
x.grad.zero_()
y = x.sum()
y.backward()
x.grad
tensor([1., 1., 1., 1.])
When y is not a scalar, invoking backward requires a gradient argument specifying the vector by which to multiply the Jacobian. Passing a vector of ones sums up the gradients computed individually for each example.
x.grad.zero_()
y = x * x
y.backward(gradient=torch.ones(len(y)))
x.grad
tensor([0., 2., 4., 6.])
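Equivalently (a sketch showing the same summed gradient), we can reduce y to a scalar first and call backward with no arguments; the result matches passing a vector of ones.

```python
import torch

x = torch.arange(4.0, requires_grad=True)
y = x * x
# Summing first gives the same gradient as y.backward(gradient=torch.ones(len(y)))
y.sum().backward()
print(x.grad)  # tensor([0., 2., 4., 6.])
```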
Sometimes we wish to move some calculations outside of the recorded computational graph. Here, detach returns a variable u with the same value as y but discards the record of how y was computed, so gradients will not flow through u back to x.
x.grad.zero_()
y = x * x
u = y.detach()
z = u * x
z.sum().backward()
x.grad == u
tensor([True, True, True, True])
Since the computation of y itself was still recorded, we can subsequently run backpropagation on y to get the derivative of y = x * x with respect to x, which is 2 * x.
x.grad.zero_()
y.sum().backward()
x.grad == 2 * x
tensor([True, True, True, True])
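To make the detach behavior concrete (a sketch with the same values as above, not part of the original code), the gradient of z = u * x is simply u, i.e. the numeric values of x * x:

```python
import torch

x = torch.arange(4.0, requires_grad=True)
y = x * x
u = y.detach()   # u holds the values of y but carries no graph history
z = u * x        # u is treated as a constant here
z.sum().backward()
print(x.grad)    # dz/dx = u = x*x -> tensor([0., 1., 4., 9.])
```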
Even if a function requires passing through a maze of Python control flow (e.g., conditionals, loops, and arbitrary function calls), we can still calculate the gradient of the resulting variable.
def f(a):
    b = a * 2
    while b.norm() < 1000:
        b = b * 2
    if b.sum() > 0:
        c = b
    else:
        c = 100 * b
    return c
a = torch.randn(size=(), requires_grad=True)
d = f(a)
d.backward()
a.grad == d / a
tensor(True)
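The same function also handles vector inputs if we reduce d to a scalar before calling backward. A sketch with a hypothetical 4-element input: since f only ever scales a by a single constant, the gradient still equals d / a elementwise.

```python
import torch

def f(a):
    b = a * 2
    while b.norm() < 1000:
        b = b * 2
    if b.sum() > 0:
        c = b
    else:
        c = 100 * b
    return c

a = torch.randn(4, requires_grad=True)
d = f(a)
d.sum().backward()  # d is a vector, so reduce it to a scalar first
print(torch.allclose(a.grad, d / a))  # True: f is linear, d = k * a
```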