PyTorch allows you to dynamically define computational graphs. This is done by operating on `Variable`s, which wrap PyTorch's Tensor objects.

Here is a simple example:

In [1]:

```
import torch
from torch.autograd import Variable
import numpy as np
```

In [2]:

```
def f(x):
    return x**2 + 2 * x
```

In [3]:

```
x = Variable(torch.from_numpy(np.array([4.0])), requires_grad=True)
y = f(x)
```

In [4]:

```
y.backward()
```

In [5]:

```
x.grad.data # 2x + 2 for x = 4
```

In [6]:

```
x = Variable(torch.from_numpy(np.array([5.0])), requires_grad=True)
y = f(x)
```

In [7]:

```
y.backward()
```

In [8]:

```
x.grad.data # 2x + 2 for x = 5
```

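One detail worth knowing, though it is not shown above: PyTorch accumulates gradients into `.grad` across calls to `backward()` rather than overwriting them. That is why the example re-creates `x` for the second evaluation. A minimal sketch of the accumulation behavior (variable names here are just for illustration):

```python
import torch
from torch.autograd import Variable
import numpy as np

x = Variable(torch.from_numpy(np.array([4.0])), requires_grad=True)

y = x ** 2 + 2 * x
y.backward()
first = x.grad.data.clone()   # 2x + 2 = 10 for x = 4

y = x ** 2 + 2 * x
y.backward()
second = x.grad.data.clone()  # 20: the new 10 is added to the stored 10

x.grad.data.zero_()           # reset the accumulated gradient in place
```

Zeroing gradients between passes is the usual pattern in training loops, which is why optimizers expose a `zero_grad()` method.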
Note that, unlike in TensorFlow, we defined the graph on the fly: it is built as the operations execute. That is why it was convenient to define a plain Python function; calling the function is what constructs the graph.
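Because the graph is built at execution time, it can contain ordinary Python control flow, and its shape may differ from call to call depending on the data. A small sketch (the function `g` is hypothetical, not part of the example above):

```python
import torch
from torch.autograd import Variable
import numpy as np

def g(x):
    # An ordinary Python branch: which subgraph is built
    # depends on the runtime value of x.
    if float(x.data[0]) > 0:
        return x ** 3
    return -x

x = Variable(torch.from_numpy(np.array([2.0])), requires_grad=True)
y = g(x)      # takes the x**3 branch for this input
y.backward()
x.grad.data   # 3 * x**2 = 12 for x = 2
```

In a static-graph framework this branch would have to be expressed with special graph operations; here autograd simply records whichever branch actually ran.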