Consider a simple, recursive implementation of Fibonacci:

In [1]:

```
def fib(n):
    '''Regular, recursive Fibonacci.'''
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

In [2]:

```
[fib(n) for n in range(10)]
```

Out[2]:

```
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The problem is that it is slow:

In [3]:

```
%timeit fib(30)
```

It's slow because it recomputes many smaller Fibonacci numbers over and over again, resulting in exponential cost. Dynamic programming techniques can be used to speed things up.
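To make the exponential blow-up concrete, here is a small sketch (the `count_calls` wrapper is my own, not part of the original notebook) that counts every recursive call:

```python
from functools import wraps

def count_calls(func):
    '''Wrap a function and count how many times it is invoked.'''
    @wraps(func)
    def wrapper(n):
        wrapper.calls += 1
        return func(n)
    wrapper.calls = 0
    return wrapper

@count_calls
def counted_fib(n):
    if n < 2:
        return n
    return counted_fib(n - 1) + counted_fib(n - 2)

counted_fib(20)
print(counted_fib.calls)  # far more calls than the 21 distinct inputs
```

For `n = 20` there are only 21 distinct inputs, yet the naive recursion makes tens of thousands of calls, nearly all of them recomputing values already computed elsewhere in the call tree.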

## The `%timeOnce` testing magic

The default behaviour of `%timeit` is to re-run a statement 7 times, with each run comprising several loops. Since this notebook looks at memoized functions (which can answer repeated calls in constant time), it is better to have only 1 test run and 1 loop.

In [4]:

```
%alias_magic timeOnce %timeit -p "-n1 -r1"
```

In [5]:

```
%timeOnce fib(30)
```

Dynamic programming involves breaking a problem up into smaller sub-problems, solving the sub-problems, and then using the solutions to solve the original problem. This strategy can be employed in two ways: top-down and bottom-up.

Memoization is a technique where the results of a function call are remembered by the function itself. When a memoized function is called with a particular input, the result is computed, stored somewhere (e.g. in a hashmap), and returned. If the function is then called again with the same input, the result can simply be retrieved from the hashmap instead of being recomputed.
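The idea is simple enough to sketch by hand before reaching for the standard library (the `memoize` decorator below is my own illustration, not the notebook's code):

```python
def memoize(func):
    '''Hand-rolled memoizer: a dict maps each input to its result.'''
    cache = {}
    def wrapper(n):
        if n not in cache:       # first call with this input: compute and store
            cache[n] = func(n)
        return cache[n]          # later calls: a constant-time dict lookup
    return wrapper

@memoize
def dict_fib(n):
    if n < 2:
        return n
    return dict_fib(n - 1) + dict_fib(n - 2)

print(dict_fib(30))  # 832040
```

The decorator rebinds the name `dict_fib` to the wrapper, so even the recursive calls inside the function body go through the cache.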

Python makes memoization very easy with the `@lru_cache` decorator. As it happens, Fibonacci is used by the Python docs to illustrate how `@lru_cache` works.

In [6]:

```
from functools import lru_cache
```

In [7]:

```
@lru_cache(maxsize=None)
def memoized_fib(n):
    '''Memoized Fibonacci.
    source: https://docs.python.org/3/library/functools.html'''
    if n < 2:
        return n
    return memoized_fib(n - 1) + memoized_fib(n - 2)
```

The only difference between `memoized_fib()` and `fib()` is the `@lru_cache` decorator, and yet:

In [8]:

```
%timeOnce memoized_fib(30)
```

`fib()` takes ~400 milliseconds to compute the 30th Fibonacci number while `memoized_fib()` only takes ~20 *microseconds*. The cost is $O(n)$.

`memoized_fib()` does even better when tested again:

In [9]:

```
%timeOnce memoized_fib(30)
```

For the second call, `memoized_fib()` only needs to retrieve the result of `memoized_fib(30)` from the cache, so it is significantly faster.

It is possible to view the effectiveness of the cache with `.cache_info()`:

In [10]:

```
memoized_fib.cache_info()
```

Out[10]:

And to clear it:

In [11]:

```
memoized_fib.cache_clear()
```

While memoization is easy (in Python), it's not a perfect solution. Many nested recursive calls must be made to build up the cache from scratch, so it is easy to hit Python's recursion limit.

In [12]:

```
try:
    memoized_fib(800)
except RecursionError as e:
    print(e)
```
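One possible workaround (a sketch of my own, not from the original notebook): since every cached value caps how deep later calls must recurse, the cache can be warmed in small increments before making the big call.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def memoized_fib(n):
    if n < 2:
        return n
    return memoized_fib(n - 1) + memoized_fib(n - 2)

# Warm the cache in steps of 100: each call only recurses until it
# reaches values cached by the previous step, so the depth stays small.
for step in range(0, 801, 100):
    memoized_fib(step)

print(len(str(memoized_fib(800))))  # fib(800) is a 167-digit number
```

This works, but it requires the caller to know about (and manage) the cache, which rather defeats the point of the decorator being transparent.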

With memoization, the cache is built up "automatically" from function calls. However, there is no guarantee that the construction of the cache is done optimally. Although it is true that no computation is repeated twice, it can take many recursive calls before a cached solution is found. Can this be avoided?

What if the cache were built-up intentionally? It can start with some base values:

\begin{align} \text{fib}(0) &= 0 \\ \text{fib}(1) &= 1 \end{align}

and these base values can be used to add larger Fibonacci numbers to the cache:

\begin{align} \text{fib}(2) &= \text{fib}(1) + \text{fib}(0) \\ \text{fib}(2) &= 1 \end{align}

This process can be repeated all the way up to the Fibonacci number given to the function.

General formula:

$$ \text{fib}(n) = \text{fib}(n-1) + \text{fib}(n-2) $$

In [13]:

```
def tabulated_fib(n):
    '''Tabulated Fibonacci.'''
    # Create a cache to store computed Fibonacci values.
    # Seed it with fib(0) = 0 and fib(1) = 1.
    cache = [0, 1]
    # Compute fib(2), fib(3), ..., fib(n)
    for num in range(2, n + 1):
        cache.append(cache[num - 1] + cache[num - 2])
    return cache[n]
```

How does it fare against `memoized_fib()`?

In [14]:

```
%timeOnce tabulated_fib(30)
```

`tabulated_fib()` is a bit faster than `memoized_fib()`, but it is still in the same complexity class as `memoized_fib()`: $O(n)$.

Another advantage `tabulated_fib()` has over `memoized_fib()` is that it is not recursive, so it will never hit Python's recursion limit.

In [15]:

```
tabulated_fib(800)
```

Out[15]:

The full cache isn't actually needed to compute Fibonacci, but I included it because dynamic programming often involves creating tables of solutions to small problems and referring to those tables to solve large problems. For Fibonacci, only the last two computations are needed but generally speaking, larger tables may be necessary to solve other problems using a dynamic programming approach.

In [16]:

```
def slim_fib(n):
    '''Only remember the last two computations (fib(n-1) and fib(n-2)).'''
    # Handle the base cases directly; the loop below assumes n >= 2.
    if n < 2:
        return n
    prev_fib = 1
    prev_prev_fib = 0
    for num in range(2, n + 1):
        next_fib = prev_prev_fib + prev_fib
        prev_prev_fib = prev_fib
        prev_fib = next_fib
    return prev_fib
```

`slim_fib()` is the fastest of all:

In [17]:

```
%timeOnce slim_fib(30)
```

`slim_fib()` is also $O(n)$, but only takes $O(1)$ space (compared to `tabulated_fib()`, which takes $O(n)$ space).
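As a closing sanity check, here is a quick sketch (re-defining the two iterative versions from above, with `slim_fib` handling the `n < 2` base cases directly) confirming that both produce the same sequence:

```python
def tabulated_fib(n):
    '''Tabulated Fibonacci: keep the whole table of results.'''
    cache = [0, 1]
    for num in range(2, n + 1):
        cache.append(cache[num - 1] + cache[num - 2])
    return cache[n]

def slim_fib(n):
    '''Space-efficient Fibonacci: keep only the last two results.'''
    if n < 2:
        return n
    prev_prev_fib, prev_fib = 0, 1
    for _ in range(2, n + 1):
        prev_prev_fib, prev_fib = prev_fib, prev_prev_fib + prev_fib
    return prev_fib

# Both versions should agree everywhere, including the base cases.
assert all(tabulated_fib(n) == slim_fib(n) for n in range(100))
print(slim_fib(30))  # 832040
```

Same answers, same $O(n)$ time; the only difference is how much of the table each one keeps around.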