“Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results.” – Michael Crichton

NumPy is a first-rate library for numerical programming

- Widely used in academia, finance and industry.
- Mature, fast, stable and under continuous development.

We have already seen some code involving NumPy in the preceding lectures.

In this lecture, we will start a more systematic discussion of both

- NumPy arrays and
- the fundamental array processing operations provided by NumPy.

The essential problem that NumPy solves is fast array processing.

The most important structure that NumPy defines is an array data type formally called a numpy.ndarray.

NumPy arrays power a large proportion of the scientific Python ecosystem.

Let’s first import the library.

In [ ]:

```
import numpy as np
```

To create a NumPy array containing only zeros we use np.zeros

In [ ]:

```
a = np.zeros(3)
a
```

In [ ]:

```
type(a)
```

NumPy arrays are somewhat like native Python lists, except that

- Data *must be homogeneous* (all elements of the same type).
- These types must be one of the data types (`dtypes`) provided by NumPy.

The most important of these dtypes are:

- float64: 64 bit floating-point number
- int64: 64 bit integer
- bool: 8 bit True or False

There are also dtypes to represent complex numbers, unsigned integers, etc.
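For example, complex and unsigned integer arrays can be requested via the `dtype` argument:

In [ ]:

```python
import numpy as np

z = np.zeros(3, dtype=complex)    # complex floating point (complex128)
u = np.zeros(3, dtype=np.uint8)   # 8 bit unsigned integers
print(z.dtype, u.dtype)
```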

On modern machines, the default dtype for arrays is `float64`

In [ ]:

```
a = np.zeros(3)
type(a[0])
```

If we want to use integers we can specify as follows:

In [ ]:

```
a = np.zeros(3, dtype=int)
type(a[0])
```

In [ ]:

```
z = np.zeros(10)
```

Here `z` is a *flat* array with no dimension: neither row nor column vector.

The dimension is recorded in the `shape` attribute, which is a tuple

In [ ]:

```
z.shape
```

Here the shape tuple has only one element, which is the length of the array (tuples with one element end with a comma).

To give it dimension, we can change the `shape` attribute

In [ ]:

```
z.shape = (10, 1)
z
```

In [ ]:

```
z = np.zeros(4)
z.shape = (2, 2)
z
```

As we’ve seen, the `np.zeros` function creates an array of zeros.

You can probably guess what `np.ones` creates.

Related is `np.empty`, which creates arrays in memory that can later be populated with data

In [ ]:

```
z = np.empty(3)
z
```

The numbers you see here are garbage values.

(Python allocates 3 contiguous 64 bit pieces of memory, and the existing contents of those memory slots are interpreted as `float64` values)

To set up a grid of evenly spaced numbers use `np.linspace`

In [ ]:

```
z = np.linspace(2, 4, 5) # From 2 to 4, with 5 elements
```

To create an identity matrix use either `np.identity` or `np.eye`

In [ ]:

```
z = np.identity(2)
z
```

In addition, NumPy arrays can be created from Python lists, tuples, etc. using `np.array`

In [ ]:

```
z = np.array([10, 20]) # ndarray from Python list
z
```

In [ ]:

```
type(z)
```

In [ ]:

```
z = np.array((10, 20), dtype=float) # Here 'float' is equivalent to 'np.float64'
z
```

In [ ]:

```
z = np.array([[1, 2], [3, 4]]) # 2D array from a list of lists
z
```

See also `np.asarray`, which performs a similar function, but does not make a distinct copy of data already in a NumPy array.

In [ ]:

```
na = np.linspace(10, 20, 2)
na is np.asarray(na) # Does not copy NumPy arrays
```

In [ ]:

```
na is np.array(na) # Does make a new copy --- perhaps unnecessarily
```

To read in the array data from a text file containing numeric data use `np.loadtxt` or `np.genfromtxt` (see the documentation for details).
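As a small sketch, `np.loadtxt` also accepts any file-like object, so we can demonstrate it without creating a file on disk:

In [ ]:

```python
import numpy as np
from io import StringIO

data = StringIO("1.0 2.0\n3.0 4.0")  # stands in for a text file
z = np.loadtxt(data)
print(z)
```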

In [ ]:

```
z = np.linspace(1, 2, 5)
z
```

In [ ]:

```
z[0]
```

In [ ]:

```
z[0:2] # Two elements, starting at element 0
```

In [ ]:

```
z[-1]
```

For 2D arrays the index syntax is as follows:

In [ ]:

```
z = np.array([[1, 2], [3, 4]])
z
```

In [ ]:

```
z[0, 0]
```

In [ ]:

```
z[0, 1]
```

And so on.

Note that indices are still zero-based, to maintain compatibility with Python sequences.

Columns and rows can be extracted as follows

In [ ]:

```
z[0, :]
```

In [ ]:

```
z[:, 1]
```

NumPy arrays of integers can also be used to extract elements

In [ ]:

```
z = np.linspace(2, 4, 5)
z
```

In [ ]:

```
indices = np.array((0, 2, 3))
z[indices]
```

Finally, an array of dtype `bool` can be used to extract elements

In [ ]:

```
z
```

In [ ]:

```
d = np.array([0, 1, 1, 0, 0], dtype=bool)
d
```

In [ ]:

```
z[d]
```

We’ll see why this is useful below.

An aside: all elements of an array can be set equal to one number using slice notation

In [ ]:

```
z = np.empty(3)
z
```

In [ ]:

```
z[:] = 42
z
```

Arrays have useful methods, all of which are carefully optimized

In [ ]:

```
a = np.array((4, 3, 2, 1))
a
```

In [ ]:

```
a.sort() # Sorts a in place
a
```

In [ ]:

```
a.sum() # Sum
```

In [ ]:

```
a.mean() # Mean
```

In [ ]:

```
a.max() # Max
```

In [ ]:

```
a.argmax() # Returns the index of the maximal element
```

In [ ]:

```
a.cumsum() # Cumulative sum of the elements of a
```

In [ ]:

```
a.cumprod() # Cumulative product of the elements of a
```

In [ ]:

```
a.var() # Variance
```

In [ ]:

```
a.std() # Standard deviation
```

In [ ]:

```
a.shape = (2, 2)
a.T # Equivalent to a.transpose()
```

Another method worth knowing is `searchsorted()`.

If `z` is a nondecreasing array, then `z.searchsorted(a)` returns the index of the first element of `z` that is `>= a`

In [ ]:

```
z = np.linspace(2, 4, 5)
z
```

In [ ]:

```
z.searchsorted(2.2)
```

Many of the methods discussed above have equivalent functions in the NumPy namespace

In [ ]:

```
a = np.array((4, 3, 2, 1))
```

In [ ]:

```
np.sum(a)
```

In [ ]:

```
np.mean(a)
```

The operators `+`, `-`, `*`, `/` and `**` all act *elementwise* on arrays

In [ ]:

```
a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])
a + b
```

In [ ]:

```
a * b
```

We can add a scalar to each element as follows

In [ ]:

```
a + 10
```

Scalar multiplication is similar

In [ ]:

```
a * 10
```

Two-dimensional arrays follow the same general rules

In [ ]:

```
A = np.ones((2, 2))
B = np.ones((2, 2))
A + B
```

In [ ]:

```
A + 10
```

In [ ]:

```
A * B
```

With Anaconda’s scientific Python package based around Python 3.5 and above, one can use the `@` symbol for matrix multiplication, as follows:

In [ ]:

```
A = np.ones((2, 2))
B = np.ones((2, 2))
A @ B
```

(For older versions of Python and NumPy you need to use the np.dot function)
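For reference, the `np.dot` version looks like this; for 2D arrays it returns the same result as `A @ B`:

In [ ]:

```python
import numpy as np

A = np.ones((2, 2))
B = np.ones((2, 2))
print(np.dot(A, B))  # same as A @ B for 2D arrays
```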

We can also use `@` to take the inner product of two flat arrays

In [ ]:

```
A = np.array((1, 2))
B = np.array((10, 20))
A @ B
```

In fact, we can use `@` when one element is a Python list or tuple

In [ ]:

```
A = np.array(((1, 2), (3, 4)))
A
```

In [ ]:

```
A @ (0, 1)
```

Since we are post-multiplying, the tuple is treated as a column vector.

NumPy arrays are mutable data types, like Python lists.

In other words, their contents can be altered (mutated) in memory after initialization.

We already saw examples above.

Here’s another example:

In [ ]:

```
a = np.array([42, 44])
a
```

In [ ]:

```
a[-1] = 0 # Change last element to 0
a
```

Mutability leads to the following behavior (which can be shocking to MATLAB programmers…)

In [ ]:

```
a = np.random.randn(3)
a
```

In [ ]:

```
b = a
b[0] = 0.0
a
```

What’s happened is that we have changed `a` by changing `b`.

The name `b` is bound to `a` and becomes just another reference to the array (the Python assignment model is described in more detail later in the course).

Hence, it has equal rights to make changes to that array.

This is in fact the most sensible default behavior!

It means that we pass around only pointers to data, rather than making copies.

Making copies is expensive in terms of both speed and memory.

It is of course possible to make `b` an independent copy of `a` when required.

This can be done using `np.copy`

In [ ]:

```
a = np.random.randn(3)
a
```

In [ ]:

```
b = np.copy(a)
b
```

Now `b` is an independent copy (called a *deep copy*)

In [ ]:

```
b[:] = 1
b
```

In [ ]:

```
a
```

Note that the change to `b` has not affected `a`.

Let’s look at some other useful things we can do with NumPy.

NumPy provides versions of the standard functions `log`, `exp`, `sin`, etc. that act *element-wise* on arrays

In [ ]:

```
z = np.array([1, 2, 3])
np.sin(z)
```

This eliminates the need for explicit element-by-element loops such as

In [ ]:

```
n = len(z)
y = np.empty(n)
for i in range(n):
    y[i] = np.sin(z[i])
```

Because they act element-wise on arrays, these functions are called *vectorized functions*.

In NumPy-speak, they are also called *ufuncs*, which stands for “universal functions”.

As we saw above, the usual arithmetic operations (`+`, `*`, etc.) also work element-wise, and combining these with the ufuncs gives a very large set of fast element-wise functions.

In [ ]:

```
z
```

In [ ]:

```
(1 / np.sqrt(2 * np.pi)) * np.exp(- 0.5 * z**2)
```

Not all user-defined functions will act element-wise.

For example, passing the function `f` defined below a NumPy array causes a `ValueError`

In [ ]:

```
def f(x):
    return 1 if x > 0 else 0
```
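To see the error in action, here is a small sketch that catches it:

In [ ]:

```python
import numpy as np

def f(x):
    return 1 if x > 0 else 0

try:
    f(np.random.randn(4))
except ValueError as e:
    print("ValueError:", e)  # the truth value of an array is ambiguous
```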

The NumPy function `np.where` provides a vectorized alternative:

In [ ]:

```
x = np.random.randn(4)
x
```

In [ ]:

```
np.where(x > 0, 1, 0) # Insert 1 where x > 0 is true, otherwise 0
```

You can also use `np.vectorize` to vectorize a given function

In [ ]:

```
f = np.vectorize(f)
f(x) # Passing the same vector x as in the previous example
```

However, this approach doesn’t always obtain the same speed as a more carefully crafted vectorized function.
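For example, a rough timing sketch (exact numbers will vary by machine) typically shows `np.where` well ahead of the `np.vectorize` version:

In [ ]:

```python
import numpy as np
import timeit

def f(x):
    return 1 if x > 0 else 0

f_vec = np.vectorize(f)
x = np.random.randn(100_000)

t_vec = timeit.timeit(lambda: f_vec(x), number=10)      # Python-level loop inside
t_where = timeit.timeit(lambda: np.where(x > 0, 1, 0), number=10)  # pure C loop
print(f"np.vectorize: {t_vec:.4f}s, np.where: {t_where:.4f}s")
```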

In [ ]:

```
z = np.array([2, 3])
y = np.array([2, 3])
z == y
```

In [ ]:

```
y[0] = 5
z == y
```

In [ ]:

```
z != y
```

The situation is similar for `>`, `<`, `>=` and `<=`.

We can also do comparisons against scalars

In [ ]:

```
z = np.linspace(0, 10, 5)
z
```

In [ ]:

```
z > 3
```

This is particularly useful for *conditional extraction*

In [ ]:

```
b = z > 3
b
```

In [ ]:

```
z[b]
```

Of course we can—and frequently do—perform this in one step

In [ ]:

```
z[z > 3]
```

NumPy provides some additional functionality related to scientific programming through its sub-packages.

We’ve already seen how we can generate random variables using `np.random`

In [ ]:

```
z = np.random.randn(10000) # Generate standard normals
y = np.random.binomial(10, 0.5, size=1000) # 1,000 draws from Bin(10, 0.5)
y.mean()
```

Another commonly used subpackage is `np.linalg`

In [ ]:

```
A = np.array([[1, 2], [3, 4]])
np.linalg.det(A) # Compute the determinant
```

In [ ]:

```
np.linalg.inv(A) # Compute the inverse
```

Much of this functionality is also available in SciPy, a collection of modules that are built on top of NumPy.

We’ll cover the SciPy versions in more detail soon.

For a comprehensive list of what’s available in NumPy see the official documentation.

Consider the polynomial expression

$$ p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_N x^N = \sum_{n=0}^N a_n x^n \tag{1} $$

Earlier, you wrote a simple function `p(x, coeff)` to evaluate (1) without considering efficiency.

Now write a new function that does the same job, but uses NumPy arrays and array operations for its computations, rather than any form of Python loop.

(Such functionality is already implemented as `np.poly1d`, but for the sake of the exercise don’t use this class.)

- Hint: Use `np.cumprod()`

Let `q` be a NumPy array of length `n` with `q.sum() == 1`.

Suppose that `q` represents a probability mass function.

We wish to generate a discrete random variable $ x $ such that $ \mathbb P\{x = i\} = q_i $.

In other words, `x` takes values in `range(len(q))` and `x = i` with probability `q[i]`.

The standard (inverse transform) algorithm is as follows:

- Divide the unit interval $ [0, 1] $ into $ n $ subintervals $ I_0, I_1, \ldots, I_{n-1} $ such that the length of $ I_i $ is $ q_i $.
- Draw a uniform random variable $ U $ on $ [0, 1] $ and return the $ i $ such that $ U \in I_i $.

The probability of drawing $ i $ is the length of $ I_i $, which is equal to $ q_i $.

We can implement the algorithm as follows

In [ ]:

```
from random import uniform

def sample(q):
    a = 0.0
    U = uniform(0, 1)
    for i in range(len(q)):
        if a < U <= a + q[i]:
            return i
        a = a + q[i]
```

If you can’t see how this works, try thinking through the flow for a simple example, such as `q = [0.25, 0.75]`.

It helps to sketch the intervals on paper.

Your exercise is to speed it up using NumPy, avoiding explicit loops.

- Hint: Use `np.searchsorted` and `np.cumsum`

If you can, implement the functionality as a class called `DiscreteRV`, where

- the data for an instance of the class is the vector of probabilities `q`
- the class has a `draw()` method, which returns one draw according to the algorithm described above

If you can, write the method so that `draw(k)` returns `k` draws from `q`.

Recall our earlier discussion of the empirical cumulative distribution function.

Your task is to

- Make the `__call__` method more efficient using NumPy.
- Add a method that plots the ECDF over $ [a, b] $, where $ a $ and $ b $ are method parameters.

In [ ]:

```
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10,6)
```

This code does the job

In [ ]:

```
def p(x, coef):
    X = np.ones_like(coef)
    X[1:] = x
    y = np.cumprod(X)   # y = [1, x, x**2,...]
    return coef @ y
```

Let’s test it

In [ ]:

```
x = 2
coef = np.linspace(2, 4, 3)
print(coef)
print(p(x, coef))
# For comparison
q = np.poly1d(np.flip(coef))
print(q(x))
```

Here’s our first pass at a solution:

In [ ]:

```
from numpy import cumsum
from numpy.random import uniform

class DiscreteRV:
    """
    Generates an array of draws from a discrete random variable with vector of
    probabilities given by q.
    """

    def __init__(self, q):
        """
        The argument q is a NumPy array, or array like, nonnegative and sums
        to 1
        """
        self.q = q
        self.Q = cumsum(q)

    def draw(self, k=1):
        """
        Returns k draws from q. For each such draw, the value i is returned
        with probability q[i].
        """
        return self.Q.searchsorted(uniform(0, 1, size=k))
```

The logic is not obvious, but if you take your time and read it slowly, you will understand.

There is a problem here, however.

Suppose that `q` is altered after an instance of `DiscreteRV` is created, for example by

In [ ]:

```
q = (0.1, 0.9)
d = DiscreteRV(q)
d.q = (0.5, 0.5)
```

The problem is that `Q` does not change accordingly, and `Q` is the data used in the `draw` method.

To deal with this, one option is to compute `Q` every time the `draw` method is called.

But this is inefficient relative to computing `Q` once-off.

A better option is to use descriptors.

A solution from the quantecon library using descriptors that behaves as we desire can be found here.
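As a sketch of the underlying idea, Python’s built-in `property` (itself implemented via descriptors) can keep `Q` synchronized with `q`; this is not the quantecon implementation, just one way to get the desired behavior:

In [ ]:

```python
import numpy as np

class DiscreteRV:
    """
    Like the class above, but Q is recomputed whenever q is reset.
    """

    def __init__(self, q):
        self.q = q  # goes through the setter below

    @property
    def q(self):
        return self._q

    @q.setter
    def q(self, val):
        self._q = np.asarray(val)
        self.Q = np.cumsum(self._q)  # kept in sync automatically

    def draw(self, k=1):
        return self.Q.searchsorted(np.random.uniform(0, 1, size=k))
```

Now resetting `d.q = (0.5, 0.5)` updates `d.Q` as well.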

In [ ]:

```
"""
Modifies ecdf.py from QuantEcon to add in a plot method
"""
class ECDF:
"""
One-dimensional empirical distribution function given a vector of
observations.
Parameters
----------
observations : array_like
An array of observations
Attributes
----------
observations : array_like
An array of observations
"""
def __init__(self, observations):
self.observations = np.asarray(observations)
def __call__(self, x):
"""
Evaluates the ecdf at x
Parameters
----------
x : scalar(float)
The x at which the ecdf is evaluated
Returns
-------
scalar(float)
Fraction of the sample less than x
"""
return np.mean(self.observations <= x)
def plot(self, ax, a=None, b=None):
"""
Plot the ecdf on the interval [a, b].
Parameters
----------
a : scalar(float), optional(default=None)
Lower endpoint of the plot interval
b : scalar(float), optional(default=None)
Upper endpoint of the plot interval
"""
# === choose reasonable interval if [a, b] not specified === #
if a is None:
a = self.observations.min() - self.observations.std()
if b is None:
b = self.observations.max() + self.observations.std()
# === generate plot === #
x_vals = np.linspace(a, b, num=100)
f = np.vectorize(self.__call__)
ax.plot(x_vals, f(x_vals))
plt.show()
```

Here’s an example of usage

In [ ]:

```
fig, ax = plt.subplots()
X = np.random.randn(1000)
F = ECDF(X)
F.plot(ax)
```