Many frequentist methods for hypothesis testing roughly involve the following steps:

- Writing down the hypotheses, notably the **null hypothesis**, which is the *opposite* of the hypothesis you want to prove (with a certain degree of confidence).
- Computing a **test statistic**, a mathematical formula depending on the test type, the model, the hypotheses, and the data.
- Using the computed value of the test statistic to accept the hypothesis, reject it, or fail to conclude.

Here, we flip a coin $n$ times and we observe $h$ heads. We want to know whether the coin is fair (null hypothesis). This example is extremely simple yet quite good for pedagogical purposes. Besides, it is the basis of many more complex methods.

We denote by $\mathcal B(q)$ the [Bernoulli distribution](http://en.wikipedia.org/wiki/Bernoulli_distribution) with unknown parameter $q$. A Bernoulli variable:

- is 0 (tail) with probability $1-q$,
- is 1 (head) with probability $q$.
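
As a quick sanity check (not part of the original recipe), we can draw Bernoulli samples with NumPy and verify that the empirical mean approaches $q$; the parameter value `0.3` and the sample size here are arbitrary choices for illustration:

In [ ]:

```python
import numpy as np

rng = np.random.RandomState(42)  # fixed seed for reproducibility
q = 0.3  # arbitrary parameter, for illustration only
# a Bernoulli variable is a binomial variable with a single trial
samples = rng.binomial(1, q, size=100000)
samples.mean()  # empirical mean, close to q
```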

- Let's suppose that, after $n=100$ flips, we get $h=61$ heads. We choose a significance level of 0.05: is the coin fair or not? Our null hypothesis is: *the coin is fair* ($q = 1/2$).

In [ ]:

```
import numpy as np
import scipy.stats as st
import scipy.special as sp
```

In [ ]:

```
n = 100 # number of coin flips
h = 61 # number of heads
q = .5 # null-hypothesis of fair coin
```

- Let's compute the **z-score**, defined by $z = (\bar{x} - q) \sqrt{n / (q(1-q))}$, where `xbar` is the estimated average of the distribution. We will explain this formula in the next section, *How it works...*

In [ ]:

```
xbar = float(h)/n
z = (xbar - q) * np.sqrt(n / (q*(1-q))); z
```
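
To build intuition for what this z-score measures (a side experiment, not part of the original recipe), we can estimate by direct simulation how often a *fair* coin produces a head count at least as far from $n/2$ as the observed $h=61$:

In [ ]:

```python
import numpy as np

rng = np.random.RandomState(0)  # fixed seed for reproducibility
n, h = 100, 61
trials = rng.binomial(n, 0.5, size=100000)  # head counts of simulated fair coins
# two-sided: at least as far from n/2 as the observed h
p_sim = np.mean(np.abs(trials - n / 2.) >= abs(h - n / 2.))
p_sim
```

The result is of the same order as the analytical p-value computed next (the two differ slightly because the z-test relies on a normal approximation of the binomial distribution).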

- Now, from the z-score, we can compute the two-sided p-value as follows:

In [ ]:

```
pval = 2 * (1 - st.norm.cdf(z)); pval
```
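
As a cross-check (not part of the original recipe), SciPy also offers an exact binomial test; the `binomtest` function assumed here is available in SciPy >= 1.7 (older versions expose `binom_test` instead). Its p-value differs slightly from ours because the z-test uses a normal approximation:

In [ ]:

```python
import scipy.stats as st

# exact two-sided binomial test: 61 heads out of 100, fair-coin null
res = st.binomtest(61, 100, 0.5)  # SciPy >= 1.7
res.pvalue  # also below 0.05, so the conclusion is the same
```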

- This p-value is less than 0.05, so we reject the null hypothesis and conclude that *the coin is probably not fair*.

You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).

IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).