# CS 237 Spring 2019
# Author: Alina Ene (aene@bu.edu)
# Used in L20
In this notebook, we consider the following experiment:
Experiment: We repeatedly roll a fair 6-sided die until we get a pair of consecutive sixes for the first time. The rolls are independent.
We are interested in the expectation of the following random variable:
X = number of rolls we perform
What is the distribution of X? At first glance, it has the flavor of a geometric random variable. One line of reasoning could be the following: call rolls $i$ and $i+1$ trial $i$, and call trial $i$ a success if both of its rolls are sixes, which happens with probability $\frac{1}{6} \cdot \frac{1}{6} = \frac{1}{36}$. Then X is the number of trials until the first success, so X is Geometric(1/36) and Ex(X) = 36.
Think: Do you agree with the above argument?
Let us do the simulation and see what we get. The code below estimates Ex(X).
import numpy as np
from numpy.random import randint
import matplotlib.pyplot as plt
plt.style.use('seaborn')
# simulation for the double sixes experiment
# a single experiment
def single_trial():
    prev_roll = randint(1, 7)  # fair 6-sided die roll
    num_rolls = 1
    while True:
        curr_roll = randint(1, 7)
        num_rolls = num_rolls + 1
        if prev_roll == 6 and curr_roll == 6:
            return num_rolls
        prev_roll = curr_roll
# perform N trials
N = 100000
rolls = []
trial = [i + 1 for i in range(N)]
for i in range(N):
    num_rolls = single_trial()
    rolls.append(num_rolls)
avg_rolls = sum(rolls) / N
plt.bar(trial, rolls)
plt.xlabel("Trial")
plt.ylabel("Number of rolls")
plt.title("Ex(# rolls until 66) = " + str(avg_rolls), fontsize=20)
plt.show()
The simulation says the expectation is about 42, whereas the above argument says it is 36. So there are two possibilities: either the simulation code is buggy, or the above argument is flawed.
The code seems simple enough and correct enough. So maybe the above argument is worth a closer look. Indeed, as we have seen in class, a Geometric(p) random variable counts the number of independent trials until the first success, where p is the probability that a single trial is a success. The above reasoning argues (correctly) that X is the number of trials until the first success, where Pr(success) = 1/36. But it does not argue that the trials are independent!
Are the trials independent? Consider the first two trials: the first trial is rolls 1 and 2, and the second trial is rolls 2 and 3. Since the two trials share the second roll of the die, they are dependent.
Therefore the conclusion of the above argument does not follow.
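We can also see this dependence empirically. The sketch below (not part of the original notebook) simulates three rolls at a time and compares the unconditional probability that trial 2 succeeds with its probability given that trial 1 succeeded; since trial 1 succeeding means roll 2 is already a six, the conditional probability should be close to 1/6 rather than 1/36.

```python
from numpy.random import randint

# Empirical check that overlapping trials are dependent:
# trial 1 = rolls 1 and 2, trial 2 = rolls 2 and 3.
N = 200000
rolls = randint(1, 7, size=(N, 3))  # three fair die rolls per experiment

t1 = (rolls[:, 0] == 6) & (rolls[:, 1] == 6)  # trial 1 is a success
t2 = (rolls[:, 1] == 6) & (rolls[:, 2] == 6)  # trial 2 is a success

print(t2.mean())      # unconditional: close to 1/36 ~ 0.0278
print(t2[t1].mean())  # given trial 1 succeeded: close to 1/6 ~ 0.1667
```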
The random variable X is not geometric. The trials as defined earlier (trial = two consecutive rolls) are not independent, since two consecutive trials share a roll, but trials that are farther apart are independent. So perhaps we can still use the ideas we developed for the geometric distribution, but apply them with a bit of care. As we now illustrate, this is indeed the case: we will use the ideas from lecture to break down the expectation. The event of interest now is whether the first two rolls were sixes.
Let $S_i$ be the event that the $i$-th roll is a six.
Also, for notational convenience, let $p = \frac{1}{6}$ and $q = \frac{5}{6}$.
Since the events $\overline{S}_1$, $S_1 \cap S_2$, and $S_1 \cap \overline{S}_2$ form a partition of the sample space, the law of total expectation gives us that:
$$ \begin{align*} \mathrm{Ex}(X) &= \mathrm{Ex}(X | \overline{S}_1) \Pr(\overline{S}_1) + \mathrm{Ex}(X | S_1 \cap S_2) \Pr(S_1 \cap S_2) + \mathrm{Ex}(X | S_1 \cap \overline{S}_2) \Pr(S_1 \cap \overline{S}_2)\\ &= \mathrm{Ex}(X | \overline{S}_1) q + \mathrm{Ex}(X | S_1 \cap S_2) p^2 + \mathrm{Ex}(X | S_1 \cap \overline{S}_2) pq \end{align*} $$

Now we consider each conditional expectation in turn. As in class, the key fact is that the process is memoryless about the distant past: we only need to remember whether the previous roll was a six, since the outcomes of rolls further in the past do not affect the number of rolls we will need from this point onward. With this observation in mind, we have
$$ \begin{align*} \mathrm{Ex}(X | \overline{S}_1) &= 1 + \mathrm{Ex}(X)\\ \mathrm{Ex}(X | S_1 \cap S_2) & = 2\\ \mathrm{Ex}(X | S_1 \cap \overline{S}_2) &= 2 + \mathrm{Ex}(X) \end{align*} $$

The formal argument for the above identities is similar to what we saw in lecture, and it is left as an exercise.
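These identities can also be checked by simulation. The sketch below (not part of the original notebook, and the helper name is our own) estimates $\mathrm{Ex}(X | \overline{S}_1)$ by drawing the first roll uniformly from $\{1, \ldots, 5\}$, which is its conditional distribution given $\overline{S}_1$; the average should come out near $1 + \mathrm{Ex}(X) = 43$.

```python
from numpy.random import randint

def trial_given_first_not_six():
    # first roll conditioned on "not a six": uniform on 1..5
    prev_roll = randint(1, 6)
    num_rolls = 1
    while True:
        curr_roll = randint(1, 7)
        num_rolls += 1
        if prev_roll == 6 and curr_roll == 6:
            return num_rolls
        prev_roll = curr_roll

N = 100000
avg = sum(trial_given_first_not_six() for _ in range(N)) / N
print(avg)  # should be close to 1 + Ex(X) = 43
```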
Putting everything together, we obtain: $$ \mathrm{Ex}(X) = (1 + \mathrm{Ex}(X)) q + 2 p^2 + (2 + \mathrm{Ex}(X)) pq $$ By solving for $\mathrm{Ex}(X)$ in the above equation and using that $q = 1-p$, we obtain: $$ \mathrm{Ex}(X) = \frac{1+p}{p^2} = 42$$
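As a quick check (not in the original notebook), we can verify with exact rational arithmetic that the closed form $\frac{1+p}{p^2}$ satisfies the equation above and equals 42:

```python
from fractions import Fraction

p = Fraction(1, 6)
q = 1 - p
E = (1 + p) / p**2  # claimed closed form for Ex(X)

# E must satisfy Ex(X) = (1 + Ex(X)) q + 2 p^2 + (2 + Ex(X)) p q
assert E == (1 + E) * q + 2 * p**2 + (2 + E) * p * q
print(E)  # 42
```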
Recall that the simulation gave us an empirical expectation of approximately 42.