where $\sim$ means that the ratio of the two numbers converges to 1 as $n$ approaches $\infty$.
This is fine when we are talking about $n \in \{1, 2, 3, \dots\}$, but have you ever wondered what $\pi!$ is?
For that, we need the Gamma function.
Just as an aside, the Beta and Gamma functions are closely related. The Gamma function should be amongst the Top 10 Functions of All Time (if there were ever such a list).
$$
\begin{aligned}
\Gamma(a) &= \int_0^\infty x^a \, e^{-x} \, \frac{dx}{x} && \text{for any real } a > 0 \\
&= \int_0^\infty x^{a-1} \, e^{-x} \, dx && \text{(alternately)}
\end{aligned}
$$

Note that as $a$ approaches 0 from the right, the $\frac{dx}{x}$ is what drives the integral, since near $x = 0$ we have $x^a \approx x^0 = 1$ and $e^{-x} \approx e^{-0} = 1$.
But $\int \frac{dx}{x} = \log(x)$, which blows up to $-\infty$ as $x \rightarrow 0^+$, which is why we restrict $a > 0$.
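As a quick sanity check (a minimal sketch using `scipy.special.gamma`), we can confirm that $\Gamma(n) = (n-1)!$ on the positive integers, and answer the teaser from above: $\pi! = \Gamma(\pi + 1)$.

```python
from math import factorial, pi
from scipy.special import gamma as Gamma

# Gamma agrees with the factorial on the positive integers: Gamma(n) = (n-1)!
for n in range(1, 6):
    print(n, Gamma(n), factorial(n - 1))

# ... and it extends the factorial beyond the integers
print('pi! =', Gamma(pi + 1))  # approximately 7.188
```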
How would we derive a PDF that is based on the Gamma distribution?
By normalizing the Gamma function.
$$
1 = \int_0^\infty c \, x^a \, e^{-x} \, \frac{dx}{x} = \int_0^\infty \frac{1}{\Gamma(a)} \, x^a \, e^{-x} \, \frac{dx}{x}
$$
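We can verify numerically that $c = \frac{1}{\Gamma(a)}$ really does normalize the integral to 1 (a minimal sketch using `scipy.integrate.quad`; the value $a = 2.5$ is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

a = 2.5  # any a > 0 works here

# integrate (1/Gamma(a)) * x^a * e^(-x) * (1/x) over (0, infinity)
val, err = quad(lambda x: x**a * np.exp(-x) / (x * Gamma(a)), 0, np.inf)
print(val)  # 1.0 up to numerical error
```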
And so the PDF for a Gamma distribution would be

$$
\operatorname{Gamma}(a, 1): \quad f(x) = \frac{1}{\Gamma(a)} \, x^a \, e^{-x} \, \frac{1}{x} \quad \text{for } x > 0
$$

More generally, $Y \sim \operatorname{Gamma}(a, \lambda)$. Let $Y = \frac{X}{\lambda}$, where $X \sim \operatorname{Gamma}(a, 1)$:

$$
\begin{aligned}
y &= \frac{x}{\lambda} \quad \Rightarrow \quad x = \lambda y, \quad \frac{dx}{dy} = \lambda \\
f_Y(y) &= f_X(x) \, \frac{dx}{dy} && \text{transforming } X \text{ to } Y \\
&= \frac{1}{\Gamma(a)} \, (\lambda y)^a \, e^{-\lambda y} \, \frac{1}{\lambda y} \, \lambda \\
&= \frac{1}{\Gamma(a)} \, (\lambda y)^a \, e^{-\lambda y} \, \frac{1}{y} && \text{for } y > 0
\end{aligned}
$$

```python
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
                               AutoMinorLocator)
from scipy.stats import gamma
%matplotlib inline
plt.xkcd()
_, ax = plt.subplots(figsize=(12,8))
alpha_values = [1.,2.,3.,4.,5.,7.5,9.,0.5]
lambda_values = [2.,2.,2.,1.,0.5,1.,1.,0.5]
x = np.linspace(0, 20, 1000)
# plot the distributions
for a, l in zip(alpha_values, lambda_values):
    # scipy parameterizes the Gamma by scale, which is 1/lambda for rate lambda
    ax.plot(x, gamma.pdf(x, a, scale=1./l), lw=3.2, alpha=0.6,
            label=r'$\alpha$={}, $\lambda$={}'.format(a, l))
# legend styling
legend = ax.legend()
for label in legend.get_texts():
    label.set_fontsize('large')
for line in legend.get_lines():
    line.set_linewidth(1.5)
# y-axis
ax.set_ylim([0.0, 0.5])
ax.set_ylabel(r'$f(x)$')
# x-axis
ax.set_xlim([0, 20.0])
ax.set_xlabel(r'$x$')
# x-axis tick formatting
majorLocator = MultipleLocator(5.0)
majorFormatter = FormatStrFormatter('%0.1f')
minorLocator = MultipleLocator(1.0)
ax.xaxis.set_major_locator(majorLocator)
ax.xaxis.set_major_formatter(majorFormatter)
ax.xaxis.set_minor_locator(minorLocator)
ax.grid(color='grey', linestyle='-', linewidth=0.3)
plt.suptitle(r'Gamma Distribution $f(x) = \frac{1}{\Gamma(\alpha)} \, (\lambda x)^{\alpha} \, e^{-\lambda x} \, \frac{1}{x}$')
plt.show()
```
Recall Poisson processes, where the number of events happening in a certain time period $t$ is such that

$$
N_t = \text{number of events occurring up to time } t \sim \operatorname{Pois}(\lambda t)
$$

where the numbers of events occurring in disjoint time intervals are independent.
Now, earlier we discussed how the waiting time for an event is exponential. Consider $T_1$, the time until we observe the very first event.
$$
\begin{aligned}
P(T_1 > t) &= P(N_t = 0) && \text{by definition} \\
&= \frac{(\lambda t)^0 \, e^{-\lambda t}}{0!} = e^{-\lambda t} \\
&= 1 - \left(1 - e^{-\lambda t}\right) && 1 - \text{CDF of } \operatorname{Expo}(\lambda) \\
\Rightarrow \text{time until first event} &\sim \operatorname{Expo}(\lambda)
\end{aligned}
$$

And so, using this argument along with the memorylessness property, all of the other times between subsequent events $T_2, T_3, \dots, T_n$ are also $\operatorname{Expo}(\lambda)$.
So the interarrival times for events in a Poisson process are i.i.d. $\operatorname{Expo}(\lambda)$.
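This construction is easy to simulate (a minimal sketch; $\lambda = 2$, $t = 5$, and the number of runs are arbitrary choices): build arrival times as cumulative sums of i.i.d. $\operatorname{Expo}(\lambda)$ gaps and count how many land in $[0, t]$. The counts should behave like $\operatorname{Pois}(\lambda t)$, with mean and variance both equal to $\lambda t$.

```python
import numpy as np

rng = np.random.default_rng(42)
lam, t, runs = 2.0, 5.0, 100_000

# i.i.d. Expo(lam) interarrival gaps; 100 gaps per run is far more than
# will ever land in [0, t] for these parameter values
gaps = rng.exponential(scale=1/lam, size=(runs, 100))
arrivals = np.cumsum(gaps, axis=1)    # arrival times of the 1st, 2nd, ... events
counts = (arrivals <= t).sum(axis=1)  # N_t for each simulated run

# for Pois(lam * t), mean = variance = lam * t = 10
print(counts.mean(), counts.var())
```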
But what if we want to know $T_n$, the time of the $n$th arrival?
$$
T_n = \sum_{i=1}^{n} X_i \sim \operatorname{Gamma}(n, \lambda), \quad \text{where the } X_i \text{ are i.i.d. } \operatorname{Expo}(\lambda), \text{ assuming } n \text{ is an integer}
$$

Recall the relation between the geometric and negative binomial distributions:
Well, there is something analogous between the exponential and Gamma distributions: just as a sum of i.i.d. geometric random variables is negative binomial, a sum of i.i.d. exponential random variables is Gamma, as the simulation sketch below illustrates.
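Here is that analogy in simulation form (a minimal sketch; the values of $n$, $\lambda$, and the number of runs are arbitrary): sum $n$ i.i.d. $\operatorname{Expo}(\lambda)$ draws and compare the sample moments against those of $\operatorname{Gamma}(n, \lambda)$. Note again that scipy parameterizes the Gamma by a scale parameter equal to $1/\lambda$.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
n, lam, runs = 5, 2.0, 100_000

# T_n = X_1 + ... + X_n with the X_i i.i.d. Expo(lam)
t_n = rng.exponential(scale=1/lam, size=(runs, n)).sum(axis=1)

print(t_n.mean(), t_n.var())                      # sample mean and variance
print(gamma.stats(n, scale=1/lam, moments='mv'))  # exact: (n/lam, n/lam^2) = (2.5, 1.25)
```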
Let $X \sim \operatorname{Gamma}(\alpha, 1)$.
Directly using LOTUS:
$$
\begin{aligned}
E(X^c) &= \int_0^\infty \frac{1}{\Gamma(\alpha)} \, x^c \, x^{\alpha} \, e^{-x} \, \frac{dx}{x} \\
&= \int_0^\infty \frac{1}{\Gamma(\alpha)} \, x^{\alpha + c} \, e^{-x} \, \frac{dx}{x} \\
&= \frac{\Gamma(\alpha + c)}{\Gamma(\alpha)}, \quad \text{where } \alpha + c > 0
\end{aligned}
$$

$$
\begin{aligned}
\text{1st moment (mean) of } \operatorname{Gamma}(\alpha, 1) = E(X) &= \frac{\Gamma(\alpha + 1)}{\Gamma(\alpha)} = \frac{\alpha \, \Gamma(\alpha)}{\Gamma(\alpha)} = \alpha \\
\text{2nd moment of } \operatorname{Gamma}(\alpha, 1) = E(X^2) &= \frac{\Gamma(\alpha + 2)}{\Gamma(\alpha)} = \frac{(\alpha + 1) \, \Gamma(\alpha + 1)}{\Gamma(\alpha)} = \frac{(\alpha + 1) \, \alpha \, \Gamma(\alpha)}{\Gamma(\alpha)} = \alpha^2 + \alpha \\
\Rightarrow \text{Var of } \operatorname{Gamma}(\alpha, 1) &= E(X^2) - \left(E(X)\right)^2 = \alpha^2 + \alpha - \alpha^2 = \alpha
\end{aligned}
$$
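These two results are easy to verify numerically (a minimal sketch; the shape $\alpha = 3.7$ is an arbitrary, deliberately non-integer choice):

```python
from scipy.stats import gamma
from scipy.special import gamma as G

alpha = 3.7  # arbitrary non-integer shape parameter

mean, var = gamma.stats(alpha, moments='mv')
print(mean, var)                # both equal alpha
print(G(alpha + 1) / G(alpha))  # Gamma(alpha+1) / Gamma(alpha) = alpha
```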
Now let $X \sim \operatorname{Gamma}(\alpha, \lambda)$. Applying what we know from the $\operatorname{Gamma}(\alpha, 1)$ case above and using LOTUS:
$$
\begin{aligned}
E(X^c) &= \int_0^\infty \frac{1}{\Gamma(\alpha)} \, x^c \, (\lambda x)^{\alpha} \, e^{-\lambda x} \, \frac{dx}{x} \\
&= \int_0^\infty \frac{1}{\Gamma(\alpha)} \, x^{\alpha + c} \, \lambda^{\alpha} \, e^{-\lambda x} \, \frac{dx}{x} \\
&= \frac{1}{\lambda^c} \int_0^\infty \frac{1}{\Gamma(\alpha)} \, (\lambda x)^{\alpha + c} \, e^{-\lambda x} \, \frac{dx}{x} \\
&= \frac{\Gamma(\alpha + c)}{\Gamma(\alpha) \, \lambda^c}, \quad \text{where } \alpha + c > 0
\end{aligned}
$$

$$
\begin{aligned}
\text{1st moment (mean) of } \operatorname{Gamma}(\alpha, \lambda) = E(X) &= \frac{\Gamma(\alpha + 1)}{\Gamma(\alpha) \, \lambda} = \frac{\alpha \, \Gamma(\alpha)}{\Gamma(\alpha) \, \lambda} = \frac{\alpha}{\lambda} \\
\text{2nd moment of } \operatorname{Gamma}(\alpha, \lambda) = E(X^2) &= \frac{\Gamma(\alpha + 2)}{\Gamma(\alpha) \, \lambda^2} = \frac{(\alpha + 1) \, \Gamma(\alpha + 1)}{\Gamma(\alpha) \, \lambda^2} = \frac{(\alpha + 1) \, \alpha}{\lambda^2} \\
\Rightarrow \text{Var of } \operatorname{Gamma}(\alpha, \lambda) &= E(X^2) - \left(E(X)\right)^2 = \frac{(\alpha + 1) \, \alpha}{\lambda^2} - \left(\frac{\alpha}{\lambda}\right)^2 = \frac{\alpha + \alpha^2 - \alpha^2}{\lambda^2} = \frac{\alpha}{\lambda^2}
\end{aligned}
$$

View Lecture 24: Gamma distribution and Poisson process | Statistics 110 on YouTube.