Let $X \sim \text{Expo}(1)$.
We begin by finding the MGF of X:
\begin{align}
  M(t) &= E(e^{tX}) && \text{definition of MGF} \\
  &= \int_0^\infty e^{-x} e^{tx} \, dx = \int_0^\infty e^{-x(1-t)} \, dx \\
  &= \frac{1}{1-t} \quad \text{for } t < 1
\end{align}

In finding the moments, by definition we have $E(X^n) = M^{(n)}(0)$, the $n$th derivative of the MGF evaluated at $0$.
Even though finding derivatives of $\frac{1}{1-t}$ is not all that bad, it is nevertheless annoying busywork. But since we know that the $n$th moment is $n!$ times the coefficient of $t^n$ in the Taylor expansion of $M(t)$ about $0$, we can leverage that fact instead.
\begin{align}
  \frac{1}{1-t} &= \sum_{n=0}^\infty t^n \quad \text{for } |t| < 1 \\
  &= \sum_{n=0}^\infty n! \, \frac{t^n}{n!} && \text{since we need the form } \sum_{n=0}^\infty E(X^n) \, \frac{t^n}{n!} \\
  \Rightarrow E(X^n) &= n!
\end{align}

And now we can simply generate arbitrary moments for r.v. $X$!
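As a quick sanity check (my own addition, not part of the lecture), the sympy sketch below confirms $E(X^n) = n!$ both by direct integration and by reading coefficients off the Taylor series of $M(t)$:

```python
# A minimal sketch: verify E(X^n) = n! for X ~ Expo(1), two ways.
import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.symbols('t')

M = 1 / (1 - t)                                # MGF of Expo(1), valid for t < 1
taylor = sp.series(M, t, 0, 6).removeO()       # 1 + t + t^2 + ... + t^5

for n in range(6):
    direct = sp.integrate(x**n * sp.exp(-x), (x, 0, sp.oo))  # E(X^n) by definition
    from_mgf = taylor.coeff(t, n) * sp.factorial(n)          # n! * (coefficient of t^n)
    print(n, direct, from_mgf)                               # both equal n!
```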
Let $Y \sim \text{Expo}(\lambda)$.
We begin with
\begin{align}
  \text{let } X &= \lambda Y \text{, so that } X = \lambda Y \sim \text{Expo}(1) \\
  \text{then } Y &= \frac{X}{\lambda} \text{ and } Y^n = \frac{X^n}{\lambda^n} \\
  \Rightarrow E(Y^n) &= \frac{E(X^n)}{\lambda^n} = \frac{n!}{\lambda^n}
\end{align}

And as before, we can now simply generate arbitrary moments for r.v. $Y$!
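Here is a minimal Monte Carlo check of that formula (my addition; the rate $\lambda = 2$ and the seed are arbitrary choices):

```python
# For Y ~ Expo(rate), the sample mean of Y^n should be close to n! / rate^n.
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
rate = 2.0
y = rng.exponential(scale=1 / rate, size=10**6)  # numpy parameterizes by scale = 1/lambda

for n in range(1, 5):
    print(n, (y**n).mean(), factorial(n) / rate**n)  # empirical vs n!/lambda^n
```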
Let $Z \sim \mathcal{N}(0,1)$; find all its moments.
We have seen before that, by symmetry, all of the odd moments vanish: $E(Z^{2n+1}) = 0$ for all $n \geq 0$.
So we will focus in on the even moments.
Now the MGF is $M(t) = e^{t^2/2}$. Without taking any derivatives, we can immediately Taylor expand it, since the series for $e^u$ converges everywhere.
\begin{align}
  M(t) = e^{t^2/2} &= \sum_{n=0}^\infty \frac{(t^2/2)^n}{n!} = \sum_{n=0}^\infty \frac{t^{2n}}{2^n \, n!} \\
  &= \sum_{n=0}^\infty \frac{(2n)!}{2^n \, n!} \, \frac{t^{2n}}{(2n)!} && \text{since we need the form } \sum_{n=0}^\infty E(X^n) \, \frac{t^n}{n!} \\
  \Rightarrow E(Z^{2n}) &= \frac{(2n)!}{2^n \, n!}
\end{align}

Let's double-check this against what we know about $\text{Var}(Z)$: for $n=1$ the formula gives $E(Z^2) = \frac{2!}{2^1 \, 1!} = 1$, which matches $\text{Var}(Z) = E(Z^2) - (E(Z))^2 = 1 - 0 = 1$.
And so you might have noticed a pattern here. Let us rewrite those even moments once more:

$$E(Z^2) = 1, \quad E(Z^4) = 1 \cdot 3, \quad E(Z^6) = 1 \cdot 3 \cdot 5, \quad \dots, \quad E(Z^{2n}) = 1 \cdot 3 \cdot 5 \cdots (2n-1)$$

so the $2n$th moment is the product of the first $n$ odd numbers.
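A short sympy verification of both forms of the even moments (my addition; checking the first few $n$ is taken as convincing enough here):

```python
# E(Z^{2n}) by direct integration should match both (2n)!/(2^n n!)
# and the "skip factorial" 1*3*5*...*(2n-1).
import sympy as sp

z = sp.symbols('z', real=True)
pdf = sp.exp(-z**2 / 2) / sp.sqrt(2 * sp.pi)     # N(0,1) density

for n in range(1, 5):
    direct = sp.integrate(z**(2 * n) * pdf, (z, -sp.oo, sp.oo))  # E(Z^{2n})
    closed = sp.factorial(2 * n) / (2**n * sp.factorial(n))      # (2n)!/(2^n n!)
    skip = sp.factorial2(2 * n - 1)                              # 1*3*5*...*(2n-1)
    print(n, direct, closed, skip)                               # all three agree
```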
Let $X \sim \text{Pois}(\lambda)$; now let's consider MGFs and how to use them to find sums of independent random variables (convolutions).
\begin{align}
  M(t) = E(e^{tX}) &= \sum_{k=0}^\infty e^{tk} \, \frac{\lambda^k e^{-\lambda}}{k!} = e^{-\lambda} \sum_{k=0}^\infty \frac{(\lambda e^t)^k}{k!} \\
  &= e^{-\lambda} \, e^{\lambda e^t} && \text{the sum is just another Taylor series, for } e^{\lambda e^t} \\
  &= e^{\lambda (e^t - 1)}
\end{align}

Now let $Y \sim \text{Pois}(\mu)$ be independent of $X$. Find the distribution of $X + Y$.
You may recall that with MGFs, to find the distribution of a sum of independent r.v.s, all we need to do is multiply their MGFs.
$$e^{\lambda (e^t - 1)} \, e^{\mu (e^t - 1)} = e^{(\lambda + \mu)(e^t - 1)} \quad \Rightarrow \quad X + Y \sim \text{Pois}(\lambda + \mu)$$

So when we add a Poisson r.v. $X$ to an independent Poisson r.v. $Y$, the sum $X + Y$ is again Poisson, with the rates adding. This closure under convolution is a special property; most families of distributions do not behave this way.
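A simulation sketch of this convolution result (my addition; the rates $\lambda = 3$, $\mu = 5$ and the seed are arbitrary):

```python
# Add independent Pois(lam) and Pois(mu) draws and compare the empirical
# PMF of the sum to the Pois(lam + mu) PMF.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
lam, mu, n = 3.0, 5.0, 10**6
s = rng.poisson(lam, n) + rng.poisson(mu, n)   # X + Y with X, Y independent

for k in range(4, 13):
    print(k, (s == k).mean(), poisson.pmf(k, lam + mu))  # empirical vs theoretical
```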
Now think about what happens when $X$ and $Y$ are not independent.
Let $Y = X$, so that $X + Y = 2X$, which is clearly not Poisson: $2X$ takes only even values, and its variance $\text{Var}(2X) = 4\lambda$ does not equal its mean $E(2X) = 2\lambda$.
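One way to see this numerically (my addition): any Poisson r.v. has mean equal to variance, and $2X$ fails that check.

```python
# For a Poisson r.v., mean == variance; 2X has variance ~4*lam but mean ~2*lam.
import numpy as np

rng = np.random.default_rng(0)
lam = 3.0
doubled = 2 * rng.poisson(lam, 10**6)
print(doubled.mean(), doubled.var())  # ~6 vs ~12, so 2X cannot be Poisson
```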
In the most basic case of a joint distribution of two r.v.s, we consider both r.v.s together:
Joint CDF
In the general case, the joint CDF of two r.v.s is $F(x,y) = P(X \leq x, \, Y \leq y)$.
Joint PDF

In the continuous case, the joint PDF is $f(x,y)$ such that

$$P((X,Y) \in B) = \iint_B f(x,y) \, dx \, dy$$

Joint PMF

In the discrete case, the joint PMF is

$$P(X = x, \, Y = y)$$

We can also relate a joint distribution to each of its r.v.s taken singly:
$X, Y$ are independent iff $F(x,y) = F_X(x) \, F_Y(y)$.
\begin{align}
  P(X = x, \, Y = y) &= P(X = x) \, P(Y = y) && \text{discrete case} \\
  f(x,y) &= f_X(x) \, f_Y(y) && \text{continuous case}
\end{align}

... with the caveat that this must hold for all $x, y \in \mathbb{R}$.
$P(X \leq x)$ is the marginal CDF of $X$; a marginal distribution considers one r.v. at a time.
In the case of a two-r.v. joint distribution, we can get the marginals by using the joint distribution itself:
\begin{align}
  P(X = x) &= \sum_y P(X = x, \, Y = y) && \text{marginal PMF, discrete case, for } x \\
  f_Y(y) &= \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx && \text{marginal PDF, continuous case, for } y
\end{align}

Let $X, Y$ both be Bernoulli. $X$ and $Y$ may be independent or dependent, and they may or may not have the same $p$, but either way they are related through a joint distribution.
We can lay out this joint distribution in a $2 \times 2$ contingency table like the one below:
|     | Y=0 | Y=1 |
|-----|-----|-----|
| X=0 | 2/6 | 1/6 |
| X=1 | 2/6 | 1/6 |
In order to be a joint distribution, all of the values in our contingency table must be nonnegative, and they must sum up to 1. The example above is a valid joint PMF.
Let's add the marginals for $X$ and $Y$ to our $2 \times 2$ contingency table:
|        | Y=0 | Y=1 | P(X=x) |
|--------|-----|-----|--------|
| X=0    | 2/6 | 1/6 | 3/6    |
| X=1    | 2/6 | 1/6 | 3/6    |
| P(Y=y) | 4/6 | 2/6 |        |
Observe how in our example, we have:
\begin{align}
  P(X=0, Y=0) &= P(X=0) \, P(Y=0) = 3/6 \times 4/6 = 12/36 = 2/6 \\
  P(X=0, Y=1) &= P(X=0) \, P(Y=1) = 3/6 \times 2/6 = 6/36 = 1/6 \\
  P(X=1, Y=0) &= P(X=1) \, P(Y=0) = 3/6 \times 4/6 = 12/36 = 2/6 \\
  P(X=1, Y=1) &= P(X=1) \, P(Y=1) = 3/6 \times 2/6 = 6/36 = 1/6
\end{align}

and so you can see that $X$ and $Y$ are independent.
Now here's an example of a joint distribution of two r.v.s where $X$ and $Y$ are dependent; check it for yourself (a quick computational check follows the table).
|     | Y=0 | Y=1 |
|-----|-----|-----|
| X=0 | 1/3 | 0   |
| X=1 | 1/3 | 1/3 |
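Here is a small helper (my addition; the function name `is_independent` is hypothetical, not from any library) that checks whether a joint PMF table factors into the product of its marginals, applied to both tables above:

```python
# Check independence of a 2x2 joint PMF: joint == (row marginals) * (column marginals)?
import numpy as np

def is_independent(joint, tol=1e-12):
    px = joint.sum(axis=1, keepdims=True)   # marginal PMF of X (row sums)
    py = joint.sum(axis=0, keepdims=True)   # marginal PMF of Y (column sums)
    return np.allclose(joint, px * py, atol=tol)

independent_table = np.array([[2/6, 1/6],
                              [2/6, 1/6]])
dependent_table = np.array([[1/3, 0.0],
                            [1/3, 1/3]])

print(is_independent(independent_table))  # True
print(is_independent(dependent_table))    # False
```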
Now say we had a Uniform distribution on the unit square, so that $x, y \in [0,1]$.
The joint PDF would be constant on/within the square; and 0 outside.
$$\text{joint PDF: } f(x,y) = \begin{cases} c & \text{if } 0 \leq x \leq 1, \; 0 \leq y \leq 1 \\ 0 & \text{otherwise} \end{cases}$$

In 1-dimensional space, if you integrate 1 over some interval, you get the length of that interval.
In 2-dimensional space, if you integrate 1 over some region, you get the area of that region.
Normalizing $c$, we know that $c = \frac{1}{\text{area}} = 1$.
Marginally, $X$ and $Y$ are each $\text{Unif}(0,1)$, and they are independent.
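A minimal sympy sketch of that normalization and marginalization (my addition):

```python
# Solve for the normalizing constant c on the unit square, then integrate
# out y to recover the Unif(0,1) marginal density of X.
import sympy as sp

x, y, c = sp.symbols('x y c')

total = sp.integrate(c, (x, 0, 1), (y, 0, 1))  # c * (area of the square) = c
c_val = sp.solve(sp.Eq(total, 1), c)[0]        # c = 1
marginal_x = sp.integrate(c_val, (y, 0, 1))    # f_X(x) = 1 on [0, 1]
print(c_val, marginal_x)                       # 1 1
```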
View Lecture 18: MGFs Continued | Statistics 110 on YouTube.