$I$ and $N$ mean infected and not infected respectively; $P$ means a positive test. $$\Pr(I|P)=\frac{\Pr(IP)}{\Pr(P)}=\frac{\Pr(IP)}{\Pr(IP)+\Pr(NP)}\approx\frac{10^{-4}}{10^{-4}+0.01}\approx 0.01$$
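Under the assumed figures of the problem (prevalence $10^{-4}$, false-positive rate $0.01$, and, for simplicity, a perfectly sensitive test), the arithmetic can be checked directly:

```python
# Bayes' rule for the testing example.
# Assumed figures: prevalence 1e-4, false-positive rate 1e-2,
# and a perfectly sensitive test, Pr(P|I) = 1.
p_I = 1e-4          # Pr(I)
p_P_given_I = 1.0   # Pr(P|I)
p_P_given_N = 1e-2  # Pr(P|N)

p_IP = p_I * p_P_given_I            # Pr(I, P)
p_NP = (1 - p_I) * p_P_given_N      # Pr(N, P)
p_I_given_P = p_IP / (p_IP + p_NP)  # Pr(I | P)
print(p_I_given_P)  # ≈ 0.0099
```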
The door we chose has probability $1/3$ of hiding the prize. After the host reveals a losing door, the probability that the remaining unchosen door hides the prize is $1-1/3=2/3$. Note that the host does not open a door at random: he always opens a losing door, so the combined $2/3$ probability of the two unchosen doors collapses onto the one remaining door.
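This collapse of probability onto the remaining door can be verified by simulation (a minimal sketch; when the first pick is the prize, the host's choice among the two losing doors is made deterministically here, which does not affect the win rate):

```python
import random

def play(switch, rng):
    """One Monty Hall game; returns True if the final choice wins."""
    prize = rng.randrange(3)
    choice = rng.randrange(3)
    # Host opens some door that is neither the pick nor the prize.
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

rng = random.Random(0)
n = 100_000
win_rate = sum(play(True, rng) for _ in range(n)) / n
print(win_rate)  # ≈ 2/3
```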
Pairwise independence means $$P(X_iX_j)=P(X_i)P(X_j), \forall i\neq j\in \{1,\ldots ,n\}$$
Mutually independent means $$P\left(\prod_{i\in S}X_i\right)=\prod_{i\in S}P(X_i), \quad \forall S\subseteq \{1,\ldots ,n\},\ |S|\geq 2$$
Thus for $n\geq 3$, the two sets of conditions are not equivalent: mutual independence implies pairwise independence, but not conversely.
We want to prove $$P(x,y|z)=P(x|z)P(y|z)\Leftrightarrow P(x,y|z)=g(x,z)h(y,z)$$
The LHS $\Rightarrow$ RHS direction is obvious (take $g=P(x|z)$ and $h=P(y|z)$). For RHS $\Rightarrow$ LHS, first note that $p(x,y|z)$ must satisfy the normalization condition $$\iint p(x,y|z)\,dx\,dy=\left(\int g(x,z)dx\right)\left(\int h(y,z)dy\right)=1$$ Marginalizing gives $$P(x|z)= g(x,z)\int h(y,z)dy,\qquad P(y|z)= h(y,z)\int g(x,z)dx$$ Multiplying these two and applying the normalization condition recovers the LHS.
We need to prove $$P(XYWZ)=P(XZ)P(YWZ)/P(Z),$$ which follows from substituting the second equation into the first.
We need to prove $$P(XYZW)=P(XZW)P(YZW)/P(ZW),$$ which cannot be derived from the given conditions.
The pdf of $y=1/x$, where $x$ follows a Gamma distribution with density $f(x)=\frac{b^a}{\Gamma(a)}x^{a-1}e^{-bx}$, is
\begin{align} g(y)&=f(x(y))\left|\frac{dx}{dy}\right|=f(1/y)/y^2\\ &=\frac{b^a}{\Gamma(a)}(1/y)^{a-1}e^{-b/y}/y^2\\ &=\frac{b^a}{\Gamma(a)}y^{-(a+1)}e^{-b/y} \end{align}which is the inverse-gamma density.

Notice that $-\sum_x\sum_y p(x,y)\log p(x)=-\sum_x p(x)\log p(x)=H(X)$
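Returning to the change of variables above, a quick sampling check (using the standard fact that the inverse-gamma distribution has mean $b/(a-1)$ for $a>1$; the parameter values are arbitrary):

```python
import random

# X ~ Gamma(shape a, rate b); then Y = 1/X is inverse-gamma,
# with mean b/(a-1) when a > 1.
a, b = 3.0, 2.0
rng = random.Random(0)
# random.gammavariate takes (shape, scale); scale = 1/rate.
ys = [1 / rng.gammavariate(a, 1 / b) for _ in range(200_000)]
mean_y = sum(ys) / len(ys)
print(mean_y)  # ≈ b/(a-1) = 1.0
```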
\begin{align} I(X,Y)&=\sum_x\sum_y p(x,y)\log\frac{p(x,y)}{p(x)p(y)}\\ &=H(X)+H(Y)-H(X,Y) \end{align}\begin{align} H(Y|X)&=-\sum_x\sum_y p(x,y)\log\frac{p(x,y)}{p(x)}\\ &=H(X,Y)-H(X) \end{align}So $I(X,Y)=H(X)-H(X|Y)=H(Y)-H(Y|X)$
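These identities are easy to check numerically on a small joint distribution (the table below is an arbitrary example):

```python
from math import log2

# An arbitrary joint distribution p(x, y) on {0,1} x {0,1}.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}   # marginal p(x)
py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}   # marginal p(y)

def H(dist):
    """Shannon entropy in bits."""
    return -sum(v * log2(v) for v in dist.values() if v > 0)

# Mutual information, directly from the definition.
I = sum(v * log2(v / (px[x] * py[y])) for (x, y), v in p.items())
print(I, H(px) + H(py) - H(p))  # the two values agree
```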
Suppose we have $$X_{\pm}=\frac{X_1\pm X_2}{\sqrt{2}}$$ then $$\begin{bmatrix} X_+\\ X_- \end{bmatrix}\sim N\left(\mathbf{0},\begin{bmatrix}\sigma_+^2&\\&\sigma_-^2\end{bmatrix}\right),$$ i.e., the joint density factorizes as $N(0,\sigma_+^2)\,N(0,\sigma_-^2)$.
And matching the exponent of the joint density, $$\frac{X_+^2}{\sigma_+^2}+\frac{X_-^2}{\sigma_-^2}=\frac{(X_1+X_2)^2}{2\sigma_+^2}+\frac{(X_1-X_2)^2}{2\sigma_-^2}=\frac{X_1^2-2\rho X_1X_2+X_2^2}{\sigma^2(1-\rho^2)}$$
Thus $\sigma_\pm^2=(1\pm\rho)\sigma^2$, and $X_+$ is independent of $X_-$. From $X_\pm$ we can recover the distribution of $X_1$ and $X_2$.
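A simulation sketch (sampling a correlated pair with arbitrary $\rho=0.6$, $\sigma=1$) confirms the variances $(1\pm\rho)\sigma^2$ and the vanishing correlation of $X_\pm$:

```python
import math
import random

rho, sigma = 0.6, 1.0   # arbitrary example values
rng = random.Random(0)
n = 200_000
xp, xm = [], []
for _ in range(n):
    # Build (X1, X2) ~ N(0, sigma^2) with correlation rho.
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1 = sigma * z1
    x2 = sigma * (rho * z1 + math.sqrt(1 - rho**2) * z2)
    xp.append((x1 + x2) / math.sqrt(2))
    xm.append((x1 - x2) / math.sqrt(2))

var = lambda v: sum(t * t for t in v) / n   # means are zero
cov = sum(a * b for a, b in zip(xp, xm)) / n
print(var(xp), var(xm), cov)  # ≈ 1.6, 0.4, 0.0
```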
If $\rho=1$, then $X_1=X_2=X_+/\sqrt 2$, and the mutual information $I(X_1,X_2)=H(X_1)$
If $\rho=-1$, the situation is similar with $X_-$ in place of $X_+$.
If $\rho=0$, they are independent. So $I(X_1,X_2)=0$
As a result, since $\sigma_\pm^2=(1\pm\rho)\sigma^2\geq 0$, we get $|\rho|\leq 1$
MLE (Maximum Likelihood Estimation) maximizes the likelihood $$\prod_i q(x=i|\theta)^{n_i}=\exp\left(\sum_i n_i\log q_i\right)=\exp\left(n\sum_ip_i\log q_i\right)$$ where $n_i$ is the count of outcome $i$, $n=\sum_i n_i$, and $p_i=n_i/n$ is the empirical frequency.
Since the exponential is monotone, this is equivalent to minimizing the cross-entropy $-\sum_ip_i\log q_i$, which equals the KL divergence $\sum_i p_i\log(p_i/q_i)$ plus the constant $H(p)=-\sum_i p_i\log p_i$.
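The equivalence can be illustrated numerically: for a fixed empirical distribution $p$, the cross-entropy is minimized at $q=p$ (the candidate $q$'s below are arbitrary):

```python
from math import log

p = [0.5, 0.3, 0.2]  # empirical frequencies

def cross_entropy(q):
    return -sum(pi * log(qi) for pi, qi in zip(p, q))

# H(p) is the value at q = p; any other q does no better,
# since the gap is exactly KL(p || q) >= 0.
best = cross_entropy(p)
for q in ([0.4, 0.4, 0.2], [0.6, 0.2, 0.2], [1/3, 1/3, 1/3]):
    assert cross_entropy(q) >= best
print(best)  # H(p) ≈ 1.0297 nats
```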
$f=x^{a-1}(1-x)^{b-1}/\mathrm{B}(a,b)\propto x^{a-1}(1-x)^{b-1}$ $$E(x)=\int xf\,dx=\mathrm{B}(a+1,b)/\mathrm{B}(a,b)=\frac{a}{a+b}$$ We can prove the last step as follows:
\begin{align} \mathrm{B}(p, q+1)&=\int_0^1 x^{p-1}(1-x)^{q}dx\\ &=\int_0^1 x^{p-1}(1-x)^{q-1}dx-\int_0^1 x^{p}(1-x)^{q-1}dx\\ &=\int_0^1 x^{p-1}(1-x)^{q-1}dx-\frac{p}{q}\int_0^1 x^{p-1}(1-x)^{q}dx\\ &=\mathrm{B}(p,q)-\frac{p}{q}\mathrm{B}(p, q+1) \end{align}where the third line follows by integration by parts. As a result, $\mathrm{B}(p, q+1)=\dfrac{q}{p+q}\mathrm{B}(p,q)$, and by the symmetry $\mathrm{B}(p,q)=\mathrm{B}(q,p)$ also $\mathrm{B}(p+1,q)=\dfrac{p}{p+q}\mathrm{B}(p,q)$, which gives $E(x)=\frac{a}{a+b}$. We can also use the relationship between the Beta and Gamma functions together with $\Gamma(p+1)=p\Gamma(p)$ $$\mathrm{B}(p,q+1)=\frac{\Gamma(p)\Gamma(q+1)}{\Gamma(p+q+1)}=\frac{q\Gamma(p)\Gamma(q)}{(p+q)\Gamma(p+q)}=\frac{q}{p+q}\mathrm{B}(p,q)$$
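The recurrence can be checked numerically through the Gamma-function representation (arbitrary test points):

```python
from math import gamma

def B(p, q):
    """Beta function via Gamma: B(p, q) = Γ(p)Γ(q)/Γ(p+q)."""
    return gamma(p) * gamma(q) / gamma(p + q)

for p, q in [(1.5, 2.0), (3.0, 4.5), (0.7, 0.9)]:
    assert abs(B(p, q + 1) - q / (p + q) * B(p, q)) < 1e-12
print("B(p, q+1) = q/(p+q) * B(p, q) holds at all test points")
```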
$$E(x^2)=\int x^2f\,dx=\mathrm{B}(a+2,b)/\mathrm{B}(a,b)=\frac{a}{a+b}\cdot\frac{a+1}{a+b+1}$$$$\frac{f'}{f}=\frac{a-1}{x}-\frac{b-1}{1-x}=0\Rightarrow x_m=\frac{a-1}{a+b-2}$$$$\mathrm{Var}(x)=E(x^2)-E(x)^2=\frac{ab}{(a+b)^2(a+b+1)}$$Now draw two independent variables uniformly from $[0,1]$. The cdf (cumulative distribution function) of each variable is
$$F_1(x)=\Pr(X<x)=x$$So the probability that the smaller of the two is less than $x$ is $$F_m(x)=1-(1-F_1(x))^2=x(2-x)$$
So the density function of the minimum is $f_m=F_m'=2(1-x)$; the subscript $m$ stands for minimum.
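As a check, $E[\min]=\int_0^1 x\cdot 2(1-x)\,dx = 1/3$, which simulation reproduces:

```python
import random

# Minimum of two independent Uniform(0,1) draws has density 2(1-x),
# hence mean 1/3.
rng = random.Random(0)
n = 200_000
mean_min = sum(min(rng.random(), rng.random()) for _ in range(n)) / n
print(mean_min)  # ≈ 1/3
```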