We have $$\E[x]=0,\quad \E[y]=\E[x^2]=\Var[x]+\E[x]^2=1/3.$$
\begin{align} \Cov[x,y]&=\E[xy-x\bar y-\bar xy+\bar x\bar y]\\ &=\E[xy]-\E[x]\E[y]\\ &=\int_{-1}^1 \frac{x^3}{2}\,dx\\ &=0\\ &\Rightarrow\rho=0\end{align}It is obvious that $\E[x]=\E[y]=0$,
\begin{align} \Cov[x,y]&=\E[xy]-\E[x]\E[y]\\ &=P(W=1)\E[xy|W=1]+P(W=-1)\E[xy|W=-1]\\ &=(\E[x^2]+\E[-x^2])/2\\ &=0 \end{align}We need to prove $|\rho|\leq 1$, i.e. $$\Cov[x,y]^2\leq \Cov[x,x]\Cov[y,y]$$
The $\Cov$ function is symmetric, bilinear, and positive semidefinite, i.e. it satisfies the axioms of an inner product on (zero-mean) random variables, so the conclusion follows directly from the Cauchy–Schwarz inequality.
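The zero-correlation cases above can be checked numerically. A quick Monte Carlo sketch for $x\sim U(-1,1)$, $y=x^2$ (sample size and seed are arbitrary choices, not from the text):

```python
import numpy as np

# Monte Carlo check: x ~ Uniform(-1, 1), y = x^2.
# The derivation gives E[y] = 1/3 and Cov[x, y] = 0, hence rho = 0.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x ** 2

print(y.mean())                 # close to 1/3
print(np.corrcoef(x, y)[0, 1])  # close to 0
```

Note that $x$ and $y$ are uncorrelated yet strongly dependent, which is the point of the exercise.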
It is obvious that $\Cov[X,Y+c]=\Cov[X,Y]$ for any constant $c$. So for $Y=aX+b$,
\begin{align} \Cov[X,Y]&=\Cov[X,aX]=a\Cov[X,X]\\ \Cov[Y,Y]&=a^2\Cov[X,X] \end{align}So we have $$\rho=\frac{\Cov[X,Y]}{\sqrt{\Cov[X,X]\Cov[Y,Y]}}=\frac{a\Cov[X,X]}{|a|\Cov[X,X]}=\sgn[a]$$
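A numerical illustration of $\rho=\sgn[a]$ (the particular values of $a$ and $b$ below are arbitrary):

```python
import numpy as np

# For Y = aX + b, the correlation coefficient should be sgn(a).
rng = np.random.default_rng(1)
X = rng.normal(size=100_000)
for a, b in [(2.5, 1.0), (-0.7, 3.0)]:
    Y = a * X + b
    rho = np.corrcoef(X, Y)[0, 1]
    print(a, rho)  # rho is +1 for a > 0 and -1 for a < 0 (up to float error)
```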
Suppose $\Lambda=\Sigma^{-1}=U^TDU$, where $U$ is orthogonal and $D$ is diagonal. Then the change of variables $\vec y=U(\vec x-\vec{\mu})$ (volume-preserving, since $|\det U|=1$) simplifies the integral to
\begin{align} \int \exp\left(-\frac{y^TDy}{2}\right)d\vec y&=\prod_i \int \exp\left(-\frac{D_iy_i^2}{2}\right)dy_i\\ &=\prod_i \sqrt{\frac{2\pi}{D_i}}\\ &=\frac{(2\pi)^{d/2}}{\sqrt{\det D}}\\ &=\frac{(2\pi)^{d/2}}{\sqrt{\det \Lambda}}\\ &=(2\pi)^{d/2}\sqrt{\det \Sigma} \end{align}In the $2\times 2$ case, $\det\Sigma=(1-\rho^2)\sigma_1^2\sigma_2^2$.
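The normalization constant $(2\pi)^{d/2}\sqrt{\det\Sigma}$ can be verified numerically in $d=2$; the values of $\sigma_1,\sigma_2,\rho$ below are hypothetical example choices:

```python
import numpy as np

# Numerical check of the normalization constant (2*pi)^{d/2} sqrt(det Sigma)
# in d = 2, for illustrative values of sigma1, sigma2, rho.
sigma1, sigma2, rho = 1.0, 2.0, 0.5
Sigma = np.array([[sigma1**2, rho * sigma1 * sigma2],
                  [rho * sigma1 * sigma2, sigma2**2]])
Lam = np.linalg.inv(Sigma)

# Riemann sum of exp(-x^T Lam x / 2) over a wide, fine grid.
t = np.linspace(-10, 10, 801)
dx = t[1] - t[0]
X, Y = np.meshgrid(t, t)
Q = Lam[0, 0] * X**2 + 2 * Lam[0, 1] * X * Y + Lam[1, 1] * Y**2
integral = np.exp(-Q / 2).sum() * dx * dx

print(integral, 2 * np.pi * np.sqrt(np.linalg.det(Sigma)))  # the two agree
# det(Sigma) also matches (1 - rho^2) * sigma1^2 * sigma2^2
```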
The inverse matrix of $\Sigma$ is
\begin{align} \Lambda&=\adj\Sigma/\det\Sigma\\ &=\begin{bmatrix} \sigma_2^2& -\rho\sigma_1\sigma_2\\ -\rho\sigma_1\sigma_2& \sigma_1^2 \end{bmatrix}/\left[(1-\rho^2)\sigma_1^2\sigma_2^2\right]\\ &=\frac{1}{1-\rho^2}\begin{bmatrix} \dfrac{1}{\sigma_1^2}& -\dfrac{\rho}{\sigma_1\sigma_2}\\ -\dfrac{\rho}{\sigma_1\sigma_2}& \dfrac{1}{\sigma_2^2} \end{bmatrix} \end{align}Plugging these expressions into the original pdf, we can easily verify (4.268).
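A quick check of this closed form for $\Lambda$ against a numerical inverse (the parameter values are arbitrary):

```python
import numpy as np

# Compare the closed-form precision matrix with numpy's numerical inverse.
sigma1, sigma2, rho = 1.5, 0.8, -0.3
Sigma = np.array([[sigma1**2, rho * sigma1 * sigma2],
                  [rho * sigma1 * sigma2, sigma2**2]])

Lam_closed = (1 / (1 - rho**2)) * np.array(
    [[1 / sigma1**2, -rho / (sigma1 * sigma2)],
     [-rho / (sigma1 * sigma2), 1 / sigma2**2]])

print(np.allclose(np.linalg.inv(Sigma), Lam_closed))  # True
```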
If $\sigma_1=\sigma_2=1$, this simplifies to $$p(x_1|x_2)=N\left(x_1|\mu_1+\rho(x_2-\mu_2),1-\rho^2\right)$$
Try using Python to solve it!
So the posterior mean of $\mu$ is $\dfrac{n_1\bar y^{(1)}/v_1+n_2\bar y^{(2)}/v_2}{n_1/v_1+n_2/v_2}$, and the posterior variance of $\mu$ is $\dfrac{1}{n_1/v_1+n_2/v_2}$.
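These precision-weighted formulas are easy to evaluate in Python; the sensor counts, sample means, and variances below are assumed example numbers, not data from the exercise:

```python
import numpy as np

# Posterior for mu from two sensors: sensor i contributes n_i readings of
# variance v_i with sample mean ybar_i. Values below are illustrative.
n1, ybar1, v1 = 10, 1.2, 0.5
n2, ybar2, v2 = 5, 0.8, 2.0

post_prec = n1 / v1 + n2 / v2               # posterior precision
post_var = 1.0 / post_prec                  # posterior variance of mu
post_mean = (n1 * ybar1 / v1 + n2 * ybar2 / v2) / post_prec

print(post_mean, post_var)
```

The more precise sensor (smaller $v_i$, larger $n_i$) dominates the weighted average, as the formula suggests.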
From $$\begin{bmatrix} \Sigma_{11}& \Sigma_{12}\\ \Sigma_{21}& \Sigma_{22} \end{bmatrix} \cdot\begin{bmatrix} \Lambda_{11}&\Lambda_{12}\\ \Lambda_{21}&\Lambda_{22} \end{bmatrix}=I,$$ we find
\begin{align} \Sigma_{11}^{-1}=\Lambda_{11}-\Lambda_{12}\Lambda_{22}^{-1}\Lambda_{21},\quad \Sigma_{12}^{-1}=\Lambda_{21}-\Lambda_{22}\Lambda_{12}^{-1}\Lambda_{11}\\ \Sigma_{21}^{-1}=\Lambda_{12}-\Lambda_{11}\Lambda_{21}^{-1}\Lambda_{22},\quad \Sigma_{22}^{-1}=\Lambda_{22}-\Lambda_{21}\Lambda_{11}^{-1}\Lambda_{12} \end{align}(the off-diagonal identities assume $\Lambda_{12}$ and $\Lambda_{21}$ are invertible). Using the rule $N(\mu,\Sigma)\rightarrow N_c(\Sigma^{-1}\mu,\Sigma^{-1})$ and writing $\xi=\Lambda\mu$ blockwise, i.e. $\xi_1=\Lambda_{11}\mu_1+\Lambda_{12}\mu_2$ and $\xi_2=\Lambda_{21}\mu_1+\Lambda_{22}\mu_2$,
\begin{align} p(x_2)&=N(x_2|\mu_2,\Sigma_{22})\\ &=N_c(x_2|\Sigma_{22}^{-1}\mu_2, \Sigma_{22}^{-1})\\ &=N_c(x_2|(\Lambda_{22}-\Lambda_{21}\Lambda_{11}^{-1}\Lambda_{12})\mu_2,\Sigma_{22}^{-1})\\ &=N_c(x_2|(\Lambda_{22}\mu_2+\Lambda_{21}\mu_1)-\Lambda_{21}\Lambda_{11}^{-1}(\Lambda_{11}\mu_1+\Lambda_{12}\mu_2),\Sigma_{22}^{-1})\\ &=N_c(x_2|\xi_2-\Lambda_{21}\Lambda_{11}^{-1}\xi_1,\Lambda_{22}-\Lambda_{21}\Lambda_{11}^{-1}\Lambda_{12})\\ p(x_1|x_2)&=N(x_1|\mu_{1|2}, \Sigma_{1|2})\\ &=N(x_1|\Lambda_{11}^{-1}(\Lambda_{11}\mu_1-\Lambda_{12}(x_2-\mu_2)), \Lambda_{11}^{-1})\\ &=N_c(x_1|\Lambda_{11}\mu_1-\Lambda_{12}(x_2-\mu_2),\Lambda_{11})\\ &=N_c(x_1|(\Lambda_{11}\mu_1+\Lambda_{12}\mu_2)-\Lambda_{12}x_2,\Lambda_{11})\\ &=N_c(x_1|\xi_1-\Lambda_{12}x_2,\Lambda_{11})\\ \end{align}
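The block identities used in this derivation can be sanity-checked numerically with a random positive-definite precision matrix (the size and seed are arbitrary):

```python
import numpy as np

# Check two block identities: the marginal precision of x2 and the
# conditional covariance of x1 | x2, for a random SPD precision matrix.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
Lam = A @ A.T + 4 * np.eye(4)   # random symmetric positive-definite precision
Sigma = np.linalg.inv(Lam)

i, j = slice(0, 2), slice(2, 4)
L11, L12 = Lam[i, i], Lam[i, j]
L21, L22 = Lam[j, i], Lam[j, j]

# Marginal precision: Sigma_22^{-1} = Lambda_22 - Lambda_21 Lambda_11^{-1} Lambda_12
marg_ok = np.allclose(np.linalg.inv(Sigma[j, j]),
                      L22 - L21 @ np.linalg.inv(L11) @ L12)

# Conditional covariance of x1 | x2 equals Lambda_11^{-1}
cond_ok = np.allclose(np.linalg.inv(L11),
                      Sigma[i, i] - Sigma[i, j] @ np.linalg.inv(Sigma[j, j]) @ Sigma[j, i])

print(marg_ok, cond_ok)  # True True
```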