(a) (Logarithmic barrier for second-order cone) $f(x,t) = \log\left(\displaystyle\frac{1}{t^{2} - x^{\top}x}\right)$ on ${\rm{\mathbf{dom}}}(f) = \{(x,t) \in \mathbb{R}^{n} \times \mathbb{R} \:\mid\: \parallel x\parallel_{2} \:<\: t\}$, the second-order cone (also known as the ice-cream cone or the Lorentz cone).
CONVEX. The domain (the second-order cone) is a convex cone. Notice that $$f(x,t) = - \log(t^{2} - x^{\top}x) = -\log t - \log\left(t - \frac{x^{\top}x}{t}\right).$$ The first term (the negative of a log) is convex. Let us examine the second term. The quadratic-over-linear function $x^{\top}x/t$ is convex on $\{(x,t)\:|\: t>0\}$, therefore its negative $-x^{\top}x/t$ is concave. Consequently, $t - x^{\top}x/t$ is a sum of concave functions (a linear function is both convex and concave), and hence concave; it is also positive on the domain, since $\parallel x \parallel_{2} < t$. Since $\log(\cdot)$ is concave and increasing, the composition rules (textbook, p. 84) show that $\log(t - x^{\top}x/t)$ is concave, so its negative is convex. Thus $f(x,t)$, being a sum of two convex functions, is convex in $(x,t)$.
Let us also prove convexity via the second-order (Hessian) condition. For $h:\mathbb{R}\mapsto\mathbb{R}$ and $g:\mathbb{R}^{m}\mapsto\mathbb{R}$, recall that the chain rule for the Hessian of the composition $f(z) = (h \circ g)(z)$ is $$\text{Hess}\left(f\right) = h^{\prime}\left(g(z)\right)\:\text{Hess}(g) \: + \: h^{\prime\prime}(g(z))\:\left(\nabla g\right)\left(\nabla g\right)^{\top}, \quad \text{which is an $m\times m$ symmetric matrix}.$$
For us, $h(\cdot) = -\log(\cdot)$ and $g(z) = z^{\top}Dz$, where $z=(t,x)\in\mathbb{R}^{n+1}$ and $D={\rm{diag}}(1,-\mathbf{1})$ is the $(n+1)\times(n+1)$ diagonal matrix with a single $+1$ followed by $n$ entries $-1$, so that $g(z) = z^{\top}Dz = t^{2} - x^{\top}x$, which is positive on the domain.
Direct application of the above chain rule gives $$\text{Hess}\left(f\right) = \displaystyle\frac{2}{\left(z^{\top}Dz\right)^2}\left[2Dzz^{\top}D \: - \: \left(z^{\top}Dz\right)D\right].$$ Since the pre-factor of the Hessian above is positive on the domain, showing convexity reduces to proving that $y^{\top}\left[2Dzz^{\top}D \: - \: \left(z^{\top}Dz\right)D\right]y \geq 0$ for all $y\in\mathbb{R}^{n+1}$. Expanding, $$y^{\top}\left[2Dzz^{\top}D \: - \: \left(z^{\top}Dz\right)D\right]y = 2 \left(y^{\top}Dz\right)^{2} \: - \: \left(y^{\top}Dy\right) \left(z^{\top}Dz\right) = \left(y^{\top}Dz\right)^{2} \: + \: \left[\left(y^{\top}Dz\right)^{2} - \left(y^{\top}Dy\right)\left(z^{\top}Dz\right)\right],$$ so it suffices to prove the "reverse Cauchy-Schwarz" inequality $\left(y^{\top}Dz\right)^{2} \geq \left(y^{\top}Dy\right)\left(z^{\top}Dz\right)$, valid whenever $z^{\top}Dz > 0$. If $y^{\top}Dy \leq 0$, the right-hand side is nonpositive and the inequality is immediate. Otherwise, set $w = y - \lambda z$ with $\lambda = \left(y^{\top}Dz\right)/\left(z^{\top}Dz\right)$, so that $w^{\top}Dz = 0$, i.e. $w_{1}z_{1} = \sum_{i\geq 2}w_{i}z_{i}$. Since $z_{1}^{2} > \sum_{i\geq 2}z_{i}^{2}$ (in particular $z_{1}\neq 0$), the ordinary Cauchy-Schwarz inequality gives $$w_{1}^{2} \:=\: \frac{\left(\sum_{i\geq 2}w_{i}z_{i}\right)^{2}}{z_{1}^{2}} \:\leq\: \frac{\left(\sum_{i\geq 2}w_{i}^{2}\right)\left(\sum_{i\geq 2}z_{i}^{2}\right)}{z_{1}^{2}} \:\leq\: \sum_{i\geq 2}w_{i}^{2},$$ hence $w^{\top}Dw = w_{1}^{2} - \sum_{i\geq 2}w_{i}^{2} \leq 0$. Expanding $w^{\top}Dw$ with the chosen $\lambda$ yields $$0 \:\geq\: w^{\top}Dw \:=\: y^{\top}Dy \:-\: \frac{\left(y^{\top}Dz\right)^{2}}{z^{\top}Dz},$$ which is the claimed inequality. Therefore $\text{Hess}(f) \succeq 0$ on the domain, and $f$ is convex.
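As a quick numerical sanity check (purely illustrative, not part of the proof), we can sample random points inside the second-order cone and verify both midpoint convexity of $f$ and positive semidefiniteness of the Hessian formula above. The sketch below uses NumPy; the helper names (`sample_cone_point`, etc.) are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
D = np.diag([1.0] + [-1.0] * n)  # z = (t, x), so z^T D z = t^2 - x^T x

def sample_cone_point():
    # sample z = (t, x) with ||x||_2 < t, i.e. strictly inside the cone
    x = rng.normal(size=n)
    t = np.linalg.norm(x) + rng.uniform(0.1, 2.0)
    return np.concatenate(([t], x))

def f(z):
    return -np.log(z @ D @ z)  # f = -log(t^2 - x^T x)

def hess(z):
    # Hess(f) = (2 / (z^T D z)^2) [ 2 D z z^T D - (z^T D z) D ]
    u = z @ D @ z
    return (2.0 / u**2) * (2.0 * np.outer(D @ z, D @ z) - u * D)

for _ in range(100):
    z1, z2 = sample_cone_point(), sample_cone_point()
    zm = 0.5 * (z1 + z2)  # feasible, since the cone is convex
    assert f(zm) <= 0.5 * f(z1) + 0.5 * f(z2) + 1e-9   # midpoint convexity
    assert np.linalg.eigvalsh(hess(z1)).min() >= -1e-8  # Hessian is PSD
```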
(b) (Maximum eigenvalue) $f(X) = \lambda_{\max}\left(X\right)$ for $X\in\mathbb{S}^{n}$, the set of $n\times n$ symmetric matrices.
CONVEX. The domain $\mathbb{S}^{n}$ is convex. We showed in class (Lecture 7, p. 3-5) that $\lambda_{\max}(X) = \underset{x^{\top}x = 1}{\sup}\:x^{\top}Xx$ for $X\in\mathbb{S}^{n}$; being the pointwise supremum of functions that are linear (hence convex) in $X$, $\lambda_{\max}$ is therefore convex.
(c) (Minimum eigenvalue) $f(X) = \lambda_{\min}\left(X\right)$ for $X\in\mathbb{S}^{n}$, the set of $n\times n$ symmetric matrices.
CONCAVE. The domain $\mathbb{S}^{n}$ is convex. We showed in class (Lecture 7, p. 3-5) that $\lambda_{\min}(X) = \underset{x^{\top}x = 1}{\inf}\:x^{\top}Xx$ for $X\in\mathbb{S}^{n}$; being the pointwise infimum of functions that are linear (hence concave) in $X$, $\lambda_{\min}$ is therefore concave.
Notice that in this and the previous problem, we used the fact that a linear function is both convex and concave.
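The variational characterizations above lend themselves to a quick numerical spot-check (a sketch, not part of the argument): sample random symmetric matrices and test midpoint convexity of $\lambda_{\max}$ and midpoint concavity of $\lambda_{\min}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def rand_sym():
    A = rng.normal(size=(n, n))
    return (A + A.T) / 2.0  # symmetric part

def lmax(X):
    return np.linalg.eigvalsh(X)[-1]  # eigvalsh returns eigenvalues in ascending order

def lmin(X):
    return np.linalg.eigvalsh(X)[0]

for _ in range(200):
    X, Y = rand_sym(), rand_sym()
    M = 0.5 * (X + Y)
    assert lmax(M) <= 0.5 * lmax(X) + 0.5 * lmax(Y) + 1e-10  # lambda_max convex
    assert lmin(M) >= 0.5 * lmin(X) + 0.5 * lmin(Y) - 1e-10  # lambda_min concave
```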
(d) $f(x) =\: \parallel x\: \parallel_{p} \:= \left(\displaystyle\sum_{i=1}^{n} x_{i}^{p}\right)^{1/p}$ with $-\infty < p < 1, p \neq 0$, on ${\rm{\mathbf{dom}}}(f) = \mathbb{R}^{n}_{>0}$, the positive orthant.
CONCAVE. The domain (the positive orthant) is convex (in fact a convex cone). Let us verify the second-order (Hessian) condition for concavity, i.e. that $y^{\top}\:\text{Hess}(f)\:y \leq 0$ for all $y\in\mathbb{R}^{n}$. The first derivatives of $f$ are $$\displaystyle\frac{\partial f}{\partial x_{i}} = \left(\displaystyle\sum_{k=1}^{n}x_{k}^{p}\right)^{(1-p)/p} x_{i}^{p-1} = \left(\displaystyle\frac{f(x)}{x_{i}}\right)^{1-p}.$$ The second derivatives are given by $$\displaystyle\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} = \displaystyle\frac{1-p}{x_{i}} \left(\displaystyle\frac{f(x)}{x_{i}}\right)^{-p} \left(\displaystyle\frac{f(x)}{x_{j}}\right)^{1-p} = \displaystyle\frac{1-p}{f(x)} \left(\displaystyle\frac{(f(x))^{2}}{x_{i}x_{j}}\right)^{1-p}, \quad i\neq j,$$ and $$\displaystyle\frac{\partial^{2} f}{\partial x_{i}^{2}} = \displaystyle\frac{1-p}{f(x)} \left(\displaystyle\frac{(f(x))^{2}}{x_{i}^{2}}\right)^{1-p} - \displaystyle\frac{1-p}{x_{i}} \left(\displaystyle\frac{f(x)}{x_{i}}\right)^{1-p}.$$ The claimed inequality $$y^{\top}\:\text{Hess}(f)\:y =\displaystyle\frac{1-p}{f(x)} \left(\left(\displaystyle\sum_{i=1}^{n}\displaystyle\frac{y_{i}(f(x))^{1-p}}{x_{i}^{1-p}}\right)^{2} - \displaystyle\sum_{i=1}^{n}\displaystyle\frac{y_{i}^{2}(f(x))^{2-p}}{x_{i}^{2-p}}\right) \: \leq \: 0$$ then follows (note $1-p>0$) from the Cauchy-Schwarz inequality $\left(a^{\top}b\right)^{2} \leq\: \parallel a \parallel_{2}^{2}\: \parallel b \parallel_{2}^{2}$ with $$ a_{i}= \left(\displaystyle\frac{f(x)}{x_{i}}\right)^{-p/2}, \qquad b_{i} = y_{i}\left(\displaystyle\frac{f(x)}{x_{i}}\right)^{1-p/2},$$ upon noting that $\sum_{i=1}^{n}a_{i}^{2} = (f(x))^{-p}\sum_{i=1}^{n}x_{i}^{p} = 1$.
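As an illustrative sanity check of the concavity just proved (not part of the proof), one can test midpoint concavity of $f$ at random points of the positive orthant for a sample value of $p$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
p = -0.5  # any p < 1 with p != 0 works here

def f(x):
    # f(x) = (sum_i x_i^p)^(1/p) on the positive orthant
    return np.sum(x ** p) ** (1.0 / p)

for _ in range(200):
    x = rng.uniform(0.1, 3.0, size=n)
    y = rng.uniform(0.1, 3.0, size=n)
    m = 0.5 * (x + y)
    # midpoint concavity on the positive orthant
    assert f(m) >= 0.5 * f(x) + 0.5 * f(y) - 1e-10
```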
(e) (Kullback-Leibler divergence) $f(p,q) = \displaystyle\sum_{i=1}^{n} p_{i}\log\left(\displaystyle\frac{p_{i}}{q_{i}}\right)$ on ${\rm{\mathbf{dom}}}(f) = \{(p,q)\in\mathbb{R}_{\geq 0}^{n}\times\mathbb{R}_{\geq 0}^{n} \:\mid\: \mathbf{1}^{\top}p = \mathbf{1}^{\top}q = 1\}$, pairs of vectors in the probability simplex (with the usual conventions $0\log 0 = 0$ and $p_{i}\log(p_{i}/0) = +\infty$ for $p_{i}>0$).
CONVEX. It is clear that $\textbf{dom}(f)$ is convex. To show that $f(p,q)$ is jointly convex in $(p,q)$, let us choose two different pairs of probability vectors $(p_{1},q_{1})\in\textbf{dom}(f)$ and $(p_{2},q_{2})\in\textbf{dom}(f)$. For any choice of $0\leq\theta\leq 1$, let $$p = \theta p_{1} + (1-\theta)p_{2}, \quad q = \theta q_{1} + (1-\theta)q_{2}.$$ To establish convexity of $f(p,q)$, we will now prove that $$ f(p,q) \leq \theta f(p_{1},q_{1}) \:+\: (1-\theta) f(p_{2},q_{2}).$$ We start with the following Lemma.
Lemma (Log sum inequality): Let $(a,b)\in\mathbb{R}^{n}_{\geq 0} \times \mathbb{R}^{n}_{\geq 0}$. Then $$ \displaystyle\sum_{i=1}^{n} a_{i}\log\displaystyle\frac{a_{i}}{b_{i}} \geq \left(\displaystyle\sum_{i=1}^{n}a_{i}\right)\log\displaystyle\frac{\displaystyle\sum_{i=1}^{n}a_{i}}{\displaystyle\sum_{i=1}^{n}b_{i}}.$$
Proof: For notational ease, let us denote $\sum_{i}a_{i}=\alpha$ and $\sum_{i}b_{i}=\beta$. Recalling that $g(x) = x\log x$ is a convex function (easily checked: $g^{\prime\prime}(x) = 1/x > 0$), we have $$ \displaystyle\sum_{i=1}^{n} a_{i}\log\displaystyle\frac{a_{i}}{b_{i}} = \displaystyle\sum_{i=1}^{n} b_{i} g\left(\displaystyle\frac{a_{i}}{b_{i}}\right) = \beta \displaystyle\sum_{i=1}^{n} \displaystyle\frac{b_{i}}{\beta} g\left(\displaystyle\frac{a_{i}}{b_{i}}\right) \geq \beta g\left(\displaystyle\sum_{i=1}^{n}\displaystyle\frac{b_{i}}{\beta}\displaystyle\frac{a_{i}}{b_{i}}\right) = \beta g\left(\displaystyle\frac{1}{\beta}\displaystyle\sum_{i=1}^{n}a_{i}\right) = \beta g\left(\frac{\alpha}{\beta}\right) = \alpha \log\frac{\alpha}{\beta},$$ where the inequality is Jensen's inequality applied to the convex function $g(\cdot)$ with the weights $b_{i}/\beta \geq 0$, $\sum_{i}b_{i}/\beta = 1$.
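The lemma is easy to spot-check numerically (an illustration, not a proof); `log_sum_lhs` and `log_sum_rhs` are our names for the two sides of the inequality:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_sum_lhs(a, b):
    # sum_i a_i log(a_i / b_i)
    return float(np.sum(a * np.log(a / b)))

def log_sum_rhs(a, b):
    # (sum_i a_i) log( (sum_i a_i) / (sum_i b_i) )
    return float(a.sum() * np.log(a.sum() / b.sum()))

for _ in range(500):
    a = rng.uniform(0.01, 5.0, size=8)
    b = rng.uniform(0.01, 5.0, size=8)
    assert log_sum_lhs(a, b) >= log_sum_rhs(a, b) - 1e-10  # log sum inequality
```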
Now back to the Kullback-Leibler divergence. Let $a_{1i}=\theta p_{1i}$, $a_{2i}=(1-\theta) p_{2i}$, $b_{1i}=\theta q_{1i}$, $b_{2i}=(1-\theta) q_{2i}$. Then, $$\begin{aligned} f(p,q) = \displaystyle\sum_{i=1}^{n} \left(\theta p_{1i} + (1-\theta)p_{2i}\right)\log\displaystyle\frac{\theta p_{1i} + (1-\theta)p_{2i}}{\theta q_{1i} + (1-\theta)q_{2i}} &= \displaystyle\sum_{i=1}^{n} \left(a_{1i} + a_{2i}\right) \log\displaystyle\frac{a_{1i}+a_{2i}}{b_{1i}+b_{2i}}\\ &\leq \displaystyle\sum_{i=1}^{n} \left(a_{1i}\log\displaystyle\frac{a_{1i}}{b_{1i}} + a_{2i}\log\displaystyle\frac{a_{2i}}{b_{2i}}\right) \qquad \text{(log sum inequality from the Lemma above, applied for each $i$ with $n=2$)}\\ &= \displaystyle\sum_{i=1}^{n} \left(\theta p_{1i}\log\displaystyle\frac{\theta p_{1i}}{\theta q_{1i}} \: + \: (1-\theta)p_{2i}\log\displaystyle\frac{(1-\theta)p_{2i}}{(1-\theta)q_{2i}}\right) \\ &= \theta\:f(p_{1},q_{1}) \: + \: (1-\theta)\:f(p_{2},q_{2}). \end{aligned}$$
See Example 3.19 in the textbook.
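For illustration, the joint convexity just established can be spot-checked numerically on random pairs of probability vectors (a sketch, not part of the proof; the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

def rand_simplex():
    # a random point of the probability simplex, bounded away from the boundary
    v = rng.uniform(0.05, 1.0, size=n)
    return v / v.sum()

def kl(p, q):
    # Kullback-Leibler divergence sum_i p_i log(p_i / q_i)
    return float(np.sum(p * np.log(p / q)))

for _ in range(200):
    p1, q1, p2, q2 = (rand_simplex() for _ in range(4))
    th = rng.uniform()
    p = th * p1 + (1.0 - th) * p2
    q = th * q1 + (1.0 - th) * q2
    # joint convexity of KL in the pair (p, q)
    assert kl(p, q) <= th * kl(p1, q1) + (1.0 - th) * kl(p2, q2) + 1e-10
```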