Given $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$, $c\in\mathbb{R}^{n}$, rewrite the LP $$\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\quad c^{\top}x\\ \text{subject to}\quad Ax \preceq b,$$ as the SDP $$\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\quad c^{\top}x\\ \text{subject to}\quad F(x) \succeq 0,$$ that is, write the matrix $F(x)$ appearing in the LMI constraint of the SDP in terms of the data of the LP.
We write the elementwise vector inequality $Ax \preceq b$ as the LMI $F(x) \succeq 0$, with the $m\times m$ matrix
$$F(x) = {\rm{diag}}\left(b - Ax\right) = {\rm{diag}}\left(b_{1} - (Ax)_{1}, ..., b_{m} - (Ax)_{m}\right).$$

Given $f\in\mathbb{R}^{n}$, $A_{i}\in\mathbb{R}^{(n_{i}-1)\times n}$, $b_{i}\in\mathbb{R}^{n_{i}-1}$, $c_{i}\in\mathbb{R}^{n}$, $d_{i}\in\mathbb{R}$, rewrite the SOCP $$\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\qquad f^{\top}x\\ \text{subject to}\quad \parallel A_{i}x + b_{i}\parallel_{2}\;\leq\; c_{i}^{\top}x + d_{i}, \quad i=1,...,m,$$ as the SDP $$\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\quad f^{\top}x\\ \text{subject to}\quad F(x) \succeq 0,$$ that is, write the matrix $F(x)$ appearing in the LMI constraint of the SDP in terms of the data of the SOCP.
(Hint: Use the Schur Complement Lemma in Lecture 8, p. 10)
For any vector $u$ and scalar $t$, the Schur Complement Lemma gives $$\parallel u \parallel_{2} \,\leq\, t \quad \Leftrightarrow \quad \begin{bmatrix} tI & u\\u^{\top} & t\end{bmatrix} \succeq 0$$ (the case $t = 0$ is handled separately: both sides then require $u = 0$). Applying this to each of the given SOCP constraints results in the LMIs: $$G_{i}(x) := \begin{bmatrix} \left(c_{i}^{\top}x + d_{i}\right)I & A_{i}x + b_{i}\\\left(A_{i}x + b_{i}\right)^{\top} & c_{i}^{\top}x + d_{i}\end{bmatrix} \succeq 0, \quad i=1, ..., m,$$ which together are equivalent to the single LMI $$F(x) = {\rm{diag}}\left(G_{1}(x), ..., G_{m}(x)\right) \succeq 0,$$ since a block diagonal matrix is positive semidefinite if and only if each of its diagonal blocks is positive semidefinite.
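To sanity-check this construction numerically, here is a small numpy sketch (the data and the function name `G` are our own illustrative choices, not part of the problem): it builds one block $G_{i}(x)$ and verifies that when the second-order cone constraint holds, the block is positive semidefinite.

```python
import numpy as np

def G(x, Ai, bi, ci, di):
    """Build the LMI block G_i(x) for one second-order cone constraint."""
    u = Ai @ x + bi              # u = A_i x + b_i
    t = ci @ x + di              # t = c_i^T x + d_i
    k = u.shape[0]
    top = np.hstack([t * np.eye(k), u[:, None]])
    bot = np.hstack([u[None, :], [[t]]])
    return np.vstack([top, bot])

rng = np.random.default_rng(1)
n, k = 3, 4
Ai = rng.standard_normal((k, n))
bi = rng.standard_normal(k)
ci = rng.standard_normal(n)
x = rng.standard_normal(n)
# Choose d_i so that the SOC constraint holds with slack 0.5.
di = np.linalg.norm(Ai @ x + bi) - ci @ x + 0.5

Gx = G(x, Ai, bi, ci, di)
soc_holds = np.linalg.norm(Ai @ x + bi) <= ci @ x + di
psd = np.min(np.linalg.eigvalsh(Gx)) >= -1e-9
assert soc_holds and psd
```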
The standard form of a GP, given by $$ \underset{x\in\mathbb{R}^{n}_{>0}}{\text{minimize}}\qquad f_{0}(x)\\ \text{subject to}\quad f_{i}(x) \leq 1, \quad i=1,...,m, \\ \qquad\qquad h_{j}(x) = 1, \quad j=1,...,p,$$ where $f_{0}, f_{1}, ..., f_{m}$ are posynomials, and $h_{1}, ..., h_{p}$ are monomials, does not look like a convex optimization problem. But we can transform it into a convex optimization problem by the logarithmic change of variables $y_{i} = \log x_{i}$ (so $x_{i} = \exp(y_{i})$), followed by applying $\log(\cdot)$ to the objective function and to both sides of the constraints: $$ \underset{y\in\mathbb{R}^{n}}{\text{minimize}}\qquad \log\left(f_{0}(\exp(y))\right)\\ \text{subject to}\quad \log(f_{i}(\exp(y))) \leq 0, \quad i=1,...,m, \\ \qquad\qquad \log(h_{j}(\exp(y))) = 0, \quad j=1,...,p,$$ where $\exp(y)$ denotes the elementwise exponential of the vector $y\in\mathbb{R}^{n}$. Since $\log(\cdot)$ is strictly increasing on $\mathbb{R}_{>0}$, the transformed problem is equivalent to the original one.
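As a small numerical illustration of the change of variables (a numpy sketch with made-up data; the variable names are ours), note that a monomial becomes affine in $y$: for $h(x) = c\, x_{1}^{d_{1}}\cdots x_{n}^{d_{n}}$ we have $\log h(\exp(y)) = d^{\top}y + \log c$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
c = 2.5                                # monomial coefficient, c > 0
d = rng.standard_normal(n)             # monomial exponents d_1, ..., d_n
y = rng.standard_normal(n)
x = np.exp(y)                          # the change of variables x_i = exp(y_i)

h = c * np.prod(x ** d)                # direct evaluation of h(x)
affine = d @ y + np.log(c)             # log h(exp(y)) as an affine function of y

assert np.isclose(np.log(h), affine)
```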
(c.1) (10 points) To understand why the transformed problem is convex, consider the simple case: $m=p=1$, and that $$f_{0}(x) = \sum_{k=1}^{K_{0}}\alpha_{k}x_{1}^{\beta_{1,k}}...x_{n}^{\beta_{n,k}}, \qquad f_{1}(x) = \sum_{k=1}^{K_{1}}a_{k}x_{1}^{b_{1,k}}...x_{n}^{b_{n,k}}, \qquad h_{1}(x) = c x_{1}^{d_{1}} x_{2}^{d_{2}} ... x_{n}^{d_{n}}, \qquad \alpha_{k}, a_{k}, c> 0.$$ Prove that the transformed problem has the form $$\underset{y\in\mathbb{R}^{n}}{\text{minimize}}\qquad \log\left(\displaystyle\sum_{k=1}^{K_{0}}\exp(p_{k}^{\top}y + q_{k})\right)\\ \text{subject to} \qquad\log\left(\displaystyle\sum_{k=1}^{K_{1}}\exp(r_{k}^{\top}y + s_{k})\right) \leq 0,\\ \quad u^{\top}y = t.$$
In other words, derive the transformed problem data $p_{k}, q_{k}, r_{k}, s_{k}, u, t$ as function of the original problem data: $\alpha_{k}, \beta_{1,k}, ..., \beta_{n,k}, a_{k}, b_{1,k}, ..., b_{n,k}, c, d_{1}, ..., d_{n}$.
Rewriting the GP problem in the given transformed form yields $$p_{k} = \begin{pmatrix} \beta_{1,k}\\ \vdots\\ \beta_{n,k}\end{pmatrix}, \quad q_{k} = \log\alpha_{k}, \quad r_{k} = \begin{pmatrix}b_{1,k}\\ \vdots\\ b_{n,k} \end{pmatrix}, \quad s_{k} = \log a_{k}, \quad u = \begin{pmatrix}d_{1}\\ \vdots\\ d_{n}\end{pmatrix}, \quad t = -\log c.$$
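This correspondence for the objective can be checked numerically. In the following numpy sketch (random illustrative data; the names `alpha`, `beta`, `P`, `q` are ours), evaluating $\log f_{0}(\exp(y))$ directly agrees with the log-sum-exp form $\log\sum_{k}\exp(p_{k}^{\top}y + q_{k})$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K0 = 3, 4
alpha = rng.uniform(0.5, 2.0, size=K0)   # posynomial coefficients alpha_k > 0
beta = rng.standard_normal((K0, n))      # row k holds the exponents beta_{1,k}, ..., beta_{n,k}
y = rng.standard_normal(n)
x = np.exp(y)                            # change of variables x = exp(y)

# Direct evaluation: f_0(x) = sum_k alpha_k * x_1^{beta_{1,k}} ... x_n^{beta_{n,k}}
f0 = np.sum(alpha * np.prod(x ** beta, axis=1))

# Transformed data: p_k = (beta_{1,k}, ..., beta_{n,k}), q_k = log(alpha_k)
P, q = beta, np.log(alpha)
lse = np.log(np.sum(np.exp(P @ y + q)))

assert np.isclose(np.log(f0), lse)
```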
(c.2) (20 points) Prove that the transformed problem derived in part (c.1) is indeed a convex optimization problem.
(Hint: First show that log-sum-exp is a convex function using the second-order condition. Then think about operations that preserve convexity.)
That log-sum-exp is convex follows from showing that its Hessian is positive semidefinite (see p. 74 in the textbook): writing $z = \exp(y)$ and $s = \mathbf{1}^{\top}z$, the Hessian of $f(y) = \log\sum_{k}\exp(y_{k})$ is $$\nabla^{2}f(y) = \frac{1}{s}\,{\rm{diag}}(z) - \frac{1}{s^{2}}\,zz^{\top},$$ and for any $v$, $$v^{\top}\nabla^{2}f(y)\,v = \frac{\left(\sum_{k}z_{k}v_{k}^{2}\right)\left(\sum_{k}z_{k}\right) - \left(\sum_{k}z_{k}v_{k}\right)^{2}}{s^{2}} \geq 0$$ by the Cauchy–Schwarz inequality. Since the composition of a convex function with an affine map is convex, the objective function in the transformed form given in (c.1) is a convex function.
The inequality constraint in the transformed form given in (c.1) describes the zero sub-level set of a convex function, and is therefore a convex set. The equality constraint describes a hyperplane, which is also a convex set. Since the intersection of convex sets is convex, the constraint set of the transformed problem is convex.
Thus, the transformed problem in (c.1) minimizes a convex function over a convex set, and is therefore a convex optimization problem.
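The claim that the Hessian of log-sum-exp is positive semidefinite can also be checked numerically. The sketch below (our own; `lse_hessian` is an illustrative name) forms the Hessian $H = {\rm{diag}}(z)/s - zz^{\top}/s^{2}$, with $z = \exp(y)$ and $s = \mathbf{1}^{\top}z$, at random points and verifies that its eigenvalues are nonnegative.

```python
import numpy as np

def lse_hessian(y):
    # Hessian of f(y) = log(sum_k exp(y_k)):
    #   with z = exp(y) and s = 1^T z,  H = diag(z)/s - z z^T / s^2.
    z = np.exp(y)
    s = z.sum()
    return np.diag(z) / s - np.outer(z, z) / s**2

rng = np.random.default_rng(3)
for _ in range(100):
    H = lse_hessian(rng.standard_normal(5))
    # All eigenvalues should be nonnegative (up to roundoff).
    assert np.min(np.linalg.eigvalsh(H)) >= -1e-10
```

Note that $H\mathbf{1} = 0$, so the smallest eigenvalue is exactly zero: log-sum-exp is convex but not strictly convex (it is invariant to adding a constant to all components of $y$).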