Given $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$, $c\in\mathbb{R}^{n}$, rewrite the LP $$\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\quad c^{\top}x\\ \text{subject to}\quad Ax \preceq b,$$ as the SDP $$\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\quad c^{\top}x\\ \text{subject to}\quad F(x) \succeq 0,$$ that is, write the matrix $F(x)$ appearing in the LMI constraint of the SDP in terms of the data of the LP.
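A fact worth keeping in mind for this part (a sanity-check lemma, not the requested derivation itself): a diagonal matrix is positive semidefinite exactly when all of its diagonal entries are nonnegative. The sketch below spot-checks this equivalence numerically on random vectors; all data here is randomly generated for illustration only.

```python
import numpy as np

# Lemma: diag(v) is PSD  <=>  v >= 0 componentwise.
# Numerical spot-check on random vectors (illustrative only).
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=5)
    M = np.diag(v)
    psd = bool(np.all(np.linalg.eigvalsh(M) >= -1e-12))
    assert psd == bool(np.all(v >= -1e-12))
```

Relating the componentwise constraint $Ax \preceq b$ to an LMI of the form $F(x) \succeq 0$ is then a matter of applying this lemma to a suitable vector built from the LP data.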
Given $f\in\mathbb{R}^{n}$, $A_{i}\in\mathbb{R}^{(n_{i}-1)\times n}$, $b_{i}\in\mathbb{R}^{n_{i}-1}$, $c_{i}\in\mathbb{R}^{n}$, $d_{i}\in\mathbb{R}$, rewrite the SOCP $$\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\qquad f^{\top}x\\ \text{subject to}\quad \lVert A_{i}x + b_{i}\rVert_{2} \leq c_{i}^{\top}x + d_{i}, \quad i=1,...,m,$$ as the SDP $$\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\quad f^{\top}x\\ \text{subject to}\quad F(x) \succeq 0,$$ that is, write the matrix $F(x)$ appearing in the LMI constraint of the SDP in terms of the data of the SOCP.
(Hint: Use the Schur Complement Lemma in Lecture 8, p. 10)
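The fact behind the hint can be spot-checked numerically: for $t \geq 0$ and $u \in \mathbb{R}^{k}$, the Schur Complement Lemma gives $\begin{bmatrix} tI & u \\ u^{\top} & t \end{bmatrix} \succeq 0 \iff \lVert u \rVert_{2} \leq t$. The sketch below verifies this equivalence on random data (it illustrates the lemma only; assembling $F(x)$ from the SOCP data is left to your derivation).

```python
import numpy as np

# Schur-complement fact: [[t*I, u], [u^T, t]] is PSD  <=>  ||u||_2 <= t
# (for t >= 0).  Numerical spot-check on random (u, t) pairs.
rng = np.random.default_rng(1)
k = 4
for _ in range(200):
    u = rng.normal(size=k)
    t = 2 * abs(rng.normal())
    M = np.block([[t * np.eye(k), u[:, None]],
                  [u[None, :],    np.array([[t]])]])
    psd = bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))
    assert psd == (np.linalg.norm(u) <= t + 1e-9)
```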
The standard form of GP, given by $$ \underset{x\in\mathbb{R}^{n}_{>0}}{\text{minimize}}\qquad f_{0}(x)\\ \text{subject to}\quad f_{i}(x) \leq 1, \quad i=1,...,m, \\ \qquad\qquad h_{j}(x) = 1, \quad j=1,...,p,$$ where $f_{0}, f_{1}, ..., f_{m}$ are posynomials, and $h_{1}, ..., h_{p}$ are monomials, does not look like a convex optimization problem. But we can transform it into a convex optimization problem by the logarithmic change of variables $y_{i} = \log x_{i}$ (so $x_{i} = \exp(y_{i})$), followed by applying $\log(.)$ to the objective function and to both sides of the constraints: $$ \underset{y\in\mathbb{R}^{n}}{\text{minimize}}\qquad \log\left(f_{0}(\exp(y))\right)\\ \text{subject to}\quad \log(f_{i}(\exp(y))) \leq 0, \quad i=1,...,m, \\ \qquad\qquad \log(h_{j}(\exp(y))) = 0, \quad j=1,...,p,$$ where $\exp(y)$ denotes the elementwise exponential of the vector $y\in\mathbb{R}^{n}$. Since $\log(.)$ is strictly increasing and $y \mapsto \exp(y)$ is a bijection onto $\mathbb{R}^{n}_{>0}$, the transformed problem is equivalent to the original one.
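To see the transform at work on the simplest case, note that a single monomial $h(x) = c\, x_{1}^{d_{1}} \cdots x_{n}^{d_{n}}$ becomes the affine function $\log h(\exp(y)) = \log c + d^{\top}y$. The sketch below checks this identity numerically on arbitrary illustrative data ($c$ and $d$ are made up here, not taken from the problem).

```python
import numpy as np

# A monomial h(x) = c * x1^d1 * ... * xn^dn becomes affine in y = log(x):
#   log h(exp(y)) = log(c) + d^T y.
# Numeric check with arbitrary illustrative data.
rng = np.random.default_rng(2)
n = 3
c, d = 2.5, rng.normal(size=n)
y = rng.normal(size=n)
x = np.exp(y)
lhs = np.log(c * np.prod(x ** d))
rhs = np.log(c) + d @ y
assert np.isclose(lhs, rhs)
```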
(c.1) (10 points) To understand why the transformed problem is convex, consider the simple case: $m=p=1$, and that $$f_{0}(x) = \sum_{k=1}^{K_{0}}\alpha_{k}x_{1}^{\beta_{1,k}}...x_{n}^{\beta_{n,k}}, \qquad f_{1}(x) = \sum_{k=1}^{K_{1}}a_{k}x_{1}^{b_{1,k}}...x_{n}^{b_{n,k}}, \qquad h_{1}(x) = c x_{1}^{d_{1}} x_{2}^{d_{2}} ... x_{n}^{d_{n}}, \qquad \alpha_{k}, a_{k}, c> 0.$$ Prove that the transformed problem has the form $$\underset{y\in\mathbb{R}^{n}}{\text{minimize}}\qquad \log\left(\displaystyle\sum_{k=1}^{K_{0}}\exp(p_{k}^{\top}y + q_{k})\right)\\ \text{subject to} \qquad\log\left(\displaystyle\sum_{k=1}^{K_{1}}\exp(r_{k}^{\top}y + s_{k})\right) \leq 0,\\ \quad u^{\top}y = t.$$
In other words, derive the transformed problem data $p_{k}, q_{k}, r_{k}, s_{k}, u, t$ as functions of the original problem data: $\alpha_{k}, \beta_{1,k}, ..., \beta_{n,k}, a_{k}, b_{1,k}, ..., b_{n,k}, c, d_{1}, ..., d_{n}$.
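Once you have derived the identification, it can be checked numerically. The sketch below assumes one natural candidate, $p_{k} = (\beta_{1,k}, ..., \beta_{n,k})^{\top}$ and $q_{k} = \log \alpha_{k}$, and verifies that $f_{0}(\exp(y)) = \sum_{k} \exp(p_{k}^{\top}y + q_{k})$ on random data; treat it as a template for checking your own answer rather than as the answer itself.

```python
import numpy as np

# Candidate identification for the objective (verify against your derivation):
#   p_k = (beta_{1,k}, ..., beta_{n,k}),  q_k = log(alpha_k),
# so that f0(exp(y)) = sum_k exp(p_k^T y + q_k).
rng = np.random.default_rng(3)
n, K0 = 3, 4
alpha = np.exp(rng.normal(size=K0))   # alpha_k > 0
beta = rng.normal(size=(K0, n))       # row k holds the exponents of term k
y = rng.normal(size=n)
x = np.exp(y)
f0 = np.sum(alpha * np.prod(x ** beta, axis=1))   # posynomial at x = exp(y)
lse_terms = np.exp(beta @ y + np.log(alpha))      # exp(p_k^T y + q_k)
assert np.isclose(f0, np.sum(lse_terms))
```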
(c.2) (20 points) Prove that the transformed problem derived in part (c.1) is indeed a convex optimization problem.
(Hint: First show that log-sum-exp is a convex function using the second-order condition. Then think about operations that preserve convexity.)
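Before carrying out the Hessian computation suggested by the hint, a quick numerical spot-check can build confidence: convexity of $f(y) = \log\sum_{k}\exp(y_{k})$ implies the midpoint inequality $f\!\left(\frac{y+z}{2}\right) \leq \frac{f(y)+f(z)}{2}$. The sketch below tests this on random pairs; it is evidence, not a proof, and the actual proof should go through the second-order condition as hinted.

```python
import numpy as np

# Midpoint-convexity spot-check for log-sum-exp:
#   f((y+z)/2) <= (f(y) + f(z)) / 2  for random y, z.
def lse(y):
    m = np.max(y)                      # shift for numerical stability
    return m + np.log(np.sum(np.exp(y - m)))

rng = np.random.default_rng(4)
for _ in range(100):
    y, z = rng.normal(size=5), rng.normal(size=5)
    assert lse((y + z) / 2) <= (lse(y) + lse(z)) / 2 + 1e-12
```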