The method of multipliers is an algorithm for solving convex optimization problems. Suppose we have a problem of the form

$$\begin{array}{ll} \mbox{minimize} & f(x)\\ \mbox{subject to} & Ax = b, \end{array}$$

where $f$ is convex, $x \in \mathbf{R}^n$ is the optimization variable, and $A \in \mathbf{R}^{m \times n}$ and $b \in \mathbf{R}^m$ are problem data.

To apply the method of multipliers, we first form the augmented Lagrangian

$$L_{\rho}(x,y) = f(x) + y^T(Ax - b) + (\rho/2)\|Ax-b\|^2_2.$$

The dual function associated with the augmented Lagrangian is $g_{\rho}(y) = \inf_x L_{\rho}(x,y)$. The dual function $g_{\rho}$ is concave, and its maximum value is the same as the optimal value of the original problem.

We maximize the dual function using gradient ascent with step size $\rho$. Each step of gradient ascent reduces to the $x$ and $y$ updates

$$\begin{array}{lll} x^{k+1} & := & \mathop{\rm argmin}_{x}\left(f(x) + (y^k)^T(Ax - b) + (\rho/2)\left\|Ax-b\right\|^2_2 \right) \\ y^{k+1} & := & y^{k} + \rho(Ax^{k+1}-b) \end{array}$$
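To see why the $y$-update is a gradient ascent step, note that (assuming the minimizer in the $x$-update is attained) the gradient of the dual function is the constraint residual at that minimizer:

$$\nabla g_{\rho}(y) = A x^{+}(y) - b, \qquad x^{+}(y) = \mathop{\rm argmin}_x L_{\rho}(x, y),$$

so $y^{k+1} = y^k + \rho \nabla g_{\rho}(y^k)$ is exactly a gradient ascent step on $g_{\rho}$ with step size $\rho$.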

```
import cvxpy as cp
import numpy as np

np.random.seed(1)

# Initialize data.
MAX_ITERS = 10
rho = 1.0
n = 20
m = 10
A = np.random.randn(m, n)
b = np.random.randn(m)

# Initialize problem.
x = cp.Variable(n)
f = cp.norm(x, 1)

# Solve with CVXPY.
cp.Problem(cp.Minimize(f), [A @ x == b]).solve()
print("Optimal value from CVXPY", f.value)

# Solve with method of multipliers.
resid = A @ x - b
y = cp.Parameter(m)
y.value = np.zeros(m)
aug_lagr = f + y @ resid + (rho / 2) * cp.sum_squares(resid)
for t in range(MAX_ITERS):
    # x-update: minimize the augmented Lagrangian over x.
    cp.Problem(cp.Minimize(aug_lagr)).solve()
    # y-update: gradient ascent step with step size rho.
    y.value += rho * resid.value
print("Optimal value from method of multipliers", f.value)
```
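As a sanity check on the updates themselves, here is a minimal NumPy-only sketch of the same iteration for the illustrative choice $f(x) = (1/2)\|x\|_2^2$ (not the $\ell_1$ objective above), where the $x$-update has a closed form: setting $\nabla_x L_{\rho} = 0$ gives $(I + \rho A^TA)x = A^T(\rho b - y)$. All names here are for illustration only.

```python
import numpy as np

np.random.seed(1)
m, n, rho = 10, 20, 1.0
A = np.random.randn(m, n)
b = np.random.randn(m)

# Method of multipliers for f(x) = (1/2)||x||_2^2, whose x-update
# has the closed form (I + rho*A^T A) x = A^T (rho*b - y).
x = np.zeros(n)
y = np.zeros(m)
M = np.eye(n) + rho * A.T @ A  # system matrix for the x-update
for _ in range(50):
    # x-update: minimize the augmented Lagrangian over x in closed form.
    x = np.linalg.solve(M, A.T @ (rho * b - y))
    # y-update: gradient ascent step on the dual with step size rho.
    y += rho * (A @ x - b)

print("final residual norm:", np.linalg.norm(A @ x - b))
```

For the $\ell_1$ objective used in the notebook there is no such closed form, which is why the $x$-update there is left to CVXPY.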