from traitlets.config.manager import BaseJSONConfigManager
import jupyter_core
#path = "/home/damian/miniconda3/envs/rise_latest/etc/jupyter/nbconfig"
path = "/Users/i.oseledets/anaconda2/envs/teaching/etc/jupyter/nbconfig"
cm = BaseJSONConfigManager(config_dir=path)
cm.update("livereveal", {
    "theme": "sky",
    "transition": "zoom",
    "start_slideshow_at": "selected",
    "scroll": True
})
Today we will talk about matrix factorizations
The basic matrix factorizations in numerical linear algebra:
We already introduced the QR decomposition some time ago, but now we are going to discuss it in more detail.
In numerical linear algebra we need to solve different tasks, for example:
In order to do this, we represent the matrix as a sum and/or a product of matrices with simpler structure,
so that the problems above can be solved faster and in a more numerically stable way.
What is a simpler structure?
We already encountered several classes of matrices with structure.
For dense matrices the most important classes are
For sparse matrices the sparse constraints are often included in the factorizations.
For Toeplitz matrices an important class is that of matrices with low displacement rank, a concept built on low-rank matrices.
The class of low-rank matrices and block low-rank matrices is everywhere.
The plan for today's lecture is to discuss the decompositions one-by-one and point out:
Any nonsingular matrix can be factored as
$$A = PLU,$$ where $P$ is a permutation matrix, $L$ is a lower triangular matrix, and $U$ is an upper triangular matrix.
The main goal of the LU decomposition is to solve linear systems: assuming for simplicity that no pivoting is needed ($P = I$),

$$A^{-1} f = (LU)^{-1} f = U^{-1} L^{-1} f,$$

and this reduces to the solution of two linear systems,

$$L y = f, \qquad U x = y,$$

with lower and upper triangular matrices, respectively.
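This factor-once, solve-twice pattern can be sketched with SciPy's `lu_factor`/`lu_solve` (the random system below is purely illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
f = rng.standard_normal(5)

# factor A = P L U once: O(n^3) work
lu, piv = lu_factor(A)
# reuse the factorization: two triangular solves, O(n^2) each
x = lu_solve((lu, piv), f)

print(np.linalg.norm(A @ x - f))
```

The factorization can be reused for many right-hand sides, which is the main practical payoff.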
If the matrix is Hermitian positive definite, i.e.

$$A = A^*, \quad (Ax, x) > 0, \quad x \ne 0,$$

then it can be factored as

$$A = RR^*,$$

where $R$ is lower triangular. This is the Cholesky decomposition.
We will need this for the QR decomposition.
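In NumPy this factorization is `np.linalg.cholesky`, which returns the lower triangular factor (the positive definite matrix below is constructed just for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)   # symmetric positive definite by construction

R = np.linalg.cholesky(A)     # lower triangular factor, A = R R^*
print(np.linalg.norm(A - R @ R.T))
```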
The next decomposition: QR decomposition. Again from the name it is clear that a matrix is represented as a product
$$A = QR,$$ where $Q$ is a column-orthogonal (unitary) matrix and $R$ is upper triangular.

The matrix sizes: $Q$ is $n \times m$ and $R$ is $m \times m$ if $n \ge m$. See our poster for a visualization of the QR decomposition.
QR decomposition is defined for any rectangular matrix.
This decomposition plays a crucial role in many problems:
Suppose we need to solve
$$\|Ax - f\|_2 \to \min_x,$$ where $A$ is $n \times m$, $n \ge m$.
Then we factorize
$$A = QR,$$ and use the formula for the pseudo-inverse in the case when $A$ has full column rank:

$$x = A^{\dagger} f = (A^* A)^{-1} A^* f = \big((QR)^* (QR)\big)^{-1} (QR)^* f = (R^* Q^* Q R)^{-1} R^* Q^* f = R^{-1} Q^* f,$$

thus $x$ can be recovered from

$$R x = Q^* f.$$

Note that this is a square system of linear equations with an upper triangular matrix. What is the complexity of solving this system?
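A sketch of this least-squares recipe (`solve_triangular` does the back substitution with the upper triangular $R$; the data is random, for illustration only):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
n, m = 100, 5
A = rng.standard_normal((n, m))
f = rng.standard_normal(n)

Q, R = np.linalg.qr(A)            # thin QR: Q is n x m, R is m x m
x = solve_triangular(R, Q.T @ f)  # back substitution: O(m^2)

# sanity check against the built-in least-squares solver
x_ref = np.linalg.lstsq(A, f, rcond=None)[0]
print(np.linalg.norm(x - x_ref))
```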
One of the efficient ways to solve strongly overdetermined ($n \gg m$) systems of linear equations is the Kaczmarz method.
Instead of solving all equations at once, pick one row at random; the corresponding equation reads

$$a_i^\top x = f_i,$$

and given an approximation $x_k$, try to find $x_{k+1}$ as

$$x_{k+1} = \arg\min_x \|x - x_k\|, \quad \text{s.t.} \quad a_i^\top x = f_i.$$

**Theorem.** Every rectangular $n \times m$ matrix has a QR decomposition.
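A minimal sketch of the randomized Kaczmarz projection step described above (the uniform row sampling and the synthetic consistent system are assumptions of this demo; a well-known variant samples rows proportionally to their norms):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 20
A = rng.standard_normal((n, m))
x_true = rng.standard_normal(m)
f = A @ x_true                   # consistent, strongly overdetermined system

x = np.zeros(m)
for k in range(20000):
    i = rng.integers(n)          # pick one equation at random
    a = A[i]
    # projection of x onto the hyperplane a^T x = f_i (the argmin above)
    x += (f[i] - a @ x) / (a @ a) * a

print(np.linalg.norm(x - x_true))
```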
There are several ways to prove it and compute it:
If we have the representation of the form
$$A = QR,$$ then $A^* A = (QR)^* (QR) = R^* (Q^* Q) R = R^* R$; the matrix $A^* A$ is called the Gram matrix, and its elements are scalar products of the columns of $A$.
Assume that A has full column rank. Then, it is simple to show that A∗A is positive definite:
$$(A^* A y, y) = (Ay, Ay) = \|Ay\|^2 > 0, \quad y \ne 0.$$ Therefore, the Cholesky factorization $A^* A = R^* R$ exists.
Then the matrix $AR^{-1}$ has orthonormal columns:

$$(AR^{-1})^* (AR^{-1}) = R^{-*} A^* A R^{-1} = R^{-*} R^* R R^{-1} = I.$$

When an $n \times m$ matrix does not have full column rank, it is said to be rank-deficient.
The QR decomposition, however, also exists.
For any rank-deficient matrix there is a sequence of full-column rank matrices Ak such that Ak→A (why?).
Each Ak can be decomposed as Ak=QkRk.
The set of all unitary matrices is compact, thus there exists a convergent subsequence $Q_{n_k} \to Q$ (why?), and $Q_{n_k}^* A_{n_k} \to Q^* A = R$, which is triangular.
So, the simplest way to compute QR decomposition is then
$$A^* A = R^* R,$$ and

$$Q = A R^{-1}.$$

It is a bad idea for numerical stability. Let us do some demo (for a submatrix of the Hilbert matrix).
import numpy as np
n = 20
r = 4
# ill-conditioned n x r matrix of Hilbert type
a = [[1.0 / (i + j + 0.5) for i in range(r)] for j in range(n)]
a = np.array(a)
q, Rmat = np.linalg.qr(a)
e = np.eye(r)
print('Built-in QR orth', np.linalg.norm(np.dot(q.T, q) - e))
# QR via Cholesky of the Gram matrix: a^T a = L L^T, so R = L^T and Q = a R^{-1}
gram_matrix = a.T.dot(a)
Rmat1 = np.linalg.cholesky(gram_matrix)   # lower triangular factor L
q1 = np.dot(a, np.linalg.inv(Rmat1.T))    # Q = a (L^T)^{-1}
print('Via Cholesky:', np.linalg.norm(np.dot(q1.T, q1) - e))
Gram-Schmidt:

$$q_1 := a_1 / \|a_1\|, \quad q_2 := a_2 - (a_2, q_1) q_1, \quad q_2 := q_2 / \|q_2\|, \quad \ldots$$

Note that the transformation from $Q$ to $A$ has triangular structure, since from the $k$-th vector we subtract only the previous ones. It follows from the fact that the product of triangular matrices is a triangular matrix.

In floating-point arithmetic, the columns of $Q$ computed this way can quickly lose orthogonality to each other. This is called loss of orthogonality.

A more stable variant is the modified Gram-Schmidt algorithm: instead of projecting the original vector $a_k$ onto all previous $q_j$ at once, we do it step-by-step. First we set $q_k := a_k$ and orthogonalize sequentially:

$$q_k := q_k - (q_k, q_1) q_1, \quad q_k := q_k - (q_k, q_2) q_2, \quad \ldots$$

In exact arithmetic, it is the same. In floating point it is absolutely different!

Note that the complexity is $\mathcal{O}(nm^2)$ operations.
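Both variants can be sketched in one routine and run on the same ill-conditioned Hilbert-type matrix as in the demo above (the function name and interface here are made up for illustration):

```python
import numpy as np

def gram_schmidt(A, modified=True):
    """QR by Gram-Schmidt; modified=False gives the classical variant."""
    n, m = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((m, m))
    for k in range(m):
        for j in range(k):
            # classical GS projects the original column a_k,
            # modified GS projects the current, partially orthogonalized q_k
            v = Q[:, k] if modified else A[:, k]
            R[j, k] = Q[:, j] @ v
            Q[:, k] -= R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
    return Q, R

A = np.array([[1.0 / (i + j + 0.5) for i in range(4)] for j in range(20)])
for modified in (False, True):
    Q, R = gram_schmidt(A, modified)
    label = 'modified ' if modified else 'classical'
    print(label, np.linalg.norm(Q.T @ Q - np.eye(4)))
```

The only difference between the two branches is which vector the projection coefficient is computed from, yet the orthogonality of the computed $Q$ differs noticeably on ill-conditioned inputs.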
If $A = QR$, then

$$R = Q^* A.$$
We will do it column-by-column.
First, we find a Householder matrix $H_1 = (I - 2uu^\top)$ such that (we illustrate on a $4 \times 3$ matrix)

$$H_1 A = \begin{bmatrix} * & * & * \\ 0 & * & * \\ 0 & * & * \\ 0 & * & * \end{bmatrix}.$$

Then,

$$H_2 H_1 A = \begin{bmatrix} * & * & * \\ 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & * \end{bmatrix}.$$

Finally,

$$H_3 H_2 H_1 A = \begin{bmatrix} * & * & * \\ 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{bmatrix} = R,$$

so $A = QR$ with $Q = H_1 H_2 H_3$ (Householder matrices are their own inverses).
You can try to implement it yourself, it is simple.
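A sketch of such an implementation: unblocked, accumulating $Q$ explicitly, following the column-by-column elimination above (the test matrix is arbitrary):

```python
import numpy as np

def householder_qr(A):
    """Unblocked Householder QR: returns Q (n x n) and R (n x m), A = Q R."""
    n, m = A.shape
    R = A.astype(float).copy()
    Q = np.eye(n)
    for k in range(m):
        x = R[k:, k]
        normx = np.linalg.norm(x)
        if normx == 0.0:
            continue
        u = x.copy()
        # sign choice avoids cancellation when forming u
        u[0] += normx if x[0] >= 0 else -normx
        u /= np.linalg.norm(u)
        # apply H_k = I - 2 u u^T: zeroes column k below the diagonal
        R[k:, :] -= 2.0 * np.outer(u, u @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ u, u)
    return Q, R

A = np.arange(12, dtype=float).reshape(4, 3) + np.eye(4, 3)
Q, R = householder_qr(A)
print(np.linalg.norm(Q @ R - A), np.linalg.norm(np.tril(R, -1)))
```

Note that each reflection is applied as a rank-one update rather than by forming $H_k$ explicitly, which keeps the cost at $\mathcal{O}(nm)$ per column.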
In reality, since this is a dense matrix factorization, you should implement the algorithm in terms of blocks (why?).
Instead of using a single Householder transformation, we use a block Householder transformation of the form

$$H = I - 2UU^*,$$

where $U^* U = I$.
Rank-Revealing QR Factorizations and the Singular Value Decomposition, Y. P. Hong; C.-T. Pan
It is done via so-called rank-revealing factorization.
It is based on the representation

$$A P = Q R,$$

where $P$ is a permutation matrix (it permutes columns), and $R$ has the block form

$$R = \begin{bmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{bmatrix}.$$

The goal is to find $P$ such that the norm of $R_{22}$ is small, so you can find the numerical rank by looking at it.
An estimate is $\sigma_{r+1} \le \|R_{22}\|_2$ (check why).
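SciPy exposes column-pivoted QR via `scipy.linalg.qr(..., pivoting=True)`; on a synthetic matrix of exact rank 5 the trailing diagonal entries of $R$ collapse, revealing the numerical rank:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # rank 5

Q, R, perm = qr(A, pivoting=True)   # column-pivoted QR: A[:, perm] = Q R
d = np.abs(np.diag(R))
print(d[:8])                        # |R_kk| drops sharply after the 5th entry
```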
LU and QR decompositions can be computed using direct methods in finite amount of operations.
What about Schur form and SVD?
They cannot be computed by direct methods (why?); they can only be computed by iterative methods.

Nevertheless, thanks to fast convergence, these iterative methods still have $\mathcal{O}(n^3)$ complexity in floating-point arithmetic.
Recall that every matrix can be written in the Schur form
$$A = QTQ^*,$$ with upper triangular $T$ and unitary $Q$, and this decomposition gives the eigenvalues of the matrix (they are on the diagonal of $T$).
The first and the main algorithm for computing the Schur form is the QR algorithm.
The QR algorithm was independently proposed in 1961 by Kublanovskaya and Francis.
Do not confuse the QR algorithm with the QR decomposition!

The QR decomposition is a representation of a matrix, whereas the QR algorithm uses the QR decomposition to compute the eigenvalues!
Consider the equation
$$A = QTQ^*,$$ and rewrite it in the form

$$QT = AQ.$$

On the left we can see the QR factorization of the matrix $AQ$.
We can use this to derive a fixed-point iteration for the Schur form, also known as the QR algorithm.
We can write down the iterative process
$$Q_{k+1} R_{k+1} = A Q_k, \qquad Q_{k+1}^* A = R_{k+1} Q_k^*.$$

Introduce

$$A_k = Q_k^* A Q_k = Q_k^* Q_{k+1} R_{k+1} = \widehat{Q}_k R_{k+1}, \qquad \widehat{Q}_k = Q_k^* Q_{k+1},$$

and the new approximation reads

$$A_{k+1} = Q_{k+1}^* A Q_{k+1} = R_{k+1} Q_k^* Q_{k+1} = R_{k+1} \widehat{Q}_k,$$

where we used $Q_{k+1}^* A = R_{k+1} Q_k^*$. So we arrive at the standard form of the QR algorithm.

The final formulas are then written in the classical QRRQ-form: factorize

$$A_k = \widehat{Q}_k R_{k+1},$$

then multiply the factors in reverse order,

$$A_{k+1} = R_{k+1} \widehat{Q}_k.$$
Iterate until Ak is triangular enough (e.g. norm of subdiagonal part is small enough).
Statement
Matrices $A_k$ are unitarily similar to $A$:

$$A_k = Q_{k-1}^* A_{k-1} Q_{k-1} = (Q_1 \ldots Q_{k-1})^* A (Q_1 \ldots Q_{k-1}),$$

and a product of unitary matrices is a unitary matrix.
The complexity of each step is $\mathcal{O}(n^3)$, if a general QR decomposition is used.

Our hope is that $A_k$ will be very close to a triangular matrix for sufficiently large $k$.
import numpy as np
n = 4
a = np.array([[1.0 / (i + j + 0.5) for i in range(n)] for j in range(n)])
niters = 200
# one QR-algorithm step: factorize A_k = Q R, then set A_{k+1} = R Q
for k in range(niters):
    q, rmat = np.linalg.qr(a)
    a = rmat.dot(q)
print('Leading 3x3 block of a:')
print(a[:3, :3])
The convergence of the QR algorithm is from the largest eigenvalues to the smallest.
At least 2-3 iterations are needed for each eigenvalue.
Each step is one QR factorization and one matrix-by-matrix product, which results in $\mathcal{O}(n^3)$ complexity per step.
Q: does it mean O(n4) complexity totally?
A: fortunately, not.
We can speed up the QR algorithm by using shifts, since $A_k - \lambda I$ has the same Schur vectors as $A_k$.
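As a rough sketch, one simple shift choice is the last diagonal entry of $A_k$ (the symmetric random matrix below is an assumption of this demo; practical implementations use more robust shifts and deflation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A + A.T                        # symmetric, so the eigenvalues are real

Ak = A.copy()
for k in range(30):
    lam = Ak[-1, -1]               # simple shift: the last diagonal entry
    Q, R = np.linalg.qr(Ak - lam * np.eye(4))
    Ak = R @ Q + lam * np.eye(4)   # still unitarily similar to A

print(np.diag(Ak))
print(np.sort(np.linalg.eigvalsh(A)))
```

Each shifted step is still a similarity transformation, so the spectrum is preserved while convergence of the trailing entry is typically dramatically accelerated.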
We will discuss these details in the next lecture
Last but not least: the singular value decomposition of matrices.
$$A = U \Sigma V^*.$$

We can compute it via the eigendecompositions of

$$A^* A = V \Sigma^2 V^*$$

and/or

$$A A^* = U \Sigma^2 U^*$$

with the QR algorithm, but it is a bad idea (cf. the Gram matrices above).
We will discuss methods for computing SVD later.
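To see why the Gram-matrix route is bad, compare singular values computed directly with those obtained from eigenvalues of $A^* A$ on an ill-conditioned Hilbert-type matrix (same family as in the demos above):

```python
import numpy as np

# ill-conditioned 8 x 8 Hilbert-type matrix
A = np.array([[1.0 / (i + j + 0.5) for i in range(8)] for j in range(8)])

s_direct = np.linalg.svd(A, compute_uv=False)
# singular values via the Gram matrix: this squares the condition number
eigs = np.linalg.eigvalsh(A.T @ A)[::-1]
s_gram = np.sqrt(np.abs(eigs))   # abs: tiny eigenvalues may come out negative

print(s_direct)   # small singular values are computed accurately
print(s_gram)     # small singular values are ruined by the squaring
```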
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/custom.css", "r").read()
return HTML(styles)
css_styling()