Practice session 2

Fast matvecs

FFT

  • Create the Fourier matrix $F$ using the np.meshgrid function
  • Generate a random vector $x$
  • Compare timings of FFT(x) and the direct multiplication $Fx$ by plotting the times for different matrix sizes
  • Make sure that you get $\mathcal{O}(n^2)$ and $\mathcal{O}(n \log n)$ complexity by finding the slope of each line on a log-log scale
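The steps above can be sketched as follows (plotting is omitted; the recorded times can be passed to matplotlib in the usual way, and the concrete sizes are just an illustration):

```python
import numpy as np
import time

def fourier_matrix(n):
    # F[j, k] = exp(-2*pi*i*j*k / n), assembled with np.meshgrid
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

sizes = [256, 512, 1024]
t_direct, t_fft = [], []
for n in sizes:
    F = fourier_matrix(n)
    x = np.random.rand(n)
    t0 = time.perf_counter(); y_direct = F @ x;       t_direct.append(time.perf_counter() - t0)
    t0 = time.perf_counter(); y_fft = np.fft.fft(x);  t_fft.append(time.perf_counter() - t0)

# The two results should agree up to rounding errors.
# The slope on a log-log plot can be estimated with
#   np.polyfit(np.log(sizes), np.log(t_direct), 1)[0]
# and should be roughly 2 for the direct product and close to 1 for the FFT
# (timings at small sizes are noisy, so expect some deviation).
```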
In [ ]:
 

Circulant matvec

  • Generate a random circulant matrix $C$
  • Implement fast multiplication by $C$ using the fast Fourier transform
  • Plot the times of direct multiplication by $C$ and of your fast implementation
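One possible sketch of the fast matvec (timing and plotting left out): a circulant matrix is diagonalized by the Fourier matrix, so multiplying by it is a circular convolution, which the FFT computes in $\mathcal{O}(n \log n)$.

```python
import numpy as np

n = 512
c = np.random.rand(n)                                   # first column defines C
C = np.column_stack([np.roll(c, j) for j in range(n)])  # C[i, j] = c[(i - j) % n]

def circulant_matvec(c, x):
    # Cx is the circular convolution of c and x:
    # Cx = ifft(fft(c) * fft(x))
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

x = np.random.rand(n)
y_fast = circulant_matvec(c, x)
y_direct = C @ x
```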
In [ ]:
 

Toeplitz matvec

  • Do the tasks from the Circulant matvec part, but for a randomly generated Toeplitz matrix
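A sketch of the standard trick (the sizes are illustrative): an $n\times n$ Toeplitz matrix can be embedded into a circulant matrix of size $2n-1$, after which the circulant FFT-based matvec applies.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    # T is the n x n Toeplitz matrix with first column c and first row r
    # (r[0] must equal c[0]). Embed it into a circulant matrix of size
    # 2n - 1 whose first column is [c, r[n-1], ..., r[1]], pad x with
    # zeros, multiply via FFT, and keep the first n entries.
    n = len(c)
    g = np.concatenate([c, r[1:][::-1]])
    x_pad = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(g) * np.fft.fft(x_pad))
    return y[:n].real

n = 300
c = np.random.rand(n)
r = np.random.rand(n); r[0] = c[0]   # first row must agree with first column
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])    # dense Toeplitz matrix for comparison
x = np.random.rand(n)
y_fast = toeplitz_matvec(c, r, x)
```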
In [ ]:
 

Singular Value Decomposition (SVD)

Singular value decomposition is a representation of $A$ in the following form $$ A = U\Sigma V^*, $$ where $U$ and $V$ are unitary matrices and $\Sigma = \text{diag}(\sigma_1, \dots, \sigma_r, 0, \dots, 0)$ with $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_r > 0$ is a matrix with the singular values on the diagonal, where $r$ is the rank of the matrix $A$. In practice we often have matrices of full rank, but many of their singular values are small enough for the matrix to be considered approximately low-rank. Next week you will learn the mathematical formulation of this property.

For now, we will just play around with the SVD.

  • Assemble the Hilbert matrix $A$ using the np.meshgrid function
  • Compute the singular value decomposition of the $n\times n$ Hilbert matrix with np.linalg.svd
  • Plot the singular values for different $n$ on the same plot, in logarithmic scale
  • What is the approximate rank $r_\text{machine}$ of the Hilbert matrix at machine precision for different $n$?
  • Plot $r_\text{machine}$ as a function of $n$
  • Find a low-rank approximation of $A$ by truncating the singular values that are at machine precision. Measure the norm of the difference between the initial matrix and the low-rank approximation
  • Plot the singular values of a randomly generated matrix. Can it be well approximated by a low-rank matrix?
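A minimal sketch of the first steps, for a single $n$ (the machine-precision threshold `eps * s[0]` is one reasonable choice for defining the numerical rank; plotting is again omitted):

```python
import numpy as np

def hilbert(n):
    # Hilbert matrix: A[i, j] = 1 / (i + j + 1), assembled with np.meshgrid
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return 1.0 / (i + j + 1)

n = 100
A = hilbert(n)
U, s, Vt = np.linalg.svd(A)

# Numerical rank: singular values above machine precision relative to s[0]
r_machine = int(np.sum(s > np.finfo(float).eps * s[0]))

# Truncated SVD: keep the first r_machine singular triplets
A_r = (U[:, :r_machine] * s[:r_machine]) @ Vt[:r_machine]
err = np.linalg.norm(A - A_r)   # should be on the order of machine precision
```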

Profiling and debugging

In this section you will learn how to use profiling.

profiling

Your home assignment includes a task on implementing the Gram-Schmidt algorithm. If you have already completed it, try to improve your implementation with the help of a line profiler. If you have not done this task yet, try to do it here. Instructions on how to use the line profiler are below.

In [4]:
def gram_schmidt(V):
    # your code goes here
    # V is the matrix whose columns should be orthogonalized
    # U is the matrix with the orthogonalized columns
    
    return U
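If you just want something to profile while your own version is in progress, here is one possible baseline (a modified Gram-Schmidt; your assignment solution may look quite different):

```python
import numpy as np

def gram_schmidt(V):
    # Modified Gram-Schmidt: orthonormalize the columns of V.
    # The explicit loops make it a convenient target for line_profiler.
    n, m = V.shape
    U = np.zeros((n, m))
    for j in range(m):
        u = V[:, j].astype(float)
        for i in range(j):
            # subtract the projection onto the already-built basis vector
            u = u - np.dot(U[:, i], u) * U[:, i]
        U[:, j] = u / np.linalg.norm(u)
    return U

V = np.random.rand(50, 10)
Q = gram_schmidt(V)
```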

How fast is your code? Let's use the line_profiler package. It is not part of the Python standard library and is not included in the Anaconda distribution. To install it, run:

conda install line_profiler

If installation is successful you can add profiling functionality into your notebook with help of IPython magic. Run

%load_ext line_profiler

somewhere in the notebook. Now you can profile any function with the following code:

%lprun -f your_func your_func(*args, **kwargs)

Task: Run the line profiler on your algorithm. What is the most time-consuming operation in the code? Verify that it stays the same for different sizes of the initial matrix.

How can you improve your code? Try to implement your optimization and run the line profiler again to check that the bottleneck is gone.

debugging

Try to run the following code. Why does it fail? Use %pdb magic to debug that code and fix the error.

In [19]:
import numpy as np
In [68]:
def mul_col(shape):
    mat = np.random.rand(*shape)
    vec = np.random.rand(shape[0])
    res = mat.dot(vec)
    return res