Note: use np.reshape with order='F', where 'F' stands for the Fortran (column-major) ordering.
(2 pts) What is the complexity of a naive computation of $(A \otimes B)x$? Show how it can be reduced.
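For intuition only (this is not the required derivation), here is a small sketch of the reshape trick behind the note above: with column-major (Fortran) vectorization, $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^\top)$, so the matvec never forms the full Kronecker matrix. The sizes below are arbitrary.
import numpy as np

# Sketch: Kronecker matvec via the identity (A kron B) vec(X) = vec(B X A^T),
# where vec() stacks columns, i.e. Fortran ("F") ordering.
n, m = 4, 3
A = np.random.randn(n, n)
B = np.random.randn(m, m)
x = np.random.randn(n * m)

naive = np.kron(A, B) @ x                       # forms an (nm) x (nm) matrix

X = np.reshape(x, (m, n), order='F')            # unfold x column by column
fast = np.reshape(B @ X @ A.T, -1, order='F')   # only O(nm(n + m)) work

assert np.allclose(naive, fast)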
(3 pts) Let matrices $A$ and $B$ have eigendecompositions $A = S_A \Lambda_A S_A^{-1}$ and $B = S_B \Lambda_B S_B^{-1}$. Find the eigenvectors and eigenvalues of the matrix $A \otimes I + I \otimes B$.
(10 pts) Let $A = \mathrm{diag}\left(\frac{1}{1000}, \frac{2}{1000}, \dots, \frac{999}{1000}, 1, 1000\right)$. Estimate analytically the number of iterations required to solve a linear system with $A$ to relative accuracy $10^{-4}$ using
# Your solution is here
Given a connected graph G and its corresponding graph Laplacian matrix $L = D - A$ with eigenvalues $0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n$, where D is its degree matrix and A is its adjacency matrix, the Fiedler vector is an eigenvector corresponding to the second smallest eigenvalue $\lambda_2$ of L. The Fiedler vector can be used for graph partitioning: positive values correspond to one part of the graph and negative values to the other.
To find the Fiedler vector we will use the inverse iteration with adaptive shifts (Rayleigh quotient iteration).
(5 pts) Write down the orthoprojection matrix onto the space orthogonal to the eigenvector of L corresponding to the eigenvalue 0, and prove (analytically) that it is indeed an orthoprojection.
(5 pts) Implement the spectral partitioning as the function partition:
# INPUT:
# A - adjacency matrix (scipy.sparse.csr_matrix)
# num_iter_fix - number of iterations with fixed shift (int)
# shift - fixed shift value (float)
# num_iter_adapt - number of iterations with adaptive shift (int) -- Rayleigh quotient iteration steps
# x0 - initial guess (1D numpy.ndarray)
# eps - relative tolerance (float)
# OUTPUT:
# x - normalized Fiedler vector (1D numpy.ndarray)
# eigs - eigenvalue estimations at each step (1D numpy.ndarray)
def partition(A, shift, num_iter_fix, num_iter_adapt, x0, eps):
    x = x0
    eigs = np.array([0])
    return x, eigs
The algorithm must halt before num_iter_fix + num_iter_adapt iterations if at some step $k$ the following condition is satisfied: $\|\lambda_k - \lambda_{k-1}\|_2 / \|\lambda_k\|_2 \le \varepsilon$.
Do not forget to use the orthogonal projection from above in the iterative process to get the correct eigenvector.
It is also a good idea to use shift=0 before the adaptive strategy is used. This, however, is not possible since the matrix L is singular, and sparse decompositions in scipy do not work in this case. Therefore, we first use a very small shift instead.
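For illustration only (this is not the graded partition function), a single step of the shifted inverse iteration with the projection might look as follows; it uses the fact that for a connected graph the kernel of L is spanned by the all-ones vector, so the orthoprojector is $I - \frac{1}{n}\mathbf{1}\mathbf{1}^T$.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch of one shifted inverse-iteration step (illustrative only).
def inverse_iteration_step(L, x, shift):
    n = L.shape[0]
    y = spla.spsolve((L - shift * sp.eye(n)).tocsc(), x)  # solve (L - shift*I) y = x
    y = y - y.mean()                                       # project out the constant component
    y = y / np.linalg.norm(y)                              # normalize the iterate
    lam = y @ (L @ y)                                      # Rayleigh quotient = eigenvalue estimate
    return y, lam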
(3 pts) Generate a random lollipop_graph using the networkx library and find its partition. Draw this graph with vertices colored according to the partition.
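A possible setup is sketched below; the lollipop sizes and the shift/iteration parameters are arbitrary, and the partition function above is assumed to be implemented.
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt

# Sketch: lollipop graph colored by the sign of the Fiedler vector.
G = nx.lollipop_graph(10, 20)               # clique of 10 nodes plus a path of 20 nodes
A = nx.adjacency_matrix(G).astype(float)    # sparse adjacency matrix

x0 = np.random.randn(A.shape[0])
x, eigs = partition(A, shift=1e-3, num_iter_fix=5, num_iter_adapt=20, x0=x0, eps=1e-8)

nx.draw(G, node_color=np.sign(x), cmap='coolwarm', with_labels=False)
plt.show()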
(2 pts) Start the method with a random initial guess x0, set num_iter_fix=0, and comment on why the method can converge to a wrong eigenvalue.
The second smallest eigenvalue $\lambda_2$ of the graph Laplacian is called the algebraic connectivity of the graph G. A basic intuition behind the use of this term is that a graph with a higher algebraic connectivity typically has more edges, and can therefore be thought of as being “more connected”.
To check this statement, create a few graphs with an equal number of vertices using networkx; one of them should be C30, the simple cycle graph, and one of them should be K30, the complete graph. (You can also change the number of vertices if it makes sense for your experiments, but do not make it trivially small.) A sketch of the graph generation is given after the list below.
* Find the algebraic connectivity for each graph using inverse iteration.
* Plot the dependence of $\lambda_2(G_i)$ on $|E_i|$.
* Draw a partition for a chosen graph from the generated set.
* Comment on the results.
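The sketch of the graph generation referenced above (the extra graphs besides C30 and K30 are arbitrary examples):
import networkx as nx

# Sketch: a family of graphs on 30 vertices for the lambda_2 vs |E| experiment.
n = 30
graphs = {
    'C30 (cycle)':    nx.cycle_graph(n),
    'path':           nx.path_graph(n),
    'random p=0.2':   nx.gnp_random_graph(n, 0.2, seed=0),
    'random p=0.5':   nx.gnp_random_graph(n, 0.5, seed=1),
    'K30 (complete)': nx.complete_graph(n),
}
for name, G in graphs.items():
    # lambda_2 should be computed with the inverse iteration implemented above;
    # here we only check connectivity and collect |E| for the plot.
    print(name, '|E| =', G.number_of_edges(), 'connected:', nx.is_connected(G))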
Let us now deal with a graph constructed from a binarized image. Consider the rule that graph vertices are only the pixels equal to 1, and each vertex can be connected to at most 8 other vertices (its pixel neighbours), i.e. the graph degree is bounded by 8.
# Your solution is here
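One possible way to assemble the sparse adjacency matrix of such an image graph is sketched below; it assumes the binarized image is a 2D 0/1 array, and the name img is illustrative.
import numpy as np
import scipy.sparse as sp

# Sketch: adjacency matrix of the 8-connectivity graph of a binary image.
# Vertices are the pixels equal to 1; two vertices are adjacent if they are
# among each other's 8 neighbours.
def image_to_graph(img):
    idx = -np.ones(img.shape, dtype=int)
    ys, xs = np.nonzero(img)
    idx[ys, xs] = np.arange(len(ys))          # number the foreground pixels
    rows, cols = [], []
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for dy, dx in offsets:
        for y, x, i in zip(ys, xs, idx[ys, xs]):
            ny, nx_ = y + dy, x + dx
            inside = 0 <= ny < img.shape[0] and 0 <= nx_ < img.shape[1]
            if inside and idx[ny, nx_] >= 0:  # neighbour is also a foreground pixel
                rows.append(i)
                cols.append(idx[ny, nx_])
    n = len(ys)
    return sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))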
You have received radar air-scan data of a terrorist hideout, taken by a heavy-class surveillance drone. Unfortunately, the scan was made with an old-fashioned radar, so the picture is convolved with a diffraction pattern. You need to deconvolve the picture to recover the building plan.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.special import hankel2
radiointel = np.load('radiointel.npy')
plt.subplot(1,2,1)
plt.imshow( np.abs(radiointel) )
plt.title('Intensity')
plt.subplot(1,2,2)
plt.imshow( np.angle(radiointel), cmap='bwr' )
plt.title('Phase')
plt.show()
In this problem you are asked to use an FFT-based matvec to build the convolution operator for a picture of size $N \times N$, where $N = 300$, with the following kernel (2D Helmholtz scattering):
$$G_{\overline{i_1 j_1},\, \overline{i_2 j_2}} = -\frac{1j}{4} H^{(2)}_0\left(k_0 \cdot \Delta r_{\overline{i_1 j_1},\, \overline{i_2 j_2}}\right), \quad i_1, j_1, i_2, j_2 = 0, \dots, N-1,$$
except when both $i_1 = i_2$ and $j_1 = j_2$; in that case set $G_{i_1 = i_2,\, j_1 = j_2} = 0$.
Here $1j$ is the imaginary unit and $H^{(2)}_0(x)$ is the (complex-valued) Hankel function of the second kind of order 0; see scipy.special.hankel2.
$$\Delta r_{\overline{i_1 j_1},\, \overline{i_2 j_2}} = h \sqrt{(i_1 - i_2)^2 + (j_1 - j_2)^2}$$
See https://github.com/oseledets/nla2018/blob/master/lectures/lecture-15.ipynb for the recipe.
For the kernel array, the following element correspondence with the BTTB matrix $G$ holds: $\tilde{G}_{i_1 - i_2,\, j_1 - j_2} \equiv$ np.roll(np.roll($G_{\overline{i_1 j_1},\, \overline{i_2 j_2}}$, -N, axis=1), -N, axis=0).
Create the complex-valued kernel $\tilde{G}$ as a $(2N-1) \times (2N-1)$ matrix according to the instructions above. Note that at the point where $\Delta r = 0$ the value of the kernel should be manually set to zero. Store it in the variable eG and plot eG.real with plt.imshow.
Write a function Gx that computes the matvec of $G$ with a given vector x. Make sure all calculations and arrays use dtype=np.complex64.
Hint 1: a matvec with a discrete delta function (a vector of all zeros with a single 1.0 somewhere) should return the kernel centered at the location of that delta. If this doesn't happen, something is wrong.
Hint 2: As shown in the lecture slides, the kernel eG should be cyclically shifted along both dimensions (np.roll) so that the center of the Green's function is located at eG[0,0]; otherwise it will not work properly inside fft2.
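A minimal sketch of the FFT-based matvec these hints describe, assuming eG has already been rolled so that the center of the Green's function sits at eG[0, 0] (Gx_sketch is an illustrative name, not the graded Gx):
import numpy as np

# Sketch of the BTTB matvec via circulant embedding: zero-pad the input image,
# multiply spectra, inverse-transform, and crop the top-left N x N block.
def Gx_sketch(x, eG, N):
    X = np.reshape(x, (N, N))
    Xp = np.zeros(eG.shape, dtype=np.complex64)             # zero-pad to (2N-1, 2N-1)
    Xp[:N, :N] = X
    Y = np.fft.ifft2(np.fft.fft2(eG) * np.fft.fft2(Xp))     # circular convolution
    return Y[:N, :N].reshape(-1).astype(np.complex64)       # crop and flatten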
Use scipy.sparse.linalg.LinearOperator to create an object that has the attribute .dot() (this object will be further used in the iterative process). Note that the .dot() input and output must be 1D vectors, so do not forget to use reshape.
Deconvolve radiointel. The result should be a binary mask array (real, integer, of 0s and 1s) of the plan of the building. Make sure the solver converged sufficiently and that you did the post-processing properly. Plot the result as an image.
Note: you can use the standard fft and ifft from e.g. numpy.fft.
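A hedged sketch of how the LinearOperator and the solver call might look. Gx, eG, N, radiointel, and callback are defined in the cells below; gmres is only one possible Krylov solver, and using radiointel directly as the right-hand side is an assumption of this sketch.
import numpy as np
import scipy.sparse.linalg as spla

# Sketch: wrap the FFT matvec into a LinearOperator so that iterative solvers
# can apply it to flat vectors of length N**2.
L_Gx_sketch = spla.LinearOperator(
    shape=(N * N, N * N),
    matvec=lambda v: Gx(v.astype(np.complex64), eG),
    dtype=np.complex64,
)

rhs = radiointel.reshape(-1).astype(np.complex64)            # flatten the measured image
sol, info = spla.gmres(L_Gx_sketch, rhs, callback=callback)  # example Krylov solver call
mask_sketch = sol.reshape(N, N)                              # back to an N x N image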
k0 = #
N = #
def make_eG(k0, N):
    # INPUT:
    # k0 (dtype = float)
    # N (dtype = int)
    # OUTPUT:
    # eG: np.array, shape = (2N-1, 2N-1), dtype = np.complex64
    return eG
eG = make_eG(k0=k0, N=N)
plt.imshow(eG.real)
def Gx(x, eG):
    # INPUT:
    # x: np.array, shape = (N**2, ), dtype = np.complex64
    # eG: np.array, shape = (2N-1, 2N-1), dtype = np.complex64
    # OUTPUT:
    # matvec: np.array, shape = (N**2, ), dtype = np.complex64
    return matvec
Big-O complexity of one matvec operation is ... It can be shown...
L_Gx =
def normalize(mask):  # proper normalization to a binary mask
    mask = np.clip(mask, a_min=0, a_max=1)
    mask = np.round(mask)
    mask = np.asarray(mask, dtype=int)
    return mask
errs = []
def callback(err):  # callback function to store the history of convergence
    errs.append(err)  # appending to the module-level list needs no global declaration
    return
mask = #some_solver(, , , callback = callback)
plt.figure()
plt.imshow( mask.real , cmap='binary')
plt.title('Deconvolution after recovery')
plt.colorbar()
plt.figure()
plt.imshow( normalize( mask.real ) , cmap='binary')
plt.title('Restored building plan')
plt.colorbar()
plt.figure()
plt.semilogy(errs)
plt.title('Convergence')