### Note

• Each item of every problem should be done in a separate cell.
• Do not forget to include a title, a label for each line, and axis names for every figure you plot.
• You can submit incomplete problems; they will be graded as well.

# Problem 1 (Python demo: convolution of a signal)

## 40 pts

• First of all, download one of the $\verb|.wav|$ files with StarCraft sounds from here. Load it in Python and play it using the functions from Lecture 1.

Our next goal is to process this signal by multiplying it by a special type of matrix (a convolution operation) that smooths the signal.

• Before processing this file, let us estimate what size of matrix we can afford. Let $N$ be the length of the signal. Estimate analytically the memory in megabytes required to store a dense square matrix of size $N\times N$ in your main memory (RAM). Cut the signal so that this matrix fits in RAM and you avoid swapping.
• Create the matrix $T$: $$T_{ij} = \sqrt{\frac{\alpha}{\pi}}e^{-\alpha (i-j)^2}, \quad \alpha = \frac{1}{20}, \quad i,j=1,\dots,N.$$ Avoid using loops or lists! The function scipy.linalg.toeplitz might be helpful for this task. Note: matrices that depend only on the difference of indices, $T_{ij} \equiv T_{i-j}$, are called Toeplitz matrices. Multiplication of a Toeplitz matrix by a vector is called convolution.
• Multiply the matrix $T$ by your signal (use the $\verb|numpy|$ package for matvec operations). Plot the first $100$ points of the result and the first $100$ points of your signal on the same figure. Make the same plots for $\alpha = \frac{1}{5}$ and $\alpha = \frac{1}{100}$ using subplots in matplotlib. Make sure that the results look like a slightly smoothed version of the initial signal.
• Play the resulting signal. In order to do so, convert your array to $\verb|int16|$ before playing:
your_array = your_array.astype(np.int16)

• Measure the time of multiplication by $T$ for different values of $N$ and plot the timings in log-log scale. What is the slope of this line? Why?
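A minimal sketch of the memory estimate and the loop-free construction of $T$ (a random vector stands in for the $\verb|.wav|$ samples, and the length $N = 2000$ is an arbitrary choice for illustration):

```python
import numpy as np
from scipy.linalg import toeplitz

# A dense N x N float64 matrix takes 8 * N**2 bytes.
N = 2_000
mem_mb = 8 * N**2 / 2**20   # ~30.5 MB here; scale N up to see what fits in your RAM
print(f"dense {N}x{N} matrix: {mem_mb:.1f} MB")

alpha = 1 / 20
# T[i, j] depends only on i - j, so T is Toeplitz and is fully
# determined by its first column.
first_col = np.sqrt(alpha / np.pi) * np.exp(-alpha * np.arange(N) ** 2)
T = toeplitz(first_col)      # symmetric, since T depends on (i - j)^2

signal = np.random.randn(N)  # stand-in for the loaded .wav samples
smoothed = T @ signal        # Toeplitz matvec = convolution, O(N^2) flops
```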
# Problem 2 (Instability of the standard Gram-Schmidt algorithm)

## 30 pts

Our goal now is to orthogonalize a system of linearly independent vectors $v_1,\dots,v_n$. The standard algorithm for the task is the Gram-Schmidt algorithm: $$\begin{split} u_1 &= v_1, \\ u_2 &= v_2 - \frac{(v_2, u_1)}{(u_1, u_1)} u_1, \\ u_3 &= v_3 - \frac{(v_3, u_1)}{(u_1, u_1)} u_1 - \frac{(v_3, u_2)}{(u_2, u_2)} u_2, \\ \dots \\ u_n &= v_n - \frac{(v_n, u_1)}{(u_1, u_1)} u_1 - \frac{(v_n, u_2)}{(u_2, u_2)} u_2 - \dots - \frac{(v_n, u_{n-1})}{(u_{n-1}, u_{n-1})} u_{n-1}. \end{split}$$ Now $u_1, \dots, u_n$ are orthogonal vectors in exact arithmetic. Then, to get an orthonormal system, divide each of the vectors by its norm: $u_i := u_i/\|u_i\|$.
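A direct sketch of the formulas above (vectors are taken as columns of a matrix; the function name and layout are my own choices, not prescribed by the problem):

```python
import numpy as np

def gram_schmidt(V):
    """Classical Gram-Schmidt: orthonormalize the columns of V."""
    V = np.array(V, dtype=float)
    U = V.copy()
    n = V.shape[1]
    for k in range(n):
        for j in range(k):  # subtract the projection of v_k onto each earlier u_j
            U[:, k] -= (V[:, k] @ U[:, j]) / (U[:, j] @ U[:, j]) * U[:, j]
    return U / np.linalg.norm(U, axis=0)  # normalize the columns at the end

# Check on a random 10 x 10 matrix via the Gram matrix G = U^T U.
rng = np.random.default_rng(0)
V = rng.standard_normal((10, 10))
U = gram_schmidt(V)
G = U.T @ U
print("||G - I|| =", np.linalg.norm(G - np.eye(10)))  # small for a well-conditioned matrix
```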

• Implement the described Gram-Schmidt algorithm and check it on a random $10\times 10$ matrix. Note: to check orthogonality, compute the matrix of scalar products $G_{ij} = (u_i, u_j)$ (called the Gram matrix), which should equal the identity matrix $I$. The error $\|G - I\|$ shows how far the system $u_i$ is from being orthonormal.
• Create a Hilbert matrix of size $10\times 10$ without using loops.
• Orthogonalize its columns using the described Gram-Schmidt algorithm. Is the Gram matrix close to the identity matrix in this case? Why do you think this happens?

The observed loss of orthogonality is a problem of this particular algorithm. To avoid it, the modified Gram-Schmidt algorithm can be used. We will also discuss more advanced algorithms later (in the QR decomposition lecture).
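For comparison, a sketch of the modified Gram-Schmidt variant side by side with the classical one on the Hilbert matrix (0-based indexing, so $H_{ij} = 1/(i+j+1)$; the implementation details are one possible choice, not prescribed by the text):

```python
import numpy as np

def classical_gs(V):
    """Classical Gram-Schmidt, exactly as in the formulas above."""
    V = np.array(V, dtype=float)
    U = V.copy()
    n = V.shape[1]
    for k in range(n):
        for j in range(k):
            U[:, k] -= (V[:, k] @ U[:, j]) / (U[:, j] @ U[:, j]) * U[:, j]
    return U / np.linalg.norm(U, axis=0)

def modified_gs(V):
    """Modified Gram-Schmidt: same flop count, but each projection is
    subtracted from the already corrected remaining vectors, which
    suppresses the accumulation of rounding errors."""
    Q = np.array(V, dtype=float)
    n = Q.shape[1]
    for k in range(n):
        Q[:, k] /= np.linalg.norm(Q[:, k])
        Q[:, k + 1:] -= np.outer(Q[:, k], Q[:, k] @ Q[:, k + 1:])
    return Q

# Hilbert matrix without loops (0-based: H[i, j] = 1 / (i + j + 1)).
i = np.arange(10)
H = 1.0 / (i[:, None] + i[None, :] + 1)

I10 = np.eye(10)
U_c = classical_gs(H)
Q_m = modified_gs(H)
err_cgs = np.linalg.norm(U_c.T @ U_c - I10)
err_mgs = np.linalg.norm(Q_m.T @ Q_m - I10)
print(err_cgs, err_mgs)  # orthogonality is lost far more severely by the classical variant
```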

# Problem 3 (Unitary invariance of norms)

## 30 pts

One of the key properties of norms that will help us work easily with the singular value decomposition (one of the key concepts of this course, to be discussed later) is the unitary invariance of the second vector norm and of the spectral and Frobenius matrix norms. Let us prove these properties.

• Prove that the second vector norm $\|\cdot\|_2$ is unitary invariant: for any unitary matrix $U$ and any vector $x$, we have $\|Ux\|_2 = \|x\|_2$.
• Prove that the spectral matrix norm $\|\cdot\|_2$ is unitary invariant: for any unitary matrices $U$ and $V$ and any matrix $A$, we have $\|UAV\|_2 = \|A\|_2$. Hint: use the definition of the operator norm.
• In order to prove the unitary invariance of the Frobenius norm, first prove that $\|A\|^2_F = \text{trace}(AA^*) = \text{trace}(A^*A)$.
• Now prove that the Frobenius norm is unitary invariant. You can use the following property of the trace: $\text{trace}(BC) = \text{trace}(CB)$.
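Before proving the identities, they can be sanity-checked numerically; random unitary matrices are obtained from a QR factorization (this is an illustration only, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def rand_unitary(n):
    # The Q factor of a complex Gaussian matrix is unitary.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return Q

U, V = rand_unitary(n), rand_unitary(n)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

print(abs(np.linalg.norm(U @ x) - np.linalg.norm(x)))                    # vector 2-norm
print(abs(np.linalg.norm(U @ A @ V, 2) - np.linalg.norm(A, 2)))          # spectral norm
print(abs(np.linalg.norm(A, 'fro')**2 - np.trace(A @ A.conj().T).real))  # trace identity
print(abs(np.linalg.norm(U @ A @ V, 'fro') - np.linalg.norm(A, 'fro')))  # Frobenius norm
```

All four printed differences should be at the level of machine precision.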
# Problem 4 (Bonus)

• Given $A = [a_{ij}] \in\mathbb{C}^{n\times m}$, prove that the operator matrix norms $\Vert \cdot \Vert_{1}$ and $\Vert \cdot \Vert_{\infty}$ satisfy $$\Vert A \Vert_{1} = \max_{1\leqslant j \leqslant m} \sum_{i=1}^n |a_{ij}|, \quad \Vert A \Vert_{\infty} = \max_{1\leqslant i \leqslant n} \sum_{j=1}^m |a_{ij}|.$$

• The norm is called absolute if $\|x\|=\| \lvert x \rvert \|$ holds for any vector $x$, where $x=(x_1,\dots,x_n)^T$ and $\lvert x \rvert = (\lvert x_1 \rvert,\dots, \lvert x_n \rvert)^T$. Give an example of a norm which is not absolute.
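A quick numerical check of the two operator-norm formulas against NumPy's built-in matrix norms (illustration only; the proof itself is the exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))

max_col_sum = np.abs(A).sum(axis=0).max()  # claimed value of ||A||_1
max_row_sum = np.abs(A).sum(axis=1).max()  # claimed value of ||A||_inf

print(max_col_sum, np.linalg.norm(A, 1))       # should agree
print(max_row_sum, np.linalg.norm(A, np.inf))  # should agree
```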
