import numpy as np
# difference between the following two functions
def myfunc1(x):
    out = 0
    for i in range(len(x)):
        out += x[i]  # adds one element at a time: returns a scalar sum
    return out

def myfunc2(x):
    out = 0
    for i in range(len(x)):
        out += x     # adds the whole array each time: returns an array
    return out
The inner product (or dot product): for $\mathbf{a}, \mathbf{b} \in \mathbb{R}^n$, their inner product is $\mathbf{a}\cdot\mathbf{b} = \displaystyle\sum_{i=1}^n a_i b_i.$
# for loop version
# semi-vectorized numpy version
# built-in function
# use the method built into the ndarray object
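The four versions sketched in the comments above can be implemented as follows (the variable names here are illustrative):

```python
import numpy as np

a = np.random.rand(1000)
b = np.random.rand(1000)

# for loop version
out1 = 0.0
for i in range(len(a)):
    out1 += a[i] * b[i]

# semi-vectorized numpy version: element-wise product, then Python's sum
out2 = sum(a * b)

# built-in function
out3 = np.dot(a, b)

# use the method built into the ndarray object
out4 = a.dot(b)
```

All four return the same value; the vectorized versions are much faster for large arrays because the loop runs in compiled code rather than in Python.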
Let $\mathbf{A}\in \mathbb{R}^{m\times n}$ be a matrix and $\mathbf{x}\in \mathbb{R}^n$ a (column) vector. Then we can compute $\mathbf{A}\mathbf{x}$: $$\mathbf {A} \mathbf{x}= \begin{pmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n} \\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\\\end{pmatrix} \begin{pmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{pmatrix} = \begin{pmatrix}a_{11}x_{1}+\cdots +a_{1n}x_{n}\\a_{21}x_{1}+\cdots +a_{2n}x_{n}\\ \vdots \\a_{m1}x_{1}+\cdots +a_{mn}x_{n}\end{pmatrix} = \begin{pmatrix} \mathbf{a}_1\cdot \mathbf{x} \\ \mathbf{a}_2\cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m\cdot \mathbf{x}\end{pmatrix} $$ where the matrix $\mathbf {A} $ has the following representation: $$ \mathbf{A} = (\mathbf{a}_1 , \mathbf{a}_2 ,\cdots ,\mathbf{a}_m)^{\top}, $$ with $\mathbf{a}_i$ being the column vector representing $\mathbf{A}$'s $i$-th row.
A = np.array([[1,2,3],[4,5,6],[7,8,9]])
x = np.array([1, 0, 1])
# question: what does A*x compute here?
A*x
np.multiply(A,x) # same with A*x, multiply arrays element-wise.
# more example
S = np.arange(9).reshape(3,3)
v = np.arange(3)
np.multiply(S,v)
# for loop version
# built-in function
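The two versions sketched above can be implemented as follows (using the same S and v as before):

```python
import numpy as np

S = np.arange(9).reshape(3, 3)
v = np.arange(3)

# for loop version: entry i of the result is the dot product of S's i-th row with v
result = np.zeros(S.shape[0])
for i in range(S.shape[0]):
    for j in range(S.shape[1]):
        result[i] += S[i, j] * v[j]

# built-in function: the true matrix-vector product
result2 = S.dot(v)  # equivalently np.dot(S, v) or S @ v
```

Note how this differs from np.multiply(S, v), which broadcasts v against every row element-wise instead of summing.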
Suppose we have a big matrix A and a small filter matrix (called a kernel) K: a 3x3 filter K applied to a (28, 20) array A, generated using the following code:
A = np.random.randint(90,110, size=(28,20))
K = np.array([[0,-1,0], [-1,5,-1], [0,-1,0]])
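One way to apply the filter is a direct sliding-window loop. This is a minimal sketch without padding, so the output shrinks to (26, 18); since this K is symmetric, flipping the kernel (true convolution) gives the same result:

```python
import numpy as np

A = np.random.randint(90, 110, size=(28, 20))
K = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])

m, n = A.shape
k = K.shape[0]  # kernel size, here 3
out = np.zeros((m - k + 1, n - k + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        # element-wise product of the kernel with the current window, then sum
        out[i, j] = np.sum(K * A[i:i + k, j:j + k])
```

A library version of the same operation is available as scipy.signal.convolve2d.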
Now let us consider when $m=n$, we want to solve a system of linear equations:
$$\mathbf{A} \mathbf{x} = \mathbf{b},$$
which is
$$\left\{
\begin{aligned} a_{11}x_{1}+a_{12}x_{2}+\cdots +a_{1n}x_{n}&=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\cdots +a_{2n}x_{n}&=b_{2}\\&\ \ \vdots \\a_{n1}x_{1}+a_{n2}x_{2}+\cdots +a_{nn}x_{n}&=b_{n}.\end{aligned}
\right.
$$
Instead of manually implementing the pivoting process with for loops, we can use the linear algebra routines in numpy.linalg, and later scipy.linalg in SciPy (a comprehensive scientific computing library).
Reference: https://docs.scipy.org/doc/numpy-1.15.1/reference/routines.linalg.html
from numpy import linalg as LA
# first we have to check the rank
A = np.array([[1,2,3],[4,5,6],[7,8,9]])
LA.matrix_rank(A)
A = np.array([[2,-1,1],[1,2,-1],[0,-1,2]])
LA.matrix_rank(A)
b = np.array([1,1,1])
x = LA.solve(A,b)
x
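It is good practice to verify that the returned x actually satisfies the system by checking the residual:

```python
import numpy as np
from numpy import linalg as LA

A = np.array([[2, -1, 1], [1, 2, -1], [0, -1, 2]])
b = np.array([1, 1, 1])
x = LA.solve(A, b)

# the residual A x - b should be numerically zero
print(np.allclose(A @ x, b))  # should print True
```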
The determinant, inverse, and eigenvalues/eigenvectors: det, inv, eig.
Ainv = LA.inv(A)
print(Ainv)
LA.det(A)
w, v = LA.eig(A)
The matrix $A$ is as follows: $$A = \begin{pmatrix} 2 & -1 \\ -1 & 2 &-1 \\ & -1 & 2 & -1 \\ && \ddots & \ddots & \ddots \\ &&& -1 & 2 & -1 \\& &&& -1 & 2 \end{pmatrix}$$
Use the numpy function eye to generate this matrix without any loops (reference: numpy.eye). Use the eig function in linalg to find the eigenvalues eigvals and the eigenvectors eigvecs of this matrix. Generate points using linspace, and plot the eigenvectors corresponding to the smallest three eigenvalues against these points using the plot function in matplotlib.pyplot (reference: Lecture 5). What do these eigenvectors look like?
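A possible sketch for the first two parts, assuming a size of n = 10 for illustration; the plotting step would follow the same pattern as in Lecture 5:

```python
import numpy as np
from numpy import linalg as LA

n = 10  # assumed size for illustration
# 2 on the main diagonal, -1 on the sub- and super-diagonals, no loops needed
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

eigvals, eigvecs = LA.eig(A)

# sort by eigenvalue so the smallest three come first
order = np.argsort(eigvals)
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]

# plotting sketch (assuming matplotlib is available):
# import matplotlib.pyplot as plt
# pts = np.linspace(0, 1, n)
# plt.plot(pts, eigvecs[:, :3])
```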