Two masses $M_1$ and $M_2$ are hung from a horizontal rod with length $L$ in such a way that a rope of length $L_1$ connects the left end of the rod to $M_1$, a rope of length $L_2$ connects $M_1$ and $M_2$, and a rope of length $L_3$ connects $M_2$ to the right end of the rod. The system is at rest (in equilibrium under gravity).
Find the angles that the ropes make with the rod and the tension forces in the ropes.
In class we derived the equations that govern this problem – see 14_String_Problem_lecture_notes (PDF).
We can represent the problem as a system of nine coupled non-linear equations:
$$ \mathbf{f}(\mathbf{x}) = 0 $$

Treat $\sin\theta_i$ and $\cos\theta_i$ together with $T_i$, $1\leq i \leq 3$, as unknowns that have to simultaneously fulfill the nine equations \begin{align} -T_1 \cos\theta_1 + T_2\cos\theta_2 &= 0\\ T_1 \sin\theta_1 - T_2\sin\theta_2 - W_1 &= 0\\ -T_2\cos\theta_2 + T_3\cos\theta_3 &= 0\\ T_2\sin\theta_2 + T_3\sin\theta_3 - W_2 &= 0\\ L_1\cos\theta_1 + L_2\cos\theta_2 + L_3\cos\theta_3 - L &= 0\\ -L_1\sin\theta_1 - L_2\sin\theta_2 + L_3\sin\theta_3 &= 0\\ \sin^2\theta_1 + \cos^2\theta_1 - 1 &= 0\\ \sin^2\theta_2 + \cos^2\theta_2 - 1 &= 0\\ \sin^2\theta_3 + \cos^2\theta_3 - 1 &= 0 \end{align}
Consider the nine equations as a vector function $\mathbf{f}$ that takes the 9-vector $\mathbf{x}$ of unknowns as its argument: \begin{align} \mathbf{f}(\mathbf{x}) &= 0\\ \mathbf{x} &= \left(\begin{array}{c} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8 \end{array}\right) = \left(\begin{array}{c} \sin\theta_1 \\ \sin\theta_2 \\ \sin\theta_3 \\ \cos\theta_1 \\ \cos\theta_2 \\ \cos\theta_3 \\ T_1 \\ T_2 \\ T_3 \end{array}\right) \\ \mathbf{L} &= \left(\begin{array}{c} L \\ L_1 \\ L_2 \\ L_3 \end{array}\right), \quad \mathbf{W} = \left(\begin{array}{c} W_1 \\ W_2 \end{array}\right) \end{align}
In more detail:
\begin{align} f_0(\mathbf{x}) &= -x_6 x_3 + x_7 x_4 &= 0\\ f_1(\mathbf{x}) &= x_6 x_0 - x_7 x_1 - W_1 & = 0\\ \dots\\ f_8(\mathbf{x}) &= x_2^2 + x_5^2 - 1 &=0 \end{align}

We generalize the Newton-Raphson algorithm from the last lecture to $n$ dimensions:
Given a trial vector $\mathbf{x}$, the correction $\Delta\mathbf{x}$ can be derived from the Taylor expansion $$ f_i(\mathbf{x} + \Delta\mathbf{x}) = f_i(\mathbf{x}) + \sum_{j=1}^{n} \left.\frac{\partial f_i}{\partial x_j}\right|_{\mathbf{x}} \, \Delta x_j + \dots $$ or in full vector notation \begin{align} \mathbf{f}(\mathbf{x} + \Delta\mathbf{x}) &= \mathbf{f}(\mathbf{x}) + \left.\frac{d\mathbf{f}}{d\mathbf{x}}\right|_{\mathbf{x}} \Delta\mathbf{x} + \dots\\ &= \mathbf{f}(\mathbf{x}) + \mathsf{J}(\mathbf{x}) \Delta\mathbf{x} + \dots \end{align} where $\mathsf{J}(\mathbf{x})$ is the Jacobian matrix of $\mathbf{f}$ at $\mathbf{x}$, the generalization of the derivative to multivariate vector functions.
Solve $$ \mathbf{f}(\mathbf{x} + \Delta\mathbf{x}) = 0 $$ i.e., $$ \mathsf{J}(\mathbf{x}) \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x}) $$ for the correction $\Delta\mathbf{x}$ $$ \Delta\mathbf{x} = -\mathsf{J}(\mathbf{x})^{-1} \mathbf{f}(\mathbf{x}) $$ which has the same form as the 1D Newton-Raphson correction $\Delta x = -f'(x)^{-1} f(x)$.
These are matrix equations (we linearized the problem). One can either explicitly solve for the unknown vector $\Delta\mathbf{x}$ with the inverse matrix of the Jacobian or use other methods to solve the coupled system of linear equations of the general form $$ \mathsf{A} \mathbf{x} = \mathbf{b}. $$
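A minimal sketch of the resulting $n$-dimensional Newton-Raphson iteration, where `f` and `J` are placeholder names for user-supplied routines that return $\mathbf{f}(\mathbf{x})$ and the Jacobian $\mathsf{J}(\mathbf{x})$ (shown on a simple 2D test system, not the full string problem):

```python
import numpy as np

def newton_raphson(f, J, x, Nmax=100, tol=1e-10):
    """Solve f(x) = 0 with the n-dimensional Newton-Raphson method.

    f(x) returns the residual n-vector, J(x) the n x n Jacobian matrix.
    """
    x = np.asarray(x, dtype=float)
    for _ in range(Nmax):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # solve J(x) dx = -f(x) rather than inverting J explicitly
        dx = np.linalg.solve(J(x), -fx)
        x = x + dx
    return x

# test system: intersection of the unit circle with the line x0 = x1
def f(x):
    return np.array([x[0]**2 + x[1]**2 - 1, x[0] - x[1]])

def J(x):
    return np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])

x = newton_raphson(f, J, [1.0, 0.5])   # converges to (1/sqrt(2), 1/sqrt(2))
```

The same iteration, with `f` and `J` implementing the nine string-problem equations and their Jacobian, solves the original problem.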
numpy.linalg
import numpy as np
np.linalg?
Solve the coupled system of linear equations of the general form $$ \mathsf{A} \mathbf{x} = \mathbf{b}. $$
A = np.array([
[1, 0, 0],
[0, 1, 0],
[0, 0, 2]
])
b = np.array([1, 0, 1])
What does this system of equations look like?
for i in range(A.shape[0]):
    terms = []
    for j in range(A.shape[1]):
        terms.append("{1} x[{0}]".format(j, A[i, j]))
    print(" + ".join(terms), "=", b[i])
1 x[0] + 0 x[1] + 0 x[2] = 1
0 x[0] + 1 x[1] + 0 x[2] = 0
0 x[0] + 0 x[1] + 2 x[2] = 1
Now solve it with numpy.linalg.solve:
Test that it satisfies the original equation: $$ \mathsf{A} \mathbf{x} - \mathbf{b} = 0 $$
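For example (repeating the definitions of $\mathsf{A}$ and $\mathbf{b}$ so that the cell is self-contained):

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 2]])
b = np.array([1, 0, 1])

# solve A x = b for x
x = np.linalg.solve(A, b)      # x == [1, 0, 0.5]

# test that it satisfies the original equation: A x - b = 0
assert np.allclose(A @ x - b, 0)
```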
With $$ \mathsf{A}_1 = \left(\begin{array}{ccc} +4 & -2 & +1\\ +3 & +6 & -4\\ +2 & +1 & +8 \end{array}\right) $$ and $$ \mathbf{b}_1 = \left(\begin{array}{c} +12 \\ -25 \\ +32 \end{array}\right), \quad \mathbf{b}_2 = \left(\begin{array}{c} +4 \\ -1 \\ +36 \end{array}\right), \quad $$ solve for $\mathbf{x}_i$ $$ \mathsf{A}_1 \mathbf{x}_i = \mathbf{b}_i $$ and check the correctness of your answer.
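One possible solution sketch, again with numpy.linalg.solve (the residual check at the end verifies the answers):

```python
import numpy as np

A1 = np.array([[4, -2, 1],
               [3, 6, -4],
               [2, 1, 8]])
b1 = np.array([12, -25, 32])
b2 = np.array([4, -1, 36])

x1 = np.linalg.solve(A1, b1)   # approximately [ 1, -2,  4]
x2 = np.linalg.solve(A1, b2)   # approximately [ 1,  2,  4]

# check correctness: residuals vanish to machine precision
assert np.allclose(A1 @ x1, b1)
assert np.allclose(A1 @ x2, b2)
```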
In order to solve directly we need the inverse of $\mathsf{A}$: $$ \mathsf{A}\mathsf{A}^{-1} = \mathsf{A}^{-1}\mathsf{A} = \mathsf{1} $$ Then $$ \mathbf{x} = \mathsf{A}^{-1} \mathbf{b} $$
If the inverse exists, numpy.linalg.inv() can calculate it:
Check that it behaves like an inverse:
Now solve the coupled equations directly:
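A sketch using $\mathsf{A}_1$ and $\mathbf{b}_1$ from the previous exercise:

```python
import numpy as np

A1 = np.array([[4, -2, 1],
               [3, 6, -4],
               [2, 1, 8]])
b1 = np.array([12, -25, 32])

A1_inv = np.linalg.inv(A1)

# check that it behaves like an inverse: A A^{-1} = A^{-1} A = 1
assert np.allclose(A1 @ A1_inv, np.identity(3))
assert np.allclose(A1_inv @ A1, np.identity(3))

# solve directly: x = A^{-1} b
x1 = A1_inv @ b1
assert np.allclose(A1 @ x1, b1)
```

In practice `np.linalg.solve` is preferred over forming the inverse, which is both slower and numerically less stable.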
The equation \begin{gather} \mathsf{A} \mathbf{x}_i = \lambda_i \mathbf{x}_i \end{gather} is the eigenvalue problem; a solution provides the eigenvalues $\lambda_i$ and the corresponding eigenvectors $\mathbf{x}_i$ that satisfy the equation.
The principal axes of the moment of inertia tensor are defined through the eigenvalue problem $$ \mathsf{I} \mathbf{\omega}_i = \lambda_i \mathbf{\omega}_i $$ The principal axes are the $\mathbf{\omega}_i$.
Isquare = np.array([[2/3, -1/4], [-1/4, 2/3]])
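The eigenvalue problem can be solved numerically with numpy.linalg.eig; a sketch (the variable names lambdas and omegas are a chosen convention):

```python
import numpy as np

Isquare = np.array([[2/3, -1/4],
                    [-1/4, 2/3]])

# eigenvalues and eigenvectors; column omegas[:, i] belongs to lambdas[i]
lambdas, omegas = np.linalg.eig(Isquare)
# for this matrix the eigenvalues are 2/3 - 1/4 and 2/3 + 1/4 (in some order)
```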
Note that the eigenvectors are omegas[:, i]! You can transpose so that axis 0 is the eigenvector index:
Test:
$$
(\mathsf{I} - \lambda_i \mathsf{1}) \mathbf{\omega}_i = 0
$$
(The identity matrix can be generated with np.identity(2).)
In quantum mechanics, a spin 1/2 particle is represented by a spinor $\chi$, a 2-component vector. The Hamiltonian operator for a stationary spin 1/2 particle in a homogeneous magnetic field $B_y$ is $$ \mathsf{H} = -\gamma \mathsf{S}_y B_y = -\gamma B_y \frac{\hbar}{2} \mathsf{\sigma_y} = \hbar \omega \left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array}\right) $$
Determine the eigenvalues and eigenstates $$ \mathsf{H} \mathbf{\chi} = E \mathbf{\chi} $$ of the spin 1/2 particle.
(To make this a purely numerical problem, divide through by $\hbar\omega$, i.e. calculate $E/\hbar\omega$.)
Normalize the eigenvectors: $$ \hat\chi = \frac{1}{\sqrt{\chi^\dagger \cdot \chi}} \chi $$
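A sketch of the numerical diagonalization in units of $\hbar\omega$ (so the matrix is just $\sigma_y$, with eigenvalues $E/\hbar\omega = \pm 1$); np.linalg.eig already returns unit-norm eigenvectors, but the loop applies the normalization formula explicitly:

```python
import numpy as np

# Hamiltonian in units of hbar*omega: H/(hbar omega) = sigma_y
sigma_y = np.array([[0, -1j],
                    [1j, 0]])

E, chi = np.linalg.eig(sigma_y)
# eigenvalues E/(hbar omega) are +1 and -1 (in some order)

for i in range(chi.shape[1]):
    # normalize: chi -> chi / sqrt(chi^dagger chi)
    chi[:, i] = chi[:, i] / np.sqrt(chi[:, i].conj() @ chi[:, i])
    # each column is an eigenstate: H chi = E chi
    assert np.allclose(sigma_y @ chi[:, i], E[i] * chi[:, i])
```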
Find the eigenvalues and eigenvectors of $$ \mathsf{A}_2 = \left(\begin{array}{ccc} -2 & +2 & -3\\ +2 & +1 & -6\\ -1 & -2 & +0 \end{array}\right) $$
Are the eigenvectors normalized?
Check your results.
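A sketch of the checks for $\mathsf{A}_2$ (its eigenvalues work out to $5$ and a doubly degenerate $-3$); the norm assertion answers the normalization question, since np.linalg.eig returns eigenvectors as unit-norm columns:

```python
import numpy as np

A2 = np.array([[-2, 2, -3],
               [2, 1, -6],
               [-1, -2, 0]])

evals, evecs = np.linalg.eig(A2)

for i in range(len(evals)):
    # eigenvectors come back normalized to unit length...
    assert np.allclose(np.linalg.norm(evecs[:, i]), 1)
    # ...and each pair satisfies A x = lambda x
    assert np.allclose(A2 @ evecs[:, i], evals[i] * evecs[:, i])
```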