Marcos Duarte

Laboratory of Biomechanics and Motor Control (http://demotu.org/)

Federal University of ABC, Brazil

Python handles mathematical operations with numeric scalars and vectors very well, and you can use Sympy for similar operations with abstract symbols. Let's briefly review scalars and vectors and show how to use Python for numerical calculations.

For a review about scalars and vectors, see chapter 2 of Ruina and Rudra's book.

In [1]:

```
import math
a = 2
b = 3
print('a =', a, ', b =', b)
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print('a ** b =', a ** b)
print('sqrt(b) =', math.sqrt(b))
```

If you have a set of numbers (an array), it is probably better to use Numpy: it will be faster for large data sets and, combined with Scipy, offers many more mathematical functions.

In [2]:

```
import numpy as np
a = 2
b = [3, 4, 5, 6, 7, 8]
b = np.array(b)
print('a =', a, ', b =', b)
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print('a ** b =', a ** b)
print('np.sqrt(b) =', np.sqrt(b)) # use numpy functions for numpy arrays
```

Numpy performs the arithmetic operations of the single number in `a` with all the numbers of the array `b`. This is called broadcasting.

Even if you have two arrays (as long as they have the same size), Numpy handles the operations for you:

In [3]:

```
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print('a =', a, ', b =', b)
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print('a ** b =', a ** b)
```

A vector is a quantity with magnitude (or length) and direction, expressed numerically as an ordered list of values according to a coordinate reference system.

For example, position, force, and torque are physical quantities defined by vectors.

For instance, consider the position of a point in space represented by a vector:

The position of the point (the vector) above can be represented as a tuple of values:

$$ (x,\: y,\: z) \; \Rightarrow \; (1, 3, 2) $$

or in matrix form:

$$ \begin{bmatrix} x \\y \\z \end{bmatrix} \;\; \Rightarrow \;\; \begin{bmatrix} 1 \\3 \\2 \end{bmatrix}$$

We can use the Numpy array to represent the components of vectors.

For instance, the vector above is expressed in Python as:

In [4]:

```
a = np.array([1, 3, 2])
print('a =', a)
```

This is exactly like the arrays in the last example for scalars, so all the operations we performed earlier would result in the same values.

However, as we are now dealing with vectors, some of those operations don't make sense: for vectors there is no multiplication, division, power, or square root in the element-wise way we calculated them.

A vector can also be represented as:

$$ \overrightarrow{\mathbf{a}} = a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}} $$

Where $\hat{\mathbf{i}},\, \hat{\mathbf{j}},\, \hat{\mathbf{k}}\,$ are unit vectors, each representing a direction and $ a_x\hat{\mathbf{i}},\: a_y\hat{\mathbf{j}},\: a_z\hat{\mathbf{k}} $ are the vector components of the vector $\overrightarrow{\mathbf{a}}$.

A unit vector (or versor) is a vector whose length (or norm) is 1.

The unit vector of a non-zero vector $\overrightarrow{\mathbf{a}}$ is the unit vector codirectional with $\overrightarrow{\mathbf{a}}$:

$$ \mathbf{\hat{u}} = \frac{\overrightarrow{\mathbf{a}}}{||\overrightarrow{\mathbf{a}}||} = \frac{a_x\,\hat{\mathbf{i}} + a_y\,\hat{\mathbf{j}} + a_z\, \hat{\mathbf{k}}}{\sqrt{a_x^2+a_y^2+a_z^2}} $$

The magnitude (length) of a vector is often represented by the symbol $||\;||$, also known as the norm (or Euclidean norm) of a vector and it is defined as:

$$ ||\overrightarrow{\mathbf{a}}|| = \sqrt{a_x^2+a_y^2+a_z^2} $$

The function `numpy.linalg.norm` calculates the norm:

In [5]:

```
a = np.array([1, 2, 3])
np.linalg.norm(a)
```

Out[5]:

Or we can use the definition and compute directly:

In [6]:

```
np.sqrt(np.sum(a*a))
```

Out[6]:

Then, the versor for the vector $ \overrightarrow{\mathbf{a}} = (1, 2, 3) $ is:

In [7]:

```
a = np.array([1, 2, 3])
u = a/np.linalg.norm(a)
print('u =', u)
```

And we can verify its magnitude is indeed 1:

In [8]:

```
np.linalg.norm(u)
```

Out[8]:

But the representation of a vector as a tuple of values is only valid for a vector with its origin coinciding with the origin $ (0, 0, 0) $ of the coordinate system we adopted.
For instance, consider the following vector:

Such a vector cannot be represented by $ (b_x, b_y, b_z) $ because this would be for the vector from the origin to the point B. To represent exactly this vector we need the two vectors $ \mathbf{a} $ and $ \mathbf{b} $. This fact is important when we perform some calculations in Mechanics.
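For instance, the vector from a point A to a point B can be computed as the difference of their position vectors. A minimal sketch, with made-up points not taken from the text:

```python
import numpy as np

# Hypothetical example: position vectors of two points A and B,
# both expressed relative to the origin of the coordinate system.
a = np.array([1, 3, 2])   # position of point A
b = np.array([4, 5, 6])   # position of point B

# The vector from A to B is the difference of the position vectors:
ab = b - a
print('vector from A to B =', ab)
```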

The addition of two vectors is another vector:

$$ \overrightarrow{\mathbf{a}} + \overrightarrow{\mathbf{b}} = (a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}) + (b_x\hat{\mathbf{i}} + b_y\hat{\mathbf{j}} + b_z\hat{\mathbf{k}}) = (a_x+b_x)\hat{\mathbf{i}} + (a_y+b_y)\hat{\mathbf{j}} + (a_z+b_z)\hat{\mathbf{k}} $$

The subtraction of two vectors is also another vector:

$$ \overrightarrow{\mathbf{a}} - \overrightarrow{\mathbf{b}} = (a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}) - (b_x\hat{\mathbf{i}} + b_y\hat{\mathbf{j}} + b_z\hat{\mathbf{k}}) = (a_x-b_x)\hat{\mathbf{i}} + (a_y-b_y)\hat{\mathbf{j}} + (a_z-b_z)\hat{\mathbf{k}} $$


Consider two 2D arrays (rows and columns) representing the position of two objects moving in space. The columns represent the vector components and the rows the values of the position vector in different instants.

Once again, it's easy to perform addition and subtraction with these vectors:

In [9]:

```
a = np.array([[1, 2, 3], [1, 1, 1]])
b = np.array([[4, 5, 6], [7, 8, 9]])
print('a =', a, '\nb =', b)
print('a + b =', a + b)
print('a - b =', a - b)
```

Numpy can handle an N-dimensional array, with size limited only by the available memory of your computer.

And we can perform operations on each vector, for example, calculate the norm of each one.

First let's check the shape of the variable `a` using the attribute `shape` or the function `numpy.shape`:

In [10]:

```
print(a.shape)
print(np.shape(a))
```

This means the variable `a` has 2 rows and 3 columns.

We have to tell the function `numpy.linalg.norm` to calculate the norm of each vector, i.e., to operate across the columns of the variable `a`, using the parameter `axis`:

In [11]:

```
np.linalg.norm(a, axis=1)
```

Out[11]:

The dot product (or scalar product or inner product) between two vectors is a mathematical operation algebraically defined as the sum of the products of the corresponding components (magnitudes in each direction) of the two vectors. The result of the dot product is a single number (a scalar).

The dot product between vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is:

$$ \overrightarrow{\mathbf{a}} \cdot \overrightarrow{\mathbf{b}} = (a_x\,\hat{\mathbf{i}}+a_y\,\hat{\mathbf{j}}+a_z\,\hat{\mathbf{k}}) \cdot (b_x\,\hat{\mathbf{i}}+b_y\,\hat{\mathbf{j}}+b_z\,\hat{\mathbf{k}}) = a_x b_x + a_y b_y + a_z b_z $$

Because by definition:

$$ \hat{\mathbf{i}} \cdot \hat{\mathbf{i}} = \hat{\mathbf{j}} \cdot \hat{\mathbf{j}} = \hat{\mathbf{k}} \cdot \hat{\mathbf{k}}= 1 \quad \text{and} \quad \hat{\mathbf{i}} \cdot \hat{\mathbf{j}} = \hat{\mathbf{i}} \cdot \hat{\mathbf{k}} = \hat{\mathbf{j}} \cdot \hat{\mathbf{k}} = 0 $$

The geometric equivalent of the dot product is the product of the magnitudes of the two vectors and the cosine of the angle between them:

$$ \overrightarrow{\mathbf{a}} \cdot \overrightarrow{\mathbf{b}} = ||\overrightarrow{\mathbf{a}}||\:||\overrightarrow{\mathbf{b}}||\:\cos(\theta) $$

Which is also equivalent to state that the dot product between two vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is the magnitude of $\overrightarrow{\mathbf{a}}$ times the magnitude of the component of $\overrightarrow{\mathbf{b}}$ parallel to $\overrightarrow{\mathbf{a}}$ (or the magnitude of $\overrightarrow{\mathbf{b}}$ times the magnitude of the component of $\overrightarrow{\mathbf{a}}$ parallel to $\overrightarrow{\mathbf{b}}$).
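The geometric definition can be inverted to find the angle between two vectors. A minimal sketch (the vectors below are arbitrary examples, not from the text):

```python
import numpy as np

a = np.array([1, 0, 0])
b = np.array([1, 1, 0])

# cos(theta) = (a . b) / (||a|| ||b||)
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(np.clip(cos_theta, -1, 1))  # clip guards against rounding error
print('angle between a and b =', np.degrees(theta), 'degrees')
```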

The dot product between two vectors can be visualized in this interactive animation:

In [12]:

```
from IPython.display import IFrame
IFrame('https://faraday.physics.utoronto.ca/PVB/Harrison/Flash/Vectors/DotProduct/DotProduct.html',
width='100%', height=400)
```

Out[12]:

The Numpy function for the dot product is `numpy.dot`:

In [13]:

```
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print('a =', a, '\nb =', b)
print('np.dot(a, b) =', np.dot(a, b))
```

Or we can use the definition and compute directly:

In [14]:

```
np.sum(a*b)
```

Out[14]:

For 2D arrays, the `numpy.dot` function performs matrix multiplication rather than the row-wise dot product, so let's use the `numpy.sum` function:

In [15]:

```
a = np.array([[1, 2, 3], [1, 1, 1]])
b = np.array([[4, 5, 6], [7, 8, 9]])
np.sum(a*b, axis=1)
```

Out[15]:

The cross product (or vector product) between two vectors is a mathematical operation in three-dimensional space which results in a vector perpendicular to both of the vectors being multiplied, with a length (norm) equal to the product of the magnitude of one vector and the magnitude of the component of the other vector perpendicular to it (which is equal to the area of the parallelogram that the vectors span).

The cross product between vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is:

$$ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} = (a_x\,\hat{\mathbf{i}} + a_y\,\hat{\mathbf{j}} + a_z\,\hat{\mathbf{k}}) \times (b_x\,\hat{\mathbf{i}}+b_y\,\hat{\mathbf{j}}+b_z\,\hat{\mathbf{k}}) = (a_yb_z-a_zb_y)\hat{\mathbf{i}} + (a_zb_x-a_xb_z)\hat{\mathbf{j}}+(a_xb_y-a_yb_x)\hat{\mathbf{k}} $$

Because by definition:

$$ \begin{array}{l l} \hat{\mathbf{i}} \times \hat{\mathbf{i}} = \hat{\mathbf{j}} \times \hat{\mathbf{j}} = \hat{\mathbf{k}} \times \hat{\mathbf{k}} = 0 \\ \hat{\mathbf{i}} \times \hat{\mathbf{j}} = \hat{\mathbf{k}}, \quad \hat{\mathbf{j}} \times \hat{\mathbf{k}} = \hat{\mathbf{i}}, \quad \hat{\mathbf{k}} \times \hat{\mathbf{i}} = \hat{\mathbf{j}} \\ \hat{\mathbf{j}} \times \hat{\mathbf{i}} = -\hat{\mathbf{k}}, \quad \hat{\mathbf{k}} \times \hat{\mathbf{j}}= -\hat{\mathbf{i}}, \quad \hat{\mathbf{i}} \times \hat{\mathbf{k}} = -\hat{\mathbf{j}} \end{array} $$
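These identities can be checked numerically with `numpy.cross`; a quick sanity check using the standard basis vectors (not part of the original text):

```python
import numpy as np

# Standard basis vectors i, j, k:
i = np.array([1, 0, 0])
j = np.array([0, 1, 0])
k = np.array([0, 0, 1])

print('i x j =', np.cross(i, j))   # equals k
print('j x k =', np.cross(j, k))   # equals i
print('k x i =', np.cross(k, i))   # equals j
print('j x i =', np.cross(j, i))   # equals -k
```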

The direction of the vector resulting from the cross product between the vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is given by the right-hand rule.

The magnitude of the cross product equals the product of the magnitudes of the two vectors and the sine of the angle between them:

$$ ||\overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}}|| = ||\overrightarrow{\mathbf{a}}||\:||\overrightarrow{\mathbf{b}}||\:\sin(\theta) $$

Which is also equivalent to saying that the magnitude of the cross product between two vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is the magnitude of $\overrightarrow{\mathbf{a}}$ times the magnitude of the component of $\overrightarrow{\mathbf{b}}$ perpendicular to $\overrightarrow{\mathbf{a}}$ (or the magnitude of $\overrightarrow{\mathbf{b}}$ times the magnitude of the component of $\overrightarrow{\mathbf{a}}$ perpendicular to $\overrightarrow{\mathbf{b}}$).

The definition above also implies that the magnitude of the cross product equals the area of the parallelogram spanned by the two vectors.
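A minimal numerical sketch of this property (the vectors are arbitrary examples, not from the text):

```python
import numpy as np

# Two vectors in the xy-plane spanning a parallelogram of base 2 and height 3:
a = np.array([2, 0, 0])
b = np.array([1, 3, 0])

# The area of the parallelogram is the norm of the cross product:
area = np.linalg.norm(np.cross(a, b))
print('area of the parallelogram =', area)
```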

The cross product can also be calculated as the determinant of a matrix:

$$ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} = \left| \begin{array}{ccc} \hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}} \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{array} \right| = a_y b_z \hat{\mathbf{i}} + a_z b_x \hat{\mathbf{j}} + a_x b_y \hat{\mathbf{k}} - a_y b_x \hat{\mathbf{k}}-a_z b_y \hat{\mathbf{i}} - a_x b_z \hat{\mathbf{j}} \\ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} = (a_yb_z-a_zb_y)\hat{\mathbf{i}} + (a_zb_x-a_xb_z)\hat{\mathbf{j}} + (a_xb_y-a_yb_x)\hat{\mathbf{k}} $$

The same result as before.

The cross product between two vectors can be visualized in this interactive animation:

In [16]:

```
IFrame('https://faraday.physics.utoronto.ca/PVB/Harrison/Flash/Vectors/CrossProduct/CrossProduct.html',
width='100%', height=400)
```

Out[16]:

The Numpy function for the cross product is `numpy.cross`:

In [17]:

```
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print('a =', a, '\nb =', b)
print('np.cross(a, b) =', np.cross(a, b))
```

For 2D arrays with vectors in different rows:

In [18]:

```
a = np.array([[1, 2, 3], [1, 1, 1]])
b = np.array([[4, 5, 6], [7, 8, 9]])
np.cross(a, b, axis=1)
```

Out[18]:

The Gram–Schmidt process is a method for orthonormalizing (converting into a set of orthogonal unit versors) a set of vectors using the scalar product. The Gram–Schmidt process works for any number of vectors.

For example, given three vectors, $\overrightarrow{\mathbf{a}}, \overrightarrow{\mathbf{b}}, \overrightarrow{\mathbf{c}}$, in the 3D space, a basis $\{\hat{e}_a, \hat{e}_b, \hat{e}_c\}$ can be found using the Gram–Schmidt process by:

The first versor is in the $\overrightarrow{\mathbf{a}}$ direction (or in the direction of any of the other vectors):

$$ \hat{e}_a = \frac{\overrightarrow{\mathbf{a}}}{||\overrightarrow{\mathbf{a}}||} $$

The second versor, orthogonal to $\hat{e}_a$, can be found considering we can express vector $\overrightarrow{\mathbf{b}}$ in terms of the $\hat{e}_a$ direction as:

$$ \overrightarrow{\mathbf{b}} = \overrightarrow{\mathbf{b}}^\| + \overrightarrow{\mathbf{b}}^\bot $$

Then:

$$ \overrightarrow{\mathbf{b}}^\bot = \overrightarrow{\mathbf{b}} - \overrightarrow{\mathbf{b}}^\| = \overrightarrow{\mathbf{b}} - (\overrightarrow{\mathbf{b}} \cdot \hat{e}_a ) \hat{e}_a $$

Finally:

$$ \hat{e}_b = \frac{\overrightarrow{\mathbf{b}}^\bot}{||\overrightarrow{\mathbf{b}}^\bot||} $$

The third versor, orthogonal to $\{\hat{e}_a, \hat{e}_b\}$, can be found expressing the vector $\overrightarrow{\mathbf{c}}$ in terms of the $\hat{e}_a$ and $\hat{e}_b$ directions as:

$$ \overrightarrow{\mathbf{c}} = \overrightarrow{\mathbf{c}}^\| + \overrightarrow{\mathbf{c}}^\bot $$

Then:

$$ \overrightarrow{\mathbf{c}}^\bot = \overrightarrow{\mathbf{c}} - \overrightarrow{\mathbf{c}}^\| $$

Where:

$$ \overrightarrow{\mathbf{c}}^\| = (\overrightarrow{\mathbf{c}} \cdot \hat{e}_a ) \hat{e}_a + (\overrightarrow{\mathbf{c}} \cdot \hat{e}_b ) \hat{e}_b $$

Finally:

$$ \hat{e}_c = \frac{\overrightarrow{\mathbf{c}}^\bot}{||\overrightarrow{\mathbf{c}}^\bot||} $$

Let's implement the Gram–Schmidt process in Python.

For example, consider the positions (vectors) $\overrightarrow{\mathbf{a}} = [1,2,0], \overrightarrow{\mathbf{b}} = [0,1,3], \overrightarrow{\mathbf{c}} = [1,0,1]$:

In [19]:

```
import numpy as np
a = np.array([1, 2, 0])
b = np.array([0, 1, 3])
c = np.array([1, 0, 1])
```

The first versor is:

In [20]:

```
ea = a/np.linalg.norm(a)
print(ea)
```

The second versor is:

In [21]:

```
eb = b - np.dot(b, ea)*ea
eb = eb/np.linalg.norm(eb)
print(eb)
```

And the third versor is:

In [22]:

```
ec = c - np.dot(c, ea)*ea - np.dot(c, eb)*eb
ec = ec/np.linalg.norm(ec)
print(ec)
```

Let's check the orthonormality between these versors:

In [23]:

```
print('Versors:', '\nea =', ea, '\neb =', eb, '\nec =', ec)
print('\nTest of orthogonality (scalar product between versors):',
'\nea . eb:', np.dot(ea, eb),
'\neb . ec:', np.dot(eb, ec),
'\nec . ea:', np.dot(ec, ea))
print('\nNorm of each versor:',
'\n||ea|| =', np.linalg.norm(ea),
'\n||eb|| =', np.linalg.norm(eb),
'\n||ec|| =', np.linalg.norm(ec))
```

Or, we can simply use the built-in QR factorization function from NumPy:

In [24]:

```
vectors = np.vstack((a,b,c)).T
Q, R = np.linalg.qr(vectors)
print(Q)
```

In [25]:

```
ea, eb, ec = Q[:, 0], Q[:, 1], Q[:, 2]
print('Versors:', '\nea =', ea, '\neb =', eb, '\nec =', ec)
print('\nTest of orthogonality (scalar product between versors):')
print(np.dot(Q.T, Q))
print('\nScalar product between pairs of versors:',
'\nea . eb:', np.dot(ea, eb),
'\neb . ec:', np.dot(eb, ec),
'\nec . ea:', np.dot(ec, ea))
print('\nNorm of each versor:',
'\n||ea|| =', np.linalg.norm(ea),
'\n||eb|| =', np.linalg.norm(eb),
'\n||ec|| =', np.linalg.norm(ec))
```

Which results in the same basis, except for possible changes of sign.
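If the same signs as the Gram–Schmidt versors are desired, the columns of `Q` can be flipped according to the signs of the diagonal of `R`. A minimal sketch of this adjustment (not part of the original notebook; it assumes the vectors are linearly independent so the diagonal of `R` is nonzero):

```python
import numpy as np

a = np.array([1, 2, 0])
b = np.array([0, 1, 3])
c = np.array([1, 0, 1])

vectors = np.vstack((a, b, c)).T
Q, R = np.linalg.qr(vectors)

# Flip each column of Q whose corresponding diagonal element of R is
# negative, so each versor points along its original vector:
signs = np.sign(np.diag(R))
Q = Q * signs
print(Q)
```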

Given the vectors, $\overrightarrow{\mathbf{a}}=[1, 0, 0]$ and $\overrightarrow{\mathbf{b}}=[1, 1, 1]$, calculate the dot and cross products between them.

Calculate the unit vectors for $[2, −2, 3]$ and $[3, −3, 2]$ and determine an orthogonal vector to these two vectors.

Given the vectors $\overrightarrow{\mathbf{a}}$=[1, 0, 0] and $\overrightarrow{\mathbf{b}}$=[1, 1, 1], calculate $ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} $ and verify that this vector is orthogonal to vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$. Also, calculate $\overrightarrow{\mathbf{b}} \times \overrightarrow{\mathbf{a}}$ and compare it with $\overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}}$.

Given the vectors $[1, 1, 0]; [1, 0, 1]; [0, 1, 1]$, calculate a basis using the Gram–Schmidt process.

Write a Python function to calculate a basis using the Gram–Schmidt process (implement the algorithm!), considering that the inputs are three variables where each one contains the coordinates of the vectors as columns and different positions of these vectors as rows. For example, sample variables can be generated with the command `np.random.randn(5, 3)`.

Study the sample problems **2.1** to **2.9**, **2.11** (using Python), **2.12**, **2.14**, **2.17**, **2.18** to **2.24** of Ruina and Rudra's book.

From Ruina and Rudra's book, solve the problems **2.1.1** to **2.3.16**.

If you are new to scalars and vectors, you should solve these problems first by hand and then use Python to check the answers.

- Ruina A, Rudra P (2015) Introduction to Statics and Dynamics. Oxford University Press.