Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2017 L.A. Barba, N.C. Clementi

Catch things in motion

This module of the Engineering Computations course is our launching pad to investigate change, motion, and dynamics, using computational thinking, Python, and Jupyter.

The foundation of physics and engineering is the subject of mechanics: how things move around, when pushed around. Or pulled… At the beginning of the history of mechanics, Galileo and Newton sought to understand how and why objects fall under the pull of gravity.

This first lesson will explore motion by analyzing images and video, to learn about velocity and acceleration.

Acceleration of a falling ball

Let's start at the beginning. Suppose you want to use video capture of a falling ball to compute the acceleration of gravity. Could you do it? With Python, of course you can!

Here is a neat video we found online, produced over at MIT several years ago [1]. It shows a ball being dropped in front of a metered panel, while lit by a stroboscopic light. Watch the video!

In [1]:
from IPython.display import YouTubeVideo
vid = YouTubeVideo("xQ4znShlK5A")
vid

We learn from the video that the marks on the panel are every $0.25\rm{m}$, and on the website they say that the strobe light flashes at about 15 Hz (that's 15 times per second). The final image on Flickr, however, notes that the strobe fired 16.8 times per second. So we have some uncertainty already!

Luckily, the MIT team obtained one frame with the ball visible at several positions as it falls. This, thanks to the strobe light and a long-enough exposure of that frame. What we'd like to do is use that frame to capture the ball positions digitally, and then obtain the velocity and acceleration from the distance over time.

You can find several toolkits for handling images and video with Python; we'll start with a simple one called imageio. Import this library like any other, and let's load numpy and pyplot while we're at it.

In [2]:
import imageio
import numpy
from matplotlib import pyplot

Read the video

With the get_reader() method of imageio, you can read a video from its source into a Reader object. You don't need to worry too much about the technicalities here—we'll walk you through it all—but check the type, the length (for a video, that's number of frames), and notice you can get info, like the frames-per-second, using get_meta_data().

In [3]:
reader = imageio.get_reader('')
In [4]:
fps = reader.get_meta_data()['fps']

You may get this error after calling get_reader():

NeedDownloadError: Need ffmpeg exe. You can obtain it with either:

  • install using conda: conda install ffmpeg -c conda-forge
  • download by calling: imageio.plugins.ffmpeg.download()
If you do, follow the tips to install the needed ffmpeg tool.

Show a video frame in an interactive figure

With imageio, you can grab one frame of the video, and then use pyplot to show it as an image. But we want to interact with the image, somehow.

So far in this course, we have used the command %matplotlib inline to get our plots rendered inline in a Jupyter notebook. There is an alternative command that gives you some interactivity on the figures: %matplotlib notebook. Execute this now, and you'll see what it does below, when you show the image in a new figure.

Let's also set some font parameters for our plots in this notebook.

In [7]:
%matplotlib notebook

#Import rcParams to set font styles
from matplotlib import rcParams

#Set font style and size 
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16

Now we can use the get_data() method on the imageio Reader object, to grab one of the video frames, passing the frame number. Below, we use it to grab frame number 1100, and then print the shape attribute to see that it's an "array-like" object with three dimensions: they are the pixel numbers in the horizontal and vertical directions, and the number of colors (3 colors in RGB format). Check the type to see that it's an imageio Image object.

In [8]:
image = reader.get_data(1100)
image.shape
(480, 640, 3)
In [9]:
type(image)
imageio.core.util.Image

Naturally, imageio plays well with pyplot. You can use pyplot.imshow() to show the image in a figure. We chose to show frame 1100 after playing around a bit and finding that it gives a good view of the long-exposure image of the falling ball.


Check out the neat interactive options that we get with %matplotlib notebook. Then go back and change the frame number above, and show it below. Notice that you can see the $(x,y)$ coordinates of your cursor tip while you hover on the image with the mouse.

In [10]:
pyplot.imshow(image, interpolation='nearest');

Capture mouse clicks on the frame

Okay! Here is where things get really interesting. Matplotlib has the ability to create event connections, that is, connect the figure canvas to user-interface events on it, like mouse clicks.

To use this ability, you write a function with the events you want to capture, and then connect this function to the Matplotlib "event manager" using mpl_connect(). In this case, we connect the 'button_press_event' to the function named onclick(), which captures the $(x,y)$ coordinates of the mouse click on the figure. Magic!

In [11]:
fig = pyplot.figure()

pyplot.imshow(image, interpolation='nearest')

coords = []
def onclick(event):
    '''Capture the x,y coordinates of a mouse click on the image'''
    ix, iy = event.xdata, event.ydata
    coords.append([ix, iy]) 

connectId = fig.canvas.mpl_connect('button_press_event', onclick)

Notice that in the code cell above, we created an empty list named coords, and inside the onclick() function, we are appending to it the $(x,y)$ coordinates of each mouse click on the figure. After executing the cell above, you have a connection to the figure, via the user interface: try clicking with your mouse on the endpoints of the white lines of the metered panel (click on the edge of the panel to get approximately equal $x$ coordinates), then print the contents of the coords list below.

In [12]:
coords
[[270.77840909090912, 53.306818181818073],
 [270.77840909090912, 107.85227272727263],
 [272.07711038961043, 163.6964285714285],
 [272.07711038961043, 219.54058441558436],
 [272.07711038961043, 274.08603896103892],
 [272.07711038961043, 328.63149350649348],
 [273.37581168831173, 383.17694805194799],
 [274.67451298701303, 435.125]]

The $x$ coordinates are pretty close, but there is some variation due to our shaky hand (or bad eyesight), and perhaps because the metered panel is not perfectly vertical. We can cast the coords list to a NumPy array, then grab all the first elements of the coordinate pairs, then get the standard deviation as an indication of our error in the mouse-click captures.

In [13]:
x_lines = numpy.array(coords)[:,0]
x_lines
array([ 270.77840909,  270.77840909,  272.07711039,  272.07711039,
        272.07711039,  272.07711039,  273.37581169,  274.67451299])
In [14]:
x_lines.std()

Depending how shaky your hand was, you may get a different value, but we got a standard deviation of about one pixel. Pretty good!

Now, let's grab all the second elements of the coordinate pairs, corresponding to the $y$ coordinates, i.e., the vertical positions of the white lines on the video frame.

In [15]:
y_lines = numpy.array(coords)[:,1]
y_lines
array([  53.30681818,  107.85227273,  163.69642857,  219.54058442,
        274.08603896,  328.63149351,  383.17694805,  435.125     ])

Looking ahead, what we'll do is repeat the process of capturing mouse clicks on the image, but clicking on the ball positions. Then, we will want to have the vertical positions converted to physical length (in meters), from the pixel numbers on the image.

You can get the scaling from pixels to meters via the distance between two white lines on the metered panel, which we know is $0.25\rm{m}$.

Let's get the average vertical distance between two white lines, which, with $N$ captured line positions (and thus $N-1$ gaps), we can calculate as:

\begin{equation} \overline{\Delta y} = \frac{1}{N-1}\sum_{i=0}^{N-2} \left(y_{i+1}-y_i\right) \end{equation}

In [16]:
gap_lines = y_lines[1:] - y_lines[0:-1]
Discuss with your neighbor
  • Why did we slice the y_lines array like that? If you can't explain it, write out the first few terms of the sum above and think!
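If you want to check your reasoning, here is a tiny demonstration of that slicing pattern on a made-up array (the values below are illustrative, not from our mouse clicks): dropping the first element lines up each entry with its predecessor, so the subtraction gives every consecutive difference at once.

```python
import numpy

# A small made-up array of positions (illustrative values only)
y = numpy.array([10.0, 25.0, 41.0, 58.0])

# y[1:] drops the first element, y[0:-1] drops the last:
# pairing them up gives y[i+1] - y[i] for each consecutive pair
gaps = y[1:] - y[0:-1]
print(gaps)         # → [15. 16. 17.]
print(gaps.mean())  # → 16.0
```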

Compute the acceleration of gravity

We're making good progress! You'll repeat the process of showing the image on an interactive figure, and capturing the mouse clicks on the figure canvas: but this time, you'll click on the ball positions.

Using the vertical displacements of the ball, $\Delta y_i$, and the known time between two flashes of the strobe light, $1/16.8\rm{s}$, you can get the velocity and acceleration of the ball! But first, to convert the vertical displacements to meters, you'll multiply by $0.25\rm{m}$ and divide by gap_lines.mean().

Before clicking on the ball positions, you may want to inspect the high-resolution final photograph on Flickr—notice that the first faint image of the falling ball is just "touching" the ring finger of Bill's hand. We decided not to use that photograph in our lesson because the Flickr post says "All rights reserved", while the video says specifically that it is licensed under a Creative Commons license. In other words, MIT has granted permission to use the video, but not the photograph. Sigh.

OK. Go for it: capture the clicks on the ball!

In [17]:
fig = pyplot.figure()

pyplot.imshow(image, interpolation='nearest')

coords = []
def onclick(event):
    '''Capture the x,y coordinates of a mouse click on the image'''
    ix, iy = event.xdata, event.ydata
    coords.append([ix, iy]) 

connectId = fig.canvas.mpl_connect('button_press_event', onclick)
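Once you have your clicks on the ball positions, the remaining computation can be sketched as follows. Since your click values will differ from ours, this sketch uses synthetic pixel positions generated from the free-fall formula at the strobe rate; the strobe interval $1/16.8\rm{s}$ and the $0.25\rm{m}$ line spacing come from the lesson, while the pixels-per-meter scale here is a made-up stand-in for what you'd get from `gap_lines.mean()`.

```python
import numpy

# Assumed values from the lesson: strobe interval and line spacing
dt = 1 / 16.8          # seconds between strobe flashes
line_spacing = 0.25    # meters between white lines on the panel

# Synthetic stand-in for the ball-click data: free fall sampled at the
# strobe rate, converted to pixels with a made-up scale of 218 px/m
g_true = 9.81
px_per_m = 218.0
t = numpy.arange(8) * dt
y_pixels = 0.5 * g_true * t**2 * px_per_m

# Stand-in for gap_lines.mean(): pixels between two white lines
gap_px = px_per_m * line_spacing

# Convert pixel positions to meters, as described in the lesson:
# multiply by 0.25 m and divide by the mean pixel gap between lines
y_meters = y_pixels * line_spacing / gap_px

# Velocity: forward differences of position over the strobe interval
v = (y_meters[1:] - y_meters[:-1]) / dt

# Acceleration: differences of velocity over the strobe interval
a = (v[1:] - v[:-1]) / dt

print(a.mean())   # should come out close to 9.81 m/s²
```

With real clicks, shaky-hand noise in the positions gets amplified by the two rounds of differencing, so expect your acceleration estimate to scatter around the true value rather than match it exactly.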