Characterizing the optomotor response using PCA

Here we will work through an example of using PCA to characterize responses during the optomotor response, which is a behavioral response (swimming) induced by the presentation of a moving visual stimulus. The data are stored on S3 in a form that can be read into Thunder as a Series object. A Series object is a distributed collection of indexed records, each of which is a key-value pair, where the key is a label (in this case, the spatial coordinate), and the value is a 1D array (in this case, a time series).
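To make that record structure concrete, here is a minimal sketch of a single Series record in plain Python; the coordinate and the 394-sample length (matching the traces later in this example) are illustrative, not read from the dataset.

```python
import numpy as np

# One Series record: a key (spatial coordinate) paired with a 1D time series.
key = (120, 455, 7)              # illustrative (x, y, z) voxel coordinate
value = np.zeros(394)            # one fluorescence value per time point
record = (key, value)
```

A full Series is a distributed collection of many such records, so per-voxel operations map naturally over them.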

The data set is approximately 50 GB total as integers, and 200 GB when converted to floats.

Before starting, we will import the functions and classes we'll need, and set up plotting inside the notebook. We'll use the seaborn package for plotting, but this is entirely optional: if you cannot load seaborn, just ignore the commands involving sns and everything else will work as described.

In [9]:
from thunder import RegressionModel, PCA, Colorize
from numpy import amax
In [2]:
%matplotlib inline
In [23]:
import matplotlib.pyplot as plt
import seaborn as sns

Load and inspect the data

First we will load the data and do some basic operations to inspect it, like looking at a single time point, and looking at various summary statistics. To load the data, we use the loadExampleEC2 method of the ThunderContext, which is automatically created for us as tsc when we start Thunder.

In [11]:
data, params = tsc.loadExampleEC2('zebrafish-optomotor-response')

Compute the mean of each voxel and pack it into a local image on the driver. Check the shape (it's an xyz volume) and look at a maximum intensity projection.

In [87]:
img_mean = data.seriesMean().pack()
In [88]:
img_mean.shape
(1250, 1650, 15)
In [98]:
plt.imshow(amax(img_mean,2), cmap='gray', clim=(0,1500));

As another example, we can select a single time point, pack it into an image, and look at one of its planes.

In [90]:
img_single = data.select(lambda t: t == 0).pack()  # first time point; the exact selection call is assumed
In [97]:
plt.imshow(img_single[:,:,10], cmap='gray', clim=(0,1500));

Now compute the mean across voxels, which yields a time series. This is raw fluorescence; note the unnormalized values on the y-axis, and the drift in the signal over time.

In [85]:
y = data.mean()
In [86]:
plt.plot(y);
We can apply some simple filtering operations using methods available on a Series object. In this case, we'll perform linear detrending and normalize by subtracting and dividing by a percentile baseline. Note how these changes are reflected in the mean.

In [81]:
y = data.detrend().normalize().mean()
In [84]:
plt.plot(y);
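For intuition, the per-voxel filtering can be sketched in plain NumPy. This is an illustrative stand-in for Series.detrend().normalize(), assuming linear detrending that preserves the mean level and a 20th-percentile baseline; Thunder's exact defaults may differ.

```python
import numpy as np

def detrend_normalize(ts, pct=20):
    # Linear detrend (keeping the mean level), then normalize by a low-percentile
    # baseline. Thunder's exact semantics are assumed, not copied from the library.
    t = np.arange(len(ts), dtype=float)
    slope, intercept = np.polyfit(t, ts, 1)
    detrended = ts - (slope * t + intercept) + ts.mean()  # remove drift, keep level
    baseline = np.percentile(detrended, pct)              # low percentile as baseline
    return (detrended - baseline) / baseline              # df/f-style normalization

# a purely drifting trace flattens to zero after detrending and normalizing
trace = 100.0 + 0.5 * np.arange(394)
flat = detrend_normalize(trace)
```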

Trial-triggered averaging

The 394 time points of the traces above correspond to 9 repeated presentations of the stimulus. There are several ways to compute a trial-triggered average, but one simple method is through regression, where we build a design matrix such that regressing against this matrix computes the correct average. We loaded that matrix above in the variable params. First, we build a model from it.

In [29]:
model = RegressionModel.load(params, 'linear')
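To see why regression computes the correct average, here is a small sketch with made-up dimensions: a design matrix of within-trial indicator columns makes the least-squares coefficients equal the trial-triggered average.

```python
import numpy as np

n_trials, trial_len = 9, 20                    # illustrative, not the real stimulus timing
X = np.tile(np.eye(trial_len), (n_trials, 1))  # column j is 1 at within-trial position j

# fake voxel trace: a fixed within-trial response repeated over trials, plus noise
response = np.sin(np.linspace(0, np.pi, trial_len))
rng = np.random.default_rng(0)
y = np.tile(response, n_trials) + 0.01 * rng.standard_normal(n_trials * trial_len)

# least-squares fit: with this design, beta_j is the mean of y at position j
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the indicator columns are orthogonal, the fitted coefficients are exactly the per-position means across trials.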

Filter the data as we did above, and fit the model to the filtered data.

In [99]:
data_filtered = data.detrend().normalize()
params = model.fit(data_filtered)  # the returned Series is indexed by output name (structure assumed)
params.index
['betas', 'stats', 'resid']

We have three sets of parameters from the fit: the betas (or coefficients) for each voxel, corresponding to the triggered average, as well as an r2 statistic and the residuals. For now, we select the betas, because we'll perform PCA on those, and we cache them to speed up subsequent operations. Caching these, rather than the raw data (which we otherwise no longer need), avoids storing the full data set in RAM.

In [ ]:
betas = params.select('betas').cache()  # select the betas and cache them (exact call assumed)


Perform PCA on the trial-triggered responses, and plot the resulting components, which are basis functions in time.

In [32]:
pca = PCA(k=3).fit(betas)
In [83]:
sns.set_palette("husl", 3)
plt.plot(pca.comps.T);  # comps holds the temporal basis functions

PCA also recovers basis functions in space. These are stored in the attribute scores, which we can pack into an image just as we did earlier with the mean. Check the dimensions: they should be (k, x, y, z), where k is the number of components, because we get a full volume for each component.

In [35]:
imgs = pca.scores.pack()
In [37]:
imgs.shape
(3, 1250, 1650, 15)
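For intuition, what PCA computes here can be sketched with a plain SVD on a made-up matrix; the names mirror Thunder's comps (temporal basis functions) and scores (per-voxel projections), but the exact scaling conventions are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_timepoints, k = 500, 20, 3        # illustrative sizes

mat = rng.standard_normal((n_voxels, n_timepoints))
centered = mat - mat.mean(axis=0)              # subtract the mean across voxels
u, s, vt = np.linalg.svd(centered, full_matrices=False)

comps = vt[:k]                                 # k temporal basis functions (k x time)
scores = centered @ comps.T                    # per-voxel projections (voxels x k)

# the rank-k reconstruction from scores and comps is the best k-component
# approximation of the centered data
approx = scores @ comps
```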

For visualization, it's useful to convert the scores from the first two principal components into a polar angle space, where color describes the relative projection onto the two components, and brightness indicates response strength. We can perform this conversion, as well as several other conversions from numerical data into colors, using the Colorize function.

In [40]:
maps = Colorize("polar", scale=1000).images(imgs[0:2])
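Roughly, a polar conversion of this kind maps the angle between the two component scores to hue and their magnitude to brightness. This sketch is an illustrative approximation, not Thunder's Colorize implementation.

```python
import numpy as np
import colorsys

def polar_color(s1, s2, scale=1000.0):
    # Hue from the angle between the two scores, brightness from their magnitude.
    angle = (np.arctan2(s2, s1) + np.pi) / (2 * np.pi)  # map angle into [0, 1]
    mag = np.clip(np.hypot(s1, s2) / scale, 0.0, 1.0)   # clip magnitude to [0, 1]
    return colorsys.hsv_to_rgb(angle, 1.0, mag)
```

Voxels with no projection onto either component come out black, while strongly responding voxels get a saturated color determined by which component dominates.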

Look at the resulting maps from a couple of single planes, as well as a maximum intensity projection.

In [45]:
plt.imshow(maps[:, :, 5]);  # a single plane (plane index and array layout assumed)
plt.imshow(amax(maps, 2));  # maximum intensity projection across z