In this lesson, we put all the pieces together to build a fully functional learning machine, taking advantage of the various functions and classes we prepared in previous lessons.
The basic workflow of the final procedure is as follows:
Load raw stimulus --> Generate and save visual features --> Fit sparse linear model.
We shall take this one piece at a time. After going through a simple and fast prototypical implementation of this encoder, we will put forward a series of exercises that constitute the bulk of the work to be done here.
import numpy as np
import math
import tables
import gaborfil
import models
import dataclass
import algorithms
# Establish connection with the file objects.
h5_X = tables.open_file("data/vim-2/stimulus_ds.h5", mode="r")
print(h5_X)
data/vim-2/stimulus_ds.h5 (File) 'vim-2: stimulus'
Last modif.: 'Tue Mar 27 21:14:47 2018'
Object Tree:
/ (RootGroup) 'vim-2: stimulus'
/test (Array(64, 64, 3, 8100)) 'Testing data'
/train (Array(64, 64, 3, 108000)) 'Training data'
# Set up the parameters that specify the first filter bank.
PIX_W = 64
PIX_H = 64
max_cycles = 32 # the maximum cycles per image.
myparas = {"freqs": max_cycles/max(PIX_W,PIX_H),
"dir": 0,
"amp": 0.1,
"sdev": max(PIX_W,PIX_H)/20,
"phase": 0}
mygrid_h = 4
mygrid_w = 4
# Construct features using the specified filter bank (TRAINING).
X_tr = gaborfil.G2_getfeatures(ims=h5_X.root.train.read(),
fil_paras=myparas,
gridshape=(mygrid_h, mygrid_w),
mode="reflect", cval=0, verbose=True)
print(X_tr.shape)
Images processed so far: 0
...
Images processed so far: 105840
(108000, 16)
# Construct features using the specified filter bank (TESTING).
X_te = gaborfil.G2_getfeatures(ims=h5_X.root.test.read(),
fil_paras=myparas,
gridshape=(mygrid_h, mygrid_w),
mode="reflect", cval=0, verbose=True)
print(X_te.shape)
Images processed so far: 0
...
Images processed so far: 7938
(8100, 16)
Note that the above features only cover one specific filter setting, namely the one with orientation (dir) set to 0 degrees. Linking up again with Nishimoto et al. (2011), the filter bank we are using here corresponds to their "static" model (i.e., motion information is not considered). The filter orientations they used were "0, 45, 90 and 135 degrees" (quoting their appendix). We have covered the 0-degree case, so let's handle the rest.
todo_dir = math.pi * np.array([1, 2, 3]) / 4
for mydir in todo_dir:
    print("Adding features using dir =", mydir*(360/(2*math.pi)), "degrees")
    myparas["dir"] = mydir
    tmp_X = gaborfil.G2_getfeatures(ims=h5_X.root.train.read(),
                                    fil_paras=myparas,
                                    gridshape=(mygrid_h, mygrid_w),
                                    mode="reflect", cval=0, verbose=False)
    X_tr = np.concatenate((X_tr, tmp_X), axis=1)
    tmp_X = gaborfil.G2_getfeatures(ims=h5_X.root.test.read(),
                                    fil_paras=myparas,
                                    gridshape=(mygrid_h, mygrid_w),
                                    mode="reflect", cval=0, verbose=False)
    X_te = np.concatenate((X_te, tmp_X), axis=1)
Adding features using dir = 45.0 degrees
Adding features using dir = 90.0 degrees
Adding features using dir = 135.0 degrees
# All finished with this file, so close it.
h5_X.close()
print(h5_X)
<closed File>
Next, let's do some temporal down-scaling. The responses only have one observation per second, while there are 15 frames of stimulus per second. Let's take averages over disjoint 15-frame windows.
framerate = 15
n_tr = X_tr.shape[0]//framerate
n_te = X_te.shape[0]//framerate
print(n_tr)
print(n_te)
7200
540
# Training data.
tmp_X = np.zeros((n_tr, X_tr.shape[1]), dtype=X_tr.dtype)
idx = np.arange(framerate)
for i in range(n_tr):
    tmp_X[i,:] = np.mean(X_tr[idx,:], axis=0)
    idx += framerate
X_tr = tmp_X
print("X_tr shape after down-sampling:", X_tr.shape)

# Testing data.
tmp_X = np.zeros((n_te, X_te.shape[1]), dtype=X_te.dtype)
idx = np.arange(framerate)
for i in range(n_te):
    tmp_X[i,:] = np.mean(X_te[idx,:], axis=0)
    idx += framerate
X_te = tmp_X
print("X_te shape after down-sampling:", X_te.shape)
X_tr shape after down-sampling: (7200, 64)
X_te shape after down-sampling: (540, 64)
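As an aside, because the number of frames is an exact multiple of the frame rate here, the same disjoint-window averaging can be written without an explicit loop. The following is only a minimal sketch of that equivalent vectorized form; X_full and n_win are stand-ins for a full-rate feature array and its window count, not variables defined above.

# Vectorized alternative to the per-window averaging loops above (sketch only).
# X_full: full-rate features of shape (n_win*framerate, d), e.g. X_tr before the loop.
n_win = X_full.shape[0] // framerate
X_ds = X_full.reshape(n_win, framerate, X_full.shape[1]).mean(axis=1)
# X_ds now has shape (n_win, d), matching the output of the loop above.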
Again following Nishimoto et al. (2011), let us compute the so-called Z-score of the feature values. This is simply

\begin{align}
z = \frac{x - \bar{x}}{\sqrt{\widehat{v}}},
\end{align}

where $\bar{x}$ is the empirical mean, and $\widehat{v}$ is the empirical variance.
# Z-scores
Z_tr = X_tr - np.mean(X_tr, axis=0)
Z_tr = Z_tr / np.std(Z_tr, axis=0)
#print("Mean =", np.mean(Z_tr, axis=0), "StdDev =", np.std(Z_tr, axis=0))
Z_te = X_te - np.mean(X_te, axis=0)
Z_te = Z_te / np.std(Z_te, axis=0)
#print("Mean =", np.mean(Z_te, axis=0), "StdDev =", np.std(Z_te, axis=0))
In addition, a hard truncation of outliers is carried out (anything beyond three standard deviations from the mean is truncated). This is done separately for training and testing data, to ensure the learner does not gain unfair oracle information.
# Truncation of outliers.
thres = 3
for j in range(X_tr.shape[1]):
    stdval = np.std(Z_tr[:,j])
    Z_tr[:,j] = np.clip(Z_tr[:,j], a_min=(-thres*stdval), a_max=thres*stdval)
    stdval = np.std(Z_te[:,j])
    Z_te[:,j] = np.clip(Z_te[:,j], a_min=(-thres*stdval), a_max=thres*stdval)
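Note that immediately after Z-scoring, every column of Z_tr and Z_te has unit standard deviation, so the per-column loop above amounts to clipping each array at plus or minus thres. A minimal equivalent shortcut (not needed if you run the loop above):

# Equivalent shortcut: after Z-scoring, each column's standard deviation is 1,
# so thres*stdval reduces to thres.
Z_tr = np.clip(Z_tr, a_min=-thres, a_max=thres)
Z_te = np.clip(Z_te, a_min=-thres, a_max=thres)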
Let's make a new hierarchical data file, features.h5, to store the features that shall be used for both training and evaluation.
# Open file connection, writing new file to disk.
myh5 = tables.open_file("data/vim-2/features.h5",
mode="w",
title="Features from vim-2 stimulus, via 2D Gabor filter bank")
print(myh5)
data/vim-2/features.h5 (File) 'Features from vim-2 stimulus, via 2D Gabor filter bank'
Last modif.: 'Sat Apr 7 17:14:23 2018'
Object Tree:
/ (RootGroup) 'Features from vim-2 stimulus, via 2D Gabor filter bank'
# Add arrays.
myh5.create_array(where=myh5.root, name="train", obj=Z_tr, title="Training data")
print(myh5)
myh5.create_array(where=myh5.root, name="test", obj=Z_te, title="Testing data")
print(myh5)
data/vim-2/features.h5 (File) 'Features from vim-2 stimulus, via 2D Gabor filter bank'
Last modif.: 'Sat Apr 7 17:14:34 2018'
Object Tree:
/ (RootGroup) 'Features from vim-2 stimulus, via 2D Gabor filter bank'
/train (Array(7200, 64)) 'Training data'

data/vim-2/features.h5 (File) 'Features from vim-2 stimulus, via 2D Gabor filter bank'
Last modif.: 'Sat Apr 7 17:14:34 2018'
Object Tree:
/ (RootGroup) 'Features from vim-2 stimulus, via 2D Gabor filter bank'
/test (Array(540, 64)) 'Testing data'
/train (Array(7200, 64)) 'Training data'
# Close the file connection.
myh5.close()
print(myh5)
<closed File>
Assuming the above has all been run once, we can restart the kernel and continue from here.
We shall run the Algo_LASSO_CD routine implemented in a previous lesson, over a grid of $\lambda$ parameters controlling the impact of the $\ell_{1}$ norm constraint. As for performance metrics, citing the Nishimoto et al. (2011) work, from which this data set was born:
"Prediction accuracy was defined as the correlation between predicted and observed BOLD signals. The averaged accuracy across subjects and voxels in early visual areas (V1, V2, V3, V3A, and V3B) was 0.24, 0.39, and 0.40 for the static, nondirectional, and directional encoding models, respectively."
Every element of our implementation is less sophisticated than theirs, from the filter bank used to create the inputs to the learning procedure used to set parameter values, so their reported accuracy should be considered an upper bound on what we can hope to achieve in this pedagogical exercise. In particular, since we are only using two-dimensional Gabor filters, our encoding model corresponds to a simplified version of their "static" encoding model (which achieved an average accuracy of 0.24).
After running our algorithm, we get as output an estimate $\widehat{w}$, called w_est. Given a new collection of features $X$ (this will be X_te) and a response $y$ (this will be y_te), the goal is for $\widehat{y} = X\widehat{w}$ to satisfy $\widehat{y} \approx y$. Performance can then be evaluated using the correlation coefficient, implemented in scipy.stats.pearsonr, and the root mean squared error (RMSE), defined

\begin{align}
\mathrm{RMSE}(\widehat{y}) = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( \widehat{y}_{i} - y_{i} \right)^{2}},
\end{align}

and implemented in mod.eval as a method of the model object, where $m$ represents the number of samples in the test data (here $m=540$).
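To make the evaluation step concrete, here is a minimal sketch of how these two metrics could be computed directly from an estimate w_est; it is an illustration of the definitions above, not the actual implementation behind gaborfil.corr or mod.eval.

import numpy as np
from scipy.stats import pearsonr

def eval_metrics(w_est, X_te, y_te):
    # Predicted responses for the test inputs.
    y_hat = X_te.dot(w_est).flatten()
    y_obs = y_te.flatten()
    # Correlation between predicted and observed responses; pearsonr returns (r, p-value).
    corr, _ = pearsonr(y_hat, y_obs)
    # Root mean squared error over the m test samples.
    rmse = np.sqrt(np.mean((y_hat - y_obs)**2))
    return corr, rmse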
import numpy as np
import math
import tables
import gaborfil
import models
import dataclass
import algorithms
# Open file connections with data to be used in learning and evaluation.
h5_X = tables.open_file("data/vim-2/features.h5", mode="r")
print(h5_X)
h5_y = tables.open_file("data/vim-2/response.h5", mode="r")
print(h5_y)
data/vim-2/features.h5 (File) 'Features from vim-2 stimulus, via 2D Gabor filter bank'
Last modif.: 'Sat Apr 7 17:14:42 2018'
Object Tree:
/ (RootGroup) 'Features from vim-2 stimulus, via 2D Gabor filter bank'
/test (Array(540, 64)) 'Testing data'
/train (Array(7200, 64)) 'Training data'

data/vim-2/response.h5 (File) 'vim-2: BOLD responses'
Last modif.: 'Mon Apr 9 09:14:08 2018'
Object Tree:
/ (RootGroup) 'vim-2: BOLD responses'
/sub1 (Group) 'Data for subject 1'
/sub2 (Group) 'Data for subject 2'
/sub3 (Group) 'Data for subject 3'
/sub3/idx (Group) 'ROI-specific voxel indices'
/sub3/idx/v1lh (Array(653,)) ''
/sub3/idx/v1rh (Array(713,)) ''
/sub3/idx/v2lh (Array(735,)) ''
/sub3/idx/v2rh (Array(642,)) ''
/sub3/idx/v3alh (Array(164,)) ''
/sub3/idx/v3arh (Array(118,)) ''
/sub3/idx/v3blh (Array(88,)) ''
/sub3/idx/v3brh (Array(138,)) ''
/sub3/idx/v3lh (Array(504,)) ''
/sub3/idx/v3rh (Array(627,)) ''
/sub3/resp (Group) 'Response arrays'
/sub3/resp/test (Array(4381, 540)) 'Testing data'
/sub3/resp/train (Array(4381, 7200)) 'Training data'
/sub2/idx (Group) 'ROI-specific voxel indices'
/sub2/idx/v1lh (Array(470,)) ''
/sub2/idx/v1rh (Array(573,)) ''
/sub2/idx/v2lh (Array(733,)) ''
/sub2/idx/v2rh (Array(926,)) ''
/sub2/idx/v3alh (Array(135,)) ''
/sub2/idx/v3arh (Array(202,)) ''
/sub2/idx/v3blh (Array(83,)) ''
/sub2/idx/v3brh (Array(140,)) ''
/sub2/idx/v3lh (Array(714,)) ''
/sub2/idx/v3rh (Array(646,)) ''
/sub2/resp (Group) 'Response arrays'
/sub2/resp/test (Array(4622, 540)) 'Testing data'
/sub2/resp/train (Array(4622, 7200)) 'Training data'
/sub1/idx (Group) 'ROI-specific voxel indices'
/sub1/idx/v1lh (Array(490,)) ''
/sub1/idx/v1rh (Array(504,)) ''
/sub1/idx/v2lh (Array(715,)) ''
/sub1/idx/v2rh (Array(762,)) ''
/sub1/idx/v3alh (Array(92,)) ''
/sub1/idx/v3arh (Array(160,)) ''
/sub1/idx/v3blh (Array(104,)) ''
/sub1/idx/v3brh (Array(152,)) ''
/sub1/idx/v3lh (Array(581,)) ''
/sub1/idx/v3rh (Array(560,)) ''
/sub1/resp (Group) 'Response arrays'
/sub1/resp/test (Array(4120, 540)) 'Testing data'
/sub1/resp/train (Array(4120, 7200)) 'Training data'
# Subject ID.
subid = 1
y_node = h5_y.get_node(h5_y.root, "sub"+str(subid))
# Number of voxels.
num_voxels = y_node.resp.train.nrows
# Set up the model and data objects.
mod = models.LinearL1()
data = dataclass.DataSet()
data.X_tr = h5_X.root.train.read()
data.X_te = h5_X.root.test.read()
# Basic info.
n = data.X_tr.shape[0]
d = data.X_tr.shape[1]
# Dictionaries of performance over all voxels.
dict_corr_tr = {}
dict_corr_te = {}
dict_l0norm = {}
Now for the long routine: run the full learning procedure for each individual voxel (using random initial values each time).
for voxidx in range(num_voxels):

    # Set up the responses.
    data.y_tr = np.transpose(np.take(a=y_node.resp.train.read(),
                                     indices=[voxidx],
                                     axis=0))
    data.y_te = np.transpose(np.take(a=y_node.resp.test.read(),
                                     indices=[voxidx],
                                     axis=0))

    # Set up for a loop over trials and lambda values.
    todo_lambda = np.logspace(start=math.log10(1/n), stop=math.log10(2.5), num=50)
    num_loops = 15
    t_max = num_loops * d

    # Storage for performance metrics.
    corr_tr = np.zeros(todo_lambda.size, dtype=np.float32)
    corr_te = np.zeros(todo_lambda.size, dtype=np.float32)
    l0norm = np.zeros(todo_lambda.size, dtype=np.uint32)

    # Initialize and run learning algorithm.
    w_init = 1*np.random.uniform(size=(d,1))

    for l in range(todo_lambda.size):

        lamval = todo_lambda[l]

        if (voxidx % 100 == 0) and (l == 0):
            print("Voxel:", voxidx+1, "of", num_voxels)
        #print("Lambda value =", lamval, "(", l, "of", todo_lambda.size, ")")

        # Use warm starts when available.
        if l > 0:
            w_init = al.w

        al = algorithms.Algo_CDL1(w_init=w_init, t_max=t_max, lamreg=lamval)

        # Iterate the learning algorithm.
        for onestep in al:
            al.update(model=mod, data=data)

        # Record performance based on final output.
        corr_tr[l] = gaborfil.corr(w=al.w, X=data.X_tr, y=data.y_tr)
        corr_te[l] = gaborfil.corr(w=al.w, X=data.X_te, y=data.y_te)
        l0norm[l] = np.nonzero(al.w)[0].size

    # Save the performance for this voxel.
    dict_corr_tr[voxidx] = corr_tr
    dict_corr_te[voxidx] = corr_te
    dict_l0norm[voxidx] = l0norm
Voxel: 1 of 4120
...
Voxel: 4101 of 4120
Save results to disk.
import pickle

# Method name.
mthname = "DefaultMth"

# Lambda values used.
fname = "results/"+mthname+"sub"+str(subid)+".lam"
with open(fname, mode="bw") as fbin:
    pickle.dump(todo_lambda, fbin)

# Correlation over lambda values.
fname = "results/"+mthname+"sub"+str(subid)+".corrtr"
with open(fname, mode="bw") as fbin:
    pickle.dump(dict_corr_tr, fbin)

fname = "results/"+mthname+"sub"+str(subid)+".corrte"
with open(fname, mode="bw") as fbin:
    pickle.dump(dict_corr_te, fbin)

# Sparsity over lambda values.
fname = "results/"+mthname+"sub"+str(subid)+".l0norm"
with open(fname, mode="bw") as fbin:
    pickle.dump(dict_l0norm, fbin)
Feel free to restart the kernel here for evaluation, since the core results should all be saved to disk at this point. First let's load the results.
# Preparation.
import math
import numpy as np
import pickle
import matplotlib
import matplotlib.pyplot as plt
import tables

# Method name.
mthname = "DefaultMth"

# Subject ID.
subid = 1

# Lambda values used.
fname = "results/"+mthname+"sub"+str(subid)+".lam"
with open(fname, mode="br") as fbin:
    todo_lambda = pickle.load(fbin)

# Correlation over lambda values, on the training and test data.
fname = "results/"+mthname+"sub"+str(subid)+".corrtr"
with open(fname, mode="br") as fbin:
    dict_corr_tr = pickle.load(fbin)

fname = "results/"+mthname+"sub"+str(subid)+".corrte"
with open(fname, mode="br") as fbin:
    dict_corr_te = pickle.load(fbin)

# Sparsity over lambda values.
fname = "results/"+mthname+"sub"+str(subid)+".l0norm"
with open(fname, mode="br") as fbin:
    dict_l0norm = pickle.load(fbin)
First, some visualization of performance for each voxel.
# Single-voxel performance evaluation.
voxidx = 3
print("Results: subject", subid, ", voxel id", voxidx)
myfig = plt.figure(figsize=(14,7))
ax_corr = myfig.add_subplot(1, 2, 1)
plt.title("Correlation coefficient")
plt.xlabel("Lambda values")
ax_corr.set_xscale('log')
ax_corr.plot(todo_lambda, dict_corr_tr[voxidx], label="train", color="red")
ax_corr.plot(todo_lambda, dict_corr_te[voxidx], label="test", color="pink")
ax_corr.legend(loc=1,ncol=1)
ax_spar = myfig.add_subplot(1, 2, 2)
plt.title("Sparsity via l0-norm")
plt.xlabel("Lambda values")
ax_spar.set_xscale('log')
ax_spar.plot(todo_lambda, dict_l0norm[voxidx], color="blue")
plt.show()
Results: subject 1 , voxel id 3
Next, we look over all voxels, and then at the average performance over all voxels in the ROIs of interest. The first step is to collect the "best" performance (over all $\lambda$ values) achieved for each voxel.
num_voxels = len(dict_corr_tr)
best_corr_tr = np.zeros(num_voxels, dtype=np.float32)
best_corr_te = np.zeros(num_voxels, dtype=np.float32)
for v in range(num_voxels):
    # Best absolute correlation value.
    best_corr_tr[v] = np.max(np.abs(dict_corr_tr[v]))
    best_corr_te[v] = np.max(np.abs(dict_corr_te[v]))
Next, we iterate over ROIs, collecting the relevant indices each time. Fortunately, our hierarchical data set will come in very handy here.
dict_roi_corr_tr = {}
dict_roi_corr_te = {}
f = tables.open_file("data/vim-2/response.h5", mode="r")
tocheck = f.get_node(("/sub"+str(subid)), "idx")
for idxnode in tocheck._f_iter_nodes():
    idx = idxnode.read()
    roi_name = idxnode._v_name
    dict_roi_corr_tr[roi_name] = np.mean(best_corr_tr[idx])
    dict_roi_corr_te[roi_name] = np.mean(best_corr_te[idx])
f.close()
# Training
xvals = list(dict_roi_corr_tr.keys())
yvals = list(dict_roi_corr_tr.values())
myfig = plt.figure(figsize=(14,7))
plt.barh(range(len(dict_roi_corr_tr)), yvals)
plt.yticks(range(len(dict_roi_corr_tr)), xvals)
plt.title("Best correlation, within-ROI average; subject "+ str(subid)+" (training)" )
plt.axvline(x=np.mean(np.array(yvals)), color="gray")
plt.show()
# Testing
xvals = list(dict_roi_corr_te.keys())
yvals = list(dict_roi_corr_te.values())
myfig = plt.figure(figsize=(14,7))
plt.barh(range(len(dict_roi_corr_te)), yvals, color="pink")
plt.yticks(range(len(dict_roi_corr_te)), xvals)
plt.title("Best correlation, within-ROI average (testing)")
plt.axvline(x=np.mean(np.array(yvals)), color="gray")
plt.show()
It is clear that such a simple toy example performs very poorly---the output is basically junk. Some serious modifications to the following factors will be required. Keep them in mind when reading the major tasks below.

- The initial weights w_init.
- The iteration budget t_max.
- The filter parameters (myparas), especially freqs, dir, and sdev.
.Focus on "the early visual areas" looked at by Nishimoto et al. (V1, V2, V3, V3A, and V3B) in both hemispheres. Train a model for each voxel in these regions, and compute the performance (on test data) for the best lambda value (determined on the training set). Average the error/correlation over all the voxels in these regions.
Complete the above exercise for each subject. Is there much difference in performance between subjects? How does your best model perform against the cited work?
Another approach is to capture temporal delays in the feature vectors. For example, if our original feature vectors are $x_{i}$ for $i=1,\ldots,n$, then re-christen the feature vectors as $\widetilde{x}_{i} = (x_{i},x_{i-1})$ for a delay of one step (here, one second) for all $i>1$ (we lose one data point, now $n-1$ total). The dimension grows from $d$ to $2d$. Analogously, for a delay of $k$ steps, this would be $\widetilde{x}_{i} = (x_{i},x_{i-1},\ldots,x_{i-k})$ for all $i>k$, and lose $k$ data points for a total of $n-k$ now. Try several temporal delays; which seem to work best?
How does performance depend on ROI looked at? Which ROI saw comparatively good/bad performance? Provide visuals to highlight performance in each region.
Be sure to experiment with the learning algorithm parameters (number of iterations, size and range of $\lambda$ grid, etc.). What strategies did you find particularly effective? If you made any modifications to the algorithm, describe them.
Make a large data set of stimulus from any two subjects, and use the third subject's data as a evaluation of inter-subject generalization ability. How does performance compare with the more standard by-subject approach? Are there subjects that are particularly difficult to predict for? In contrast to this, considering the by-subject training approach we have considered thus far, how does our interpretation of generalization change?
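For the temporal-delay exercise above, here is a minimal sketch of one way the delayed feature vectors could be built; the function name and the convention of trimming the first $k$ rows are illustrative choices of ours, not part of the lesson's modules.

import numpy as np

def add_delays(X, y, k):
    # Build delay-embedded features: row i becomes (x_i, x_{i-1}, ..., x_{i-k}).
    # The first k rows are dropped, so X goes from (n, d) to (n-k, (k+1)*d),
    # and y is trimmed to match.
    n = X.shape[0]
    parts = [X[(k-j):(n-j), :] for j in range(k+1)]  # j=0 is the current time step.
    return np.concatenate(parts, axis=1), y[k:]

For example, add_delays(X_tr, y_tr, k=2) would give a two-step (here, two-second) delay embedding of the down-sampled training features and the correspondingly trimmed responses.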