%matplotlib inline
Nistats is a Python module for fast and easy functional MRI statistical analysis.
It leverages Nilearn, Nibabel and other Python libraries from the Python scientific stack, such as SciPy, NumPy and Pandas.
In this tutorial, we're going to explore nistats functionality by analyzing a single run of a single subject with a General Linear Model (GLM). We're going to use the same example dataset (ds000114) as in the nibabel and nilearn tutorials. As this is a multi-run, multi-task dataset, we have to decide on a run and a task we want to analyze. Let's go with ses-test and task-fingerfootlips of sub-01.
fmri_img = '/data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold_space-mni152nlin2009casym_preproc.nii.gz'
anat_img = '/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz'
We can display the mean functional image and the subject's anatomy:
from nilearn.image import mean_img
mean_img = mean_img(fmri_img)
from nilearn.plotting import plot_stat_map, plot_anat, plot_img, show
plot_img(mean_img)
plot_anat(anat_img)
We must now provide a description of the experiment, that is, define the timing of the task and rest periods. This is typically provided in an events.tsv file.
import pandas as pd
events = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
print(events)
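If you ever need to build such a table yourself, a BIDS-style events table only requires onset, duration and trial_type columns, all easy to construct with pandas. A minimal sketch (the onsets and durations below are made up for illustration):

```python
import pandas as pd

# Minimal BIDS-style events table: one row per trial,
# onsets and durations given in seconds
events_example = pd.DataFrame({
    'onset': [10, 40, 70],
    'duration': [15, 15, 15],
    'trial_type': ['Finger', 'Foot', 'Lips'],
})
print(events_example)
```
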
It is now time to create and estimate a FirstLevelModel object that will generate the design matrix using the information provided by the events object.
from nistats.first_level_model import FirstLevelModel
Parameters of the first-level model:

- t_r: the repetition time of the acquisition, in seconds
- noise_model: the temporal variance model; here a lag-1 autoregressive model ('ar1')
- standardize: whether to rescale the time series to mean 0, variance 1
- hrf_model: the hemodynamic response function; here the SPM model (without time or dispersion derivatives)
- drift_model and period_cut: the slow-drift model; here a cosine basis with a 160 s period cut
We need the TR of the functional images; luckily, we can extract that information using nibabel:
!nib-ls /data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold_space-mni152nlin2009casym_preproc.nii.gz
As we can see, the TR is 2.5 seconds.
fmri_glm = FirstLevelModel(t_r=2.5,
noise_model='ar1',
standardize=False,
hrf_model='spm',
drift_model='cosine',
period_cut=160)
Usually, we also want to include confounds computed during preprocessing (e.g., motion, global signal) as regressors of no interest. In our example, these were computed by fmriprep and can be found in derivatives/fmriprep/sub-01/ses-test/func/. We can use pandas to inspect that file:
confounds = pd.read_csv('/data/ds000114/derivatives/fmriprep/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold_confounds.tsv', delimiter='\t')
confounds
Comparable to other neuroimaging software packages, we get a timepoint x confound dataframe. However, fmriprep computes many more confounds than most users are accustomed to, and they require a bit of reading to understand and use properly. For the sake of simplicity, we therefore stick to the "classic" ones: WhiteMatter, GlobalSignal, FramewiseDisplacement and the six motion correction parameters (translation and rotation):
import numpy as np
confounds_glm = confounds[['WhiteMatter', 'GlobalSignal', 'FramewiseDisplacement', 'X', 'Y', 'Z', 'RotX', 'RotY', 'RotZ']].replace(np.nan, 0)
confounds_glm
Now that we have specified the model, we can run it on the fMRI image:
fmri_glm = fmri_glm.fit(fmri_img, events, confounds_glm)
One can inspect the design matrix (rows represent time, and columns contain the predictors).
design_matrix = fmri_glm.design_matrices_[0]
Formally, we have taken the first design matrix, because the model is implicitly meant to handle multiple runs.
from nistats.reporting import plot_design_matrix
plot_design_matrix(design_matrix)
import matplotlib.pyplot as plt
plt.show()
Save the design matrix image to disk, first creating a directory where you want to write the images:
import os
outdir = 'results'
if not os.path.exists(outdir):
os.mkdir(outdir)
from os.path import join
plot_design_matrix(design_matrix, output_file=join(outdir, 'design_matrix.png'))
The first column contains the expected response profile of regions that are sensitive to the "Finger" condition. Let's plot this first column:
plt.plot(design_matrix['Finger'])
plt.xlabel('scan')
plt.title('Expected Response for condition "Finger"')
plt.show()
To access the estimated coefficients (the betas of the GLM), we create contrasts with a single '1' in the column of interest. The role of a contrast is to select some columns of the model (and potentially weight them) to study the associated statistics. So, in a nutshell, a contrast is a weighted combination of the estimated effects. Here we define canonical contrasts that just consider each condition in isolation.
from numpy import array
conditions = {
'active - Finger': array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
'active - Foot': array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
'active - Lips': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
}
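To make "weighted combination" concrete, here is a toy sketch with made-up beta values: applying a contrast vector to the vector of estimated coefficients is just a dot product.

```python
import numpy as np

# Hypothetical estimated coefficients (betas), one per design-matrix column
betas = np.array([2.0, 0.5, 1.5, 0.1])  # e.g. Finger, Foot, Lips, drift

finger = np.array([1., 0., 0., 0.])            # selects the Finger column
finger_vs_lips = np.array([1., 0., -1., 0.])   # difference of two conditions

print(finger @ betas)          # 2.0: the Finger effect itself
print(finger_vs_lips @ betas)  # 0.5: Finger effect minus Lips effect
```
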
Let's look at it: plot the coefficients of the contrast, indexed by the names of the columns of the design matrix.
from nistats.reporting import plot_contrast_matrix
plot_contrast_matrix(conditions['active - Finger'], design_matrix=design_matrix)
Below, we compute the estimated effect. It is in BOLD signal units, but has no statistical guarantees, because it does not take into account the associated variance.
eff_map = fmri_glm.compute_contrast(conditions['active - Finger'],
output_type='effect_size')
In order to assess statistical significance, we form a t-statistic and directly convert it to the z-scale. The z-scale means that the values are scaled to match a standard Gaussian distribution (mean=0, variance=1) across voxels, if there were no effects in the data.
z_map = fmri_glm.compute_contrast(conditions['active - Finger'],
output_type='z_score')
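This t-to-z conversion can be illustrated with scipy: a t value is mapped to the z value that has the same tail probability. (The t value and degrees of freedom below are made up for illustration.)

```python
from scipy import stats

t_value, dof = 3.0, 150  # hypothetical t statistic and degrees of freedom

# Find the z value with the same one-sided tail probability
p = stats.t.sf(t_value, dof)    # tail probability under the t distribution
z_value = stats.norm.isf(p)     # z with that tail probability under a Gaussian
print(round(z_value, 3))        # slightly below 3.0: t tails are heavier
```
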
Plot the thresholded z-score map.
We display it on top of the average functional image of the series (this could also be the subject's anatomical image). We arbitrarily use a threshold of 3.0 on the z-scale; we'll see later how to use corrected thresholds. To display 3 axial views, we use display_mode='z', cut_coords=3:
plot_stat_map(z_map, bg_img=mean_img, threshold=3.0,
display_mode='z', cut_coords=3, black_bg=True,
title='active - Finger (Z>3)')
plt.show()
Statistical significance testing. One should worry about the statistical validity of the procedure: here we used an arbitrary threshold of 3.0, but the threshold should provide some guarantees on the risk of false detections (a.k.a. type-1 errors in statistics). A first suggestion is to control the false positive rate (fpr) at a certain level, e.g. 0.001: this means that there is a 0.1% chance of declaring an inactive voxel active.
from nistats.thresholding import map_threshold
_, threshold = map_threshold(z_map, level=.001, height_control='fpr')
print('Uncorrected p<0.001 threshold: %.3f' % threshold)
plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
display_mode='z', cut_coords=3, black_bg=True,
title='active - Finger (p<0.001)')
plt.show()
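Under the hood, this uncorrected threshold is simply the z value whose upper-tail probability equals the chosen level, which we can verify with scipy (norm.isf is the inverse survival function):

```python
from scipy.stats import norm

level = 0.001
threshold_fpr = norm.isf(level)  # one-sided z threshold for p < 0.001
print('%.3f' % threshold_fpr)    # 3.090
```
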
The problem is that with this threshold you expect 0.001 * n_voxels voxels to show up even though they are not active, i.e. tens to hundreds of voxels. A more conservative solution is to control the family-wise error rate, i.e. the probability of making at least one false detection, say at 5%. For that we use the so-called Bonferroni correction:
_, threshold = map_threshold(z_map, level=.05, height_control='bonferroni')
print('Bonferroni-corrected, p<0.05 threshold: %.3f' % threshold)
plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
display_mode='z', cut_coords=3, black_bg=True,
title='active - Finger (p<0.05, corrected)')
plt.show()
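The Bonferroni threshold follows the same logic, except the level is first divided by the number of tests. With a hypothetical mask of 50,000 voxels:

```python
from scipy.stats import norm

n_voxels = 50000                        # hypothetical mask size
corrected_level = 0.05 / n_voxels       # Bonferroni: divide by number of tests
threshold_bonf = norm.isf(corrected_level)
print('%.3f' % threshold_bonf)          # well above the uncorrected 3.090
```
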
This is quite conservative indeed! A popular alternative is to control the false discovery rate (FDR), i.e. the expected proportion of false discoveries among detections.
_, threshold = map_threshold(z_map, level=.05, height_control='fdr')
print('False Discovery rate = 0.05 threshold: %.3f' % threshold)
plot_stat_map(z_map, bg_img=mean_img, threshold=threshold,
display_mode='z', cut_coords=3, black_bg=True,
title='active - Finger (fdr=0.05)')
plt.show()
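The Benjamini-Hochberg procedure behind FDR control is easy to sketch: sort the p-values, compare each to its rank-scaled level, and keep everything up to the largest p-value that passes. A toy example on made-up p-values:

```python
import numpy as np

def bh_cutoff(p_values, q=0.05):
    """Return the Benjamini-Hochberg p-value cutoff (0 if nothing passes)."""
    p = np.sort(np.asarray(p_values))
    n = len(p)
    # Compare each sorted p-value to its rank-scaled level q * k / n
    below = p <= q * np.arange(1, n + 1) / n
    return p[below].max() if below.any() else 0.0

p_values = [0.001, 0.008, 0.039, 0.041, 0.30, 0.60]
print(bh_cutoff(p_values, q=0.05))  # 0.008: the first two tests survive
```
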
Finally, people like to discard isolated voxels (a.k.a. "small clusters") from these images. It is possible to generate a thresholded map with small clusters removed by providing a cluster_threshold argument; here, clusters smaller than 10 voxels will be discarded.
clean_map, threshold = map_threshold(
z_map, level=.05, height_control='fdr', cluster_threshold=10)
plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold,
display_mode='z', cut_coords=3, black_bg=True,
title='active - Finger (fdr=0.05), clusters > 10 voxels')
plt.show()
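The cluster-size filtering itself can be sketched with scipy.ndimage: label the connected supra-threshold voxels, then zero out components smaller than the cutoff. A toy 3D example:

```python
import numpy as np
from scipy import ndimage

def remove_small_clusters(binary_map, min_size):
    """Zero out connected components smaller than min_size voxels."""
    labels, n = ndimage.label(binary_map)                  # label components
    sizes = ndimage.sum(binary_map, labels, range(1, n + 1))  # voxels per label
    keep = np.isin(labels, np.where(sizes >= min_size)[0] + 1)
    return binary_map & keep

mask = np.zeros((10, 10, 10), dtype=bool)
mask[0:3, 0:3, 0:3] = True   # 27-voxel cluster: kept
mask[8, 8, 8] = True         # isolated voxel: removed
cleaned = remove_small_clusters(mask, min_size=10)
print(cleaned.sum())  # 27
```
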
We can save the effect and z-score maps to disk:
z_map.to_filename(join(outdir, 'active_finger_z_map.nii.gz'))
eff_map.to_filename(join(outdir, 'active_finger_eff_map.nii.gz'))
Report the found positions in a table
from nistats.reporting import get_clusters_table
table = get_clusters_table(z_map, stat_threshold=threshold,
cluster_threshold=20)
print(table)
This table can be saved for future use:
table.to_csv(join(outdir, 'table.csv'))
Or use atlasreader to get even more information and informative figures:
from atlasreader import create_output
create_output(join(outdir, 'active_finger_z_map.nii.gz'), cluster_extent=5)
Let's have a look at the csv file containing relevant information about the peak of each cluster. This table contains the cluster association and location of each peak, its signal value at that location, the cluster extent (in mm³, not in number of voxels), as well as the membership of each peak, given a particular atlas.
peak_info = pd.read_csv('results/active_finger_z_map_peaks.csv')
peak_info
And the clusters:
cluster_info = pd.read_csv('results/active_finger_z_map_clusters.csv')
cluster_info
For each cluster, we also get a corresponding visualization, saved as .png:
from IPython.display import Image
Image("results/active_finger_z_map.png")
Image("results/active_finger_z_map_cluster01.png")
Image("results/active_finger_z_map_cluster02.png")
"active vs rest" is a typical t test: condition versus baseline. Another popular type of test is an F test in which one seeks whether a certain combination of conditions (possibly two-, three- or higher-dimensional) explains a significant proportion of the signal. Here one might for instance test which voxels are well explained by combination of the active and rest condition.
import numpy as np
effects_of_interest = np.vstack((conditions['active - Finger'], conditions['active - Lips']))
plot_contrast_matrix(effects_of_interest, design_matrix)
plt.show()
Specify the contrast and compute the corresponding map. The contrast specification is done in exactly the same way as for t contrasts.
z_map = fmri_glm.compute_contrast(effects_of_interest,
output_type='z_score')
Note that the statistic has been converted to a z-variable, which makes it easier to represent.
clean_map, threshold = map_threshold(
z_map, level=.05, height_control='fdr', cluster_threshold=10)
plot_stat_map(clean_map, bg_img=mean_img, threshold=threshold,
display_mode='z', cut_coords=3, black_bg=True,
title='Effects of interest (fdr=0.05), clusters > 10 voxels')
plt.show()