Analysis of neuroimaging-research data involves the sequential application of algorithms implemented in a number of heterogeneous toolkits (e.g. FSL, SPM, MRTrix, ANTs, AFNI, DiPy). This makes constructing complete workflows challenging as it requires not only the relevant scientific knowledge but also familiarity with the syntax and options of each of the tools involved.
The workshop will show how to wrap neuroimaging tools within consistent Python interfaces and link them together into robust workflows using Nipype (http://nipype.readthedocs.io). Participants will then be shown how common components of these analysis workflows can be consolidated within object-oriented base classes using the Abstraction of Repository Centric ANAlysis (Arcana) (http://arcana.readthedocs.io) framework, and how this is used in Brain imAgiNg Analysis iN Arcana (Banana) to capture the "arcana" (obscure knowledge) of neuroimaging analysis workflow design.
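The core idea, hiding each tool's command-line syntax behind a uniform Python interface, can be sketched in plain Python. The following is an illustrative stand-in only (the `CommandInterface`/`BrainExtract` names and the flag mapping are invented for this sketch; Nipype's real interface classes, covered in the workshop, are more sophisticated):

```python
import shlex
import subprocess


class CommandInterface:
    """Illustrative stand-in for a tool wrapper: each subclass declares
    the executable and a mapping from input names to command-line flags,
    so users set named inputs instead of memorising tool syntax."""

    cmd = None    # base executable
    flags = {}    # input name -> command-line flag ("" for positional)

    def __init__(self, **inputs):
        self.inputs = inputs

    def cmdline(self):
        """Build the full command line from the declared inputs."""
        parts = [self.cmd]
        for name, value in self.inputs.items():
            flag = self.flags[name]
            if flag:
                parts.append(flag)
            parts.append(str(value))
        return " ".join(parts)

    def run(self):
        """Execute the command and capture its output."""
        return subprocess.run(shlex.split(self.cmdline()),
                              capture_output=True, text=True)


# A hypothetical wrapper for a brain-extraction tool with
# 'bet <in> <out> -f <frac>'-style syntax:
class BrainExtract(CommandInterface):
    cmd = "bet"
    flags = {"in_file": "", "out_file": "", "frac": "-f"}


print(BrainExtract(in_file="T1w.nii", out_file="brain.nii",
                   frac=0.5).cmdline())
# bet T1w.nii brain.nii -f 0.5
```

The point of the pattern is that every wrapped tool then exposes the same `inputs`/`run()` shape, which is what makes chaining heterogeneous toolkits into one workflow tractable.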
In the last part of the course, participants will learn how to extend and customise the classes in Banana to the specific needs of their own analysis.
The full content of this course, all notebooks and slides, can be found in the GitHub repository.
Most of the materials for the Nipype section of this course have been adapted from Michael Notter's excellent Nipype tutorial and a series of workshops he and Peer Herholz have run on "Python in neuroimaging" (Cambridge 2018 and Marburg 2019). The introductory notebook on Python classes was written by Steven C. Howell for the Anaconda Cloud.
Communication in the lead up to the workshop (and afterwards) will be held in the nipype-arcana-workshop channel of the https://melb-neuroinformatics.slack.com Slack workspace (an invite should have been sent to you before the course).
Note: If you are viewing this course via the nbviewer.jupyter.org link then you won't be able to actually run any of the notebooks. Please see the Configuring Your Workstation section for instructions on how to run them on your workstation.
If you would like to run this tutorial from your own workstation please see the GitHub README for instructions on how to use either Docker, Pip or Conda to configure a virtual environment on your workstation.
For background on virtualisation technologies see:
During the workshop you can use the Characterisation Virtual Laboratory (CVL) to run this notebook using a guest account that will be provided to you (you can also use the CVL to run it before/after the workshop). If you would like to use the CVL outside of the workshop you will need to register for an account and either request access to an existing project or apply for a new project.
Follow these instructions to use the CVL for the first time.
From your ~/<your-project> (project) directory, run the following commands to set up the workshop materials and launch the notebook:

```bash
mkdir ~/training/${USER}
cd ~/training/${USER}
git clone https://github.com/MonashBI/nipype_arcana_workshop
cd nipype_arcana_workshop
ln -s ~/training/additional-data/* notebooks/data/
mkdir notebooks/output
mkdir ~/training_scratch/${USER}
ln -s ~/training_scratch/${USER} notebooks/work
module load neuro-workshop
jupyter nbextension enable exercise2/main
jupyter notebook program.ipynb
```
This will open up a Firefox window with an interactive version of this program, from which you can navigate to interactive versions of the course materials.
*Note:* The visualisations won't work in Firefox but will in Chrome (however, the slides won't render properly in Chrome). To run this notebook in Chrome, copy the 'localhost+token' link displayed in your terminal after starting the notebook into Chrome's address bar.
This section contains some background information required for the workshop and for working with scientific Python packages in general. If you have any questions about any of the sections, please write to the workshop Slack channel and mention @Tom Close.
This section is meant as a quick introduction to Jupyter Notebooks, Python and Object-Oriented (OO) programming. It is *STRONGLY RECOMMENDED* that you go through this section before the workshop if you are not already familiar with these technologies/concepts.
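As a small taste of the OO concepts covered there: a class bundles data (attributes) with behaviour (methods), and a subclass can inherit and extend that behaviour. The class names below are invented purely for illustration:

```python
class Scan:
    """A minimal class bundling data (attributes) with behaviour (methods)."""

    def __init__(self, subject, duration_s):
        self.subject = subject
        self.duration_s = duration_s

    def describe(self):
        return f"{self.subject}: {self.duration_s} s scan"


class FunctionalScan(Scan):
    """A subclass inherits Scan's behaviour and overrides describe()."""

    def __init__(self, subject, duration_s, tr):
        super().__init__(subject, duration_s)
        self.tr = tr

    def describe(self):
        # extend the parent's behaviour rather than rewriting it
        return super().describe() + f" (TR={self.tr} s)"


print(FunctionalScan("sub-01", 300, 2.0).describe())
# sub-01: 300 s scan (TR=2.0 s)
```

This inherit-and-override pattern is exactly the mechanism Arcana and Banana rely on later in the course, which is why it is worth being comfortable with it beforehand.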
It's liberating to have direct access to your neuroimaging data. Nibabel and Nilearn allow exactly that. With those two neuroimaging packages, you can treat the brain as a simple 3D/4D matrix of datapoints and do with it whatever you want.
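For example, once an image is loaded (with Nibabel, via `nibabel.load(...)` followed by `.get_fdata()`), it really is just a NumPy array. Here a small synthetic 4D array stands in for real fMRI data so the sketch runs without any image files:

```python
import numpy as np

# Synthetic stand-in for a 4D fMRI dataset: x, y, z, time.
# With real data you would instead use:
#   data = nibabel.load("bold.nii.gz").get_fdata()
data = np.arange(4 * 4 * 4 * 10, dtype=float).reshape(4, 4, 4, 10)

mean_volume = data.mean(axis=-1)         # average over time -> 3D volume
timecourse = data[2, 2, 2, :]            # time series of a single voxel
mask = mean_volume > mean_volume.mean()  # crude boolean "brain mask"

print(mean_volume.shape, timecourse.shape, int(mask.sum()))
# (4, 4, 4) (10,) 32
```

Everything here is plain NumPy indexing and reduction; that is the sense in which the brain becomes "a simple 3D/4D matrix of datapoints".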
One advantage of Python is the vast availability of toolboxes. There's a toolbox for almost everything! In this section, we want to introduce you to the main scientific toolboxes that every researcher should know. While not essential for this workshop they will definitely come in handy in the future.
The workshop will be held over one day and is split into two parts: an introduction to Nipype concepts and applications in the morning, and how to use Arcana and Banana in the afternoon.
9:00-9:30: Introduction
In this short introduction, we will explain a little about what Nipype, Arcana and Banana are and how they relate to each other.
9:30-11:00: Nipype Basics: Interfaces, Nodes & Workflows
Nipype can be learned very quickly, but it's nonetheless important that you know about some of the main building blocks.
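The node-and-workflow concept itself can be sketched in a few lines of plain Python. This is not Nipype's actual API (its Node and Workflow classes handle caching, directories, parallelism and much more); it is only meant to convey the idea of connecting processing steps so one node's output feeds the next node's input:

```python
class Node:
    """Illustrative stand-in for a workflow node: wraps a function
    and remembers its result once run. (Not Nipype's actual API.)"""

    def __init__(self, func, name):
        self.func, self.name = func, name
        self.inputs, self.result = {}, None

    def run(self):
        self.result = self.func(**self.inputs)
        return self.result


class Workflow:
    """Runs connected nodes in order, passing each source node's
    output to the named input of its destination node."""

    def __init__(self, name):
        self.name, self.connections = name, []

    def connect(self, src, dest, input_name):
        self.connections.append((src, dest, input_name))

    def run(self):
        for src, dest, input_name in self.connections:
            if src.result is None:
                src.run()
            dest.inputs[input_name] = src.result
            dest.run()
        return self


# Two toy processing steps chained into a pipeline:
smooth = Node(lambda x: x * 2, name="smooth")
threshold = Node(lambda x: x - 1, name="threshold")

wf = Workflow("toy")
smooth.inputs["x"] = 5
wf.connect(smooth, threshold, "x")
wf.run()
print(threshold.result)
# 9
```

The workshop session covers how Nipype's real equivalents of these pieces fit together.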
11:00-11:30: Coffee & Tea Break
11:30-12:30: Advanced Nipype: Iteration & Custom Interfaces
Once you have the building blocks in place, you can start iterating over your data and writing custom interfaces.
12:30-13:30: Lunch
13:30-15:00: Abstraction of Repository-Centric ANAlysis (Arcana)
Arcana is a framework for designing "data-centric" analysis suites for different types of dataset (e.g. DWI, fMRI or PET images). In this section you will learn how to apply existing analyses to a dataset and construct your own analyses.
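The "data-centric" idea, an analysis class that knows how to derive outputs from a dataset on demand, and customisation by subclassing, can be sketched as follows. All class and method names here are invented for illustration and are not Arcana's real API:

```python
class Analysis:
    """Illustrative sketch of a data-centric analysis: derivatives are
    computed (and cached) on request by the matching *_pipeline method.
    (Not Arcana's actual API.)"""

    def __init__(self, dataset):
        self.dataset = dataset   # e.g. {"t1w": [...voxel values...]}
        self._derived = {}

    def derive(self, name):
        # look up the pipeline method for this derivative and cache it
        if name not in self._derived:
            self._derived[name] = getattr(self, name + "_pipeline")()
        return self._derived[name]


class MriAnalysis(Analysis):
    # a toy "pipeline": pretend brain extraction keeps positive voxels
    def brain_pipeline(self):
        return [v for v in self.dataset["t1w"] if v > 0]


class CustomMriAnalysis(MriAnalysis):
    # customising an analysis = subclassing and overriding a pipeline,
    # here additionally discarding implausibly bright voxels
    def brain_pipeline(self):
        return [v for v in super().brain_pipeline() if v < 100]


print(CustomMriAnalysis({"t1w": [-5, 10, 250, 42]}).derive("brain"))
# [10, 42]
```

The pattern to take away is that common steps live in base classes while project-specific tweaks become small overrides, which is the consolidation this session demonstrates with Arcana itself.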
15:00-15:30: Coffee & Tea Break
15:30-17:00: Brain imAgiNg Analysis iN Arcana (Banana)
Banana implements analysis methods for a range of MR contrasts (e.g. DWI, T1w, BOLD, T2star) in Analysis classes. In this section you will learn how to use Banana to analyse datasets and extend it to meet the requirements of your analysis.
Note: If you would like to use Banana to analyse data from your own project, please ensure you have access to another project on MASSIVE (i.e. not just 'training', as its data is shared amongst all participants), or bring along a laptop configured for this course (see Configuring Your Workstation) together with your data, and have a go during this section :)