Tutorials
- 1: Electrophysiology
- 2: Functional Imaging
- 2.1: Using fmriprep with neurodesk on an HPC
- 2.2: Using mriqc with neurodesk on HPC
- 2.3: PhysIO
- 2.4: A batch scripting example for PhysIO toolbox
- 2.5: Statistical Parametric Mapping (SPM)
- 3: MRI phase Processing
- 3.1: Quantitative Susceptibility Mapping
- 3.2: SWI
- 3.3: Unwrapping
- 4: Open Data
- 4.1: datalad
- 4.2: Oracle Open Data
- 4.3: osfclient
- 5: Reproducibility
- 6: Spectroscopy
- 7: Structural Imaging
- 7.1: FreeSurfer
- 7.2: Structural connectivity dMRI
- 8: Documentation
1 - Electrophysiology
1.1 - Analysing M/EEG Data with FieldTrip
This tutorial was created by Judy D Zhu.
Email: judyzhud@gmail.com
Github: @JD-Zhu
Twitter: @JudyDZhu
Please note that this container uses a compiled version of FieldTrip to run scripts (without needing a Matlab license). Code development is not currently supported within the container and needs to be carried out separately in Matlab.
Getting started
- Navigate to Neurodesk->Electrophysiology->fieldtrip->fieldtrip20211114 in the menu:
Once this window is loaded, you are ready to go:
- Type the following into the command window (replacing “yourscript.m” with the name of your custom script - note that you may also need to supply the full path):
run_fieldtrip.sh /opt/MCR/v99 yourscript.m
For example, here we ran a script to browse some raw data:
The FieldTrip GUI is displayed automatically and functions as it normally would when running inside Matlab.
NOTES:
- The script specified in the command line can call other scripts
- The script and the scripts it calls can use all the MATLAB toolboxes included in the compiled version of FieldTrip. If additional MATLAB toolboxes are needed, they need to be put in a filesystem accessible to the FieldTrip container (/neurodesktop-storage, /home/user, etc.), and the path should be added to the MATLAB search path with the addpath function (https://www.mathworks.com/help/matlab/ref/addpath.html)
1.2 - Analysing EEG Data with MNE
Getting started
To begin, navigate to Neurodesk->Electrophysiology->mne->vscodeGUI 0.23.4 in the menu. This version of vscode has been installed in a software container together with a conda environment containing MNE-python. Note that if you open any other version of vscode in Neurodesk, you will not be able to access the MNE conda environment.
Open the folder: “/home/user/Desktop/storage” or a subfolder in which you would like to store this demo. In this folder, create a new file named “EEGDemo.ipynb” or something similar:
If this is your first time opening a Jupyter notebook on vscode in neurodesktop, you may see the following popup. If so, click “install” to install the vscode extensions for Jupyter.
Select MNE python kernel
Next, we need to direct vscode to use the python kernel associated with MNE. In the top right corner of your empty jupyter notebook, click “Select Kernel”:
Then, select mne-0.23.4 from the dropdown menu, which should look something like this:
Activate the MNE conda environment in the terminal
Next, we’ll activate the same MNE environment in a terminal. From the top menu in vscode, select Terminal->New Terminal, or hit [Ctrl]+[Shift]+[`].
If this is your first time using vscode in this container, you may have to initialise conda by typing conda init bash in the bash terminal. After initialising conda, you will have to close and then reopen the terminal.
Once you have initialised conda, you can activate the MNE environment in the terminal:
conda activate mne-0.23.4
You should now see “(mne-0.23.4)” at the start of the prompt in the terminal.
Download sample data
In the terminal (in which you have activated the MNE environment), input the following code to download some BIDS formatted sample EEG data:
Remember to update the path to the location you are storing this tutorial!
pip install osfclient
osf -p C689U fetch Data_sample.zip /neurodesktop-storage/EEGDemo/Data_sample.zip
unzip Data_sample.zip
This is a small dataset with only 5 EEG channels from a single participant. The participant is viewing a frequency tagged display and is cued to attend to dots tagged at one frequency or another (6 Hz, 7.5 Hz) for long, 15 s trials. To read more about the dataset, click here
Plotting settings
To make sure our plots retain their interactivity, add the following line at the top of your notebook:
%matplotlib qt
This will mean your figures pop out as individual, interactive plots that will allow you to explore the data, rather than as static, inline plots. You can switch “qt” to “inline” to switch back to default, inline plotting.
Loading and processing data
NOTE: MNE has many helpful tutorials which delve into data processing and analysis using MNE-python in much further detail. These can be found here
Begin by importing the necessary modules and creating a pointer to the data:
# Interactive plotting
%matplotlib qt
# Import modules
import os
import numpy as np
import mne
# Load data
sample_data_folder = '/neurodesktop-storage/EEGDemo/Data_sample'
sample_data_raw_file = os.path.join(sample_data_folder, 'sub-01', 'eeg',
'sub-01_task-FeatAttnDec_eeg.vhdr')
raw = mne.io.read_raw_brainvision(sample_data_raw_file, preload=True)
The raw.info structure contains information about the dataset:
# Display data info
print(raw)
print(raw.info)
This data file did not include a montage. Let's create one using standard values for the electrodes we have:
# Create montage
montage = {'Iz': [0, -110, -40],
           'Oz': [0, -105, -15],
           'POz': [0, -100, 15],
           'O1': [-40, -106, -15],
           'O2': [40, -106, -15],
           }
montageuse = mne.channels.make_dig_montage(ch_pos=montage, lpa=[-82.5, -19.2, -46], nasion=[0, 83.2, -38.3], rpa=[82.2, -19.2, -46]) # based on mne help file on setting 10-20 montage
Next, let's visualise the data.
raw.plot()
This should open an interactive window in which you can scroll through the data. See the MNE documentation for help on how to customise this plot.
If, upon visual inspection, you decide to exclude one of the channels, you can specify this in raw.info['bads'] now. For example:
raw.info['bads'] = ['POz']
Next, we’ll extract our events. The trigger channel in this file is incorrectly scaled, so we’ll correct that before we extract our events:
# Correct trigger scaling
trigchan = raw.copy()
trigchan = trigchan.pick('TRIG')
trigchan._data = trigchan._data*1000000
# Extract events
events = mne.find_events(trigchan, stim_channel='TRIG', consecutive=True, initial_event=True, verbose=True)
print('Found %s events, first five:' % len(events))
print(events[:5])
# Plot events
mne.viz.plot_events(events, raw.info['sfreq'], raw.first_samp)
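Each row of the events array returned by mne.find_events holds three integers: the sample index of the event, the value of the trigger channel immediately before it, and the event code. As a minimal sketch (the sample indices here are made up, but the codes 23 and 27 are the two attention conditions used later in this tutorial), you can filter events by code with plain numpy:

```python
import numpy as np

# Hypothetical events array: [sample index, previous value, event code]
events = np.array([[1200, 0, 23],
                   [19200, 0, 27],
                   [37200, 0, 23],
                   [55200, 0, 99]])  # 99 = some other, unrelated trigger

# Keep only the two attention-condition codes
attn_events = events[np.isin(events[:, 2], [23, 27])]
print(attn_events.shape)  # (3, 3)
```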
Now that we’ve extracted our events, we can extract our EEG channels and do some simple pre-processing:
# select
eeg_data = raw.copy().pick_types(eeg=True, exclude=['TRIG'])
# Set montage
eeg_data.info.set_montage(montageuse)
# Interpolate
eeg_data_interp = eeg_data.copy().interpolate_bads(reset_bads=True)
# Filter Data
eeg_data_interp.filter(l_freq=1, h_freq=45, h_trans_bandwidth=0.1)
Let’s visualise our data again now that it’s cleaner:
#plot results again, this time with some events and scaling.
eeg_data_interp.plot(events=events, duration=10.0, scalings=dict(eeg=0.00005), color='k', event_color='r')
That’s looking good! We can even see hints of the frequency tagging. It’s about time to epoch our data.
# Epoch to events of interest
event_id = {'attend 6Hz K': 23, 'attend 7.5Hz K': 27}
# Extract 15 s epochs relative to events, baseline correct, linear detrend, and reject
# epochs where the peak-to-peak EEG amplitude exceeds 400 µV
epochs = mne.Epochs(eeg_data_interp, events, event_id=event_id, tmin=0,
tmax=15, baseline=(0, 0), reject=dict(eeg=0.000400), detrend=1)
# Drop bad trials
epochs.drop_bad()
We can average these epochs to form Event Related Potentials (ERPs):
# Average epochs to form ERPs
attend6 = epochs['attend 6Hz K'].average()
attend75 = epochs['attend 7.5Hz K'].average()
# Plot ERPs
evokeds = dict(attend6=list(epochs['attend 6Hz K'].iter_evoked()),
attend75=list(epochs['attend 7.5Hz K'].iter_evoked()))
mne.viz.plot_compare_evokeds(evokeds, combine='mean')
In this plot, we can see that the data are frequency tagged. While these data were collected, the participant was performing an attention task in which two visual stimuli were flickering at 6 Hz and 7.5 Hz respectively. On each trial the participant was cued to monitor one of these two stimuli for brief bursts of motion. From previous research, we expect that the steady-state visual evoked potential (SSVEP) should be larger at the attended frequency than the unattended frequency. Let's check whether this is true.
We'll begin by exporting our epoched EEG data to a numpy array:
# Preallocate
n_samples = attend6.data.shape[1]
sampling_freq = 1200 # sampling frequency
epochs_np = np.empty((n_samples, 2))
# Get data - averaging across EEG channels
epochs_np[:,0] = attend6.data.mean(axis=0)
epochs_np[:,1] = attend75.data.mean(axis=0)
Next, we can use a Fast Fourier Transform (FFT) to transform the data from the time domain to the frequency domain. For this, we’ll need to import the FFT packages from scipy:
from scipy.fft import fft, fftfreq, fftshift
# Get FFT
fftdat = np.abs(fft(epochs_np, axis=0)) / n_samples
freq = fftfreq(n_samples, d=1 / sampling_freq) # get frequency bins
Now that we have our frequency transformed data, we can plot our two conditions to assess whether attention altered the SSVEP amplitudes:
import matplotlib.pyplot as plt
fig,ax = plt.subplots(1, 1)
ax.plot(freq, fftdat[:,0], '-', label='attend 6Hz', color=[78 / 255, 185 / 255, 159 / 255])
ax.plot(freq, fftdat[:,1], '-', label='attend 7.5Hz', color=[236 / 255, 85 / 255, 58 / 255])
ax.set_xlim(4, 17)
ax.set_ylim(0, 1e-6)
ax.set_title('Frequency Spectrum')
ax.legend()
This plot shows that the SSVEPs were indeed modulated by attention in the direction we would expect! Congratulations! You’ve run your first analysis of EEG data in neurodesktop.
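If you want to put a number on the attention effect rather than eyeballing the plot, you can read the amplitude at each tagged frequency straight out of the spectrum by finding the nearest frequency bin. Here is a self-contained sketch with a synthetic 6 Hz sinusoid standing in for the real data (numpy's FFT behaves the same as scipy.fft for this purpose; the amp_at helper is illustrative, not part of the tutorial):

```python
import numpy as np

sampling_freq = 1200            # Hz, as in the tutorial
n_samples = sampling_freq * 15  # one 15 s epoch

# Synthetic SSVEP: a pure 6 Hz sinusoid of amplitude 2e-6 V
t = np.arange(n_samples) / sampling_freq
signal = 2e-6 * np.sin(2 * np.pi * 6 * t)

# Same normalisation as the tutorial: |FFT| / n_samples
fftdat = np.abs(np.fft.fft(signal)) / n_samples
freq = np.fft.fftfreq(n_samples, d=1 / sampling_freq)

def amp_at(target_hz):
    """Amplitude at the frequency bin closest to target_hz."""
    idx = np.argmin(np.abs(freq - target_hz))
    return fftdat[idx]

# With this normalisation, a sinusoid of amplitude A shows up as A/2
print(amp_at(6.0))   # ~1e-6
print(amp_at(7.5))   # ~0, nothing is tagged there in this synthetic signal
```

The same two calls on the real fftdat from above give the SSVEP amplitudes to compare between the attend-6 Hz and attend-7.5 Hz conditions.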
2 - Functional Imaging
2.1 - Using fmriprep with neurodesk on an HPC
This tutorial was created by Kelly G. Garner.
Github: @kel_github
Twitter: @garnertheory
This workflow documents how to use fmriprep with neurodesk and provides some details that may help you troubleshoot some common problems I found along the way.
Assumptions
- Your data is already in BIDS format
- You plan to run fmriprep using Neurodesk
- You have a local copy of the freesurfer license file (freesurfer.txt)
Steps
Open fmriprep
From the applications go Neurodesk -> Functional Imaging -> fmriprep and select the latest version of fmriprep. This should take you to a terminal window with fmriprep loaded.
Setting up fmriprep command
If you like, you can enter the following fmriprep command straight into the command line in the newly opened terminal. However, as options and preferences accumulate the command can get rather verbose, so I instead opted to create an executable bash script that I can run straight from the command line, with minimal editing between runs. If you're not interested in this option you can skip straight to copying/adjusting the code from fmriprep to -v below.
- open a new file in your editor of choice (but really, you know it should be Visual Studio Code)
- save that file with your chosen name without an extension, e.g. run_fmriprep
- paste in the following and update with your details
#!/bin/bash
#
# written by A. Name - the purpose of this code is to run fmriprep with neurodesk
#
# nb: bash does not allow comments after the line-continuation backslash,
# so the options are explained here instead:
#   /path/to/your/data             - the top level of your data folder
#   /path/to/your/data/derivatives - where you want fmriprep output to be saved
#   participant                    - tells fmriprep to analyse at the participant level
#   --fs-license-file              - where the freesurfer license file is
#   --participant-label            - whatever participant labels you want to analyse
#   --nprocs / --mem               - fmriprep can be greedy on the hpc, make sure it is not
#   --skip_bids_validation         - normally fine to skip, but do make sure your data are BIDS enough
#   -v                             - be verbal fmriprep, tell me what you are doing

export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=6 # specify the number of threads you want to use

fmriprep /path/to/your/data \
    /path/to/your/data/derivatives \
    participant \
    --fs-license-file /path/to/your/freesurfer.txt \
    --output-spaces T1w MNI152NLin2009cAsym fsaverage fsnative \
    --participant-label 01 \
    --nprocs 6 --mem 10000 \
    --skip_bids_validation \
    -v
To make the file executable, navigate to this file via the command line in terminal and type
chmod u+x run_fmriprep # this tells the system to make your new file executable
Then to run your new executable, return to your terminal window for fmriprep (that opened when you navigated to fmriprep) and type:
./run_fmriprep
fmriprep should now be merrily working away on your data :)
Some common pitfalls I have learned from my mistakes (and sometimes from others)
If fmriprep hangs, it could well be that you are out of disk space. Sometimes this is because fmriprep creates a work directory in your home folder, which is often limited on the HPC. Make sure fmriprep knows to use a work directory in your scratch space. You can specify this in the fmriprep command by using -w /path/to/the/work/directory/you/made
I learned this from TomCat (@thomshaw92) - fmriprep can get confused between subjects when run in parallel. Parallelise with caution.
If running on an HPC, make sure to set the processor and memory limits, otherwise your job will get killed because it hogs all the resources.
2.2 - Using mriqc with neurodesk on HPC
This tutorial was created by Kelly G. Garner.
Github: @kel_github
Twitter: @garnertheory
This workflow documents how to use mriqc with neurodesk and provides some details that may help you troubleshoot some common problems I found along the way.
Assumptions
- Your data is already in BIDS format
- You plan to run mriqc using Neurodesk
Steps
Open mriqc
From the applications go Neurodesk -> Functional Imaging -> mriqc and select the latest version of mriqc. This should take you to a terminal window with mriqc loaded.
Setting up mriqc command
If you like, you can enter the following mriqc commands straight into the command line in the newly opened terminal. However, as options and preferences accumulate the commands can get rather verbose, so I instead opted to create executable bash scripts that I can run straight from the command line, with minimal editing between runs. I made one for running mriqc at the participant level, and one for running at the group level (for the group report, once all the participants are done). If you're not interested in this option you can skip straight to copying/adjusting the code from mriqc to -v below.
- open a new file in your editor of choice (e.g. Visual Studio Code)
- save that file with your chosen name without an extension, e.g. run_mriqc_participant or run_mriqc_group
- paste in the following and update with your details
#!/bin/bash
#
# written by A. Name - the purpose of this code is to run mriqc with neurodesk
# nb: bash does not allow comments after the line-continuation backslash,
# so the options are explained here instead:
#   /path/to/your/data             - the top level of your data folder
#   /path/to/your/data/derivatives - where you want mriqc output to be saved
#   participant                    - tells mriqc to analyse at the participant level
#   --participant-label            - whatever participant labels you want to analyse
#   --work-dir                     - useful to specify so your home directory doesn't get clogged
#   --nprocs / --mem_gb            - mriqc can be greedy on the hpc, make sure it is not
#   -v                             - be verbal mriqc, tell me what you are doing

export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=6 # specify the number of threads you want to use

mriqc /path/to/your/data \
    /path/to/your/data/derivatives \
    participant \
    --participant-label 01 \
    --work-dir /path/to/work/directory \
    --nprocs 6 --mem_gb 10 \
    -v
OR: if you have run all the participants and you just want the group level report, use these mriqc commands instead:
# nb: comments cannot follow the line-continuation backslash, so briefly:
# "group" tells mriqc to give you the group report; the derivatives folder
# should already be prepopulated with the results of the participant level analysis
mriqc /path/to/your/data \
    /path/to/your/data/derivatives \
    group \
    -w /path/to/work/directory \
    --nprocs 6 --mem_gb 10 \
    -v
To make either of your files executable, navigate via the terminal to the folder in which the file is saved. If you list the files in the folder using the command ls, you should see your file with its name printed in white.
Now type the following command:
chmod u+x run_mriqc_participant # this tells the system to make your new file executable
To know this worked, list the files again. If you have successfully made your file executable then it will be listed in green.
Then to run your new executable, return to your terminal window for mriqc (that opened when you navigated to mriqc), navigate to the directory where your executable file is stored and type:
./name_of_your_mriqc_file
mriqc should now be merrily working away on your data :)
Some common pitfalls I have learned from my mistakes (and sometimes from others)
- If running on an HPC, make sure to set the processor and memory limits, otherwise your job will get killed because mriqc hogs all the resources.
2.3 - PhysIO
This tutorial was created by Lars Kasper.
Github: @mrikasper
Twitter: @mrikasper
Origin
The PhysIO Toolbox implements ideas for robust physiological noise modeling in fMRI, outlined in this paper:
- Kasper, L., Bollmann, S., Diaconescu, A.O., Hutton, C., Heinzle, J., Iglesias, S., Hauser, T.U., Sebold, M., Manjaly, Z.-M., Pruessmann, K.P., Stephan, K.E., 2017. The PhysIO Toolbox for Modeling Physiological Noise in fMRI Data. Journal of Neuroscience Methods 276, 56-72. https://doi.org/10.1016/j.jneumeth.2016.10.019
PhysIO is part of the open-source TAPAS Software Package for Translational Neuromodeling and Computational Psychiatry, introduced in the following paper:
- Frässle, S., Aponte, E.A., Bollmann, S., Brodersen, K.H., Do, C.T., Harrison, O.K., Harrison, S.J., Heinzle, J., Iglesias, S., Kasper, L., Lomakina, E.I., Mathys, C., Müller-Schrader, M., Pereira, I., Petzschner, F.H., Raman, S., Schöbi, D., Toussaint, B., Weber, L.A., Yao, Y., Stephan, K.E., 2021. TAPAS: an open-source software package for Translational Neuromodeling and Computational Psychiatry. Frontiers in Psychiatry 12, 857. https://doi.org/10.3389/fpsyt.2021.680811
Please cite these works if you use PhysIO and see the FAQ for details.
NeuroDesk offers the possibility of running PhysIO without installing Matlab or requiring a Matlab license. The functionality should be equivalent, though debugging and extending the toolbox, as well as unreleased development features, will only be available in the Matlab version of PhysIO, which is exclusively hosted on the TAPAS GitHub.
More general info about PhysIO besides NeuroDesk usage is found in the README on GitHub.
Purpose
The general purpose of the PhysIO toolbox is model-based physiological noise correction of fMRI data using peripheral measures of respiration and cardiac pulsation (respiratory bellows, ECG, pulse oximeter/plethysmograph).
It incorporates noise models of
- cardiac/respiratory phase (RETROICOR, Glover et al. 2000), as well as
- heart rate variability and respiratory volume per time (cardiac response function, Chang et al., 2009; respiratory response function, Birn et al., 2006),
- and extended motion models (e.g., censoring/scrubbing)
While the toolbox is particularly well integrated with SPM via the Batch Editor GUI, its output text files can be incorporated into any major neuroimaging analysis package for nuisance regression, e.g., within a GLM.
Core design goals for the toolbox were: flexibility, robustness, and quality assurance to enable physiological noise correction for large-scale and multi-center studies.
Some highlights:
- Robust automatic preprocessing of peripheral recordings via iterative peak detection, validated in noisy data and patients, and extended processing of respiratory data (Harrison et al., 2021)
- Flexible support of peripheral data formats (BIDS, Siemens, Philips, GE, BioPac, HCP, …) and noise models (RETROICOR, RVHRCOR).
- Fully automated noise correction and performance assessment for group studies.
- Integration in fMRI pre-processing pipelines as SPM Toolbox (Batch Editor GUI).
The accompanying technical paper about the toolbox concept and methodology can be found at: https://doi.org/10.1016/j.jneumeth.2016.10.019
Download Example Data
The example data should already be present in NeuroDesk in the following folder /opt/spm12
If you cannot find the example data there:
- Download the latest version from the location mentioned in the TAPAS distribution
- Follow the instructions for copying your own data in the next section
Copy your own data
- On Windows, the folder C:\neurodesktop-storage should have been automatically created when starting NeuroDesk
- This is your direct link to the NeuroDesk environment: anything you put in there ends up within the NeuroDesk desktop in /neurodesktop-storage/ and on your desktop under storage
Example: Running PhysIO in the GUI
- Open the PhysIO GUI (Neurodesk -> Functional Imaging -> physio -> physioGUI r7771, see screenshot):
- SPM should automatically open up (might take a while). Select ‘fMRI’ from the modality selection screen.
- Press the “Batch Editor” button (see screenshot with open Batch Editor, red highlights)
- NB: If you later want to create a new PhysIO batch with all parameters from scratch, or explore the options, select from the Batch Editor Menu top row: SPM -> Tools -> TAPAS PhysIO Toolbox (see screenshot, red highlights)
- For now, load an existing example (or previously created SPM Batch File) as follows. It is most convenient to change the working directory of SPM to the location of the physiological logfiles:
  - In the Batch Editor GUI, lowest row, choose 'CD' from the 'Utils..' dropdown menu
  - Navigate to any of the example folders, e.g., /opt/spm12/examples/Philips/ECG3T/, and select it
  - NB: you can skip this part if you later manually update all input files in the Batch Editor window (resp/cardiac/scan timing and realignment parameter file further down)
  - Any other example should also work the same way, just CD to its folder before the next step
- Select File -> Load Batch from the top row menu of the Batch Editor window
  - Make sure you select the matlab batch file *_spm_job.<m|mat> (e.g., philips_ecg3t_spm_job.m and philips_ecg3t_spm_job.mat are identical, either is fine), but not the script.
- Press the green “Play” button in the top icon menu row of the Batch Editor window
- Several output figures should appear, with the last being a grayscale plot of the nuisance regressor design matrix
- Congratulations, your first successful physiological noise model has been created! If you don’t see the mentioned figure, chances are certain input files were not found (e.g., wrong file location specified). You can always check the text output in the “bash” window associated with the SPM window for any error messages.
Further Info on PhysIO
2.4 - A batch scripting example for PhysIO toolbox
This tutorial was created by Kelly G. Garner.
Github: @kel-github
Twitter: @garner_theory
This tutorial walks through one way to batch script the use of the PhysIO toolbox with Neurodesk. The goal is to use the toolbox to generate physiological regressors for modelling fMRI data. The output format of the regressor files is directly compatible with SPM, and can be adapted to fit the specifications of other toolboxes.
Getting started
This tutorial assumes the following:
- Your data are (largely) in BIDS format
- That you have converted your .zip files containing physiological data to .log files. As I was using a CMRR multi-band sequence, I used this function
- That your .log files are in the subject derivatives/…/sub-…/ses-…/func folders of the aforementioned BIDS-structured data
- That you have a file that contains the motion regressors you plan to use in your GLM. I'll talk below a bit about what I did with the output given by fmriprep (e.g. '…_desc-confounds_timeseries.tsv')
- That you can use SPM12 and the PhysIO GUI to initialise your batch code
NB. You can see the code generated from this tutorial here
1. Generate an example script for batching
First you will create an example batch script that is specific to one of your participants. To achieve this I downloaded locally the relevant '.log' files for one participant, as well as the '…desc-confounds_timeseries.tsv' output from fmriprep for each run. PhysIO is nice in that it will append the regressors from your physiological data to your movement parameters, so that you have a single file of regressors to add to your design matrix in SPM etc. (other toolboxes are available).
To work with PhysIO toolbox, your motion parameters need to be in the .txt format as required by SPM.
I made some simple functions in python that would extract my desired movement regressors and save them to the space separated .txt file as is required by SPM. They can be found here.
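As a minimal, stdlib-only sketch of that extraction step (the column names trans_x … rot_z are fmriprep's standard names for the six rigid-body motion parameters, but the function name and file paths here are made up):

```python
import csv

# fmriprep names the six rigid-body motion parameters like this
MOTION_COLS = ['trans_x', 'trans_y', 'trans_z', 'rot_x', 'rot_y', 'rot_z']

def confounds_to_spm_txt(tsv_path, txt_path):
    """Write the six motion regressors as a space-separated .txt for SPM."""
    with open(tsv_path, newline='') as fin, open(txt_path, 'w') as fout:
        for row in csv.DictReader(fin, delimiter='\t'):
            # one row per volume, columns separated by single spaces
            fout.write(' '.join(row[col] for col in MOTION_COLS) + '\n')

# usage (hypothetical paths):
# confounds_to_spm_txt('sub-01_desc-confounds_timeseries.tsv',
#                      'sub-01_desc-motion_timeseries.txt')
```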
Once I had my .log files and .txt motion regressors file, I followed the instructions here to get going with the Batch editor, and used this paper to aid my understanding of how to complete the fields requested by the Batch editor.
I wound up with a Batch script for the PhysIO toolbox that looked a little bit like this:
2. Generalise the script for use with any participant
Now that you have an example script that contains the specific details for a single participant, you are ready to generalise this code so that you can run it for any participant you choose. I decided to do this by doing the following:
- First I generate an ‘info’ structure for each participant. This is a structure saved as a matfile for each participant under ‘derivatives’, in the relevant sub-z/ses-y/func/ folder. This structure contains the subject specific details that PhysIO needs to know to run. Thus I wrote a matlab function that saves a structure called info with the following fields:
% -- outputs: a matfile containing a structure called info with the
% following fields:
% -- sub_num = subject number: [string] of form '01' '11' or '111'
% -- sess = session number: [integer] e.g. 2
% -- nrun = [integer] number of runs for that participant
% -- nscans = number of scans (volumes) in the design matrix for each
% run [1, nrun]
% -- cardiac_files = a cell of the cardiac files for that participant
% (1,n = nrun) - attained by using extractCMRRPhysio()
% -- respiration_files = same as above but for the resp files - attained by using extractCMRRPhysio()
% -- scan_timing = info file from Siemens - attained by using extractCMRRPhysio()
% -- movement = a cell of the movement regressor files for that
% participant (.txt, formatted for SPM)
To directly see the functions that produce this information, you can go to this repo (coming soon!).
- Next I amended the batch script to load a given participant’s info file and to retrieve this information for the required fields in the batch. The batch script winds up looking like this:
%% written by K. Garner, 2022
% uses batch info:
%-----------------------------------------------------------------------
% Job saved on 17-Aug-2021 10:35:05 by cfg_util (rev $Rev: 7345 $)
% spm SPM - SPM12 (7771)
% cfg_basicio BasicIO - Unknown
%-----------------------------------------------------------------------
% load participant info, and print into the appropriate batch fields below
% before running spm jobman
% assumes data is in BIDS format
%% load participant info
sub = '01';
dat_path = '/file/path/to/top-level/of-your-derivatives-fmriprep/folder';
task = 'attlearn';
load(fullfile(dat_path, sprintf('sub-%s', sub), 'ses-02', 'func', ...
sprintf('sub-%s_ses-02_task-%s_desc-physioinfo', sub, task)))
% set variables
nrun = info.nrun;
nscans = info.nscans;
cardiac_files = info.cardiac_files;
respiration_files = info.respiration_files;
scan_timing = info.scan_timing;
movement = info.movement;
%% initialise spm
spm_jobman('initcfg'); % check this for later
spm('defaults', 'FMRI');
%% run through runs, print info and run
for irun = 1:nrun
clear matlabbatch
matlabbatch{1}.spm.tools.physio.save_dir = cellstr(fullfile(dat_path, sprintf('sub-%s', sub), 'ses-02', 'func')); % 1
matlabbatch{1}.spm.tools.physio.log_files.vendor = 'Siemens_Tics';
matlabbatch{1}.spm.tools.physio.log_files.cardiac = cardiac_files(irun); % 2
matlabbatch{1}.spm.tools.physio.log_files.respiration = respiration_files(irun); % 3
matlabbatch{1}.spm.tools.physio.log_files.scan_timing = scan_timing(irun); % 4
matlabbatch{1}.spm.tools.physio.log_files.sampling_interval = [];
matlabbatch{1}.spm.tools.physio.log_files.relative_start_acquisition = 0;
matlabbatch{1}.spm.tools.physio.log_files.align_scan = 'last';
matlabbatch{1}.spm.tools.physio.scan_timing.sqpar.Nslices = 81;
matlabbatch{1}.spm.tools.physio.scan_timing.sqpar.NslicesPerBeat = [];
matlabbatch{1}.spm.tools.physio.scan_timing.sqpar.TR = 1.51;
matlabbatch{1}.spm.tools.physio.scan_timing.sqpar.Ndummies = 0;
matlabbatch{1}.spm.tools.physio.scan_timing.sqpar.Nscans = nscans(irun); % 5
matlabbatch{1}.spm.tools.physio.scan_timing.sqpar.onset_slice = 1;
matlabbatch{1}.spm.tools.physio.scan_timing.sqpar.time_slice_to_slice = [];
matlabbatch{1}.spm.tools.physio.scan_timing.sqpar.Nprep = [];
matlabbatch{1}.spm.tools.physio.scan_timing.sync.nominal = struct([]);
matlabbatch{1}.spm.tools.physio.preproc.cardiac.modality = 'PPU';
matlabbatch{1}.spm.tools.physio.preproc.cardiac.filter.no = struct([]);
matlabbatch{1}.spm.tools.physio.preproc.cardiac.initial_cpulse_select.auto_template.min = 0.4;
matlabbatch{1}.spm.tools.physio.preproc.cardiac.initial_cpulse_select.auto_template.file = 'initial_cpulse_kRpeakfile.mat';
matlabbatch{1}.spm.tools.physio.preproc.cardiac.initial_cpulse_select.auto_template.max_heart_rate_bpm = 90;
matlabbatch{1}.spm.tools.physio.preproc.cardiac.posthoc_cpulse_select.off = struct([]);
matlabbatch{1}.spm.tools.physio.preproc.respiratory.filter.passband = [0.01 2];
matlabbatch{1}.spm.tools.physio.preproc.respiratory.despike = true;
matlabbatch{1}.spm.tools.physio.model.output_multiple_regressors = 'mregress.txt';
matlabbatch{1}.spm.tools.physio.model.output_physio = 'physio';
matlabbatch{1}.spm.tools.physio.model.orthogonalise = 'none';
matlabbatch{1}.spm.tools.physio.model.censor_unreliable_recording_intervals = true; %false;
matlabbatch{1}.spm.tools.physio.model.retroicor.yes.order.c = 3;
matlabbatch{1}.spm.tools.physio.model.retroicor.yes.order.r = 4;
matlabbatch{1}.spm.tools.physio.model.retroicor.yes.order.cr = 1;
matlabbatch{1}.spm.tools.physio.model.rvt.no = struct([]);
matlabbatch{1}.spm.tools.physio.model.hrv.no = struct([]);
matlabbatch{1}.spm.tools.physio.model.noise_rois.no = struct([]);
matlabbatch{1}.spm.tools.physio.model.movement.yes.file_realignment_parameters = {fullfile(dat_path, sprintf('sub-%s', sub), 'ses-02', 'func', sprintf('sub-%s_ses-02_task-%s_run-%d_desc-motion_timeseries.txt', sub, task, irun))}; %8
matlabbatch{1}.spm.tools.physio.model.movement.yes.order = 6;
matlabbatch{1}.spm.tools.physio.model.movement.yes.censoring_method = 'FD';
matlabbatch{1}.spm.tools.physio.model.movement.yes.censoring_threshold = 0.5;
matlabbatch{1}.spm.tools.physio.model.other.no = struct([]);
matlabbatch{1}.spm.tools.physio.verbose.level = 2;
matlabbatch{1}.spm.tools.physio.verbose.fig_output_file = '';
matlabbatch{1}.spm.tools.physio.verbose.use_tabs = false;
spm_jobman('run', matlabbatch);
end
3. Ready to run on Neurodesk!
Now we have a batch script, we’re ready to run this on Neurodesk - yay!
First make sure the details at the top of the script are correct. You can see that this script could easily be amended to run multiple subjects.
On Neurodesk, go to the PhysIO toolbox, but select the command line tool rather than the GUI interface (physio r7771 instead of physioGUI r7771). This will take you to the container for the PhysIO toolbox.
Now to run your PhysIO batch script, type the command:
run_spm12.sh /opt/mcr/v99/ batch /your/batch/script/named_something.m
Et Voila! Physiological regressors are now yours - mua ha ha!
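As noted above, the batch script could easily be amended to run multiple subjects. A minimal sketch of such a wrapper in plain Python (the batch-script location and naming scheme here are hypothetical; adapt them to your own scripts):

```python
# Hypothetical multi-subject wrapper: build one run_spm12.sh call per subject.
# The /path/to/batch_sub-XX.m naming is an assumption for illustration.
subjects = ["01", "02", "03"]
commands = [
    f"run_spm12.sh /opt/mcr/v99/ batch /path/to/batch_sub-{sub}.m"
    for sub in subjects
]
for cmd in commands:
    print(cmd)  # swap print for subprocess.run(cmd.split()) to actually execute
```

Printing the commands first (a dry run) is a cheap way to verify paths before launching hours of processing.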
2.5 - Statistical Parametric Mapping (SPM)
This tutorial was created by Steffen Bollmann.
Email: s.bollmannn@uq.edu.au
Github: @stebo85
Twitter: @sbollmann_MRI
This tutorial is based on the excellent tutorial from Andy’s Brain book: https://andysbrainbook.readthedocs.io/en/latest/SPM/SPM_Overview.html Our version here is a shortened and adjusted version for using on the Neurodesk platform.
Download data
First, let’s download the data. We will use this open dataset: https://openneuro.org/datasets/ds000102/versions/00001/download
Open a terminal and use datalad to install the dataset:
cd neurodesktop-storage
datalad install https://github.com/OpenNeuroDatasets/ds000102.git

We will use subject 08 as an example here, so we use datalad to download sub-08. Since SPM doesn't support compressed files, we also need to unpack them:
cd ds000102
datalad get sub-08/
gunzip sub-08/anat/sub-08_T1w.nii.gz -f
gunzip sub-08/func/sub-08_task-flanker_run-1_bold.nii.gz -f
gunzip sub-08/func/sub-08_task-flanker_run-2_bold.nii.gz -f
chmod a+rw sub-08/ -R
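The same get/unpack steps generalize if you later process more subjects. A small sketch that prints the required commands (the extra subject "09" is hypothetical, for illustration only):

```python
# Hypothetical helper: print the datalad get / gunzip commands per subject.
# Only sub-08 is used in this tutorial; "09" is an illustrative placeholder.
subjects = ["08", "09"]
cmds = []
for sub in subjects:
    cmds.append(f"datalad get sub-{sub}/")
    cmds.append(f"gunzip sub-{sub}/anat/sub-{sub}_T1w.nii.gz -f")
    for run in (1, 2):
        cmds.append(f"gunzip sub-{sub}/func/sub-{sub}_task-flanker_run-{run}_bold.nii.gz -f")
print("\n".join(cmds))
```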
The task used is described here: https://andysbrainbook.readthedocs.io/en/latest/SPM/SPM_Short_Course/SPM_02_Flanker.html
Starting SPM and visualizing the data
Start spm12GUI from the Application Menu:
When the SPM menu has loaded, click on fMRI and the full SPM interface should open up:
For convenience, let's change our default directory to our example subject. Click on Utils and select CD:
Then navigate to sub-08 and select the directory in the right browser window:
Now let’s visualize the anatomical T1 scan of subject 08 by clicking on Display and navigating and selecting the anatomical scan:

Now let's look at the functional scans. Use CheckReg and open run-01. Then right-click and choose Browse .... Set frames to 1:146, then right-click and choose Select All.

Now we get a slider viewer and we can investigate all functional scans:
Let's check the alignment between the anatomical and the functional scans - use CheckReg and open the anatomical and the functional scan. They shouldn't align yet, because we haven't done any preprocessing:
Preprocessing the data
Realignment
Select Realign (Est & Reslice) from the SPM menu (the third option):
Then select the functional run (important: Select frames from 1:146 again!) and leave everything else as Defaults. Then hit run:
As an output we should see the realignment parameters:
Slice timing correction
Click on Slice timing in the SPM menu to bring up the Slice Timing section in the batch editor:
Select the realigned images (use filter rsub and Frames 1:146) and then enter the parameters:
- Number of Slices = 40
- TR = 2
- TA = 1.95
- Slice order = [1:2:40 2:2:40]
- Reference Slice = 1
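The TA value and the slice order above follow directly from the TR and the number of slices: TA = TR - TR/nslices, and the interleaved order acquires the odd slices first, then the even slices. A quick sanity check in plain Python, using the values from this tutorial:

```python
# Sanity check on the slice-timing parameters (values from this tutorial).
n_slices = 40
TR = 2.0
TA = TR - TR / n_slices  # time from the start of the first slice to the last
slice_order = list(range(1, n_slices + 1, 2)) + list(range(2, n_slices + 1, 2))
print(TA)                # 1.95
print(slice_order[:3], "...", slice_order[-3:])
```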

Coregistration
Now, we coregister the functional scans and the anatomical scan.
Click on Coregister (Estimate & Reslice) (the third option) in the SPM menu to bring up the batch editor:
Use the Mean image as the reference and the T1 scan as the source image and hit Play:
Let’s use CheckReg again and overlay a Contour (Right Click -> Contour -> Display onto -> all) to check the coregistration between the images:
Segmentation
Click the Segmentation button in the SPM menu:
Then change the following settings:
- Volumes = our coregistered anatomical scan rsub-08-T1w.nii
- Save Bias Corrected = Save Bias Corrected
- Deformation Fields = Forward
and hit Play again.
Apply normalization
Select Normalize (Write) from the SPM menu:
For the Deformation Field select the y_rsub-08 file we created in the last step and for the Images to Write select the arsub-08 functional images (Filter ^ar and Frames 1:146):
Hit Play again.
Checking the normalization
Use CheckReg to make sure that the functional scans (starting with w to indicate that they were warped: warsub-08) align with the template (found in /opt/spm12/spm12_mcr/spm12/spm12/canonical/avg305T1.nii):

Smoothing
Click the Smooth button in the SPM menu and select the warped functional scans:
Then click Play.
You can check the smoothing by using CheckReg again:
Analyzing the data
Click on Specify 1st-level, then set the following options:
- Directory: Select the sub-08 top level directory
- Units for design: Seconds
- Interscan interval: 2
- Data & Design: Click twice on New Subject/Session
- Select the smoothed, warped data from run 1 and run 2 for the two sessions respectively
- Create two Conditions per run and set the following:
- For Run 1:
- Name: Inc
- Onsets (you can copy from here and paste with CTRL-V): 0 10 20 52 88 130 144 174 236 248 260 274
- Durations: 2 (SPM will assume that it’s the same for each event)
- Name: Con
- Onsets: 32 42 64 76 102 116 154 164 184 196 208 222
- Durations: 2
- For Run 2:
- Name: Inc
- Onsets: 0 10 52 64 88 150 164 174 184 196 232 260
- Durations: 2
- Name: Con
- Onsets: 20 30 40 76 102 116 130 140 208 220 246 274
- Durations: 2
When done, click the green Play button.
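If you ever script this step, the condition timings above are easy to keep in plain data structures and to sanity-check. A quick check that each condition has 12 events and that no onset appears in both conditions of the same run:

```python
# Consistency check on the flanker onsets listed above.
run1 = {"Inc": [0, 10, 20, 52, 88, 130, 144, 174, 236, 248, 260, 274],
        "Con": [32, 42, 64, 76, 102, 116, 154, 164, 184, 196, 208, 222]}
run2 = {"Inc": [0, 10, 52, 64, 88, 150, 164, 174, 184, 196, 232, 260],
        "Con": [20, 30, 40, 76, 102, 116, 130, 140, 208, 220, 246, 274]}
for run in (run1, run2):
    assert all(len(onsets) == 12 for onsets in run.values())
    assert not set(run["Inc"]) & set(run["Con"])  # conditions must not overlap
print("onsets look consistent")
```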
We can review the design by clicking on Review in the SPM menu and selecting the SPM.mat file in the model directory we specified earlier. It should look like this:
Estimating the model
Click on Estimate in the SPM menu and select the SPM.mat file, then hit the green Play button.
Inference
Now open the Results section and select the SPM.mat file again. Then we can test our hypotheses:
Define a new contrast as:
- Name: Incongruent-Congruent
- Contrast weights vector: 0.5 -0.5 0.5 -0.5

Then we can view the results. Set the following options:
- masking: none
- p value adjustment to control: Click on “none”, and set the uncorrected p-value to 0.01.
- extent threshold {voxels}: 10

3 - MRI phase Processing
3.1 - Quantitative Susceptibility Mapping
This tutorial was created by Steffen Bollmann.
Github: @stebo85 Web: mri.sbollmann.net Twitter: @sbollmann_MRI
Quantitative Susceptibility Mapping in QSMxT
Neurodesk includes QSMxT, a complete and end-to-end QSM processing and analysis framework that excels at automatically reconstructing and processing QSM for large groups of participants.
QSMxT provides pipelines implemented in Python that:
- Automatically convert DICOM data to the Brain Imaging Data Structure (BIDS)
- Automatically reconstruct QSM, including steps for:
- Robust masking without anatomical priors
- Phase unwrapping (Laplacian based)
- Background field removal + dipole inversion (tgv_qsm)
- Multi-echo combination
- Automatically generate a common group space for the whole study, as well as average magnitude and QSM images that facilitate group-level analyses.
- Automatically segment T1w data and register them to the QSM space to extract quantitative values in anatomical regions of interest.
- Export quantitative data to CSV for all subjects using the automated segmentations, or a custom segmentation in the group space (we recommend ITK snap).
If you use QSMxT for a study, please cite https://doi.org/10.1101/2021.05.05.442850 (for QSMxT) and http://www.ncbi.nlm.nih.gov/pubmed/25731991 (for TGVQSM)
Download demo data
Open a terminal and run:
pip install osfclient
export PATH=$PATH:~/.local/bin
cd /neurodesktop-storage/
osf -p ru43c clone /neurodesktop-storage/qsmxt-demo
unzip /neurodesktop-storage/qsmxt-demo/osfstorage/GRE_2subj_1mm_TE20ms/sub1/GR_M_5_QSM_p2_1mmIso_TE20.zip -d /neurodesktop-storage/qsmxt-demo/dicoms
unzip /neurodesktop-storage/qsmxt-demo/osfstorage/GRE_2subj_1mm_TE20ms/sub1/GR_P_6_QSM_p2_1mmIso_TE20.zip -d /neurodesktop-storage/qsmxt-demo/dicoms
unzip /neurodesktop-storage/qsmxt-demo/osfstorage/GRE_2subj_1mm_TE20ms/sub2/GR_M_5_QSM_p2_1mmIso_TE20.zip -d /neurodesktop-storage/qsmxt-demo/dicoms
unzip /neurodesktop-storage/qsmxt-demo/osfstorage/GRE_2subj_1mm_TE20ms/sub2/GR_P_6_QSM_p2_1mmIso_TE20.zip -d /neurodesktop-storage/qsmxt-demo/dicoms
QSMxT Usage
Start QSMxT (in this demo we used 1.1.9) from the applications menu in the desktop (Neurodesk > Quantitative Imaging > qsmxt)
- Convert DICOM data to BIDS:
cd /neurodesktop-storage/qsmxt-demo
python3 /opt/QSMxT/run_0_dicomSort.py /neurodesktop-storage/qsmxt-demo/dicoms 00_dicom
python3 /opt/QSMxT/run_1_dicomConvert.py 00_dicom 01_bids
This will bring up an interactive question asking which sequence your QSM data are. It will automatically detect the QSM sequence if it has qsm or t2star in the protocol name, or you can use the command line argument --t2starw_series_patterns to specify it. This demo data comes without a structural scan (automatically recognized by t1w in the name, or specified with --t1w_series_patterns), so hit Enter to continue when it asks you to identify which scan the T1w scan is:
- Run QSM pipeline:
python3 /opt/QSMxT/run_2_qsm.py 01_bids 02_qsm_output
Then you can open a viewer (Visualization -> mricrogl -> mricroglGUI) and find the QSM outputs in /neurodesktop-storage/qsmxt-demo/02_qsm_output/qsm_final/_run_run-1/, for example: sub-170705-134431-std-1312211075243167001_ses-1_run-1_part-phase_T2starw_scaled_qsm_000_composite_average.nii
Please note that the demo dataset does not have a T1w scan for anatomical segmentation, so the subsequent steps in QSMxT (e.g. python3 /opt/QSMxT/run_3_segment.py 01_bids 03_segmentation) will NOT work.
3.2 - SWI
This tutorial was created by Steffen Bollmann.
Github: @stebo85 Web: mri.sbollmann.net Twitter: @sbollmann_MRI
Download demo data
Open a terminal and run:
pip install osfclient
cd /neurodesktop-storage/
osf -p ru43c fetch 01_bids.zip /neurodesktop-storage/swi-demo/01_bids.zip
unzip /neurodesktop-storage/swi-demo/01_bids.zip -d /neurodesktop-storage/swi-demo/
Open the CLEARSWI tool from the application menu:
Paste this Julia script into a file and execute it:
cd /neurodesktop-storage/
vi clearswi.jl
In vi, hit a or i to enter insert mode and then paste this:
using CLEARSWI
TEs = [20]
nifti_folder = "/neurodesktop-storage/swi-demo/01_bids/sub-170705134431std1312211075243167001/ses-1/anat"
magfile = joinpath(nifti_folder, "sub-170705134431std1312211075243167001_ses-1_acq-qsm_run-1_magnitude.nii.gz")
phasefile = joinpath(nifti_folder, "sub-170705134431std1312211075243167001_ses-1_acq-qsmPH00_run-1_phase.nii.gz")
mag = readmag(magfile);
phase = readphase(phasefile);
data = Data(mag, phase, mag.header, TEs);
swi = calculateSWI(data);
# mip = createIntensityProjection(swi, minimum); # minimum intensity projection, other Julia functions can be used instead of minimum
mip = createMIP(swi); # shorthand for createIntensityProjection(swi, minimum)
savenii(swi, "/neurodesktop-storage/swi-demo/swi.nii"; header=mag.header)
savenii(mip, "/neurodesktop-storage/swi-demo/mip.nii"; header=mag.header)
Press ESC, then SHIFT-Z-Z to save and exit, and run:
julia clearswi.jl
Open ITK snap from the Visualization Application’s menu and inspect the results (the outputs are in swi-demo/swi.nii and mip.nii)
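The createMIP call in the script above computes a minimum intensity projection over consecutive slices. The general idea can be sketched as follows (a conceptual illustration in plain Python, not CLEARSWI's actual Julia implementation, which has its own window defaults):

```python
def min_projection(volume, depth):
    """Per-voxel minimum over `depth` consecutive slices.
    `volume` is a list of 2D slices (each a list of rows)."""
    rows, cols = len(volume[0]), len(volume[0][0])
    out = []
    for z in range(len(volume) - depth + 1):
        window = volume[z:z + depth]
        out.append([[min(s[i][j] for s in window) for j in range(cols)]
                    for i in range(rows)])
    return out

vol = [[[3]], [[1]], [[2]], [[5]]]  # four 1x1 "slices"
print(min_projection(vol, 2))  # [[[1]], [[1]], [[2]]]
```

Dark structures such as veins survive the projection because the minimum over the window keeps the lowest intensity along the slice direction.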
3.3 - Unwrapping
This tutorial was created by Steffen Bollmann.
Github: @stebo85 Web: mri.sbollmann.net Twitter: @sbollmann_MRI
Download demo data
Open a terminal and run:
pip install osfclient
cd /neurodesktop-storage/
osf -p ru43c fetch 01_bids.zip /neurodesktop-storage/swi-demo/01_bids.zip
unzip /neurodesktop-storage/swi-demo/01_bids.zip -d /neurodesktop-storage/swi-demo/
mkdir /neurodesktop-storage/romeo-demo/
cp /neurodesktop-storage/swi-demo/01_bids/sub-170705134431std1312211075243167001/ses-1/anat/sub-170705134431std1312211075243167001_ses-1_acq-qsmPH00_run-1_phase.nii.gz /neurodesktop-storage/romeo-demo/phase.nii.gz
cp /neurodesktop-storage/swi-demo/01_bids/sub-170705134431std1312211075243167001/ses-1/anat/sub-170705134431std1312211075243167001_ses-1_acq-qsm_run-1_magnitude.nii.gz /neurodesktop-storage/romeo-demo/mag.nii.gz
gunzip /neurodesktop-storage/romeo-demo/mag.nii.gz
gunzip /neurodesktop-storage/romeo-demo/phase.nii.gz
Using ROMEO for phase unwrapping
Open the ROMEO tool from the application menu and run:
romeo -p /neurodesktop-storage/romeo-demo/phase.nii -m /neurodesktop-storage/romeo-demo/mag.nii -k nomask -o /neurodesktop-storage/romeo-demo/
4 - Open Data
4.1 - datalad
This tutorial was created by Steffen Bollmann.
Github: @stebo85
DataLad is an open-source tool to publish and access open datasets. In addition to many open data sources (OpenNeuro, CBRAIN, brainlife.io, CONP, DANDI, Courtois Neuromod, Dataverse, Neurobagel), it can also connect to the Open Science Framework (OSF): http://osf.io/
Publish a dataset
First we have to create a DataLad dataset:
datalad create my_dataset
# now add files to your project and then save the files with datalad
datalad save -m "added new files"
Now we can create a token on OSF (Account Settings -> Personal access tokens -> Create token) and authenticate:
datalad osf-credentials
Here is an example how to publish a dataset on the OSF:
# create sibling
datalad create-sibling-osf --title best-study-ever -s osf
# push
datalad push --to osf
The last step creates a DataLad dataset, which is not easily human-readable.
If you would like to create a human-readable dataset (but without the option of downloading it as a datalad dataset later on):
# create sibling
datalad create-sibling-osf --title best-study-ever-human-readable --mode exportonly -s osf-export
git-annex export HEAD --to osf-export-storage
Access a dataset
To download a dataset from the OSF (if it was uploaded as a DataLad dataset before):
datalad clone osf://ehnwz
cd ehnwz
# now get the files you want to download:
datalad get .
4.2 - Oracle Open Data
This tutorial was created by Steffen Bollmann.
Github: @stebo85
Oracle Open Data is an open platform for scientific data.
Publish a dataset
To publish your data there you need to get in touch with Oracle and create a project. The upload is then done via the OCI command line tool. For example, we uploaded one of our datasets there: https://opendata.oraclecloud.com/ords/r/opendata/opendata/details?data_set_id=28&clear=RR,5
Access a dataset
To download a dataset from Oracle Open data you can use curl or wget:
wget https://objectstorage.us-ashburn-1.oraclecloud.com/n/idrvm4tkz2a8/b/TOMCAT/o/TOMCAT_DIB/sub-01/ses-01_7T/anat/sub-01_ses-01_7T_IV1_defaced.nii.gz
curl -OL https://objectstorage.us-ashburn-1.oraclecloud.com/n/idrvm4tkz2a8/b/TOMCAT/o/TOMCAT_DIB/sub-01/ses-01_7T/anat/sub-01_ses-01_7T_IV1_defaced.nii.gz
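The public URLs above follow OCI Object Storage's fixed pattern of region, namespace (idrvm4tkz2a8), bucket (TOMCAT), and object path. A small helper that assembles such URLs:

```python
def oci_object_url(region, namespace, bucket, obj):
    """Assemble a public OCI Object Storage download URL from its components."""
    return (f"https://objectstorage.{region}.oraclecloud.com"
            f"/n/{namespace}/b/{bucket}/o/{obj}")

url = oci_object_url(
    "us-ashburn-1", "idrvm4tkz2a8", "TOMCAT",
    "TOMCAT_DIB/sub-01/ses-01_7T/anat/sub-01_ses-01_7T_IV1_defaced.nii.gz",
)
print(url)
```

This reproduces the wget/curl URL above and makes it easy to script downloads of many objects from the same bucket.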
Or you can mount the object storage bucket inside Neurodesk using rclone (requires rclone v1.60.1+; this does not work on the hosted Neurodesk instances on play.neurodesk.org due to limited privileges):
mkdir -p ~/TOMCAT
rclone mount opendata3p:TOMCAT ~/TOMCAT &
This assumes the following ~/.config/rclone/rclone.conf configuration (which is setup already for you inside Neurodesk):
[opendata3p]
type = oracleobjectstorage
provider = no_auth
namespace = idrvm4tkz2a8
region = us-ashburn-1
4.3 - osfclient
This tutorial was created by Steffen Bollmann.
Github: @stebo85
The osfclient is an open-source tool to publish and access open datasets on the Open Science Framework (OSF): http://osf.io/
Publish a dataset
Here is an example how to publish a dataset on the OSF:
osf init
# enter your OSF credentials and project ID
# now copy your data into the directory, cd into the directory and then run:
osf upload -r . osfstorage/data
Access a dataset
To download a dataset from the OSF:
osf -p PROJECTID_HERE_eg_y5cq9 clone .
5 - Reproducibility
5.1 - Reproducible script execution with DataLad
This tutorial was created by Sin Kim.
Github: @kimsin98
Twitter: @SinKim98
In addition to being a convenient method of sharing data, DataLad can also help you create reproducible analyses by recording how certain result files were produced (i.e. provenance). This helps others (and you!) easily keep track of analyses and rerun them.
This tutorial will assume you know the basics of navigating the terminal. If you are not familiar with the terminal at all, check the DataLad Handbook’s brief guide.
Create a DataLad project
A DataLad dataset can be any collection of files in folders, so it could be many things including an analysis project. Let’s go to the Neurodesktop storage and create a dataset for some project. Open a terminal and enter these commands:
$ cd /storage
$ datalad create -c yoda SomeProject
[INFO ] Creating a new annex repo at /home/user/Desktop/storage/SomeProject
[INFO ] Running procedure cfg_yoda
[INFO ] == Command start (output follows) =====
[INFO ] == Command exit (modification check follows) =====
create(ok): /home/user/Desktop/storage/SomeProject (dataset)
yoda?
The -c yoda option configures the dataset according to YODA, a set of intuitive organizational principles for data analyses that works especially well with version control.
Go into the dataset and check its contents.
$ cd SomeProject
$ ls
CHANGELOG.md README.md code
Create a script
One of DataLad’s strengths is that it assumes very little about your datasets. Thus, it can work with any other software on the terminal: Python, R, MATLAB, AFNI, FSL, FreeSurfer, etc. For this tutorial, we will run the simplest Julia script.
$ ml julia
$ cat > code/hello.jl << EOF
println("hello neurodesktop")
EOF
EOF?
For the sake of demonstration, we create the script using built-in Bash terminal commands only (a here document that starts after << EOF and ends when you enter EOF), but you may use whatever text editor you are most comfortable with to create the code/hello.jl file.
You may want to test (parts of) your script.
$ julia code/hello.jl > hello.txt
$ cat hello.txt
hello neurodesktop
Run and record
Before you run your analyses, you should check the dataset for changes and save or clean them.
$ datalad status
untracked: /home/user/Desktop/storage/SomeProject/code/hello.jl (file)
untracked: /home/user/Desktop/storage/SomeProject/hello.txt (file)
$ datalad save -m 'hello script' code/
add(ok): code/hello.jl (file)
save(ok): . (dataset)
action summary:
add (ok: 1)
save (ok: 1)
$ git clean -i
Would remove the following item:
hello.txt
*** Commands ***
1: clean 2: filter by pattern 3: select by numbers 4: ask each 5: quit 6: help
What now> 1
Removing hello.txt
git
git clean is for removing new, untracked files. For resetting existing, modified files to the last saved version, you would need git reset --hard.
When the dataset is clean, we are ready to datalad run!
$ mkdir outputs
$ datalad run -m 'run hello' -o 'outputs/hello.txt' 'julia code/hello.jl > outputs/hello.txt'
[INFO ] == Command start (output follows) =====
[INFO ] == Command exit (modification check follows) =====
add(ok): outputs/hello.txt (file)
save(ok): . (dataset)
Let's go over each of the arguments:
- -m 'run hello': Human-readable message to record in the dataset log.
- -o 'outputs/hello.txt': Expected output of the script. You can specify multiple -o arguments and/or use wildcards like 'outputs/*'. This script has no inputs, but you can similarly specify inputs with -i.
- 'julia ... ': The final argument is the command that DataLad will run.
Before getting to the exciting part, let’s do a quick sanity check.
$ cat outputs/hello.txt
hello neurodesktop
View history and rerun
So what's so good about the extra hassle of running scripts with datalad run? To see that, you will need to pretend you are someone else (or future you!) and install the dataset somewhere else. Note that the -s argument would probably be a URL if you really were someone else.
$ cd ~
$ datalad install -s /neurodesktop-storage/SomeProject
install(ok): /home/user/SomeProject (dataset)
$ cd SomeProject
Because a DataLad dataset is a Git repository, people who download your dataset can see exactly how outputs/hello.txt was created using Git's logs.
$ git log outputs/hello.txt
commit 52cff839596ff6e33aadf925d15ab26a607317de (HEAD -> master, origin/master, origin/HEAD)
Author: Neurodesk User <user@neurodesk.github.io>
Date: Thu Dec 9 08:31:15 2021 +0000
[DATALAD RUNCMD] run hello
=== Do not change lines below ===
{
"chain": [],
"cmd": "julia code/hello.jl > outputs/hello.txt",
"dsid": "1e82813d-856f-4118-b54d-c3823e025709",
"exit": 0,
"extra_inputs": [],
"inputs": [],
"outputs": [
"outputs/hello.txt"
],
"pwd": "."
}
^^^ Do not change lines above ^^^
Then, using that information, they can re-run the command that created the file using datalad rerun!
$ datalad rerun 52cf
[INFO ] run commit 52cff83; (run hello)
run.remove(ok): outputs/hello.txt (file) [Removed file]
[INFO ] == Command start (output follows) =====
[INFO ] == Command exit (modification check follows) =====
add(ok): outputs/hello.txt (file)
action summary:
add (ok: 1)
run.remove (ok: 1)
save (notneeded: 1)
git
In Git, each commit (save state) is assigned a long, unique machine-generated ID. 52cf refers to the commit whose ID starts with those characters. Usually 4 characters is the minimum needed to uniquely identify a commit. Of course, this ID is probably different for you, so change this argument to match your commit.
See Also
- To learn more basics and advanced applications of DataLad, check out the DataLad Handbook.
- DataLad is built on top of the popular version control tool Git. There are many great resources on Git online, like this free book.
- DataLad is only available on the terminal. For a detailed introduction on the Bash terminal, check the BashGuide.
- For even more reproducibility, you can include containers with your dataset to run analyses in. DataLad has an extension to support script execution in containers. See here.
6 - Spectroscopy
6.1 - Spectroscopy with lcmodel
This tutorial was created by Steffen Bollmann.
Github: @stebo85 Web: mri.sbollmann.net Twitter: @sbollmann_MRI
Open lcmodel from the menu: Applications -> Spectroscopy -> lcmodel -> lcmodel 6.3
Run setup_lcmodel.sh, then run lcmgui.
We packed example data into the container (https://zenodo.org/record/3904443/) and we will use this to show a basic analysis.
The example data comes in the Varian fid format, so click on Varian:

and then select the fid data in: /opt/datasets/Spectra_hippocampus(rat)_TE02/s_20131015_03_BDL106_scan0/isise_01.fid

Then Change BASIS and select the appropriate basis set in /opt/datasets/Spectra_hippocampus(rat)_TE02/Control_files_Basis_set

Then hit Run LCModel:

and confirm:

then wait a couple of minutes until the analyzed spectra appear - by closing the window you can go through the results:

the results are also saved in ~/.lcmodel/saved/
7 - Structural Imaging
7.1 - FreeSurfer
This tutorial was created by Steffen Bollmann.
Github: @stebo85 Web: mri.sbollmann.net Twitter: @sbollmann_MRI
FreeSurfer Example using module load (e.g. on an HPC)
Download data:
wget https://objectstorage.us-ashburn-1.oraclecloud.com/n/idrvm4tkz2a8/b/TOMCAT/o/TOMCAT_DIB/sub-01/ses-01_7T/anat/sub-01_ses-01_7T_T1w_defaced.nii.gz
# or alternatively:
curl -OL https://objectstorage.us-ashburn-1.oraclecloud.com/n/idrvm4tkz2a8/b/TOMCAT/o/TOMCAT_DIB/sub-01/ses-01_7T/anat/sub-01_ses-01_7T_T1w_defaced.nii.gz
Setup FreeSurfer:
ml freesurfer/7.3.2
mkdir ~/freesurfer-output
export SINGULARITYENV_SUBJECTS_DIR=~/freesurfer-output
Run Recon all pipeline:
recon-all -subject test-subject -i ~/sub-01_ses-01_7T_T1w_defaced.nii.gz -all
Alternative instructions for using Freesurfer via the Neurodesk application menu
Download demo data
Open a terminal and run:
pip install osfclient
osf -p bt4ez fetch TOMCAT_DIB/sub-01/ses-01_7T/anat/sub-01_ses-01_7T_T1w_defaced.nii.gz /neurodesktop-storage/sub-01_ses-01_7T_T1w_defaced.nii.gz
FreeSurfer License file:
Before using FreeSurfer you need to request a license here (https://surfer.nmr.mgh.harvard.edu/registration.html) and store it in your home directory as ~/.license
FreeSurfer Example
Open FreeSurfer (Neurodesk -> Image Segmentation -> Freesurfer -> Freesurfer 7.1.1)
Setup FreeSurfer license (for example - replace with your license):
echo "Steffen.Bollmann@cai.uq.edu.au
> 21029
> *Cqyn12sqTCxo
> FSxgcvGkNR59Y" >> ~/.license
export FS_LICENSE=~/.license
Setup FreeSurfer:
mkdir /neurodesktop-storage/freesurfer-output
source /opt/freesurfer-7.1.1/SetUpFreeSurfer.sh
export SUBJECTS_DIR=/neurodesktop-storage/freesurfer-output
Run Recon all pipeline:
recon-all -subject test-subject -i /neurodesktop-storage/sub-01_ses-01_7T_T1w_defaced.nii.gz -all
7.2 - Structural connectivity dMRI
This tutorial was created by Joan Amos.
Email: joan@std.uestc.edu.cn Github: @Joanone
References:
The steps used for this tutorial were referenced from:
https://github.com/civier/HCP-dMRI-connectome
https://andysbrainbook.readthedocs.io/en/latest/MRtrix/MRtrix_Course/MRtrix_00_Diffusion_Overview.html
https://mrtrix.readthedocs.io/en/latest/quantitative_structural_connectivity/structural_connectome.html
Data Description
Reference:
The single subject data used in this tutorial has been preprocessed and was downloaded from:
https://db.humanconnectome.org/
100307_3T_Structural_preproc.zip 100307_3T_Diffusion_preproc.zip
Download demo data:
https://1drv.ms/u/s!AjZJgBZ_P9UO8nWvAFwQyKQnrroe?e=6qmRlQ - Diffusion data
https://1drv.ms/u/s!AjZJgBZ_P9UO8nblYQyUVsibqggs?e=mkwLpQ - Structural data
Required structural preprocessed input files
aparc+aseg.nii.gz T1w_acpc_dc_restore_brain.nii.gz
Required diffusion preprocessed input files
bvals bvecs data.nii.gz
Install Neurodesk on windows and mount external storage on your host computer
References: https://neurodesk.github.io/docs/neurodesktop/getting-started/windows/ https://neurodesk.github.io/docs/neurodesktop/storage/
N/B: Constructing the structural connectivity from dMRI HCP data is computationally intensive, so ensure you have sufficient disk space (>100GB) and RAM (16 to 32GB).
Open the powershell terminal and run:
docker run --shm-size=1gb -it --privileged --name neurodesktop -v C:/neurodesktop-storage:/neurodesktop-storage -v D:/moredata:/data -p 8080:8080 -h neurodesktop-20220222 vnmd/neurodesktop:20220222
Navigate to the mounted storage -> moredata -> create a new folder of your choice -> copy the required input files into a folder -> 100307
N/B: The folder created in this tutorial was tagged “Test”
Open a terminal in neurodesk and run:
cd /data/Test/100307
Activate mrtrix3 software in the neurodesk terminal
ml mrtrix3/3.0.3
N/B: An advantage Neurodesk offers is that the software version can be selected from a range of options, which caters for reproducibility. The mrtrix3 (3.0.3) version was used in this tutorial.
Step 1: Further pre-processing
Extract data.nii.gz to enable memory-mapping. The extracted files are about 4.5GB:
gunzip -c data.nii.gz > data.nii;
mrconvert data.nii DWI.mif -fslgrad bvecs bvals -datatype float32 -stride 0,0,0,1 -force -info;
rm -f data.nii
Perform bias correction:
dwibiascorrect ants DWI.mif DWI_bias_ants.mif -bias bias_ants_field.mif -force -info;
Extract the response function. Uses -stride 0,0,0,1:
dwi2response dhollander DWI_bias_ants.mif response_wm.txt response_gm.txt response_csf.txt -voxels RF_voxels.mif -force;
dwiextract DWI_bias_ants.mif - -bzero | mrmath - mean meanb0.mif -axis 3 -force -info
Generate mask:
dwi2mask DWI_bias_ants.mif DWI_mask.mif -force -info;
Generate Fibre Orientation Distributions (FODs):
dwi2fod msmt_csd DWI_bias_ants.mif response_wm.txt wmfod.mif response_gm.txt gm.mif response_csf.txt csf.mif -mask DWI_mask.mif -force -info;
Perform normalization:
mtnormalise wmfod.mif wmfod_norm.mif gm.mif gm_norm.mif csf.mif csf_norm.mif -mask DWI_mask.mif -check_norm mtnormalise_norm.mif -check_mask mtnormalise_mask.mif -force -info
Generate a 5 tissue image:
5ttgen fsl T1w_acpc_dc_restore_brain.nii.gz 5TT.mif -premasked
Convert the B0 image:
mrconvert meanb0.mif mean_b0.nii.gz
Activate the fsl and afni softwares in the neurodesk terminal:
ml fsl/6.0.3
ml afni/21.0.0
Convert the segmented image to NIfTI, then use "fslroi" to extract the first volume, which corresponds to the Grey Matter Segmentation:
mrconvert 5TT.mif 5TT.nii.gz
fslroi 5TT.nii.gz 5TT_vol0.nii.gz 0 1
Use “flirt” command to coregister the two datasets:
flirt -in mean_b0.nii.gz -ref 5TT_vol0.nii.gz -interp nearestneighbour -dof 6 -omat diff2struct_fsl.mat
Convert the transformation matrix to a format readable by MRtrix:
transformconvert diff2struct_fsl.mat mean_b0.nii.gz 5TT.nii.gz flirt_import diff2struct_mrtrix.txt
Coregister the anatomical image to the diffusion image:
mrtransform 5TT.mif -linear diff2struct_mrtrix.txt -inverse 5TT_coreg.mif
Create the seed boundary which separates the grey from the white matter. The command "5tt2gmwmi" denotes (5 tissue type (segmentation) to grey matter/white matter interface):
5tt2gmwmi 5TT_coreg.mif gmwmSeed_coreg.mif
Step 2: Tractogram construction
The probabilistic tractography which is the default in MRtrix is used in this tutorial. The default method is the iFOD2 algorithm. The number of streamlines used is 10 million, this was chosen to save computational time:
tckgen -act 5TT_coreg.mif -backtrack -seed_gmwmi gmwmSeed_coreg.mif -nthreads 8 -minlength 5.0 -maxlength 300 -cutoff 0.06 -select 10000000 wmfod_norm.mif tracks_10M.tck -force
Step 3: SIFT2 construction
The generated streamlines can be refined with tcksift2 to counterbalance the overfitting. This creates a text file containing weights for each voxel in the brain:
tcksift2 -act 5TT_coreg.mif -out_mu sift_mu.txt -out_coeffs sift_coeffs.txt -nthreads 8 tracks_10M.tck wmfod_norm.mif sift_1M.txt -force
Step 4: Connectome construction
In constructing the connectome, the Desikan-Killiany atlas, which includes cortical and sub-cortical regions (84 regions), was used.
Copy the FreeSurferColorLUT.txt file from the ml freesurfer 7.2.0 singularity container to the subject’s folder
cp /opt/freesurfer-7.2.0/FreeSurferColorLUT.txt /data/Test/100307
Copy the fs_default.txt file from the ml mrtrix3 3.0.3 singularity container to the subject’s folder
cp /opt/mrtrix3-3.0.3/share/mrtrix3/labelconvert/fs_default.txt /data/Test/100307
The command labelconvert will use the parcellation and segmentation output of FreeSurfer to create a new parcellated file in .mif format:
labelconvert aparc+aseg.nii.gz FreeSurferColorLUT.txt fs_default.txt nodes.mif -force
Perform nodes co-registration:
mrtransform nodes.mif -linear diff2struct_mrtrix.txt -inverse -datatype uint32 nodes_coreg.mif -force
Create a whole-brain connectome which denotes the streamlines between each parcellation pair in the atlas. The “symmetric” option makes the lower and upper diagonal the same, the “scale_invnodevol” option scales the connectome by the inverse of the size of the node:
tck2connectome -symmetric -zero_diagonal -scale_invnodevol -tck_weights_in sift_1M.txt tracks_10M.tck nodes_coreg.mif nodes.csv -out_assignment assignments_nodes.csv -force
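The "symmetric" option described above mirrors the matrix so that the connection strength from node i to node j equals that from node j to node i. On a toy matrix the operation looks like this (a conceptual sketch, not MRtrix's implementation):

```python
def symmetrize(m):
    """Mirror the upper triangle into the lower triangle of a list-of-lists matrix."""
    for i in range(len(m)):
        for j in range(i):
            m[i][j] = m[j][i]
    return m

m = [[0, 5, 2],
     [0, 0, 7],
     [0, 0, 0]]
print(symmetrize(m))  # [[0, 5, 2], [5, 0, 7], [2, 7, 0]]
```

For the real connectome, the same symmetry means you only need to inspect one triangle of the 84x84 nodes.csv matrix.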
Viewing the connectome
The generated nodes.csv file can be viewed outside neurodesk as a matrix in Matlab.
connectome=importdata('nodes.csv');
imagesc(connectome,[0 1])
8 - Documentation
8.1 - Template for workflow creation
This tutorial was created by Name P. Namington.
Email: n.namington@institution.edu.au
Github: @Namesgit
Twitter: @Nameshandle
Welcome to the workflow template, which you can use to contribute your own neurodesk workflow to our documentation. We aim to collect a wide variety of workflows representing the spectrum of tools available under the neurodesk architecture and the diversity in how researchers might apply them. Please add plenty of descriptive detail and make sure that all steps of the workflow work before submitting the tutorial.
How to contribute a new workflow
Begin by creating a copy of our documentation that you can edit:
- Visit the github repository for the Neurodesk documentation (https://github.com/NeuroDesk/neurodesk.github.io).
- Fork the repository.
- You should now have your own copy of the documentation, which you can alter without affecting our official documentation. You should see a panel stating “This branch is up to date with Neurodesk:main.” If someone else makes a change to the official documentation, the statement will change to reflect this. You can bring your repository up to date by clicking “Fetch upstream”.
Next, create your workflow:
- Clone your forked version of our documentation to a location of your choice on your computer.
- In this new folder, navigate to “neurodesk.github.io/content/en/tutorials” and then navigate to the subfolder you believe your workflow belongs in (e.g. “/functional_imaging”).
- Create a new, appropriately named markdown file to house your workflow. (e.g. “/physio.md”)
- Open this file in the editor of your choice (we recommend vscode) and populate it with your workflow! Please use this template as a style guide; it is located at “neurodesk.github.io/content/en/tutorials/documentation/workflowtemplate.md”. You’re also welcome to have a look at the other workflows already documented on our website for inspiration.
Finally, contribute your new workflow to the official documentation:
- Once you are happy with your workflow, make sure you commit all your changes and push these local commits to github.
- Navigate to your forked version of the repository on github.
- Before you proceed, make sure you are up to date with our upstream documentation; you may need to fetch upstream changes.
- Now you can preview the changes before contributing them upstream. To do this, click on the “Actions” tab and enable the Actions (“I understand my workflows…”). The first build will fail (due to a bug with the Github token), but the second build will work.
- Then open the repository settings and check that Pages points to gh-pages; when you click on the link, the preview site should appear.
- To contribute your changes, click “Contribute”, and then “Open pull request”.
- Give your pull request a title (e.g. “Document PhysIO workflow”), leave a comment briefly describing what you have done, and then create the pull request.
- Someone from the Neurodesk team will review and accept your workflow, and it will appear on our website soon!
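The local steps above (create the file, add, commit) boil down to a handful of git commands. The sketch below demonstrates them in a throwaway local repository with an example file name, so it can run anywhere; in practice you would work inside your cloned fork and finish with a git push to your fork before opening the pull request.

```shell
# Throwaway demo repository standing in for your cloned fork of
# neurodesk.github.io (paths mirror the tutorial layout described above).
git init demo-docs
cd demo-docs
mkdir -p content/en/tutorials/functional_imaging
echo "# My PhysIO workflow" > content/en/tutorials/functional_imaging/physio.md
git add content/en/tutorials/functional_imaging/physio.md
# The -c flags supply a throwaway identity so the commit works in a clean environment.
git -c user.email=demo@example.com -c user.name=Demo commit -m "Document PhysIO workflow"
```

In your real fork, the last step would be pushing the commit (e.g. `git push origin main`), after which GitHub offers the “Contribute” / “Open pull request” buttons described above.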
Thanks so much for taking the time to contribute your workflow to the Neurodesk community! If you have any feedback on the process, please let us know on github discussions.
Formatting guidelines
You can embellish your text in this tutorial using markdown conventions; text can be bold, italic, or strikethrough. You can also add Links, and you can organise your tutorial with headers, starting at level 2 (the page title is a level 1 header):
Level 2 heading
You can also include progressively smaller subheadings:
Level 3 heading
Some more detailed information.
Level 4 heading
Even more detailed information.
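For reference, the emphasis and heading styles above are written in standard Markdown source as follows (the link URL is just an example):

```markdown
Text can be **bold**, *italic*, or ~~strikethrough~~.
You can also add [Links](https://example.com).

## Level 2 heading
### Level 3 heading
#### Level 4 heading
```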
Code blocks
You can add code blocks to your tutorial as follows:
# Some example code
import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(a+b)
Or add syntax highlighting to your codeblocks:
# Some example code
import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(a+b)
Or produce advanced code or command-line formatting, including program output, using an HTML snippet:
# Some example code
import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(a+b)
[4 6]
You can also add code snippets, e.g. var foo = "bar";, which will be shown inline.
Images
To add screenshots to your tutorial, create a subfolder in neurodesk.github.io/static with the same link name as your tutorial. Add your screenshot to this folder, keeping in mind that you may want to adjust your screenshot to a reasonable size before uploading. You can then embed these images in your tutorial using the following convention:
 <!--  -->
Alerts and warnings
You can grab the reader’s attention to particularly important information with quoteblocks, alerts and warnings:

This is a quoteblock

Note
This is an alert with a title.

Warning
This is a warning with a title.

You can also segment information as follows:
There’s a horizontal rule above and below this.
Or add page information:
This is a placeholder. Replace it with your own content.
Tables
You may want to order information in a table as follows:
| Neuroscientist | Notable work | Lifetime |
|---|---|---|
| Santiago Ramón y Cajal | Investigations on microscopic structure of the brain | 1852–1934 |
| Rita Levi-Montalcini | Discovery of nerve growth factor (NGF) | 1909–2012 |
| Anne Treisman | Feature integration theory of attention | 1935–2018 |
Lists
You may want to organise information in a list as follows:
Here is an unordered list:
- Rstudio
- JASP
- SPSS
And an ordered list:
- Collect data
- Try to install analysis software
- Cry a little
And an unordered task list:
- Install Neurodesktop
- Analyse data
- Take a vacation
And a “mixed” task list:
- writing
- ?
- more writing probably
And a nested list:
- EEG file extensions
- .eeg, .vhdr, .vmrk
- .edf
- .bdf
- .set, .fdt
- .smr
- MEG file extensions
- .ds
- .fif
- .sqd
- .raw
- .kdf