Neurodesk is a flexible and scalable data analysis environment for reproducible neuroimaging.
2. Choose Your Setup:
Neurodesk can be used on various platforms including a local PC, High-Performance Computing (HPC), Cloud, and Google Colab. It supports Linux, Mac, and Windows operating systems. You can interact with it through a desktop interface, command line, container, or VSCode. Choose the setup that best suits your needs based on this table.
Each Jupyter notebook in the repository is equipped with two buttons at the top: a Binder button and a Google Colab button. These buttons will allow you to interact with the notebooks in a cloud-based environment. The environment is pre-configured to support Neurodesk, so you can start experimenting with the notebooks right away without having to install any additional software or packages.
2 - Tutorials
Tutorials
2.1 - Electrophysiology
Tutorials about processing of EEG/MEG/ECoG data
2.1.1 - Analysing M/EEG Data with FieldTrip
A brief guide to using FieldTrip to analyse electrophysiological data within Neurodesk.
For more information on getting set up with a Neurodesk environment, see here
Please note that this container uses a compiled version of FieldTrip to run scripts (without needing a Matlab license). Code development is not currently supported within the container and needs to be carried out separately in Matlab.
Getting started
Navigate to Neurodesk->Electrophysiology->fieldtrip->fieldtrip20211114 in the menu:
Once this window is loaded, you are ready to go:
Type the following into the command window (replacing “./yourscript.m” with the name of your custom script - if the script is in the current folder, use “./” before the script name like in the example; otherwise, please supply the full path):
run_fieldtrip.sh /opt/MCR/v99 ./yourscript.m
For example, here we ran a script to browse some raw data:
The FieldTrip GUI is displayed automatically and functions as it normally would when running inside Matlab.
NOTES:
The script can only call FieldTrip and SPM functions (these are the only functions in the search path, and the search path cannot be altered using addpath)
The script cannot include internal functions
The script can use all the MATLAB toolboxes included in the compiled version of FieldTrip
2.1.2 - Analysing EEG Data with MNE
Use mne-python to load, pre-process, and plot example EEG data in a jupyter notebook through vscode.
This tutorial was created by Angela Renton.
Github: @air2310
Getting Setup with Neurodesk
For more information on getting set up with a Neurodesk environment, see here
Getting started
To begin, navigate to Neurodesk->Electrophysiology->mne->vscodeGUI 0.23.4 in the menu. This version of vscode has been installed in a software container together with a conda environment containing MNE-python. Note that if you open any other version of vscode in Neurodesk, you will not be able to access the MNE conda environment.
Open the folder: “/home/user/Desktop/storage” or a subfolder in which you would like to store this demo. In this folder, create a new file named “EEGDemo.ipynb” or something similar:
If this is your first time opening a Jupyter notebook on vscode in neurodesktop, you may see the following popup. If so, click “install” to install the vscode extensions for Jupyter.
Select MNE python kernel
Next, we need to direct vscode to use the python kernel associated with MNE. In the top right corner of your empty jupyter notebook, click “Select Kernel”:
Then, select mne-0.23.4 from the dropdown menu, which should look something like this:
Activate the MNE conda environment in the terminal
Next, we’ll activate the same MNE environment in a terminal. From the top menu in vscode, select Terminal->New Terminal, or hit [Ctrl]+[Shift]+[`].
If this is your first time using vscode in this container, you may have to initialise conda by typing conda init bash in the bash terminal. After initialising bash, you will have to close and then reopen the terminal.
Once you have initialised conda, you can activate the MNE environment in the terminal:
conda activate mne-0.23.4
You should now see “(mne-0.23.4)” at the start of the prompt line in the terminal.
Download sample data
In the terminal (in which you have activated the MNE environment), input the following code to download some BIDS formatted sample EEG data:
Remember to update the path to the location you are storing this tutorial!
This is a small dataset with only 5 EEG channels from a single participant. The participant is viewing a frequency tagged display and is cued to attend to dots tagged at one frequency or another (6 Hz, 7.5 Hz) for long, 15 s trials. To read more about the dataset, click here
Plotting settings
To make sure our plots retain their interactivity, set the following line at the top of your notebook:
%matplotlib qt
This will mean your figures pop out as individual, interactive plots that will allow you to explore the data, rather than as static, inline plots. You can switch “qt” to “inline” to switch back to default, inline plotting.
Loading and processing data
NOTE: MNE has many helpful tutorials which delve into data processing and analysis using MNE-python in much further detail. These can be found here
Begin by importing the necessary modules and creating a pointer to the data:
# Interactive plotting
%matplotlib qt
# Import modules
import os
import numpy as np
import mne
# Load data
sample_data_folder = '/neurodesktop-storage/EEGDemo/Data_sample'
sample_data_raw_file = os.path.join(sample_data_folder, 'sub-01', 'eeg',
'sub-01_task-FeatAttnDec_eeg.vhdr')
raw = mne.io.read_raw_brainvision(sample_data_raw_file, preload=True)
The raw.info structure contains information about the dataset:
# Display data info
print(raw)
print(raw.info)
This data file did not include a montage. Let’s create one using standard values for the electrodes we have:
Let’s visualise our data again now that it’s cleaner:
#plot results again, this time with some events and scaling.
eeg_data_interp.plot(events=events, duration=10.0, scalings=dict(eeg=0.00005), color='k', event_color='r')
That’s looking good! We can even see hints of the frequency tagging. It’s about time to epoch our data.
# Epoch to events of interest
event_id = {'attend 6Hz K': 23, 'attend 7.5Hz K': 27}
# Extract 15 s epochs relative to events, baseline correct, linear detrend, and reject
# epochs where eeg amplitude is > 400 µV (0.000400 V)
epochs = mne.Epochs(eeg_data_interp, events, event_id=event_id, tmin=0,
tmax=15, baseline=(0, 0), reject=dict(eeg=0.000400), detrend=1)
# Drop bad trials
epochs.drop_bad()
We can average these epochs to form Event Related Potentials (ERPs):
# Average epochs to form ERPs
attend6 = epochs['attend 6Hz K'].average()
attend75 = epochs['attend 7.5Hz K'].average()
# Plot ERPs
evokeds = dict(attend6=list(epochs['attend 6Hz K'].iter_evoked()),
attend75=list(epochs['attend 7.5Hz K'].iter_evoked()))
mne.viz.plot_compare_evokeds(evokeds, combine='mean')
In this plot, we can see that the data are frequency tagged. While these data were collected, the participant was performing an attention task in which two visual stimuli were flickering at 6 Hz and 7.5 Hz respectively. On each trial the participant was cued to monitor one of these two stimuli for brief bursts of motion. From previous research, we expect that the steady-state visual evoked potential (SSVEP) should be larger at the attended frequency than the unattended frequency. Let’s check whether this is true.
We’ll begin by exporting our epoched EEG data to a numpy array
# Preallocate
n_samples = attend6.data.shape[1]
sampling_freq = 1200 # sampling frequency
epochs_np = np.empty((n_samples, 2))
# Get data - averaging across EEG channels
epochs_np[:,0] = attend6.data.mean(axis=0)
epochs_np[:,1] = attend75.data.mean(axis=0)
Next, we can use a Fast Fourier Transform (FFT) to transform the data from the time domain to the frequency domain. For this, we’ll need to import the FFT packages from scipy:
from scipy.fft import fft, fftfreq, fftshift
# Get FFT
fftdat = np.abs(fft(epochs_np, axis=0)) / n_samples
freq = fftfreq(n_samples, d=1 / sampling_freq) # get frequency bins
Now that we have our frequency transformed data, we can plot our two conditions to assess whether attention altered the SSVEP amplitudes:
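The plotting code itself can look something like the sketch below (it assumes the fftdat and freq arrays computed above; random stand-in data are fabricated here so the snippet is self-contained):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-ins for the arrays computed in the previous cells
sampling_freq, n_samples = 1200, 18000  # 15 s epochs at 1200 Hz
freq = np.fft.fftfreq(n_samples, d=1 / sampling_freq)
fftdat = np.abs(np.fft.fft(np.random.randn(n_samples, 2), axis=0)) / n_samples

# Overlay the two attention conditions around the tagged frequencies
fig, ax = plt.subplots()
ax.plot(freq, fftdat[:, 0], label='attend 6 Hz')
ax.plot(freq, fftdat[:, 1], label='attend 7.5 Hz')
ax.set_xlim(4, 17)  # zoom in on the 6 Hz and 7.5 Hz tags and their harmonics
ax.set_xlabel('Frequency (Hz)')
ax.set_ylabel('Amplitude (V)')
ax.legend()
```

With the real fftdat, peaks should appear at 6 Hz and 7.5 Hz, and their relative heights across conditions index the attention effect.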
This plot shows that the SSVEPs were indeed modulated by attention in the direction we would expect! Congratulations! You’ve run your first analysis of EEG data in neurodesktop.
2.2 - Functional Imaging
Tutorials about processing functional MRI data
2.2.1 - Connectome Workbench
A tutorial for accessing and visualizing the 7T HCP Retinotopy Dataset on Connectome Workbench.
These files include preprocessed collated data from 181 participants, including retinotopic, curvature, midthickness, and myelin maps.
Finally, unzip the S1200_7T_Retinotopy_9Zkk.zip file.
Visualizing scene files
Using Connectome Workbench, you can load “.scene” files and visualize all individuals’ retinotopic maps.
To do so, follow the next steps:
In the application menu, navigate to Neurodesk → functional imaging → connectomeworkbench → connectomeworkbench 1.5.0
On the terminal shell that pops up, type in:
wb_view
Click on “Open Other”
and search for a scene file
in the path where your data is
Finally, select the desired file and open it:
On the ‘Scenes’ window that will pop up, select the first option.
The default images are the average maps.
To change the displayed images for an individual’s data instead, click on the first ticked dropdown menu
and select “S1200_7T_Retinotopy181.All.Fit1_PolarAngle_MSMALL.32k_fs_LR.dscalar.nii”:
Now, you should be able to select specific maps from the dropdown menu on the right. For example, here we have the first individual polar angle map (top left):
Now we have the fifth:
You can do the same for the other functional maps by navigating through the tabs at the top.
2.2.2 - fMRIPrep
This workflow documents how to use fmriprep with neurodesk and provides some details that may help you troubleshoot some common problems I found along the way.
Getting Setup with Neurodesk
For more information on getting set up with a Neurodesk environment, see here
You have a copy of the freesurfer license file (freesurfer.txt), that can be read from the file system using Neurodesk
Steps
Launch Neurodesk
From the launcher, click the Neurodesktop icon:
Open fmriprep
Now you’re in Neurodesk, use the menus to first open the neurodesk options
and then select fMRIPrep. Note that the latest version will be the lowest on the dropdown list:
This will open a terminal window where fMRIPrep is ready and waiting at your fingertips - woohoo!
Setting up fmriprep command
You can now enter your fmriprep command straight into the command line in the newly opened terminal. Here is a quick guide to the command I have used with the options I have found most useful. Note that fMRIPrep requests the path to the freesurfer license file, which should be somewhere in your system for neurodesk to read - e.g. in ’neurodesktop-storage'.
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=6 # specify the number of threads you want to use
fmriprep /path/to/your/data \ # this is the top level of your data folder
/path/to/your/data/derivatives \ # where you want fmriprep output to be saved
participant \ # this tells fmriprep to analyse at the participant level
--fs-license-file /path/to/your/freesurfer.txt \ # where the freesurfer license file is
--output-spaces T1w MNI152NLin2009cAsym fsaverage fsnative \
--participant-label 01 \ # put whatever participant labels you want to analyse
--nprocs 6 --mem 10000 \ # fmriprep can be greedy on the hpc, make sure it is not
--skip_bids_validation \ # it's normally fine to skip this but do make sure your data are BIDS enough
-v # be verbal fmriprep, tell me what you are doing
Then hit return and fMRIPrep should now be merrily working away on your data :)
Some common pitfalls I have learned from my mistakes (and sometimes from others)
If fmriprep hangs, it could well be that you are out of disk space. Sometimes this is because fmriprep created a work directory in your home folder, which is often limited on the HPC. Make sure fmriprep knows to use a work directory in your scratch space. You can specify this in the fmriprep command by using -w /path/to/the/work/directory/you/made
I learned the following from TomCat (@thomshaw92) - fMRIPrep can get confused between subjects when run in parallel. Parallelise with caution.
If running on an HPC, make sure to set the processor and memory limits; otherwise your job will get killed because it hogs all the resources.
2.2.3 - MRIQC
This workflow documents how to use MRIQC with neurodesk and provides some details that may help you troubleshoot some common problems I found along the way.
Getting Setup with Neurodesk
For more information on getting set up with a Neurodesk environment, see here
Now you’re in Neurodesk, use the menus to first open the neurodesk options
and then select MRIQC. Note that the latest version will be the lowest on the dropdown list:
This will open a terminal window where MRIQC is ready and waiting at your fingertips - woohoo!
Setting up mriqc command
You can now enter the following mriqc commands straight into the command line in the newly opened terminal window.
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=6 # specify the number of threads you want to use
mriqc /path/to/your/data \ # this is the top level of your data folder
/path/to/your/data/derivatives \ # where you want mriqc output to be saved
participant \ # this tells mriqc to analyse at the participant level
--participant-label 01 \ # put whatever participant labels you want to analyse
--work-dir /path/to/work/directory \ # useful to specify so your home directory definitely does not get clogged
--nprocs 6 --mem_gb 10000 \ # mriqc can be greedy on the hpc, make sure it is not
-v # be verbal mriqc, tell me what you are doing
Note that above I have set the processor and memory limits. This is because I was in this case running on an HPC, and I used those commands to stop MRIQC from hogging all the resources. You may want to skip those inputs if you’re running MRIQC locally.
OR: if you have run all the participants and you just want the group level report, use these mriqc commands instead:
mriqc /path/to/your/data \ # this is the top level of your data folder
/path/to/your/data/derivatives \ # where you want mriqc output to be saved. As you are running the group level analysis this folder should be prepopulated with the results of the participant level analysis
group \ # this tells mriqc to give you the group report
-w /path/to/work/directory \ #useful to specify so your home directory definitely does not get clogged
--nprocs 6 --mem_gb 10000 \ # mriqc can be greedy on the hpc, make sure it is not
-v # be verbal mriqc, tell me what you are doing
Hit enter, and mriqc should now be merrily working away on your data :)
2.2.4 - PhysIO
Example workflow for the PhysIO Toolbox
This tutorial was created by Lars Kasper.
Github: @mrikasper
Twitter: @mrikasper
Getting Setup with Neurodesk
For more information on getting set up with a Neurodesk environment, see here
Origin
The PhysIO Toolbox implements ideas for robust physiological noise modeling in fMRI, outlined in this paper:
Kasper, L., Bollmann, S., Diaconescu, A.O., Hutton, C., Heinzle, J., Iglesias,
S., Hauser, T.U., Sebold, M., Manjaly, Z.-M., Pruessmann, K.P., Stephan, K.E., 2017.
The PhysIO Toolbox for Modeling Physiological Noise in fMRI Data.
Journal of Neuroscience Methods 276, 56-72. https://doi.org/10.1016/j.jneumeth.2016.10.019
PhysIO is part of the open-source TAPAS Software Package for Translational Neuromodeling and Computational Psychiatry, introduced in the following paper:
Frässle, S., Aponte, E.A., Bollmann, S., Brodersen, K.H., Do, C.T., Harrison, O.K., Harrison, S.J., Heinzle, J., Iglesias, S., Kasper, L., Lomakina, E.I., Mathys, C., Müller-Schrader, M., Pereira, I., Petzschner, F.H., Raman, S., Schöbi, D., Toussaint, B., Weber, L.A., Yao, Y., Stephan, K.E., 2021. TAPAS: an open-source software package for Translational Neuromodeling and Computational Psychiatry. Frontiers in Psychiatry 12, 857. https://doi.org/10.3389/fpsyt.2021.680811
Please cite these works if you use PhysIO and see the FAQ for details.
NeuroDesk offers the possibility of running PhysIO without installing Matlab or requiring a Matlab license. The functionality should be equivalent, though debugging and extending the toolbox, as well as unreleased development features, will only be available in the Matlab version of PhysIO, which is exclusively hosted on the TAPAS GitHub.
More general info about PhysIO besides NeuroDesk usage is found in the README on GitHub.
Purpose
The general purpose of the PhysIO toolbox is model-based physiological noise correction of fMRI data using peripheral measures of respiration and cardiac pulsation (respiratory bellows, ECG, pulse oximeter/plethysmograph).
It incorporates noise models of
cardiac/respiratory phase (RETROICOR, Glover et al. 2000), as well as
heart rate variability and respiratory volume per time (cardiac response function, Chang et. al, 2009, respiratory response function, Birn et al. 2006),
and extended motion models (e.g., censoring/scrubbing)
While the toolbox is particularly well integrated with SPM via the Batch Editor GUI, its output text files can be incorporated into any major neuroimaging analysis package for nuisance regression, e.g., within a GLM.
Core design goals for the toolbox were: flexibility, robustness, and quality assurance to enable physiological noise correction for large-scale and multi-center studies.
Some highlights:
Robust automatic preprocessing of peripheral recordings via iterative peak detection, validated in noisy data and patients, and extended processing of respiratory data (Harrison et al., 2021)
Flexible support of peripheral data formats (BIDS, Siemens, Philips, GE, BioPac, HCP, …) and noise models (RETROICOR, RVHRCOR).
Fully automated noise correction and performance assessment for group studies.
Integration in fMRI pre-processing pipelines as SPM Toolbox (Batch Editor GUI).
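As a hedged sketch of the nuisance-regression use case mentioned above: PhysIO writes its regressors as a plain text matrix (one row per volume, one column per regressor). The file name multiple_regressors.txt below follows the toolbox’s usual output naming, but check your own output folder; the matrix here is fabricated so the snippet runs on its own.

```python
import numpy as np

# Fabricate a PhysIO-style output file: 200 volumes x 18 nuisance regressors
n_volumes, n_regressors = 200, 18
np.savetxt('multiple_regressors.txt', np.random.randn(n_volumes, n_regressors))

# Load the regressor matrix; these columns can be appended to any
# GLM design matrix (SPM, FSL, nilearn, ...) for nuisance regression.
nuisance = np.loadtxt('multiple_regressors.txt')
```

The shape of the loaded array should match (number of volumes, number of regressors) for your run.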
Follow the instructions for copying your own data in the next section
Copy your own data
On Windows, the folder C:\neurodesktop-storage should have been automatically created when starting NeuroDesk
This is your direct link to the NeuroDesk environment, and anything you put in there should end up within the NeuroDesk desktop in /neurodesktop-storage/ and on your desktop under storage
Example: Running PhysIO in the GUI
Open the PhysIO GUI (Neurodesk -> Functional Imaging -> physio -> physioGUI r7771, see screenshot):
SPM should automatically open up (might take a while). Select ‘fMRI’ from the modality selection screen.
Press the “Batch Editor” button (see screenshot with open Batch Editor, red highlights)
- NB: If you later want to create a new PhysIO batch with all parameters from scratch, or explore the options, select from the Batch Editor menu top row, SPM -> Tools -> TAPAS PhysIO Toolbox (see screenshot, red highlights)
For now, load an existing example (or a previously created SPM batch file) as follows. It is most convenient to change the working directory of SPM to the location of the physiological logfiles:
In the Batch Editor GUI, lowest row, choose ‘CD’ from the ‘Utils..’ dropdown menu
Navigate to any of the example folders, e.g., /opt/spm12/examples/Philips/ECG3T/ and select it
NB: you can skip this part, if you later manually update all input files in the Batch Editor window (resp/cardiac/scan timing and realignment parameter file further down)
Any other example should also work the same way, just CD to its folder before the next step
Select File -> Load Batch from the top row menu of the Batch Editor window
Make sure you select the MATLAB batch file *_spm_job.<m|mat> (e.g., philips_ecg3t_spm_job.m and philips_ecg3t_spm_job.mat are identical, either is fine), but not the script.
Press the green “Play” button in the top icon menu row of the Batch Editor window
Several output figures should appear, with the last being a grayscale plot of the nuisance regressor design matrix
Congratulations, your first successful physiological noise model has been created! If you don’t see the mentioned figure, chances are certain input files were not found (e.g., wrong file location specified). You can always check the text output in the “bash” window associated with the SPM window for any error messages.
2.2.5 - A batch scripting example for PhysIO toolbox
Follow this tutorial as an example of how to batch script for the PhysIO toolbox using Neurodesk.
This tutorial was created by Kelly G. Garner.
Github: @kel-github
Twitter: @garner_theory
Getting Setup with Neurodesk
For more information on getting set up with a Neurodesk environment, see here
This tutorial walks through one way to batch script the use of the PhysIO toolbox with Neurodesk.
The goal is to use the toolbox to generate physiological regressors to use when modelling fMRI data.
The output format of the regressor files are directly compatible for use with SPM, and can be adapted to fit the specifications of other toolboxes.
That you have converted your .zip files containing physiological data to .log files. For example, if you’re using a CMRR multi-band sequence, then you can use this function
That your .log files are in the subject derivatives/…/sub-…/ses-…/func folders of the aforementioned BIDS structured data
That you have a file that contains the motion regressors you plan to use in your GLM. I’ll talk below a bit about what I did with the output given by fmriprep (e.g. …_desc-confounds_timeseries.tsv)
That you can use SPM12 and the PhysIO GUI to initialise your batch code
NB. You can see the code generated from this tutorial here
1. Generate an example script for batching
First you will create an example batch script that is specific to one of your participants. To achieve this I downloaded locally the relevant ‘.log’ files for one participant, as well as the ‘…desc-confounds_timeseries.tsv’ output from fmriprep for each run. PhysIO is nice in that it will append the regressors from your physiological data to your movement parameters, so that you have a single file of regressors to add to your design matrix in SPM etc (other toolboxes are available).
To work with PhysIO toolbox, your motion parameters need to be in the .txt format as required by SPM.
I made some simple functions in python that would extract my desired movement regressors and save them to the space separated .txt file as is required by SPM. They can be found here.
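A minimal sketch of that extraction step, assuming fmriprep’s standard motion column names (the file names below are stand-ins, and a tiny confounds table is fabricated so the snippet runs on its own):

```python
import pandas as pd

# Fabricate a tiny fmriprep-style confounds table (two volumes)
motion_cols = ['trans_x', 'trans_y', 'trans_z', 'rot_x', 'rot_y', 'rot_z']
pd.DataFrame([[0.0] * 6, [0.1] * 6], columns=motion_cols).to_csv(
    'sub-01_desc-confounds_timeseries.tsv', sep='\t', index=False)

# Extract the six rigid-body motion regressors and write them as a
# space-separated .txt file with no header, the format SPM expects.
confounds = pd.read_csv('sub-01_desc-confounds_timeseries.tsv', sep='\t')
confounds[motion_cols].to_csv('rp_sub-01.txt', sep=' ', header=False, index=False)
```

The resulting text file has one row per volume and one column per motion parameter.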
Once I had my .log files and .txt motion regressors file, I followed the instructions here to get going with the Batch editor, and used this paper to aid my understanding of how to complete the fields requested by the Batch editor.
I wound up with a Batch script for the PhysIO toolbox that looked a little bit like this:
2. Generalise the script for use with any participant
Now that you have an example script that contains the specific details for a single participant, you are ready to generalise this code so that you can run it for any participant you choose. I decided to do this by doing the following:
First I generate an ‘info’ structure for each participant. This is a structure saved as a matfile for each participant under ‘derivatives’, in the relevant sub-z/ses-y/func/ folder. This structure contains the subject specific details that PhysIO needs to know to run. Thus I wrote a matlab function that saves a structure called info with the following fields:
% -- outputs: a matfile containing a structure called info with the
%    following fields:
% -- sub_num = subject number: [string] of form '01' '11' or '111'
% -- sess = session number: [integer] e.g. 2
% -- nrun = [integer] number of runs for that participant
% -- nscans = number of scans (volumes) in the design matrix for each
%    run [1, nrun]
% -- cardiac_files = a cell of the cardiac files for that participant
%    (1,n = nrun) - attained by using extractCMRRPhysio()
% -- respiration_files = same as above but for the resp files - attained
%    by using extractCMRRPhysio()
% -- scan_timing = info file from Siemens - attained by using extractCMRRPhysio()
% -- movement = a cell of the movement regressor files for that
%    participant (.txt, formatted for SPM)
To see the functions that produce this information, you can go to this repo here
Next I amended the batch script to load a given participant’s info file and to retrieve this information for the required fields in the batch. The batch script winds up looking like this:
Now we have a batch script, we’re ready to run this on Neurodesk - yay!
First make sure the details at the top of the script are correct. You can see that this script could easily be amended to run multiple subjects.
On Neurodesk, go to the PhysIO toolbox, but select the command line tool rather than the GUI interface ('physio r7771' instead of 'physioGUI r7771'). This will take you to the container for the PhysIO toolbox.
Now to run your PhysIO batch script, type the command:
Open a terminal and use datalad to install the dataset:
cd neurodesktop-storage
datalad install https://github.com/OpenNeuroDatasets/ds000102.git
We will use subject 08 as an example here, so we use datalad to download sub-08 and since SPM doesn’t support compressed files, we need to unpack them:
cd ds000102
datalad get sub-08/
gunzip sub-08/anat/sub-08_T1w.nii.gz -f
gunzip sub-08/func/sub-08_task-flanker_run-1_bold.nii.gz -f
gunzip sub-08/func/sub-08_task-flanker_run-2_bold.nii.gz -f
chmod a+rw sub-08/ -R
When the SPM menu has loaded, click on fMRI and the full SPM interface should open up:
For convenience let’s change our default directory to our example subject. Click on Utils and select CD:
Then navigate to sub-08 and select the directory in the right browser window:
Now let’s visualize the anatomical T1 scan of subject 08 by clicking on Display and navigating and selecting the anatomical scan:
Now let’s look at the functional scans. Use CheckReg and open run-01. Then right click and select Browse. Set the frames to 1:146 and right click to Select All.
Now we get a slider viewer and we can investigate all functional scans:
Let’s check the alignment between the anatomical and the functional scans - use CheckReg and open the anatomical and the functional scan. They shouldn’t align yet, because we haven’t done any preprocessing:
Preprocessing the data
Realignment
Select Realign (Est & Reslice) from the SPM Menu (the third option):
Then select the functional run (important: Select frames from 1:146 again!) and leave everything else as Defaults. Then hit run:
As an output we should see the realignment parameters:
Slice timing correction
Click on Slice timing in the SPM menu to bring up the Slice Timing section in the batch editor:
Select the realigned images (use filter rsub and Frames 1:146) and then enter the parameters:
Number of Slices = 40
TR = 2
TA = 1.95
Slice order = [1:2:40 2:2:40]
Reference Slice = 1
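The TA value above follows directly from the TR and the number of slices (TA = TR − TR/nslices):

```python
# Acquisition time: time from the first to the last slice of one volume
TR, n_slices = 2.0, 40
TA = TR - TR / n_slices
print(TA)  # 1.95
```

If your sequence has a different TR or slice count, recompute TA the same way before filling in the batch.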
Coregistration
Now, we coregister the functional scans and the anatomical scan.
Click on Coregister (Estimate & Reslice) (the third option) in the SPM menu to bring up the batch editor:
Use the Mean image as the reference and the T1 scan as the source image and hit Play:
Let’s use CheckReg again and overlay a Contour (Right Click -> Contour -> Display onto -> all) to check the coregistration between the images:
Normalization
For the Deformation Field select the y_rsub-08 file we created in the last step and for the Images to Write select the arsub-08 functional images (Filter ^ar and Frames 1:146):
Hit Play again.
Checking the normalization
Use CheckReg to make sure that the functional scans (starting with w to indicate that they were warped: warsub-08) align with the template (found in /opt/spm12/spm12_mcr/spm12/spm12/canonical/avg305T1.nii):
Smoothing
Click the Smooth button in the SPM menu and select the warped functional scans:
Then click Play.
You can check the smoothing by using CheckReg again:
Analyzing the data
Click on Specify 1st-level - then set the following options:
Directory: Select the sub-08 top level directory
Units for design: Seconds
Interscan interval: 2
Data & Design: Click twice on New Subject/Session
Select the smoothed, warped data from run 1 and run 2 for the two sessions respectively
Create two Conditions per run and set the following:
For Run 1:
Name: Inc
Onsets (you can copy from here and paste with CTRL-V): 0 10 20 52 88 130 144 174 236 248 260 274
Durations: 2 (SPM will assume that it’s the same for each event)
We can Review the design by clicking on Review in the SPM menu and selecting the SPM.mat file in the model directory we specified earlier and it should look like this:
Estimating the model
Click on Estimate in the SPM menu and select the SPM.mat file, then hit the green Play button.
Inference
Now open the Results section and select the SPM.mat file again. Then we can test our hypotheses:
Define a new contrast as:
Name: Incongruent-Congruent
Contrast weights vector: 0.5 -0.5 0.5 -0.5
Then we can view the results. Set the following options:
masking: none
p value adjustment to control: Click on “none”, and set the uncorrected p-value to 0.01.
extent threshold {voxels}: 10
2.3 - MRI phase Processing
Tutorials about processing MRI phase
2.3.1 - Quantitative Susceptibility Mapping
Example workflow for Quantitative Susceptibility Mapping
This tutorial was created by Steffen Bollmann and Ashley Stewart.
Quantitative Susceptibility Mapping in Neurodesk with QSMxT
Neurodesk provides QSMxT, an end-to-end pipeline that automates the reconstruction, segmentation and analysis of QSM data across large groups of participants, from scanner images (DICOMs) through to susceptibility maps and quantitative outputs.
QSMxT provides pipelines implemented in Python that:
Automatically convert unorganised DICOM or NIfTI data to the Brain Imaging Data Structure (BIDS)
Automatically reconstruct QSM, including steps for:
Masking
Phase unwrapping
Background field removal
Dipole inversion
Multi-echo combination
Automatically generate a common group space for the cohort, as well as average magnitude and QSM images that facilitate group-level analyses.
Automatically segment T1w data and register them to the QSM space to extract quantitative values in anatomical regions of interest.
Export quantitative data to CSV for all subjects using the automated segmentations, or a custom segmentation in the group space (we recommend ITK-SNAP to perform manual segmentations).
For a list of algorithms QSMxT uses, see the Reference List on the GitHub page.
Open QSMxT
Start QSMxT v1.3.3 from the applications menu in the desktop (Neurodesk > Quantitative Imaging > qsmxt)
Download test DICOMs
Start by downloading the test DICOM data we provide for QSMxT:
Next, we need to sort the DICOMs into the structure QSMxT expects (by subject, session, and series), and then convert to the Brain Imaging Data Structure (BIDS) by running the following:
cd qsmxt-demo
run_0_dicomSort.py 0_dicoms 1_dicoms_sorted
run_1_dicomConvert.py 1_dicoms_sorted 2_bids
The conversion to BIDS will prompt you to enter which sequence matches your QSM data. For the demo data, you can simply enter 1 when prompted:
The demo data comes without a structural scan (a structural scan is automatically recognised if t1w is in the protocol name).
Run QSM pipeline
Finally, we can run the QSM pipeline using:
run_2_qsm.py 2_bids 3_qsm
You will first be prompted to choose an initial premade pipeline. Simply press ENTER to use the default pipeline, or choose one of the other premade pipelines (e.g. fast for QSMxT’s fastest reconstruction pipeline):
QSMxT then allows you to adjust any relevant reconstruction settings. The defaults should be fine for this data, so simply enter ‘run’:
The reconstruction may take some time, though QSMxT will attempt to run various processes in parallel wherever possible.
View QSM results
When the processing is finished, you can open a viewer (Visualization -> mricrogl -> mricroglGUI) and you can find the QSM outputs in /neurodesktop-storage/qsmxt-demo/3_qsm:
Please note that the demo dataset does not have a T1w scan for anatomical segmentation, and therefore the subsequent steps in QSMxT (e.g. run_3_segment.py 2_bids 4_segmentation) will NOT work.
Tutorials about publishing and accessing open datasets
2.4.1 - datalad
Using datalad to publish and access open data on OSF
This tutorial was created by Steffen Bollmann.
Github: @stebo85
Getting Setup with Neurodesk
For more information on getting set up with a Neurodesk environment, see here
DataLad is an open-source tool to publish and access open datasets. In addition to many open data sources (OpenNeuro, CBRAIN, brainlife.io, CONP, DANDI, Courtois Neuromod, Dataverse, Neurobagel), it can also connect to the Open Science Framework (OSF): http://osf.io/
Publish a dataset
First we have to create a DataLad dataset:
datalad create my_dataset
# now add files to your project, then save them with datalad:
datalad save -m "added new files"
Now we can create a token on OSF (Account Settings -> Personal access tokens -> Create token) and authenticate:
datalad osf-credentials
Here is an example how to publish a dataset on the OSF:
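The example itself does not appear above. One way to publish a DataLad dataset to OSF is via the datalad-osf extension (a sketch; the sibling name osf and the project title my_dataset are hypothetical):

```shell
# create an OSF project as a publication target (requires the datalad-osf extension)
datalad create-sibling-osf --title my_dataset -s osf
# push the dataset contents to the OSF sibling
datalad push --to osf
```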
Alternatively, you can mount the object storage bucket inside Neurodesk using rclone (requires rclone v1.60.1 or newer; this does not work on the hosted Neurodesk instances on play.neurodesk.org due to limited privileges):
mkdir -p ~/TOMCAT
rclone mount opendata3p:TOMCAT ~/TOMCAT &
This assumes the following ~/.config/rclone/rclone.conf configuration (which is already set up for you inside Neurodesk):
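The configuration itself is not reproduced here. A hypothetical sketch of what such an entry could look like (the remote name opendata3p matches the mount command above, but the type, provider, and endpoint values below are placeholders; consult the preconfigured file inside Neurodesk for the real values):

```
[opendata3p]
type = s3
provider = Other
endpoint = https://object-store.example.org
```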
2.4.2 - osfclient
Using osfclient to publish and access open data on OSF
This tutorial was created by Steffen Bollmann.
Github: @stebo85
Getting Setup with Neurodesk
For more information on getting set up with a Neurodesk environment, see here
The osfclient is an open-source tool to publish and access open datasets on the Open Science Framework (OSF): http://osf.io/
Publish a dataset
Here is an example how to publish a dataset on the OSF:
osf init
# enter your OSF credentials and project ID
# now copy your data into the directory, cd into the directory, and then run:
osf upload -r . osfstorage/data
Setup an OSF token
You can generate an OSF token under your user settings. Then, set the OSF token as an environment variable:
export OSF_TOKEN=YOURTOKEN
Access a dataset
To download a dataset from the OSF:
osf -p PROJECTID_HERE_eg_y5cq9 clone .
2.5 - Programming
Tutorials about programming with matlab, julia, and others.
2.5.1 - Conda environments
A tutorial for setting up your conda environments on Neurodesk.
For more information on getting set up with a Neurodesk environment, see here
This tutorial documents how to create conda environments on Neurodesk.
Conda environment
Conda is readily available on Neurodesk. The default environment is not persistent across sessions, but you can create your own environment, which will be stored in your home directory, by following these steps:
In a Terminal window, type in:
For Python:
conda create -n myenv ipykernel
or for R:
conda create -n r_env r-irkernel
Important: For Python environments, you have to include ipykernel or a Python version explicitly (e.g. “conda create -n myenv python=3.8”), since a kernel is required to run notebooks. Alternatively, in case it was forgotten, you can add a kernel to an existing environment with:
conda install ipykernel
To check the list of environments you have created, run the following:
conda env list
To activate your conda environment and install required packages from a provided txt file, run:
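The command is not shown above; a minimal sketch, assuming an environment named myenv and a package list in a file called requirements.txt:

```shell
conda activate myenv
conda install --file requirements.txt
```

Depending on how the package list was generated, pip install -r requirements.txt inside the activated environment may be the appropriate alternative.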
Once the environment is available, opening a new Launcher tab will show a new Notebook option for launching a Jupyter notebook with that environment active.
You can also switch environments from within a Jupyter notebook via the dropdown menu in the top right corner.
2.5.2 - Matlab
A tutorial for setting up your Matlab license on Neurodesk.
For more information on getting set up with a Neurodesk environment, see here
This tutorial documents how to set up your Matlab license on Neurodesk.
Matlab license
In the application menu, navigate to Neurodesk → Programming → matlab → matlabGUI 2022a
Select “Activate automatically using the internet” and hit next.
Then, add your email address and password from your MathWorks account (which you can set up using your university credentials if they provide a license for staff and students).
Hit next after you select the appropriate license.
Do not change the login name and hit next.
Hit confirm, and you are all set!
To launch the GUI, navigate through the application menu to Neurodesk → Programming → matlab → matlabGUI 2022a
Changing Matlab Keyboard Shortcuts
By default, Matlab uses the emacs keyboard shortcuts in Linux, which might not be what most users expect. To change the keyboard shortcuts to a more common pattern, follow the next steps:
Open the Preferences menu:
Navigate to Keyboard -> Shortcuts and change the active settings from “Emacs Default Set” to “Windows Default Set”:
2.6 - Reproducibility
Tutorials about performing reproducible analyses in general
2.6.1 - Reproducible script execution with DataLad
Using datalad run, you can precisely record results of your analysis scripts.
This tutorial was created by Sin Kim.
Github: @kimsin98
Twitter: @SinKim98
Getting Setup with Neurodesk
For more information on getting set up with a Neurodesk environment, see here
In addition to being a convenient method of sharing data, DataLad can also help
you create reproducible analyses by recording how certain result files were
produced (i.e. provenance). This helps others (and you!) easily keep track of
analyses and rerun them.
This tutorial will assume you know the basics of navigating the terminal. If
you are not familiar with the terminal at all, check the DataLad Handbook’s
brief guide.
Create a DataLad project
A DataLad dataset can be any collection of files in folders, so it could be
many things including an analysis project. Let’s go to the Neurodesktop storage
and create a dataset for some project. Open a terminal and enter these commands:
$ cd /neurodesktop-storage
$ datalad create -c yoda SomeProject
The -c yoda option configures the dataset according to YODA, a set of intuitive organizational principles for data analyses that works especially well with version control.
Go into the dataset and check its contents.
$ cd SomeProject
$ ls
CHANGELOG.md README.md code
Create a script
One of DataLad’s strengths is that it assumes very little about your datasets.
Thus, it can work with any other software on the terminal: Python, R, MATLAB,
AFNI, FSL, FreeSurfer, etc. For this tutorial, we will run the simplest Julia
script.
$ ml julia
$ cat > code/hello.jl << EOF
println("hello neurodesktop")
EOF
EOF?
For the sake of demonstration, we create the script using
built-in Bash terminal commands only (here document that starts after << EOF
and ends when you enter EOF), but you may use whatever text editor you are
most comfortable with to create the code/hello.jl file.
You may want to test (parts of) your script.
$ julia code/hello.jl > hello.txt
$ cat hello.txt
hello neurodesktop
Run and record
Before you run your analyses, you should check the dataset for changes and save
or clean them.
$ datalad status
untracked: /home/user/Desktop/storage/SomeProject/code/hello.jl (file)
untracked: /home/user/Desktop/storage/SomeProject/hello.txt (file)
$ datalad save -m 'hello script' code/
add(ok): code/hello.jl (file)
save(ok): . (dataset)
action summary:
add (ok: 1)
save (ok: 1)
$ git clean -i
Would remove the following item:
hello.txt
*** Commands ***
1: clean 2: filter by pattern 3: select by numbers 4: ask each 5: quit 6: help
What now> 1
Removing hello.txt
git
git clean is for removing new, untracked files. For
resetting existing, modified files to the last saved version, you would need
git reset --hard.
When the dataset is clean, we are ready to datalad run!
$ mkdir outputs
$ datalad run -m 'run hello' -o 'outputs/hello.txt' 'julia code/hello.jl > outputs/hello.txt'
-m 'run hello': Human-readable message to record in the dataset log.
-o 'outputs/hello.txt': Expected output of the script. You can specify
multiple -o arguments and/or use wildcards like 'outputs/*'. This script
has no inputs, but you can similarly specify inputs with -i.
'julia ... ': The final argument is the command that DataLad will run.
Before getting to the exciting part, let’s do a quick sanity check.
$ cat outputs/hello.txt
hello neurodesktop
View history and rerun
So what’s so good about the extra hassle of running scripts with datalad run?
To see that, you will need to pretend you are someone else (or future you!)
and install the dataset somewhere else. Note that the -s argument would
probably be a URL if you really were someone else.
$ cd ~
$ datalad install -s /neurodesktop-storage/SomeProject
install(ok): /home/user/SomeProject (dataset)
$ cd SomeProject
Because a DataLad dataset is a Git repository, people who download your dataset
can see exactly how outputs/hello.txt was created using Git’s logs.
$ git log outputs/hello.txt
commit 52cff839596ff6e33aadf925d15ab26a607317de (HEAD -> master, origin/master, origin/HEAD)
Author: Neurodesk User <user@neurodesk.github.io>
Date: Thu Dec 9 08:31:15 2021 +0000
[DATALAD RUNCMD] run hello
=== Do not change lines below ===
{
"chain": [],
"cmd": "julia code/hello.jl > outputs/hello.txt",
"dsid": "1e82813d-856f-4118-b54d-c3823e025709",
"exit": 0,
"extra_inputs": [],
"inputs": [],
"outputs": [
"outputs/hello.txt"
],
"pwd": "."
}
^^^ Do not change lines above ^^^
Then, using that information, they can re-run the command that created the file
using datalad rerun!
$ datalad rerun 52cf
In Git, each commit (save state) is assigned a long,
unique, machine-generated ID. 52cf refers to the commit whose ID starts
with those characters; usually four characters are enough to uniquely identify a
commit. Of course, this ID is probably different for you, so change this
argument to match your commit.
See Also
To learn more basics and advanced applications of DataLad, check out the
DataLad Handbook.
DataLad is built on top of the popular version control tool Git. There
are many great resources on Git online, like this free book.
DataLad is only available on the terminal. For a detailed introduction on the
Bash terminal, check the BashGuide.
For even more reproducibility, you can include containers with your dataset
to run analyses in. DataLad has an extension to support script execution in
containers. See here.
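As a sketch of what that can look like (assuming the datalad-container extension is installed; the image name and tag here are hypothetical):

```shell
# register a container image with the dataset (hypothetical image)
datalad containers-add julia-img --url docker://julia:1.9
# record a containerized execution, analogous to datalad run
datalad containers-run -n julia-img -m 'run hello' -o 'outputs/hello.txt' \
    'julia code/hello.jl > outputs/hello.txt'
```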
2.7 - Spectroscopy
Tutorials about performing MR spectroscopy analyses
2.7.1 - Spectroscopy with lcmodel
Using lcmodel, you can analyze MR spectroscopy data.
2.8 - Structural Connectivity
Constructing a single-subject structural connectome with MRtrix3, FSL, and AFNI.
Prerequisites:
You have Neurodesk already running in your Chrome browser.
You have sufficient disk space to run the structural connectivity pipeline.
The structural and diffusion sample data have been unzipped in the mounted storage directory.
The sample subject (100307) directory tree should include these input files:
aparc+aseg.nii.gz
T1w_acpc_dc_restore_brain.nii.gz
bvals
bvecs
data.nii.gz
Navigate to the mounted storage, create a new folder of your choice, and copy the required input files into it under a subfolder named 100307.
N.B.: the subfolder used in this tutorial was named “Test”.
Open a terminal in Neurodesk and confirm your input files are present.
Activate the mrtrix3, fsl, and afni versions of your choice in the Neurodesk terminal.
N.B.: mrtrix3 (3.0.3), afni (21.2.00), and fsl (6.0.5.1) were used in this tutorial. For reproducibility, the same versions can be maintained.
Step 1: Further pre-processing
Extract data.nii.gz to enable memory-mapping. The extracted files are about 4.5GB:
Perform mrconvert:
Extract the response function. Uses stride 0,0,0,1:
Generate mask:
Generate Fibre Orientation Distributions (FODs):
Perform normalization:
Generate a 5 tissue image:
Convert the B0 and 5TT image to a compressed format:
Use “fslroi” to extract the first volume of the segmented dataset which corresponds to the Grey Matter Segmentation:
Use the “flirt” command to perform coregistration:
Convert the transformation matrix to a format readable by MRtrix3:
Coregister the anatomical image to the diffusion image:
Create the seed boundary which separates the grey from the white matter. The command name “5tt2gmwmi” stands for “5-tissue-type (segmentation) to grey matter/white matter interface”:
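The individual commands for Step 1 are not shown above. As a rough sketch only, the sequence described (import, response function, mask, FODs, normalisation, 5TT image, coregistration, and seed boundary) might look like the following in MRtrix3/FSL, with hypothetical file names throughout; the exact options in the original tutorial may differ:

```shell
# Hypothetical file names throughout; adapt to your data.
mrconvert data.nii data.mif -fslgrad bvecs bvals        # import DWI with gradient table
dwi2response dhollander data.mif wm.txt gm.txt csf.txt  # tissue response functions
dwi2mask data.mif mask.mif                              # brain mask
dwi2fod msmt_csd data.mif wm.txt wmfod.mif gm.txt gmfod.mif csf.txt csffod.mif -mask mask.mif
mtnormalise wmfod.mif wmfod_norm.mif gmfod.mif gmfod_norm.mif csffod.mif csffod_norm.mif -mask mask.mif
5ttgen fsl T1w_acpc_dc_restore_brain.nii.gz 5tt.mif -premasked   # 5-tissue-type image

# Coregistration: mean b0 -> grey-matter volume, then apply the inverse to the 5TT image
dwiextract data.mif - -bzero | mrmath - mean mean_b0.mif -axis 3
mrconvert mean_b0.mif mean_b0.nii.gz
mrconvert 5tt.mif 5tt.nii.gz
fslroi 5tt.nii.gz 5tt_gm.nii.gz 0 1                     # first volume = grey matter
flirt -in mean_b0.nii.gz -ref 5tt_gm.nii.gz -dof 6 -omat diff2struct_fsl.mat
transformconvert diff2struct_fsl.mat mean_b0.nii.gz 5tt_gm.nii.gz flirt_import diff2struct_mrtrix.txt
mrtransform 5tt.mif -linear diff2struct_mrtrix.txt -inverse 5tt_coreg.mif
5tt2gmwmi 5tt_coreg.mif gmwmi_seed.mif                  # grey/white matter interface seed
```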
Step 2: Tractogram construction
This tutorial uses probabilistic tractography (the default in MRtrix), with the default iFOD2 algorithm.
Ten million streamlines are generated; this number was chosen to save computational time:
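As a sketch (hypothetical file names carried over from earlier steps; the exact options in the original tutorial may differ):

```shell
# anatomically constrained probabilistic tractography, seeded from the GM/WM interface
tckgen wmfod_norm.mif tracks_10M.tck -act 5tt_coreg.mif -backtrack \
    -seed_gmwmi gmwmi_seed.mif -select 10000000
```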
Proceed to Step 3 when the process above is completed (100%).
Step 3: SIFT2 construction
The generated streamlines can be refined with tcksift2 to counterbalance overfitting. This creates a text file containing a weight for each streamline:
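As a sketch (hypothetical file names, continuing from the earlier steps):

```shell
# compute per-streamline weights with SIFT2, using the coregistered 5TT image
tcksift2 -act 5tt_coreg.mif tracks_10M.tck wmfod_norm.mif sift2_weights.txt
```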
Step 4: Connectome construction
In constructing the connectome, the Desikan-Killiany atlas, which includes cortical and sub-cortical regions (84 regions in total), was used.
Copy the “FreeSurferColorLUT.txt” file from the ml freesurfer 7.2.0 singularity container to the subject’s folder:
Copy the “fs_default.txt” file from the ml mrtrix3 3.0.3 singularity container to the subject’s folder:
The command labelconvert uses the parcellation and segmentation output of FreeSurfer to create a new parcellated file in .mif format:
Perform node coregistration:
Create a whole-brain connectome denoting the streamlines between each pair of parcellated regions in the atlas. The “symmetric” option makes the upper and lower triangles of the matrix identical, and the “scale_invnodevol” option scales the connectome by the inverse of the node volumes:
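The commands for Step 4 are not shown above. As a sketch covering the parcellation conversion, node coregistration, and connectome construction (hypothetical file names; the exact options in the original tutorial may differ):

```shell
# convert the FreeSurfer parcellation to MRtrix-ordered labels
labelconvert aparc+aseg.nii.gz FreeSurferColorLUT.txt fs_default.txt parcels.mif
# coregister the nodes to diffusion space using the transform from Step 1
mrtransform parcels.mif -linear diff2struct_mrtrix.txt -inverse parcels_coreg.mif
# build the SIFT2-weighted, volume-normalised connectome
tck2connectome -symmetric -zero_diagonal -scale_invnodevol -tck_weights_in sift2_weights.txt \
    tracks_10M.tck parcels_coreg.mif nodes.csv
```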
Viewing the connectome
The generated nodes.csv file can be viewed outside Neurodesk as a matrix in Matlab.
Congratulations on constructing a single subject's structural connectome with Neurodesk! Running multiple subjects would require scripting. Kindly consult the references above.
2.9 - Contribute
A brief guide for contributing new tutorials.
2.9.1 - Template for tutorial creation
Follow this template to contribute your own tutorial to the Neurodesk documentation.
Welcome to the tutorial template, which you can use to contribute your neurodesk tutorial to our documentation. We aim to collect a wide variety of tutorials and examples representing the spectrum of tools available under the neurodesk architecture and the diversity in how researchers might apply them.
Tutorials: We kindly ask you to add a concise step-by-step guide for using specific neuroimaging software on neurodesk with screenshots for visual aid.
Examples: If you want to provide more descriptive details for running specific pipelines, we highly recommend contributing an example (in the form of a Jupyter notebook) to our documentation.
In either case, make sure that all steps of the tutorial work before submitting.
You should now have your own copy of the documentation, which you can alter without affecting our official documentation. You should see a panel stating “This branch is up to date with Neurodesk:main.” If someone else makes a change to the official documentation, the statement will change to reflect this. You can bring your repository up to date by clicking “Sync fork”.
Create your tutorial
Clone your forked version of our documentation to a location of your choice on your computer.
The URL for the repository can be copied by clicking on the button highlighted below:
Now, you can open your copy of neurodesk.github.io using the editor of your choice (we recommend vscode). Before making changes, it is best practice to create a new branch to avoid version conflicts.
Create a branch:
git branch tutorial-template
Check out the branch you want to use for adding your new tutorial:
git checkout tutorial-template
Confirm you are in the right branch:
git branch
Navigate to neurodesk.github.io/content/en/tutorials-examples/tutorials/ and then navigate to the subfolder you believe your tutorial belongs in (e.g. “/functional_imaging”).
Create a new, appropriately named markdown file to house your tutorial (e.g. “physio.md”). Images need to be stored in the /static directory - please mirror the same directory structure as for your markdown files.
Open this file and populate it with your tutorial! You’re also welcome to look at other tutorials already documented on our website for inspiration.
Contribute your new tutorial to the official documentation
Once you are happy with your tutorial, rebase your branch onto the main branch to avoid merge conflicts. Make sure main is synced with NeuroDesk/neurodesk.github.io:main (on GitHub, check that your fork is synced; locally, check out the main branch and run git pull).
git rebase main
You might have to correct some merge conflicts, but vscode makes it easy.
Commit all your changes and push these local commits to GitHub.
Navigate to your forked version of the repository on GitHub and switch branches for the one with your additions.
Now, you can preview the changes before contributing them upstream. If this is your first time running the Action build, click on the “Actions” tab and enable the Actions. The first build will fail (due to a bug with the GitHub token), but the second build will work. You can run each workflow by clicking on it in the left sidebar.
Then open the repository settings and check that Pages points to the gh-pages branch; clicking the provided link should show your preview site.
To contribute your changes, click “Compare & pull request” and then “Create pull request”.
Give your pull request a title (e.g. “Document PhysIO tutorial”), leave a comment briefly describing what you have done, and then create the pull request.
Someone from the Neurodesk team will review and accept your tutorial, which will appear on our website soon!
Thanks so much for taking the time to contribute your tutorial to the Neurodesk community! If you have any feedback on the process, please let us know on github discussions.
Formatting guidelines
As seen throughout this tutorial, you can embellish your text using markdown conventions; text can be bold, italic, or strikethrough. You can also add Links, and you can organise your tutorial with headers, starting at level 2 (the page title is a level 1 header):
Level 2 heading
You can also include progressively smaller subheadings:
Level 3 heading
Some more detailed information.
Level 4 heading
Even more detailed information.
Code blocks
You can add codeblocks to your tutorial as follows:
# Some example code
import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(a+b)
Or add syntax highlighting to your codeblocks:
# Some example code
import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(a+b)
Advanced code or command line formatting using this html snippet:
# Some example code
import numpy as np
a = np.array([1, 2])
b = np.array([3, 4])
print(a+b)
[4 6]
You can also add code snippets, e.g. var foo = "bar";, which will be shown inline.
Images
To add screenshots to your tutorial, create a subfolder in /static with the same file structure as in your tutorial markdown file. Add your screenshot to this folder, keeping in mind that you may want to adjust your screenshot to a reasonable size before uploading. You can then embed these images in your tutorial using the following convention:
For a filename.png in a /content/en/tutorials-examples/subject/tutorial1/markdownfile.md use