Developer Documentation

Documentation for Developers

Video tutorial

A 57-minute tutorial video from ReproNim 2023 is available.

1 - Contributors

This section acknowledges the contributions made to the project.

If you contributed to the project, please list yourself here with a description of your contribution. We try to keep this page updated based on the git commit history:

Steffen Bollmann

  • funding: Oracle Cloud (114k AUD), ECR Knowledge Exchange & Translation Fund (42k AUD), ARDC (CI for $566k AUD)
  • system architecture
  • CVMFS container deployment
  • initial desktop container prototype
  • container build scripts
  • application containers (afni, aslprep, code, convert3D, freesurfer, hdbet, minc, mriqc, romeo, spm12, tgvqsm, bart, fatsegnet, fsl, itksnap, lcmodel, mritools, niistat, qsmxt, root, slicer, trackvis, ants, cat12, conn, diffusiontoolkit, gimp, mricrogl, mrtrix3, rstudio, slicersalt, surfice, vesselapp, bidstools, clearswi, connectomeworkbench, dsistudio, fmriprep, julia, mrtrix3tissue, palm, rabies, spinalcordtoolbox)
  • migrating container recipes and bugfixes to neurodocker upstream (fsl, ants)
  • documentation
  • tutorials (QSM, SWI, Unwrapping, lcmodel, freesurfer)
  • google colab support
  • outreach (e.g. Mastodon, talks at conferences, youtube videos)

Aswin Narayanan

  • funding: ARDC (CI for $566k AUD)
  • Neurocontainer devops
  • Neurodesktop development
  • Neurocommand installer rewrite
  • Neurodesk Play & Kubernetes implementation
  • Jupyter notebook support
  • Hugo website build and documentation

Angela Renton

  • Tutorials (MNE-Python, Tutorial template)
  • graphics for website (layer diagram)
  • documentation
  • user testing
  • neurodesk paper lead author

Oren Civier

  • Funding: ARDC Australian Electrophysiology Data Analytics PlaTform (AEDAPT) (CI for $566k AUD) - contributing to initial conceptualisation, EOI writeup, scope of project, proposal writeup, teaming up with the Australian Imaging Service (AIS), recruiting collaborators, assisting collaborators with case studies
  • Design: leading the Virtual Neuro Machine (VNM) hackathon project in the 2020 OHBM BrainHack, where the first version of Neurodesktop was developed
  • Development: allowing Neurodesk containers running in Neurodesktop to access sshfs mounts
  • Development: template for container recipe documentation
  • Development: software application containers - developer (bidscoin, MATLAB; in progress: MMVT)
  • Development: software application containers - facilitator (Fieldtrip, running arbitrary scripts using compiled MATLAB containers; in progress: SOVABIDS)
  • Documentation: for developers (contribution to “add tools”, configuring Github)
  • Documentation: for users (copy and paste troubleshooting, accessing storage, installation on different platforms, using ARDC Virtual Desktop Service, screenshots)
  • Documentation: Neurodesk’s original logo
  • User testing: HPC, NECTAR, Mac, Linux, ARDC Virtual Desktop Service
  • User testing: VNC and RDP interfaces, including multiple concurrent users
  • Outreach: providing assistance to nodes of the Australian National Imaging Facility with installing/using Neurodesk
  • Administration: one of Neurodesk/AEDAPT representatives in ARCOS, AIS and NECTAR Interactive Analytics committees and working groups
  • Papers: co-author Neurodesk manuscript (in preparation; contribution to initial outline, input on first draft), co-author proceedings of the OHBM Brainhack 2021 (to be published in Aperture)

Thomas Shaw

  • Win, Mac, Linux startup scripts
  • initial transparent singularity prototype
  • application container development (LASHiS, ASHS)
  • user testing

Tom Johnstone

Martin Grignard

David White

Akshaiy Narayanan

Kelly Garner

Paris Lyons

  • design of Neurodesk Logo
  • project management of AEDAPT project

Thuy Dao

  • Application search tool with lunr
  • proof of concept for GUI
  • application container development (civet)
  • documentation (github workflow)

Ashley Stewart

  • application container development (qsmxt)
  • presentation of neurodesk at OHBM Brainhack 2022 and OHBM educational course 2022

Lars Kasper

Judy D Zhu

Korbinian Eckstein

Stefanie Evas

Xincheng Ye

Fernanda Ribeiro

Jeryn Chang

Sin Kim

Jakub Kaczmarzyk

Alan Hockings

Aditya Garg

Kexin Lou

Renzo Huber

Steering Committee members without code contributions:

  • Ryan Sullivan, University of Sydney, Key User, Steering Committee
  • Thomas Close, University of Sydney, Key User, Scientific/Subject Expert Advisory Board
  • Wojtek Goscinski, Monash University, Steering Committee, Technical Advisory Board
  • Tony Hannan, Florey Institute of Neuroscience and Mental Health, Scientific/Subject Expert Advisory Board
  • Gary Egan, Monash University, Steering Committee
  • Paul Sowman, Macquarie University, Key User, Scientific/Subject Expert Advisory Board
  • Marta Garrido, University of Melbourne, Key User, Scientific/Subject Expert Advisory Board
  • Patrick Johnston, Queensland University of Technology, Key User, Scientific/Subject Expert Advisory Board
  • Aina Puce, Indiana University, Key User, Scientific/Subject Expert Advisory Board
  • Franco Pestilli, Indiana University, Technical Advisory Board
  • Levin Kuhlmann, Monash University, Key User, Scientific/Subject Expert Advisory Board
  • Gershon Spitz, Monash Epworth Rehabilitation Research Centre, Key User, Scientific/Subject Expert Advisory Board
  • David Abbott, Florey Institute of Neuroscience and Mental Health, Key User, Scientific/Subject Expert Advisory Board
  • Megan Campbell, The University of Newcastle, Key User, Scientific/Subject Expert Advisory Board
  • Nigel Rogasch, University of Adelaide, Key User, Scientific/Subject Expert Advisory Board
  • Will Woods, Swinburne University of Technology, Key User
  • Satrajit Ghosh, Massachusetts Institute of Technology, Provision of advice only

2 - Architecture

The architecture of the Neurodesk ecosystem


2.1 - Neurodesk Architecture

The architecture of the Neurodesk ecosystem

Neurodesktop is a compact Docker container with a browser-accessible virtual desktop that allows you to develop and implement data analyses, pre-equipped with basic fMRI and EEG analysis tools. To get started, see: Neurodesktop (Github)

  • docker container with interface modifications
  • contains tools necessary to manage workflows in sub-containers: vscode, git
  • CI: builds docker image and tests if it runs; tests if CVMFS servers are OK before deployment
  • CD: pushes images to github & docker registry


Neurocommand offers the option to install and manage multiple distinct containers for more advanced users who prefer a command-line interface. Neurocommand is the recommended interface for users seeking to use Neurodesk in high performance computing (HPC) environments.

To get started, see: Neurocommand (Github)

  • script to install and manage multiple containers using transparent singularity on any linux system
  • this repo also handles the creation of menu entries in a general form applicable to different desktop environments
  • this repo can be re-used in other projects like CVL and when installing it on bare-metal workstations
  • CI: tests if containers can be installed
  • CD: this repo checks if containers requested in apps.json file are available on object storage and if not converts the singularity containers based on the docker containers and uploads them to object storage
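The docker-to-singularity conversion step can be sketched as follows. This is a simplified illustration, not the actual CD code; the tool, version, and build date values follow the itksnap example used later in this documentation:

```shell
# Simplified sketch of the CD conversion step (not the actual CD code):
# compose the Docker reference and Singularity image name for one apps.json entry.
tool="itksnap"; version="3.8.0"; build_date="20210322"

docker_ref="docker://vnmd/${tool}_${version}:${build_date}"
simg="${tool}_${version}_${build_date}.simg"

# The real pipeline would now run the conversion and upload the result:
#   singularity build "$simg" "$docker_ref"
#   (upload "$simg" to object storage)
echo "would build: $simg from $docker_ref"
```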


transparent-singularity offers seamless access to applications installed in neurodesktop and neurocommand, treating containerised software as native installations.

More info: transparent-singularity (Github)

  • script to install neuro-sub-containers, installers are called by neurocommand
  • this repo provides a way of using our containers on HPCs for large scale processing of the pipelines (including the support of SLURM and other job schedulers)
  • CI: test if exposing of binaries from container works
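For large-scale processing, the module files generated by transparent-singularity can be used directly from a SLURM batch script. A minimal sketch (the module name and FreeSurfer command follow examples used elsewhere in this documentation; adjust resources and versions to your cluster):

```shell
# Write a minimal SLURM job script that loads a containerised tool via lmod.
# freesurfer/6.0.1 follows the menu-entry example later in this documentation.
cat > job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=neurodesk-demo
#SBATCH --time=01:00:00
#SBATCH --mem=8G

module load freesurfer/6.0.1
# The wrapper scripts make the binary behave like a native install:
recon-all -version
EOF

# Submit with: sbatch job.sbatch
```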


neurocontainers contains scripts for building sub-containers for neuroimaging data-analysis software. These containers can be used alongside neurocommand or transparent-singularity.

To get started, see: Neurocontainers (Github)

  • build scripts for neuro-sub-containers
  • CI: building and testing of containers
  • CD: pushing containers to github and dockerhub registry


Neurodocker is a command-line program that generates custom Dockerfiles and Singularity recipes for neuroimaging and minifies existing containers.

More info: Github

  • fork of neurodocker project
  • provides the recipes from which our containers are built
  • we contribute recipe improvements back upstream via pull requests
  • CI: handled by neurodocker - testing of generating container recipes

2.2 - Neurodesktop Release Process

A description of the steps to create a new release of Neurodesktop
  1. Check if the last automated build ran OK:
  2. Run this build and test that everything is OK and no regression happened
  3. Check which changes were made since the last release:
  4. Summarize the main changes and copy this to the Release History:
  5. Change the version of the latest desktop in
  6. Commit all changes
  7. Tweet a quick summary of the changes and announce the new version:

2.3 - Neurodesktop Dev

Testing the latest dev version of Neurodesktop

Building neurodesktop-dev

Dev builds can be triggered by Neurodesk admins from

Running latest neurodesktop-dev


docker pull vnmd/neurodesktop-dev:latest
sudo docker run \
  --shm-size=1gb -it --cap-add SYS_ADMIN \
  --security-opt apparmor:unconfined --device=/dev/fuse \
  --name neurodesktop-dev \
  -v ~/neurodesktop-storage:/neurodesktop-storage \
  -e NB_UID="$(id -u)" -e NB_GID="$(id -g)" \
  -p 8888:8888 -e NEURODESKTOP_VERSION=dev \
  vnmd/neurodesktop-dev:latest


docker pull vnmd/neurodesktop-dev:latest
docker run --shm-size=1gb -it --cap-add SYS_ADMIN --security-opt apparmor:unconfined --device=/dev/fuse --name neurodesktop -v C:/neurodesktop-storage:/neurodesktop-storage -p 8888:8888 -e NEURODESKTOP_VERSION=dev vnmd/neurodesktop-dev:latest

2.4 - Transparent Singularity

For more advanced users who wish to use Transparent Singularity directly

Transparent singularity is here

This project allows you to use singularity containers transparently on HPCs, so that an application inside the container can be used without adjusting any scripts or pipelines (e.g. nipype).

Important: add bind points to .bashrc before executing this script

This script expects that you have adjusted the Singularity Bindpoints in your .bashrc, e.g.:

export SINGULARITY_BINDPATH="/gpfs1/,/QRISdata,/data"

This gives you a list of all tested images available in neurodesk:

curl -s

Clone repo into a folder with the intended image name

git clone convert3d_1.0.0_20210104


This will create a wrapper script for every binary located in the $DEPLOY_PATH inside the container. It will also create activate and deactivate scripts as well as module files for lmod.

cd convert3d_1.0.0_20210104
./ convert3d_1.0.0_20210104

Options for Transparent singularity:

  • --storage - this option can be used to force a download from docker, e.g.: --storage docker
  • --container - this option can be used to explicitly define the container name to be downloaded
  • --unpack - this will unpack the singularity container so it can be used on systems that do not allow opening simg/sif files for security reasons, e.g.: --unpack true
  • --singularity-opts - this will be passed on to the singularity call, e.g.: --singularity-opts '--bind /cvmfs'

Use in module system LMOD

Add the module folder path to $MODULEPATH
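For example (the install location below is hypothetical, following the convert3d clone example above; use the folder you actually cloned into):

```shell
# Make lmod aware of the module files generated by transparent-singularity.
# The path is a hypothetical clone location following the convert3d example above.
MODULE_DIR="$HOME/convert3d_1.0.0_20210104"

export MODULEPATH="${MODULEPATH:+$MODULEPATH:}$MODULE_DIR"
echo "$MODULEPATH"
# With lmod available you could then run: module avail convert3d
```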

Manual activation and deactivation (in case the module system is not available). This will add the paths to your .bashrc:





Uninstall container and cleanup


2.5 - Neurodesk CVMFS

How to interact with our CVMFS service.

2.5.1 - Setup CVMFS Proxy

Setup CVMFS Proxy server

If you want more speed in a region, one option is to set up another Stratum 1 server or a proxy. We currently don't run any proxy servers, but one would be important when using Neurodesk on a cluster.

docker run --shm-size=1gb -it --privileged --user=root --name neurodesktop `
-v C:/neurodesktop-storage:/neurodesktop-storage -p 8888:8888 `

Setup a CVMFS proxy server

sudo yum install -y squid

Open squid.conf and use the following configuration:

sudo vi /etc/squid/squid.conf
# List of local IP addresses (separate IPs and/or CIDR notation) allowed to access your local proxy
#acl local_nodes src YOUR_CLIENT_IPS

# Destination domains that are allowed
#acl stratum_ones dstdomain .YOURDOMAIN.ORG
#acl stratum_ones dstdom_regex YOUR_REGEX
acl stratum_ones dst

# Squid port
http_port 3128

# Deny access to anything which is not part of our stratum_ones ACL.
http_access deny !stratum_ones

# Only allow access from our local machines
#http_access allow local_nodes
http_access allow localhost

# Finally, deny all other access to this proxy
http_access deny all

minimum_expiry_time 0
maximum_object_size 1024 MB

cache_mem 128 MB
maximum_object_size_in_memory 128 KB
# 5 GB disk cache
cache_dir ufs /var/spool/squid 5000 16 256

sudo squid -k parse
sudo systemctl start squid
sudo systemctl enable squid
sudo systemctl status squid
sudo systemctl restart squid

2.5.2 - CVMFS architecture

CVMFS architecture

We store our singularity containers unpacked on CVMFS. We tried the DUCC tool in the beginning, but it was causing too many issues with dockerhub and we were rate limited. The script to unpack our singularity containers is here:

It gets called by a cronjob on the CVMFS Stratum 0 server and relies on the log.txt file being updated via an action in the neurocommand repository.

The Stratum 1 servers then pull this repo from Stratum 0 and our desktops mount these repos.

The startup script sets up CVMFS and tests which server is fastest during container startup.

This can also be done manually:

sudo cvmfs_talk -i host info
sudo cvmfs_talk -i host probe
cvmfs_config stat -v

2.5.3 - Setup Stratum 0 server

Host a Stratum 0 server

Setup a Stratum 0 server:

Setup Storage

(would object storage be better? -> see comment below under next iteration ideas)

lsblk -l
sudo mkfs.ext4 /dev/vdb
sudo mkdir /storage
sudo mount /dev/vdb /storage/ -t auto
sudo chown ec2-user /storage/
sudo chmod a+rwx /storage/
sudo vi /etc/fstab
/dev/vdb  /storage    auto    defaults,nofail   0  2

Setup server

sudo yum install vim htop gcc git screen
sudo timedatectl set-timezone Australia/Brisbane

sudo yum install -y
sudo yum install -y cvmfs cvmfs-server

sudo systemctl enable httpd
sudo systemctl restart httpd

# sudo systemctl stop firewalld

# restore keys:
sudo mkdir /etc/cvmfs/keys/incoming
sudo chmod a+rwx /etc/cvmfs/keys/incoming
cd connections/cvmfs_keys/
scp neuro* ec2-user@
sudo mv /etc/cvmfs/keys/incoming/* /etc/cvmfs/keys/

#backup keys: 
#mkdir cvmfs_keys
#scp opc@* .

sudo cvmfs_server mkfs -o $USER

cd /storage
sudo mkdir -p cvmfs-storage/srv/
cd /srv/
sudo mv cvmfs/ /storage/cvmfs-storage/srv/
sudo ln -s /storage/cvmfs-storage/srv/cvmfs/

cd /var/spool
sudo mkdir /storage/spool
sudo mv cvmfs/ /storage/spool/
sudo ln -s  /storage/spool/cvmfs .

cvmfs_server transaction

cvmfs_server publish
sudo vi /etc/cron.d/cvmfs_resign
0 11 * * 1 root /usr/bin/cvmfs_server resign
cat /etc/cvmfs/keys/
-----END PUBLIC KEY-----

Next iteration of this:

use object storage?

  • current implementation uses block storage, but this makes increasing the volume size a bit more work
  • we couldn’t get object storage to work on Oracle as it assumes AWS S3

Optimize settings for repositories for Container Images

from the CVMFS documentation: Repositories containing Linux container image contents (that is: container root file systems) should use overlayfs as a union file system and have the following configuration:
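The configuration block itself appears to be missing from this export; based on the CernVM-FS documentation, these are the settings in question (verify against the upstream docs before applying):

```shell
# In the repository's server.conf on the Stratum 0:
# records extended attributes and keeps previous revisions accessible.
CVMFS_INCLUDE_XATTRS=true
CVMFS_VIRTUAL_DIR=true
```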


Extended attributes of files, such as file capabilities and SELinux attributes, are recorded, and previous file system revisions can be accessed from the clients.

Currently not used

We tested the DUCC tool in the beginning, but it was leading to too many docker pulls and we therefore replaced it with our own script:

This is the old DUCC setup

sudo yum install cvmfs-ducc.x86_64
sudo -i
dnf install -y yum-utils 
yum-config-manager --add-repo
dnf install docker-ce docker-ce-cli
systemctl enable docker
systemctl start docker
docker version
docker info

# leave root mode

sudo groupadd docker
sudo usermod -aG docker $USER
sudo chown root:docker /var/run/docker.sock
newgrp docker

export DUCC_DOCKER_REGISTRY_PASS=configure_secret_password_here_and_dont_push_to_github
cd neurodesk
git pull
cvmfs_ducc convert recipe_neurodesk_auto.yaml
cd ..

chmod +x

git clone

# setup cron job
sudo vi /etc/cron.d/cvmfs_dockerpull
*/5 * * * * opc cd ~ && bash /home/opc/

#vi recipe.yaml

##version: 1
#user: vnmd
#output_format: '$(scheme)://$(registry)/vnmd/thin_$(image)'
#- ''
#- ''

#cvmfs_ducc convert recipe_neurodesk.yaml
#cvmfs_ducc convert recipe_unpacked.yaml

2.5.4 - Setup Stratum 1 server

Host a Stratum 1 server

The stratum 1 servers for the desktop are configured here:

If you want more speed in a region one way could be to setup another Stratum 1 server or a proxy.

Setup a Stratum 1 server:

sudo yum install -y
sudo yum install -y cvmfs-server squid
sudo yum install -y python3-mod_wsgi 

sudo sed -i 's/Listen 80/Listen' /etc/httpd/conf/httpd.conf

set +H
echo "http_port 80 accel" | sudo tee /etc/squid/squid.conf
echo "http_port 8000 accel" | sudo tee -a /etc/squid/squid.conf
echo "http_access allow all" | sudo tee -a /etc/squid/squid.conf
echo "cache_peer parent 8080 0 no-query originserver" | sudo tee -a /etc/squid/squid.conf
echo "acl CVMFSAPI urlpath_regex ^/cvmfs/[^/]*/api/" | sudo tee -a /etc/squid/squid.conf
echo "cache deny !CVMFSAPI" | sudo tee -a /etc/squid/squid.conf
echo "cache_mem 128 MB" | sudo tee -a /etc/squid/squid.conf

sudo systemctl start httpd
sudo systemctl start squid
sudo systemctl enable httpd
sudo systemctl enable squid

echo 'CVMFS_GEO_LICENSE_KEY=kGepdzqbAP4fjf5X' | sudo tee -a /etc/cvmfs/server.local
sudo chmod 600 /etc/cvmfs/server.local

sudo mkdir -p /etc/cvmfs/keys/

echo "-----BEGIN PUBLIC KEY-----
-----END PUBLIC KEY-----" | sudo tee /etc/cvmfs/keys/

sudo cvmfs_server add-replica -o $USER /etc/cvmfs/keys/

# CVMFS will store everything in /srv/cvmfs so make sure there is enough space or create a symlink to a bigger storage volume
# e.g.:
# cd /storage
# sudo mkdir -p cvmfs-storage/srv/
# cd /srv/
# sudo mv cvmfs/ /storage/cvmfs-storage/srv/
# sudo ln -s /storage/cvmfs-storage/srv/cvmfs/

sudo cvmfs_server snapshot

echo "/var/log/cvmfs/*.log {
}" | sudo tee /etc/logrotate.d/cvmfs

echo '*/5 * * * * root output=$(/usr/bin/cvmfs_server snapshot -a -i 2>&1) || echo "$output" ' | sudo tee /etc/cron.d/cvmfs_stratum1_snapshot

sudo yum install iptables
sudo iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8000

sudo systemctl disable firewalld 
sudo systemctl stop firewalld 
# make sure that port 80 is open in the real firewall

sudo cvmfs_server update-geodb

3 - Documentation

How to edit the documentation

3.1 - Local Hugo Docsy

How to edit the documentation

Local Hugo Docsy in Linux and WSL2

Local Hugo Docsy in Windows

Clone repository

Using SSH

git clone --recurse-submodules

or Https:

git clone --recurse-submodules

If you cloned without --recurse-submodules

Run the following command to pull submodules

git submodule update --init --recursive --remote

Download Hugo binary

Hugo releases are on

Download latest version of hugo extended

e.g. for windows:

Start local hugo server

Extract hugo binary (hugo.exe) to your dir

Run server for windows: .\hugo.exe server --disableFastRender

Once started, dev website will be accessible via http://localhost:1313

Update docsy theme submodule

git submodule update --remote
git add themes/
git commit -m "Updating theme submodule"
git push origin main

4 - How to add new tools

How to add new tools to neurodesk

4.1 - Build Containers

How to contribute a new container.

To make contributing containers easier, we developed an interactive container build system. If you are very familiar with Git and building containers you can also follow the manual process, which you can find documented here:

1) Open an issue to get access to the interactive container build system

  • describe which container you would like to add
  • wait for a reply on your issue that your account has been setup

2) Access the container build system

  • authenticate with your github account
  • select a CPU session or a GPU session (if your container requires a GPU)
  • open a Neurodesktop session

3) Run the interactive build process

  • open a Terminal session
  • run:
cd ~
git clone
cd neurocontainers/interactive_builder/
  • Follow the instructions of the interactive build tool. After a couple of seconds, during which the base image gets updated, you should see a “root@neurodesk-builder:~$>” shell. Now run the commands needed to get your tool to work.
  • Once the tool works, hit CTRL-D or type “exit”
  • Then answer more questions in the build tool

4) Submit the generated files as attachments to your issue

  • once completed, download the generated files and submit them as attachments to your github issue

4.2 - Add tools

Add a tool to neurodesktop

The goal of neurodesk is to provide users with a large choice of tools to use in their pipelines. Use the guide below to add a tool to neurodesktop or neurocontainers.

Guiding principles

To decide if a tool should be packaged in a singularity container in neurocontainers or be installed in the neurodesktop container we are currently following these guiding principles:

  1. neurodesk is not a package manager. This means we are not distributing tools in containers that can easily be installed via a standard package manager
  2. neurodesk allows users to have multiple versions of tools in parallel via lmod. This means that if different versions of a tool can’t be installed in parallel, we package the tool inside a container.
  3. neurodesk aims to provide tooling to link tools from different containers (such as workflow managers like nipype or nextflow). This means that if a tool is required to coordinate various container-tools, it should be in the neurodesktop container.


Criteria considered for each tool: easy install, coordinates containers, small in size, latest version is ok, useful to most users, and the resulting conclusion.

Adding new tools via our interactive container builder:

This is the recommended way for all contributors:

Adding new tools via manual steps

This is only for developers who are familiar with building containers and github:

Adding new recipes

Refer to neurodocker for more information on neurodocker recipes

Build container

Environment Requirements

Install Neurodocker

Neurodocker is the dependency we use to build containers.

  1. (optional) Sync upstream repository:
    If you have the permissions to do so: press “Fetch upstream” in to check if our fork of Neurodocker is already up-to-date. Otherwise, open an issue in, requesting to pull in the latest changes from Neurodocker upstream into our fork of Neurodocker. One of the admins will attend to the issue and perform the operation.
  2. (optional) Add a new neurodocker tool:
    If relevant to your project, add an option to neurodocker that installs new software ( and create a pull request to neurodocker’s main repository (add new tool in a branch!).
  3. Clone our fork of Neurodocker:
    git clone
  4. Install neurodocker:
    cd neurodocker  
    python -m pip install .
    cd ..
  5. Append line to .bashrc for adding the path:
    echo 'export PATH=${PATH}:${HOME}/.local/bin' >> ${HOME}/.bashrc
  6. Close the terminal, and reopen it for the updated PATH to take effect
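Once installed, neurodocker can generate container recipes. The sketch below only composes and prints an example command line (the flags follow the Neurodocker README; the ants version is illustrative):

```shell
# Compose an example neurodocker invocation (printed here, not executed).
gen_cmd="neurodocker generate docker --pkg-manager apt --base-image ubuntu:22.04 --ants version=2.4.3"
echo "$gen_cmd > Dockerfile"
```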

Fork the Neurocontainers repository

  • Fork neurocontainers and setup github actions.

Create a new app

  1. Copy the directory template and rename to NEWAPP in neurocontainers/recipes (NEWAPP being the name of the application to be displayed in Neurodesk’s menu; notice it shouldn’t have any special characters):

    cd neurocontainers/recipes
    cp -R template NEWAPP
  2. Create your Container Files:
    Modify in neurocontainers/recipes/NEWAPP to build your application and update (make sure the version is correct in the README!). Notice that the example build script in the template has instructions to build a container for datalad, which may or may not suit your exact needs

    cd NEWAPP
    (edit as required)
    (edit as required)

    Upload your application to object storage first if needed, so you can then download it in (ask for instructions about this if you don’t know the key, and never share it anywhere public!)

  3. Building containers

    Any NEWAPP under the recipes/ directory is built and pushed automatically via github actions

  4. Build and test the container locally

    1. run the build script with the debug flag:

      cd recipes/NEWAPP
      chmod +x
      ./ -ds

      NOTICE: the file will automatically be updated to reflect the version of the tool given in the script. For this to work, leave “toolVersion” in the README and do not remove or alter it.

    2. test running some commands within the container that should be available in your local docker container repository.

      For example, to open an interactive shell in a container (with the home folder /root bound to /root on host), you may run:

      sudo docker run -it -v /root:/root --entrypoint /bin/bash NEWAPP_VERSION:TAG

      with VERSION being the version of the app, and TAG the version tag of the container (run ‘sudo docker image list’ to find the tag)

    3. if your application requires a Matlab Runtime and you get an error about shared library “” not found, check which version of the runtime was installed by the build script

  5. Update changes in local git repository

    git add .github/workflows/NEWAPP.yml recipes/NEWAPP/ recipes/NEWAPP/ recipes/NEWAPP/
    git config "the email that you use for github"
    git config "your name"
    git commit

Push the new or updated app to Neurocontainers


Generate git personal access token (if you don’t have one already)

  1. Browse to
  2. Log into your account
  3. Press on your picture in upper right corner → Setting → Developer Settings → Personal Access Token
  4. Press on “generate personal access token”
  5. Write something in “Notes” (doesn’t matter what, it’s for your own use)
  6. Check “repo”
  7. Check “Workflow”
  8. Press “Generate Token” at the bottom
  9. Copy the token displayed to somewhere safe, as you will have to use it later

Verify that user has write permission to /neurocommand/local

  1. If not, run sudo chmod a+w /neurocommand/local

Step by step guide

  1. Test the container locally, and if successful push repo to trigger the automatic build on GitHub. When asked for your Github password, please provide the personal access token obtained in the previous stage.

    git pull
    git push
  2. Go to Check that the most recent workflow run in the list terminated successfully (green). Otherwise, click on it, click on “build docker”, and the line that caused the error will be highlighted

  3. Find your new package under
    Enter the name of the package in the search box, and verify that the full package name shows up in the format toolName_toolVersion

  4. Obtain buildDate by clicking on the full package name that came up in the search. The build date will be the newest date shown under Recent tagged image versions

  5. If updating an app, use toolName to delete the locally installed container of the old app version or old app build:

    rm -R /neurocommand/local/containers/toolName_*/
    rm -R /neurocommand/local/containers/modules/toolName/
  6. Use toolName, toolVersion and buildDate from the previous two steps to manually download the package by typing the following in a terminal open in Neurodesktop

    bash /neurocommand/local/ toolName toolVersion buildDate
    (when you see the "Singularity>" prompt, type exit and ENTER)
    ml toolName/toolVersion

    For example: If the full package name that comes up in the search is itksnap_3.8.0, and the newest date under Recent tagged image versions is 20210322

    The command to use in a terminal open in Neurodesktop is:

    bash /neurocommand/local/ itksnap 3.8.0 20210322
    (when you see the "Singularity>" prompt, type exit and ENTER)
     ml toolName/toolVersion
  7. Test the new container. Run some commands to check that everything works.
    If the container doesn’t work yet, it’s sometimes useful to try and troubleshoot it and install missing libraries. This can be achieved by running it in a writable mode with fakeroot enabled:

    SINGULARITY_BINDPATH=''; singularity shell --writable --fakeroot /neurodesktop-storage/containers/toolName_toolVersion_buildDate/toolName_toolVersion_buildDate.simg
  8. Fork to your Github account

  9. Edit an entry for your package in your fork of neurocommand/blob/main/neurodesk/apps.json based on one of the other entries (generating one menu item for opening a terminal inside the containers, and one menu item for the GUI, if relevant). Notice that in the json file, the version field should contain the buildDate rather than the toolVersion! toolVersion should instead be included in the text of the menu entry itself, e.g., “fsl 6.0.3”. Also notice that whereas categories appear in the Neurodesktop menu in start case (first letter of each word capitalized), in the json files they are in sentence case (all letters lower case).

  10. Include an icon file in your fork of neurocommand/neurodesk/icons

  11. Send a pull request from your fork of neurocommand to

  12. When the pull request is merged by Neurodesk admins, it will trigger an action to build the singularity container, distribute it in all object storage locations and on CVMFS, and it will update the menus in the desktop image on the next daily build.

  13. Wait at least 24 hours

  14. Download and run the daily build of neurodesktop to check that your app can be launched from the start menu and works properly:

    sudo docker pull vnmd/neurodesktop:latest && sudo docker run --shm-size=1gb -it --privileged --user=root --name neurodesktop -v ~/neurodesktop-storage:/neurodesktop-storage -e HOST_UID="$(id -u)"  -e HOST_GID="$(id -g)" -p 8888:8888 -e NEURODESKTOP_VERSION=latest vnmd/neurodesktop:latest
  15. Open an issue in notifying that your app appears in the start menu and has been tested. The app will be included in the next release of Neurodesktop, and will be mentioned in the public announcement that accompanies the release. If the app is not in the start menu or not working as expected based on your earlier testing, open an issue as well, and report it.

  16. If somebody wants to use the application before the next release of Neurodesktop is out, you can instruct them to use the command in step 14 above instead of the default commands given in the user install instructions.

  17. Consider contributing a tutorial about the new tool:

4.3 - Menu entries

Menu entries in neurodesktop

As we want to offer several versions of each tool, each piece of software should have its own submenu under VNM Neuroimaging. To do so, you first have to add a submenu to menus/ by adding:

<Menu> <!-- [[Tool Name]] submenu -->
    <Name>[[Tool Name]]</Name>
    <Directory>vnm-[[tool-name]].directory</Directory>
</Menu> <!-- End [[Tool Name]] -->

The following table shows the formatting rules to follow:

[[Tool name]]    Capitalized, spaces                       ITK snap
[[tool-name]]    Lower case, no spaces (use - instead)     itk-snap or itksnap
[[Tool-name]]    Capitalized, no spaces (use - instead)    ITK-snap

Next, we have to create the submenu itself as we referenced it by vnm-[[tool-name]].directory. To do so, create the file menus/submenus/vnm-[[tool-name]].directory and add the following information inside:

[Desktop Entry]
Name=[[Tool Name]]
Comment=[[Tool Name]]
Icon=[[icon-name]]

If a specific icon is available in the menus/icons directory, replace [[icon-name]] by its name. Otherwise, use vnm.

Create the application

Finally, we have to create the actual application by creating the file menus/applications/vnm-[[tool-name]]-[[0.0.0]].desktop. The name of this file must contain the version of the tool (once again to allow multiple versions to live inside the same directory). Add the following description to this file:

[Desktop Entry]
Name=[[Tool Name]] [[0.0.0]] [[(Install only)]]
GenericName=[[Tool Name]] [[0.0.0]]
Comment=The description of what clicking on this application does. # This will be the tooltip of the application.
Exec=The command used to run the application.
Terminal=true # or false

The important part here is the value of Exec. If the tool is in the form of a singularity image, you should run the following command:

bash /usr/share/ [[tool-name]] [[0.0.0]] [[YYYYMMDD]] [[cmd]] [[args]]

The script first checks if the image is already installed as a module. If not, it checks whether it can be installed (returning 1 if that is not possible) and then installs the image as a module. If [[cmd]] is specified, it runs that command with the arguments from [[args]] once the image is installed. Here are two examples for FreeSurfer and FreeView. This first one only installs the image as a module:

bash /usr/share/ freesurfer 6.0.1 20200506

And this does the same but runs FreeView afterward:

bash /usr/share/ freesurfer 6.0.1 20200506 freeview

The resulting .desktop file corresponding to FreeView contains:

[Desktop Entry]
Name=FreeView 6.0.1
GenericName=FreeView 6.0.1
Comment=Start FreeView 6.0.1
Exec=bash /usr/share/ freesurfer 6.0.1 20200506 freeview