1 - Contributors

This section acknowledges the contributions made to the project.

If you contributed to the project, please list yourself here with a description of your contribution. We try to update this page based on the git commit history:

Steffen Bollmann

  • https://github.com/stebo85
  • funding: Oracle Cloud (114k AUD), ECR Knowledge Exchange & Translation Fund (42k AUD), ARDC (CI for $566k AUD)
  • system architecture
  • CVMFS container deployment
  • initial desktop container prototype
  • container build scripts
  • application containers (afni, aslprep, code, convert3D, freesurfer, hdbet, minc, mriqc, romeo, spm12, tgvqsm, bart, fatsegnet, fsl, itksnap, lcmodel, mritools, niistat, qsmxt, root, slicer, trackvis, ants, cat12, conn, diffusiontoolkit, gimp, mricrogl, mrtrix3, rstudio, slicersalt, surfice, vesselapp, bidstools, clearswi, connectomeworkbench, dsistudio, fmriprep, julia, mrtrix3tissue, palm, rabies, spinalcordtoolbox)
  • migrating container recipes and bugfixes to neurodocker upstream (fsl, ants)
  • documentation
  • tutorials (QSM, SWI, Unwrapping, lcmodel, freesurfer)
  • google colab support
  • outreach (e.g. Twitter, talks at conferences, youtube videos)

Aswin Narayanan

  • https://github.com/aswinnarayanan
  • funding: ARDC (CI for $566k AUD)
  • Neurocontainer devops
  • Neurodesktop development
  • Neurocommand installer rewrite
  • Neurodesk Play & Kubernetes implementation
  • Jupyter notebook support
  • Hugo website build and documentation

Angela Renton

  • https://github.com/air2310
  • Tutorials (MNE-Python, Tutorial template)
  • graphics for website (layer diagram)
  • documentation
  • user testing
  • neurodesk paper lead author

Thomas Shaw

  • https://github.com/thomshaw92
  • Win, Mac, Linux startup scripts
  • initial transparent singularity prototype
  • application container development (LASHiS, ASHS)
  • user testing

Oren Civier

  • https://github.com/civier
  • funding: ARDC (CI for $566k AUD)
  • documentation
  • software application containers (bidscoin, matlab)
  • user testing

Tom Johnston

Martin Grignard

David White

Akshaiy Narayanan

Kelly Garner

Paris Lyons

  • design of Neurodesk Logo
  • project management of AEDAPT project

Thuy Dao

  • https://github.com/iishiishii
  • Application search tool with lunr
  • proof of concept for GUI
  • application container development (civet)
  • documentation (github workflow)

Ashley Stewart

  • https://github.com/astewartau
  • application container development (qsmxt)
  • presentation of neurodesk at OHBM Brainhack 2022 and OHBM educational course 2022

Lars Kasper

Judy D Zhu

Korbinian Eckstein

Stefanie Evas

Jeryn Chang

Sin Kim

Jakub Kaczmarzyk

Alan Hockings

Aditya Garg

Xincheng Ye

Kexin Lou

Renzo Huber

Steering Committee members without code contributions:

  • Ryan Sullivan, University of Sydney, Key User, Steering Committee
  • Thomas Close, University of Sydney, Key User, Scientific/Subject Expert Advisory Board
  • Wojtek Goscinski, Monash University, Steering Committee, Technical Advisory Board
  • Tony Hannan, Florey Institute of Neuroscience and Mental Health, Scientific/Subject Expert Advisory Board
  • Gary Egan, Monash University, Steering Committee
  • Paul Sowman, Macquarie University, Key User, Scientific/Subject Expert Advisory Board
  • Marta Garrido, University of Melbourne, Key User, Scientific/Subject Expert Advisory Board
  • Patrick Johnston, Queensland University of Technology, Key User, Scientific/Subject Expert Advisory Board
  • Aina Puce, Indiana University, Key User, Scientific/Subject Expert Advisory Board
  • Franco Pestilli, Indiana University, Technical Advisory Board
  • Levin Kuhlmann, Monash University, Key User, Scientific/Subject Expert Advisory Board
  • Gershon Spitz, Monash Epworth Rehabilitation Research Centre, Key User, Scientific/Subject Expert Advisory Board
  • David Abbott, Florey Institute of Neuroscience and Mental Health, Key User, Scientific/Subject Expert Advisory Board
  • Megan Campbell, The University of Newcastle, Key User, Scientific/Subject Expert Advisory Board
  • Nigel Rogasch, University of Adelaide, Key User, Scientific/Subject Expert Advisory Board
  • Will Woods, Swinburne University of Technology, Key User
  • Satrajit Ghosh, Massachusetts Institute of Technology, Provision of advice only

2 - Architecture

The architecture of the Neurodesk ecosystem

2.1 - Neurodesktop Release Process

A description of what to do to create a new release of Neurodesktop
  1. Check if the last automated build ran OK: https://github.com/NeuroDesk/neurodesktop/actions
  2. Run this build (the image tagged with that build date) and test that everything works and that no regressions happened
  3. Check what changes were made since the last release: https://github.com/NeuroDesk/neurodesktop/commits/main
  4. Summarize the main changes and copy this to the Release History: https://www.neurodesk.org/docs/neurodesktop/release-history/
  5. Change the version of the latest desktop in https://github.com/NeuroDesk/neurodesk.github.io/blob/hugo-docsy/data/neurodesktop.toml
  6. Commit all changes
  7. Tweet a quick summary of the changes and announce new version: https://twitter.com/neuro_desk
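
To smoke-test the latest automated build before tagging the release, you can pull and run the current image locally; this is a minimal sketch based on the standard run command (the container name used here is just illustrative). Open http://localhost:8080 afterwards and launch a few application containers:

sudo docker pull vnmd/neurodesktop:latest
sudo docker run \
  --shm-size=1gb -it --privileged --name neurodesktop-release-test \
  -v ~/neurodesktop-storage:/neurodesktop-storage \
  -e HOST_UID="$(id -u)" -e HOST_GID="$(id -g)" \
  -p 8080:8080 -h neurodesktop \
  vnmd/neurodesktop:latest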

2.2 - Neurodesk Architecture

The architecture of the Neurodesk ecosystem

Layers

Neurodesktop: https://github.com/NeuroDesk/neurodesktop

  • docker container with interface modifications
  • contains tools necessary to manage workflows in sub-containers: vscode, git
  • CI: builds docker image and tests if it runs; tests if CVMFS servers are OK before deployment
  • CD: pushes images to github & docker registry

Neurocommand: https://github.com/NeuroDesk/neurocommand

  • script to install and manage multiple containers using transparent singularity on any Linux system
  • this repo also handles the creation of menu entries in a general form applicable to different desktop environments
  • this repo can be re-used in other projects like CVL and when installing it on bare-metal workstations
  • CI: tests if containers can be installed
  • CD: this repo checks if the containers requested in the apps.json file are available on object storage; if not, it builds the singularity containers from the docker containers and uploads them to object storage

transparent-singularity: https://github.com/NeuroDesk/transparent-singularity

  • scripts to install neuro-sub-containers; these installers are called by neurocommand
  • this repo provides a way of using our containers on HPCs for large-scale processing pipelines (including support for SLURM and other job schedulers)
  • CI: test if exposing of binaries from container works

Neurocontainers: https://github.com/NeuroDesk/neurocontainers

  • build scripts for neuro-sub-containers
  • CI: building and testing of containers
  • CD: pushing containers to github and dockerhub registry

Neurodocker: https://github.com/NeuroDesk/neurodocker

  • fork of neurodocker project
  • provides the recipes for our container builds
  • we contribute recipes back to the upstream project via pull requests
  • CI: handled by neurodocker - testing of generating container recipes

2.3 - Neurodesktop Dev

Testing the latest dev version of Neurodesktop

Building neurodesktop-dev

Dev builds can be triggered by Neurodesk admins from https://github.com/NeuroDesk/neurodesktop/actions/workflows/build-neurodesktop-dev.yml

Running latest neurodesktop-dev

Linux

docker pull vnmd/neurodesktop-dev:latest
sudo docker run \
  --shm-size=1gb -it --cap-add SYS_ADMIN \
  --security-opt apparmor:unconfined --device=/dev/fuse \
  --name neurodesktop-dev \
  -v ~/neurodesktop-storage:/neurodesktop-storage \
  -e HOST_UID="$(id -u)" -e HOST_GID="$(id -g)" \
  -p 8080:8080 -h neurodesktop-dev \
  vnmd/neurodesktop-dev:latest

Windows

docker pull vnmd/neurodesktop-dev:latest
docker run --shm-size=1gb -it --cap-add SYS_ADMIN --security-opt apparmor:unconfined --device=/dev/fuse --name neurodesktop -v C:/neurodesktop-storage:/neurodesktop-storage -p 8080:8080 -h neurodesktop-dev vnmd/neurodesktop-dev:latest

3 - Documentation

How to edit the documentation

3.1 - Local Hugo Docsy

How to edit the documentation

Local Hugo Docsy in Linux and WSL2

https://github.com/NeuroDesk/neurodesk.github.io/blob/hugo-docsy/CONTRIBUTING.md

Local Hugo Docsy in Windows

Clone repository

Using SSH

git clone --recurse-submodules git@github.com:NeuroDesk/neurodesk.github.io.git

or using HTTPS:

git clone --recurse-submodules https://github.com/NeuroDesk/neurodesk.github.io.git

If you cloned without --recurse-submodules

Run the following command to pull submodules

git submodule update --init --recursive --remote

Download Hugo binary

Hugo releases are on https://github.com/gohugoio/hugo/releases

Download latest version of hugo extended

e.g. for windows: https://github.com/gohugoio/hugo/releases/download/v0.88.1/hugo_extended_0.88.1_Windows-64bit.zip

Start local hugo server

Extract hugo binary (hugo.exe) to your neurodesk.github.io dir

Run server for windows: .\hugo.exe server --disableFastRender

Once started, dev website will be accessible via http://localhost:1313

Update docsy theme submodule

git submodule update --remote
git add themes/
git commit -m "Updating theme submodule"
git push origin hugo-docsy

4 - How to add new tools

How to add new tools to neurodesk

4.1 - Get Neurodesk code

Clone neurocontainer code

Get Neurocontainers code

Neurocontainers uses a forked-repo and rebase-oriented workflow. This means that all contributors create a fork of the neurocontainers repository they want to contribute to and then submit pull requests to the upstream repository to have their contributions reviewed and accepted. We also recommend you work on feature branches.

Step 1a: Create your fork

You'll only need to do the following steps the first time you set up a machine for contributing to Neurocontainers. You'll need to repeat the steps for any additional NeuroDesk projects (list) that you work on.

The first thing you’ll want to do to contribute to NeuroDesk is fork (see how) the appropriate NeuroDesk repository.

Step 1b: Clone to your machine

Next, clone your fork to your local machine:

$ git clone --config pull.rebase https://github.com/YOUR_USERNAME/neurocontainers.git
Cloning into 'neurocontainers'...
remote: Enumerating objects: 6730, done.
remote: Counting objects: 100% (504/504), done.
remote: Compressing objects: 100% (229/229), done.
remote: Total 6730 (delta 308), reused 423 (delta 269), pack-reused 6226
Receiving objects: 100% (6730/6730), 1.67 MiB | 196.00 KiB/s, done.
Resolving deltas: 100% (4222/4222), done.

(The --config pull.rebase option configures Git so that git pull will behave like git pull --rebase by default. Using git pull --rebase to update your changes to resolve merge conflicts is expected by essentially all open source projects. You can also set that option after cloning using git config --add pull.rebase true, or just be careful to always run git pull --rebase, never git pull.)
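
If you forgot the --config pull.rebase option when cloning, you can set the equivalent configuration afterwards, for example:

git config --add pull.rebase true        # for this repository only
git config --global pull.rebase true     # or as the default for all repositories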

Note: If you receive an error while cloning, you may not have added your ssh key to GitHub.

Step 1c: Connect your fork to Neurocontainers upstream

Next you’ll want to configure an upstream remote repository for your fork of Neurocontainers. This will allow you to sync changes from the main project back into your fork.

First, show the currently configured remote repository:

$ git remote -v
origin  git@github.com:YOUR_USERNAME/neurocontainers.git (fetch)
origin  git@github.com:YOUR_USERNAME/neurocontainers.git (push)

Note: If you've cloned the repository using the GitHub GUI, you may already have the upstream remote repository configured. For example, when you clone NeuroDesk/neurocontainers with the GitHub desktop client it configures the remote repository neurocontainers and you see the following output from git remote -v:

origin  git@github.com:YOUR_USERNAME/neurocontainers.git (fetch)
origin  git@github.com:YOUR_USERNAME/neurocontainers.git (push)
neurocontainers	https://github.com/NeuroDesk/neurocontainers.git (fetch)
neurocontainers	https://github.com/NeuroDesk/neurocontainers.git (push)

If your client hasn't automatically configured a remote for NeuroDesk/neurocontainers, you'll need to add one with:

$ git remote add -f upstream https://github.com/NeuroDesk/neurocontainers.git

Finally, confirm that the new remote repository, upstream, has been configured:

$ git remote -v
origin	https://github.com/YOUR_USERNAME/neurocontainers.git (fetch)
origin	https://github.com/YOUR_USERNAME/neurocontainers.git (push)
upstream	https://github.com/NeuroDesk/neurocontainers.git (fetch)
upstream	https://github.com/NeuroDesk/neurocontainers.git (push)

Step 2: Set up the Neurocontainers development environment

If you haven’t already, now is a good time to install the Neurocontainers development environment (Add tools).

Step 3: Configure continuous integration for your fork

This step is optional, but recommended.

  1. Go to your neurocontainers fork.
  2. If the Actions tab is missing, go to Settings > Actions, select Allow all actions, and then Save.
  3. In the actions tab, select “I understand my workflows, go ahead and enable them”

Neurocontainers is configured to use GitHub Actions to test and create builds upon each new commit and pull request. GitHub Actions is the primary CI that runs frontend and backend tests across a wide range of Ubuntu distributions.

GitHub Actions is free for open source projects and it’s easy to configure for your own fork of neurocontainer. After doing so, GitHub Actions will run tests for new refs you push to GitHub and email you the outcome (you can also view the results in the web interface).

Running CI against your fork can help save both your and the NeuroDesk maintainers time by making it easy to test a change fully before submitting a pull request. We generally recommend a workflow where as you make changes, you use a fast edit-refresh cycle running individual tests locally until your changes work. But then once you’ve gotten the tests you’d expect to be relevant to your changes working, push a branch to run the full test suite in GitHub Actions before you create a pull request. While you wait for GitHub Actions jobs to run, you can start working on your next task. When the tests finish, you can create a pull request that you already know passes the tests.

GitHub Actions will run all the jobs by default on your forked repository. You can check the Actions tab of your repository to see the builds.

4.2 - Using Git

Contribution workflow using Git

Working copies

When you work on Neurocontainers code, there are three copies of the Neurocontainers Git repository that you are generally concerned with:

  • The upstream remote: This is the official Neurocontainers repository on GitHub. You probably don't have write access to this repository.
  • The origin remote: Your personal remote repository on GitHub. You'll use this to share your code and create pull requests.
  • The local copy: This lives on your laptop or your remote dev instance, and is what you'll use to make changes and create commits.

When you work on Neurocontainers code, you will end up moving code between the various working copies.

Workflows

Sometimes you need to get commits. Here are some scenarios:

  • You may fork the official Neurocontainers repository to your GitHub fork.
  • You may fetch commits from the official Neurocontainers repository to your local copy.
  • You occasionally may fetch commits from your forked copy.

Sometimes you want to publish commits. Here are some scenarios:

  • You push code from your local copy to your GitHub fork. (You usually want to put the commit on a feature branch.)
  • You submit a PR to the official Neurocontainers repo.

Finally, the NeuroDesk core team will occasionally want your changes!

  • The NeuroDesk core team can accept your changes and add them to the official repo, usually on the master branch.

Relevant Git commands

The following commands are useful for moving commits between working copies:

  • git fetch: This grabs code from another repository to your local copy. (Defaults to fetching from your default remote, origin).
  • git fetch upstream: This grabs code from the upstream repository to your local copy.
  • git push: This pushes code from your local repository to one of the remotes.
  • git remote: This helps you configure short names for remotes.
  • git pull: This pulls code, but by default creates a merge commit (which you definitely don’t want). However, if you’ve followed our cloning documentation, this will do git pull --rebase instead, which is the only mode you’ll want to use when working on Neurodesk.

Know what branch you’re working on

When using Git, it’s important to know which branch you currently have checked out because most Git commands implicitly operate on the current branch. You can determine the currently checked out branch several ways.

One way is with git status:

$ git status
On branch newapp
nothing to commit, working directory clean

Another is with git branch which will display all local branches, with a star next to the current branch:

$ git branch
* newapp
  master

To see even more information about your branches, including remote branches, use git branch -vva:

$ git branch -vva
* civet_2.1.1                             f736814 [origin/civet_2.1.1] set DEPLOY_PATH
  master                                  a0f0455 [origin/master] Merge pull request #129
  remotes/origin/cat12_with_neurodocker   763f6de works :)
  remotes/origin/civet_2.1.1              f736814 set DEPLOY_PATH
  remotes/origin/master                   a0f0455 Merge pull request #129

You can also configure Bash and Zsh to display the current branch in your prompt.
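
For example, Bash can show the current branch in your prompt via git's bundled prompt helper. This is a minimal sketch; the location of git-prompt.sh differs between distributions, so treat the path below as an example:

# add to ~/.bashrc (find your copy with e.g.: locate git-prompt.sh)
source /usr/share/git-core/contrib/completion/git-prompt.sh
export PS1='\u@\h:\w$(__git_ps1 " (%s)")\$ '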

Keep your fork up to date

You'll want to keep your fork up-to-date with changes from the Neurocontainers master repository.

Note about git pull: Rather than using git pull, which by default is a shortcut for git fetch && git merge FETCH_HEAD (docs), you should use git pull --rebase, which is like git fetch and then git rebase.

First, fetch changes from the Neurocontainers upstream repository you configured in the step above:

$ git fetch upstream

Next, check out your master branch and rebase it on top of upstream/master:

$ git checkout master
Switched to branch 'master'

$ git rebase upstream/master

This will roll back any changes you've made to master, update it from upstream/master, and then re-apply your changes. Rebasing keeps the commit history clean and readable.

When you’re ready, push your changes to your remote fork. Make sure you’re in branch master and then run git push:

$ git checkout master
$ git push origin master

You can keep any branch up to date using this method. If you’re working on a feature branch (see next section), which we recommend, you would change the command slightly, using the name of your feature-branch rather than master:

$ git checkout feature-branch
Switched to branch 'feature-branch'

$ git rebase upstream/master

$ git push origin feature-branch

Work on a feature branch

One way to keep your work organized is to create a branch for each issue or feature. You can and should create as many branches as you’d like.

First, make sure your master branch is up-to-date with Neurocontainers upstream (see how).

Next, from your master branch, create a new tracking branch, providing a descriptive name for your feature branch:

$ git checkout master
Switched to branch 'master'

$ git checkout -b issue-1755-fail2ban
Switched to a new branch 'issue-1755-fail2ban'

Alternatively, you can create a new branch explicitly based off upstream/master:

$ git checkout -b issue-1755-fail2ban upstream/master
Switched to a new branch 'issue-1755-fail2ban'

Now you’re ready to work on the issue or feature.

Stage changes

Recall that files tracked with Git have three possible states: committed, modified, and staged.

To prepare a commit, first add the files with changes that you want to include in your commit to your staging area. You add both new files and existing ones. You can also remove files from staging when necessary.

Get status of working directory

To see which files in the working directory have changes that have not been staged, use git status.

If you have no changes in the working directory, you’ll see something like this:

$ git status
On branch issue-123
nothing to commit, working directory clean

If you have unstaged changes, you’ll see something like this:

On branch issue-123
Untracked files:
  (use "git add <file>..." to include in what will be committed)

        build.sh

nothing added to commit but untracked files present (use "git add" to track)

Stage additions with git add

To add changes to your staging area, use git add <filename>. Because git add is all about staging the changes you want to commit, you use it to add new files as well as files with changes to your staging area.

Continuing our example from above, after we run git add build.sh, we’ll see the following from git status:

On branch issue-123
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        new file:   build.sh

You can view the changes in files you have staged with git diff --cached. To view changes to files you haven’t yet staged, just use git diff.

If you want to add all changes in the working directory, use git add -A (documentation).

You can also stage changes using your Github GUI.

If you stage a file, you can undo it with git reset HEAD <filename>. Here’s an example where we stage a file build.sh and then unstage it:

$ git add build.sh
On branch issue-1234
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        new file:   build.sh

$ git reset HEAD build.sh
$ git status
On branch issue-1234
Untracked files:
  (use "git add <file>..." to include in what will be committed)

        build.sh

nothing added to commit but untracked files present (use "git add" to track)

Stage deletions with git rm

To remove existing files from your repository, use git rm (documentation). This command can either stage the file for removal from your repository AND delete it from your working directory or just stage the file for deletion and leave it in your working directory.

To stage a file for deletion and remove it from your working directory, use git rm <filename>:

$ git rm test.txt
rm 'test.txt'

$ git status
On branch issue-1234
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        deleted:    test.txt

$ ls test.txt
ls: No such file or directory

To stage a file for deletion and keep it in your working directory, use git rm --cached <filename>:

$ git rm --cached test2.txt
rm 'test2.txt'

$ git status
On branch issue-1234
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        deleted:    test2.txt

$ ls test2.txt
test2.txt

If you stage a file for deletion with the --cached option, and haven’t yet run git commit, you can undo it with git reset HEAD <filename>:

$ git reset HEAD test2.txt

Unfortunately, you can't restore a file deleted with git rm if you didn't use the --cached option. However, git rm only deletes files it knows about. Files you have never added to Git won't be deleted.

Commit changes

When you’ve staged all your changes, you’re ready to commit. You can do this with git commit -m "My commit message." to include a commit message.

Here’s an example of committing with the -m for a one-line commit message:

$ git commit -m "Add a test commit for docs."
[issue-123 173e17a] Add a test commit for docs.
 1 file changed, 1 insertion(+)
 create mode 100644 newfile.py

You can also use git commit without the -m option, which will open your editor and allow you to easily draft a multi-line commit message.

How long your commit message should be depends on where you are in your work. Using short, one-line messages for commits related to in-progress work makes sense. For a commit that you intend to be final or that encompasses a significant amount or complex work, you should include a longer message.

Keep in mind that your commit should contain a ‘minimal coherent idea’ and have a quality commit message.

Here’s an example of a longer commit message that will be used for a pull request:

Add CIVET 2.1.1 container.

Edit build.sh and README.md to build container for CIVET 2.1.1

Tested on my local Ubuntu development server, but need to test within Neurodesktop.

Fixes #1755.

The first line is the summary. The following paragraphs are full prose and explain why and how the change was made. It explains what testing was done and asks specifically for further testing. The final paragraph indicates that this commit addresses and fixes issue #1755. When you submit your pull request, GitHub will detect and link this reference to the appropriate issue. Once your commit is merged into upstream/master, GitHub will automatically close the referenced issue. See Closing issues via commit messages for details.

Note in particular that GitHub’s regular expressions for this feature are sloppy, so phrases like Partially fixes #1234 will automatically close the issue. Phrases like Fixes part of #1234 are a good alternative.

Make as many commits as you need to address the issue or implement your feature.

Push your commits to GitHub

As you’re working, it’s a good idea to frequently push your changes to GitHub. This ensures your work is backed up should something happen to your local machine and allows others to follow your progress. It also allows you to work from multiple computers without losing work.

Pushing to a feature branch is just like pushing to master:

$ git push origin <branch-name>
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 658 bytes | 0 bytes/s, done.
Total 6 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3), completed with 1 local objects.
To git@github.com:christi3k/neurocontainers.git
 * [new branch]      issue-demo -> issue-demo

If you want to see what Git will do without actually performing the push, add the -n (dry-run) option: git push -n origin <branch-name>. If everything looks good, re-run the push command without -n.

If the feature branch does not already exist on GitHub, it will be created when you push and you’ll see * [new branch] in the command output.

Examine and tidy your commit history

Examining your commit history prior to submitting your pull request is a good idea. Will the person reviewing your commit history be able to clearly understand your progression of work?

On the command line, you can use the git log command to display an easy to read list of your commits:

$ git log --all --graph --oneline --decorate

* 4f8d75d (HEAD -> 1754-docs-add-git-workflow) docs: Add details about configuring Travis CI.
* bfb2433 (origin/1754-docs-add-git-workflow) docs: Add section for keeping fork up-to-date to Git Guide.
* 4fe10f8 docs: Add sections for creating and configuring fork to Git Guide.
* 985116b docs: Add graphic client recs to Git Guide.
* 3c40103 docs: Add stubs for remaining Git Guide sections.
* fc2c01e docs: Add git guide quickstart.
| * f0eaee6 (upstream/master) bug: Fix traceback in get_missed_message_token_from_address().

Alternatively, use your graphical client to view the history for your feature branch.

If you need to update any of your commits, you can do so with an interactive rebase. Common reasons to use an interactive rebase include:

  • squashing several commits into fewer commits
  • splitting a single commit into two or more
  • rewriting one or more commit messages

There is ample documentation on how to rebase, so we won’t go into details here. We recommend starting with GitHub’s help article on rebasing and then consulting Git’s documentation for git-rebase if you need more details.

If all you need to do is edit the commit message for your last commit, you can do that with git commit --amend. See Git Basics - Undoing Things for details on this and other useful commands.

Force-push changes to GitHub after you’ve altered your history

Any time you alter history for commits you have already pushed to GitHub, you’ll need to prefix the name of your branch with a +. Without this, your updates will be rejected with a message such as:

$ git push origin 1754-docs-add-git-workflow
To git@github.com:christi3k/neurocontainers.git
 ! [rejected] 1754-docs-add-git-workflow -> 1754-docs-add-git-workflow (non-fast-forward)
error: failed to push some refs to 'git@github.com:christi3k/neurocontainers.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

Re-running the command with +<branch> allows the push to continue by re-writing the history for the remote repository:

$ git push origin +1754-docs-add-git-workflow
Counting objects: 12, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (12/12), 3.71 KiB | 0 bytes/s, done.
Total 12 (delta 8), reused 0 (delta 0)
remote: Resolving deltas: 100% (8/8), completed with 2 local objects.
To git@github.com:christi3k/neurocontainers.git
 + 2d49e2d...bfb2433 1754-docs-add-git-workflow -> 1754-docs-add-git-workflow (forced update)

This is perfectly okay to do on your own feature branches, especially if you’re the only one making changes to the branch. If others are working along with you, they might run into complications when they retrieve your changes because anyone who has based their changes off a branch you rebase will have to do a complicated rebase.

4.3 - Add tools

Add a tool to neurodesktop

The goal of neurodesk is to provide users with a large choice of tools to use in their pipelines. Use the guide below to add a tool to neurodesktop or neurocontainers.

Guiding principles

To decide if a tool should be packaged in a singularity container in neurocontainers or be installed in the neurodesktop container we are currently following these guiding principles:

  1. neurodesk is not a package manager. This means we are not distributing tools in containers that can easily be installed via a standard package manager
  2. neurodesk allows users to have multiple versions of tools in parallel via lmod, this means that if different versions of a tool can’t be installed in parallel we package the tool inside a container.
  3. neurodesk aims to provide tooling to link tools from different containers (such as workflow managers like nipype or nextflow). This means that if a tool is required to coordinate various container-tools, it should be in the neurodesktop container.

Examples:

Tool         easy install   coordinates containers   small in size   latest version is ok   useful to most users   Conclusion
git          yes            yes                      yes             yes                    yes                    neurodesktop
lmod         no             yes                      yes             yes                    yes                    neurodesktop
nipype       yes            yes                      yes             yes                    yes                    neurodesktop
vscode       yes            yes                      yes             yes                    yes                    neurodesktop
itksnap      yes            no                       yes             yes                    yes                    container?
convert3D    yes            no                       yes             no                     no                     container
fsl          no             no                       no              no                     no                     container
mrtrix       no             no                       no              no                     no                     container
freesurfer   no             no                       no              no                     no                     container

Adding new recipes

Refer to neurodocker for more information on neurodocker recipes

Build container

Environment Requirements

  • Docker
  • Recent Python Version
    Search for "python_requires" in https://github.com/NeuroDesk/neurodocker/blob/master/setup.cfg for the minimum Python version required. If you have several versions of Python installed, typing python in the terminal should launch a version at or above that minimum
  • Python pip3
    This should be invoked as python -m pip
  • git
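
A quick way to check these requirements from a terminal (assuming python points to a Python 3 interpreter):

docker --version
python --version          # compare against python_requires in neurodocker's setup.cfg
python -m pip --version
git --version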

Install Neurodocker

Neurodocker is the dependency we use to build containers.

  1. (optional) Sync upstream repository:
    If you have the permissions to do so: Press "Fetch upstream" in https://github.com/NeuroDesk/neurodocker to check whether our fork of Neurodocker is already up-to-date and sync it. Otherwise, open an issue in https://github.com/NeuroDesk/neurocontainers/issues, requesting to pull the latest changes from Neurodocker upstream into our fork of Neurodocker. One of the admins will attend to the issue and perform the operation.
  2. (optional) Add a new neurodocker tool:
    If relevant to your project, add an option to neurodocker that installs the new software (https://github.com/NeuroDesk/neurodocker) and create a pull request to neurodocker's main repository (add the new tool in a branch!).
  3. Clone our fork of Neurodocker:
    git clone https://github.com/NeuroDesk/neurodocker/
  4. Install neurodocker:
    cd neurodocker  
    python -m pip install .
    cd ..
  5. Run: echo 'export PATH=${PATH}:${HOME}/.local/bin' >> ${HOME}/.bashrc
  6. Close the terminal, and reopen it for the updated PATH to take effect
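
To check that the installation worked, you can ask the neurodocker command for its help text:

neurodocker --help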

Clone the Neurocontainers repository

  • Option A) Fork neurocontainers and setup github actions:
    Follow the steps in Get Neurodesk code.

  • Option B) Clone from NeuroDesk:

    git clone https://github.com/NeuroDesk/neurocontainers/

Create a new app

  1. Copy the directory template and rename to NEWAPP in neurocontainers/recipes (NEWAPP being the name of the application to be displayed in Neurodesk’s menu; notice it shouldn’t have any special characters):

    cd neurocontainers/recipes
    cp -R template NEWAPP
  2. Create your Container Files:
    Modify build.sh in neurocontainers/recipes/NEWAPP to build your application and update README.md (make sure the version is correct in the README!). Notice that the example build script in the template has instructions to build a container for datalad, which may or may not suit your exact needs

    cd NEWAPP
    (edit build.sh as required)
    (edit README.md as required)

    Upload your application to object storage first if needed, so you can then download it in build.sh (ask for instructions about this if you don’t know the key, and never share it anywhere public!)

  3. Run update-builders.sh: This will auto-create the CI workflow for the application (or manually duplicate the template file and rename all occurrences of template to NEWAPP)

    cd ../..
    sh update-builders.sh
  4. Build and test the container locally

    1. run the build script with the debug flag:

      cd recipes/NEWAPP
      chmod +x build.sh
      ./build.sh -ds

      NOTICE: if the README.md file does not contain the same tool-version string as build.sh, the build will not start; this prevents an incorrect README.md description.

    2. test the container, which should now be in your local docker image repository, by running some commands within it.

      For example, to open an interactive shell in a container (with the home folder /root bound to /root on the host), you may run:

      sudo docker run -it -v /root:/root --entrypoint /bin/bash NEWAPP_VERSION:TAG
      

      with VERSION being the version of the app, and TAG the version tag of the container (run ‘sudo docker image list’ to find the tag)

    3. if your application requires a Matlab Runtime and you get an error about shared library “libmwlaunchermain.so” not found, check which version of the runtime was installed by the build script

  5. Update changes in local git repository

    git add .github/workflows/NEWAPP.yml recipes/NEWAPP/test.sh recipes/NEWAPP/build.sh recipes/NEWAPP/README.md
    git config user.email "the email that you use for github"
    git config user.name "your name"
    git commit

Push the new app to Neurocontainers

Prerequisite

Generate git personal access token (if you don’t have one already)

  1. Browse to https://github.com/
  2. Log into your account
  3. Press on your picture in the upper right corner -> Settings -> Developer settings -> Personal access tokens
  4. Press on “generate personal access token”
  5. Write something in “Notes” (doesn’t matter what, it’s for your own use)
  6. Check “repo”
  7. Check “Workflow”
  8. Press “Generate Token” at the bottom
  9. Copy the token displayed to somewhere safe, as you will have to use it later

Step by step guide

  1. Test the container locally, and if successful, push the repo to trigger the automatic build on GitHub. When asked for your Github password, please provide the personal access token obtained in the previous stage.

    git pull
    git push
  2. Go to https://github.com/neurodesk/neurocontainers/actions. Check that the most recent workflow run in the list terminated successfully (green). Otherwise, click on it, click on “build docker”, and the line that caused the error will be highlighted

  3. Find your new package under https://github.com/orgs/NeuroDesk/packages?repo_name=neurocontainers
    Enter the name of the package in the search box, and verify that the full package name shows up in the format toolName_toolVersion

  4. Obtain buildDate by clicking on the full package name that came up in the search. The build date will be the newest date shown under Recent tagged image versions

  5. Use toolName, toolVersion and buildDate from the previous two steps to manually download the package by typing the following in a terminal open in Neurodesktop

    bash /neurocommand/local/fetch_and_run.sh toolName toolVersion buildDate
    (when you see the "Singularity>" prompt, type exit and ENTER)
    ml toolName/toolVersion

    For example: If the full package name that comes up in step 3 is itksnap_3.8.0, and the newest date under Recent tagged image versions is 20210322

    The command to use in a terminal open in Neurodesktop is:

    bash /neurocommand/local/fetch_and_run.sh itksnap 3.8.0 20210322
    (when you see the "Singularity>" prompt, type exit and ENTER)
    ml itksnap/3.8.0
  6. Test the new container. Run some commands to check that everything works.
    If the container doesn't work yet, it's sometimes useful to try and troubleshoot it and install missing libraries. This can be achieved by running it in a writable mode with fakeroot enabled:

    SINGULARITY_BINDPATH=''; singularity shell --writable --fakeroot /neurodesktop-storage/containers/toolName_toolVersion_buildDate/toolName_toolVersion_buildDate.simg
  7. Fork https://github.com/NeuroDesk/neurocommand/ to your GitHub account

  8. Edit an entry for your package in neurodesk/apps.json in your fork of neurocommand, based on one of the other entries (generating one menu item for opening a terminal inside the containers, and one menu item for the GUI, if relevant). Notice that in the json file, the version field should contain the buildDate

  9. Include an icon file in neurodesk/icons in your fork of neurocommand

  10. Send a pull request from your fork of neurocommand to https://github.com/NeuroDesk/neurocommand/

  11. When the pull request is merged by Neurodesk admins, it will trigger an action to build the singularity container, distribute it to all object storage locations and to CVMFS, and update the menus in the desktop image on the next daily build.

  12. Wait at least 24 hours

  13. Download and run the daily build of neurodesktop to check that your app can be launched from the start menu and works properly:

    sudo docker pull vnmd/neurodesktop:latest && sudo docker run \
      --shm-size=1gb -it --privileged --name neurodesktop \
      -v ~/neurodesktop-storage:/neurodesktop-storage \
      -e HOST_UID="$(id -u)" -e HOST_GID="$(id -g)" \
      -p 8080:8080 -h neurodesktop-latest \
      vnmd/neurodesktop:latest
  14. Open an issue in https://github.com/NeuroDesk/neurocontainers/issues notifying that your app appears in the start menu and has been tested. The app will be included in the next release of Neurodesktop, and will be mentioned in the public announcement that accompanies the release. If the app is not in the start menu or not working as expected based on your earlier testing, open an issue as well, and report it.

  15. If somebody wants to use the application before the next release of Neurodesktop is out, you can instruct them to use the daily-build command in step 13 above instead of the default commands given in the user install instructions.

  16. Consider contributing a tutorial about the new tool: https://github.com/NeuroDesk/neurodesk.github.io/tree/hugo-docsy/content/en/tutorials

4.4 - Fix commit

Fix commit

Fixing the last commit

Changing the last commit message

  1. git commit --amend -m "New message"

Changing the last commit

  1. Make your changes to the files
  2. Run git add <filename> to add one file or git add <filename1> <filename2> ... to add multiple files
  3. git commit --amend

Fixing older commits

Changing commit messages

  1. git rebase -i HEAD~5 (if, for example, you are editing some of the last five commits)
  2. For each commit that you want to change the message, change pick to reword, and save
  3. Change the commit messages

Deleting old commits

  1. git rebase -i HEAD~n where n is the number of commits you are looking at
  2. For each commit that you want to delete, change pick to drop, and save

Squashing commits

Sometimes, you want to make one commit out of a bunch of commits. To do this,

  1. git rebase -i HEAD~n where n is the number of commits you are interested in
  2. Change pick to squash on the lines containing the commits you want to squash and save
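
For example, the todo list that git rebase -i HEAD~3 opens might look like this after marking two commits for squashing (the hashes and messages are illustrative):

pick   1a2b3c4 Add NEWAPP recipe
squash 5d6e7f8 Fix typo in build.sh
squash 9a0b1c2 Update README version
# after saving and closing the editor, Git prompts you for the combined commit message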

Reordering commits

  1. git rebase -i HEAD~n where n is the number of commits you are interested in
  2. Reorder the lines containing the commits and save

Pushing commits after tidying them

  1. git push origin +my-feature-branch (Note the + there and substitute your actual branch name.)

4.5 - Create a pull request

Pull request and make contribution

Create a pull request

When you’re ready for feedback, submit a pull request. Pull requests are a feature specific to GitHub. They provide a simple, web-based way to submit your work (often called “patches”) to a project. It’s called a pull request because you’re asking the project to pull changes from your fork.

If you’re unfamiliar with how to create a pull request, you can check out GitHub’s documentation on creating a pull request from a fork. You might also find GitHub’s article about pull requests helpful. That all said, the tutorial below will walk you through the process.

Create a pull request

Step 0: Make sure you’re on a feature branch (not master)

It is important to work on a feature branch when creating a pull request. Your new pull request will be inextricably linked with your branch while it is open, so you will need to reserve your branch only for changes related to your issue, and avoid introducing extraneous changes for other issues or from upstream.

If you are working on a branch named master, you need to create and switch to a feature branch before proceeding.

Step 1: Update your branch with git rebase

The best way to update your branch is with git fetch and git rebase. Do not use git pull or git merge as this will create merge commits. See keep your fork up to date for details.

Here’s an example (you would replace issue-123 with the name of your feature branch):

$ git checkout issue-123
Switched to branch 'issue-123'

$ git fetch upstream
remote: Counting objects: 69, done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 69 (delta 49), reused 39 (delta 39), pack-reused 7
Unpacking objects: 100% (69/69), done.
From https://github.com/NeuroDesk/neurocontainers/
   69fa600..43e21f6  master     -> upstream/master

$ git rebase upstream/master

First, rewinding head to replay your work on top of it...
Applying: troubleshooting tip about provisioning

Step 2: Push your updated branch to your remote fork

Once you’ve updated your local feature branch, push the changes to GitHub:

$ git push origin issue-123
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 658 bytes | 0 bytes/s, done.
Total 6 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3), completed with 1 local objects.
To git@github.com:christi3k/neurocontainers.git
 + 2d49e2d...bfb2433 issue-123 -> issue-123

If your push is rejected with the error "failed to push some refs" then you need to prefix the name of your branch with a +:

$ git push origin +issue-123
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 658 bytes | 0 bytes/s, done.
Total 6 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3), completed with 1 local objects.
To git@github.com:christi3k/neurocontainers.git
 + 2d49e2d...bfb2433 issue-123 -> issue-123 (forced update)

This is perfectly okay to do on your own feature branches, especially if you’re the only one making changes to the branch. If others are working along with you, they might run into complications when they retrieve your changes because anyone who has based their changes off a branch you rebase will have to do a complicated rebase.

Step 3: Open the pull request

If you’ve never created a pull request or need a refresher, take a look at GitHub’s article creating a pull request from a fork. Note: Pull request titles are different from commit messages. Commit messages can be edited with git commit --amend, git rebase -i, etc., while the title of a pull request can only be edited via GitHub.

Update a pull request

As you make progress on your feature or bugfix, your pull request, once submitted, will be updated each time you push commits to your remote branch. This means you can keep your pull request open as long as you need, rather than closing and opening new ones for the same feature or bugfix.

It’s a good idea to keep your pull request mergeable with neurocontainer upstream by frequently fetching, rebasing, and pushing changes. See keep your fork up to date for details. You might also find this excellent article How to Rebase a Pull Request helpful.

And, as you address review comments others have made, we recommend posting a follow-up comment in which you: a) ask for any clarifications you need, b) explain to the reviewer how you solved any problems they mentioned, and c) ask for another review.

4.6 - Troubleshooting

Troubleshoot commit issue with Git

Undo a merge commit

A merge commit is a special type of commit that has two parent commits. It's created by Git when you merge one branch into another and the last commit on your current branch is not a direct ancestor of the branch you are trying to merge in. This happens quite often in a busy project like NeuroDesk, where there are many contributors, because the upstream neurocontainers repository will have new commits while you're working on a feature or bugfix. In order for Git to merge your changes and the changes that have occurred upstream since you first started your work, it must perform a three-way merge and create a merge commit.

Neurocontainers uses a forked-repo, rebase-oriented workflow.

A merge commit is usually created when you’ve run git pull or git merge. You’ll know you’re creating a merge commit if you’re prompted for a commit message and the default is something like this:

Merge branch 'master' of https://github.com/NeuroDesk/neurocontainer

# Please enter a commit message to explain why this merge is necessary,
# especially if it merges an updated upstream into a topic branch.
#
# Lines starting with '#' will be ignored, and an empty message aborts
# the commit.

And the first entry for git log will show something like:

commit e5f8211a565a5a5448b93e98ed56415255546f94
Merge: 13bea0e e0c10ed
Author: Christie Koehler <ck@christi3k.net>
Date:   Mon Oct 10 13:25:51 2016 -0700

    Merge branch 'master' of https://github.com/NeuroDesk/neurocontainer

Some graphical Git clients may also create merge commits.

To undo a merge commit, first run git reflog to identify the commit you want to roll back to:

$ git reflog

e5f8211 HEAD@{0}: pull upstream master: Merge made by the 'recursive' strategy.
13bea0e HEAD@{1}: commit: test commit for docs.

Reflog output will be long. The most recent Git refs will be listed at the top. In the example above, e5f8211 HEAD@{0}: is the merge commit made automatically by git pull and 13bea0e HEAD@{1}: is the last commit I made before running git pull, the commit that I want to roll back to.

Once you've identified the ref you want to revert to, you can do so with git reset:

$ git reset --hard 13bea0e
HEAD is now at 13bea0e test commit for docs.

Important: git reset --hard <commit> will discard all changes in your working directory and index since the commit you're resetting to with <commit>. This is the main way you can lose work in Git. If you need to keep any changes that are in your working directory or that you have committed, use git reset --merge <commit> instead.

You can also use the relative reflog HEAD@{1} instead of the commit hash, just keep in mind that this changes as you run Git commands.
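
For example, using the reflog entry from above instead of the commit hash:

$ git reset --hard HEAD@{1}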

Now when you look at the output of git reflog, you should see that the tip of your branch points to your last commit 13bea0e before the merge:

$ git reflog

13bea0e HEAD@{2}: reset: moving to HEAD@{1}
e5f8211 HEAD@{3}: pull upstream master: Merge made by the 'recursive' strategy.
13bea0e HEAD@{4}: commit: test commit for docs.

And the first entry git log shows is this:

commit 13bea0e40197b1670e927a9eb05aaf50df9e8277
Author: Christie Koehler <ck@christi3k.net>
Date:   Mon Oct 10 13:25:38 2016 -0700

    test commit for docs.

Restore a lost commit

We’ve mentioned you can use git reset --hard to rollback to a previous commit. What if you run git reset --hard and then realize you actually need one or more of the commits you just discarded? No problem, you can restore them with git cherry-pick (docs).

For example, let’s say you just committed “some work” and your git log looks like this:

* 67aea58 (HEAD -> master) some work
* 13bea0e test commit for docs.

You then mistakenly run git reset --hard 13bea0e:

$ git reset --hard 13bea0e
HEAD is now at 13bea0e test commit for docs.

$ git log
* 13bea0e (HEAD -> master) test commit for docs.

And then realize you actually needed to keep commit 67aea58. First, use git reflog to confirm the commit you want to restore and then run git cherry-pick <commit>:

$ git reflog
13bea0e HEAD@{0}: reset: moving to 13bea0e
67aea58 HEAD@{1}: commit: some work

$ git cherry-pick 67aea58
 [master 67aea58] some work
 Date: Thu Oct 13 11:51:19 2016 -0700
 1 file changed, 1 insertion(+)
 create mode 100644 test4.txt

Recover from a git rebase failure

One situation in which git rebase will fail and require you to intervene is when your change touches code that has also been changed by the new commits from whichever branch you are rebasing on top of.

For example, while I’m working on a file, another contributor makes a change to that file, submits a pull request and has their code merged into master. Usually this is not a problem, but in this case the other contributor made a change to a part of the file I also want to change. When I try to bring my branch up to date with git fetch and then git rebase upstream/master, I see the following:

First, rewinding head to replay your work on top of it...
Applying: test change for docs
Using index info to reconstruct a base tree...
M    README.md
Falling back to patching base and 3-way merge...
Auto-merging README.md
CONFLICT (content): Merge conflict in README.md
error: Failed to merge in the changes.
Patch failed at 0001 test change for docs
The copy of the patch that failed is found in: .git/rebase-apply/patch

When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase --abort".

This message tells me that Git was not able to apply my changes to README.md after bringing in the new commits from upstream/master.

Running git status also gives me some information:

rebase in progress; onto 5ae56e6
You are currently rebasing branch 'docs-test' on '5ae56e6'.
  (fix conflicts and then run "git rebase --continue")
  (use "git rebase --skip" to skip this patch)
  (use "git rebase --abort" to check out the original branch)

Unmerged paths:
  (use "git reset HEAD <file>..." to unstage)
  (use "git add <file>..." to mark resolution)

  both modified:   README.md

no changes added to commit (use "git add" and/or "git commit -a")

To fix, open all the files with conflicts in your editor and decide which edits should be applied. Git uses standard conflict-resolution (<<<<<<<, =======, and >>>>>>>) markers to indicate where in files there are conflicts.

Tip: You can see recent changes made to a file by running the following commands:

git fetch upstream
git log -p upstream/master -- /path/to/file

You can use this to compare the changes that you have made to a file with the ones in upstream, helping you avoid undoing changes from a previous commit when you are rebasing.

Once you’ve done that, save the file(s), stage them with git add and then continue the rebase with git rebase --continue:

$ git add README.md

$ git rebase --continue
Applying: test change for docs

For help resolving merge conflicts, see basic merge conflicts, advanced merging, and/or GitHub’s help on how to resolve a merge conflict.

Working from multiple computers

Working from multiple computers with Neurocontainers and Git is fine, but you'll need to pay attention and do a bit of work to ensure all of your work is readily available.

Recall that most Git operations are local. When you commit your changes with git commit they are safely stored in your local Git database only. That is, until you push the commits to GitHub, they are only available on the computer where you committed them.

So, before you stop working for the day, or before you switch computers, push all of your commits to GitHub with git push:

$ git push origin <branchname>

When you first start working on a new computer, you'll clone the neurocontainers repository and connect it to the upstream remote. A clone retrieves all current commits, including the ones you pushed to GitHub from your other computer.

But if you're switching to another computer on which you have already cloned neurocontainers, you need to update your local Git database with new refs from your GitHub fork. You do this with git fetch:

$ git fetch origin

Ideally you should do this before you have made any commits on the same branch on the second computer. Then you can git merge on whichever branch you need to update:

$ git checkout <my-branch>
Switched to branch '<my-branch>'

$ git merge origin/<my-branch>

If you have already made commits on the second computer that you need to keep, you'll need to use git log FETCH_HEAD to identify the hashes of the commits you want to keep and then git cherry-pick <commit> those commits into whichever branch you need to update.
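
A minimal sketch of that workflow (the branch name and commit hash are illustrative):

$ git fetch origin
$ git log --oneline FETCH_HEAD     # identify the commits you pushed from the other computer
$ git checkout my-branch
$ git cherry-pick 1a2b3c4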

4.7 - Menu entries

Menu entries in neurodesktop

Menu entry

As we want to propose several versions of the tools, each piece of software should have its own submenu under VNM Neuroimaging. To do so, you first have to add a submenu to menus/vnm-applications.menu by adding:

<!-- [[Tool Name]] submenu -->
<Menu>
    <Name>[[Tool Name]]</Name>
    <Directory>vnm-[[tool-name]].directory</Directory>
    <Include>
        <And>
            <Category>[[Tool-Name]]</Category>
        </And>
    </Include>
</Menu> <!-- End [[Tool Name]] -->

The following table shows the formatting rules to follow:

Placeholder      Rule                                      Example
[[Tool name]]    Capitalized, spaces                       ITK snap
[[tool-name]]    Lower case, no spaces (use - instead)     itk-snap or itksnap
[[Tool-name]]    Capitalized, no spaces (use - instead)    ITK-snap

Next, we have to create the submenu itself as we referenced it by vnm-[[tool-name]].directory. To do so, create the file menus/submenus/vnm-[[tool-name]].directory and add the following information inside:

[Desktop Entry]
Name=[[Tool Name]]
Comment=[[Tool Name]]
Icon=/home/neuro/.config/lxpanel/LXDE/icons/[[icon-name]].png
Type=Directory

If a specific icon is available in the menus/icons directory, replace [[icon-name]] by its name. Otherwise, use vnm.

Create the application

Finally, we have to create the actual application by creating the file menus/applications/vnm-[[tool-name]]-[[0.0.0]].desktop. The name of this file must contain the version of the tool (once again to allow multiple versions to live inside the same directory). Add the following description to this file:

[Desktop Entry]
Name=[[Tool Name]] [[0.0.0]] [[(Install only)]]
GenericName=[[Tool Name]] [[0.0.0]]
Comment=The description of what clicking on this application does. # This will be the tooltip of the application.
Exec=The command used to run the application.
Icon=/home/neuro/.config/lxpanel/LXDE/icons/[[icon-name]].png
Type=Application
Categories=[[Tool-name]]
Terminal=true # or false

The important part here is the value of Exec. If the tool is provided as a Singularity image, Exec should run the following command:

bash /usr/share/fetch_and_run.sh [[tool-name]] [[0.0.0]] [[YYYYMMDD]] [[cmd]] [[args]]

fetch_and_run.sh first checks whether the image is already installed as a module. If it is not, it checks whether it can be installed (and returns 1 if it cannot) and then installs the image as a module. If [[cmd]] is specified, the command is run with the arguments from [[args]] once the image is installed. Here are two examples for FreeSurfer and FreeView. The first one only installs the image as a module:

bash /usr/share/fetch_and_run.sh freesurfer 6.0.1 20200506

And this does the same but runs FreeView afterward:

bash /usr/share/fetch_and_run.sh freesurfer 6.0.1 20200506 freeview

The resulting .desktop file corresponding to FreeView contains:

[Desktop Entry]
Name=FreeView 6.0.1
GenericName=FreeView 6.0.1
Comment=Start FreeView 6.0.1
Exec=bash /usr/share/fetch_and_run.sh freesurfer 6.0.1 20200506 freeview
Icon=/home/neuro/.config/lxpanel/LXDE/icons/run.png
Type=Application
Categories=FreeSurfer
Terminal=true

5 - Transparent Singularity

For more advanced users who wish to use Transparent Singularity directly

Transparent Singularity lives at https://github.com/NeuroDesk/transparent-singularity/

It allows Singularity containers to be used transparently on HPCs, so that applications inside a container can be called without adjusting any scripts or pipelines (e.g. nipype).

Important: add bind points to .bashrc before executing this script

The script expects that you have set the Singularity bind points in your .bashrc, e.g.:

export SINGULARITY_BINDPATH="/gpfs1/,/QRISdata,/data"

The following file lists all tested images available in Neurodesk:

https://github.com/NeuroDesk/neurodesk/blob/master/cvmfs/log.txt

curl -s https://raw.githubusercontent.com/NeuroDesk/neurodesk/master/cvmfs/log.txt
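
To check whether a particular image is available, you can filter that list, e.g.:

curl -s https://raw.githubusercontent.com/NeuroDesk/neurodesk/master/cvmfs/log.txt | grep convert3d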

Clone the repo into a folder with the intended image name:

git clone https://github.com/NeuroDesk/transparent-singularity convert3d_1.0.0_20210104

Install

This creates a wrapper script for every binary located in $DEPLOY_PATH inside the container. It also creates activate and deactivate scripts as well as module files for lmod (https://lmod.readthedocs.io/en/latest/):

cd convert3d_1.0.0_20210104
./run_transparent_singularity.sh convert3d_1.0.0_20210104

Options for Transparent Singularity:

  • --storage - this option can be used to force a download from Docker, e.g.: --storage docker
  • --container - this option can be used to explicitly define the container name to be downloaded
  • --unpack - this unpacks the Singularity container so it can be used on systems that do not allow opening simg/sif files for security reasons, e.g.: --unpack true
  • --singularity-opts - these options are passed on to the singularity call, e.g.: --singularity-opts '--bind /cvmfs'
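
For example, a call combining several of these options could look like this (illustrative; the exact argument order and container name depend on your setup):

./run_transparent_singularity.sh --storage docker --unpack true --singularity-opts '--bind /cvmfs' convert3d_1.0.0_20210104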

Use in module system LMOD

Add the module folder path to $MODULEPATH
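
A minimal sketch (the module path below is an example; point it at wherever Transparent Singularity generated the module files):

export MODULEPATH=$MODULEPATH:/path/to/generated/module/files   # example path
module avail                                                    # the new module should now be listed
module load convert3d                                           # exact name depends on the generated module file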

Manual activation and deactivation (in case a module system is not available). This adds the paths to your .bashrc:

Activate

source activate_convert3d_1.0.0_20210104.sh

Deactivate

source deactivate_convert3d_1.0.0_20210104.sif.sh

Uninstall container and cleanup

./ts_uninstall.sh

6 - Neurodesk CVMFS

How to interact with our CVMFS service.

6.1 - Setup CVMFS Proxy

Setup CVMFS Proxy server

If you need more speed in a particular region, one option is to set up another Stratum 1 server or a proxy. We currently don't run any proxy servers, but a proxy would be important when using Neurodesk on a cluster.

docker run --shm-size=1gb -it --privileged --name neurodesktop `
-v C:/neurodesktop-storage:/neurodesktop-storage -p 8080:8080 `
-h neurodesktop-20220813 `
vnmd/neurodesktop:20220813

Setup a CVMFS proxy server

sudo yum install -y squid

Open squid.conf and use the following configuration:

sudo vi /etc/squid/squid.conf
# List of local IP addresses (separate IPs and/or CIDR notation) allowed to access your local proxy
#acl local_nodes src YOUR_CLIENT_IPS

# Destination domains that are allowed
#acl stratum_ones dstdomain .YOURDOMAIN.ORG
#acl stratum_ones dstdom_regex YOUR_REGEX
acl stratum_ones dst 140.238.211.92

# Squid port
http_port 3128

# Deny access to anything which is not part of our stratum_ones ACL.
http_access deny !stratum_ones

# Only allow access from our local machines
#http_access allow local_nodes
http_access allow localhost

# Finally, deny all other access to this proxy
http_access deny all

minimum_expiry_time 0
maximum_object_size 1024 MB

cache_mem 128 MB
maximum_object_size_in_memory 128 KB
# 5 GB disk cache
cache_dir ufs /var/spool/squid 5000 16 256

sudo squid -k parse
sudo systemctl start squid
sudo systemctl enable squid
sudo systemctl status squid
sudo systemctl restart squid
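
Clients on the cluster then need to be pointed at this proxy. A minimal sketch of the client-side setting (proxy.example.org is a placeholder for the machine running squid; 3128 is the port configured above):

echo 'CVMFS_HTTP_PROXY="http://proxy.example.org:3128"' | sudo tee -a /etc/cvmfs/default.local
sudo cvmfs_config reload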

6.2 - CVMFS architecture

CVMFS architecture

We store our Singularity containers unpacked on CVMFS. We tried the DUCC tool in the beginning, but it caused too many issues with Docker Hub and we were rate limited. The script that unpacks our Singularity containers is here: https://github.com/NeuroDesk/neurocommand/blob/main/cvmfs/sync_containers_to_cvmfs.sh

It is called by a cron job on the CVMFS Stratum 0 server and relies on the log.txt file being updated via a GitHub action in the neurocommand repository (https://github.com/NeuroDesk/neurocommand/blob/main/.github/workflows/upload_containers_simg.sh)

The Stratum 1 servers then pull this repo from Stratum 0 and our desktops mount these repos (configured here: https://github.com/NeuroDesk/neurodesktop/blob/main/Dockerfile)

The startup script (https://github.com/NeuroDesk/neurodesktop/blob/main/config/startup.sh) sets up CVMFS and tests which server is fastest during the container startup.

This can also be done manually:

sudo cvmfs_talk -i neurodesk.ardc.edu.au host info
sudo cvmfs_talk -i neurodesk.ardc.edu.au host probe
cvmfs_config stat -v neurodesk.ardc.edu.au

6.3 - Setup Stratum 0 server

Host a Stratum 0 server

Setup a Stratum 0 server:

Setup Storage

(would object storage be better? -> see comment below under next iteration ideas)

lsblk -l
sudo mkfs.ext4 /dev/vdb
sudo mkdir /storage
sudo mount /dev/vdb /storage/ -t auto
sudo chown ec2-user /storage/
sudo chmod a+rwx /storage/
sudo vi /etc/fstab
/dev/vdb  /storage    auto    defaults,nofail   0  2

Setup server

sudo yum install vim htop gcc git screen
sudo timedatectl set-timezone Australia/Brisbane

sudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
sudo yum install -y cvmfs cvmfs-server

sudo systemctl enable httpd
sudo systemctl restart httpd

# sudo systemctl stop firewalld 

# restore keys:
sudo mkdir /etc/cvmfs/keys/incoming
sudo chmod a+rwx /etc/cvmfs/keys/incoming
cd connections/cvmfs_keys/
scp neuro* ec2-user@203.101.226.164:/etc/cvmfs/keys/incoming
sudo mv /etc/cvmfs/keys/incoming/* /etc/cvmfs/keys/

#backup keys: 
#mkdir cvmfs_keys
#scp opc@158.101.127.61:/etc/cvmfs/keys/neuro* .

sudo cvmfs_server mkfs -o $USER neurodesk.ardc.edu.au

cd /storage
sudo mkdir -p cvmfs-storage/srv/
cd /srv/
sudo mv cvmfs/ /storage/cvmfs-storage/srv/
sudo ln -s /storage/cvmfs-storage/srv/cvmfs/

cd /var/spool
sudo mkdir /storage/spool
sudo mv cvmfs/ /storage/spool/
sudo ln -s  /storage/spool/cvmfs .

cvmfs_server transaction neurodesk.ardc.edu.au

cvmfs_server publish neurodesk.ardc.edu.au
sudo vi /etc/cron.d/cvmfs_resign
0 11 * * 1 root /usr/bin/cvmfs_server resign neurodesk.ardc.edu.au
cat /etc/cvmfs/keys/neurodesk.ardc.edu.au.pub
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuV9JBs9uXBR83qUs7AiE
nSQfvh6VCdNigVzOfRMol5cXsYq3cFy/Vn1Nt+7SGpDTQArQieZo4eWC9ww2oLq0
vY1pWyAms3Y4i+IUmMbwNifDU4GQ1KN9u4zl9Peun2YQCLE7mjC0ZLQtLM7Q0Z8h
NwP8jRJTN+u8mRKzkyxfSMLscVMKhm2pAwnT1zB9i3bzVV+FSnidXq8rnnzNHMgv
tfqx1h0gVyTeodToeFeGG5vq69wGZlwEwBJWVRGzzr+a8dWNBFMJ1HxamrBEBW4P
AxOKGHmQHTGbo+tdV/K6ZxZ2Ry+PVedNmbON/EPaGlI8Vd0fascACfByqqeUEhAB
dQIDAQAB
-----END PUBLIC KEY-----

Ideas for the next iteration:

Use object storage?

  • current implementation uses block storage, but this makes increasing the volume size a bit more work
  • we couldn't get object storage to work on Oracle as it assumes AWS S3

Optimize settings for repositories for Container Images

From the CVMFS documentation: repositories containing Linux container image contents (that is: container root file systems) should use overlayfs as a union file system and have the following configuration:

CVMFS_INCLUDE_XATTRS=true
CVMFS_VIRTUAL_DIR=true

Extended attributes of files, such as file capabilities and SELinux attributes, are then recorded, and previous file system revisions can be accessed from the clients.
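
On the Stratum 0, these settings would typically be added to the repository's server configuration before the next publish (a sketch, assuming the repository name used above):

sudo vi /etc/cvmfs/repositories.d/neurodesk.ardc.edu.au/server.conf
CVMFS_INCLUDE_XATTRS=true
CVMFS_VIRTUAL_DIR=true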

Currently not used

We tested the DUCC tool in the beginning, but it led to too many Docker pulls, so we replaced it with our own script: https://github.com/NeuroDesk/neurocommand/blob/main/cvmfs/sync_containers_to_cvmfs.sh

This is the old DUCC setup

sudo yum install cvmfs-ducc.x86_64
sudo -i
dnf install -y yum-utils 
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install docker-ce docker-ce-cli containerd.io
systemctl enable docker
systemctl start docker
docker version
docker info

# leave root mode

sudo groupadd docker
sudo usermod -aG docker $USER
sudo chown root:docker /var/run/docker.sock
newgrp docker


vi convert_appsjson_to_wishlist.sh
export DUCC_DOCKER_REGISTRY_PASS=configure_secret_password_here_and_dont_push_to_github
cd neurodesk
git pull
./gen_cvmfs_wishlist.sh
cvmfs_ducc convert recipe_neurodesk_auto.yaml
cd ..


chmod +x convert_appsjson_to_wishlist.sh

git clone https://github.com/NeuroDesk/neurodesk/

# setup cron job
sudo vi /etc/cron.d/cvmfs_dockerpull
*/5 * * * * opc cd ~ && bash /home/opc/convert_appsjson_to_wishlist.sh



#vi recipe.yaml

##version: 1
#user: vnmd
#cvmfs_repo: neurodesk.ardc.edu.au
#output_format: '$(scheme)://$(registry)/vnmd/thin_$(image)'
#input:
#- 'https://registry.hub.docker.com/vnmd/tgvqsm_1.0.0:20210119'
#- 'https://registry.hub.docker.com/vnmd/itksnap_3.8.0:20201208'


#cvmfs_ducc convert recipe_neurodesk.yaml
#cvmfs_ducc convert recipe_unpacked.yaml

6.4 - Setup Stratum 1 server

Host a Stratum 1 server

The Stratum 1 servers used by the desktop are configured here: https://github.com/NeuroDesk/neurodesktop/blob/main/Dockerfile

If you need more speed in a particular region, one option is to set up another Stratum 1 server or a proxy.

Setup a Stratum 1 server:

sudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
sudo yum install -y cvmfs-server squid
sudo yum install -y python3-mod_wsgi 

sudo sed -i 's/Listen 80/Listen 127.0.0.1:8080/' /etc/httpd/conf/httpd.conf

set +H
echo "http_port 80 accel" | sudo tee /etc/squid/squid.conf
echo "http_port 8000 accel" | sudo tee -a /etc/squid/squid.conf
echo "http_access allow all" | sudo tee -a /etc/squid/squid.conf
echo "cache_peer 127.0.0.1 parent 8080 0 no-query originserver" | sudo tee -a /etc/squid/squid.conf
echo "acl CVMFSAPI urlpath_regex ^/cvmfs/[^/]*/api/" | sudo tee -a /etc/squid/squid.conf
echo "cache deny !CVMFSAPI" | sudo tee -a /etc/squid/squid.conf
echo "cache_mem 128 MB" | sudo tee -a /etc/squid/squid.conf

sudo systemctl start httpd
sudo systemctl start squid
sudo systemctl enable httpd
sudo systemctl enable squid

echo 'CVMFS_GEO_LICENSE_KEY=kGepdzqbAP4fjf5X' | sudo tee -a /etc/cvmfs/server.local
sudo chmod 600 /etc/cvmfs/server.local

sudo mkdir -p /etc/cvmfs/keys/ardc.edu.au/

echo "-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwUPEmxDp217SAtZxaBep
Bi2TQcLoh5AJ//HSIz68ypjOGFjwExGlHb95Frhu1SpcH5OASbV+jJ60oEBLi3sD
qA6rGYt9kVi90lWvEjQnhBkPb0uWcp1gNqQAUocybCzHvoiG3fUzAe259CrK09qR
pX8sZhgK3eHlfx4ycyMiIQeg66AHlgVCJ2fKa6fl1vnh6adJEPULmn6vZnevvUke
I6U1VcYTKm5dPMrOlY/fGimKlyWvivzVv1laa5TAR2Dt4CfdQncOz+rkXmWjLjkD
87WMiTgtKybsmMLb2yCGSgLSArlSWhbMA0MaZSzAwE9PJKCCMvTANo5644zc8jBe
NQIDAQAB
-----END PUBLIC KEY-----" | sudo tee /etc/cvmfs/keys/ardc.edu.au/neurodesk.ardc.edu.au.pub


sudo cvmfs_server add-replica -o $USER http://203.101.226.164/cvmfs/neurodesk.ardc.edu.au /etc/cvmfs/keys/ardc.edu.au

# CVMFS will store everything in /srv/cvmfs so make sure there is enough space or create a symlink to a bigger storage volume
# e.g.:



sudo cvmfs_server snapshot neurodesk.ardc.edu.au


echo "/var/log/cvmfs/*.log {
    weekly
    missingok
    notifempty
}" | sudo tee /etc/logrotate.d/cvmfs


echo '*/5 * * * * root output=$(/usr/bin/cvmfs_server snapshot -a -i 2>&1) || echo "$output" ' | sudo tee /etc/cron.d/cvmfs_stratum1_snapshot

sudo yum install iptables
sudo iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8000

sudo systemctl disable firewalld 
sudo systemctl stop firewalld 
# make sure that port 80 is open in the real firewall

sudo cvmfs_server update-geodb
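
To verify that the replica is served correctly, you can fetch the repository manifest through squid (a quick sanity check; adjust the hostname when checking from another machine):

curl -s http://localhost/cvmfs/neurodesk.ardc.edu.au/.cvmfspublished | head -n 5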