Commit e2cf30dd authored by Chris Hines

correct quoting in readme

parent 7e7b505d
@@ -7,50 +7,50 @@ You should find a .def file and build_container.sh to create a singularity container
1. Start a linux computer/VM
2. Ensure singularity (http://singularity.lbl.gov/) is installed
3. Install git (`apt-get install git`, or `yum install git` depending on your linux flavour)
Use
```
git clone <this repo> ; cd <repo>
sudo ./build_container.sh
```
This should generate an enigma-neuro.img file. You can copy this file to any computer with singularity installed (e.g. HPC systems).
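For example, assuming key-based SSH access to your cluster, copying the image across might look like this sketch (the hostname and destination path are placeholders):
```
# Copy the built image to an HPC login node (replace host and path with your own)
scp enigma-neuro.img username@hpc.example.org:/home/username/containers/
```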
You can execute the container with a command line like:
```
singularity exec -B <datapath>:/mnt -B <licensepath>:/licese <path-to-img>/enigma-neuro.img run_pipeline ...
```
You might like to alias this
```
alias ataxia='singularity exec -B /mnt/enigma:/mnt /mnt/ubuntu/ENIGMA-subcortical-volumes-ataxia/build/enigma-neuro.img run_pipeline'
```
to save on typing, then you can just do
```
ataxia recon
```
etc.
Options to `run_pipeline` are explained below.
The `datapath` should contain `input` and `output` directories. A `figures` directory will also be created.
The `licensepath` should contain a `license.txt` file with a Freesurfer license (we are investigating including a license with the container).
On M3 you could use the licensepath `/usr/local/freesurfer/20160922/`.
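Putting the pieces together on M3, a full invocation might look like the following sketch (the datapath `/projects/myproject` is a placeholder for your own data directory):
```
# Bind your data directory to /mnt and the M3 Freesurfer license directory to /licese
singularity exec -B /projects/myproject:/mnt -B /usr/local/freesurfer/20160922/:/licese enigma-neuro.img run_pipeline --help
```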
Using run_pipeline
------------------
`run_pipeline` is a simple Python script to handle the various steps of the pipeline.
You can use `run_pipeline --help` or `run_pipeline recon --help` for more detail.
There are three steps to the pipeline, currently executed individually:
`run_pipeline recon` will look for `*.nii.gz` files in your input directory and process them with recon-all. It's smart enough that you can rerun it multiple times on different computers without overwriting things (suitable for processing on an HPC cluster). You can also modify it with `--oneonly` if you want it to process only one subject and exit (suitable for HPC systems where you need to provide an accurate estimate of walltime). The other option of note is `--retry` if recon fails for some reason (note you should remove any output generated by recon-all from your directory first or recon-all will fail again).
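For example, using the `ataxia` alias defined above (these are the flags described in this paragraph):
```
# Process every unprocessed subject found in the input directory
ataxia recon

# Process a single subject and exit (useful in an HPC job with a fixed walltime)
ataxia recon --oneonly

# Retry a failed subject (first remove its partial recon-all output)
ataxia recon --retry
```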
`run_pipeline stats` will generate histograms and look for outliers based on standard deviation. Of course this will fail if you only have one subject (you can't define the stddev for one data point!). The initial pipeline called for you to run fslview at this point; if you want to do so, you will need to run it manually.
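Again via the alias, a minimal stats invocation (assuming recon has already populated your output directory):
```
# Generate histograms and flag outliers across subjects
ataxia stats
```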
......