ENIGMA Ataxia project-pipeline
==============================

This is a neuroimaging project studying ataxia.

You should find a .def file and build_container.sh to create a Singularity container:

1. Start a Linux computer/VM
2. Ensure Singularity (http://singularity.lbl.gov/) is installed
3. Install git (`apt-get install git` or `yum install git`, depending on your Linux flavour)

Then use

```
git clone <repository URL>
cd <repository directory>
sudo ./build_container.sh
```

This should generate an enigma-neuro.img file. You can copy this file to any computer with Singularity installed (e.g. HPC systems).

You can execute the container with a command line like:

```
singularity exec -B <datapath>:/mnt -B <licensepath>:/license <path to>/enigma-neuro.img run_pipeline ...
```

You might like to alias this

```
alias ataxia='singularity exec -B /mnt/enigma:/mnt /mnt/ubuntu/ENIGMA-subcortical-volumes-ataxia/build/enigma-neuro.img run_pipeline'
```

to save on typing; then you can just do

```
ataxia recon
```

etc. Options to `run_pipeline` are explained below.

The `datapath` should contain `input` and `output` directories; a `figures` directory will also be created. The `licensepath` should contain a license.txt file with a FreeSurfer license (we are investigating including a license with the container). On M3 you could use the licensepath `/usr/local/freesurfer/20160922/`.

Using run_pipeline
------------------

`run_pipeline` is a simple Python script to handle the various steps of the pipeline. You can use `run_pipeline --help` or `run_pipeline recon --help` for more detail.

There are three steps to the pipeline, currently executed individually.

`run_pipeline recon` will look for `*.nii.gz` files in your input directory and process them with recon-all. It is smart enough that you can rerun it multiple times on different computers without overwriting things (suitable for processing on an HPC cluster).
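Because recon can be rerun safely, one possible pattern on a cluster is to submit one scheduler job per input scan. The sketch below only *prints* the submission commands so you can inspect them first; the `sbatch` options, the `ataxia` alias, and the directory layout are assumptions to adapt to your site, not part of the pipeline itself.

```shell
# Sketch (assumptions noted above): emit one scheduler submission per scan
# found in <datapath>/input, each running `run_pipeline recon --oneonly`
# so that a single job processes one subject and exits.
submit_recon_jobs() {
    datapath=$1
    for scan in "$datapath"/input/*.nii.gz; do
        [ -e "$scan" ] || continue   # skip if the glob matched nothing
        echo "sbatch --time=24:00:00 --wrap=\"ataxia recon --oneonly\""
    done
}
```

Piping the output to `sh` (once you are happy with it) submits the jobs; each one claims a not-yet-processed subject, which keeps per-job walltime estimates accurate.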
You can also modify it with `--oneonly` if you want it to process only one subject and exit (suitable for HPC systems where you need to provide an accurate estimate of walltime). The other option of note is `--retry`, for when recon fails for some reason (note you should remove any output generated by recon-all from your output directory first, or recon-all will fail again).

`run_pipeline stats` will generate histograms and look for outliers based on standard deviation. Of course this will fail if you only have one subject (you can't define the stddev for one data point!). The initial pipeline called for you to run fslview at this point; if you want to do so, you will need to do so manually.

`run_pipeline qc` will generate HTML files for you to view. These will be placed in your output directory. It's up to each individual to figure out the best way to view these.
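As one way of finding the QC pages afterwards (a sketch only; it assumes the HTML files land somewhere under `<datapath>/output`, and the helper name is ours):

```shell
# Sketch: list the QC pages generated by `run_pipeline qc` under a given
# output directory. One simple way to then view them from a cluster is to
# serve that directory (e.g. `python3 -m http.server 8000` from within it)
# and browse via an SSH tunnel.
list_qc_pages() {
    find "$1" -type f -name '*.html'
}
```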