Commit 360ca4dd authored by Chris Hines's avatar Chris Hines

documentation on how to add an application

Adding an application
First you need to define a program to run and put it in the `startscript` value. You can't use `#SBATCH` pragmas here though.
You also need to define a program that takes a jobid and returns the port the application is running on, along with anything else needed to connect (such as access tokens or passwords). This is the `paramscmd`. The `paramscmd` can also return a JSON error message. A paramscmd might look easy, and a proof of concept might take 5 minutes, but then you get into edge cases and it turns out to be the hardest bit, so try to copy an existing one.
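To make the shape of a paramscmd concrete, here is a minimal sketch. The function name, state-file location and token handling are all assumptions for illustration; a real paramscmd must deal with the edge cases mentioned above (job not started yet, several jobs on one node, stale state files), which is why copying an existing one is recommended.

```shell
#!/bin/bash
# Hypothetical paramscmd sketch (names and file locations are assumptions).
# Given a jobid it prints JSON with the port the application listens on, plus
# anything else the client needs (here a token).

params_for_job() {
    local jobid="$1"
    if [ -z "$jobid" ]; then
        # A paramscmd can return a JSON error message instead of params
        echo '{"error": "no jobid supplied"}'
        return
    fi
    # Assumption: the job's start script recorded its port/token in a state file
    local statefile="/tmp/app-${jobid}.json"
    if [ -r "$statefile" ]; then
        cat "$statefile"      # e.g. {"port": 8888, "token": "abc"}
    else
        echo "{\"error\": \"no connection info for job ${jobid} yet\"}"
    fi
}

params_for_job "$@"
```

On success this prints something like `{"port": 8888, "token": "abc"}`; on failure it prints a JSON error object rather than exiting with garbage on stdout.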
Next you need the URL to connect to (e.g. `index.html?token=adsf`). This is a relative URL (i.e. if you would normally use ssh tunnels and `localhost:<nnn>`, drop the `localhost:<nnn>` part).
Finally you put all this data into a JSON structure and save it to the config files.
The config file
Strudel applications look like this:

```json
{
    "url": null,
    "name": "Jupyter Lab",
    "startscript": "#!/bin/bash\n/usr/local/sv2/dev/jupyter/jupyter.slurm\n",
    "actions": [
        {
            "name": "Connect",
            "paramscmd": "/usr/local/sv2/dev/jupyter/ {jobid}",
            "client": {"cmd": null, "redir": "?token={token}"},
            "states": ["RUNNING"]
        },
        {
            "name": "View log",
            "paramscmd": "/usr/local/sv2/dev/desktop/ {jobid}",
            "client": {"cmd": null, "redir": "index.html?token={token}"},
            "states": ["RUNNING", "Finished"]
        },
        {
            "name": "View Usage",
            "paramscmd": "/usr/local/sv2/dev/desktop/ {jobid}",
            "client": {"cmd": null, "redir": "index.html?token={token}"},
            "states": ["Finished"]
        },
        {
            "name": "Remove log",
            "paramscmd": "/usr/local/sv2/dev/ {jobid}",
            "client": null,
            "states": ["Finished"]
        }
    ],
    "localbind": true,
    "applist": null
}
```
This is a block of JSON data. It gets stored in a config file. Each compute site has its own list of applications, and if you are running dev and test environments you probably have a different set of applications in each. For M3 the applications are deployed as part of the frontend build (because it was easy to combine the config and the code), but for other sites the URL for this configuration data might be completely independent. For M3 the applications deployed to dev are defined here
and the applications for test are here
In order to deploy a new application you should edit those files on dev and create a merge request.
What are all these keys and values?
The first value, `url`, allows us to specify a URL that will provide additional configuration info. We don't use this for desktops or Jupyter, but we might use it for transferring files. The URL should open in an iframe and use `window.message` methods to pass data back to Strudel2. For most applications this will be set to `null`.
The `name` value is reasonably self-explanatory, but it's worth noting: when the form asking what resources to use (CPUs/GPUs/time) is rendered, this value is passed to the form so that, for example, the application named "Desktop" renders a different form than the application named "Jupyter Lab". Don't worry though: the M3 forms will render a good default for any unknown application names, so you can pretty much fill in whatever you like.
The `startscript` gets passed as stdin to whatever command the site runs things with. In the case of M3 this is sbatch, so its contents (the start script) get passed to sbatch. In this example `/usr/local/sv2/dev/jupyter/jupyter.slurm` is *NOT* a slurm script, i.e. if you put `#SBATCH` lines in there they will be ignored. It is a program. If you really need `#SBATCH` lines, it would look like
`"startscript": "#!/bin/bash\n#SBATCH -w m3a011\n/usr/local/sv2/dev/jupyter/jupyter.slurm\n",` but you really shouldn't do this. The intention is that the start script doesn't care what job scheduler you're using.
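As a sketch of what the program a `startscript` points at might do, here is a hypothetical, scheduler-agnostic start script. Every path and name below is an assumption for illustration: it picks a port, generates a token, records them where a matching paramscmd could find them, and would then launch the application.

```shell
#!/bin/bash
# Hypothetical start script sketch (paths and names are assumptions). The key
# point: it is a plain program, not a batch script, and should not care
# whether the site submits it via sbatch, qsub or anything else.
jobid="${SLURM_JOB_ID:-demo}"            # fall back to "demo" for local testing
port=$(( (RANDOM % 10000) + 20000 ))     # crude stand-in for picking a free port
token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
# Record connection info where a matching paramscmd could find it
echo "{\"port\": ${port}, \"token\": \"${token}\"}" > "/tmp/app-${jobid}.json"
# A real start script would now exec the application in the foreground, e.g.:
#   exec jupyter lab --no-browser --port="$port" --NotebookApp.token="$token"
echo "connection info written to /tmp/app-${jobid}.json"
```

Because nothing here is scheduler-specific, the same script could be submitted via sbatch on one site and qsub on another.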
Next we have a list of `actions`. Each of these renders a button in Strudel2, depending on the state of the job. For each action, what happens is that the `paramscmd` gets run and returns a blob of JSON data. Then the client is executed using the data returned from the `paramscmd`. The paramscmd should always return a value for `port` (i.e. the network port to connect on) and should also return any info used in the `client` definition (so a paramscmd whose client redir uses `{token}` must return both a port and a token, like `{"port":123,"token":"abc"}`). You should attempt to copy your paramscmd from an existing implementation. Ideally paramscmds are not aware of what batch environment is used, so that a `paramscmd` used on a PBS site can be shared with a slurm site. In practice we've had to make our paramscmds aware of the jobid and the process tree originating from that jobid in order to connect to the correct job (in case multiple jobs are running on the same node).
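To illustrate how the paramscmd output and the client `redir` template fit together, here is a conceptual sketch (this is an illustration of the idea, not Strudel2's actual code): given paramscmd output `{"port": 123, "token": "abc"}` and a redir of `index.html?token={token}`, the values are substituted into the template and the connection goes to the returned port.

```shell
#!/bin/bash
# Conceptual illustration only: combine paramscmd JSON output with a client
# "redir" template. Strudel2 does this internally; this is not its real code.
params='{"port": 123, "token": "abc"}'
redir='index.html?token={token}'
# Crude extraction of the two fields from the JSON blob
port=$(echo "$params" | sed -n 's/.*"port": *\([0-9]*\).*/\1/p')
token=$(echo "$params" | sed -n 's/.*"token": *"\([^"]*\)".*/\1/p')
# Substitute {token} in the redir template and prepend the tunnelled endpoint
url="http://localhost:${port}/${redir/\{token\}/$token}"
echo "$url"
```

The printed URL is `http://localhost:123/index.html?token=abc`, which is why the redir in the config is relative: the host and port parts come from the tunnel and the paramscmd.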
You'll notice that the `client` defines both a `cmd` and a `redir`. The `cmd` field is reserved for future work where Strudel2 can be installed locally and use things like a native VNC viewer instead of a web browser and noVNC.
Next we have the `localbind` option. This should be set to true. It controls the behaviour of tunnels. In particular, you generally can't access Jupyter from the login node; you have to ssh to the execution host and access it using `ssh -L 8888:localhost:8888 <exechost>`. On the other hand, if you want to access the SSH server on the execution host you don't need to do `ssh -L 2222:localhost:22 <exechost>`; you can get straight there from the login node.
Finally, the `applist` option allows for recursively nesting another list of apps. This feature is implemented in the S2 UI, but because it's not currently in use and has no test coverage it has probably got some bit rot. If you feel the need to have a multilevel list of applications, please contact the developer.