
updating spank plugin

Andreas Hamacher requested to merge spankupdate into master

Updated the spank plugin after complaints on Ubuntu.
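
For context on what slurmd loads here: a SPANK plugin is a shared object that exports a set of well-known callbacks. The sketch below is not the plugin changed in this MR; it is a minimal skeleton (plugin name and log messages are placeholders) showing the general shape, built with e.g. gcc -shared -fPIC -o demo.so demo.c.

#include <slurm/spank.h>

/* Register the plugin with SPANK; "demo" and version 1 are placeholders. */
SPANK_PLUGIN(demo, 1);

/* Called in every context (srun, slurmd, slurmstepd) when the plugin loads. */
int slurm_spank_init(spank_t sp, int ac, char **av)
{
        return ESPANK_SUCCESS;
}

/* Called on the compute node just before each task is launched. */
int slurm_spank_task_init(spank_t sp, int ac, char **av)
{
        /* spank_remote() is non-zero when running inside slurmstepd. */
        if (spank_remote(sp))
                slurm_info("demo spank plugin: task starting");
        return ESPANK_SUCCESS;
}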

Tested on m3t000 and m3f031:

[username@m3-login1 ~]$ srun --partition=desktop --reservation=AWX -w m3f031 --qos=desktopq "hostname"                                     
m3f031
[root@m3f031 ~]# cat /var/log/slurmd.log 
[2021-10-15T10:57:38.513] Node reconfigured socket/core boundaries SocketsPerBoard=1:3(hw) CoresPerSocket=3:1(hw)
[2021-10-15T10:57:38.513] Message aggregation disabled
[2021-10-15T10:57:38.515] CPU frequency setting not configured for this node
[2021-10-15T10:57:38.516] slurmd version 20.02.7 started
[2021-10-15T10:57:38.517] error: Invalid PrologSlurmctld(`/opt/slurm-latest/etc/slurmctld.prolog`): No such file or directory
[2021-10-15T10:57:38.517] slurmd started on Fri, 15 Oct 2021 10:57:38 +1100
[2021-10-15T10:57:40.792] CPUs=3 Boards=1 Sockets=3 Cores=1 Threads=1 Memory=13869 TmpDisk=30172 Uptime=3182687 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)
[2021-10-15T11:22:13.602] _run_prolog: run job script took usec=362669
[2021-10-15T11:22:13.619] _run_prolog: prolog with lock for job 20944182 ran for 0 seconds
[2021-10-15T11:22:13.852] [20944182.extern] task/cgroup: /slurm/uid_11436/job_20944182: alloc=4096MB mem.limit=4096MB memsw.limit=unlimited
[2021-10-15T11:22:13.866] [20944182.extern] task/cgroup: /slurm/uid_11436/job_20944182/step_extern: alloc=4096MB mem.limit=4096MB memsw.limit=unlimited
[2021-10-15T11:22:15.079] launch task 20944182.0 request from UID:11436 GID:10025 HOST:172.16.202.163 PORT:47788
[2021-10-15T11:22:15.080] lllp_distribution jobid [20944182] implicit auto binding: sockets,one_thread, dist 8192
[2021-10-15T11:22:15.080] _task_layout_lllp_cyclic 
[2021-10-15T11:22:15.080] _lllp_generate_cpu_bind jobid [20944182]: mask_cpu,one_thread, 0x1
[2021-10-15T11:22:15.104] [20944182.0] _setup_stepd_job_info: SLURM_STEP_RESV_PORTS found 12261-12262
[2021-10-15T11:22:15.119] [20944182.0] task/cgroup: /slurm/uid_11436/job_20944182: alloc=4096MB mem.limit=4096MB memsw.limit=unlimited
[2021-10-15T11:22:15.126] [20944182.0] task/cgroup: /slurm/uid_11436/job_20944182/step_0: alloc=4096MB mem.limit=4096MB memsw.limit=unlimited
[2021-10-15T11:22:15.140] [20944182.0] task_p_pre_launch: Using sched_affinity for tasks
[2021-10-15T11:22:15.180] [20944182.0] done with job
[2021-10-15T11:22:15.214] [20944182.extern] done with job
[2021-10-15T11:23:03.434] _run_prolog: run job script took usec=207665
[2021-10-15T11:23:03.443] _run_prolog: prolog with lock for job 20944190 ran for 0 seconds
[2021-10-15T11:23:03.581] [20944190.extern] task/cgroup: /slurm/uid_11436/job_20944190: alloc=4096MB mem.limit=4096MB memsw.limit=unlimited
[2021-10-15T11:23:03.595] [20944190.extern] task/cgroup: /slurm/uid_11436/job_20944190/step_extern: alloc=4096MB mem.limit=4096MB memsw.limit=unlimited
[2021-10-15T11:23:04.797] launch task 20944190.0 request from UID:11436 GID:10025 HOST:172.16.202.163 PORT:16045
[2021-10-15T11:23:04.798] lllp_distribution jobid [20944190] implicit auto binding: sockets,one_thread, dist 8192
[2021-10-15T11:23:04.798] _task_layout_lllp_cyclic 
[2021-10-15T11:23:04.798] _lllp_generate_cpu_bind jobid [20944190]: mask_cpu,one_thread, 0x1
[2021-10-15T11:23:04.826] [20944190.0] _setup_stepd_job_info: SLURM_STEP_RESV_PORTS found 12265-12266
[2021-10-15T11:23:04.838] [20944190.0] task/cgroup: /slurm/uid_11436/job_20944190: alloc=4096MB mem.limit=4096MB memsw.limit=unlimited
[2021-10-15T11:23:04.845] [20944190.0] task/cgroup: /slurm/uid_11436/job_20944190/step_0: alloc=4096MB mem.limit=4096MB memsw.limit=unlimited
[2021-10-15T11:23:04.859] [20944190.0] task_p_pre_launch: Using sched_affinity for tasks
[2021-10-15T11:23:04.915] [20944190.0] done with job
[2021-10-15T11:23:04.949] [20944190.extern] done with job
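
For completeness, slurmd/srun only pick up the rebuilt .so because it is listed in plugstack.conf. The path below is an assumption based on the /opt/slurm-latest prefix visible in the log above, not something taken from this MR:

# plugstack.conf (assumed location: /opt/slurm-latest/etc/plugstack.conf)
# "optional" lets jobs run even if the plugin fails to load;
# "required" would make a plugin load failure fatal for the job.
optional /opt/slurm-latest/lib/slurm/ourplugin.so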
