By default, Slurm is configured to allocate an entire node to a job, even if the job requests only a subset of that node's resources.
To allow multiple jobs to share a node, you need to configure the partition to be shared.
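As a quick check, you can submit a minimal single-task job and inspect its allocation. The queue name defq, the sleep command, and <jobid> below are only placeholders for your own cluster:
# module load slurm
# sbatch -p defq -n 1 --wrap="sleep 120"
# scontrol show job <jobid> | grep -E "NumNodes|NumCPUs"
With whole-node allocation, NumCPUs is reported as the full core count of the node even though only one task was requested.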
In slurm.conf on the head node, you’ll need to add the following lines below the autogenerated section, then restart the slurmctld service:
SelectType=select/cons_tres
SelectTypeParameters=CR_Core
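On a systemd-based head node, the restart is typically done as follows (adjust the service name if your installation differs):
# systemctl restart slurmctld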
Additionally, the correct setting depends on the output of the following commands:
# module load slurm
# scontrol show node nodename
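The fields of interest are the socket, core, and thread counts. A purely illustrative excerpt for a 2-socket, 16-cores-per-socket node (node001 is a placeholder) could look like this:
NodeName=node001 CoresPerSocket=16
   CPUAlloc=0 CPUTot=32 CPULoad=0.05
   Sockets=2 Boards=1 ThreadsPerCore=1
Compare CPUTot and the socket/core/thread values against the node's actual hardware.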
If the reported core count is accurate, setting OverSubscribe to YES will suffice. If not, use "YES:##", where ## is the total number of cores.
Example:
# cmsh -c "wlm ; jobqueue ; use defq ; set OverSubscribe YES:## ; commit"
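After the commit, you can verify that the queue picked up the setting (defq is the example queue from above):
# scontrol show partition defq | grep OverSubscribe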
Please note that changing the SelectType plugin and restarting the slurmctld service may result in the termination of all running jobs.
You may also need to run the following scontrol command on the head node to inform the compute nodes of the change:
# scontrol reconfigure
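Once the service has been restarted and reconfigured, you can confirm that the new plugin is active. With the values set earlier, the output should resemble the lines below (exact formatting may vary between Slurm versions):
# scontrol show config | grep -i selecttype
SelectType              = select/cons_tres
SelectTypeParameters    = CR_CORE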