How do I make configuration changes to the workload manager (WLM) permanent?
Or perhaps: if the configuration changes of a WLM disappear over time (that is, they do not stay "frozen"), then you are not doing configuration the "Bright" way.
If you want to configure a WLM manually outside of Bright Cluster Manager, by directly modifying the configuration files, then you have to explicitly freeze these configuration files in CMDaemon. Otherwise, these configuration files will revert to their default values. (These defaults are stored in the CMDaemon database.)
There are two ways to make configuration changes to the WLM permanent while CMDaemon is running:
- By declaring a freeze for the WLM configuration files in CMDaemon, and then manually modifying the configuration files directly
- By having CMDaemon take over the management of the configuration changes. This is done by making any required changes via the CMDaemon front ends, i.e.:
- With Bright View
- With cmsh
These ways are now described:
1 Freezing And Configuring the WLM Manually:
Sometimes it’s more convenient to configure the WLM manually by modifying the configuration files directly. This may be done because not all features of a particular WLM can be controlled by the Bright front ends.
- First freeze the configuration files via CMDaemon, to prevent the changes from being overwritten by CMDaemon:
- Edit
/cm/local/apps/cmd/etc/cmd.conf
- Change
FreezeChangesToSlurmConfig = false
to
FreezeChangesToSlurmConfig = true
(For a WLM other than Slurm, the corresponding freeze directive for that WLM should be set instead.)
- Restart CMDaemon with: systemctl restart cmd.service
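For example, the resulting setting in cmd.conf can be verified with grep (a quick sketch, using the path given above):
[root@bright92 ~]# grep FreezeChangesToSlurmConfig /cm/local/apps/cmd/etc/cmd.conf
FreezeChangesToSlurmConfig = true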
- Then modify the configuration files of the WLM directly, as required (a sketch follows below). After that, the manual WLM configuration procedure is complete.
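As an illustration, assuming a Slurm setup, a manual change could add a partition line to slurm.conf (the partition name and node range below are hypothetical, and the location of slurm.conf depends on the installation):
PartitionName=manual.q Nodes=node[001-004] Default=NO MaxTime=INFINITE State=UP
and then make the running Slurm daemons re-read the configuration:
[root@bright92 ~]# scontrol reconfigure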
2 Configuring the WLM via the CMDaemon Front Ends, Bright View and cmsh:
The Bright View and cmsh front ends can be used to configure the more common and generic aspects of the WLM.
2.1 Configuring the WLM via Bright View:
Please refer to the section: Examples Of Workload Management Assignment in the Workload Management chapter of the Administrator Manual.
2.2 Configuring the WLM via cmsh:
Here are some examples of how to configure Slurm via cmsh. Slurm should be substituted by the currently installed WLM, for example PBS Pro, Torque, LSF, or OpenLava. Similarly, slurmclient should be substituted by the corresponding WLM client role.
Add a queue:
[root@bright92 etc]# cmsh
[bright92]% wlm use slurm
[bright92->wlm[slurm]]% jobqueue; add new.q; commit
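The same can be done non-interactively from the shell, which is convenient for scripting (a sketch; the -c option makes cmsh run the quoted commands and then exit):
[root@bright92 ~]# cmsh -c "wlm use slurm; jobqueue; add new.q; commit"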
Add nodes to a queue:
The recommended way to add a node to a queue is by adding the queue to the list of queues in the WLM role of a category to which this node belongs.
[root@bright92 etc]# cmsh
[bright92]% category use test; roles
[bright92->category[test]->roles]% assign slurmclient
[bright92->category[test]->roles[slurmclient]]% set wlmcluster slurm
[bright92->category[test]->roles[slurmclient]]% set queues new.q
[bright92->category[test]->roles[slurmclient]]% commit
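The resulting role settings can then be inspected with the show command, for example (a sketch, assuming the same category as above):
[root@bright92 ~]# cmsh -c "category use test; roles; use slurmclient; show"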
Modify number of slots:
The only way to modify the number of slots of a queue is to modify the number of slots in the WLM role of a category to which the nodes of that queue belong.
[root@bright92 etc]# cmsh
[bright92]% category use test; roles
[bright92->category[test]->roles]% set slurmclient slots 4; commit
Note: if there is more than one queue assigned to a particular WLM role, then modifying the slots applies to all of those queues.
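As with the earlier examples, this can also be scripted as a one-liner (a sketch, assuming the same category and role as above):
[root@bright92 ~]# cmsh -c "category use test; roles; set slurmclient slots 4; commit"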