Following this method, configurations can be copied between Bright clusters that share the same major version (tested on Bright 9.0 with the latest updates applied).
Method 1: Using JSON exporter
- Export the configuration in JSON format from the existing cluster. CMDaemon has to be stopped first:
# service cmd stop
- Copy the exported directory to the new cluster, into which the configurations from the old cluster have to be imported:
# scp -r /tmp/cmd/ root@192.168.2.139:/tmp
- On the new cluster, search for occurrences of the <old-head-hostname>:
# grep -r "<old-head-hostname>" /tmp/cmd/
- Edit the resulting files and change "<old-head-hostname>" to "<new-head-hostname>". The affected files can be listed with the command below, and replaced in one pass as shown in the sketch that follows:
# grep -rl "<old-head-hostname>" ./
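A minimal sketch of the replacement, assuming GNU sed and that the placeholders are substituted with the real hostnames:
# grep -rl "<old-head-hostname>" ./ | xargs sed -i 's/<old-head-hostname>/<new-head-hostname>/g'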
- Copy ./device/<old-head-hostname> to ./device/<new-head-hostname>
- Copy ./device/<old-head-hostname>.json to ./device/<new-head-hostname>.json
# cp -r device/<old-head-hostname> device/<new-head-hostname>
# cp device/<old-head-hostname>.json device/<new-head-hostname>.json
Note: In case the hostname remains the same, the previous two steps can be skipped.
- Edit ./device/<new-head-hostname>.json and change the MAC addresses to those of the new head node's interfaces. The relevant lines can be found with:
# grep mac /tmp/cmd/device/<new-head-hostname>.json
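The replacement can also be done with sed; a minimal sketch, assuming <old-mac> and <new-mac> are substituted with the real addresses:
# sed -i 's/<old-mac>/<new-mac>/g' /tmp/cmd/device/<new-head-hostname>.json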
In case the Slurm workload manager is used:
- Copy the munge key:
# scp /etc/munge/munge.key root@192.168.2.139:/tmp
- Copy the Slurm configuration files (slurm.conf, slurmdbd.conf); see the sketch below.
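A sketch of copying the Slurm files, assuming they live in the default location used later in this method:
# scp /cm/shared/apps/slurm/var/etc/slurm/slurm*.conf root@192.168.2.139:/tmp
On the new head node, the munge key then has to be placed in /etc/munge with the ownership and mode munged requires:
# cp /tmp/munge.key /etc/munge/munge.key
# chown munge:munge /etc/munge/munge.key
# chmod 0400 /etc/munge/munge.key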
- Copy the cm-wlm-setup.conf config file, so the Slurm setup can be redone later:
# scp /root/cm-wlm-setup.conf root@192.168.2.139:/tmp
- Create the Slurm configuration directory:
# mkdir /cm/shared/apps/slurm/var/etc/slurm
- Copy the Slurm configuration files into that directory:
# cp /tmp/slurm* /cm/shared/apps/slurm/var/etc/slurm/
- Replace the old head hostname with the new head hostname in the cm-wlm-setup.conf file:
# sed -i 's/<old-head-hostname>/<new-head-hostname>/g' /tmp/cm-wlm-setup.conf
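The result can be verified quickly:
# grep "<new-head-hostname>" /tmp/cm-wlm-setup.conf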
- Import the modified JSON output into the new cluster:
# cmd -i json:/path/to/cmd/json/files
- Re-run cm-wlm-setup with the modified cm-wlm-setup.conf config file. First disable the existing Slurm setup:
# cm-wlm-setup --disable --yes-i-really-mean-it --wlm-cluster-name=slurm
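The setup can then be redone from the saved file; a sketch, assuming cm-wlm-setup accepts a saved config via -c (check cm-wlm-setup -h for the exact option):
# cm-wlm-setup -c /tmp/cm-wlm-setup.conf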
Note: <old-head-hostname> and <new-head-hostname> should be replaced with the real hostnames of the head nodes of the old and new clusters.
Method 2: Editing MySQL database dump
- Stop cmd.service on both head nodes:
# systemctl stop cmd
- Create a mysqldump from the old head node:
# mysqldump -u root -p cmdaemon > /tmp/cmdaemon.sql
- Copy the cmdaemon.sql to the new head node:
# scp /tmp/cmdaemon.sql root@192.168.2.139:/tmp
- Import the database on the new head node:
# mysql -u root -p cmdaemon < /tmp/cmdaemon.sql
- Connect to the cmdaemon database and change the head node hostname and the MAC address of the interfaces:
# mysql -u root -p cmdaemon
- If IP addresses have to be changed, this can be done in the "NetworkInterfaces" table:
MariaDB [cmdaemon]> update NetworkInterfaces set ip='10.141.255.253' where uniqueKey='281474976710800';
- If MAC addresses have to be changed, this can be done in the "NetworkPhysicalInterfaces" table:
MariaDB [cmdaemon]> update NetworkPhysicalInterfaces set mac='08:00:27:02:60:84' where uniqueKey='281474976710800';
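The uniqueKey of the row to update can be looked up first; a minimal sketch using only the tables and columns shown above:
MariaDB [cmdaemon]> select uniqueKey, ip from NetworkInterfaces;
MariaDB [cmdaemon]> select uniqueKey, mac from NetworkPhysicalInterfaces;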
- In case the Slurm workload manager is used, copy the configuration files in the same way as described in Method 1.
- Start the cmd service on the new head node:
# systemctl start cmd
- The new cluster will have all the configurations imported:
[b90-c7-backup->wlm[slurm]]% list
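The imported configuration can also be checked non-interactively from the shell; a short sketch using cmsh:
# cmsh -c "device list"
# cmsh -c "wlm list"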