Using either of the methods below, configurations can be copied between Bright clusters running the same major version (e.g. two Bright 9.2 clusters). These procedures were last tested on two BCM 10 clusters with the latest updates applied to both.
Method 1: Using the JSON exporter
1. On the source cluster, stop CMDaemon and export the configuration in JSON format:
# systemctl stop cmd
2. Copy the exported directory to the destination cluster, where the configuration from the source cluster will be imported:
# scp -r /tmp/cmd/ root@192.168.2.139:/tmp
3. On the destination cluster, search for occurrences of <old-head-hostname> and replace them with <new-head-hostname> using a text editor (or with sed, as shown below):
# grep -r "<old-head-hostname>" /tmp/cmd/
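The replacement can also be scripted instead of edited by hand. A minimal sketch, assuming GNU sed and plain-text JSON files under /tmp/cmd; <old-head-hostname> and <new-head-hostname> are the same placeholders as above:
# grep -rl "<old-head-hostname>" /tmp/cmd/ | xargs sed -i 's/<old-head-hostname>/<new-head-hostname>/g'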
4. Rename the JSON file under /tmp/cmd/device to use the name of <new-head-hostname>:
# cd /tmp/cmd
# mv device/<old-head-hostname>.json device/<new-head-hostname>.json
Note: If the hostname will remain the same, the previous two steps can be skipped.
5. Edit ./device/<new-head-hostname>.json and change the MAC addresses to those of the new head node:
# grep mac /tmp/cmd/device/<new-head-hostname>.json
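The MAC addresses found by the grep above can also be replaced with sed. A minimal sketch, where <old-mac> and <new-mac> are placeholders for the old and new interface MAC addresses (repeat once per interface):
# sed -i 's/<old-mac>/<new-mac>/g' /tmp/cmd/device/<new-head-hostname>.json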
In case the Slurm workload manager (wlm) is used:
6. (Slurm step only) Copy the munge key to the destination cluster:
# scp /etc/munge/munge.key root@192.168.2.139:/tmp
7. (Slurm step only) Copy the Slurm configuration files (slurm.conf, slurmdbd.conf) to the destination cluster (see the sketch below).
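A minimal sketch for step 7, assuming the configuration files sit in the default BCM location /cm/shared/apps/slurm/var/etc/slurm/ on the source head node; adjust the path and destination IP to match your environment:
# scp /cm/shared/apps/slurm/var/etc/slurm/slurm*.conf root@192.168.2.139:/tmp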
8. (Slurm step only) Copy the cm-wlm-setup.conf configuration file, which will be used to redo the Slurm setup:
# scp /root/cm-wlm-setup.conf root@192.168.2.139:/tmp
9. (Slurm step only) On the destination cluster, create the Slurm configuration directory:
# mkdir /cm/shared/apps/slurm/var/etc/slurm
10. (Slurm step only) Copy the Slurm configuration files to the Slurm directory:
# cp /tmp/slurm* /cm/shared/apps/slurm/var/etc/slurm/
11. (Slurm step only) Replace the old head hostname with the new head hostname in the cm-wlm-setup.conf file:
# sed -i 's/<old-head-hostname>/<new-head-hostname>/g' /tmp/cm-wlm-setup.conf
12. Import the modified JSON output into the new cluster.
13. (Slurm step only) Disable the existing Slurm setup and re-run cm-wlm-setup with the modified cm-wlm-setup.conf config file:
# cm-wlm-setup --disable --yes-i-really-mean-it --wlm-cluster-name=slurm
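Once the disable step has completed, the setup can be re-applied from the copied configuration file. A minimal sketch, assuming the installed cm-wlm-setup accepts a saved configuration file via -c (verify with cm-wlm-setup --help):
# cm-wlm-setup -c /tmp/cm-wlm-setup.conf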
Note: <old-head-hostname> and <new-head-hostname> should be replaced with the real hostnames of the head nodes of the old (source) and new (destination) clusters.
Method 2: Editing the MySQL database dump
- Stop cmd.service on both head nodes:
# systemctl stop cmd
- Create a mysqldump from the old head node:
# mysqldump -u root -p cmdaemon > /tmp/cmdaemon.sql
- Copy the cmdaemon.sql to the new head node:
# scp /tmp/cmdaemon.sql root@192.168.2.139:/tmp
- Import the database into the new head node:
# mysql -u root -p cmdaemon < /tmp/cmdaemon.sql
- Connect to the cmdaemon database and change the head node hostname and the MAC addresses of the interfaces:
# mysql -u root -p cmdaemon
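To identify which uniqueKey values to use in the update statements of the next two steps, the existing rows can be listed first. A minimal sketch that uses only the tables and columns referenced below; locating the table that holds the head node hostname is left to an inspection with SHOW TABLES:
MariaDB [cmdaemon]> SELECT uniqueKey, ip FROM NetworkInterfaces;
MariaDB [cmdaemon]> SELECT uniqueKey, mac FROM NetworkPhysicalInterfaces;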
- If IP addresses have to be changed, this can be done in the table "NetworkInterfaces":
MariaDB [cmdaemon]> update NetworkInterfaces set ip='10.141.255.253' where uniqueKey='281474976710800';
- If MAC addresses have to be changed, this can be done in the table "NetworkPhysicalInterfaces":
MariaDB [cmdaemon]> update NetworkPhysicalInterfaces set mac='08:00:27:02:60:84' where uniqueKey='281474976710800';
- If the Slurm workload manager is used, copy the configuration files in the same way as described in Method 1.
- Start the cmd service on the new head node:
# systemctl start cmd
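A quick sanity check that CMDaemon has started cleanly (a standard systemd status query, nothing BCM-specific):
# systemctl status cmd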
- The new cluster will now have all the configurations imported. This can be verified in cmsh, for example:
[b90-c7-backup->wlm[slurm]]% list
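The same check can be run non-interactively from the shell. A minimal sketch, assuming cmsh is in the PATH and using the wlm mode shown in the prompt above:
# cmsh -c "wlm; list"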