
How to Copy Bright Cluster configurations between two clusters

Using either of the methods below, configurations can be copied between Bright clusters running the same major version (for example, two Bright 9.2 clusters). These procedures were last tested on two BCM 10 clusters with the latest updates applied to both.

Method 1: Using JSON exporter

1. Export the configuration in JSON format from the source cluster:

# systemctl stop cmd
# cmd -x json:/tmp/cmd

2. Copy the exported directory to the destination cluster, on which the configuration from the source cluster will be imported:

# scp -r /tmp/cmd/ root@192.168.2.139:/tmp

3. On the destination cluster, find the occurrences of <old-head-hostname> and replace them with <new-head-hostname> using a text editor:

# grep -r "<old-head-hostname>" /tmp/cmd/
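
If there are many occurrences, the replacement can also be done in a single pass with sed instead of a text editor (a sketch; re-run the grep above afterwards to confirm nothing was missed):

# grep -rl "<old-head-hostname>" /tmp/cmd/ | xargs sed -i 's/<old-head-hostname>/<new-head-hostname>/g'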

4. Rename the JSON file under /tmp/cmd/device to use the name of <new-head-hostname>:

# cd /tmp/cmd
# mv device/<old-head-hostname>.json device/<new-head-hostname>.json

Note: If the hostname will remain the same, the previous two steps can be skipped.

5. Edit ./device/<new-head-hostname>.json and replace the MAC addresses with those of the new head node's interfaces:

# grep mac /tmp/cmd/device/<new-head-hostname>.json
      "mac": "08:00:27:01:E0:F0",      "mac": "08:00:27:B5:C4:F3",  "mac": "08:00:27:01:E0:F0",

In case the Slurm workload manager is used:

6. (Slurm step only) Copy the munge key to the destination cluster.

7. (Slurm step only) Copy the Slurm configuration files (slurm.conf, slurmdbd.conf, topology.conf, gres.conf, cgroup.conf) to the destination cluster:

# scp /etc/munge/munge.key root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/slurm.conf root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/slurmdbd.conf root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/topology.conf root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/gres.conf root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/cgroup.conf root@192.168.2.139:/tmp
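
Not every cluster has all of these files (topology.conf and gres.conf, for example, are optional), so a short loop that copies only the files that are present avoids scp errors (a sketch, assuming the same paths and destination address as above):

# cd /cm/shared/apps/slurm/var/etc/slurm
# for f in slurm.conf slurmdbd.conf topology.conf gres.conf cgroup.conf; do [ -f "$f" ] && scp "$f" root@192.168.2.139:/tmp; done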

8. (Slurm step only) Copy the cm-wlm-setup.conf configuration file, which will be used to redo the Slurm setup on the destination cluster:

# scp /root/cm-wlm-setup.conf root@192.168.2.139:/tmp

9. (Slurm step only) Create the Slurm configuration directory on the destination cluster:

# mkdir /cm/shared/apps/slurm/var/etc/slurm

10. (Slurm step only) Copy slurm configuration files to the slurm directory:

# cp /tmp/slurm* /cm/shared/apps/slurm/var/etc/slurm/
# cp /tmp/gres.conf /cm/shared/apps/slurm/var/etc/slurm/
# cp /tmp/topology.conf /cm/shared/apps/slurm/var/etc/slurm/
# cp /tmp/cgroup.conf /cm/shared/apps/slurm/var/etc/slurm/
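
Note: the munge key copied in step 6 is still under /tmp at this point. If the source cluster's key is to be reused, it can be installed in the standard munge location on the destination head node (a sketch, assuming the default /etc/munge layout; if cm-wlm-setup regenerates the key during setup, this step is not needed):

# cp /tmp/munge.key /etc/munge/munge.key
# chown munge:munge /etc/munge/munge.key
# chmod 400 /etc/munge/munge.key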

11. (Slurm step only) Replace the old head hostname with the new head hostname in the cm-wlm-setup.conf file:

# sed -i 's/<old-head-hostname>/<new-head-hostname>/g' /tmp/cm-wlm-setup.conf
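
A quick grep confirms that no references to the old hostname remain in the modified file (no output means the substitution is complete):

# grep "<old-head-hostname>" /tmp/cm-wlm-setup.conf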

12. Import the modified json output into the new cluster:

# systemctl stop cmd
# cmd -i json:/tmp/cmd
# systemctl start cmd
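
If CMDaemon does not come back up cleanly after the import, the CMDaemon log on the head node usually shows which object it rejected (assuming the default BCM log location):

# tail /var/log/cmdaemon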

13. (Slurm step only) Re-run the cm-wlm-setup with the modified cm-wlm-setup.conf config file:

# cm-wlm-setup --disable --yes-i-really-mean-it --wlm-cluster-name=slurm
# cm-wlm-setup -c /tmp/cm-wlm-setup.conf
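
Once cm-wlm-setup finishes, the restored Slurm configuration can be spot-checked from cmsh; the expected output is similar to the listing shown at the end of Method 2:

# cmsh -c "wlm; list"
# cmsh -c "configurationoverlay; list"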

Note: <old-head-hostname> and <new-head-hostname> should be replaced with the real hostnames of the head nodes of the old (source) and new (destination) clusters.

Method 2: Editing MySQL database dump

1. Stop cmd.service on both head nodes:
# systemctl stop cmd
2. Create a mysqldump of the cmdaemon database on the old head node:
# mysqldump -u root -p cmdaemon > /tmp/cmdaemon.sql
Enter password: 
3. Copy cmdaemon.sql to the new head node:
# scp /tmp/cmdaemon.sql root@192.168.2.139:/tmp
4. Import the database on the new head node:
# mysql -u root -p cmdaemon < /tmp/cmdaemon.sql
Enter password: 
5. Connect to the cmdaemon database and change the head node hostname and the MAC address of its interfaces:
# mysql -u root -p cmdaemon
Enter password:
MariaDB [cmdaemon]> update Devices set hostname='<new-head-hostname>' where hostname='<old-head-hostname>';
MariaDB [cmdaemon]> update Devices set mac='08:00:27:02:60:84' where hostname='<new-head-hostname>' ;
6. If IP addresses have to be changed, this can be done in the NetworkInterfaces table:
MariaDB [cmdaemon]> update NetworkInterfaces set ip='10.141.255.253' where uniqueKey='281474976710800' ;
7. If MAC addresses have to be changed, this can be done in the NetworkPhysicalInterfaces table:
MariaDB [cmdaemon]> update NetworkPhysicalInterfaces set mac='08:00:27:02:60:84' where uniqueKey='281474976710800' ;
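
The uniqueKey values used in the two updates above can be looked up beforehand; a minimal sketch using only the columns referenced above (the actual tables contain more columns that tie interfaces to devices):

MariaDB [cmdaemon]> select uniqueKey, ip from NetworkInterfaces;
MariaDB [cmdaemon]> select uniqueKey, mac from NetworkPhysicalInterfaces;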
8. In case the Slurm workload manager is used, copy the configuration files as described in Method 1.
9. Start the cmd service on the new head node:
# systemctl start cmd
10. The new cluster will now have all the configurations imported:
[b90-c7-backup->wlm[slurm]]% list
Type   Name (key)               Server nodes   Submit nodes                    Client nodes    
------ ------------------------ -------------- ------------------------------- -----------------
Slurm  slurm                    b90-c7-backup  node001..node020,b90-c7-backup  node001..node020
[b90-c7-backup->wlm[slurm]]% configurationoverlay
[b90-c7-backup->configurationoverlay]% list
Name (key)        Priority   All head nodes Nodes            Categories       Roles          
----------------- ---------- -------------- ---------------- ---------------- ----------------
slurm-accounting  500        yes                                              slurmaccounting
slurm-client      500        no                              default,new      slurmclient    
slurm-server      500        yes                                              slurmserver    
slurm-submit      500        yes            b90-c7-backup    default,new      slurmsubmit    
[b90-c7-backup->configurationoverlay]% 
Updated on August 7, 2024
