
How to Copy Bright Cluster configurations between two clusters

Using the methods below, configurations can be copied between Bright clusters running the same major version (tested on Bright 9.0 with the latest updates applied).

Method 1: Using the JSON exporter

  1. Export the configuration in JSON format from the existing cluster:
# service cmd stop
# cmd -x json:/tmp/cmd
  2. Copy the exported directory to the new cluster, onto which the configuration from the old cluster will be imported:
# scp -r /tmp/cmd/ root@192.168.2.139:/tmp
  3. On the new cluster, search for occurrences of <old-head-hostname>:
# grep -r "<old-head-hostname>" /tmp/cmd/
  4. Edit the resulting files and change "<old-head-hostname>" to "<new-head-hostname>". The affected files can be listed with:
# grep -rl "<old-head-hostname>" /tmp/cmd/
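Instead of editing each file by hand, the substitution can also be applied in one pass; a sketch, assuming GNU sed and that the export lives under /tmp/cmd:
# grep -rl "<old-head-hostname>" /tmp/cmd/ | xargs sed -i 's/<old-head-hostname>/<new-head-hostname>/g'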
  5. Copy ./device/<old-head-hostname> to ./device/<new-head-hostname>
  6. Copy ./device/<old-head-hostname>.json to ./device/<new-head-hostname>.json
# cp -r device/<old-head-hostname> device/<new-head-hostname>
# cp device/<old-head-hostname>.json device/<new-head-hostname>.json

Note: If the hostname will remain the same, the previous two steps can be skipped.

  7. Edit ./device/<new-head-hostname>.json and change the MAC addresses to those of the new head node:
# grep mac /tmp/cmd/device/<new-head-hostname>.json
      "mac": "08:00:27:01:E0:F0",      "mac": "08:00:27:B5:C4:F3",  "mac": "08:00:27:01:E0:F0",

In case the Slurm workload manager is used

  1. Copy the munge key.
  2. Copy the Slurm configuration files (slurm.conf, slurmdbd.conf, topology.conf, gres.conf, cgroup.conf):
# scp /etc/munge/munge.key root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/slurm.conf root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/slurmdbd.conf root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/topology.conf root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/gres.conf root@192.168.2.139:/tmp
# scp /cm/shared/apps/slurm/var/etc/slurm/cgroup.conf root@192.168.2.139:/tmp
  3. Copy the cm-wlm-setup.conf configuration file, which will be used to redo the Slurm setup:
# scp /root/cm-wlm-setup.conf root@192.168.2.139:/tmp
  4. On the new head node, create the Slurm configuration directory:
# mkdir -p /cm/shared/apps/slurm/var/etc/slurm
  5. Copy the Slurm configuration files to the Slurm directory:
# cp /tmp/slurm* /cm/shared/apps/slurm/var/etc/slurm/
# cp /tmp/gres.conf /cm/shared/apps/slurm/var/etc/slurm/
# cp /tmp/topology.conf /cm/shared/apps/slurm/var/etc/slurm/
# cp /tmp/cgroup.conf /cm/shared/apps/slurm/var/etc/slurm/
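The munge key copied earlier still sits in /tmp on the new head node and presumably needs to be restored as well; the ownership and permissions below are the usual munge defaults, not taken from the original article:
# cp /tmp/munge.key /etc/munge/munge.key
# chown munge:munge /etc/munge/munge.key
# chmod 400 /etc/munge/munge.key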
  6. Replace the old head node hostname with the new one in the cm-wlm-setup.conf file:
# sed -i 's/<old-head-hostname>/<new-head-hostname>/g' /tmp/cm-wlm-setup.conf
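A quick sanity check that no occurrences of the old hostname remain (no output means the substitution succeeded):
# grep "<old-head-hostname>" /tmp/cm-wlm-setup.conf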
  7. Import the modified JSON export into the new cluster (with CMDaemon stopped on the new head node, as during the export):
# cmd -i json:/tmp/cmd
  8. Re-run cm-wlm-setup with the modified cm-wlm-setup.conf file:
# cm-wlm-setup --disable --yes-i-really-mean-it --wlm-cluster-name=slurm
# cm-wlm-setup -c /tmp/cm-wlm-setup.conf
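To confirm that Slurm is functional after the setup run, something like the following can be used (the module name assumes Bright's usual environment-modules layout):
# module load slurm
# sinfo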

Note: <old-head-hostname> and <new-head-hostname> should be replaced with the actual hostnames of the head nodes of the old and new clusters.

Method 2: Editing MySQL database dump

  1. Stop cmd.service on both head nodes:
# systemctl stop cmd
  2. Create a mysqldump of the cmdaemon database on the old head node:
# mysqldump -u root -p cmdaemon > /tmp/cmdaemon.sql
Enter password: 
  3. Copy the cmdaemon.sql to the new head node:
# scp /tmp/cmdaemon.sql root@192.168.2.139:/tmp
  4. Import the database on the new head node:
# mysql -u root -p cmdaemon < /tmp/cmdaemon.sql
Enter password: 
  5. Connect to the cmdaemon database and change the head node hostname and the MAC addresses of its interfaces:
# mysql -u root -p cmdaemon
Enter password:
MariaDB [cmdaemon]> update Devices set hostname='<new-head-hostname>' where hostname='<old-head-hostname>';
MariaDB [cmdaemon]> update Devices set mac='08:00:27:02:60:84' where hostname='<new-head-hostname>' ;
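The result can be verified with a select on the same columns used in the statements above:
MariaDB [cmdaemon]> select hostname, mac from Devices where hostname='<new-head-hostname>';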
  6. If IP addresses have to be changed, this can be done in the NetworkInterfaces table:
MariaDB [cmdaemon]> update NetworkInterfaces set ip='10.141.255.253' where uniqueKey='281474976710800' ;
  7. If MAC addresses have to be changed, this can be done in the NetworkPhysicalInterfaces table:
MariaDB [cmdaemon]> update NetworkPhysicalInterfaces set mac='08:00:27:02:60:84' where uniqueKey='281474976710800' ;
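If the uniqueKey of the interface to be updated is not known, it can be looked up first; this sketch uses only columns that already appear in the statements above:
MariaDB [cmdaemon]> select uniqueKey, ip from NetworkInterfaces;
MariaDB [cmdaemon]> select uniqueKey, mac from NetworkPhysicalInterfaces;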
  8. If the Slurm workload manager is used, copy the configuration files as described in Method 1.
  9. Start the cmd service on the new head node:
# systemctl start cmd
  10. The new cluster will now have all the configurations imported:
[b90-c7-backup->wlm[slurm]]% list
Type   Name (key)               Server nodes   Submit nodes                    Client nodes    
------ ------------------------ -------------- ------------------------------- -----------------
Slurm  slurm                    b90-c7-backup  node001..node020,b90-c7-backup  node001..node020
[b90-c7-backup->wlm[slurm]]% configurationoverlay
[b90-c7-backup->configurationoverlay]% list
Name (key)        Priority   All head nodes Nodes            Categories       Roles          
----------------- ---------- -------------- ---------------- ---------------- ----------------
slurm-accounting  500        yes                                              slurmaccounting
slurm-client      500        no                              default,new      slurmclient    
slurm-server      500        yes                                              slurmserver    
slurm-submit      500        yes            b90-c7-backup    default,new      slurmsubmit    
[b90-c7-backup->configurationoverlay]% 