
How do I upgrade to Bright 7.3?

How do I upgrade from Bright 6.0/6.1/7.0/7.1/7.2 to Bright 7.3?

The procedure below can be used to upgrade a Bright 6.0, 6.1, 7.0, 7.1, or 7.2 installation, including installations with failover, to Bright 7.3.

Supported Linux distributions

An upgrade to Bright 7.3 is supported for Bright 6.0, 6.1, 7.0, 7.1 or 7.2 clusters that are running one of the following Linux distributions:

  •  Red Hat Enterprise Linux 6.5 (RHEL6u5)
  •  Red Hat Enterprise Linux 6.6 (RHEL6u6)
  •  Red Hat Enterprise Linux 6.7 (RHEL6u7)
  •  Red Hat Enterprise Linux 7.0 (RHEL7u0)
  •  Red Hat Enterprise Linux 7.1 (RHEL7u1)
  •  Red Hat Enterprise Linux 7.2 (RHEL7u2)
  •  CentOS Linux 6.5 (CENTOS6u5)
  •  CentOS Linux 6.6 (CENTOS6u6)
  •  CentOS Linux 6.7 (CENTOS6u7)
  •  CentOS Linux 7.0 (CENTOS7u0)
  •  CentOS Linux 7.1 (CENTOS7u1)
  •  CentOS Linux 7.2 (CENTOS7u2)
  •  Scientific Linux 6.6 (SL6u6)
  •  Scientific Linux 6.7 (SL6u7)
  •  Scientific Linux 7.1 (SL7u1)
  •  Scientific Linux 7.2 (SL7u2)
  •  SUSE Linux Enterprise Server 11 Service Pack 3 (SLES11sp3)
  •  SUSE Linux Enterprise Server 11 Service Pack 4 (SLES11sp4)
  •  SUSE Linux Enterprise Server 12 (SLES12)
  •  SUSE Linux Enterprise Server 12 Service Pack 1 (SLES12sp1)
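To check which distribution and update level a head node is currently running, the standard release files can be inspected, for example:

cat /etc/redhat-release    # RHEL, CentOS, and Scientific Linux
cat /etc/SuSE-release      # SLES 11 and SLES 12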

Prerequisites

  • Extra base distribution RPMs will be installed by yum/zypper in order to resolve dependencies that might arise as a result of the upgrade. The base distribution repositories must therefore be reachable. This means that clusters running the Enterprise Linux distributions (RHEL and SLES11) must be subscribed to the appropriate software channels.
  • Packages in /cm/shared are upgraded, but the administrator should be aware of the following:
    • If /cm/shared is installed in the local partition, then the packages in it are upgraded. This may not be desirable for users who wish to retain the old behavior.
    • If /cm/shared is mounted from a separate partition, then unmounting it will prevent upgrades to the mounted partition, but will allow new packages to be installed in /cm/shared within the local partition. This may be desirable for the administrator, who can later copy updates from the local /cm/shared to the remote /cm/shared manually, according to site-specific requirements.
      Since a mounted /cm/shared is unmounted by default during the upgrade, any packages installed in a local /cm/shared will have their files upgraded there. According to the yum database the system is then upgraded, even though the upgraded files are sitting in the local partition rather than the mounted one. The newer packages can therefore only be expected to work properly once their associated files are copied over from the local partition to the remote partition.
    • If /cm/shared will be unmounted during the upgrade (i.e. if an in-place upgrade is not being performed), then make sure beforehand that the contents of the local /cm/shared are in sync with the remote copy (see the sketch after this list).
  • IMPORTANT:
    • If upgrading from Bright 7.0 or Bright 7.1, then Hadoop deployments must be removed (using cm-hadoop-setup) before proceeding with the upgrade. Please contact Bright Support for further assistance.
      Note: Upgrades of Bright 7.2 Hadoop deployments do not require any additional actions; the regular upgrade procedure must be followed.
    • Bright OpenStack deployments must be removed (using cm-openstack-setup). All older Bright OpenStack packages and dependencies must be removed prior to starting the upgrade. Please contact Bright Support for further assistance.
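
A minimal sketch of the local-to-remote sync mentioned above, assuming /cm/shared is currently mounted from the remote partition; a bind mount of / exposes the local copy hidden beneath the mount point (paths, and the direction of the sync, should be adapted to the site):

mkdir -p /mnt/rootfs
mount --bind / /mnt/rootfs                    # the local /cm/shared is reachable beneath the mount here
rsync -a /cm/shared/ /mnt/rootfs/cm/shared/   # bring the local copy in sync with the mounted (remote) copy
umount /mnt/rootfs
rmdir /mnt/rootfs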

Important Note

The upgrade process will not only upgrade CMDaemon and its dependencies, but also other packages. Old versions of these packages will not be available from the repositories of the latest version of Bright (in this case, the 7.3 repositories). In some cases this will require recompiling user applications against the upgraded versions of the compilers and libraries. Also, the configurations of the old packages are not copied over to the new packages automatically, which means that the administrator will have to adjust the configuration from the old packages to suit the new packages manually.

Enable the upgrade repo and install the upgrade RPM

Install the Bright Cluster Manager upgrade RPM on the Bright head node(s) as shown below:

1. Add and enable the upgrade repo

Create a repo file with the following contents:

[cm-upgrade-73] 
name=Bright 7.3 Upgrade Repository
baseurl=http://support.brightcomputing.com/upgrade/7.3/<DIST>/updates
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cm


Note: Please replace <DIST> with one of: rhel/7, rhel/6, sles/12, sles/11

On RHEL-based distributions, save the file to /etc/yum.repos.d/

On SLES-based distributions, save the file to /etc/zypp/repos.d/
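
For example, on a RHEL7-based head node the repo file can be created as follows (the file name cm-upgrade-7.3.repo is arbitrary):

cat > /etc/yum.repos.d/cm-upgrade-7.3.repo <<'EOF'
[cm-upgrade-73]
name=Bright 7.3 Upgrade Repository
baseurl=http://support.brightcomputing.com/upgrade/7.3/rhel/7/updates
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cm
EOF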

2. Install RPM

yum install cm-upgrade-7.3
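
On SLES-based distributions, the equivalent zypper command can be used:

zypper install cm-upgrade-7.3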

3. Make the cm-upgrade command available in the default PATH

module load cm-upgrade/7.3

Upgrade procedure

The recommended order for upgrade is:

  1. Power off the regular nodes.
    Terminate cloud nodes and cloud directors.
  2. Apply existing updates to Bright 6.0/6.1/7.0/7.1/7.2 on the head node and in the software images.
    • Update head node:
      RHEL derivatives:
      yum update
      SLES derivatives:
      zypper up 
    • Update software images. For each software image, run the following:
      RHEL derivatives:
      yum --installroot=/cm/images/<software image> update
      SLES derivatives:
      zypper --root /cm/images/<software image> up

      Note: If the software image repositories differ from the repositories that the head node uses, then you should chroot into the software image before attempting to run "yum update" or "zypper up". This is because the --installroot or --root switch does not allow yum/zypper to use the repositories defined in the software images.
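      A minimal sketch of such a chroot update, assuming the image is under /cm/images/ (on some setups /proc and /dev may also need to be bind-mounted into the image first):
      chroot /cm/images/<software image>
      yum update        # or, on SLES-based images: zypper up
      exit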
  3. Upgrade head nodes to Bright 7.3:
    cm-upgrade -u 7.3
    Important: This must be run on both head nodes in a high-availability setup.

    Recommended: Upgrade the active head node first, and then the passive head node.
  4. Run the post-upgrade actions (must be run only on the active head node):
    cm-upgrade -u 7.3 -f -p
  5. In an HA setup, after upgrading both head nodes, resync the databases. Run the following from the active head node (it is very important to complete this step before moving on to the next one):
    cmha dbreclone <secondary>
  6. Upgrade the software image(s) to Bright 7.3
    cm-upgrade -u 7.3 -i all
    Important: This must be run only on the active head node. If the software images are not under the standard location, which is /cm/images/ on the head node, then the "-a" option should be used: cm-upgrade -u 7.3 -a /apps/images -i <name of software image>
  7. Power on the regular nodes, cloud nodes and cloud directors.
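
For example, the regular nodes can be powered on from the active head node with cmsh; a sketch, assuming the nodes use managed power control and sit in the default category (category names are site-specific):

cmsh -c "device; power on -c default"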

Usage and help

For more detailed information on usage, examples, and a full description:

  • cm-upgrade
    without any arguments prints the usage and several examples of how to use the script.
  • cm-upgrade --help
    prints the complete help and description.

Upgrading using a Bright DVD/ISO

When using a Bright DVD/ISO to perform the upgrade, it is important to use a DVD/ISO that is not older than 7.3-5. The DVD/ISO version can be found (assuming that the DVD/ISO is mounted under /mnt/cdrom) with a find command such as:
 
# find /mnt/cdrom -type d -name '7.3-*'

/mnt/cdrom/data/cm-rpms/7.3-5

FAQs and Troubleshooting

Q: Why are my SGE or Torque jobs not running after upgrading to Bright 7.3?

A: This is usually because there is an obsolete, broken prolog symlink:

/cm/local/apps/sge/var/prologs/10-prolog-healthchecker

or

/cm/local/apps/torque/var/prologs/10-prolog-healthchecker

Solution: Remove the broken symlink on the nodes and re-submit the jobs.
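
For example, on each affected node (only whichever of the two paths exists applies):

rm -f /cm/local/apps/sge/var/prologs/10-prolog-healthchecker
rm -f /cm/local/apps/torque/var/prologs/10-prolog-healthchecker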

Q: Why is the Bright package perl-Config-IniFiles on my SLES11 cluster not upgraded to 7.3?

A: This happens when zypper cannot find the dependency package perl-List-MoreUtils, and therefore skips updating perl-Config-IniFiles.

Solution: Enable a repository that contains the perl-List-MoreUtils rpm and then run:

zypper update perl-Config-IniFiles

Q: Why did cm-upgrade fail at the stage 'Installing distribution packages' or 'Upgrading packages to Bright 7.3'?

A: This happens when some distribution package dependencies could not be met. Please look in /var/log/cm-upgrade.log for detailed information about which packages are missing.

Solution: Enable required additional base distribution repositories and re-run cm-upgrade with the -f option.

Example: cm-upgrade -u 7.3 -f

Q: After upgrading from Bright 6.0 to Bright 7.3, why is the MySQL healthcheck failing because the CMDaemon monitoring database engine is not MyISAM?

A: This is because Bright versions before 6.1 use InnoDB as the MySQL engine. Starting with Bright 6.1, MyISAM is the default monitoring database engine.

Solution: Change the engine type for the cmdaemon_mon database to MyISAM.
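
A minimal sketch of the conversion, assuming MySQL root credentials are available and the monitoring database is named cmdaemon_mon (as above); it generates an ALTER TABLE statement per InnoDB table and then applies them:

mysql -u root -p -N -e "SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=MyISAM;') FROM information_schema.tables WHERE table_schema='cmdaemon_mon' AND engine='InnoDB';" > /tmp/to-myisam.sql
mysql -u root -p cmdaemon_mon < /tmp/to-myisam.sql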

Q: Why are LDAP users sometimes not accessible on SLES compute nodes after upgrading to Bright 7.3?

A: This is most likely because the 'sssd' service failed to start. This can happen when /var/lib/sssd is in the exclude lists of the node or category.

Solution: Remove /var/lib/sssd from the exclude lists and then reboot the nodes.
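
A hedged cmsh sketch for removing the entry from a category's exclude lists, assuming a category named default; the exact exclude list property names vary by Bright version and can be listed with show:

cmsh
% category use default
% set excludelistupdate      # opens an editor; delete the /var/lib/sssd lines (repeat for the other exclude lists)
% commit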

Q: Why do my paravirtual cloud node instances not boot after upgrading to Bright 7.3?

A: This is most likely because the instance type is one of the 'paravirtual only' types ('Previous Generation instances'). It is recommended to upgrade to 'Current Generation instances'.

Solution: Please follow the upgrade path recommendations from Amazon. Use cmsh to change the values of 'Default director type' and 'Default type' in the 'cloud' mode, and/or the value of 'Instance type' in the 'cloudsettings' mode, to the recommended instance type. For example, if the old value was m1.medium, then change it to m3.medium, and so on.
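
A hedged cmsh sketch, assuming a cloud provider object named amazon; the provider name is site-specific, and the exact property names should be verified with show before setting:

cmsh
% cloud use amazon
% show                            # verify the current 'Default director type' and 'Default type' values
% set defaultdirectortype m3.medium
% set defaulttype m3.medium
% commit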

Q: Why is the mvapich package not upgraded to the Bright 7.3 version?

A: This is because support for the mvapich package has been dropped in Bright 7.3. The package is not obsoleted or removed automatically, because there might be user applications that still use it.

Solution: If no user applications use mvapich, then remove the package manually.
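
For example, the installed packages can be listed and then removed manually (exact package names vary per installation):

rpm -qa | grep -i mvapich        # list the installed mvapich packages
yum remove <package name>        # on SLES: zypper remove <package name>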

Updated on October 16, 2020
