How Do I Set up GPFS 5 on a Bright cluster?

  1. Run the installer and accept the license.
    # ./Spectrum_Scale_Developer-5.1.0.3-x86_64-Linux-install

    NOTE: If you have the purchased version of the GPFS software, the name of the installer will be different. In that case, replace the above with the name of the installer from the purchased version.
  2. Install the packages extracted by the installer onto the head node(s).

    For Ubuntu:
    # cd /usr/lpp/mmfs/5.1.0.3/gpfs_debs/
    ## ksh is a dependency for gpfs.base ##
    # apt install ksh
    # dpkg -i gpfs.base*deb gpfs.gpl*deb gpfs.license*deb gpfs.gskit*deb gpfs.msg*deb gpfs.docs*deb

    For RHEL/CentOS:
    # cd /usr/lpp/mmfs/5.1.0.3/gpfs_rpms
    # yum localinstall gpfs.base*rpm gpfs.gpl*rpm gpfs.adv*rpm gpfs.license*rpm gpfs.gskit*rpm gpfs.msg*rpm gpfs.docs*rpm


    For SLES:
    # cd /usr/lpp/mmfs/5.1.0.3/gpfs_rpms
    # rpm -ivh gpfs.base*rpm gpfs.gpl*rpm gpfs.adv*rpm gpfs.license*rpm gpfs.gskit*rpm gpfs.msg*rpm gpfs.docs*rpm
  3. Install the same packages into the software image(s). For this example, I am using default-image.

    For Ubuntu:
    ## ksh is a dependency for gpfs.base ##
    # cm-chroot-sw-img /cm/images/default-image apt install ksh
    # dpkg --root=/cm/images/default-image -i gpfs.base*deb gpfs.gpl*deb gpfs.license*deb gpfs.gskit*deb gpfs.msg*deb gpfs.docs*deb


    For RHEL/CentOS:
    # yum --installroot=/cm/images/default-image localinstall gpfs.base*rpm gpfs.gpl*rpm gpfs.adv*rpm gpfs.license*rpm gpfs.gskit*rpm gpfs.msg*rpm gpfs.docs*rpm

    For SLES:
    # rpm --root=/cm/images/default-image -ivh gpfs.base*rpm gpfs.gpl*rpm gpfs.adv*rpm gpfs.license*rpm gpfs.gskit*rpm gpfs.msg*rpm gpfs.docs*rpm
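
    To confirm that the packages landed in the image, the package database inside the image can be queried (assuming default-image):

    ## RHEL/CentOS/SLES ##
    # rpm --root=/cm/images/default-image -qa 'gpfs*'
    ## Ubuntu ##
    # cm-chroot-sw-img /cm/images/default-image dpkg -l 'gpfs*'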
  4. Modify the PATH variable on the head node(s) and software image(s) to include the GPFS binaries. This can be done permanently by adding a script under /etc/profile.d/.

    # cat /etc/profile.d/gpfs.sh
    export PATH=$PATH:/usr/lpp/mmfs/bin
    # cat /cm/images/default-image/etc/profile.d/gpfs.sh
    export PATH=$PATH:/usr/lpp/mmfs/bin
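
    Assuming default-image, one way to create these two files is:

    # echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' > /etc/profile.d/gpfs.sh
    # cp /etc/profile.d/gpfs.sh /cm/images/default-image/etc/profile.d/gpfs.sh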


    To apply the changes to PATH on the head node now, run:

    # . /etc/profile.d/gpfs.sh
  5. Configure the appropriate node category exclude lists in Bright so that files under /var/mmfs on the nodes are not touched. For this example, I am using the default category.

    # cmsh
    % category use default
    % set excludelistsyncinstall
    % set excludelistgrab
    % set excludelistgrabnew
    % set excludelistupdate
    % commit


    For each “set <exclude-list-name>”, a text editor session will be opened. For excludelistgrab and excludelistgrabnew, add the following line:

    - /var/mmfs

    For excludelistsyncinstall and excludelistupdate, add the following line:

    no-new-files: - /var/mmfs

    NOTES:
    – /var/mmfs should not be added to the full-install exclude list (excludelistfullinstall). Provisioning a node in FULL mode re-partitions the hard drives and re-creates the filesystems before synchronizing the image, so /var/mmfs on the node will be destroyed in any case.
    – If a node has been provisioned in FULL install mode, it should therefore be re-added to the GPFS cluster (see step 8).
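
    To double-check the result afterwards, the lists can also be printed non-interactively with cmsh (assuming the default category):

    # cmsh -c "category use default; get excludelistgrab; get excludelistupdate"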
  6. Install the GPFS portability layer onto the head node(s).

    # mmbuildgpl
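
    mmbuildgpl needs a compiler, make, and the kernel development headers for the running kernel. If the build fails because of missing headers, installing them first usually resolves it, for example:

    ## RHEL/CentOS ##
    # yum install gcc gcc-c++ make kernel-devel-$(uname -r)
    ## Ubuntu ##
    # apt install gcc g++ make linux-headers-$(uname -r)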
  7. Install the GPFS portability layer onto the compute nodes and their associated software image(s).
    Method 1: If the nodes and software image(s) use the same kernel version as the head node, then the mmbuildgpl command can be run within the software image(s) on the active head node (a quick way to compare kernel versions is shown at the end of this step). For example:

    # cm-chroot-sw-img /cm/images/default-image mmbuildgpl

    Reboot the compute nodes using that software image so that they come online with the portability layer properly installed.

    Proceed to the next step.

    Method 2: If not, then the mmbuildgpl command has to be run directly on a compute node, and the result then needs to be synced from that node back to its software image on the active head node.

    On that compute node, run:

    # mmbuildgpl

    On the active head node, use grabimage. For this example, I will use node001 as the name of the node on which I ran mmbuildgpl:

    # cmsh
    % device use node001
    % grabimage -w


    The next time the other nodes using this software image are provisioned, they will have the GPFS portability layer as well.
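
    As mentioned above, a quick way to compare the kernel version of the head node with the kernel(s) present in a software image (assuming default-image) is:

    # uname -r
    # ls /cm/images/default-image/lib/modules/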
  8. Create the GPFS cluster with the mmcrcluster command on the head node. A file containing the list of nodes for the GPFS cluster can be defined:

    # cat gpfs-nodes.txt
    gpfs-test.cm.cluster:quorum
    node001
    node002


    Then, the following command can be run:

    # mmcrcluster -N gpfs-nodes.txt

    After creating the cluster with mmcrcluster, or after adding a node with mmaddnode, the mmfsEnvLevel1, mmfsNodeData, and mmsdrfs files are created under /var/mmfs/gen. These files are needed for the node to identify itself to the GPFS cluster. A FULL provisioning of a node recreates its partitions and filesystems, so the configuration stored under /var/mmfs is destroyed. In that case, the fully provisioned node should be re-added to the cluster. This can be done by taking the node down briefly (so that it is unpingable for a few moments), removing it with the mmdelnode command, and then re-adding it with the mmaddnode command:

    # mmdelnode -N node001.cm.cluster
    # mmaddnode -N node001.cm.cluster
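
    The current cluster membership can be verified at any time with the mmlscluster command:

    # mmlscluster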
  9. Assign the server license to the head node:

    # mmchlicense server -N gpfs-test.cm.cluster
  10. Assign the client license to the compute nodes:
    # mmchlicense client -N node001,node002
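
    The assigned license designations can be reviewed with the mmlslicense command:

    # mmlslicense -L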
  11. Start GPFS on all nodes:
    # mmstartup -a
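
    The daemon state on all nodes can be checked with mmgetstate; each node should report "active" once GPFS is up:

    # mmgetstate -a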
  12. Create a network shared disk for use in your file systems by issuing the mmcrnsd command.

    # mmcrnsd -F stanzaFile.txt

    The input file, named stanzaFile.txt in this example, should at least contain the name of the block device and the server(s) on which the disk is available. In this example, I used /dev/sdb as the block device, and since the head node is where the disk is attached, I provided the name of the head node:

    # cat stanzaFile.txt
    %nsd:
    device=/dev/sdb
    servers=gpfs-test


    After issuing the mmcrnsd command, the contents of the input file are rewritten automatically so that the file can be used when creating the filesystem. For more details about the input file, refer to the manual page for mmcrnsd.
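
    The newly created NSD can be listed with the mmlsnsd command. At this point it will show up as a free disk, since it does not yet belong to any filesystem:

    # mmlsnsd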
  13. Create a new filesystem by using the mmcrfs command.

    # mmcrfs gpfs1nsd -F stanzaFile.txt

    The device name, gpfs1nsd, can be obtained by viewing the contents of stanzaFile.txt after that file was modified in the previous step.
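
    The attributes of the new filesystem can be displayed with the mmlsfs command:

    # mmlsfs gpfs1nsd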
  14. Mount the filesystem on all nodes by issuing the mmmount command on the head node.

    # mmmount gpfs1nsd -a

    The mmmount command appends an entry to /etc/fstab for the mounted GPFS filesystem if such an entry does not already exist:

    # grep gpfs /etc/fstab
    gpfs1nsd /gpfs/gpfs1nsd gpfs rw,mtime,relatime,dev=gpfs1nsd,noauto 0 0


    Notes:
    – The node-installer checks the disk layout XML schema and mounts the filesystems specified in it, but it does not mount what is specified in /etc/fstab or what is defined in the fsmounts of the category or the node. Rebooting a node therefore does not affect the GPFS filesystem: it is not part of the disksetup XML schema, so it is simply not mounted at that stage.
    – By default, Bright excludes filesystems of type gpfs so that they do not get wiped by an “image update”. As a result, adding the GPFS mount points to the exclude lists is not needed.
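
    Whether the filesystem is mounted everywhere can be checked with mmlsmount, or simply with df on a node:

    # mmlsmount gpfs1nsd -L
    # df -h /gpfs/gpfs1nsd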
  15. The mmchconfig command can be used to configure GPFS to start automatically at boot.
    # mmchconfig autoload=yes
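
    The setting can be verified with:

    # mmlsconfig autoload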

Updated on June 17, 2021
