Introduction
An Ansible collection is the standard way of shipping and consuming Ansible distributables (playbooks, roles, plugins) since Ansible 2.10. This document assumes that the user has practical knowledge of Ansible and Bright Cluster Manager.
This document describes the procedure for deploying a Bright Cluster Manager head node using an Ansible playbook that makes use of the head node installer collection. The head node installer is shipped as an Ansible collection. The collection defines the brightcomputing.installer.head_node role, which deploys the Bright Cluster Manager head node when invoked with the correct parameters from the user’s playbooks and roles. The head node installation also deploys the default software image and node-installer image components needed to provision compute nodes.
Terminology
Throughout this document, the host refers to the machine from which the Ansible playbook is run and on which the Ansible collection is installed, and the target refers to the machine on which the Bright head node is deployed.
Supported Linux distributions and Bright versions
Linux distributions:
- Ubuntu 18.04 LTS
Bright versions:
- Bright Cluster Manager 9.1
Requirements on the host
- Ansible version 2.10 or higher
- Python 3
- Python 3 Pip modules
- jmespath (required for community.general.json_query filter)
- xmltodict (required for brightcomputing.installer.xml filter)
- netaddr (required for ansible.netcommon.ipaddr filter)
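On the host, these Python modules can typically be installed with pip; a minimal sketch (the exact pip invocation may differ per distribution, and Ansible itself may instead come from distribution packages):
$ pip3 install 'ansible>=2.10' jmespath xmltodict netaddr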
Requirements on the target
- A fresh install of one of the Linux distributions listed in the “Supported Linux distributions and Bright versions” section of this document.
- Python 3
- mysql-server (or mariadb) installed and the root password set
- SELinux disabled
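On an Ubuntu 18.04 target, these prerequisites might be prepared roughly as follows (a sketch only; the package selection and the way the database root password is set are assumptions that may differ per site):
$ apt-get update
$ apt-get install -y python3 mariadb-server
$ mysqladmin -u root password '<database root password>'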
Install Bright installer collection
Install the head node installer Ansible collection on the host machine.
ansible-galaxy collection install https://support2.brightcomputing.com/bcm91-addon-ansible/collection/brightcomputing-installer-9.1.2%2B27.gita76be69.tar.gz
Note: In a future release, the collection will be published in the regular Ansible galaxy repository.
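Alternatively, the collection can be installed via a requirements file, which is convenient when the host setup is scripted or kept under version control (a sketch using the same download URL as above):
$ cat requirements.yml
---
collections:
  - name: https://support2.brightcomputing.com/bcm91-addon-ansible/collection/brightcomputing-installer-9.1.2%2B27.gita76be69.tar.gz
    type: url
$ ansible-galaxy collection install -r requirements.yml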
Supported installation modes
The following methods are supported for fetching Bright Cluster Manager packages:
- Bright ISO
- Public Bright repositories
- Local mirror of Bright repositories
Mandatory parameters for all install modes
The following table describes the parameters that must be configured in the playbook and/or vars file for all installation modes. The standard way of managing passwords is by using Ansible Vault; an example is shown after the table.
Variable name | Example | Description |
---|---|---|
install_medium | network | Installation source for Bright packages; must be one of dvd, network, or local |
product_key | – | A Bright Computing product key used to request a license for the deployment |
license.country | NL | The country value to use for the certificate request |
license.state | North Holland | The state or province name to use for the certificate request |
license.locality | Amsterdam | The locality name to use for the certificate request |
license.organization | Bright Computing | The organization name to use for the certificate request |
license.organizational_unit | Devel | The organization unit name to use for the certificate request |
license.cluster_name | MainCluster | The cluster name (common name) to use for the certificate request |
license.mac | 11:22:33:AA:BB:CC | The MAC address value to use for the certificate request |
db_cmd_password | !vault | <pass> | CMDaemon service database password |
ldap_root_pass | !vault | <pass> | LDAP root password |
ldap_readonly_pass | !vault | <pass> | LDAP read-only password |
mysql_login_user | root | MySQL admin username |
mysql_login_password | !vault | <pass> | MySQL admin password |
management_network_baseaddress | 10.141.0.0 | Base address of management network for compute nodes |
management_network_netmask | 255.255.0.0 | Netmask of management network |
management_ip_address | 10.141.255.254 | Head node IP address on the management network |
management_interface | enp0s3 | Head node network interface name for assigning management IP address |
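The password parameters in the table above (db_cmd_password, ldap_root_pass, ldap_readonly_pass, mysql_login_password) are expected as Ansible Vault values. A minimal sketch of producing such a value, which can then be pasted into the vars file (the variable name and password shown are examples):
$ ansible-vault encrypt_string --ask-vault-pass 'MySecretPassword' --name 'db_cmd_password'
The playbook must then be run with a matching --ask-vault-pass (or --vault-id) option so that Ansible can decrypt the values at run time.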
Additional configuration parameters
Variable name | Description |
---|---|
management_network_name | The management network’s name (defaults to internalnet) |
management_network_domain | The management network’s domain name (defaults to eth.cluster) |
mysql_login_host | Address where the database is running |
mysql_login_port | Port the database service is bound to |
mysql_login_unix_socket | Unix socket path the database is bound to |
slurm_user_pass | Password for the Slurm MySQL user used to access the accounting database |
external_ip_address | External IP address for head node |
external_interface | External network interface name for head node |
Example playbooks and vars file
Playbook using public Bright repositories
$ cat playbooks/bright-network-install.yml
---
- name: trigger bright addon installer
  hosts: bcm91-head-node
  gather_facts: true
  become: true
  vars_files:
    - vars/cluster-params.yml
  vars:
    bcm_version: 9.1
  pre_tasks: []
  collections:
    - brightcomputing.installer
  roles:
    - role: brightcomputing.installer.head_node
      vars:
        install_medium: network
        install_medium_network_packages:
          - "http://storage.internal/projects/bright-{{ bcm_version }}/packages/cm-config-cm.all.deb"
          - "http://storage.internal/projects/bright-{{ bcm_version }}/packages/cm-config-apt.all.deb"
Installing using public Bright repositories requires the cm-config-apt and cm-config-cm packages to be made available on the target machine. As shown in the above example, this can be accomplished by providing the URLs to the packages. The packages can also be placed on the target machine itself, and in that case install_medium_network_packages must be updated to point to local paths. The packages are included in the addon directory on a Bright ISO.
TIP: The task of downloading the packages onto the target machine itself can be automated, for example using the pre_tasks section in the playbook, or by other means depending on the requirements of the site.
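As the TIP suggests, the download can be handled in the pre_tasks section of the playbook; a minimal sketch using ansible.builtin.get_url (the URLs are the example ones from the playbook above, and /root is an arbitrary destination):
  pre_tasks:
    # Download the two cm-config packages onto the target machine
    - name: download cm-config packages
      ansible.builtin.get_url:
        url: "{{ item }}"
        dest: "/root/{{ item | basename }}"
      loop:
        - "http://storage.internal/projects/bright-{{ bcm_version }}/packages/cm-config-cm.all.deb"
        - "http://storage.internal/projects/bright-{{ bcm_version }}/packages/cm-config-apt.all.deb"
With this approach, install_medium_network_packages should point to the corresponding local paths under /root instead of the URLs.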
Playbook using a Bright DVD
$ cat playbooks/bright-iso-install.yml
---
- name: trigger bright addon installer
  hosts: bcm91-head-node
  gather_facts: true
  become: true
  vars_files:
    - vars/cluster-params.yml
  vars:
    bcm_version: 9.1
  pre_tasks: []
  collections:
    - brightcomputing.installer
  roles:
    - role: brightcomputing.installer.head_node
      vars:
        install_medium: dvd
        install_medium_dvd_path: "/root/isos/bright{{ bcm_version }}-ubuntu1804.iso"
        install_medium_dvd_checksum_path: "/root/isos/bright{{ bcm_version }}-ubuntu1804.iso.md5"
The Bright ISO must be made available on the target machine. The above example assumes that the ISO and the MD5 sum file have been downloaded to /root/isos on the target machine.
TIP: The task of downloading the ISO from an external URL to the target machine itself can be automated, for example using the pre_tasks section in the playbook, or by other means depending on the requirements of the site.
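As the TIP suggests, the ISO download can also be handled in the pre_tasks section; a minimal sketch using ansible.builtin.get_url (the source URL is hypothetical, the destination matches the paths used in the playbook above):
  pre_tasks:
    # Make sure the destination directory for the ISO exists
    - name: create ISO directory
      ansible.builtin.file:
        path: /root/isos
        state: directory
        mode: "0755"
    # Fetch the ISO and its MD5 sum file onto the target machine
    - name: download Bright ISO and checksum
      ansible.builtin.get_url:
        url: "http://storage.internal/isos/{{ item }}"
        dest: "/root/isos/{{ item }}"
      loop:
        - "bright{{ bcm_version }}-ubuntu1804.iso"
        - "bright{{ bcm_version }}-ubuntu1804.iso.md5"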
Playbook using local repositories
$ cat playbooks/bright-localrepo-install.yml
---
- name: trigger bright addon installer
  hosts: bcm91-head-node
  gather_facts: true
  become: true
  vars_files:
    - vars/cluster-params.yml
  vars:
    bcm_version: 9.1
  pre_tasks: []
  collections:
    - brightcomputing.installer
  roles:
    - role: brightcomputing.installer.head_node
      vars:
        install_medium: local
        install_medium_local_path: /path/to/local/repofile
Cluster settings vars file
Below is an example of the cluster settings vars file used in the above example playbooks.
$ cat playbooks/vars/cluster-params.yml
---
product_key: xxxxx-xxxxxx-xxxxxx-xxxxxx-xxxxxx
license:
  country: NL
  state: Noord Holland
  locality: Amsterdam
  organization: Bright Computing
  organizational_unit: TripleI
  cluster_name: Ubuntu1804 cluster
  mac: "AA:BB:CC:DD:EE:FF"
db_cmd_password: !vault | <encrypted string>
slurm_user_pass: !vault | <encrypted string>
ldap_root_pass: !vault | <encrypted string>
ldap_readonly_pass: !vault | <encrypted string>
external_name_servers: [192.168.1.1]
mysql_login_user: root
mysql_login_password: !vault | <encrypted string>
management_network_baseaddress: 10.141.0.0
management_network_netmask: 255.255.0.0
management_ip_address: 10.141.255.254
management_interface: enp0s3
external_ip_address: DHCP
external_interface: enp0s8
cm_create_image_extra_args: "--resolvconf /etc/resolv.conf"
Head node installer stages
The head node installer collection is implemented as a set of stages that deploy various components required for a functional Bright head node. The following table gives an overview of the different stages:
Name | Tags | Description |
---|---|---|
System checks | always,assert_system | Checks the target machine for requirements |
Prepare | prepare | Prepare the target machine by setting up the required structure and installing the software packages needed to run the Ansible playbook |
License | license | Use provided license variables to request and save a Bright Cluster Manager license |
Install | install | Install Bright head node packages for the current Linux distribution |
Configure | configure | Configure the Bright head node services and initialize CMDaemon database |
Start | start | Start the head node services |
Post install | post_install | Create default software image and node-installer image |
Clean up | clean_up | Clean up any changes made to the system during the head_node run |
Running the installer
Full installation
An example run of the full head node installation is shown below:
$ ansible-playbook -i inventory/prod.ini playbooks/addon-iso-install.yml
PLAY [trigger bright addon installer] **************************************************************************
TASK [Gathering Facts] **************************************************************************
ok: [bcm91-addon-2]
TASK [brightcomputing.installer.head_node : Include distro specific var file] **************************************************************************
ok: [bcm91-addon-2]
...
...
TASK [brightcomputing.installer.head_node : Create certificate request] **************************************************************************
ok: [bcm91-addon-2]
TASK [brightcomputing.installer.head_node : Read current certificate info (if present)] **************************************************************************
ok: [bcm91-addon-2]
TASK [brightcomputing.installer.head_node : Request a new certificate] **************************************************************************
ok: [bcm91-addon-2]
TASK [brightcomputing.installer.head_node : Save new issued certificate] **************************************************************************
ok: [bcm91-addon-2]
TASK [brightcomputing.installer.head_node : Install certificate] **************************************************************************
ok: [bcm91-addon-2] => (item={'key': 'cluster.pem', 'value': 'cluster.pem.new'})
ok: [bcm91-addon-2] => (item={'key': 'cluster.key', 'value': 'cluster.key.new'})
...
...
TASK [brightcomputing.installer.head_node : Install distribution packages] **************************************************************************
ok: [bcm91-addon-2]
...
...
TASK [brightcomputing.installer.head_node : Install Head Node Bright packages] **************************************************************************
ok: [bcm91-addon-2]
...
TASK [brightcomputing.installer.head_node : Generate build-config] **************************************************************************
ok: [bcm91-addon-2]
TASK [brightcomputing.installer.head_node : Initialize cmdaemon] **************************************************************************
ok: [bcm91-addon-2]
...
...
TASK [brightcomputing.installer.head_node : Create default-image image] **************************************************************************
ok: [bcm91-addon-2]
TASK [brightcomputing.installer.head_node : Create node-installer image] **************************************************************************
ok: [bcm91-addon-2]
...
...
PLAY RECAP **************************************************************************
bcm91-addon-2 : ok=129 changed=10 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0
Running specific stages of the installer
Individual stages of the head node installer can be invoked using --tags. This is useful when the cluster settings need to be modified after the initial full run has completed.
Example usage:
- Re-initializing the CMDaemon database with modified cluster settings
  On the head node, remove or back up the file /root/cm/build-config.xml and run:
  ansible-playbook -i inventory/prod.ini playbooks/addon-iso-install.yml --tags configure,start
- Re-creating the default software image and node-installer image
  On the head node, remove or back up the directories /cm/images/default-image and /cm/node-installer and run:
  ansible-playbook -i inventory/prod.ini playbooks/addon-iso-install.yml --tags post_install,clean_up
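The stage names accepted by --tags can be checked by listing the tags defined in the playbook:
$ ansible-playbook -i inventory/prod.ini playbooks/addon-iso-install.yml --list-tags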
Known issues
At the time of this writing, the current release of Bright 9.1-2 has a defect in the image creation tool cm-create-image, which can cause the software image creation step to fail. This issue will be resolved as of 9.1-3.
Until then, two runs of the playbook are required, as shown below:
- Run the playbook, skipping the post_install stage
ansible-playbook -i inventory/prod.ini playbooks/addon-iso-install.yml --skip-tags post_install
- Install the pre-release cluster-tools package on the head node
- Run the post_install and clean_up stages
ansible-playbook -i inventory/prod.ini playbooks/addon-iso-install.yml --tags post_install,clean_up