
ID #1288

How do I set up Cluster as a Service (CaaS) on Bright OpenStack 7.1?

Cluster as a Service on Bright OpenStack 7.1



[The following notes explain how to set up clusters as a service in Bright Cluster Manager. This means that clusters can be provided on demand, on hardware controlled by Bright Cluster Manager. For environments with trusted users, the document provided here should be sufficient. However, because of the rapid pace of development of this service and the constant upgrades and enhancements, it is strongly advised that Bright Computing is contacted before attempting to deploy this for environments that require user isolation.]

 

The aim of the Cluster as a Service (CaaS) deployment is to create multiple, fully isolated clusters that are installed on top of Bright OpenStack.

 

The users of Bright OpenStack can then spin up virtual clusters completely independently, using the command line or the Horizon dashboard.

 

CaaS and Bright-managed nodes/instances differ in concept, and are mutually exclusive:

 

  • Bright-managed nodes/instances are simply virtual nodes created on top of Bright OpenStack, in order to extend the computational power of the physical cluster.

 

  • CaaS, in contrast, provides entire clusters that can be created and removed at will, within a Bright cluster that runs OpenStack as the integrated layer in between.

 

The starting point for installing a CaaS system can be a standard Bright 7.1 cluster. For convenience it will be called the physical host cluster, although it does not need to be a physical cluster. The cluster is installed according to the instructions in the Installation Manual. [1]

 

OpenStack, and optionally Ceph, are deployed by following the instructions in the OpenStack Deployment Manual. [2]

 

If the cluster is to be an HA setup, then the CaaS tools, pxehelper and buildmatic, must be set up and installed after the OpenStack deployment, and before the secondary head node is cloned.

 

As a prerequisite, jumbo frames must be enabled on the switch to which internalnet and vxlanhostnet are attached. An MTU value of 9000 can be set per VLAN or per port. The corresponding MTU setting for the networks within Bright is carried out with cmsh towards the end of this procedure.

 

A network and component flowchart follows:

 

caas-flow.png

 

The buildmatic and pxehelper software is installed on the physical head node:

  • buildmatic comes in the packages: buildmatic, buildmatic-common

  • pxehelper comes in the package: cm-openstack-caas

 

The buildmatic service is a framework that takes an XML configuration file as input and uses it to generate a PXE-bootable Bright Cluster Manager installer image, or a Bright OpenStack installer image. The generated images cover the different Bright versions and the variety of Linux distributions that Bright supports. A particular buildmatic image is what is used to install the head node of a Bright cluster.

 

The pxehelper service is used to dynamically redirect a head node instance that is PXE booting to the correct entries in the buildmatic service.

 

The cm-openstack-caas and cm-ipxe-caas packages are installed with yum:

 

# yum install -y cm-openstack-caas cm-ipxe-caas

# service httpd restart

 

Next, in the file /cm/shared/apps/cm-openstack-caas/bin/Settings.py, the values of  “external_dns_server” and “buildmatic_host_ip” should be edited appropriately.

Some special cases:

  • If buildmatic will be installed on the head node of the cluster, then the value of “buildmatic_host_ip” is set to the external IP address of the cluster.

  • If the host cluster is an HA setup, then the value of “buildmatic_host_ip” is set to the shared IP address.

An example of the text structure that may need to be modified in “Settings.py” is:

 

 'external_dns_server': '<INSERT NAME SERVER IP HERE>',

 'buildmatic_host_ip': '<INSERT BUILDMATIC SERVER IP HERE>',

 'pxe_helper_url': 'http://localhost:8082/chain'
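
As an illustration only, a filled-in version of these entries might look like the following. The name server and buildmatic IP addresses below are hypothetical placeholder values, and the pxe_helper_url entry is normally left at its default:

 'external_dns_server': '203.0.113.53',

 'buildmatic_host_ip': '203.0.113.10',

 'pxe_helper_url': 'http://localhost:8082/chain'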

 

After the modifications are in place, the pxehelper service is started and enabled:

 

# systemctl start pxehelper

# systemctl enable pxehelper

 

The pxehelper service uses port 8082. The Shorewall firewall on the head node needs to unblock this port. This can be done by adding the following rule to “/etc/shorewall/rules” and then restarting shorewall:

 

# -- Allow pxehelper service for automatic head node installation
ACCEPT   net            fw              tcp     8082

 

# systemctl restart shorewall
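
After the firewall restart, a quick sanity check can be carried out to confirm that the pxehelper service is running and listening on port 8082 (the ss utility is part of the standard iproute tools on the head node):

# systemctl status pxehelper

# ss -tlnp | grep 8082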

 

The OpenStack images can now be created:

 

# openstack image create --file /cm/local/apps/ipxe/ipxe-plain-net0.img --disk-format=raw --container-format=bare --public iPXE-plain-eth0
# openstack image create --file /cm/local/apps/ipxe/ipxe-plain-net1.img --disk-format=raw --container-format=bare --public iPXE-plain-eth1
# openstack image create --file /cm/local/apps/ipxe/ipxe-caas.img --disk-format=raw --container-format=bare --public ipxe-caas

 

The dnsmasq utility must now be configured. Its configuration file, /cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev.conf, has a string

 

<INSERT EXTERNAL IP OF THE MACHINE RUNNING PXE HELPER HERE>

 

The string is replaced with the external IP address of the head node(s).

 

The configuration file also has a string

 

<INSERT EXTERNAL FQDN OF BUILDMATIC SERVER HERE>,<INSERT EXTERNAL IP OF BUILDMATIC SERVER HERE>

 

This is replaced with the FQDN of the head node (in the case of an HA setup, the FQDN assigned to the VIP) and with its IP address.
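
The substitutions can be carried out with a text editor or, as a sketch, with sed. The IP address and FQDN below are hypothetical example values and must be replaced with the actual values for the site:

# cd /cm/shared/apps/cm-openstack-caas/etc
# sed -i 's/<INSERT EXTERNAL IP OF THE MACHINE RUNNING PXE HELPER HERE>/203.0.113.10/' dnsmasq.dev.conf
# sed -i 's/<INSERT EXTERNAL FQDN OF BUILDMATIC SERVER HERE>,<INSERT EXTERNAL IP OF BUILDMATIC SERVER HERE>/head.example.com,203.0.113.10/' dnsmasq.dev.conf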

 

After editing,

 

  • If the network node is not used as a compute node, then the following commands are run:

 

# cmsh -c 'category use openstack-network-nodes; roles; use openstack::node; customizations; add pxe; set filepaths /etc/neutron/dhcp_agent.ini; entries; add dnsmasq_config_file=/etc/neutron/dnsmasq.dev.conf; commit'

 

  • If the network node is also to be used as a compute node, then the following cmsh command is run. In this command, the network node is put in the “openstack-compute-hosts” category, is assigned the “openstack::node” role, and the needed customizations are added:

 

# cmsh -c 'device use <NETWORK_NODE>; roles; assign openstack::node; customizations; add pxe; set filepaths /etc/neutron/dhcp_agent.ini; entries; add dnsmasq_config_file=/etc/neutron/dnsmasq.dev.conf; commit'

 

The following customizations are then added. The (key, value) pairs below are added in the securitygroup section of the linuxbridge configuration file:

 

# cmsh -c 'category use openstack-compute-hosts; roles; use openstack::node; customizations; add "no sec groups"; set filepaths /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini; entries; add securitygroup firewall_driver=neutron.agent.firewall.NoopFirewallDriver; add securitygroup enable_security_group=False; commit'

 

If Ceph is installed, then it is usually a good idea to customize it by setting its cache mode to writeback:

 

# cmsh -c 'category use openstack-compute-hosts; roles; use openstack::node; customizations; add "rbd cache"; set filepaths /etc/nova/nova.conf; entries; add libvirt disk_cachemodes=network=writeback,block=writeback; commit'

 

Buildmatic is now installed and configured. There is a KB article [3] that gives more background on Buildmatic installation, and it can be followed up to and including step C. But, for convenience, the same instructions are listed here:

 

# yum -y install buildmatic-common buildmatic-7.1-stable createrepo

 

The config file is generated and installed:

 

# /cm/local/apps/buildmatic/common/bin/setupbmatic --createconfig

# cp /cm/local/apps/buildmatic/common/settings.xml /cm/local/apps/buildmatic/7.1-stable/bin

# cp /cm/local/apps/buildmatic/common/nfsparams.xml /cm/local/apps/buildmatic/7.1-stable/bin

# cp /cm/local/apps/buildmatic/common/nfsparams.xml /cm/local/apps/buildmatic/7.1-stable/files

 

The rpm-store is now populated using a Bright ISO (DVD image). In the following example the rpm-store is populated with Bright version 7.1, and with CentOS 7.1 as the operating system. To add more supported Linux distributions, this step can be repeated with additional Bright ISOs.

 

# /cm/local/apps/buildmatic/common/bin/setupbmatic --createrpmdir bright7.1-centos7u1.iso
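
For example, if a CentOS 7.2-based build is also to be made available, then the step could be repeated with the corresponding ISO. The ISO file name below is a hypothetical example:

# /cm/local/apps/buildmatic/common/bin/setupbmatic --createrpmdir bright7.1-centos7u2.iso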

 

The XML buildconfig file must now be generated. This is the XML file used by the Bright head node installer to set up the system.

The index used, “000001” here, must be six digits long and unique. Different XML files (e.g. 000001, 000002, and so on) can be used for different OS versions.

If the XML file already exists, it is not overwritten; instead a new file is created. For instance, if 000001.xml already exists, then 000001-1.xml is generated, and so on.

In the following example a configuration for the 7.1-stable version of Bright, with a CentOS 7.1 distribution, is created:

 

# /cm/local/apps/buildmatic/7.1-stable/bin/genbuildconfig -v 7.1-stable -d CENTOS7u1 -i 000001
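
If a second distribution was added to the rpm-store earlier, then a further configuration can be generated under its own index. As a sketch, assuming a CentOS 7.2 distribution (CENTOS7u2) has been added:

# /cm/local/apps/buildmatic/7.1-stable/bin/genbuildconfig -v 7.1-stable -d CENTOS7u2 -i 000002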

 

A PXE image is now generated:

 

# /cm/local/apps/buildmatic/7.1-stable/bin/buildmaster /cm/local/apps/buildmatic/7.1-stable/config/000001.xml

 

The following exports are now added with cmsh (which updates “/etc/exports”), so that the directories can be NFS-mounted from the installer. The hosts value, set to externalnet here, can instead be set to the public network address in CIDR notation:

 

# cmsh -c 'device fsexports master; add /home/bright/base-distributions@externalnet; set hosts externalnet; set path /home/bright/base-distributions; commit'

# cmsh -c 'device fsexports master; add /home/bright/rpm-store@externalnet; set hosts externalnet; set path /home/bright/rpm-store; commit'

# cmsh -c 'device fsexports master; add /home/bright/cert-store-pc/7.1@externalnet; set hosts externalnet; set path /home/bright/cert-store-pc/7.1; commit'
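
Assuming the standard NFS client utilities are installed on the head node, the resulting export list can be checked with showmount:

# showmount -e localhost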

 

A symbolic link to the directory containing the license file is created:

 

# cd /home/bright

# ln -s cert-store cert-store-pc

 

The shorewall rules for NFS are now uncommented in the file /etc/shorewall/rules:

 

# -- Allow NFS traffic from outside to the master
ACCEPT   net            fw              tcp     111   # portmapper
ACCEPT   net            fw              udp     111
ACCEPT   net            fw              tcp     2049  # nfsd
ACCEPT   net            fw              udp     2049
ACCEPT   net            fw              tcp     4000  # statd
ACCEPT   net            fw              udp     4000
ACCEPT   net            fw              tcp     4001  # lockd
ACCEPT   net            fw              udp     4001
ACCEPT   net            fw              udp     4005
ACCEPT   net            fw              tcp     4002  # mountd
ACCEPT   net            fw              udp     4002
ACCEPT   net            fw              tcp     4003  # rquotad
ACCEPT   net            fw              udp     4003

 

Shorewall is now restarted.

 

# systemctl restart shorewall

 

The new dnsmasq configuration is now copied into the openstack-image software image and also onto the network node.

 

# cp /cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev.conf /cm/images/openstack-image/etc/neutron
# scp /cm/shared/apps/cm-openstack-caas/etc/dnsmasq.dev.conf <NETWORK NODE IP>:/etc/neutron

 

A symlink for the images is created:

 

# ln -s /tftpboot/buildmatic /var/www/html/buildmatic/images

 

To use cm-openstack-caas, an OpenStack cluster and a cluster user must be added next.

Before adding a user, the synchronization of the LDAP users to OpenStack must be enabled.

 

# cmsh -c 'openstack; use default; settingsusers; set automaticallysyncldapuserstokeystone yes; set writeopenstackrcfilesforusers yes; commit'

 

A user <username> is then added and made a member of the openstackusers group. This means that the user will be synced to OpenStack and becomes a member of the project <username>-project. The project <username>-project is created automatically by cmdaemon when a member is defined. The <username> and <password> placeholders in the following commands should be replaced as appropriate:

 

# cmsh -c 'user; add <username>; set password <password>; commit'

# cmsh -c 'group; add openstackusers; commit'

# cmsh -c 'group; append openstackusers groupmembers <username>; commit'

 

The password for logging in to the OpenStack Horizon portal can be retrieved from the “.openstackrc_password” file in the home directory of the user.

 

$ cat ~/.openstackrc_password
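
Because writing of the OpenStack RC files for users was enabled earlier, an RC file is also placed in the home directory of the user. As a sketch, assuming the file is named ".openstackrc", the credentials can be verified from the command line with:

$ source ~/.openstackrc

$ openstack token issue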

 

For further details on adding a user, the instructions in chapter 6 of the administration manual can be followed. [4]

 

The flavor that is used as the default for node creation, when no flavor is specified, is now created:

 

# openstack flavor create --vcpus 1 --ram 1024 --disk 10 --ephemeral 10 --public m1.xsmall

 

Jumbo frames are now enabled by setting the MTU to a value of 9000 for the internalnet network and for the vxlanhostnet network:

 

# cmsh -c "network use internalnet; set mtu 9000; commit"

# cmsh -c "network use vxlanhostnet; set mtu 9000; commit"

 

The configuration is now done and it is possible to start using CaaS.

 

To add a cluster, it must be created and its components launched. This can be done in text mode as follows:

 

A cluster can be created using the “os_cluster” command. In the following example:

  • the values “CENTOS7u1” and “7.1-stable” are used, because the “7.1-stable” version of Bright and the “CENTOS7u1” distribution were uploaded to buildmatic,

  • <CLUSTER_NAME> should be changed to a useful name, and

  • the number of compute nodes to be deployed should be set as <NUMBER_OF_NODE>.

 

# os_cluster create <CLUSTER_NAME> CENTOS7u1 7.1-stable -n <NUMBER_OF_NODE>
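
For example, a hypothetical cluster named testcluster, with two compute nodes, would be created with:

# os_cluster create testcluster CENTOS7u1 7.1-stable -n 2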

 

Once the virtual cluster is installed, the “os_node” command can be used to add more nodes to it.

In order to add a node, its object must first be created in cmdaemon. Replace <NODE TO CLONE> with the name of the existing node used as the base for cloning, and <NEW NODE NAME> with the name of the node being created.

 

# cmsh -c 'device; clone <NODE TO CLONE> <NEW NODE NAME>; commit'

 

For example:

 

# cmsh -c 'device; clone node001 node003; commit'

 

Install the new node:

 

# os_node create <CLUSTER_NAME> -n <NUMBER_OF_NODE>

 

It is also possible to create nodes using a “range” syntax:

 

# os_node create <CLUSTER_NAME> -r node001..node003

# os_node create <CLUSTER_NAME> -r node0[01-10]

 

To list all the virtual head nodes, the command “os_cluster list” can be used. Optionally, a list can be filtered with the -e flag, using a regex.

 

# os_cluster list

# os_cluster list -e test*

 

A cluster can be deleted with:

 

# os_cluster delete <CLUSTER_NAME>

 

It is also possible to create a head node so that the graphical installer for the Bright Cluster Manager can be run through step by step:

 

# os_cluster create <CLUSTER_NAME> none none

 

The graphical installer can then be launched by going to the dashboard of the physical host, which is at <physical hostname> or <physical IP address>

 

http://<physical hostname>/dashboard

or

http://<physical IP address>/dashboard

 

Then, from the dashboard, the user can go to <PROJECT> --> Instances --> Console --> select the Bright version from the menu options --> set the distribution (e.g. CENTOS7u2)

and then proceed following the installation instructions [1].

 

14-03-2016_14-32-46_641x477_scrot_selection.png

 

The available versions of Bright, and the operating system versions available for them, can be seen by pointing a browser at:

 

http://<HEADNODE_IP>/buildmatic/images

 

It is also possible to use the OpenStack dashboard, Horizon, to spin up a virtual cluster.

A login to the dashboard can be done using the URL:


http://<CaaS Head Node>/dashboard

and then going to the Bright dashboard. There, every cluster that has been installed can be seen, along with some useful information such as the number of nodes (Compute and Head Node), the floating IP addresses, and so on:

 

image02.png

 

A warning about the “Add cluster” button in the right corner, which can be used to add a cluster: for the current (December 2015) proof-of-concept, the button works properly only if the user has carried out a login using ssh at least once. This is because the ssh CaaS key is generated by the login. If this login has not been carried out beforehand, then cluster creation fails due to the missing CaaS key, even though the GUI reports that a cluster has been added, without showing an error.
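
Such an ssh login to the physical head node can be carried out as follows, with the placeholders replaced by the actual user and host names:

$ ssh <username>@<physical hostname>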

 

01-09-2015_1913x1095_scrot.png




[1] http://support.brightcomputing.com/manuals/7.1/installation-manual.pdf

[2] http://support.brightcomputing.com/manuals/7.1/openstack-deployment-manual.pdf

[3] https://kb.brightcomputing.com/faq/index.php?lang=en&action=artikel&cat=7&id=54

[4] http://support.brightcomputing.com/manuals/7.1/admin-manual.pdf

Tags: buildmatic, CaaS, Cluster as a Service, OpenStack, pxehelper
