
Managing software images with Ansible

Managing software images with Ansible provides a number of advantages when it comes to automation and revision management. Coupling Ansible with a version control system (git, for example) may improve the software image workflow and lifecycle.
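For instance, once the playbooks exist in the image (see the steps below), the /etc/ansible directory inside the image could be tracked with git. This is only a sketch and assumes git is installed in the image:

# chroot /cm/images/ansible-image
# cd /etc/ansible
# git init
# git add .
# git commit -m "Initial Ansible configuration for ansible-image"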

 

Best Practice

 

Best practice for deploying an Ansible-managed software image is to clone the default-image generated by the Bright installation and apply your playbooks, tasks, and handlers to the clone based on role and module.

 

For example, each image may correlate to a type of compute node and be broken down by function: a CPU-only or a GPU compute node. In Ansible, we would consider the role of such a system to be a compute node, with a module that configures it as CPU-only or with a GPU. It may also have a module that installs the Slurm workload manager.

 

Another role in Ansible might be a storage node, with a module that configures Ceph or ZFS.

 

Step 1: Create a base software image

We will need to build a software image for Ansible to manage. This is done using the standard tools in Bright.

# cmsh -c "softwareimage; clone default-image ansible-image; commit"

This will give us an image called ansible-image. We will use this image to install and manage ntp as an example.
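The new image can be checked from cmsh, for example:

# cmsh -c "softwareimage; list"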

 

Step 2: Install Ansible

Install Ansible inside your image. In this example, we will use the ansible-image we cloned in the previous step.

Please refer to the Ansible documentation for installation methods for your system: http://docs.ansible.com/ansible/latest/intro_installation.html
Under RHEL 7 or CentOS 7, Ansible is available in the extras repository.

# yum --installroot=/cm/images/ansible-image install ansible
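As a quick sanity check that the package was installed into the image, the Ansible version can be queried from outside the chroot, for example:

# chroot /cm/images/ansible-image ansible --version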

 

Step 3: Set up the playbooks

First, you will need to enter the software image using the chroot command. Some bind mounts are required for Ansible to function.

# mount -o bind /dev /cm/images/ansible-image/dev

# mount -o bind /proc /cm/images/ansible-image/proc

# mount -o bind /sys /cm/images/ansible-image/sys

# chroot /cm/images/ansible-image

 

Now, once inside the chroot, move to the /etc/ansible directory.

Create the site.yml file (/etc/ansible/site.yml)


---
# This playbook deploys the whole application stack in this site.
- name: apply common configuration to all nodes
  hosts: all

  roles:
    - common

 

Create a roles directory, with a common directory underneath. In the common directory, create a tasks and a templates directory. You will also need a group_vars directory under /etc/ansible.

 

# mkdir -p /etc/ansible/roles/common/tasks

# mkdir -p /etc/ansible/roles/common/templates
# mkdir -p /etc/ansible/group_vars
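Once the files described below are in place, the layout under /etc/ansible should look roughly like this (ansible.cfg and other defaults shipped with the ansible package are omitted):

/etc/ansible/
├── hosts
├── site.yml
├── group_vars/
│   └── all
└── roles/
    └── common/
        ├── tasks/
        │   └── main.yml
        └── templates/
            └── ntp.conf.j2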

 

Under /etc/ansible/roles/common/tasks create main.yml

 

---
# This playbook contains common plays that will be run on all nodes.

- name: Install ntp
  yum: name=ntp state=present
  tags: ntp

- name: Configure ntp file
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  tags: ntp

- name: Enable the ntp service
  service: name=ntpd enabled=yes
  tags: ntp

- name: test to see if selinux is running
  command: getenforce
  register: sestatus
  changed_when: false

 

Under /etc/ansible/roles/common/templates create ntp.conf.j2


driftfile /var/lib/ntp/drift
restrict 127.0.0.1
restrict -6 ::1
server {{ ntpserver }}
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys

 

Under /etc/ansible/group_vars create all


---
# Variables listed here are applicable to all host groups
ntpserver: 192.168.1.1

 

Finally, in /etc/ansible/hosts add an entry for localhost.
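For example, a single line is enough, because the playbook will be run with a local connection in the next step:

localhost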

 

Step 4: Running the Ansible Playbook

Navigate to /etc/ansible and run:

# ansible-playbook -vvv -c local site.yml

 

Step 5: Applying roles to images based on hostname

As Ansible generally determines the tasks to execute based on the system hostname, there is a utility called chname which allows a chroot environment to be created with a different hostname. See https://github.com/marineam/chname.

For example, if we wish to have an image for GPU compute nodes, we would create a group in Ansible and apply the role. Update /etc/ansible/hosts with:

[gpu-nodes]
ansible-gpu
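The corresponding play in site.yml could then target that group. This is only a sketch; the gpu role and its tasks are assumed to exist under /etc/ansible/roles/gpu:

- name: apply GPU configuration to GPU compute nodes
  hosts: gpu-nodes

  roles:
    - gpu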

 

Next, start the chroot and set the hostname to ansible-gpu.

# chname ansible-gpu chroot /cm/images/ansible-image /bin/bash

 

Execute Ansible.

# cd /etc/ansible

# ansible-playbook -c local site.yml

 

Adding a cpu-nodes group with a node called ansible-cpu in /etc/ansible/hosts would allow us to apply different roles to this image.

This also allows a single Ansible code base to be reused across all the images.
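For example, assuming a matching cpu role is defined in site.yml for the cpu-nodes group, the hosts entry and chroot invocation for a CPU-only image (the image name here is only an example) might look like:

[cpu-nodes]
ansible-cpu

# chname ansible-cpu chroot /cm/images/ansible-cpu-image /bin/bash
# cd /etc/ansible
# ansible-playbook -c local site.yml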

Caveats

Where possible, avoid using Ansible service management tasks to start and stop services. Enabling and disabling services is not an issue. In the chroot environment, Ansible will actually start a service on the real head node if instructed to do so.
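When the image work is complete, it is also good practice to exit the chroot and remove the bind mounts that were created earlier, for example:

# exit
# umount /cm/images/ansible-image/sys
# umount /cm/images/ansible-image/proc
# umount /cm/images/ansible-image/dev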

 

Tags: Ansible, Software Images
