This article is being updated. Be aware that its content, including but not limited to version numbers and slight syntax changes, may not match the output from the most recent versions of Bright. This notice will be removed when the content has been updated.
This KB describes how to install and configure the Container Infrastructure Management service, code-named Magnum, on the OpenStack controller.
The procedures were tested on a BCM 8.1 cluster configured on CentOS 7.5, and running with Bright OpenStack’s cluster extension installed. The cluster has a single head node and a single OpenStack controller node.
NOTE: These are guidelines only. Magnum configuration does not fall under support cover. The scope of support cover is described at:
https://www.brightcomputing.com/support/
Terminology:
Magnum
Magnum is an OpenStack project which offers container orchestration engines for deploying and managing containers as first class resources in OpenStack.
Cluster
A cluster is the construct in which Magnum launches container orchestration engines. After a cluster has been created, a user can add containers to it. Containers can be added either directly, or, in the case of the Kubernetes container orchestration engine, within pods – a logical construct specific to that implementation. A cluster is created based on a ClusterTemplate.
ClusterTemplate
A ClusterTemplate in Magnum is roughly equivalent to a flavor in Nova. A given template defines options such as the container orchestration engine, keypair, and image for use when Magnum is creating clusters.
Container Orchestration Engine (COE)
A container orchestration engine manages the lifecycle of one or more containers, logically represented in Magnum as a cluster. Magnum supports a number of container orchestration engines, each with their own pros and cons, including Docker Swarm, Kubernetes, and Mesos.
Magnum API service
This service accepts API requests from users. It authenticates, authorizes, and communicates with magnum-conductor.
Magnum conductor
This communicates with the COE (Container Orchestration Engine). It does the actual work of creating cluster templates, clusters, services, and containers.
Install and configure:
Note: Magnum creates clusters of compute instances on the Compute service (Nova). These instances must have basic Internet connectivity and must be able to reach Magnum’s API server. Make sure that the Compute and Network services are configured accordingly.
Prerequisites:
Before you install and configure the Container Infrastructure Management service, you must create a database, service credentials, and API endpoints.
To create the database, complete these steps:
Use the database access client to connect to the database server on the controller node as the root user:
[root@node006 ~]# mysql -u root -p
Create the magnum database:
CREATE DATABASE magnum;
Grant proper access to the magnum database:
GRANT ALL PRIVILEGES ON magnum.* TO 'magnum'@'master' \
IDENTIFIED BY 'MAGNUM_DBPASS';
GRANT ALL PRIVILEGES ON magnum.* TO 'magnum'@'%' \
IDENTIFIED BY 'MAGNUM_DBPASS';
GRANT ALL PRIVILEGES ON magnum.* TO 'magnum'@'localhost' \
IDENTIFIED BY 'MAGNUM_DBPASS';
Replace MAGNUM_DBPASS with a suitable password.
Exit the database access client.
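As a quick sanity check, you can confirm the grants work by connecting as the magnum user (substitute the password you chose for MAGNUM_DBPASS):
[root@node006 ~]# mysql -u magnum -pMAGNUM_DBPASS -e "SHOW DATABASES;"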
On the head node, source the admin credentials:
[root@maa-test4 ~]# . .openstackrc_password
To create the service credentials, complete these steps:
Create the magnum user:
# openstack user create --domain default \
--password PASSWORD magnum
Add the admin role to the magnum user:
# openstack role add --project service --user magnum admin
Create the magnum service entity:
# openstack service create --name magnum \
--description "OpenStack Container Infrastructure Management Service" \
container-infra
Create the Container Infrastructure Management service API endpoints:
[root@maa-test4 ~]# openstack endpoint create --region openstack container-infra admin \
http://oshaproxy:9511/v1
[root@maa-test4 ~]# openstack endpoint create --region openstack container-infra internal \
http://oshaproxy:9511/v1
[root@maa-test4 ~]# openstack endpoint create --region openstack container-infra public \
http://10.2.62.16:9511/v1
Magnum uses the AWS CloudFormation template format, so Heat's CloudFormation-compatible API must be installed and configured:
Install the package in the software image:
# yum install openstack-heat-api-cfn.noarch
On the head node, create the service:
[root@maa-test4 ~]# openstack service create --name heat-cfn --description "Orchestration" \
cloudformation
On the head node, create the endpoints:
[root@maa-test4 ~]# openstack endpoint create --region openstack cloudformation admin \
http://oshaproxy:8000/v1
[root@maa-test4 ~]# openstack endpoint create --region openstack cloudformation internal \
http://oshaproxy:8000/v1
[root@maa-test4 ~]# openstack endpoint create --region openstack cloudformation public \
http://10.2.62.16:8000/v1
Replace the preceding IP address with the IP address of the head node that HAproxy listens on.
Add the following line in "/etc/heat/heat.conf", under the "[heat_api_cfn]" section:
workers = 4
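If the crudini utility happens to be available in the image, the same change can be made non-interactively. A minimal sketch:
# crudini --set /etc/heat/heat.conf heat_api_cfn workers 4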
Append the following lines to "/etc/haproxy/haproxy.cfg" on the head node:
listen orchestrationAPI-heat-cfn
bind 0.0.0.0:8000
server auto-node006::10.141.0.6:8000 10.141.0.6:8000 check
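Once HAProxy has been restarted (done further on in this procedure), you can confirm that it is listening on the CloudFormation port with, for example:
[root@maa-test4 ~]# ss -tlnp | grep :8000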
Magnum requires additional information in the Identity service to manage clusters. To add this information, complete these steps:
- Create the magnum domain that contains projects and users:
# openstack domain create --description "Owns users and projects \
created by magnum" magnum
- Create the magnum_domain_admin user to manage projects and users in the magnum domain:
# openstack user create --domain magnum --password PASSWORD \
magnum_domain_admin
- Add the admin role to the magnum_domain_admin user in the magnum domain, which gives that user administrative management privileges there:
# openstack role add --domain magnum --user-domain magnum \
--user magnum_domain_admin admin
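The domain and role assignment can be checked afterwards, for example with:
# openstack domain show magnum
# openstack role assignment list --user magnum_domain_admin \
--user-domain magnum --domain magnum --names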
Install and configure components:
The following procedures will be done inside the software image used by OpenStack:
# chroot /cm/images/<image-name>
Install the OS-specific prerequisites:
# yum install python-devel openssl-devel mysql-devel \
libxml2-devel libxslt-devel postgresql-devel git \
libffi-devel gettext gcc
Create the magnum user and the necessary directories:
Create the user with a corresponding group:
# groupadd --system magnum
# useradd --home-dir "/var/lib/magnum" \
--create-home \
--system \
--shell /bin/false \
-g magnum \
magnum
Create the other Magnum directories:
# mkdir -p /var/log/magnum
# mkdir -p /etc/magnum
Set the ownership of the directories:
# chown magnum:magnum /var/log/magnum
# chown magnum:magnum /var/lib/magnum
# chown magnum:magnum /etc/magnum
Install virtualenv and python prerequisites:
Install virtualenv and create one for Magnum's installation:
# easy_install -U virtualenv
# su -s /bin/sh -c "virtualenv /var/lib/magnum/env" magnum
Install the Python prerequisites:
# su -s /bin/sh -c "/var/lib/magnum/env/bin/pip install tox pymysql \
python-memcached" magnum
Clone the stable/pike branch and install Magnum:
# cd /var/lib/magnum
# git clone --single-branch -b stable/pike https://git.openstack.org/openstack/magnum.git
# chown -R magnum:magnum magnum
# cd magnum
# su -s /bin/sh -c "/var/lib/magnum/env/bin/pip install -r requirements.txt" magnum
# su -s /bin/sh -c "/var/lib/magnum/env/bin/python setup.py install" magnum
Copy policy.json and api-paste.ini:
# su -s /bin/sh -c "cp etc/magnum/policy.json /etc/magnum" magnum
# su -s /bin/sh -c "cp etc/magnum/api-paste.ini /etc/magnum" magnum
Generate a sample configuration file:
# su -s /bin/sh -c "/var/lib/magnum/env/bin/tox -e genconfig" magnum
# su -s /bin/sh -c "cp etc/magnum/magnum.conf.sample \
/etc/magnum/magnum.conf" magnum
Edit the "/etc/magnum/magnum.conf" file so that it looks like the following, adjusting the values to match your own environment:
If you can't remember the RabbitMQ password that you set for the openstack account, then you can grep for it in /etc/nova/nova.conf on the controller node. For example:
[root@node006 ~]# grep ^transport_url /etc/nova/nova.conf
transport_url = rabbit://openstack:8y4LCe4sBL1y2ipiGpbtRsjGgmeq7i@node006:5672/
The text RABBITMQPASSWORD in the following magnum.conf file is thus replaced by 8y4LCe4sBL1y2ipiGpbtRsjGgmeq7i:
[DEFAULT]
host = node006
log_file = magnum.log
log_dir = /var/log/magnum
transport_url = rabbit://openstack:RABBITMQPASSWORD@node006:5672/
[api]
host = 10.141.0.6
[barbican_client]
[certificates]
cert_manager_type = local
storage_path = /var/lib/magnum/certificates/
[cinder]
[cinder_client]
region_name = openstack
[cluster]
[cluster_heat]
[cluster_template]
[conductor]
[cors]
[database]
connection = mysql+pymysql://magnum:MAGNUM_DBPASS@node006:3307/magnum
[docker]
[docker_registry]
[drivers]
#disable certificate authority validation
verify_ca = false
[glance_client]
[heat_client]
[keystone_auth]
[keystone_authtoken]
memcached_servers = node006:11211
auth_version = v3
auth_uri = http://oshaproxy:5000/v3
project_domain_name = default
project_name = service
user_domain_name = default
password = PASSWORD
username = magnum
auth_url = http://oshaproxy:35357
#do not change auth_type = password
auth_type = password
admin_user = magnum
admin_password = PASSWORD
admin_tenant_name = service
[magnum_client]
[matchmaker_redis]
[neutron_client]
[nova_client]
[oslo_concurrency]
lock_path = /var/lib/magnum/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
driver = messaging
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[profiler]
[quotas]
[trust]
trustee_domain_name = magnum
trustee_domain_admin_name = magnum_domain_admin
trustee_domain_admin_password = PASSWORD
[x509]
Populate the Magnum database:
# su -s /bin/sh -c "/var/lib/magnum/env/bin/magnum-db-manage upgrade" magnum
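To verify that the schema was created, the tables can be listed from the controller node (the port 3307 matches the connection string in magnum.conf above):
[root@node006 ~]# mysql -u magnum -p -h node006 -P 3307 magnum -e "SHOW TABLES;"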
Set up Magnum log rotation:
# cd /var/lib/magnum/magnum
# cp doc/examples/etc/logrotate.d/magnum.logrotate /etc/logrotate.d/magnum
Finalize installation
Create init scripts and services:
# cd /var/lib/magnum/magnum
# cp doc/examples/etc/systemd/system/magnum-api.service \
/etc/systemd/system/magnum-api.service
# cp doc/examples/etc/systemd/system/magnum-conductor.service \
/etc/systemd/system/magnum-conductor.service
Using cmsh, three new services can be added to the controller node: openstack-heat-api-cfn, magnum-api, and magnum-conductor. They can be set to autostart and to be monitored. For example, for openstack-heat-api-cfn, with node006 as the controller node:
[root@maa-test4 ~]# cmsh
[maa-test4]% device use node006
[maa-test4->device[node006]]% services
[maa-test4->device[node006]->services]% add openstack-heat-api-cfn
[maa-test4->device[node006]->services]% set autostart yes
[maa-test4->device[node006]->services]% set monitored yes
[maa-test4->device[node006]->services]% commit
Repeat the same steps to add magnum-api and magnum-conductor.
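The remaining two services can also be added by running cmsh non-interactively from the head node. A minimal sketch, assuming node006 is the controller:
[root@maa-test4 ~]# for svc in magnum-api magnum-conductor; do
  cmsh -c "device use node006; services; add $svc; set autostart yes; set monitored yes; commit"
done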
Exit the chroot environment and reboot the OpenStack nodes.
Append the following to /etc/haproxy/haproxy.cfg on the head node:
listen magnum
bind 0.0.0.0:9511
server auto-node006::10.141.0.6:9511 10.141.0.6:9511 check
Restart HAproxy on the head node:
# systemctl restart haproxy.service
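Once the nodes are back up and magnum-api is running, a quick check that the API responds through HAProxy can be made with curl; the API root should return a JSON document listing the available versions:
[root@maa-test4 ~]# curl http://oshaproxy:9511/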
If named on the head node is to be used for recursive queries from the clusters that will be built via Magnum, then named can be told to accept recursive queries from any source IP:
Modify "/cm/local/apps/cmd/etc/cmd.conf", setting PublicDNS from false to true:
PublicDNS = true
Restart cmd on the head node:
[root@maa-test4 ~]# systemctl restart cmd
Verify that openstack-heat-api-cfn, magnum-api and magnum-conductor services are running on the controller node:
# systemctl status magnum-api
# systemctl status magnum-conductor
# systemctl status openstack-heat-api-cfn
Install the command-line client
The package "python2-magnumclient.noarch" provides the magnum command-line client, which can be used to interact with Magnum. It is installed on the head node and in the OpenStack software image as part of Bright's OpenStack deployment. If for some reason the package is not available, then it can be installed on the head node and in the software image, for example as follows:
[root@maa-test4 ~]# yum install python2-magnumclient.noarch
[root@maa-test4 ~]# yum install python2-magnumclient.noarch --installroot=/cm/images/<SOFTWARE_IMAGE>
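A quick way to confirm the client works:
[root@maa-test4 ~]# magnum --version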
Verify operation
Perform these operations on the head node to verify that magnum-conductor is up:
[root@maa-test4 ~]# . .openstackrc_password
[root@maa-test4 ~]# magnum service-list
+----+---------+------------------+-------+---------------------------+---------------------------+
| id | host    | binary           | state | created_at                | updated_at                |
+----+---------+------------------+-------+---------------------------+---------------------------+
| 1  | node006 | magnum-conductor | up    | 2018-09-28T10:19:00+00:00 | 2018-09-28T15:55:46+00:00 |
+----+---------+------------------+-------+---------------------------+---------------------------+
Launch a test cluster
We will create a test Docker Swarm cluster using a Fedora Atomic image.
[root@maa-test4 ~]# wget https://fedorapeople.org/groups/magnum/fedora-atomic-ocata.qcow2
[root@maa-test4 ~]# openstack image create --disk-format=qcow2 --container-format=bare --file=fedora-atomic-ocata.qcow2 --property os_distro='fedora-atomic' fedora-atomic-ocata
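Magnum uses the image's os_distro property to select the appropriate driver, so it is worth confirming that the property was set, for example:
[root@maa-test4 ~]# openstack image show fedora-atomic-ocata -c properties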
Before creating the cluster, a template must be created; the cluster will then be based on this template.
[root@maa-test4 ~]# openstack coe cluster template create --name dockertemp2 --image 604f9bfc-2cd7-4895-9617-a68d98bfa77c --docker-volume-size 5 --external-network c541029e-eb65-4771-a747-a76088162cec --dns-nameserver 10.141.255.25 --master-flavor 9fcdfc68-9a85-4b90-bab7-3fdf285c1d19 --flavor 9fcdfc68-9a85-4b90-bab7-3fdf285c1d19 --coe swarm --tls-disabled
Note: we are using IDs instead of component names above, as there is an issue with Magnum resolving the names to IDs while building the cluster. You should replace the above IDs with the ones corresponding to your environment.
Create the cluster:
[root@maa-test4 ~]# openstack coe cluster create --name dockercluster --cluster-template dockertemp2 --node-count 1 --keypair keypair
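Cluster creation typically takes several minutes. Progress can be followed by polling the cluster list; the status column moves from CREATE_IN_PROGRESS to CREATE_COMPLETE:
[root@maa-test4 ~]# watch openstack coe cluster list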
[root@maa-test4 ~]# openstack coe cluster show dockercluster
+---------------------+------------------------------------------------------------+
| Field               | Value                                                      |
+---------------------+------------------------------------------------------------+
| status | CREATE_COMPLETE |
| cluster_template_id | b143ea59-bee9-4701-b9f1-95a986e0e7af |
| node_addresses | [u'10.141.152.12'] |
| uuid | 9440f792-8894-4749-a176-eba10203aea8 |
| stack_id | 0e93f875-a0cd-4b61-9a06-dd9024a35e80 |
| status_reason | Stack CREATE completed successfully |
| created_at | 2018-10-02T13:29:16+00:00 |
| updated_at | 2018-10-02T13:36:28+00:00 |
| coe_version | 1.2.5 |
| faults | |
| keypair | keypair |
| api_address | tcp://10.141.152.3:2376 |
| master_addresses | [u'10.141.152.3'] |
| create_timeout | 60 |
| node_count | 1 |
| discovery_url | https://discovery.etcd.io/638e7384c59885070a35675fcdba6cc3 |
| master_count | 1 |
| container_version | 1.12.6 |
| name | dockercluster |
+---------------------+------------------------------------------------------------+
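The Docker command in the next step is run on the Swarm master node. Assuming the default login user for Fedora Atomic images (typically fedora), you can log in to the master address shown above with the keypair given at cluster creation, for example:
[root@maa-test4 ~]# ssh fedora@10.141.152.3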
Create a container in the Swarm cluster. This container will ping the address 8.8.8.8 four times:
[root@dockercluster-ravfonb644f7-master-0 ~]# docker run --rm -it cirros:latest ping -c 4 8.8.8.8
Unable to find image 'cirros:latest' locally
Trying to pull repository docker.io/library/cirros ...
sha256:38e8f9e7bc8a340c54a5139823dc726d67dd7408ed7db9e3be41cb1517847f56: Pulling from docker.io/library/cirros
3d6427f49fe3: Pull complete
7f41e3d981b9: Pull complete
56f8ef4ed3d7: Pull complete
Digest: sha256:38e8f9e7bc8a340c54a5139823dc726d67dd7408ed7db9e3be41cb1517847f56
Status: Downloaded newer image for docker.io/cirros:latest
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=116 time=10.667 ms
64 bytes from 8.8.8.8: seq=1 ttl=116 time=10.805 ms
64 bytes from 8.8.8.8: seq=2 ttl=116 time=9.778 ms
64 bytes from 8.8.8.8: seq=3 ttl=116 time=11.061 ms
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 9.778/10.577/11.061 ms
Troubleshooting
Refer to the upstream troubleshooting guide:
https://docs.openstack.org/magnum/latest/admin/troubleshooting-guide.html