The type of storage that a virtual machine needs can be configured either through flavors, or by manually creating a volume and attaching it to the machine.
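For example, a data volume can be created and attached to a running instance manually with the OpenStack clients. A minimal sketch, assuming an existing instance named vm01 and a 10GB volume named data_vol (both names are hypothetical):
#openstack volume create --size 10 data_vol
#openstack server add volume vm01 data_vol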
When using Cinder, multiple storage options are possible. By default, two storage backend types are available: NFS and Ceph. Both are configured in this article. Other storage backends are possible too, as long as they are supported by Cinder.
NFS Export Storage Configuration
A new NFS export can be configured as follows:
# cmsh
% device use <node_name>
% roles
% assign storage
% exit; exit
% fsexports
% add nova_instances
% set path <path_to_nfs_exports>
% set hosts internalnet
% set write yes
% commit
The configuration can be validated by running the following check, which shows the exports list:
#showmount -e localhost
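If the export was committed correctly, the output lists the export path together with the hosts or networks allowed to mount it. A sketch of the expected output, assuming a hypothetical export path of /cm/shared/nova_instances and the default internal network range:
Export list for localhost:
/cm/shared/nova_instances 10.141.0.0/16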
The Ceph backend configuration in this article assumes that Ceph storage is already installed. If it is not, then Ceph can be installed and added to the cluster by following the KB article at https://kb.brightcomputing.com/knowledge-base/how-do-i-integrate-bright-openstack-with-an-external-ceph-storage/, or Bright support (http://support.brightcomputing.com) can be contacted for help.
Configuring Cinder
The following sample session shows how, within the OpenStackControllers configuration overlay, a volume backend of type nfs is added to the OpenStack::Volume role and given the unimaginative name nfs_storage:
# cmsh
% configurationoverlay
[configurationoverlay]% use openstackcontrollers
[configurationoverlay[OpenStackControllers]]% roles
[configurationoverlay[OpenStackControllers]->roles]% use openstack::volume
[configurationoverlay[OpenStackControllers]->roles[Openstack::Volume]]% volumebackends
[configurationoverlay[OpenStackControllers]->roles[Openstack::Volume]->volumebackends]% add nfs nfs_storage
Setting the NFS Storage Parameters
% set nfsmountpointbase /var/lib/cinder/volumes    <-- the directory under which Cinder mounts the NFS shares and stores volumes
% set nfssparsedvolumes yes    <-- allows storing and reading sparse files (QCow2)
% set nfsshares host:mount_point    <-- the host and mount point of the NFS export configured earlier
% commit
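For reference, these role settings map onto the standard Cinder NFS driver options. The configuration that Bright generates typically contains entries along the following lines in the backend section of cinder.conf (a sketch only; the exact file is generated and managed by Bright, and the nfs_shares_config path is an assumption):
[nfs_storage]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfs_storage
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/volumes
nfs_sparsed_volumes = True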
The Ceph and NFS backends should now show up in the Cinder configuration:
%list
Name (key)
-----------
ceph
nfs
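Behind the scenes, Cinder's multi-backend support ties these together: the enabled_backends option in the [DEFAULT] section of cinder.conf lists the active backends, and each backend has its own section. A sketch, assuming the backend names used in this article (the actual names and file are generated and managed by Bright, and should be checked on the controller node):
[DEFAULT]
enabled_backends = ceph,nfs_storage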
Other types of storage backends, such as solidfire, netapp, and gpfs, can be configured for Cinder using cmsh with the same procedure.
Configuring OpenStack to use these storage backends
There are multiple ways to configure OpenStack to use these backends. Cinder's scheduling and weighing capabilities can be used, or filters can be added.
Alternatively, some hosts can be configured to use Ceph and others to use NFS, with host aggregates created to group them.
Cinder volume types can also be created. Creating and using Cinder volume types is the approach shown in this article.
Creating a Ceph volume type:
To create a Ceph-backed volume type, the following commands are run:
#openstack volume type create ceph
#openstack volume type set --property volume_backend_name=ceph ceph
Creating a volume type for NFS:
To create an NFS volume type, the following commands are run, where <name_of_nfs_storage_backend> is the backend name defined earlier (nfs_storage in this article):
#openstack volume type create nfs
#openstack volume type set --property volume_backend_name=<name_of_nfs_storage_backend> nfs
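The defined volume types and their properties can be verified with the openstack client:
#openstack volume type list
#openstack volume type show nfs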
Creating a volume with a specific volume type:
cinder create --name nfs_volume --volume-type nfs 100
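The size is given in GB. Equivalently, the volume can be created with the unified openstack client:
#openstack volume create --size 100 --type nfs nfs_volume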
Starting a machine from that volume:
nova boot --flavor <flavor_name> --block-device source=volume,id=VOLUME_ID,dest=volume,shutdown=preserve ServerOne
<-- ServerOne is the name given to the new machine, and <flavor_name> is the flavor to boot it with
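The openstack client can also boot a server directly from an existing volume. A sketch, assuming a hypothetical flavor m1.small and a network named internalnet:
#openstack server create --flavor m1.small --volume VOLUME_ID --network internalnet ServerOne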
Creating a Ceph-backed volume:
cinder create --name ceph_volume --volume-type ceph 100
NFS-backed volumes do not support snapshots. If volume snapshots are needed, then a Ceph-backed volume must be created instead.
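Once a Ceph-backed volume exists, a snapshot can be taken from it, for example:
cinder snapshot-create --name ceph_volume_snap ceph_volume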