How can VXLAN traffic be offloaded to InfiniBand?
If the hardware on which a Bright OpenStack cluster is running has InfiniBand support, then the InfiniBand network can be used to offload the VXLAN traffic. In other words, the tenant (project) networks can run over the InfiniBand fabric instead of over Ethernet.
This has been tested on Bright OpenStack 7.3. The prerequisites are:
1. Bright OpenStack 7.3
2. An InfiniBand HCA installed in the hardware of the OpenStack cluster
3. IPoIB configured on the InfiniBand interfaces
1. Remove the current VXLAN network and configure the InfiniBand network to carry the VXLAN traffic instead. For example, in cmsh:
%use ibnet <---- assuming ibnet is the name of the current InfiniBand network
%set openstacknetworktype vxlan\ host
%set openstackvlanrange 1:500000 <---- can be any range, from 1 up to the maximum number of allowed VXLANs
%set openstackphysicalnetworkname phyvxlanhostnet
%set baseaddress <required base address>
%set broadcastaddress <required broadcast address>
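The base and broadcast addresses above, as well as a sensible upper bound for the VXLAN range, can be derived from the IPoIB subnet. A minimal sketch, assuming a hypothetical 10.149.0.0/16 subnet (substitute your own):

```python
import ipaddress

# Hypothetical IPoIB subnet used for the VXLAN host network.
ibnet = ipaddress.ip_network("10.149.0.0/16")

base = ibnet.network_address         # value for "set baseaddress"
broadcast = ibnet.broadcast_address  # value for "set broadcastaddress"

# The VXLAN Network Identifier (VNI) is a 24-bit field, so the largest
# value usable in "set openstackvlanrange" is 2**24 - 1.
max_vni = 2**24 - 1

print(base, broadcast, max_vni)
```

The 1:500000 range in the example above is well within the 24-bit VNI space.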
A proper IP address must be configured on every InfiniBand interface of the OpenStack cluster. The addresses must be assigned through Bright Cluster Manager, by setting the interface's network to ibnet. For example:
%device use node001
%set ip <ip address>
%set network ibnet
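For a larger cluster, the per-node cmsh commands can be generated rather than typed by hand. A minimal sketch that prints sequential IPoIB assignments; the node naming scheme and the 10.149.0.0/16 subnet are assumptions:

```python
import ipaddress

# Hypothetical IPoIB subnet; sequential host addresses are handed out in order.
subnet = ipaddress.ip_network("10.149.0.0/16")
hosts = subnet.hosts()

lines = []
for i in range(1, 4):  # node001..node003; extend to your node count
    ip = next(hosts)
    lines.append(f"device use node{i:03d}")
    lines.append(f"set ip {ip}")
    lines.append("set network ibnet")
print("\n".join(lines))
```

The printed lines can then be pasted into a cmsh session (or fed to cmsh in batch mode).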
The OpenStack cluster, including the head node, should then be rebooted:
%reboot -n <all the nodes>
After the cluster is up and running again, it is then possible to create project networks. For example (using cmsh rather than neutron commands):
%set networktype vxlan
%set network vxlan-network1
%set cidr 10.100.0.0/16
%set end 10.100.255.254