
How can I have multiple network interfaces on a node in the same IP subnet?

When you configure multiple network interfaces on a single machine with IP addresses in the same IP subnet, you will need to do some additional configuration work to allow the networking stack in the Linux kernel to use these interfaces properly. By default, only one of the IP addresses that you assign within an IP subnet will be usable. This is because the kernel may respond to an incoming packet through a different interface than the one on which the packet arrived.

Setting kernel parameters

The net.ipv4.conf.<interface>.accept_local kernel parameter needs to be set to 1. In addition, a number of other ARP and reverse path kernel parameters should be set appropriately. There are several ways of accomplishing this, but on a Bright cluster, the easiest way is to create a file /etc/sysctl.d/99-multi-ip-in-subnet.conf in the relevant software images (e.g. /cm/images/default-image) with, for example, the following contents:

# Set defaults
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.default.rp_filter = 2

# Set ARP and reverse path settings for ib0
net.ipv4.conf.ib0.arp_ignore = 1
net.ipv4.conf.ib0.arp_announce = 2
net.ipv4.conf.ib0.rp_filter = 2

# Set ARP and reverse path settings for ib1
net.ipv4.conf.ib1.arp_ignore = 1
net.ipv4.conf.ib1.arp_announce = 2
net.ipv4.conf.ib1.rp_filter = 2

# Set accept_local for interfaces
net.ipv4.conf.ib0.accept_local = 1
net.ipv4.conf.ib1.accept_local = 1

It is important to substitute ib0 and ib1 with the appropriate interface names, and to expand the file for any further interfaces. Alternatively, the Linux kernel allows all or default to be specified instead of an actual interface name.
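As a convenience, the same fragment can also be generated with a small shell loop rather than written by hand. This is a sketch, not part of the procedure above: the interface list and the output destination (printed to stdout here; you would redirect it into the software image) are assumptions to adapt to your cluster.

```shell
#!/bin/sh
# Generate the sysctl.d fragment for a list of interfaces.
# Adjust INTERFACES to match your node configuration.
INTERFACES="ib0 ib1"

conf=""
for i in $INTERFACES; do
    conf="$conf# Set ARP and reverse path settings for $i
net.ipv4.conf.$i.arp_ignore = 1
net.ipv4.conf.$i.arp_announce = 2
net.ipv4.conf.$i.rp_filter = 2
net.ipv4.conf.$i.accept_local = 1
"
done

# Redirect into e.g. /cm/images/default-image/etc/sysctl.d/99-multi-ip-in-subnet.conf
printf '%s' "$conf"
```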

Setting up routes

A number of routes have to be created on all nodes. This can be done by creating the following files on each node:

  • /etc/sysconfig/network-scripts/route-<interface>
  • /etc/sysconfig/network-scripts/rule-<interface>
  • /etc/iproute2/rt_tables

In Bright the easiest way of accomplishing this is to use a finalize script which executes after the node has finished provisioning, but before systemd is started. Setting a finalize script for a category can be done using Bright View or CMSH. In CMSH:

[root@mdv-cluster ~]# cmsh 
[mdv-cluster]% category use default 
[mdv-cluster->category[default]]% set finalizescript [filename]
[mdv-cluster->category*[default*]]% commit

For more information about finalize scripts, please consult the Bright Cluster Manager documentation.

The following finalize script can be set for a category or for individual nodes to generate the appropriate content:


#!/bin/bash

INTERFACES="ib0 ib1"

# Count the set bits in a dotted-quad netmask (e.g. 255.255.0.0 -> 16)
bits_by_netmask () {
   c=0 x=0$( printf '%o' ${1//./ } )
   while [ $x -gt 0 ]; do
       let c+=$((x%2)) 'x>>=1'
   done
   echo $c ; }

tblnum=200
for interface in $INTERFACES; do
    # The interface IP and netmask are provided by CMDaemon in the environment
    eval netmask=\$CMD_INTERFACE_${interface}_NETMASK
    eval src=\$CMD_INTERFACE_${interface}_IP
    IFS=. read -r i1 i2 i3 i4 <<< "$src"
    IFS=. read -r m1 m2 m3 m4 <<< "$netmask"
    base=$( printf "%d.%d.%d.%d" "$((i1 & m1))" "$((i2 & m2))" "$((i3 & m3))" "$((i4 & m4))" )
    bits=$( bits_by_netmask $netmask )
    net=$base/$bits
    tbl=$interface

    echo $net dev $interface src $src table $tbl >/localdisk/etc/sysconfig/network-scripts/route-$interface
    echo from $src table $tbl >/localdisk/etc/sysconfig/network-scripts/rule-$interface

    if ! grep -q "^$tblnum " /localdisk/etc/iproute2/rt_tables; then
        echo $tblnum $interface >>/localdisk/etc/iproute2/rt_tables
    fi
    tblnum=$((tblnum + 1))
done
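The netmask-to-prefix-length conversion used in the script can be sanity-checked on its own. The trick is that re-printing each octet in octal and concatenating the digits preserves the total number of set bits, since every octal digit stands for exactly three bits; counting the set bits of the resulting octal constant then yields the prefix length.

```shell
#!/bin/bash
# Standalone check of the netmask-to-bits conversion (requires bash).
bits_by_netmask () {
   c=0 x=0$( printf '%o' ${1//./ } )
   while [ $x -gt 0 ]; do
       let c+=$((x%2)) 'x>>=1'
   done
   echo $c ; }

bits_by_netmask 255.255.0.0     # prints 16
bits_by_netmask 255.255.255.0   # prints 24
```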

Be sure to modify the INTERFACES list appropriately in the finalize script that you set for the category. After the finalize script has been set, reboot the nodes. When they come back up, check that the files have been generated properly on the nodes. For example:

[root@node001 ~]# cat /etc/iproute2/rt_tables
200 ib0
201 ib1
[root@node001 ~]# cat /etc/sysconfig/network-scripts/route-ib0
<network>/<bits> dev ib0 src <ip of ib0> table ib0
[root@node001 ~]# cat /etc/sysconfig/network-scripts/route-ib1
<network>/<bits> dev ib1 src <ip of ib1> table ib1
[root@node001 ~]# cat /etc/sysconfig/network-scripts/rule-ib0
from <ip of ib0> table ib0
[root@node001 ~]# cat /etc/sysconfig/network-scripts/rule-ib1
from <ip of ib1> table ib1
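Once the node is up, the per-interface routing tables can also be inspected directly with the standard iproute2 commands (the exact output depends on your addressing, so none is shown here):

```
[root@node001 ~]# ip rule show
[root@node001 ~]# ip route show table ib0
[root@node001 ~]# ip route show table ib1
```

Each rule file should appear as a `from <ip> lookup <table>` entry in the rule list, and each route file as a route in its table.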

Updated on February 3, 2021
