In RHEL7 it is possible to bind two or more network interfaces together to enhance fault tolerance or to increase throughput. Bonding is the tried-and-true method for making this happen, but teaming, which employs the teamd service, provides a newer way to aggregate interfaces. Both approaches are valid for the exam. Below are the steps for aggregating interfaces using teamd via the NetworkManager command-line interface, or nmcli:
1. Install teamd if it’s not already installed. The following command will install teamd and libteam as a dependency:
yum -y install teamd
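If you want to confirm that both packages actually landed, a quick rpm query (plain rpm syntax, nothing specific to this setup) does the trick:
rpm -q teamd libteam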
2. Choose the interfaces you’ll be using for the aggregate link.
You’ll need two or more. Technically, you could create a team interface with only one slave interface, or port, but this would defeat the purpose of link aggregation. The ip utility with the ‘link show’ argument will display the interfaces installed on the system:
ip link show
Since I’ve added a couple of NICs to the KVM instance I’m using to practice on, the above command shows my two new NICs with udev-assigned names of ens10 and ens11.
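Another way to see the same thing through NetworkManager’s eyes, which is handy since we’ll be driving everything with nmcli anyway, is to list the devices it knows about:
nmcli device status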
3. Delete any connections automatically assigned to your chosen NICs.
Doing this will help prevent confusion later. NetworkManager has a habit of automatically assigning connection information to a newly added interface. Get rid of this by first looking at what you’ve got with ‘nmcli connection show’ and then running ‘nmcli connection delete’ on any connections assigned to the interfaces you’ve chosen to aggregate.
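As a sketch, on my system this looked something like the following; the connection name "Wired connection 1" is just whatever NetworkManager happened to auto-create for one of my new NICs, so yours will almost certainly differ:
nmcli connection show
nmcli connection delete "Wired connection 1"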
4. Choose a runner configuration file and edit it to match your site-specifics.
Interface teaming uses runners to define the behavior of a teamed interface. For instance, the activebackup runner keeps only one interface active at a time and watches it for a failure. If it notices a state change it simply switches to sending and receiving data over another interface in the team to ensure availability. A nice explanation of what each runner does is available in the teamd.conf(5) manpage under the heading runner.name.
The good news is that it is completely unnecessary to memorize the format of the runner configuration file, as examples are stored in /usr/share/doc/teamd-x.yy/example_configs, where x.yy is the major/minor version number of the teamd package you installed in step 1. Navigate to this directory, choose a runner that fits the needs of your aggregated interface, and copy that runner config file to /etc/sysconfig/network-scripts. I have chosen the roundrobin.conf runner configuration file.
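If you’d like to see what’s on offer before committing to one, listing the directory is the quickest way; the wildcard below just sidesteps having to type out the exact version number:
ls /usr/share/doc/teamd-*/example_configs/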
cp roundrobin.conf /etc/sysconfig/network-scripts/
Now edit the file to match your site specifics. In the case of my /etc/sysconfig/network-scripts/roundrobin.conf file, I will need to change the names of the interfaces in the JSON to the names of the interfaces I chose to aggregate in step 2.
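For illustration only, my edited roundrobin.conf ends up looking roughly like this. I’ve dropped the ‘device’ line that the shipped example carries (at least in the version I used), since nmcli will name the interface from the ifname argument in the next step, and swapped in the ens10 and ens11 ports chosen in step 2:
{
    "runner": {"name": "roundrobin"},
    "ports": {"ens10": {}, "ens11": {}}
}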
5. Add the team interface:
nmcli may now be employed to create the team interface with the runner we chose in the last step. This process is lavishly illustrated in the nmcli-examples(7) manpage under the heading Example 7.
nmcli con add type team con-name Team1 ifname Team1 config roundrobin.conf
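As an aside, the config argument isn’t limited to a bare filename in the working directory; it should also accept a full path or even raw JSON, so either of the following ought to be equivalent to the command above:
nmcli con add type team con-name Team1 ifname Team1 config /etc/sysconfig/network-scripts/roundrobin.conf
nmcli con add type team con-name Team1 ifname Team1 config '{"runner": {"name": "roundrobin"}}'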
Now would also be a good time to give your team interface an IP address and method:
nmcli con mod Team1 ipv4.addresses 192.168.122.100/24
nmcli con mod Team1 ipv4.method manual
Check your work with:
nmcli con show
and verify that a new connection named Team1 of type team is bound to a device named Team1.
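If you’d rather not eyeball the whole list, a more targeted check along these lines should print just the properties we care about (the field names are the standard nm-settings property names):
nmcli -f connection.id,connection.type,connection.interface-name con show Team1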
6. Add slave interfaces to your teamed interface:
Up to now the team interface we’ve created has been an intellectual concept. We will make it useful by adding real, physical interfaces to that team interface like this:
nmcli con add type ethernet con-name Team1-slave1 ifname ens10 master Team1
nmcli con add type ethernet con-name Team1-slave2 ifname ens11 master Team1
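For what it’s worth, the RHEL7 Networking Guide cited below shows an equivalent form that uses the team-slave connection type instead of ethernet; either way should give you the same slave connections:
nmcli con add type team-slave con-name Team1-slave1 ifname ens10 master Team1
nmcli con add type team-slave con-name Team1-slave2 ifname ens11 master Team1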
Now, check your work by running ‘nmcli con show’. If you’ve been successful you should see Team1, Team1-slave1, and Team1-slave2 all connected. Additionally, if you run ‘ip addr show Team1’ you’ll find that device in an UP state with an IP address assigned.
Lastly, check your runner setup and port assignments with ‘teamdctl Team1 state’. With the configuration we’ve used you should see ‘runner: roundrobin’ with two ports assigned and ‘link up’ beneath each port’s heading. It would be useful to reboot the host at this point to guarantee that the configuration changes you’ve made are persistent. Run ‘nmcli con show’ after the reboot to verify this. The Team1 interface should start automatically because each of the ports is set to autoconnect. You can confirm this by running ‘nmcli con show Team1-slave1’ and verifying that ‘connection.autoconnect’ is set to yes.
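To recap, the post-reboot sanity checks described above boil down to something like this:
nmcli con show                 # Team1, Team1-slave1, and Team1-slave2 should each show a device
ip addr show Team1             # device UP with the 192.168.122.100/24 address we assigned
teamdctl Team1 state           # runner: roundrobin, both ports showing link up
nmcli con show Team1-slave1 | grep connection.autoconnect   # should report yes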
Since the exam objective is to “configure aggregated network links between two Red Hat Enterprise Linux systems” it will also be necessary to run this setup on the other RHEL system.
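Once both hosts are teamed up, a quick connectivity test between the two Team1 addresses rounds things out. The peer address below (192.168.122.101) is just an example of what the second host might be given, so substitute whatever you actually assigned there:
ping -c 4 192.168.122.101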
Primary Source:
RHEL7 Networking Guide https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-Configure_a_Network_Team_Using-the_Command_Line#sec-Configure_Network_Teaming_Using_nmcli
Secondary Source:
Van Vugt, Sander. Red Hat Certified Engineer (RHCE) Complete Video Course with Virtual Machines, 2/e, 5.7 Configuring Network Teams. Pearson IT Certification.