
Configure Network Teaming

Article Number : KB0012556
Published on : 2020-01-17
Last modified : 2020-01-17 13:36:34
Knowledge Base : IT Public Self Help

LACP

Link aggregation, standardized as IEEE 802.1AX-2008, is a computer networking technique that uses multiple network cables/ports in parallel to increase the link speed beyond the limits of any single cable or port, and to increase redundancy for higher availability.

Link aggregation addresses two problems with Ethernet connections: bandwidth limitations and lack of resilience.

With regard to the first issue: bandwidth requirements do not scale linearly. Ethernet bandwidths historically have increased by an order of magnitude each generation: 10 Megabit/s, 100 Mbit/s, 1000 Mbit/s, 10,000 Mbit/s. If one started to bump into bandwidth ceilings, the only option was to move to the next generation, which could be cost-prohibitive. An alternative solution, introduced by many network manufacturers in the early 1990s, is to combine two physical Ethernet links into one logical link via channel bonding. Most of these solutions required manual configuration and identical equipment on both sides of the aggregation.

The second problem involves the three single points of failure in a typical port-cable-port connection. In either the usual computer-to-switch or a switch-to-switch configuration, the cable itself or either of the ports the cable is plugged into can fail. Multiple physical connections can be made, but many of the higher-level protocols were not designed to fail over seamlessly.

Link aggregation is often abbreviated LAG. Other terms include trunking, link bundling, Ethernet/network/NIC bonding, NIC teaming, port channel, EtherChannel, Multi-link trunking (MLT), Smartgroup (from ZTE), and EtherTrunk (from Huawei).

 

IP Multipathing (IPMP)

A possible alternative to using LACP on the Solaris servers is IPMP (IP Multipathing). It should be able to provide the same network redundancy without the necessity of renaming all of the network interfaces in the global and non-global zones. IPMP would be a lighter-weight alternative to LACP that could be put on our existing servers without requiring a reboot. For details, see an excellent writeup of Solaris IPMP. IPMP is especially useful for Solaris systems that have multiple zones (VMs) running within them, for the following reasons (taking UTDirect as an example):

Implementing LACP on Solaris creates a new interface device name for each aggregated connection. For example, we currently use bge0, bge1, and bge2 for the 3 VLANs we talk to on each machine, and each zone also has a logical interface on each of those physical interfaces: bge0:1, bge1:1, bge2:1, and so on for each zone. When you add the new connections for LACP, say bge4, bge5, and bge6, and then combine bge0 and bge4 using the dladm tool, you create an aggregate interface called aggr1. The same goes for bge1 + bge5 = aggr2 and bge2 + bge6 = aggr3. You then reconfigure the machine to use these new interfaces, which means every logical interface defined in a zone must also be reconfigured. We run between 2 and 7 zones on each machine, so counting the global zone that is between 9 and 24 interface name changes per machine, and the change requires a reboot.
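The aggregation steps above can be sketched as follows. This is a hedged example using Solaris 10 dladm syntax; the bge interface names and the aggregate keys 1-3 are taken from the scenario above, not from a specific machine's configuration:

```shell
# Combine each original link with its new LACP partner into one
# aggregate; the trailing number is the aggregate key (aggrN).
dladm create-aggr -d bge0 -d bge4 1   # becomes aggr1
dladm create-aggr -d bge1 -d bge5 2   # becomes aggr2
dladm create-aggr -d bge2 -d bge6 3   # becomes aggr3

# Every interface definition -- in the global zone and in each
# non-global zone -- must then be changed from bgeN to aggrN,
# followed by a reboot.
```

This is what drives the interface-renaming cost described above: the aggregate replaces the original device name everywhere it appears.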

If we use IPMP instead, all we need to do is plumb the new interface, put it in standby mode, and group it with its sister interface, e.g. "ifconfig bge0 group ipmp_grp1" followed by "ifconfig bge4 plumb group ipmp_grp1 standby". That's it. Now, if bge0 loses link, the in.mpathd daemon (which starts as soon as a group interface is defined) moves the IPs associated with bge0 to bge4, including all the logical interfaces defined on bge0. This means no reconfiguration (only additions) is needed on existing machines, and no reboots.
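The two-command IPMP setup described above can be sketched as follows. This uses Solaris 10 ifconfig syntax; the interface names and the group name ipmp_grp1 are the examples from the text:

```shell
# Put the active interface into an IPMP group; in.mpathd starts
# monitoring as soon as the first group member is defined.
ifconfig bge0 group ipmp_grp1

# Plumb the spare interface and add it to the same group as a
# standby; it carries no traffic until bge0 loses link.
ifconfig bge4 plumb group ipmp_grp1 standby up

# On failure of bge0, in.mpathd migrates its IP addresses
# (including zone logical interfaces such as bge0:1) to bge4.
```

Note the contrast with the dladm approach: the existing bge0 configuration, and every zone's logical interface on it, is left in place.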

IPMP on Solaris step-by-step instructions.

Network Teaming on Windows

Windows operating systems do not provide a built-in network teaming feature (native NIC teaming was not added until Windows Server 2012); it is handled by advanced features of the drivers instead. The process for teaming NICs varies by vendor (Broadcom/Intel). Broadcom provides BACS (Broadcom Advanced Control Suite) to configure teams, while Intel adds a Teaming tab to the NIC properties in Device Manager. Note that Broadcom and Intel teaming software can work with NICs from other vendors, as long as one of their own NICs is a member of the team (for example, BACS can be used to team two Broadcom NICs, or a Broadcom and an Intel NIC, but not two Intel NICs).

 
