bond and team do essentially the same job. The difference is in the implementation: bond relies on the in-kernel `bonding` module and runs in kernel space, while team uses the `teamd` daemon, a user-space process.
Both can provide active-backup, load balancing, LACP, and the other common modes.
```
# load the bonding kernel module and verify it is present
modprobe bonding
lsmod | grep bonding
bonding               241664  0
tls                   131072  1 bonding

# load it automatically at boot
echo "bonding" | sudo tee /etc/modules-load.d/bonding.conf
```
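team's control plane, by contrast, lives entirely in user space: once a team device is active, `teamd` is visible as an ordinary process. A quick sanity check, assuming a team device has already been brought up (as configured below):

```
# the bonding driver has no process of its own (it is a kernel module),
# while teamd shows up in the process list for each running team device
ps -C teamd -o pid,cmd
```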
NIC bonding modes

round-robin (balance-rr), mode=0, the default: the most basic round-robin. Frames are transmitted across the slaves in turn, distributed per packet. Latency is uneven, however, which can hurt throughput.

active-backup, mode=1: only one device is active at a time; when it goes down, the backup link immediately takes over as the active link. The active and backup links present the same MAC address to the outside. Resource utilization is low.

balance-xor, mode=2: XOR hashing. Frames are distributed according to a hash policy, by default computed over the source and destination MAC addresses.

broadcast, mode=3: every frame is sent out on every slave, so with N NICs, N copies are transmitted.

802.3ad dynamic link aggregation (LACP), mode=4: creates an aggregation group whose members share the same speed and duplex settings. This is Huawei's Eth-Trunk and Cisco's PortChannel. Dynamic link aggregation must be negotiated with the upstream switch, i.e., the peer side has to be configured as well.

balance-tlb (adaptive transmit load balancing), mode=5: outgoing traffic is assigned to a slave based on each slave's current load, i.e., dynamic transmit, fixed receive. Suited to upload-heavy, download-light workloads. Requires ethtool support for reading each slave's speed.

balance-alb (adaptive load balancing), mode=6: the lowest-effort mode apart from LACP. It includes balance-tlb and adds receive load balancing negotiated via ARP, so both transmit and receive are dynamic. Suited to workloads heavy in both directions. A minimal nmcli example follows.
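For comparison with the team commands in the next section, a bond in any of these modes can be created with nmcli alone. A minimal sketch, assuming spare interfaces eth1/eth2 and an otherwise unused address; swap the `mode=` value to pick another mode from the list above:

```
# create a bond in active-backup mode (mode=1) with 100 ms link monitoring
nmcli con add con-name bond0 ifname bond0 type bond \
  bond.options "mode=active-backup,miimon=100" \
  ipv4.addresses 10.1.1.1/24 ipv4.method manual
nmcli con add con-name bond0-port1 ifname eth1 type bond-slave master bond0
nmcli con add con-name bond0-port2 ifname eth2 type bond-slave master bond0
nmcli con up bond0-port1
nmcli con up bond0-port2
nmcli con up bond0
```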
Configuring team in active-backup mode
```
# node1
nmcli con add con-name team0 type team team.runner activebackup \
  ipv4.addresses 10.1.1.1/24 ipv4.method manual
nmcli con add con-name team0-port1 ifname eth1 type team-slave master team0
nmcli con add con-name team0-port2 ifname eth2 type team-slave master team0
nmcli con up team0-port1
nmcli con up team0-port2
nmcli con up team0

# node2
nmcli con add con-name team0 type team team.runner activebackup \
  ipv4.addresses 10.1.1.2/24 ipv4.method manual
nmcli con add con-name team0-port1 ifname eth1 type team-slave master team0
nmcli con add con-name team0-port2 ifname eth2 type team-slave master team0
nmcli con up team0-port1
nmcli con up team0-port2
nmcli con up team0
```
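Note that the team0 connection is created without an explicit `ifname`, so NetworkManager falls back to a default device name (`nm-team`, as the verification output below shows). To see the exact JSON config teamd ended up running with, it can be dumped directly (assuming that `nm-team` device name):

```
# print the runner/link-watch config that NetworkManager generated for teamd
teamdctl nm-team config dump
```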
Verification

```
nmcli con show
NAME         UUID                                  TYPE      DEVICE
System eth0  5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  eth0
lo           91687d43-44bd-4dfe-91a6-7968fdd10064  loopback  lo
team0        b029f559-5edf-4330-baaf-9972abb45dc5  team      nm-team
team0-port1  e62293ee-db03-4d66-bd6a-679e36dc66be  ethernet  eth1
team0-port2  e6fb023b-4ca3-4371-9c30-623160129ef9  ethernet  eth2

nmcli device
DEVICE   TYPE      STATE                   CONNECTION
eth0     ethernet  connected               System eth0
lo       loopback  connected (externally)  lo
nm-team  team      connected               team0
eth1     ethernet  connected               team0-port1
eth2     ethernet  connected               team0-port2

teamdctl nm-team state
setup:
  runner: activebackup
ports:
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth1
```

Both node1 and node2 are using eth1 as the active port. After node1 and node2 ping each other, check the MAC tables on the switches: all three ports on SW1 have learned MAC addresses, while on SW2 only e0/2 has. This is because the active port on both node1 and node2 is eth1, so traffic only traverses SW1.
Take down eth1 on node1
```
ip link set eth1 down
teamdctl nm-team state
setup:
  runner: activebackup
ports:
  eth1
    link watches:
      link summary: down
      instance[link_watch_0]:
        name: ethtool
        link: down
        down count: 1
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2
```

The active port is now eth2, and the nodes can still ping each other:

```
ping 10.1.1.2
PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.
64 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=1.14 ms
64 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=1.19 ms
64 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=1.16 ms
```
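Whether traffic fails back to eth1 once it returns depends on port priority and stickiness, which the activebackup runner exposes as per-port options. A hedged sketch using the team-port `prio` and `sticky` settings (the values here are arbitrary):

```
# prefer eth1: the higher prio wins whenever both links are up
nmcli con mod team0-port1 team-port.config '{"prio": 100}'
# mark eth2 sticky: once selected it stays active even after eth1 recovers
nmcli con mod team0-port2 team-port.config '{"prio": 50, "sticky": true}'
nmcli con up team0-port1
nmcli con up team0-port2
```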
Configuring team with LACP

LACP is an auto-negotiating protocol. The simplest example is two switches aggregating the two links between them with LACP, but that is not how real production environments look:
In production, the two NICs of a server always connect to two different TOR switches. For LACP to still treat them as a single link, the ports have to be aggregated across switches, which means the switches must be stacked/clustered: M-LAG (Huawei's cross-device link aggregation) or vPC (Cisco's proprietary cross-device link aggregation).
I'm using NX-OS 9k here.
vPC configuration

nxos1
```
configure
feature lacp
feature vpc

interface mgmt0
  ip address 10.1.1.1 255.255.255.0

vpc domain 1
  role priority 1
  peer-keepalive destination 10.1.1.2 source 10.1.1.1

interface e1/3
  switchport mode trunk
  channel-group 100 force mode active

int po 100
  switchport mode trunk
  switchport trunk allowed vlan 1
  vpc peer-link

int e1/1
  switchport mode trunk
  switchport trunk allowed vlan 1
  channel-group 1 force mode active
  no shut

int port-channel 1
  vpc 1
  no shut

int e1/2
  switchport mode trunk
  switchport trunk allowed vlan 1
  channel-group 2 force mode active
  no shut

int port-channel 2
  vpc 2
  no shut
```
nxos2
```
configure
feature lacp
feature vpc

interface mgmt0
  ip address 10.1.1.2 255.255.255.0

vpc domain 1
  peer-keepalive destination 10.1.1.1 source 10.1.1.2

int e1/3
  switchport mode trunk
  channel-group 100 force mode active

int po 100
  switchport mode trunk
  switchport trunk allowed vlan 1
  vpc peer-link

int e1/1
  switchport mode trunk
  switchport trunk allowed vlan 1
  channel-group 1 force mode active
  no shut

int port-channel 1
  vpc 1
  no shut

int e1/2
  switchport mode trunk
  switchport trunk allowed vlan 1
  channel-group 2 force mode active
  no shut

int port-channel 2
  vpc 2
  no shut
```
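Before configuring the servers, it's worth confirming on both switches that the peer link and keepalive are healthy. The usual NX-OS checks (output omitted here):

```
show vpc brief
show vpc peer-keepalive
show port-channel summary
```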
Configuring LACP on the servers

```
# node1
nmcli con add con-name team1 ifname team1 type team team.runner lacp \
  ipv4.addresses 20.1.1.1/24 ipv4.method manual
nmcli con add con-name team1-port1 ifname eth3 type team-slave master team1
nmcli con add con-name team1-port2 ifname eth4 type team-slave master team1
nmcli con up team1-port1
nmcli con up team1-port2
nmcli con up team1

# node2
nmcli con add con-name team1 ifname team1 type team team.runner lacp \
  ipv4.addresses 20.1.1.2/24 ipv4.method manual
nmcli con add con-name team1-port1 ifname eth3 type team-slave master team1
nmcli con add con-name team1-port2 ifname eth4 type team-slave master team1
nmcli con up team1-port1
nmcli con up team1-port2
nmcli con up team1

nmcli con show
NAME         UUID                                  TYPE      DEVICE
System eth0  5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  eth0
lo           1b52d09f-42ca-489c-ad1c-2cc43678f844  loopback  lo
team0        3a89873b-d583-4aff-ba15-3264a5a640d6  team      nm-team
team1        9ba59691-9b0d-44ce-967b-4bf294aebd38  team      team1
team0-port1  275f6e5f-c5a6-47e3-9884-bf2c44fa2815  ethernet  eth1
team0-port2  5ffda606-20ac-41b8-9c00-d37f5d5b3b26  ethernet  eth2
team1-port1  f32dcdb1-0931-4255-a915-1e90d9d2fc4a  ethernet  eth3
team1-port2  35dc61c9-39df-47e3-926a-8460db3f1833  ethernet  eth4

teamdctl team1 state
setup:
  runner: lacp
ports:
  eth3
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 6, Selected
      selected: yes
      state: current
  eth4
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 6, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: no
```

On the switch, both vPCs come up:

```
do show vpc
...
vPC status
----------------------------------------------------------------------------
Id   Port   Status   Consistency   Reason    Active vlans
--   ----   ------   -----------   ------    ------------
1    Po1    up       success       success   1
2    Po2    up       success       success   1
```
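The `fast rate: no` line in the teamdctl output means the team is requesting the slow LACPDU interval (one every 30 seconds). If you want failures detected faster, NetworkManager exposes the LACP fast rate (one LACPDU per second) as a runner property; a sketch, applied to the team1 connection above:

```
# request the 1-second LACPDU interval instead of the 30-second default
nmcli con mod team1 team.runner-fast-rate yes
nmcli con up team1
```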