If you're not using NetworkManager, Wicd, or some other similar tool to automatically manage your network interfaces for you, this post is for you. In the Debian world, a single file manages your network interfaces. It can handle VLANs, bonded interfaces, virtual interfaces, and more. You can establish rules for what the interface should do before it is brought online, while it is online, and after it comes online. The same rules can be applied when taking the interface down. Let's look at some of these.
First, let's look at the basic setup for getting an interface online with DHCP. The file we'll be looking at this entire time is the /etc/network/interfaces file:
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp
The first line tells ifupdown to bring the "eth0" interface up when the system boots. The second line tells it to start the interface when a "hotplug" event is triggered. The third line defines the configuration of the "eth0" interface: in this case, it should use IPv4 and request an IP address from a DHCP server. A static configuration could look like this:
auto eth0
allow-hotplug eth0
iface eth0 inet static
    address 10.19.84.2
    network 10.19.84.0
    gateway 10.19.84.1
    netmask 255.255.255.0
The first two lines remain the same. In the third line, we have decided to use static addressing, rather than dynamic. Then, we followed through by configuring the interface. It's important to note that the indentation is not required. I only indented it for my benefit.
What about bonding? Simple enough. Suppose you have two NICs, one on the motherboard and the other in a PCI slot, and you want to ensure high availability should the PCI card die. Then you could do something like this:
auto eth0
iface eth0 inet manual
    post-up ifconfig $IFACE up
    pre-down ifconfig $IFACE down

auto eth1
iface eth1 inet manual
    post-up ifconfig $IFACE up
    pre-down ifconfig $IFACE down

auto bond0
iface bond0 inet static
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3
    address 10.19.84.2
    network 10.19.84.0
    gateway 10.19.84.1
    netmask 255.255.255.0
Technically, I don't need to bring up eth0 and eth1 explicitly if I bring up bond0 with eth0 and eth1 as its slaves. But this configuration illustrates some points. First, there are the pre-up, up, post-up, pre-down, down, and post-down commands that you can use in your interfaces(5) file. Each runs a command against the interface at a different point during configuration or deconfiguration. Also notice I'm using the $IFACE variable. Other variables exist as well, allowing you to write scripts for your interfaces. See http://www.debian.org/doc/manuals/debian-reference/ch05.en.html#_scripting_with_the_ifupdown_system for more information.
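For example, here's a minimal sketch of hooks in action (the logger messages are purely illustrative). $IFACE holds the interface name, and $MODE holds "start" or "stop"; both are exported by ifupdown to every hook command:

auto eth0
iface eth0 inet dhcp
    # $IFACE is the interface name; $MODE is "start" or "stop"
    pre-up logger "bringing $IFACE up (mode: $MODE)"
    post-down logger "$IFACE is now down"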
On the bonded interface, I'm putting in two slaves, then setting some bonding configuration that I want, such as using 802.3ad mode. Of course, the interface is static, so I provided the necessary information.
What if we wanted to add our bonded interface to a VLAN? Simple. Just append a dot "." and the VLAN number you want the interface in. Like so:
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3

auto bond0.42
iface bond0.42 inet static
    address 10.19.84.2
    network 10.19.84.0
    gateway 10.19.84.1
    netmask 255.255.255.0
    # necessary due to a bonding bug in vlan tools
    vlan-raw-device bond0
Bring the interface up with "ifup bond0.42", then verify that the kernel has assigned it to the right VLAN:
$ sudo cat /proc/net/vlan/config
VLAN Dev name    | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
bond0.42       | 42  | bond0
Notice that I specified "vlan-raw-device bond0". This is due to a bonding bug in the VLAN tools: merely naming the interface "bond0.42" is not enough to say which VLAN it should be in. You must also explicitly tell the kernel which raw device the VLAN interface sits on top of, in this case the bonded interface.
How about bridged devices:
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3

auto bond0.42
iface bond0.42 inet manual
    post-up ifconfig $IFACE up
    pre-down ifconfig $IFACE down
    # necessary due to a bonding bug in vlan tools
    vlan-raw-device bond0

auto br42
iface br42 inet static
    bridge_ports bond0.42
    address 10.19.84.2
    netmask 255.255.255.0
    network 10.19.84.0
    gateway 10.19.84.1
The only new thing here is the "bridge_ports" option. In this case, our bridge device is bridging our bond0.42 interface, which is in VLAN 42. Imagine a KVM or Xen hypervisor with a guest that needs to be in several VLANs. How would you set up all those bridges? Simple: create a VLAN interface for each VLAN, then create a bridge on top of each of those VLAN interfaces, as sketched below.
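As a rough sketch, suppose a guest needs to sit in VLANs 10 and 20 (the VLAN IDs and bridge names here are assumptions, purely for illustration). Each VLAN gets its own interface on top of bond0, and each bridge gets exactly one of those VLAN interfaces:

auto bond0.10
iface bond0.10 inet manual
    vlan-raw-device bond0

auto br10
iface br10 inet manual
    bridge_ports bond0.10

auto bond0.20
iface bond0.20 inet manual
    vlan-raw-device bond0

auto br20
iface br20 inet manual
    bridge_ports bond0.20

Then you point each guest's virtual NIC at the bridge for the VLAN it belongs in.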
Next, what about virtual IPs? I've heard that you can assign multiple IP addresses to a single NIC. How do you set that up? Simple. Just add a colon ":" then append a unique number. For example, say I have only one NIC, but wish to have two IP addresses, each in a different network:
auto eth0
iface eth0 inet static
    address 10.19.84.2
    netmask 255.255.255.0
    network 10.19.84.0
    gateway 10.19.84.1

auto eth0:1
iface eth0:1 inet static
    address 10.13.37.2
    netmask 255.255.255.0
    network 10.13.37.0
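Once both stanzas are up, you can confirm the alias took:

$ ip addr show eth0     # lists both 10.19.84.2 and 10.13.37.2
$ ifconfig eth0:1       # shows the alias as its own entry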
It's important to note that you generally only need one default gateway to get out; the kernel will route packets accordingly. If you must reach other networks through additional gateways, you'll have to add those routes to the kernel's routing table yourself, for example from a post-up hook, as in the sketch below.
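For instance, a sketch: if some remote network is only reachable through a router on the alias's subnet (the 10.100.0.0/16 network and the 10.13.37.1 router below are hypothetical), hooks can maintain the route for you:

auto eth0:1
iface eth0:1 inet static
    address 10.13.37.2
    netmask 255.255.255.0
    network 10.13.37.0
    # hypothetical remote network and router, shown for illustration
    post-up ip route add 10.100.0.0/16 via 10.13.37.1
    pre-down ip route del 10.100.0.0/16 via 10.13.37.1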
Of course, we could combine everything we learned here. See if you can make out what each interface is doing:
auto eth0
iface eth0 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

auto eth1
iface eth1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3

auto bond0.42
iface bond0.42 inet static
    address 10.19.84.2
    netmask 255.255.255.0
    network 10.19.84.0
    gateway 10.19.84.1
    # necessary due to a bonding bug in vlan tools
    vlan-raw-device bond0

auto bond0.42:1
iface bond0.42:1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    # necessary due to a bonding bug in vlan tools
    vlan-raw-device bond0

auto br42
iface br42 inet static
    bridge_ports bond0.42:1
    address 10.13.37.2
    netmask 255.255.255.0
    network 10.13.37.0
Lastly, MTU. There is a lot of misinformation out there about frame size. In my professional experience, setting the MTU to 9000 bytes does not result in noticeably improved throughput. But it does have an effect on the CPU: a larger frame size can mean much lower CPU usage, both on the switch and on your box. However, some protocols, such as UDP, might break with a 9k MTU, so use it appropriately. In any event, here is how I generally set my MTU when dealing with multiple interfaces:
auto eth0
iface eth0 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    mtu 9000

auto eth1
iface eth1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    mtu 9000

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3
    mtu 9000

auto bond0.42
iface bond0.42 inet static
    address 10.19.84.2
    netmask 255.255.255.0
    network 10.19.84.0
    gateway 10.19.84.1
    mtu 9000
    # necessary due to a bug in vlan tools
    vlan-raw-device bond0

auto bond0.43
iface bond0.43 inet static
    address 10.13.37.2
    netmask 255.255.255.0
    network 10.13.37.0
    mtu 1500
    # necessary due to a bug in vlan tools
    vlan-raw-device bond0
Note that I set the MTU to 9000 on all interfaces except bond0.43, which gets 1500. This is perfectly acceptable; in reality, the 1500-byte MTU on bond0.43 is just capping what bond0 can really do. But it is important to set the MTU on each interface explicitly, otherwise the default frame size of 1500 bytes will be used, and your packets will get chopped up anyway. You must also set the MTU to 9000 on the switch ports, and on any other servers and interfaces that you want jumbo frames on.
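One way to sanity-check the whole path is a do-not-fragment ping at full jumbo size; 8972 bytes of ICMP payload plus 28 bytes of IP and ICMP headers makes a 9000-byte packet. The target here (10.19.84.3) is a hypothetical jumbo-enabled host on the same VLAN:

$ ping -M do -s 8972 10.19.84.3

If any hop (switch port, NIC, or the remote host) is still at 1500, the ping fails rather than silently fragmenting.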