
11.08.2016

Current Trends in DC Networking - Cumulus Networks

Hopefully by now you have heard about Cumulus Networks. If not, here is a quick intro.

Cumulus Linux, from Cumulus Networks, is a full-featured Linux distribution for data center (DC) routers and switches. It is designed to simplify the deployment and automation of DC networks. With that said, it's not your normal network OS. Configuration and management are more in line with a Debian server than a network switch.

So what? Why change what has been working for several decades?

That is the point. What has worked for us in the past is not holding up to the rapidly changing DC space. Technologies are integrating and workflows are merging, and yet we still grind away in Notepad just to copy/paste into the CLI whenever new switches need to be deployed or VLANs need to be provisioned. Why?

Cumulus Networks is trying to lead the charge in changing this. By building a network OS that integrates with the tools already proven by application and server teams, we can quickly deploy and automate the network infrastructure.

For a deeper introduction to Cumulus Networks, check out the awesome Tech Field Day videos from #NFD9.

Honestly, I recommend all the Cumulus presentations from #NFD9, found here: Tech Field Day.

OK, I'm taking the sales/marketing hat off and throwing it back into the dark corner it came from. I promise!


Links:

Ethernet Bridging VLANs (Cumulus)
Comparing Traditional Bridge Mode to VLAN Aware Bridge Mode (Cumulus)
Quagga Routing Suite
Configuring Quagga (Cumulus)
Configuring Border Gateway Protocol BGP (Cumulus)
Configuring BGP Unnumbered with Cisco IOS


Download and installation instructions for Cumulus VX can be found here. Make sure to add 4 virtual network adapters per switch to support the topology we are running for this series.


Let's dive in!

Hostname and Services

Once the Cumulus VM is booted up, you are presented with a login prompt. Use the username "cumulus" and password "CumulusLinux!".

The first thing we need to change is the hostname. The hostname config lives in /etc/hostname and /etc/hosts. Sudo (root permissions) is needed for pretty much everything. "sudo su -" can also be used to save the typing.
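Something like this works, using "spine1" as an example hostname:

    # become root so we don't have to prefix every command with sudo
    sudo su -
    # set the new hostname
    echo "spine1" > /etc/hostname
    # update the matching entry in /etc/hosts
    sed -i 's/cumulus/spine1/g' /etc/hosts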



In both files, replace "cumulus" with the new hostname as shown above. A reboot will be required to apply the new hostname.

NOTE: You have a couple of options for editing files: vi or nano. As a normal human, I chose vi. But you are welcome to use nano, FREAKS!!!!

Next we need to enable the required routing services. Quagga is the package used to manage routing, and bgpd is the daemon that handles BGP. The configuration file is located at /etc/quagga/daemons.
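The file is just a list of daemon=yes/no switches; we flip on zebra and bgpd and leave the rest off:

    # /etc/quagga/daemons
    zebra=yes
    bgpd=yes
    ospfd=no
    ospf6d=no
    ripd=no
    ripngd=no
    isisd=no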



Once saved, enable and start the quagga daemon.
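Cumulus Linux 3.x is systemd-based, so something along these lines:

    sudo systemctl enable quagga.service
    sudo systemctl start quagga.service
    # check that it came up cleanly
    sudo systemctl status quagga.service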



For status, you are looking for "Active (running)" and for all of the services enabled above to show "Up".

Interfaces

Now let's configure the interfaces. Run "ifconfig" to get a list of all interfaces.

For this topology you will see one "mgmt" interface used for remote console and three "swp" interfaces. These are the switchports on the device.

Interface configuration is located in /etc/network/interfaces. I suggest spending a little time reading through the Cumulus documentation, as interface configuration is different from what we are used to.
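As a sketch, spine1's file ends up looking something like this (all addresses and aliases here are placeholder values; use your own lab addressing):

    # /etc/network/interfaces (spine1, example values)
    auto lo
    iface lo inet loopback
        address 10.0.0.1/32

    auto mgmt
    iface mgmt
        vrf-table auto

    auto eth0
    iface eth0
        address 192.168.1.21/24
        vrf mgmt

    auto swp1
    iface swp1
        alias Link to leaf1

    auto swp2
    iface swp2
        alias Link to leaf2

    auto swp3
    iface swp3
        alias Link to leaf3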



iface lo will be the loopback interface for the switch.

iface mgmt is for internal use by Cumulus and Quagga. Ignore it.

The mgmt interface (iface eth0) is placed in a VRF and given an IP in my lab network for remote access.

iface swp1-3 use the alias command, similar to "interface description". Just like lo, this is where we would configure IP addresses. But we are not going to do that. I'll explain in a bit.

For each of the leaf switches we will also need to configure a trunk to the host for passing VLANs into the fabric, plus SVIs for routing. For this we will be using bridges.
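Here is a sketch of the bridge pieces on a leaf (swp3 as the host-facing trunk and the SVI addresses are example values; the VLANs are 101 and 201):

    # additions to /etc/network/interfaces on a leaf (example values)
    auto bridge
    iface bridge
        bridge-vlan-aware yes
        bridge-ports swp3
        bridge-vids 101 201
        bridge-stp on

    auto bridge.101
    iface bridge.101
        address 172.16.101.1/24

    auto bridge.201
    iface bridge.201
        address 172.16.201.1/24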



If you're familiar with Linux, chances are you have stumbled across bridge interfaces. If you have ever installed GNS3 or dynamips/dynagen on Linux, you have as well.

iface bridge defines the bridge interface, its member interfaces, and the VLANs on the bridge. And look at that, we are going to enable STP too. Might be a good idea, huh?

NOTE: Cumulus supports two bridge modes, traditional mode and the recommended VLAN-aware mode. With Cumulus 3.1, VXLAN is now supported with VLAN-aware mode, so that is what we are using.

iface bridge.101 and bridge.201 create the SVIs for the two VLANs and assign each an IP address.

Make sense? Perfect, let's keep moving!

Now we need to restart the networking service to apply the changes. Then run "brctl show" to verify the bridge is up and functioning.
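Something like this (if I recall correctly, "sudo ifreload -a" also works on Cumulus):

    # apply the new interface configuration
    sudo systemctl restart networking
    # verify the bridge and its member ports
    brctl show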



Now would also be a good time to check your network links.
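"ip link show" (or just "ip link") lists every interface with its link flags:

    ip link show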



Interface status shows UP. "UP" and "LOWER_UP" together are the equivalent of UP/UP in Cisco land.


Routing!

Alright, now it's time to configure some routing! Screw these damn config files, us CLI junkies want our "config t"!!

Well... not so much. There is an "industry standard" CLI you can use (vtysh), but I'm noticing it's not working too well for configuration, mostly just show commands. But don't worry, the Quagga configuration is familiar.

First we need to create or modify /etc/quagga/bgpd.conf.
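If I remember right, bgpd wants its config file to exist before it will start, so at a minimum:

    sudo touch /etc/quagga/bgpd.conf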



Next, create /etc/quagga/Quagga.conf. The following config is for spine1, and spine2 is similar.
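Something like this (the AS number and router-id are example values; the swp interfaces are the fabric links):

    ! /etc/quagga/Quagga.conf (spine1, example values)
    router bgp 65000
     bgp router-id 10.0.0.1
     neighbor fabric peer-group
     neighbor fabric remote-as external
     neighbor swp1 interface peer-group fabric
     neighbor swp2 interface peer-group fabric
     neighbor swp3 interface peer-group fabric
     !
     address-family ipv6
      neighbor fabric activate
     exit-address-family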



So, I mentioned no IP addresses before, and now you can see no neighbor addresses configured either.

Cumulus has a cool feature that allows you to establish BGP peerings with neighbors via IPv6 link-local addresses (which are enabled by default). IPv4 prefixes can then be advertised over the IPv6 peering.

This feature removes the need for IP address allocation and management, and it also cuts down on automation and operations work. Which is what we are after, right?

The BGP config is pretty familiar compared to other vendors' configurations. The peer-group "fabric" is created, and the interfaces used for peering are added to it. Usually for spines that is all interfaces, and for leafs it is whatever you use for uplinks.

Pretty simple to templatize.

Since we are using IPv6 link-local addresses for peering, we need to activate the neighbors under the IPv6 address family.

The above interface commands are required for IPv6 link-local BGP peering.

For the leaf switches, you will want to add an IPv4 address family to the above configuration to advertise loopbacks and host subnets. I'm using "redistribute connected" so I'm not manually specifying subnets. You know, for automation purposes.

On leaf1 only, I am also generating a default route to represent the fabric edge into the rest of the DC. Usually your default would come from somewhere else in your DC.
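On the leafs, that extra block under "router bgp" looks roughly like this ("default-originate" is one way to generate that default, and it only goes on leaf1):

    ! added under router bgp on the leafs (leaf1 shown)
    address-family ipv4 unicast
     redistribute connected
     neighbor fabric default-originate
    exit-address-family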



Now we need to restart the Quagga service so bgpd picks up the new config, and we are all set!
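Again via systemd:

    sudo systemctl restart quagga.service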



Give me my CLI!!!

Now we can finally get into the CLI and poke around. To enter the CLI, we will use "vtysh".
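From the Linux shell (sudo is needed unless you are already root):

    sudo vtysh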



From here you have the familiar show commands you are used to, just minus a few. Let's check BGP to see if it's working.
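For example:

    show ip bgp summary
    show ip bgp
    show ip route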



Nice! BGP peering is established, prefixes are properly learned (notice the IPv6 next hop), and routes are in the RIB. We are good to go!

In the next post we will get VXLAN set up across this topology, and then we will follow it up with automating both Arista and Cumulus deployments.

We also still need to talk about the Cumulus host agent, but I'll save that for a future post once we get our servers up.

Same as all posts in this series, you can find all the device configs and scripts on my GitHub account.