Current Trends in DC Networking - Cumulus EVPN VXLAN

In my previous post on Cumulus Networks we covered the basics and got BGP peering set up for our data center (DC) topology. Now we need to get VXLAN working to move forward with our design.

Cumulus has had a VXLAN solution called LNV (Lightweight Network Virtualization) for a while. But with version 3.2 of Cumulus Linux we now have the option to use an EVPN control plane. I spoke briefly about EVPN in my Introduction to VxLAN post, and it appears the market is shifting to an EVPN control plane as the popular VXLAN solution. Cisco and Juniper both support EVPN, and Arista will hopefully release their version sometime in early 2017.

I've never been a big fan of EVPN as the VXLAN control plane, mostly due to its complexity, but EVPN has a lot of potential behind it that could introduce some cool features in the future.

So let's dig in.

Ethernet Virtual Private Network EVPN (Cumulus)
Lightweight Network Virtualization LNV (Cumulus)
Ethernet Virtual Private Networks EVPN (Broadband Forum)

We are starting off with the previous post's build. Configurations can be found on my GitHub here.

Installing the Requirements

First up we need to install the EVPN package onto the switches from the Cumulus early access repository. This needs to go on all leaf and spine switches of the fabric in order to support the new communities being used within BGP.

Edit the /etc/apt/sources.list file and un-comment the early access repositories, then save and exit.
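Once un-commented, the early-access stanza looks something like this. The repository URL and suite names vary by release, so treat these lines as illustrative rather than copy-paste ready:

```text
# /etc/apt/sources.list (illustrative; your repo URL/suite names may differ)
deb     http://repo.cumulusnetworks.com/repo CumulusLinux-3-early-access cumulus
deb-src http://repo.cumulusnetworks.com/repo CumulusLinux-3-early-access cumulus
```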

Now install the EVPN package via apt-get.
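In my case this was just an update/upgrade cycle, since the EVPN bits ship as updated packages in the early-access repo (check what the repo actually offers before relying on specific package names):

```text
cumulus@leaf1:~$ sudo apt-get update
cumulus@leaf1:~$ sudo apt-get upgrade    # pulls the early-access quagga/EVPN packages
```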

After the upgrade finishes, the switch will reboot. After the first reboot I got a banner message stating I needed to run apt-get upgrade again. If you see this, run apt-get upgrade once more and you are set.

Double-check that you are running version 1.0.0+CL3u6 of Quagga or newer.
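A quick way to check is to ask dpkg for the installed quagga package version (output trimmed for readability):

```text
cumulus@leaf1:~$ dpkg -l quagga
...
ii  quagga    1.0.0+cl3u6    amd64    BGP/OSPF/RIP routing daemon
```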

I show version 1.0.0+CL3u6 which is good.

VxLAN Configuration

In our design the leaf switches need to be configured as VTEPs, so we need to create vxlan interfaces to terminate the VXLAN IDs (VIDs) and translate the VLANs into the VXLAN fabric.

Edit /etc/network/interfaces and add the vxlan interfaces. I am using vxlan10101 and vxlan10201.

Each vxlan interface is configured with a "vxlan-id" (VID) and pointed at the loopback address with "vxlan-local-tunnelip" to assign a VTEP IP.

Next "bridge-access" is configured to associate the VID to a VLAN.

Under the primary bridge interface you need to associate the vxlan interfaces with the bridge, along with the physical interface, using "bridge-ports".
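Putting those pieces together, the relevant section of /etc/network/interfaces on a leaf looks roughly like this. The loopback address and swp1 as the host-facing port are example values, not necessarily what your lab uses:

```text
auto lo
iface lo inet loopback
    address 10.0.0.11/32            # loopback/VTEP IP (example value)

auto vxlan10101
iface vxlan10101
    vxlan-id 10101                  # VID carried in the VXLAN header
    vxlan-local-tunnelip 10.0.0.11  # source this VTEP from the loopback
    bridge-access 101               # map VID 10101 to VLAN 101

auto vxlan10201
iface vxlan10201
    vxlan-id 10201
    vxlan-local-tunnelip 10.0.0.11
    bridge-access 201               # map VID 10201 to VLAN 201

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports swp1 vxlan10101 vxlan10201
    bridge-vids 101 201
```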

Save the config and reload networking. Then make sure the vxlan interfaces are associated with the bridge using brctl show.
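On Cumulus an ifreload -a will apply the new interface config, and brctl show should then list the vxlan interfaces as bridge members (output abbreviated):

```text
cumulus@leaf1:~$ sudo ifreload -a
cumulus@leaf1:~$ brctl show
bridge name   bridge id           STP enabled   interfaces
bridge        8000.xxxxxxxxxxxx   no            swp1
                                                vxlan10101
                                                vxlan10201
```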

EVPN configuration

EVPN is just another address-family in BGP, so configuration is pretty much just adding a new address-family to Quagga's BGP section. Since the EVPN control plane needs to exist across the whole fabric, you will need to add the EVPN configuration to all leaf and spine switches.

To configure EVPN, add the following to the BGP section of /etc/quagga/Quagga.conf.
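On a leaf it boils down to activating the existing neighbors under the new address-family. My lab peers via a BGP unnumbered peer-group, which I am calling "fabric" here for illustration; your ASN and peer-group name will differ:

```text
router bgp 65011
 ...
 address-family evpn
  neighbor fabric activate    # activate the existing fabric peers for EVPN
  advertise-vni               # leafs only: advertise locally configured VNIs
 exit-address-family
```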

The only difference in configuration between leafs and spines is that spine switches do not need the "advertise-vni" option set, since spines do not advertise any local VNIs.

Next, save and restart the quagga service and verify everything is functioning as before.


All EVPN verification will be done inside the CLI (vtysh).

First we should check to make sure our BGP peers are still up.
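From vtysh, show ip bgp summary should still show all fabric peers established. The peer names, counters, and prefix counts below are illustrative placeholders from a lab like this one:

```text
leaf1# show ip bgp summary
...
Neighbor         V    AS   MsgRcvd   MsgSent   ...   Up/Down   State/PfxRcd
spine1(swp51)    4  65020      ...       ...   ...   00:12:34             3
spine2(swp52)    4  65020      ...       ...   ...   00:12:34             3
```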

Awesome! Next we should verify the EVPN peers are established.
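The EVPN address-family has its own summary view (again, peer names and counters here are illustrative):

```text
leaf1# show bgp evpn summary
...
Neighbor         V    AS   ...   Up/Down   State/PfxRcd
spine1(swp51)    4  65020   ...   00:12:34             4
spine2(swp52)    4  65020   ...   00:12:34             4
```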

Nice! Notice how EVPN is learning 4 prefixes from its peers, compared to the two (the VTEP IPs) learned from the regular BGP peering.

Everything looks good. A ping test will be our final verification.

I'm running ping tests from host2 to host3 on VLAN 101.
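The host addressing below is made up for illustration; substitute host3's real VLAN 101 address:

```text
cumulus@host2:~$ ping -c 3 10.1.101.13
64 bytes from 10.1.101.13: icmp_seq=1 ttl=64 time=1.2 ms
...
```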

Looking at the ARP table on host2, you can see it learns the MAC address of host3, which means layer 2 is being tunneled through the fabric as expected.

We can also see the leaf switches are mapping MAC addresses to vxlan interfaces using bridge fdb show.
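Each remote MAC should show up against a vxlan interface, with a second "self" entry pointing at the remote VTEP's tunnel IP. The MACs and IPs below are placeholders:

```text
cumulus@leaf1:~$ bridge fdb show | grep vxlan
xx:xx:xx:xx:xx:xx dev vxlan10101 vlan 101 master bridge
xx:xx:xx:xx:xx:xx dev vxlan10101 dst 10.0.0.12 self
```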

Additional EVPN commands:

  • show bgp evpn vni
  • show evpn vni
  • show bgp evpn route

Alright, we have covered VXLAN in good detail with both Arista and now Cumulus. Now it's time to get out of our comfort zone and do some programming to automate it all! Up next we will dig into Ansible and how we can utilize it to deploy this lab automatically.

As always, the configuration for this lab can be found on my GitHub here.