Current Trends in DC Networking - Arista VXLAN

In my last post I covered the fundamentals and basic design of VxLAN. Now it's time to get to the fun stuff: configuration! This post will focus on Arista's VxLAN configuration using vEOS. Arista has been gaining serious traction in the DC space.

Throughout this series we will be building on the following topology. eBGP will be used as the routing protocol between spine and leaf nodes, as it is the popular option in the DC at the moment. Each leaf will peer with both spines, but notice there is no connection between spines. Why? Well, as usual, Ivan Pepelnjak does a great job explaining it here.

Leaf switches will act as both the VLAN gateway and VTEP. Two VLANs/VNIs will be extended between the three pods.

So let's get started!


Users Manual - Arista (EOS-4.15.4F)

For this series I will be using Arista vEOS version 4.15.0F on top of VMware ESXi (5.5) to virtualize the network.
NOTE: If you are deploying vEOS on VMware ESXi you will need to allow promiscuous mode and MAC changes within the vSwitch security settings. vEOS loves to change MACs and ESXi does not appreciate it.

Leaf/Spine Routing Configuration

First thing we need to set up is the routing on the leaf/spine fabric. Routing's only function on the fabric is to advertise the VTEP IPs between nodes and to support multicast if it's used. eBGP is what I chose, but you can just as easily use OSPF, as long as all VTEPs know about each other.

Starting with the spines, as they are the simplest to set up. Peering will use the link IP addresses to keep the configuration simple.
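A minimal spine configuration looks something like this. The hostnames, AS numbers, and link addresses are placeholders for illustration, not the lab's actual values:

```
! spine1 - eBGP peering down to each leaf over the link addresses
! (AS numbers and neighbor IPs are illustrative)
router bgp 65000
   neighbor 10.0.1.1 remote-as 65001
   neighbor 10.0.2.1 remote-as 65002
   neighbor 10.0.3.1 remote-as 65003
```

Each leaf sits in its own AS, which is the common eBGP leaf/spine pattern and keeps path selection loop-free without any extra knobs.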

Outside of the link addresses, this is everything needed on the spines. If you choose to use loopbacks over link addresses for eBGP peering, you will also need either an IGP or static routing to advertise them.

Leafs have a little more to configure. We will configure the eBGP peering along with the loopback and VLAN interfaces.

The loopback is used for the VTEP IP address, and the VLAN interfaces will be used for routing between VLANs/VNIs locally. This is optional, but without them you will need to establish a central point of routing within your fabric. BGP then needs to advertise the VTEP IP (Loopback0) into the network. A route-map is used to limit BGP to advertising only the VTEP IP.
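A leaf configuration along those lines might look like this. All addresses, AS numbers, and names here are placeholders; the route-map/prefix-list pair does the "advertise only the VTEP IP" filtering described above:

```
! leaf1 - VTEP loopback, local SVIs, and eBGP up to both spines
interface Loopback0
   ip address 10.255.0.1/32
!
vlan 101-102
!
interface Vlan101
   ip address 172.16.101.2/24
!
interface Vlan102
   ip address 172.16.102.2/24
!
! only the VTEP loopback gets into BGP
ip prefix-list VTEP-IP seq 10 permit 10.255.0.1/32
!
route-map VTEP permit 10
   match ip address prefix-list VTEP-IP
!
router bgp 65001
   neighbor 10.0.1.0 remote-as 65000
   neighbor 10.1.1.0 remote-as 65000
   maximum-paths 2
   redistribute connected route-map VTEP
```

`maximum-paths 2` lets the leaf ECMP toward the remote VTEPs across both spines.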

In the real world you will also want to advertise the subnets within each pod so they are accessible outside the fabric. Here I am just proving they are not needed.

Verify peering is established and we are getting the Loopback0 prefixes.
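The usual checks (hostname is illustrative; output omitted). Both spine sessions should show as Established, and the remote leafs' Loopback0 /32s should appear as BGP routes:

```
leaf1#show ip bgp summary
leaf1#show ip route bgp
```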

VxLAN Configuration

What I love the most about Arista's configuration of VxLAN is the simplicity. It's just another virtual interface, which Arista terms a VTI (VxLAN Tunnel Interface). If you're used to configuring SVIs (VLAN interfaces) you should be fine here.

Let's look at what is needed to get VxLAN up and going with a VTI.
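Reconstructed from the breakdown below, the VTI configuration is only a handful of lines. The second VLAN/VNI mapping and the flood-list addresses (the other leafs' Loopback0 IPs) are placeholders:

```
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
   vxlan vlan 101 vni 800
   vxlan vlan 102 vni 801
   vxlan flood vtep 10.255.0.2 10.255.0.3
```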

Well hell, that's not so bad is it? Let's break this down some more so you get your money's worth out of this blog!

vxlan source-interface Loopback0 - The VTEP IP. Basically sourcing traffic from the Loopback0 IP address. This will be the source address in the packets exchanged between VTEPs.

vxlan udp-port 4789 - UDP port used for the transmission of VxLAN-encapsulated packets. 4789 is the IANA-assigned default.

vxlan vlan 101 vni 800 - This is our VLAN-to-VNI mapping. You need one line per VLAN/VNI pair.

vxlan flood vtep - Arista calls this HER (Head End Replication). Instead of using a multicast control plane, we manually specify which VTEPs to flood BUM traffic to. Notice the VTEPs listed are the Loopbacks of the other two leaf switches.

As far as basic VxLAN goes, we are now up and running. But we have one other requirement in our lab. Notice the gateway address of both VLANs across all three pods: it's the same IP address. Arista calls this a virtual router address. This allows us to have the same gateway address active in each pod on each VTEP.

Configuration is simple.
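Something like the following, with placeholder addresses. Note that EOS also requires a shared virtual-router MAC to be defined before the virtual addresses take effect:

```
! shared virtual MAC, identical on every leaf (value is illustrative)
ip virtual-router mac-address 00:1c:73:00:00:01
!
interface Vlan101
   ip address 172.16.101.2/24        ! unique per leaf
   ip virtual-router address 172.16.101.1   ! same on every leaf
!
interface Vlan102
   ip address 172.16.102.2/24
   ip virtual-router address 172.16.102.1
```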

Just like with FHRPs (HSRP/VRRP), the physical address on each VLAN interface is unique. A virtual-router address is then added which is the same across all leaf switches.

The biggest benefit of a virtual router address is that traffic routed within a pod does not have to leave the pod to hit a central gateway. We can also shift VMs from pod to pod without having to change the default gateway or use a central point of routing.

And that's it for our VxLAN configuration. Let's see if we are functioning and look at some basic commands.


show active will show all active configuration of the interface. Basically a "show run int ethx/x".
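For example, from interface configuration mode (hostname is illustrative):

```
leaf1(config)#interface Vxlan1
leaf1(config-if-Vx1)#show active
```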

Now let's ping the other pod's VLAN IP and see what we get.
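With placeholder addressing, that is a ping from leaf1's Vlan101 address to leaf2's. One gotcha worth noting: ping the remote leaf's unique physical address, not the shared virtual-router address, since the virtual address is active locally on every leaf and would answer from the nearest one:

```
leaf1#ping 172.16.101.3 source 172.16.101.2
```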

Verify routing between VTEPs and VLANs.

Nice! Now we can see what's going on in the ARP and MAC address tables.
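The two tables are pulled with the standard commands (hostname is illustrative):

```
leaf1#show ip arp
leaf1#show mac address-table
```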

Notice how the 172.16. addresses list both the VLAN and VxLAN interfaces? The MAC address listed is the MAC address of the remote VTEP. ARP is functioning!

The MAC address table looks good as well. Same thing: the MAC address is the remote VTEP's.

Everything looks to be in order. Now let's look at a couple of VxLAN-specific commands.

Show vxlan address-table is just the MAC table with the VTEPs added.

Show vxlan flood vtep shows the VTEPs BUM traffic will be flooded to.

Show vxlan vtep will show each VTEP known by the local device.
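Collected for quick reference (hostname is illustrative):

```
leaf1#show vxlan address-table
leaf1#show vxlan flood vtep
leaf1#show vxlan vtep
```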

So there you go! Hands on with an Arista VxLAN lab. If you would like to see the config files for this lab they are hosted up on my GitHub. As I move through this series I will keep all my config files and scripts there.

Next up will be the same lab but with Cumulus Networks. This one is gonna be fun!!