3.01.2017

Current Trends in DC Networking - Arista Ansible Config


Over the past few posts I have covered the basics of Ansible; now it's time to dig in and see how we can leverage it to build configurations. The cool thing about Ansible is that there are a number of ways to accomplish most tasks. From here on out, it's honestly about exploring what options are available within Ansible and expanding your skills as you learn.

This post we will focus on configuring the Arista fabric from scratch using the configurations we have developed over the series. Configuration can be approached in two ways.
  1. Push configuration to each switch with Ansible modules or roles directly.
  2. Build the configurations in full, then push them to each switch.

Both options have their place. The first might be a good fit for interacting directly with a large group of production switches to make smaller changes. The second would be good for pre-building configuration for groups of devices and pushing once they are installed. It would also be a good way to integrate with an existing configuration management tool or get one started.

This post will focus on the second option, mostly because it can be used for any vendor, not just Arista. And I'm sure, just like me, most of us still have some Cisco or Juniper gear sitting around in our DCs, so we can still benefit from this exercise.

Alright, let's dig in!


We are working with the same topology from this series, and all prior configurations can be found on my GitHub. We will need them.

I have wiped the configuration off all fabric switches except the management interface, which we keep for basic connectivity. The Ansible setup is on Host2, which was configured in the previous posts here and here.

There are four steps we need to complete in order to build and push configuration to the switches:

  1. Create configuration templates
  2. Assign variables
  3. Build the playbook
  4. Create config files and push configuration

Configuration Template

The first step is to build a template of the common configuration used across all devices. Most of us already have a common config or “golden config”, which I recommend as your starting point. I'm going to split this up into two templates, one for spines and one for leafs.

Create a templates directory inside dc-network along with two files, eos-leaf.j2 and eos-spine.j2.
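From the shell, inside dc-network, that's just:

mkdir templates
touch templates/eos-leaf.j2 templates/eos-spine.j2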



Make sure the file and directory names are correct, as the template names must line up with the device group names we will use in the hosts file. Next, copy the configuration of arista-spine1 into eos-spine.j2 and the configuration of arista-leaf1 into eos-leaf.j2. Save these files.


Next we need to update our hosts file to reflect the leaf and spine groups. This is done so that each play in our playbook can target a group and apply the template named after it. Here is the updated hosts file.

[eos-spine]
arista-spine1 role=spine
arista-spine2 role=spine

[eos-leaf]
arista-leaf1 role=leaf
arista-leaf2 role=leaf
arista-leaf3 role=leaf

[cumulus-spine]
cumulus-spine1 role=spine
cumulus-spine2 role=spine

[cumulus-leaf]
cumulus-leaf1 role=leaf
cumulus-leaf2 role=leaf
cumulus-leaf3 role=leaf


[eos:children]
eos-spine
eos-leaf

[cumulus:children]
cumulus-spine
cumulus-leaf

Notice at the bottom I used [group:children] to roll the eos and cumulus groups back up into main groups.

Next, replace all the device-specific configuration within the templates with variables. Variables inside Jinja2 templates were covered already, remember?

Yeah, we used them with {{ inventory_hostname }}. Jinja2 denotes a variable with double curly brackets. So let's walk through the eos-leaf.j2 file and assign variables. Keep track of the names you assign as you move along.

hostname {{ inventory_hostname }}
!
snmp-server community rbhome ro
!
spanning-tree mode mstp
!
aaa authorization exec default local
!
no aaa root
!
username admin privilege 15 role network-admin secret 5 $1$2yTIv5bw$pWK2nDyHQyqTyVtFjAwMq1
!
{% for vlan in vlans %}
vlan {{ vlan.id }}
  name {{ vlan.name }}
{% endfor %}

Hostname is an easy one. We can just use {{ inventory_hostname }}.

The username and password could also be turned into variables if you have different logins for each DC or customer. Just an option.

Now what the hell is going on with the last few lines and vlans?

I got a little fancy here to show the power of Ansible and Jinja2 templates. If you're familiar with Python this makes sense: it's a for loop. We don't want to assume each device needs the same VLANs, so we create the variable vlans. Now we can assign which VLANs go on each switch, and the for loop will build the configuration shown below. Awesome stuff!

vlan 101
  name PROD-A
vlan 201
  name PROD-B

TIP: When working through templates, look for areas of repeating code. For loops are great for shrinking the template while adding flexibility to your configs.

OK, let's move on to the interfaces.

interface Ethernet1
   description Spine-1
   no switchport
   ip address {{ spine1_link_ip }}
!
interface Ethernet2
   description Spine-2
   no switchport
   ip address {{ spine2_link_ip }}
!
interface Ethernet3
   switchport mode trunk
!
interface Loopback0
   ip address {{ loopback_ip }}
!
interface Management1
   ip address {{ mgmt_ip }}
!
interface Vlan101
   ip address {{ vlan101_ip }}
   ip virtual-router address {{ vlan101_gw }}
!
interface Vlan201
   ip address {{ vlan201_ip }}
   ip virtual-router address {{ vlan201_gw }}
!
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
{% for vlan in vlans %}
   vxlan vlan {{ vlan.id }} vni {{ vlan.vni }}
{% endfor %}
   vxlan flood vtep 1.1.1.1 2.2.2.2 3.3.3.3

Interfaces are a great place to leverage Jinja2 and add tons of flexibility, but I chose to keep it simple and assign variables for each interface IP address, along with the gateway address on the VLAN interfaces.

I did use another for loop to assign VXLAN VNIs to VLANs within the Vxlan1 interface, as this ties in with how we set up the vlans variable later in the post.
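Given the vlans variable we define later in group_vars/all.yml, this loop renders to:

   vxlan vlan 101 vni 800
   vxlan vlan 201 vni 801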

Last is the routing configuration.

!
route-map VXLAN permit 10
   match interface Loopback0
!
router bgp {{ bgp_asn }}
   neighbor 10.0.11.1 remote-as {{ spine1_asn }}
   neighbor 10.0.11.1 maximum-routes 12000
   neighbor 10.0.21.1 remote-as {{ spine2_asn }}
   neighbor 10.0.21.1 maximum-routes 12000
   network {{ loopback_ip }}
   redistribute connected route-map VXLAN
!
management api http-commands
   protocol http
   no shutdown
!
!
end

Not much here. Just the BGP ASNs and loopback used for BGP RID.

We finished our template and now have the following variables for the leaf template.
{{ vlans }}
{{ spine1_link_ip }}
{{ spine2_link_ip }}
{{ loopback_ip }}
{{ mgmt_ip }}
{{ vlan101_ip }}
{{ vlan101_gw }}
{{ vlan201_ip }}
{{ vlan201_gw }}
{{ bgp_asn }}
{{ spine1_asn }}
{{ spine2_asn }}

I'll leave the spine template for you to work through. If you'd rather, you can pull both of mine from here.

Variables

This is where things start to get a little odd, and it continues to feel that way as you dig deeper into how variables are used in different areas of Ansible. To further complicate things, Ansible has an order of precedence for the different variable locations you can use. I suggest spending some time reading the Ansible variables documentation.

As I mentioned, start simple and play around and read the Ansible docs. Trust me, you will pick this up quicker than you think.

We will be using the more popular locations for variables and keeping this simple: the group_vars and host_vars directories. So first we need to create them along with the YAML files we need.
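From inside dc-network, something like this builds the structure (I'm only showing the Arista files here; the Cumulus ones follow the same pattern):

mkdir group_vars host_vars
touch group_vars/all.yml
touch host_vars/arista-spine1.yml host_vars/arista-spine2.yml
touch host_vars/arista-leaf1.yml host_vars/arista-leaf2.yml host_vars/arista-leaf3.yml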



Once again, the file names must exactly match the device names in the hosts file.

group_vars is the primary central point for all variables and has a single file: all.yml.

host_vars is for host-specific variables (such as IP addresses) and has a file for each device.

Now edit arista-leaf1.yml and add your variables. 

---

spine1_link_ip: "10.0.11.2/24"
spine2_link_ip: "10.0.21.2/24"

loopback_ip: "1.1.1.1/32"
mgmt_ip: "10.0.0.223/24"


vlan101_ip: "172.16.101.11/24"
vlan201_ip: "172.16.201.11/24"


bgp_asn: "45113"

A standard YAML file starts with three dashes.

Assigning variables is pretty straightforward: variable-name: value. The quotation marks indicate the value is a string. Use the same format for each of the leaf switches.
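As an example, arista-leaf2.yml would look something like this. The loopback and ASN come straight from the series addressing (2.2.2.2 matches the flood list in the template, 45114 is leaf2_asn), while the management IP and link subnets are my guesses at the pattern, so substitute whatever your lab actually uses:

---

spine1_link_ip: "10.0.12.2/24"
spine2_link_ip: "10.0.22.2/24"

loopback_ip: "2.2.2.2/32"
mgmt_ip: "10.0.0.224/24"

vlan101_ip: "172.16.101.12/24"
vlan201_ip: "172.16.201.12/24"

bgp_asn: "45114"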

For arista-spine1.yml it's similar:

---
leaf1_link_ip: "10.0.11.1/24"
leaf2_link_ip: "10.0.12.1/24"
leaf3_link_ip: "10.0.13.1/24"

loopback_ip: "11.11.11.11/32"
mgmt_ip: "10.0.0.221/24"

bgp_asn: "45111"

This covers the host-specific variables, but we still have a few left. These are the variables we assigned that can be the same across all devices, so we put them in group_vars/all.yml.

Layout is the same as any other variable file.

---

vlans:
  - { id: 101, name: PROD-A, vni: 800 }
  - { id: 201, name: PROD-B, vni: 801 }

vlan101_gw: "172.16.101.1/24"
vlan201_gw: "172.16.201.1/24"

spine1_asn: "45111"
spine2_asn: "45112"
leaf1_asn: "45113"
leaf2_asn: "45114"
leaf3_asn: "45115"

username: "admin"
pwd: "admin"

The vlans variable is a list of dictionaries, with each dictionary representing a single VLAN and all the elements associated with that VLAN.

If you noticed, in our template above we have vlan.id, vlan.name, and vlan.vni. These call specific elements of each dictionary. Just another powerful tool within Jinja2 and Ansible.
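In Jinja2, dot notation and bracket notation are interchangeable for dictionary keys, so inside the loop these two lines render identically:

vlan {{ vlan.id }}
vlan {{ vlan['id'] }}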

We will use username and pwd at the end to push configuration to the devices.

Playbook

OK, we have our templates set up and our variables assigned per device. Now it's time to build our playbook.

First we need a directory we can push configuration files to when they are built. Create a new directory inside dc-network called configs, then create a playbook file named config-push.yml.
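Again, from inside dc-network:

mkdir configs
touch config-push.yml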



Edit config-push.yml and add the following:

---

  - name: Build Arista Spine Configuration
    hosts: eos-spine
    connection: local
    gather_facts: no

    tasks:
      - name: BUILD CONFIGURATION
        template:
          src: templates/eos-spine.j2
          dest: configs/{{ inventory_hostname }}.conf
        tags: build_spine


  - name: Build Arista Leaf Configuration
    hosts: eos-leaf
    connection: local
    gather_facts: no

    tasks:
      - name: BUILD CONFIGURATION
        template:
          src: templates/eos-leaf.j2
          dest: configs/{{ inventory_hostname }}.conf
        tags: build_leaf


Here we have two plays, each targeting its respective host group. The template module is used to build the output files.

  • src – specifies the source Jinja2 template used to build the configuration file.
  • dest – specifies the location and filename of the created file.

Tags are also assigned to each task so that later we can run the playbook for one group or the other by specifying the tag.
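For example, to rebuild only the leaf configs:

ansible-playbook config-push.yml --tags=build_leaf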

That should do it. Now fire up Ansible and see what you get!
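Assuming your hosts file is already set as the inventory in ansible.cfg from the earlier posts (otherwise tack on -i hosts):

ansible-playbook config-push.yml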



If your playbook came back successful, you should now have all the configuration files inside your configs directory. Open them up and make sure they have the correct names, IPs, VLANs, etc.
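A quick spot check from the shell:

ls configs/
grep hostname configs/arista-leaf1.conf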

If you are happy with the configs, then it's time to move along to pushing them to the devices.


Configuration Push

Arista has its own configuration management support through the eos_config module, which can do the same thing. But I'm going the more generic route and using NAPALM, which supports multiple vendors. The NAPALM module allows us to work with and install configuration on a remote device. NAPALM is actually a pretty awesome tool in its own right and is worth checking out.

We first need to install NAPALM and the napalm-ansible module. Make sure you are in the dc-network directory.
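At the time of writing both are pip-installable; depending on your environment you may need sudo, and you may also need to point Ansible at the napalm-ansible module path in ansible.cfg (check the napalm-ansible README for the exact steps):

pip install napalm
pip install napalm-ansible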




Next, open config-push.yml and add a PUSH CONFIGS task after each build task:

---

  - name: Build Arista Spine Configuration
    hosts: eos-spine
    connection: local
    gather_facts: no

    tasks:
      - name: BUILD CONFIGURATION
        template:
          src: templates/eos-spine.j2
          dest: configs/{{ inventory_hostname }}.conf
        tags: build_spine

      - name: PUSH CONFIGS
        napalm_install_config:
          hostname: "{{ inventory_hostname }}"
          username: "{{ username }}"
          password: "{{ pwd }}"
          dev_os: eos
          config_file: configs/{{ inventory_hostname }}.conf
          commit_changes: true
          replace_config: false
          get_diffs: false
        tags: push_spine

  - name: Build Arista Leaf Configuration
    hosts: eos-leaf
    connection: local
    gather_facts: no

    tasks:
      - name: BUILD CONFIGURATION
        template:
          src: templates/eos-leaf.j2
          dest: configs/{{ inventory_hostname }}.conf
        tags: build_leaf

      - name: PUSH CONFIGS
        napalm_install_config:
          hostname: "{{ inventory_hostname }}"
          username: "{{ username }}"
          password: "{{ pwd }}"
          dev_os: eos
          config_file: configs/{{ inventory_hostname }}.conf
          commit_changes: true
          replace_config: false
          get_diffs: false
        tags: push_leaf


Here we specify the hostname, username, and password variables, along with the OS type and config file location, inside the napalm_install_config task. That's it, NAPALM does the rest!

Save and run the playbook again. If all is successful, the Arista devices are now configured.



Since we have already built the configs, you can use the tags to run only the config push tasks.
ansible-playbook config-push.yml --tags=push_spine,push_leaf


And there we go. A fully configured and functioning Arista leaf/spine fabric running VXLAN, built using Ansible automation. This first build took a while, but now that it is working it can quickly be re-used and re-purposed for future work.

Next up is the same thing with the Cumulus fabric.