Apstra Week 1 - My Thoughts

Wow, what a week to start my new role at Apstra!

First, Cisco announces their Intent Based Networking Solution (IBNS), Network Intuitive, the same week I accept my new position. And now it appears every vendor is starting to rebrand their “SDN” products as IBNS. I think I jumped into the middle of this at an exciting time!

I've always been a big fan of Apstra and their IBNS approach to DC networking. I'm sure, just like most, my opinion of IBNS has shifted and changed since I first heard the term over a year ago. But I agree with the principle and feel it's the right direction for the network industry.

I'm not writing this post to comment on or criticize any of the recent announcements, or to try to convince anyone that IBNS is the solution for your network. The focus of this post is to share my initial thoughts on my first week with the company and what I see as most important moving forward for Apstra and IBNS as a whole.


Current Trends in DC Networking - CoreOS Install

As I outlined in my last post, CoreOS Container Linux is not your traditional operating system. The same can be said about the base install of CoreOS.

Depending on the use case, there are multiple deployment options for CoreOS. For cloud deployments, all major providers are supported.

For bare metal installs you can either iPXE/PXE boot into memory or install to disk. With VMware you also have the option to install through an OVA and point to your cloud-config or Ignition file via a config drive.
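As a rough illustration, here is what a minimal cloud-config might look like for a bare metal install. The hostname and SSH key below are placeholders, not values from an actual build:

```yaml
#cloud-config
# Placeholder values - replace the hostname and key with your own
hostname: core-01
ssh_authorized_keys:
  - "ssh-rsa AAAA... user@example.com"
coreos:
  units:
    # Start etcd2 on boot so the node can join a cluster later
    - name: etcd2.service
      command: start
```

From the boot environment you would then hand this file to the install script, along the lines of `coreos-install -d /dev/sda -c cloud-config.yaml`.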

Even though I am running this lab on VMware vSphere, I'm going to use the bare metal install.


Current Trends in DC Networking - CoreOS

Now that the network side of this series is complete we can move on to the servers. Containers and micro-services are all the rage these days and continuing to grow in popularity. When virtualization took over the data center we saw a shift in many aspects of data center server and application management. We even started to see a shift in how networks were designed, and networking logic started to shift into the servers.

Now with the rise of containers upon us, we are seeing a lot of the same things happening as applications and services start to move into containers. And the shift of network control is continuing to move further into the servers.

Operating systems have all embraced support for containers, now including Windows. CoreOS has emerged with a unique approach as a container-focused server OS with Container Linux (formerly just CoreOS) and has built a solid reputation against the giants in the server world.

So let's dig into CoreOS and see what we can do with it!


Current Trends in DC Networking - Cumulus Config w/ Ansible Roles

This post will finish up the networking side of this series. Up till now we have learned about VxLAN and deploying it in both Arista and Cumulus spine/leaf fabrics. We also learned how to automate the creation of configuration files and how to deploy them onto Arista gear using Ansible and NAPALM.

Next, we need to expand the Ansible playbooks we have created to include the configuration of Cumulus switches. I will also take this opportunity to cover Ansible roles.
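As a quick preview, a role is just a conventional directory layout that a playbook can pull in by name. A sketch, assuming a hypothetical role named `cumulus`:

```yaml
# site.yml - the play stays tiny; the work lives inside the role
# Expected layout (illustrative):
#   roles/
#     cumulus/
#       tasks/main.yml    <- tasks run automatically when the role is applied
#       templates/        <- Jinja2 templates for config files
#       vars/main.yml     <- role-specific variables
- hosts: cumulus
  gather_facts: no
  roles:
    - cumulus
```

Ansible finds everything under `roles/cumulus/` on its own, which keeps the top-level playbook readable as the task list grows.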

I was hoping to cover the Cumulus Ansible module for this post, but at the time of this writing (April 2017) the current version does not support VxLAN configuration. So we will just have to work with static files.

So let's get started!


Current Trends in DC Networking - Arista Ansible Config

Over the past few posts I have covered the basics of Ansible; now it's time to dig in and see how we can leverage it to build configurations. The cool thing about Ansible is there are a number of ways to accomplish most tasks. From here on out, it's honestly about exploring what options are available within Ansible and expanding your skills as you learn.

This post will focus on configuring the Arista fabric from scratch using the configurations we have developed over the series. Configuration can be approached in two ways:
  1. Push configuration to each switch with Ansible modules or roles directly.
  2. Build the configurations in full, then push them to each switch.

Both options have their place. The first might be a good fit for interacting directly with a large group of production switches to make smaller changes. The second would be good for pre-building configuration for groups of devices and pushing once they are installed. It would also be a good way to integrate with an existing configuration management tool or get one started.
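To make the second option concrete, here is a sketch of the build step using Ansible's `template` module. The template and output paths are illustrative:

```yaml
# build.yml - render a full config file per switch onto the control machine
- hosts: arista
  gather_facts: no
  tasks:
    - name: Build full configuration from a Jinja2 template
      template:
        src: templates/switch.j2
        dest: "configs/{{ inventory_hostname }}.conf"
      delegate_to: localhost
```

The rendered files can then be reviewed, version controlled, and pushed in a second play once the switches are racked.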

This post will focus on the second option, mostly because it can be used for any vendor, not just Arista. And I'm sure, just like me, most of us still have some Cisco or Juniper gear sitting around in our DCs, so we can still benefit from this exercise.

Alright, let's dig in!


Current Trends in DC Networking - Ansible Basics

In the last post we covered the install and a quick introduction to Ansible. Today I will go over the basics of Ansible and how to build and run playbooks.

Ansible is very powerful and flexible, and configuration and usage get deep quickly. The best way I have found to dig in is to start simple and build from there. So that is what we will do today: build a basic playbook and run some commands on our remote switches.
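For a sense of the shape we are aiming for, the simplest possible playbook is a single play with a single task. The group name and the command below are placeholders:

```yaml
# first-playbook.yml - one play, one task
- hosts: switches
  gather_facts: no
  tasks:
    # raw runs the command over SSH without requiring Python on the device
    - name: Run a command on each device
      raw: uname -a
```

You would run it with something like `ansible-playbook -i hosts first-playbook.yml`, where `hosts` is your inventory file.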


Current Trends in DC Networking - Ansible

Moving along in the series, I guess it's time to start automating something, right? Yes, let's automate!

Ansible has been around for a while and is popular among server and network engineers. As the product matures, the community backing and support keep growing stronger.

On the server side of the house, Ansible is great at automating the build and deployment of service stacks on hardware, VMs, or in the cloud. This is where Ansible started and where it is strongest, in my opinion. On the network side, Ansible is still growing and has some challenges with managing network devices. I don't blame Ansible for this, but rather the vendors for the outdated means of communicating with their equipment. This is changing, but it's going to take time.

With that being said, Ansible is still a great tool to automate the configuration and deployment of network equipment.

Over the next few posts I am going to cover the basics of Ansible and then walk through building out both the Arista and Cumulus networks we just stood up.

So let's get started!


Current Trends in DC Networking - Cumulus EVPN VXLAN

In my previous post on Cumulus Networks we covered the basics and got BGP peering set up with our data center (DC) topology. Now we need to get VxLAN working to move forward with our design.

Cumulus has had a VxLAN solution called LNV (Lightweight Network Virtualization) for a while. But with version 3.2 of Cumulus Linux we now have the option to use an EVPN control plane. I spoke briefly about EVPN in my Introduction to VxLAN post, and it appears the market is shifting to an EVPN control plane as the popular VxLAN solution. Cisco and Juniper both support EVPN, and Arista will hopefully release their version sometime in early 2017.
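For reference, the moving parts on a Cumulus leaf are roughly a VXLAN interface bound to a bridge, plus `advertise-all-vni` under BGP so every locally defined VNI is advertised via EVPN. This is a sketch only; interface names, numbers, and exact syntax are placeholders and vary between Cumulus Linux releases:

```
# /etc/network/interfaces - one VNI, access port into a bridge (illustrative)
auto vni10100
iface vni10100
    vxlan-id 10100
    vxlan-local-tunnelip 10.0.0.11
    bridge-access 100

# Routing daemon config - advertise all local VNIs over the EVPN address family
router bgp 65011
 address-family evpn
  neighbor swp51 activate
  advertise-all-vni
```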

I've never been a big fan of EVPN as the VxLAN control plane, mostly due to its complexity, but EVPN has a lot of potential behind it that could introduce some cool features in the future.

So let's dig in.