5.03.2013

Cisco Nexus - Part 1 - Introduction

I'm going to shift gears a bit and open up a series of posts on the Cisco Nexus line. Since I'll be working these into my CCIE R&S studies, the posts won't be back-to-back; they will be spread out over a few months.

I know a lot of people have been exposed to Nexus and Data Center (DC) networking for a while and most likely know a ton more about the subject than I do. My aim for this series is to work from the ground up, giving someone with limited or no knowledge of DC networking a solid understanding of what Nexus can provide in this space.

So let’s dig in!


NX-OS


The Nexus line has always been geared toward the DC and traces its roots to SAN-OS, the operating system on the MDS storage line. There is a quick post up on Cisco Blogs going over the history of Nexus. Nexus is marketed to address several key needs in DC design:

  • High reliability with minimum downtime
  • Highly scalable and flexible
  • Utilization of all links – No idle links due to Spanning Tree (STP)
  • Integrate with virtualization platforms
  • Increase computing power and network performance in smaller forms
NX-OS is the underlying OS used on the Nexus platform. As mentioned above, its roots are in SAN-OS, but it still has a similar feel to IOS. NX-OS is Linux based, more specifically MontaVista embedded Linux. Unlike IOS, NX-OS runs off two files:
  • Kickstart – Used to boot the system and call the NX-OS binary.
  • NX-OS binary – Contains system daemons and operational functions.
NX-OS has a modular design, with each daemon or process having its own protected memory space. By segregating memory space between processes, reliability is increased because the impact of a failed process is limited. NX-OS also differs from IOS by disabling the majority of features by default; you enable them as needed. This will make a lot of security engineers happy!
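To make the two-image boot model and the disabled-by-default feature model concrete, here is a minimal NX-OS snippet. The image filenames are placeholders for illustration; check show boot and show feature on your own gear.

    ! Boot statements pointing at the two images (example filenames only)
    boot kickstart bootflash:n5000-uk9-kickstart.5.2.1.N1.1.bin
    boot system bootflash:n5000-uk9.5.2.1.N1.1.bin

    ! Features are off by default and must be enabled before use
    feature ospf
    feature interface-vlan
    feature lacp

Until a feature is enabled, its commands are not even visible in the CLI.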


Hardware


Just like any other line of hardware, there are a ton of options and features you need to consider for each design. I'm not digging into that level of detail here; I'm just going over the basics of each model and what it was designed to accomplish.

Nexus 7000 series

The Nexus 7000 (N7k) series is a high-performance chassis-based solution. Similar to the 6500 and 7200 series families, the N7k provides a slew of high-density line cards supporting 10Gbps, 40Gbps and 100Gbps speeds. Cisco claims the N7k can support up to 768 line-rate 10GE interfaces, 96 40GE interfaces or 32 100GE interfaces. The N7k is also rated at a max switching capacity of 17.6 Tbps and up to 11.5 billion packets per second. This switch has some horsepower behind it!

The N7k currently comes in four flavors: 4-, 9-, 10- and 18-slot chassis.

The N7k has 3 types of line cards:

  • F-series – Basic Layer 2/3 card geared towards access and basic distribution layer functionality.
  • M-series – High-performance Layer 3 card geared toward intense Layer 3 functionality. Supports OTV, MPLS, LISP, etc. (a brief OTV sketch follows below).
  • Service Modules – Currently Cisco has only released NAM and ACE modules for the N7k, but I'm sure there are more to come.
The N7k is designed to be the core of any DC network infrastructure, providing backbone functionality and intercommunication between pods.
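Since the M-series cards are what unlock features like OTV, here is a rough sketch of what a minimal OTV overlay looks like on an N7k. The interface, VLANs and multicast groups are made-up example values, and this is not a complete deployable config:

    feature otv
    otv site-vlan 99

    interface Overlay1
      otv join-interface Ethernet1/1
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 100-110
      no shutdown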

Nexus 6000 series

The Nexus 6000 (N6k) was just recently released (Q1 2013) and is being marketed to fill the gap between the N7k and the N5k as a high-density, compact solution. The N6k is more compact than the N7k, with a footprint of 4 RU or smaller, but unlike the N5k it provides a more robust feature set.

The N6k currently has two models: the 6004 (4RU) and the 6001 (1RU). The 6001 houses 48 fixed 10GE (SFP+) Ethernet ports and 4 40GE (QSFP+) fabric ports.

A fully populated 6004 can support up to 384 line-rate 10GE ports or 96 line-rate 40GE ports and is rated at a max switching capacity of 7.68 Tbps. The 6004 is equipped with four expansion modules, which support both 10GE and 40GE connections.

Another feature the N6k provides is the use of breakout cables. Each 40GE (QSFP+) port can support either 1x40GE or 4x10GE connections. For densely populated racks with multiple 10GE interfaces per server, breakout cables provide a clean and condensed solution.

The N6k provides a richer feature set than the N5k for Layer 2 and Layer 3 forwarding, without the N5k's limitations. Check out this Cisco Live presentation for all the details on the N6k.

Nexus 5000 series

The Nexus 5000 (N5k) is the access layer switch in the Nexus line. The N5k is where the Nexus line starts to break out into its own world and shifts away from the traditional switching model. Since the N5k is designed as an access layer switch, you would assume designs would place one (or multiple) N5ks in each rack. This is not the case. Instead, N5ks use what are called Fabric Extenders (FEXes) to extend the N5k across multiple racks.

Think about taking the line cards in a chassis-based solution (a 6500, for example) and moving them to separate racks so the network ports are closer to the edge devices, while still benefiting from a single central device for management. This is what N5ks with FEXes accomplish. By extending FEXes across the pod or DC, you are able to logically consolidate a large number of racks into a single switch while still keeping cable management local to each pod or rack.

FEXes, just like line cards, cannot operate as standalone units and must connect to a parent switch (N5k, N6k or N7k). All switching, MAC learning and forwarding happens on the parent switch, so all traffic entering a FEX is sent up to the parent. Even the software image a FEX runs is synced from the parent switch once connected.
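To make the parent/FEX relationship concrete, here is a rough sketch of attaching a FEX to an N5k parent. The FEX number and interface IDs are arbitrary examples:

    feature fex

    fex 100
      description Rack-10-FEX

    ! Fabric uplinks from the N5k down to the FEX
    interface Ethernet1/1-2
      switchport mode fex-fabric
      fex associate 100
      channel-group 100

    interface port-channel100
      switchport mode fex-fabric
      fex associate 100

Once the FEX comes online, its ports appear on the parent as Ethernet100/1/x.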

The N5k comes in two models:

  • 5548
    • 960Gbps throughput
    • Up to 48 ports
      • 32 1/10Gbps (SFP+) fixed ports
    • 1 Expansion slot (16 ports)
    • Max 24 Fabric Extenders each
  • 5596
    • 1.92Tbps throughput
    • Up to 96 ports
      • 48 1/10Gbps (SFP+) fixed ports
    • 3 Expansion slots (16 ports each)
    • Max 24 Fabric Extenders each
The N5k does not support Layer 3 functionality without upgrading the unit with an L3 module. However, the L3 module introduces several limitations to the N5k that must be considered in your design. The big limitations are:
  • Limit of only 16 FEX per N5k
  • Unable to use ISSU (In-Service Software Upgrade)
It is usually recommended to push Layer 3 features up to the N7k, but smaller environments are finding the jump to N7ks expensive and overkill for their needs.
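For reference, once the L3 module is installed, basic inter-VLAN routing on an N5k looks much like any other NX-OS box. The VLAN, addressing and OSPF process below are made-up examples:

    feature interface-vlan
    feature ospf

    router ospf 1

    ! SVI acting as the default gateway for VLAN 10
    interface Vlan10
      ip address 10.0.10.1/24
      ip router ospf 1 area 0.0.0.0
      no shutdown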

Nexus 2000 Series Fabric Extenders (FEX)

FEXes are pretty straightforward once you get the initial concept. Just like line cards, there are a variety of FEXes available. The 2200 series is the current line of FEXes supported as of this writing; the 2000 and 2100 series are on the market but have been phased out by the 2200 series. FEXes are 1RU units and have three types of fixed interfaces:

  • Host Interfaces – Exactly what it says: used to connect end hosts
    • Type: 100/1000Mbps copper or 10GE (SFP+) fiber
  • Fabric Interfaces – Used to uplink the FEX to the Nexus fabric
    • Type: 10GE SFP+
  • FCoE Fabric Interfaces – Used to connect FCoE storage networks
    • Type: FCoE SFP+
Currently, 2200 series FEX models support either 24 or 48 copper ports (the 2224 or 2248) with 4 fabric uplinks; for 10GE hosts, the 2232 is available with 32 10GE SFP+ ports and 8 fabric uplinks.
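Host interfaces on a FEX show up on the parent switch numbered Ethernet<fex-id>/1/<port> and are configured like local ports. Using the example FEX 100 from earlier (the VLAN is a made-up value):

    ! Host interface 1 on FEX 100, facing a server
    interface Ethernet100/1/1
      switchport mode access
      switchport access vlan 10
      spanning-tree port type edge
      no shutdown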

Cisco has also deployed a blade solution for blade servers called the B22 Blade FEX. The B22 is a module installed in HP, Dell or Fujitsu blade chassis, allowing the enclosure to host an onboard FEX which connects directly to the Nexus fabric.

The B22 supports 16 1/10GE host interfaces and has 8 SFP+ fabric interfaces.

Nexus 1000v

The Nexus 1000v is a software switch used to replace the virtual switch (vSwitch) in a VMware vSphere or Microsoft Hyper-V virtualization environment. Vendor vSwitches have always been limited in feature set and visibility from a network standpoint. By integrating a Nexus switch into the virtualization environment, the network edge can be extended closer to server resources.

Network administrators also gain more control over access ports for VMs. With the Cisco Nexus 1110 Virtual Services Appliance, network administrators are able to centrally manage and control their 1000v vSwitches.
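The 1000v's signature construct is the port profile, which gets pushed to the hypervisor as a port group for VMs to attach to. Here is a minimal sketch for the vSphere flavor; the profile name and VLAN are made-up examples:

    ! Port profile presented to vCenter as a port group
    port-profile type vethernet WEB-VMS
      switchport mode access
      switchport access vlan 10
      vmware port-group
      no shutdown
      state enabled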

Now that we have an understanding of what hardware is out there, we will dig into the actual design scenarios for a DC deployment.