5.23.2017

Current Trends in DC Networking - CoreOS Install


As I outlined in my last post, CoreOS Container Linux is not your traditional operating system. The same can be said about the base install of CoreOS.

Depending on the use case, there are multiple deployment options for CoreOS. For cloud deployments, all major providers are supported.



For bare metal installs you can either iPXE/PXE boot into memory or install to disk. With VMware you also have the option of deploying an OVA and pointing to your cloud-config or Ignition file via a config drive.

Even though I am running this lab on VMware vSphere, I'm going to use the bare metal install.





In this post we will be using our topology from this series and installing to disk on all three servers. For the network fabric, I'm working with the Arista configuration.

Why? Well Arista paid me more than Cumulus!

Just kidding. No one paid me for any of this. I'm not that lucky.

Before installing we need servers. Right? Yes, servers…

I set up all three servers with the following specs.

  • 1x CPU / 1 socket
  • 4GB RAM (more if you plan to run a lot of containers)
  • 30GB hard drive
  • 2x network adapters
    • Mgmt. network connection
    • Fabric connection to the corresponding leaf
To install CoreOS we first need to boot the server from a live CD so we can install to disk. You are welcome to use any live CD you like, but I chose Fedora Workstation Live.


First, mount the ISO and boot the server. Once Fedora boots, it gives you the option to “Try Fedora” or “Install to Hard Drive”.

Select Try Fedora.

Once at the desktop, click Activities in the upper left, type “terminal” in the search box, and open Terminal.

In order to install CoreOS we need two files copied to the server.

  • coreos-install (a script that pulls the latest CoreOS version from the online repository)
  • A cloud-config or Ignition file (all the settings that make your server a server)


First, download coreos-install. Make sure the server has an IP address and internet connectivity. If not, download the file from another machine, drop it on a share somewhere, and SCP it to the server.


wget https://raw.githubusercontent.com/coreos/init/master/bin/coreos-install
chmod +x coreos-install

Before we can upload a cloud-config or Ignition file, we first need to create it.

Up until a year or so ago, cloud-config was the only way to apply settings to a CoreOS install, but it has limitations. Depending on what you need to apply, cloud-config can run into order-of-operations issues between booting and applying configuration. That is why CoreOS created Ignition.


For our lab we need to extend VLANs 101 and 201 to the servers. Through lots of trial and error and plenty of failed install attempts, I found cloud-config could not apply the advanced network options. So we will be working with Ignition in this post.

Ignition uses JSON configuration files, and its primary job is to write and edit files very early in the first boot, before the rest of the OS comes up.

Since CoreOS uses the systemd init system, we will need to apply network changes using networkd. This is my first run-in with networkd, and it's a little different from the flavors of Linux networking I'm used to. But spending a little time reading up on it helped a ton in getting this Ignition file built.

Here are a couple resources I used



Ignition configuration files are built around five main sections:

  • “ignition” – sets the Ignition spec version
  • “storage” – blah storage… edit or create files on the filesystem, attach drives and external storage
  • “systemd” – systemd units and settings
  • “networkd” – the good stuff!! All the fun networking settings
  • “passwd” – usernames, passwords / SSH keys, and groups


The Ignition file I created is as follows. It is also available in my GitHub blog repository under the CoreOS folder.


{
  "ignition": {
    "version": "2.0.0"
  },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "path": "/etc/hostname",
        "mode": 420,
        "contents": {
          "source": "data:,labsvr1"
        }
      }
    ]
  },
  "systemd": {},
  "networkd": {
    "units": [
    {
      "name": "00-ens160.network",
      "contents": "[Match]\nName=ens160\n\n[Network]\nDNS=10.0.0.1\nAddress=10.0.0.226/24\nGateway=10.0.0.1"
      },
      {
      "name": "10-ens192.101.netdev",
      "contents": "[NetDev]\nName=ens192.101\nKind=vlan\n\n[VLAN]\nId=101"
      },
      {
      "name": "10-ens192.102.netdev",
      "contents": "[NetDev]\nName=ens192.201\nKind=vlan\n\n[VLAN]\nId=201"
      },
      {
      "name": "00-ens192.network",
      "contents": "[Match]\nName=ens192\n\n[Network]\nDHCP=no\nVLAN=ens192.101\nVLAN=ens192.201"
      },
      {
      "name": "20-ens192.101.network",
      "contents": "[Match]\nName=ens192.101\n\n[Network]\nAddress=172.16.101.11/24"
      },
      {
      "name": "20-ens192.201.network",
      "contents": "[Match]\nName=ens192.201\n\n[Network]\nAddress=172.16.201.11/24"
      }
    ]
  },
  "passwd": {
    "users": [
      {
        "name": "that1guy15",
        "passwordHash": "$1$xyz$cEUv8aN9ehjhMXG/kSFnM1",
        "sshAuthorizedKeys": ["AAAAB3NzaC1yc2EAAAADAQsadfsswqoevelknavpoiwefr;ksdvfdkljaqwefrgoihsadfv1cUvV69Xxop0qMAIumA0xQvBED0pQogqIZQiC6CDreCIK4QgLpuy4vor4xlvkVZHdt37hCSjBLrIYEgO4pYtZa6EZsLSf+oQKvStnyCHohJFyNcHYjXF1P6XpDfEXIU5py+0kYcNXYEgcbe3FlvB2E7YCqUeVQGW2E1azaAHlmFwobZgTgurQMmVlLqFHIQsaGGChCojJY"
],
        "create": {
          "groups": ["sudo", "docker"]
        }
      }
    ]
  }
}

We have several areas of focus here.

Storage / files: We write “labsvr1” into the “/etc/hostname” file.
Networkd / units:

  • Configure the ens160 interface on our mgmt. network with DNS and a gateway.
  • Create two sub-interfaces (netdev) for VLANs 101 and 201 and tag each with its VLAN ID.
  • Configure network settings for ens192 and associate both VLANs with it.
  • Configure network settings for both sub-interfaces with their IP addresses.
This creates both networkd “netdev” and “network” files for each interface, which can be found under /etc/systemd/network.
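
For example, after the install finishes, the "00-ens192.network" unit from the Ignition file above lands on disk as a plain networkd unit file:

cat /etc/systemd/network/00-ens192.network

[Match]
Name=ens192

[Network]
DHCP=no
VLAN=ens192.101
VLAN=ens192.201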

The number at the front of each file name controls the order in which networkd processes the configs, lower numbers first.

Passwd / users: 

  • Creates the user “that1guy15”.
  • Applies a password from a hash (the hash above is for “password”; see the note below on generating your own) and associates an SSH public key with the user.
  • Adds the user to the sudo and docker groups.
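
If you want your own password instead of the example hash above, one way to generate a compatible MD5-crypt hash (the "$1$…" format used here) is with openssl; paste the output into "passwordHash". "MySecretPassword" below is just a placeholder:

openssl passwd -1 'MySecretPassword'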

Pay close attention to the format and layout of your Ignition file. Any error will cause CoreOS to back out completely during the install and not apply any configuration, leaving you with a useless server and a reinstall on your hands.
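
A cheap sanity check before burning an install attempt is to at least confirm the JSON parses. Assuming python3 is available on the live image (it is on Fedora Workstation), something like this will catch missing commas and braces, though not Ignition schema mistakes:

python3 -m json.tool ignition.json > /dev/null && echo "JSON syntax OK"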

Save the Ignition file to a network share and download it to the server via SCP.


scp that1guy15@10.0.0.250:/storage/ignition.json ./


I placed it on my file server in the storage directory. The “./” copies it into the current directory, next to the coreos-install script downloaded above.
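
If SCP is not convenient, another quick option is to serve the file over HTTP from the machine that has it and pull it down with curl. This assumes python3 is installed on that machine; the IP below just matches my file server from the SCP example:

# On the file server, from the directory holding ignition.json
python3 -m http.server 8000

# On the server being installed
curl -O http://10.0.0.250:8000/ignition.json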

Once you have both files on the server, it's time to install CoreOS Container Linux. Run the coreos-install script with the following flags.


sudo ./coreos-install -d /dev/sda -i ignition.json -C stable

-d – the disk to install to
-i – the Ignition file to use (-c if using a cloud-config file)
-C – specifies which release channel to use (stable, beta, alpha)
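
One word of caution: -d points at the disk that will be wiped and reimaged, so it is worth confirming /dev/sda really is the 30GB drive you created for the VM before kicking off the install. A quick check from the live environment:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT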

And you're off.

Once finished, you will be instructed to reboot. Once rebooted and presented with a login screen, you should see each interface present and set up with the correct IP address.

Then try to log in, and if it works you are all set!
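
A few quick checks after that first login will confirm the Ignition file did its job. These just verify the hostname, the VLAN sub-interfaces, and the group memberships we defined above:

cat /etc/hostname          # should read labsvr1
ip addr show ens192.101    # expect 172.16.101.11/24
ip addr show ens192.201    # expect 172.16.201.11/24
groups                     # should include sudo and docker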


We now have a functioning CoreOS install.

Repeat the same install process for the other two servers. Make sure to update the hostname and IP addresses in the Ignition file before installing.
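
Rather than hand-editing the JSON each time, a small sed one-liner can stamp out the per-server copies. The labsvr2 hostname and addresses below are only examples; substitute whatever your addressing plan calls for:

sed -e 's/labsvr1/labsvr2/' \
    -e 's/10.0.0.226/10.0.0.227/' \
    -e 's/172.16.101.11/172.16.101.12/' \
    -e 's/172.16.201.11/172.16.201.12/' \
    ignition.json > ignition-svr2.json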

In my next post we will move on to Docker and Swarm.