Kubernetes cluster with flannel overlay network

This is the third and final post in the series where we play around with Docker and Kubernetes. The first two posts are available here:

  * Multi node kubernetes cluster
  * Docker overlay network using flannel

In this tutorial I’ll explain how to bring up a multi-node Kubernetes cluster with an overlay network, which essentially combines what I’ve explained in the previous posts. An overlay is necessary to fulfill Kubernetes’ networking requirement that every container gets its own IP and can reach every other container directly. All of this is taken care of automagically when the cluster is brought up on GCE, but the manual configuration is slightly complicated: it’s non-trivial to set up so many components correctly, and with so many tools available for the same job, it’s difficult to figure out which one to pick. I picked flannel because of its simplicity and the community backing it.

As before, the code for everything explained here is available in the k8s-installer repo (linked below). Feedback/suggestions to improve it are most welcome. I’ll bring up the cluster on a local box using Vagrant, but the script can be run on any cloud. As of now, the script is only compatible with Ubuntu 14.04. Below is a rough sketch of how the components are structured. Flannel makes sure all containers are brought up on the same network.

[Diagram: kube_flannel — how the components are structured]

Bootstrapping:

  * Bringing up the cluster: running “vagrant up” from inside the directory will bring up two machines, a master and a node, with static IPs (the full bootstrap flow is sketched after this list):

    ```ruby
    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
      # Create a private network, which allows host-only access to the machine
      # using a specific IP.
      config.vm.define "kube-master" do |master|
        master.vm.box = "trusty64"
        master.vm.network "private_network", ip: "192.168.33.10"
        master.vm.hostname = "kube-master"
      end

      config.vm.define "kube-slave1" do |slave|
        slave.vm.box = "trusty64"
        slave.vm.network "private_network", ip: "192.168.33.11"
        slave.vm.hostname = "kube-slave1"
      end
    end
    ```


  * Clone the repo https://github.com/ric03uec/k8s-installer at any location, preferably “/tmp”
  * The script usage is something like this:

    ```sh
    Usage:
    ./kube-installer.sh
    Options:
      --master <master ip address>                       Install kube master with provided IP
      --slave <slave ip address> <master ip address>     Install kube slave with provided IP
    ```


      * for master, run “sudo ./kube-installer.sh --master 192.168.33.10”
      * for slave, run “sudo ./kube-installer.sh --slave 192.168.33.11 192.168.33.10”
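
Putting the list above together, the full bootstrap flow looks roughly like the sketch below. Cloning inside each VM and the “/tmp” location are my assumptions; the installer flags and IPs are the ones listed above:

```sh
# on the host: boot both VMs defined in the Vagrantfile
vagrant up

# inside the master VM ("vagrant ssh kube-master"):
git clone https://github.com/ric03uec/k8s-installer /tmp/k8s-installer
cd /tmp/k8s-installer
sudo ./kube-installer.sh --master 192.168.33.10

# inside the slave VM ("vagrant ssh kube-slave1"), pointing it at the master:
git clone https://github.com/ric03uec/k8s-installer /tmp/k8s-installer
cd /tmp/k8s-installer
sudo ./kube-installer.sh --slave 192.168.33.11 192.168.33.10
```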

**Master:**

The installer executes the following steps for the master node:

  * download and extract the Kubernetes master binaries (kube-apiserver, kube-controller-manager, kube-scheduler, kubectl)
  * download and install etcd
  * copy configuration files for etcd, kube-apiserver, kube-controller-manager and kube-scheduler to the appropriate locations
  * start all services on the master
  * update the subnet configuration for flannel in etcd
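
The last step is what lets flannel on each node lease its own subnet later. A minimal sketch of that configuration push, assuming flannel’s default etcd key; the actual CIDR the installer writes may differ:

```sh
# hypothetical example: seed the overlay network range in etcd on the master;
# flanneld on each node reads this key and carves out a per-node subnet from it
etcdctl set /coreos.com/network/config '{ "Network": "10.100.0.0/16" }'
```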

**Node(s):**

The installer executes the following steps for the slave nodes:

  * install Docker
  * download and extract the Kubernetes node binaries (kube-proxy, kubelet)
  * install flannel
  * copy configuration files for flannel, docker, kubelet and kube-proxy to the appropriate locations
  * update the Docker config to use the flannel bridge (sketched below)
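
The last two steps are the glue between Docker and the overlay. A rough sketch of how this is typically wired up on Ubuntu 14.04 (the paths and flags here are the conventional ones, not copied from the installer):

```sh
# flanneld writes the subnet it leased for this node to /run/flannel/subnet.env,
# e.g. FLANNEL_SUBNET=10.100.63.1/24 and FLANNEL_MTU=1472
source /run/flannel/subnet.env

# point the Docker daemon's bridge at that subnet so every container on this
# node gets an IP that is routable across the overlay
echo "DOCKER_OPTS=\"--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" | sudo tee -a /etc/default/docker
sudo service docker restart
```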

**The routing process:**

The following links provide more details on how routing takes place using kube-proxy and the overlay network:

https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#the-gory-details-of-virtual-ips

https://raw.githubusercontent.com/coreos/flannel/master/packet-01.png
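
As a quick illustration of what those links describe: a service’s virtual IP is never assigned to any interface; the userspace kube-proxy of this Kubernetes generation installs iptables rules that redirect VIP traffic to a local proxy port, and the proxied connection to the chosen pod then rides the flannel overlay. Once the cluster is up you can peek at those rules on a node (chain names are the ones used by the userspace proxier; this is an illustration, not installer output):

```sh
# list the NAT rules kube-proxy maintains for service virtual IPs
sudo iptables -t nat -L KUBE-PORTALS-CONTAINER -n
sudo iptables -t nat -L KUBE-PORTALS-HOST -n
```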
