Multi-node Kubernetes cluster

This guide demonstrates how to build a two-node Kubernetes cluster. Kubernetes ships with scripts to install it on different cloud providers and locally on a Vagrant box, but the idea here is to provide an installation mechanism that is provider and OS agnostic. So, at the end of this guide, we'll have a script that can be run on any two machines that can communicate with each other. The script downloads the specified Kubernetes and etcd releases, installs all the components needed to bring up the Kubernetes master and slave nodes, and configures those components before booting them up.

For the impatient folks who just want to copy, paste and run the script, here's the link: https://gist.github.com/ric03uec/81f6dc1208c87e4f4b86#file-kube-install-sh


Some background:

Quoting from the Kubernetes GitHub page:

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.

Kubernetes provides a container management layer on top of Docker that makes it easy to scale a container-based application or set of microservices. It introduces additional constructs like ‘Pods‘, ‘ReplicationControllers‘, ‘Services‘ and ‘Namespaces’ which are used to interact with containers. Actual Docker containers or images are never manipulated directly, only through Kubernetes constructs. It provides a command line tool called kubectl to manipulate these objects, and a full REST API to do the same remotely. Reading the Kubernetes design document is highly recommended.

A minimal Kubernetes cluster has two nodes: one acts as the master and the other as the slave. The following components are installed on the master:

– etcd: a highly available key-value store, used for storing all the cluster information

– kube-apiserver: provides the REST API endpoint

– kube-scheduler: decides which nodes will run the containers defined in Pod(s)

– kube-controller-manager: maintains the state of Pod(s) as defined in the manifest

and the following components are installed on the slave:

– kube-proxy: used by ‘Services’ to create iptables rules to connect to Pod(s)

– kubelet: talks to Docker to start/stop/destroy containers

Environment Setup:

For this tutorial, I'll use Vagrant as the provider and bring up two fresh Ubuntu 14.04 (x86_64) machines on the local system. As mentioned earlier, this can just as well be done on two DigitalOcean or AWS machines that can connect to each other; absolutely no changes to the scripts are needed for that.

Put the following Vagrantfile in any folder, say /home/kube/Vagrantfile, and bring both machines up. The Vagrantfile creates two machines, named ‘kube-master‘ and ‘kube-slave‘. We'll use ‘kube-master‘ to install the Kubernetes master services and bring up etcd; ‘kube-slave’ will be used to install the Kubernetes slave components.
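The Vagrantfile itself was embedded as a gist that may not render here. A minimal version might look like the following; the box name and the private-network IPs are assumptions of this sketch:

```ruby
# Vagrantfile -- minimal sketch: two Ubuntu 14.04 machines on a private network.
# The box name and IP addresses below are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # Ubuntu 14.04 x86_64

  config.vm.define "kube-master" do |master|
    master.vm.hostname = "kube-master"
    master.vm.network "private_network", ip: "192.168.33.10"
  end

  config.vm.define "kube-slave" do |slave|
    slave.vm.hostname = "kube-slave"
    slave.vm.network "private_network", ip: "192.168.33.11"
  end
end
```

Run `vagrant up` from the folder containing the file, then `vagrant ssh kube-master` and `vagrant ssh kube-slave` in two separate terminals.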

After this, you should have two terminals, one ssh’d into the machine kube-master and one into machine kube-slave.

Steps:

The best documentation of the code is the code itself, which is why I've tried to make the script as organized and readable as I could, and threw in some comments just in case. Here I'll just explain the functions used and what they do; you'll get a better idea once you go through the script itself. The script uses environment variables heavily to make almost everything configurable.
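As an illustration of that environment-driven configuration, a setup might export values like the following before running the script. The variable names and versions here are assumptions of this sketch, not necessarily the ones the script reads:

```shell
#!/bin/bash
# Illustrative environment configuration -- variable names and versions are
# assumptions of this sketch; check the script itself for the actual ones.
export MASTER_IP=192.168.33.10              # address of the kube-master node
export SLAVE_IP=192.168.33.11               # address of the kube-slave node
export ETCD_VERSION=v2.0.9                  # etcd release to download
export KUBERNETES_RELEASE_VERSION=v0.17.0   # kubernetes release to download
export INSTALL_PREFIX=/usr/bin              # where the binaries get copied

echo "master=$MASTER_IP slave=$SLAVE_IP"
```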

– _updatehosts(): updates the /etc/hosts file to add entries for the master and slave nodes. Nothing fancy here.

– _installdocker(): installs Docker on the slave if it's not already there. You can install it manually and comment this function out.

– _stopservices(): runs a sanity check on the services and stops any that might accidentally be running.

– _installetcd(): the fun begins here. Downloads and extracts the etcd server binaries to a predefined path (/usr/bin in this case).

– download_kubernetes_release(): downloads and extracts the Kubernetes binaries into the /tmp folder.

– update_master_binaries(): copies the Kubernetes binaries to a predefined path (/usr/bin in this case).

– update_services_config(): this is the main function, where configuration of all the services takes place. All the config files live under /etc/default. Since only the last value set for any parameter is read, we simply append the configuration at the bottom of those files. For example, the config file /etc/default/kube-apiserver looks like the following:
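The original example was embedded as a gist. A sketch of what such a file might contain is below; the variable name and flags are assumptions based on early Kubernetes releases, so check your release's kube-apiserver flags before copying:

```shell
# /etc/default/kube-apiserver -- illustrative sketch; the variable name and
# flag set are assumptions, not the exact contents of the original gist.
KUBE_APISERVER_OPTS="--address=0.0.0.0 \
  --port=8080 \
  --etcd_servers=http://kube-master:4001 \
  --portal_net=10.254.0.0/16"
```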

– remove_redundant_config(): when running on the master, removes the config and upstart files for the slave services; when running on the slave, removes those for the master services.

– _startservices(): starts the services on the master and slave nodes.

– check_service_status(): checks whether all the services are running correctly.
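To make the flow concrete, here is a heavily simplified sketch of two of these helpers. The variable names, defaults and service-status check are assumptions of this sketch (the real script is more thorough), and the demo at the bottom runs against a temp file so it can be executed safely outside the cluster:

```shell
#!/bin/bash
# Simplified sketches of two helpers -- names and defaults here are
# illustrative assumptions, not the original implementations.
MASTER_IP=${MASTER_IP:-192.168.33.10}
SLAVE_IP=${SLAVE_IP:-192.168.33.11}
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}

_update_hosts() {
  # Append name -> IP mappings for both nodes, skipping if already present
  if ! grep -q "kube-master" "$HOSTS_FILE"; then
    echo "$MASTER_IP kube-master" >> "$HOSTS_FILE"
    echo "$SLAVE_IP kube-slave" >> "$HOSTS_FILE"
  fi
}

check_service_status() {
  # Report whether each named service appears to be running
  for svc in "$@"; do
    if service "$svc" status 2>/dev/null | grep -q running; then
      echo "$svc: RUNNING"
    else
      echo "$svc: NOT RUNNING"
    fi
  done
}

# Demonstrate _update_hosts against a temp file instead of the real /etc/hosts
HOSTS_FILE=$(mktemp)
_update_hosts
cat "$HOSTS_FILE"
```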

Testing:

Moment of truth. Execute the following commands on the master node and you should see similar output. After executing the second command, it might take a few minutes for the status to change to ‘RUNNING’, because the image is pulled from Docker Hub first.
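The exact commands and output were embedded as a gist that no longer renders here. As a minimal sketch, assume a pod manifest like the following; the file name, pod name and image are illustrative, and the v1 API schema is assumed (older releases used v1beta versions):

```yaml
# nginx-pod.yaml -- illustrative pod manifest, not the one from the original gist
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
```

The first command would then be something like `kubectl get nodes` to confirm the slave has registered with the master, and the second `kubectl create -f nginx-pod.yaml`; running `kubectl get pods` afterwards shows the pod's status switching to running once the image has been pulled.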


Hope this helped you understand the basics of Kubernetes and get this rather complicated setup working correctly.

Next up: connecting containers across the nodes of the cluster using Weave or Flannel.

