Provisioning of EC2 instances on the AWS cloud & configuration of a Kubernetes multi-node cluster.

Priyanka Hajare
May 5, 2021 · 12 min read

🌀 Kubernetes: Container management tool

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The Kubernetes platform provides many benefits and components, and there are several options for installing and setting up a Kubernetes cluster, such as single-node, multi-node and cloud-based deployments.

In this article, we will focus on setting up a multi-node cluster.

🌀 What is a Kubernetes Cluster?

A Kubernetes cluster is a set of node machines for running containerized applications.

A Kubernetes multi-node cluster contains a master node and multiple worker (slave) nodes.

The control plane, i.e. the master, is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. The nodes actually run the applications and workloads.

Use case: Suppose we have a K8s cluster with multiple worker nodes. If one of the nodes goes down, we don't have to worry, because Kubernetes manages this failure with the help of its auto-scaling feature.

🌀 Now let's see how to configure a Kubernetes multi-node cluster using Ansible roles.

I have configured a K8s multi-node cluster containing one master and two worker nodes.

🔷 Pre-requisite :

We require one controller node configured with Ansible. To learn how to configure Ansible, you can refer to my earlier blog on configuring Ansible on a VM.

Link for blog 👉 : https://priyanka-hajare2700.medium.com/configuration-of-docker-using-ansible-6f9bbd0df6d3

  • To set up this cluster, I first launched 3 instances (1 as master and 2 as worker nodes) on the AWS cloud using Ansible roles.
  • So let's first see how I launched / provisioned the instances over AWS using Ansible roles.

🔷 Provisioning of instances over the AWS cloud using Ansible roles

  • Go to the Ansible controller node and check whether Ansible is installed on it or not.
  • Create an IAM user to get the access key and secret key, which are essential for launching instances using the Ansible role.
  • A private key of AWS in .pem format.
  • Here we require a dynamic inventory.

Now let’s get started..

I created one directory as a workspace, named k8s_task, where I have created the roles (i.e. aws_instance_provision, k8s_master, k8s-workers) for the respective tasks, a folder for the dynamic inventory, the private key (in .pem format), the Ansible configuration file and one YAML file (i.e. cluster_setup.yml) to run the tasks of the Ansible roles.

  • I have created the Ansible configuration file in the same workspace rather than using the configuration file from the root directory, because a playbook first looks for the configuration file in the same folder where that .yml file exists.

🔹 Ansible configuration file (ansible.cfg) :-

  • inventory: It is a keyword where I have given the path of the dynamic inventory, i.e. the folder where I have kept the ec2.py and ec2.ini scripts.
  • Ansible configures the managed nodes over the SSH protocol, so Ansible has to log in and we have to provide the remote_user.
  • Privilege escalation: the ec2-user gets root power with sudo, and we have to add a configuration rule so that Ansible uses sudo with the user ec2-user. The concept of giving root power to a normal user with sudo is called privilege escalation. A sketch of the configuration file is shown below.
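Here is a minimal sketch of such an ansible.cfg; the inventory folder and the .pem file name are placeholders for my actual paths:

[defaults]
# path to the folder holding ec2.py and ec2.ini
inventory = ./inv_dir
remote_user = ec2-user
# name of the AWS private key kept in the workspace (placeholder)
private_key_file = ./mykey.pem
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false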

🔹 First, let's understand and set up the dynamic inventory.

Dynamic inventory: the inventory is configured with scripts. The script will automatically pick up the IPs and configure the inventory.

We have to update the inventory in Ansible to configure the managed nodes, and for this we add some lines to the configuration file.

We are going to launch the managed nodes in AWS, so we use the well-known Python script ec2.py to retrieve the IPs, along with ec2.ini for making the groups.

Link to get the ec2.py file: https://github.com/ansible/ansible/blob/stable-2.9/contrib/inventory/ec2.py
Link to get the ec2.ini file: https://bit.ly/3gBmIgZ

I have downloaded the ec2.py and ec2.ini files, kept both inside one folder, i.e. inv_dir, and made both files executable.

Command to make a file executable: chmod +x <file_name>

e.g. chmod +x ec2.py

Now set the environment variables which are needed by boto.

Commands to set the environment variables:

✔ export AWS_ACCESS_KEY_ID="AXXXXXXXXXXXXXX"

✔ export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXXXXXXXXXX"

The AWS credentials can also optionally be specified in the ec2.ini file. For this you can use two keywords, i.e.

  • aws_access_key_id
  • aws_secret_access_key

Credentials specified here are ignored if the environment variables are set using the export command.
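For completeness, this is what those two lines look like inside ec2.ini (the values here are placeholders, not real credentials):

aws_access_key_id = AXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXX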

🔹 Now let's create the Ansible roles

Command to create roles:-
ansible-galaxy role init <role_name>

I created 3 roles which are as follows:

  1. aws_instance_provision: this role launches the EC2 instances.
  2. k8s_master: this role configures the master of the K8s cluster.
  3. k8s-workers: this role configures the worker nodes of the K8s cluster.
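So, inside the workspace, the role-creation command is run once per role:

ansible-galaxy role init aws_instance_provision
ansible-galaxy role init k8s_master
ansible-galaxy role init k8s-workers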

🔹 Now let's put the variables and tasks inside the aws_instance_provision role.

  • Create the variables in the main.yml file of the vars folder of aws_instance_provision.

I created the following variables inside aws_instance_provision/vars/main.yml.
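A minimal sketch of such a vars file is shown here; the variable names and all the values are placeholders, not necessarily the exact ones I used:

# aws_instance_provision/vars/main.yml (sketch, placeholder values)
region: ap-south-1
ami_id: ami-0123456789abcdef0        # Amazon Linux 2 AMI id for the chosen region
instance_type: t2.micro
key_name: mykey                      # name of the AWS key pair
sg_id: sg-0123456789abcdef0          # security group allowing SSH and the Kubernetes ports
subnet_id: subnet-0123456789abcdef0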

  • In the tasks/main.yml file we have to write the tasks / plays to launch the instances, for example as sketched below.
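A hedged sketch of the launch task with the Ansible 2.9 ec2 module follows; the tag value matches the tag_Name_k8s_master group used later, and the worker task is analogous with a k8s_workers tag and a count of two. The counts come from the prompts in the playbook shown further below.

# aws_instance_provision/tasks/main.yml (sketch)
- name: launch the master instance
  ec2:
    region: "{{ region }}"
    image: "{{ ami_id }}"
    instance_type: "{{ instance_type }}"
    key_name: "{{ key_name }}"
    group_id: "{{ sg_id }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    count: "{{ master_count }}"       # value asked for by vars_prompt in the playbook
    instance_tags:
      Name: k8s_master
    state: present
    wait: yes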

🔹 Ansible role to configure Kubernetes Master node

Now let's put the variables and tasks inside the k8s_master role which I created earlier.

The k8s_master role will configure the master of the K8s cluster on the EC2 instance which we launch as master using the aws_instance_provision role.

  • Steps for the configuration of the master of the K8s cluster. Let's have a look at each task and play one by one, as per the steps of the configuration.
  1. We require the following software packages:
  • docker
  • iproute-tc — the Linux package that manages traffic control (tc).
  • kubectl
  • kubelet
  • kubeadm

To install kubectl, kubeadm and kubelet, we have to configure another yum repository for Kubernetes. I have written a task for this using the yum_repository module.
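A sketch of that task, using the repository definition the Kubernetes documentation recommended at the time (the URLs may have changed since):

- name: configure the yum repository for kubernetes
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled: yes
    gpgcheck: yes
    repo_gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg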

2. Now we have repositories for all the software, so let's install all the packages mentioned above. I have used a loop here to install all five packages one after another; it makes our task easy. 🤩

The variable (i.e. software) used in the loop is defined in the vars folder of the k8s_master role; a sketch of the task is shown below.
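Assuming the software variable holds the list of package names, the install task can look roughly like this:

- name: install docker and the kubernetes packages
  package:
    name: "{{ item }}"
    state: present
  loop: "{{ software }}"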

3. Now let's write a task to start and enable the services of docker and kubelet.

For this task also I have used a loop, with the variable service which is defined in the vars folder of the k8s_master role; see the sketch below.
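A minimal sketch of that task, assuming service is the list of service names:

- name: start and enable docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: "{{ service }}"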

4. Wrote a task to pull the required Docker images using kubeadm:
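This boils down to running one kubeadm command from the role, for example:

- name: pull the kubernetes config images
  command: kubeadm config images pull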

5. Change the cgroup driver of Docker from cgroupfs to systemd

Changing the settings so that your container runtime and kubelet use systemd as the cgroup driver stabilizes the system. To configure this for Docker, set native.cgroupdriver=systemd.

For this I created a file, i.e. daemon.json, on the controller node at the location /root/k8s/daemon.json, and created a task to copy this file to the master node at the location /etc/docker/daemon.json (this task is written using the copy module).

  • Content of the daemon.json file:
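At minimum, the file switches Docker's cgroup driver, as described in the Kubernetes documentation:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}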

I created two variables (i.e. src and dest) to store the above paths, which are as follows:

src: /root/k8s/daemon.json (the path of the daemon.json file on the Ansible controller node)

dest: /etc/docker/daemon.json (the path where daemon.json is placed on the master node)

(Note: in my case I have given the variables the same names as the parameters of the copy module.)
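So the copy task can look like this:

- name: copy daemon.json to the master node
  copy:
    src: "{{ src }}"
    dest: "{{ dest }}"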

6. Now restart the Docker service

After changing any such system setting we need to restart Docker for the change to take effect. Here I have used the service module to restart Docker.

7. Now we have to initialize the master

To make the master ready in the K8s cluster we have to initialize it. I did this task using the kubeadm init command described below.

We start the Kubernetes control plane with kubeadm init, and we need to provide a CIDR range for the pods. By default, kubeadm requires 2 GiB RAM and 2 vCPUs, and the initialization fails if those resources are not available. Since my instances have less than that, I used --ignore-preflight-errors=NumCPU to ignore the CPU error and --ignore-preflight-errors=Mem to ignore the memory error while initializing the master.
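A sketch of that task (10.244.0.0/16 is the pod range conventionally used with flannel; adjust it to your own network plan):

- name: initialize the kubernetes master
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem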

8. Running all the commands given after initializing the master.

  • Created the directory .kube, which is the main folder where kubectl keeps its configuration files.
  • After that we need to copy /etc/kubernetes/admin.conf to $HOME/.kube/config.
  • Then we need to change the owner of the configuration file (config). The commands are shown below.
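These are the usual post-init commands that kubeadm itself prints, which I run as tasks on the master:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config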

9. Adding the flannel network plugin to the cluster

A Kubernetes cluster needs a network plugin for the overlay networking, and the CoreDNS pods depend on the network plugin.
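Flannel is applied with a single kubectl command against the manifest that was published at the time of writing (the URL may have moved since):

- name: add the flannel network plugin
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml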

10. Generating the token which will be used by the workers to join the master.

After creating the token, it is stored in the variable tokens, because I have assigned tokens as the variable in the register parameter. tokens.stdout shows the exact join command created by the master.

  • Here I have used local_action to create the file join-command first on the localhost, and later this file is copied from the localhost to the worker nodes. The join-command file contains the token created by the master, because we store the content of tokens.stdout_lines[0] in the join-command file. A sketch of these two tasks is shown below.
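A sketch of the two tasks, assuming the file is saved as join-command in the workspace on the controller node:

- name: generate the join command on the master
  command: kubeadm token create --print-join-command
  register: tokens

- name: save the join command on the controller node
  local_action: copy content="{{ tokens.stdout_lines[0] }}" dest="./join-command"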

📌 All the above steps are put in the main.yml file of the tasks folder of the k8s_master role, i.e. the complete tasks/main.yml file of the role.

  • Now let's create and define all the variables which I have used in the tasks/plays of the k8s_master role.
  • The path of the main.yml file of the vars folder of k8s_master is: k8s_master/vars/main.yml

I have created and defined the following variables for the k8s_master role.
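Based on the tasks above, the vars file contains at least the package list, the service list and the two copy paths; a sketch:

# k8s_master/vars/main.yml (sketch)
software:
  - docker
  - iproute-tc
  - kubectl
  - kubelet
  - kubeadm
service:
  - docker
  - kubelet
src: /root/k8s/daemon.json
dest: /etc/docker/daemon.json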

🔹 Let's configure the worker nodes of the K8s cluster using an Ansible role

I have created the k8s-workers role to configure the workers of the K8s cluster. I am going to create two workers, and the k8s-workers role will configure both instances as worker nodes at the same time.

  • The steps which I have implemented through Ansible modules to configure the worker nodes are as follows:
  1. Configure the yum repo for Kubernetes (kubectl, kubelet, kubeadm).
  2. Install docker, iproute-tc, kubectl, kubeadm and kubelet.
  3. Enable and start docker and kubelet.
  4. Pull the config images.
  5. Configure the Docker daemon.json file.
  6. Restart the Docker service.
  7. Copy the k8s.conf configuration file from the controller node to the worker node.
  8. Load the settings from all system configuration files.
  9. Join the worker to the cluster using the token generated by the master.
  • Now let's create and define all the variables which I have used in the tasks/plays of the k8s-workers role.
  • The path of the main.yml file of the vars folder of k8s-workers is: k8s-workers/vars/main.yml

I have created and defined the following variables for the k8s-workers role.

The files/k8s.conf file makes sure that the node's iptables can correctly see bridged traffic, so we have to ensure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
  • Below is the tasks/main.yml file of the k8s-workers role, in which I have written all the above-mentioned steps using Ansible modules; a sketch of the worker-specific tasks follows.
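Most of the tasks mirror the master role (repo, packages, services, daemon.json, restart). The worker-specific part is roughly the following sketch, assuming the join command was saved as join-command on the controller node as shown earlier:

- name: copy k8s.conf so iptables sees bridged traffic
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

- name: load settings from all system configuration files
  command: sysctl --system

- name: copy the join command generated by the master
  copy:
    src: join-command
    dest: /tmp/join-command.sh
    mode: "0755"

- name: join the worker to the cluster
  command: sh /tmp/join-command.sh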

🔹 Now let's write the playbook that uses the created roles:

I have created a playbook named cluster_setup.yml inside the workspace (i.e. k8s_task), which uses the created roles to execute the tasks written in the Ansible roles.

  • refresh_inventory: Here we have to use refresh_inventory so that the inventory is refreshed while the playbook is running, after the instances are launched.
  • In this file we also have to specify the names of the roles which we are using to execute the tasks.
  • I have used vars_prompt in the cluster_setup.yml file to make the project dynamic; when the playbook (cluster_setup.yml) runs, it asks how many masters and workers we want, and we have to give the input as integers.
  • hosts: In this parameter we have to assign the tag names of the instances which will be launched over the AWS cloud.

tag name of Master : tag_Name_k8s_master

tag name of workers : tag_Name_k8s_workers
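Putting this together, cluster_setup.yml looks roughly like the sketch below; the prompt variable names are placeholders, and a short pause for SSH to come up on the new instances may be needed before the second play:

# cluster_setup.yml (sketch)
- hosts: localhost
  vars_prompt:
    - name: master_count
      prompt: "How many master nodes?"
      private: no
    - name: worker_count
      prompt: "How many worker nodes?"
      private: no
  roles:
    - aws_instance_provision
  tasks:
    # re-read the dynamic inventory so the new instances appear in their tag groups
    - meta: refresh_inventory

- hosts: tag_Name_k8s_master
  roles:
    - k8s_master

- hosts: tag_Name_k8s_workers
  roles:
    - k8s-workers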

👉 These tag names are reported by the ec2.py script after the instances are launched. To get these tag names, run the following command.

./ec2.py --list

Using the above command we can see the list of tag-name groups of the instances along with their IPs, as shown below 👇.

🔹 Now let's run the cluster_setup.yml file, which will launch 3 EC2 instances (1 as master and 2 as workers) and also configure the K8s cluster on those instances. 🤩

Command to run the cluster_setup.yml playbook:-

ansible-playbook cluster_setup.yml
  • I want to launch one instance as master and two instances as workers, therefore I have entered 1 and 2 respectively at the prompts.

Now the 3 instances have launched, one of them is configured as the master and the other two are configured as workers, i.e. the K8s cluster is configured! 🤩

Let’s verify ….

  • Verified from the AWS console.

We can see that three instances have launched and are in the running state! 🤩

  • Now let's verify whether the Kubernetes multi-node cluster with one master and two worker nodes has been set up or not.

Go to the master instance and run the following command on the master.

Command: kubectl get nodes

As we can see, the K8s cluster is configured with 2 workers and a master, all shown in the Ready state. 🤩

  • Now let's create one pod with an application to check the K8s cluster. Here I am going to create a deployment using the httpd image, which comes pre-configured with the web server.
Command to create a deployment:
kubectl create deployment <deployment_name> --image=<image_name>
  • Then check the created deployment using the following command:
kubectl get pods
  • Let's expose the created deployment, i.e. webapp1, using a NodePort service so that we can access the pod from outside (over the internet); the exact commands are shown below.
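For my webapp1 deployment the sequence of commands looks like this (port 80 is httpd's default):

kubectl create deployment webapp1 --image=httpd
kubectl get pods
kubectl expose deployment webapp1 --type=NodePort --port=80
kubectl get svc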

As we can see, the pod has been exposed.

  • Now let's access the application from Chrome using the IP of any worker node of the K8s cluster and the port number on which the application is exposed.

To get the port number, run the command kubectl get svc

Now access the application using port number 30938 (the NodePort shown by kubectl get svc) and the public IP of any one of the worker nodes.

I have 2 worker nodes in the Kubernetes cluster.

Now the Kubernetes cluster is configured and working great! 🤩 We can use this cluster for deploying any containerized applications.

We have also seen that configuring a Kubernetes cluster using Ansible is much faster than configuring it manually.

Thank you for reading …!!!
