Provisioning of EC2 instances on the AWS cloud & configuration of a Kubernetes multi-node cluster.

🌀 Kubernetes: Container management tool

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The Kubernetes platform provides many benefits and components, along with several options for installing and setting up a cluster, such as single-node, multi-node, and cloud-based deployments.

In this article, we will focus on setting up a multi-node cluster.

🌀 What is a Kubernetes Cluster?

A Kubernetes cluster is a set of node machines for running containerized applications.

A Kubernetes multi-node cluster contains a master node and multiple worker nodes.

The control plane, i.e. the master, is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. The worker nodes actually run the applications and workloads.

Use case: suppose we have a k8s cluster with multiple worker nodes and one of the nodes goes down. We don't have to worry, because Kubernetes manages this failure automatically: the control plane reschedules the pods from the failed node onto the remaining healthy nodes.

🌀 Now let's see how to configure a Kubernetes multi-node cluster using Ansible roles.

I have configured a k8s multi-node cluster containing one master and two worker nodes.

🔷 Prerequisite:

We require one controller node configured with Ansible. To learn how to configure Ansible on a VM, you can refer to my earlier blog.

Link for blog 👉 : https://priyanka-hajare2700.medium.com/configuration-of-docker-using-ansible-6f9bbd0df6d3

🔷 Provisioning of instances on the AWS cloud using Ansible roles.

Now let's get started.

I created a workspace directory named k8s_task, in which I created the roles for each task (i.e. aws_instance_provision, k8s_master, and k8s-workers), a folder for the dynamic inventory, a private key (in .pem format), the Ansible configuration file, and one YAML file (cluster_setup.yml) that runs the roles.
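One possible layout of the workspace (the key file name here is just a placeholder for illustration, not the author's actual file):

k8s_task/
├── ansible.cfg
├── cluster_setup.yml
├── mykey.pem
├── inv_dir/
│   ├── ec2.py
│   └── ec2.ini
├── aws_instance_provision/
├── k8s_master/
└── k8s-workers/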

🔹 Ansible configuration file (ansible.cfg):
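The original shows this file as a screenshot; here is a minimal sketch of what such an ansible.cfg could look like, assuming Amazon Linux instances (hence remote_user = ec2-user) and the placeholder key name used above:

[defaults]
inventory = ./inv_dir
remote_user = ec2-user
private_key_file = ./mykey.pem
host_key_checking = False

[privilege_escalation]
become = True
become_method = sudo
become_user = root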

🔹 First, let's understand and set up the dynamic inventory.

Dynamic inventory: the inventory is configured with scripts. The script automatically picks up the IPs of the instances and builds the inventory.

We have to update the inventory in Ansible to configure the managed nodes; this means adding a few lines to the configuration file.

We are going to launch the managed nodes in AWS, so we use the well-known Python script ec2.py to retrieve the IPs, along with ec2.ini to create the host groups.

Link to get the ec2.py file: https://github.com/ansible/ansible/blob/stable-2.9/contrib/inventory/ec2.py
Link to get the ec2.ini file: https://bit.ly/3gBmIgZ

I downloaded the ec2.py and ec2.ini files, kept both inside one folder (i.e. inv_dir), and made both files executable.

Command to make the files executable: chmod +x <file_name>

e.g. chmod +x ec2.py

Now set the environment variables that are needed by boto.

Commands to set the environment variables:

✔ export AWS_ACCESS_KEY_ID="AXXXXXXXXXXXXXX"

✔ export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXXXXXXXXXX"

AWS credentials can also optionally be specified in the ec2.ini file. For this there are two keywords, i.e. aws_access_key_id and aws_secret_access_key.
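In the stable-2.9 ec2.ini these live in the [credentials] section (commented out by default), roughly like this:

[credentials]
aws_access_key_id = AXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXX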

Credentials specified there are ignored if the environment variables are set using the export command.

🔹 Now let's create the Ansible roles

Command to create a role:
ansible-galaxy role init <role_name>

I created 3 roles, which are as follows:

1. aws_instance_provision: this role provisions the EC2 instances on the AWS cloud.

2. k8s_master: this role configures the master node of k8s.

3. k8s-workers: this role configures the worker nodes of k8s.

🔹 Now let's put the variables and tasks inside the aws_instance_provision role.

I created the following variables inside aws_instance_provision/vars/main.yml.
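The variable and task files appear as screenshots in the original; below is a hedged sketch of what they could look like, using the ec2 module from Ansible 2.9. All values such as the AMI ID, key name, and security group are placeholders, not the author's actual values. The Name tags match the inventory groups used later (tag_Name_k8s_master and tag_Name_k8s_workers):

# aws_instance_provision/vars/main.yml (placeholder values)
region: ap-south-1
instance_type: t2.micro
ami_id: ami-xxxxxxxxxxxx
key_name: mykey
sg_name: k8s_sg

# aws_instance_provision/tasks/main.yml
- name: launch the master instance
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    group: "{{ sg_name }}"
    count: 1
    wait: yes
    instance_tags:
      Name: k8s_master

- name: launch the two worker instances
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    group: "{{ sg_name }}"
    count: 2
    wait: yes
    instance_tags:
      Name: k8s_workers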

🔹 Ansible role to configure the Kubernetes master node

Now let's put the variables and tasks inside the k8s_master role which we created earlier.

The k8s_master role will configure the master of the k8s cluster on the EC2 instance that we launch as the master using the aws_instance_provision role.

1. To install kubectl, kubeadm & kubelet we have to configure another yum repository for Kubernetes. I have written a task for this using the yum_repository module.
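A sketch of that task, assuming the Kubernetes yum repository that was standard at the time of writing (the packages.cloud.google.com repo has since been deprecated, so the URLs may differ today):

- name: configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled: yes
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg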

2. Now that we have a repository for all the software, let's install the packages mentioned above. I used a loop here to install all five packages one after another, which makes our task easy. 🤩

The variable (i.e. software) used in the loop is defined in the vars folder of the k8s_master role.
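A minimal sketch of such a looped install task (the package names themselves are defined with the role variables; see the vars sketch further below):

- name: install docker and the Kubernetes packages
  package:
    name: "{{ item }}"
    state: present
  loop: "{{ software }}"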

3. Now let's write a task to start and enable the docker and kubelet services.

For this task also I used a loop, with the variable service defined in the vars folder of the k8s_master role.
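A minimal sketch of that task:

- name: start and enable docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: "{{ service }}"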

4. Write a task to pull the required images using kubeadm.
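A sketch of this step, assuming it is done with the command module:

- name: pull the Kubernetes control-plane images
  command: kubeadm config images pull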

5. Change the driver of docker from cgroupfs to systemd

Changing the settings such that your container runtime and kubelet use systemd as the cgroup driver stabilizes the system. To configure this for Docker, set native.cgroupdriver=systemd.

For this I created a file, daemon.json, on the controller node at /root/k8s/daemon.json, and wrote a task to copy it to the master node at /etc/docker/daemon.json (this task uses the copy module).

I created two variables (i.e. src & dest) to store the above paths, which are as follows:

src: /root/k8s/daemon.json (the path of the daemon.json file on the Ansible controller node)

dest: /etc/docker/daemon.json (the path of the daemon.json file on the k8s master node)

(Note: in my case I named the variables the same as the parameters of the copy module.)
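A sketch of this step; the daemon.json content follows directly from the native.cgroupdriver=systemd setting mentioned above:

# /root/k8s/daemon.json on the controller node
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# task in k8s_master/tasks/main.yml
- name: copy daemon.json to the master
  copy:
    src: "{{ src }}"
    dest: "{{ dest }}"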

6. Now restart the docker service

After changing any system setting we need to restart Docker for the change to take effect. Here I used the service module to restart docker.
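A minimal sketch:

- name: restart docker
  service:
    name: docker
    state: restarted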

7. Now we have to initialize the master

To make the master ready in the k8s cluster we have to initialize it. I did this using the kubeadm init command sketched below.

We can start the Kubernetes services with kubeadm init, and we need to provide a CIDR range for the pods. By default, kubeadm requires 2 GiB of RAM and 2 vCPUs, and the initialization fails if we don't provide those resources. Since my instances have less than that, I used --ignore-preflight-errors=NumCPU to ignore the CPU error and --ignore-preflight-errors=Mem to ignore the memory error while initializing the master.
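A sketch of the init task; the pod network CIDR 10.244.0.0/16 is an assumption here (it is flannel's default range), while the two --ignore-preflight-errors flags are the ones described above:

- name: initialize the Kubernetes master
  command: >
    kubeadm init
    --pod-network-cidr=10.244.0.0/16
    --ignore-preflight-errors=NumCPU
    --ignore-preflight-errors=Mem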

8. Run the commands that kubeadm prints after initializing the master.
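These are the standard post-init commands that kubeadm prints; as an Ansible task they could look like this:

- name: set up kubeconfig for the admin user
  shell: |
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config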

9. Adding the flannel network plugin to the cluster

A Kubernetes cluster needs a network plugin for overlay networking, and the CoreDNS pods depend on the network plugin.
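A sketch of the flannel step, using the manifest URL that was common at the time of writing (the flannel project has since moved, so the URL may differ today):

- name: deploy the flannel network plugin
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml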

10. Generating the token that the workers will use to join the master.

After the token is created it is stored in the variable tokens, because I assigned tokens as the variable in the register parameter. tokens.stdout shows the exact join command generated by the master.
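A sketch of the token step, assuming kubeadm's --print-join-command option is used so that tokens.stdout holds the full join command:

- name: generate the join command for the workers
  command: kubeadm token create --print-join-command
  register: tokens

- name: show the join command
  debug:
    var: tokens.stdout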

📌 All the above steps are combined, in order, in the main.yml file of the tasks folder of the k8s_master role (tasks/main.yml).

I have created and defined the following variables for the k8s_master role.
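The original shows these in a screenshot; here is a sketch based on the tasks above. The article says five packages are installed, so iproute-tc is a guess for the fifth alongside docker, kubeadm, kubelet, and kubectl:

# k8s_master/vars/main.yml
software:
  - docker
  - kubeadm
  - kubelet
  - kubectl
  - iproute-tc

service:
  - docker
  - kubelet

src: /root/k8s/daemon.json
dest: /etc/docker/daemon.json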

🔹 Let's configure the worker nodes of the k8s cluster using an Ansible role

I created the k8s-workers role to configure the workers of the k8s cluster. I am going to create two workers, and the k8s-workers role will configure both instances as worker nodes at the same time.

I have created and defined the following variables for the k8s-workers role.

The files/k8s.conf file makes sure that the node's iptables can correctly see bridged traffic; we have to ensure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config (see the task sketch after this snippet):

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
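The worker role repeats the repository, package, service, and cgroup-driver steps from the master role, then applies this sysctl config and joins the cluster. A sketch of the worker-specific tasks follows; fetching the registered join command from the master via hostvars is one possible approach, not necessarily the author's exact one:

- name: copy k8s.conf so bridged traffic is visible to iptables
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

- name: reload sysctl settings
  command: sysctl --system

- name: join the worker to the cluster
  command: "{{ hostvars[groups['tag_Name_k8s_master'][0]]['tokens']['stdout'] }}"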

🔹 Now let's write a playbook to use the created roles:

I have created a playbook named cluster_setup.yml inside the workspace (i.e. k8s_task) which uses the created roles to execute the tasks written in them.

Tag name of the master: tag_Name_k8s_master

Tag name of the workers: tag_Name_k8s_workers

👉 These tag names are generated by the ec2.py script after the instances are launched. To get the tag names, run the following command:

./ec2.py --list

Using the above command we can see the list of tag names of the instances along with their IPs.
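A sketch of what cluster_setup.yml could look like, using the tag-based groups above. Note that the inventory may need to be refreshed between the provisioning play and the configuration plays (e.g. with meta: refresh_inventory), since the instances do not exist when the playbook starts:

- hosts: localhost
  roles:
    - aws_instance_provision

- hosts: tag_Name_k8s_master
  roles:
    - k8s_master

- hosts: tag_Name_k8s_workers
  roles:
    - k8s-workers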

🔹 Now let's run cluster_setup.yml, which will launch 3 EC2 instances (one as the master and two as workers) and configure the k8s cluster on those instances. 🤩

Command to run the cluster_setup.yml playbook:

ansible-playbook cluster_setup.yml

Now the 3 instances have launched; one of them is configured as the master and the other two as workers, i.e. the k8s cluster is configured! 🤩

Let’s verify ….

We can see that the three instances have launched and are in the running state! 🤩

Go to the master instance and run the following command on it.

Command: kubectl get nodes

As we can see, the k8s cluster is configured with 2 workers and a master, all showing in the Ready state. 🤩

Command to create a deployment:
kubectl create deployment <deployment_name> --image=<image_name>
kubectl get pods

Next, expose the deployment so that the application is reachable from outside the cluster. As we can see, it has been exposed.
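The expose command itself is not shown in the text; given the NodePort 30938 used below, it was presumably something like:

kubectl expose deployment <deployment_name> --type=NodePort --port=80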

To get the port number, run the command: kubectl get svc

Now access the application using port no. 30938 (the NodePort shown by kubectl get svc) and the public IP of any one of the worker nodes.

I have 2 worker nodes in the Kubernetes cluster, i.e. the application can be accessed through either worker's public IP.

Now the Kubernetes cluster is configured and it is working great! 🤩 We can use this cluster to deploy any containerized application.

We have also seen that configuring a Kubernetes cluster using Ansible is much faster than configuring it manually.

Thank you for reading …!!!