Provisioning EC2 instances on the AWS cloud & configuring a Kubernetes multi-node cluster.

🌀 Kubernetes: a container management tool

🌀 What is a Kubernetes Cluster?

A Kubernetes cluster is a set of node machines for running containerized applications.

  • First of all, let’s see how I have launched/provisioned instances on AWS using Ansible roles.
  • The AWS private key is needed in .pem format.
  • Here we require a dynamic inventory, because the instances’ IPs are not known in advance.
  • Ansible does the configuration over the SSH protocol, so it has to log in to the nodes; we therefore have to provide the remote_user.
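A hedged sketch of what the provisioning task inside the role can look like; every concrete value below (key pair name, AMI id, instance type, security group, region, count) is an assumption, and the amazon.aws collection must be installed:

```yaml
# Sketch of an EC2 provisioning task — all concrete values are placeholders.
- name: Launch the cluster instances
  amazon.aws.ec2_instance:
    name: "k8s-node"
    key_name: "mykey"                  # name of the .pem key pair (assumed)
    instance_type: "t2.micro"
    image_id: "ami-0123456789abcdef0"  # placeholder AMI id
    security_group: "k8s-sg"
    region: "ap-south-1"
    count: 3
```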
Link to get the ec2.ini file: -
  • aws_secret_access_key: (the secret key required by the dynamic inventory script; the value is not shown here)
Command to create a role:
ansible-galaxy role init <role_name>
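Running this command generates a standard role skeleton; for the k8s_master role used here, the directories relevant to this setup look like:

```
k8s_master/
├── tasks/main.yml     # the tasks of the role
├── vars/main.yml      # the variables of the role
├── handlers/main.yml
├── templates/
└── files/
```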
  • iproute-tc — a Linux package that provides the tc (traffic control) utility for managing network traffic; kubeadm expects it on every node.
  • kubectl — the command-line client used to talk to the cluster.
  • kubelet — the node agent that runs the containers scheduled to the node.
  • kubeadm — the tool that bootstraps the cluster (kubeadm init on the master, kubeadm join on the workers).
  • After that we need to copy the content of /etc/kubernetes/admin.conf to $HOME/.kube/config.
  • Then we need to change the ownership of the configuration file (config) to the regular user.
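The two steps above can be sketched as Ansible tasks in the master role; the paths follow the kubeadm defaults, while the target user ec2-user is an assumption:

```yaml
# Sketch of the kubeconfig tasks — the user "ec2-user" is an assumption.
- name: Create the .kube directory
  file:
    path: /home/ec2-user/.kube
    state: directory

- name: Copy admin.conf to the user's kubeconfig
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/ec2-user/.kube/config
    remote_src: yes

- name: Change the owner of the config file
  file:
    path: /home/ec2-user/.kube/config
    owner: ec2-user
    group: ec2-user
```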
  • Path of the variables file of the k8s_master role: k8s_master/vars/main.yml
  1. Install docker, iproute-tc, kubectl, kubeadm and kubelet.
  2. Enable the docker and kubelet services.
  3. Pull the Kubernetes config images.
  4. Configure the docker daemon.json file.
  5. Restart the docker service.
  6. Copy the k8s configuration file from the controller node to the worker node.
  7. Load the settings from all system configuration files (sysctl --system).
  8. Join the worker to the cluster using the token generated by the master.
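The final two steps above can be sketched as Ansible tasks; the variable join_command (the `kubeadm join ... --token ...` line printed by the master) is an assumption and must be captured or passed in beforehand:

```yaml
# Sketch of the last tasks of the worker role.
# `join_command` is an assumed variable holding the master's join line.
- name: Load settings from all system configuration files
  command: sysctl --system

- name: Join the worker to the cluster
  command: "{{ join_command }}"
```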
  • Path of the variables file of the k8s-workers role: k8s-workers/vars/main.yml
Kernel parameters required so that bridged traffic is visible to iptables (placed in a file under /etc/sysctl.d/):
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
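A hedged sketch of how these lines can be pushed to the nodes with Ansible; the destination file name k8s.conf is an assumption (any file under /etc/sysctl.d/ works):

```yaml
# Copy the bridge settings to the nodes and reload them.
# The file name "k8s.conf" is an assumption.
- name: Set bridge-nf-call sysctl parameters
  copy:
    dest: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1

- name: Reload sysctl settings
  command: sysctl --system
```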
  • In this file we also have to specify the names of the roles which we are using to execute the tasks.
  • hosts: in this parameter we have to assign the tag names of the instances that will be launched on the AWS cloud.
Command to test the dynamic inventory (run the inventory script with --list):
./ --list
ansible-playbook cluster_setup.yml
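The playbook cluster_setup.yml can be sketched as below; the tag-derived group names and the remote_user value are assumptions based on the setup described above:

```yaml
# cluster_setup.yml — a sketch, assuming the instances were tagged so the
# dynamic inventory exposes groups like tag_Name_master / tag_Name_worker.
- hosts: tag_Name_master
  remote_user: ec2-user
  roles:
    - k8s_master

- hosts: tag_Name_worker
  remote_user: ec2-user
  roles:
    - k8s-workers
```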

Let’s verify ….

  • Verified from the AWS management console.
Command to create a deployment:
kubectl create deployment <deployment_name> --image=<image_name>
kubectl get pods
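The same deployment can also be written declaratively; this manifest is a sketch of what `kubectl create deployment` generates, where the name myweb and the image nginx are illustrative assumptions:

```yaml
# Equivalent Deployment manifest (apply with `kubectl apply -f deploy.yml`).
# The name "myweb" and image "nginx" are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: nginx
          image: nginx
```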