
Deploying Kubernetes with Kubespray


Introduction

There are multiple ways to deploy a production-ready Kubernetes cluster, and Kubespray is one of them: it uses Ansible to configure the nodes and deploy a Kubernetes cluster on pre-provisioned infrastructure. You can automate the infrastructure provisioning itself with Terraform or even with Ansible, but we are not covering that in this article.

What is Kubernetes?

In simple words, Kubernetes is an open-source platform for managing containerized workloads and services. I don't want to repeat the well-known details – read them here.

What is Kubespray?

Kubespray is a combination of Kubernetes and Ansible, and you can use it to deploy Kubernetes clusters on bare metal or on cloud platforms like AWS, GCE, Azure, OpenStack, and vSphere.

Kubespray Features

  • Can deploy Highly Available (HA) clusters
  • Composable options for network plugins (Weave, Calico, Flannel, etc.)
  • Supports most popular Linux distributions (Debian, Ubuntu, CentOS/RHEL, Fedora, Fedora CoreOS, Oracle Linux, openSUSE, etc.)

Let's deploy a Kubernetes Cluster using Kubespray

As mentioned earlier, Kubespray supports the market's leading cloud platforms as well as bare-metal servers for deploying Kubernetes clusters. But as a developer or administrator (or a student), you may not have the luxury of free cloud or bare-metal servers. For that reason, I have decided to demonstrate this using VirtualBox with Vagrant.

Step 1: Prepare Ansible on your Workstation

Since Kubespray relies fully on Ansible, you need a Linux-based machine (Red Hat, Debian, CentOS, macOS, any of the BSDs, and so on) for all steps. Ansible is not natively supported on Windows; you may need to use WSL or a dedicated virtual machine inside VirtualBox for Ansible.

Also read: How to Install Ansible (Part 2 of Ansible Guide)

You can use any installation method, based on the options available for your workstation OS. The commands below are for an RPM-based workstation (RHEL/CentOS).

$ yum install epel-release -y
$ yum install -y ansible

## Also install additional packages as needed.
$ yum install python36 -y
$ yum install python-pip -y
$ pip2 install jinja2

## Check version
$ ansible --version
ansible 2.9.6
config file = None
configured module search path = ['/Users/gini/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/gini/Library/Python/3.7/lib/python/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Nov 15 2019, 04:04:52) [Clang 11.0.0 (clang-1100.0.33.16)]
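If your workstation is not RPM-based, pip works on most platforms. A minimal sketch, assuming Python 3 and pip are already installed:

$ python3 -m pip install --user ansible
$ ansible --version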

Also read: Official Ansible Documentation.

Step 2: Prepare Nodes for Kubernetes Cluster

As I mentioned, these tasks can be done manually or with automated methods such as Terraform or Ansible (whichever you prefer); we are using Vagrant for this purpose. If you check our Vagrantfile, you can see that we are creating 1x master node and 2x worker nodes for our Kubernetes cluster. We will use the CentOS 7 image for the nodes, but you are free to select another operating system. (Keep in mind that you may need to amend the Ansible playbook accordingly, as some of the tasks are written for CentOS-based platforms.)

Why Vagrant and VirtualBox?

As you know, Vagrant is a simple, easy-to-use tool for building and managing virtual machine environments in a single workflow, and it is available for Windows, Linux, and macOS. You don't need to worry about cloud infrastructure, as you can do most of this on your own laptop or desktop with virtual machines on top of VirtualBox (or another supported virtualization platform). We will create 3 virtual machines in VirtualBox using Vagrant.

Also read: Deploy Minikube Using Vagrant and Ansible on VirtualBox – Infrastructure as Code

Note: You can skip this step if you are preparing your virtual machines/servers for the Kubernetes cluster nodes by other methods, but make sure you cover the steps for configuring the virtual machines before starting the Kubespray playbooks.

Clone the Vagrant Repo for Nodes Preparation

Important Notes

  • We will use an Ansible playbook (node-config.yml) to configure the provisioned virtual machines with a basic user (remote_user_name: devops), sudo access, and SSH key configuration for password-less login from the workstation (from where you will run Kubespray later).
  • Update remote_user_public_key_file with your ssh public key filename.
  • Master IP address will start from 192.168.50.11 (192.168.50.#{i + 10})
  • Node IP address will start from 192.168.50.21 (192.168.50.#{i + 20})
$ git clone https://github.com/ginigangadharan/vagrant-iac-usecases
$ cd vagrant-iac-usecases/virtualbox-iac-kubespray
$ vagrant up
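While the machines are provisioning, you can check their state from another terminal with the standard Vagrant status subcommand:

$ vagrant status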

Wait for the virtual machines to boot up and for the Ansible playbook to configure user and sudo access; then let's verify password-less access.

gini@greenmango ~ % for i in 11 21 22;do ssh devops@192.168.50.$i hostname;done
master-1
node-1
node-2

Note: There is a Vagrantfile inside the Kubespray repo, and you can use it directly (without cloning the Vagrant repo I mentioned above). But you will have less control over the node names and count, and you will not see the full flow of a Kubespray deployment.

Step 3: Prepare Node access via SSH

If you used the Vagrant method above to provision the virtual machines, this step is already completed, as our provisioner playbook (node-config.yml) takes care of it.

If you are preparing your nodes manually or in any other way, you need to configure SSH key-based authentication so that Kubespray can use Ansible to automate the Kubernetes deployment on the nodes.

Refer to How to set up SSH key-based authentication.
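A minimal sketch of that setup, assuming the devops user already exists on each node and your public key is at ~/.ssh/id_rsa.pub (adjust user, key path, and IPs to your environment):

$ for i in 11 21 22; do ssh-copy-id -i ~/.ssh/id_rsa.pub devops@192.168.50.$i; done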

Verify the access without password

gini@greenmango ~ % for i in 11 21 22;do ssh devops@192.168.50.$i hostname;done
master-1
node-1
node-2

Step 4: Clone Kubespray Repo and Prepare Configuration

We will clone the entire Kubespray repository as below.

gini@greenmango codes % git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'…
remote: Enumerating objects: 48501, done.
remote: Total 48501 (delta 0), reused 0 (delta 0), pack-reused 48501
Receiving objects: 100% (48501/48501), 14.39 MiB | 1.57 MiB/s, done.
Resolving deltas: 100% (26978/26978), done.

Switch to the kubespray directory and install the dependencies from the requirements.txt file using pip.

gini@greenmango codes % cd kubespray
gini@greenmango kubespray % pip3 install -r requirements.txt
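To keep these Python dependencies isolated from your system packages, you may prefer to install them inside a virtual environment. An optional sketch, assuming Python 3's built-in venv module:

gini@greenmango kubespray % python3 -m venv ~/.venvs/kubespray
gini@greenmango kubespray % source ~/.venvs/kubespray/bin/activate
gini@greenmango kubespray % pip3 install -r requirements.txt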

Prepare Kubernetes Cluster Config

Now create a copy of the sample inventory directory for our cluster setup.

gini@greenmango kubespray % cp -rfp inventory/sample inventory/mycluster

Now we will declare our node IP addresses for auto-generating the hosts file.

gini@greenmango kubespray % declare -a IPS=(192.168.50.11 192.168.50.21 192.168.50.22)

And generate the sample inventory file using the provided Python script.

gini@greenmango kubespray % CONFIG_FILE=inventory/mycluster/hosts.yaml \
python3 contrib/inventory_builder/inventory.py ${IPS[@]}
DEBUG: Adding group all
DEBUG: Adding group kube-master
DEBUG: Adding group kube-node
DEBUG: Adding group etcd
DEBUG: Adding group k8s-cluster
DEBUG: Adding group calico-rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube-master
DEBUG: adding host node2 to group kube-master
DEBUG: adding host node1 to group kube-node
DEBUG: adding host node2 to group kube-node
DEBUG: adding host node3 to group kube-node

Configure hosts.yaml

Review inventory/mycluster/hosts.yaml, which was created in the previous step, and assign each host as a master or worker node based on its IP address. Refer to the sample hosts.yaml below. As mentioned in the beginning, we will have 1x master node and 2x worker nodes in this demo. Amend hosts.yaml to match your node count, and make sure you update all the sections: kube-node, kube-master, etcd, etc.

gini@greenmango kubespray % cat inventory/mycluster/hosts.yaml
all:
  hosts:
    master-1:
      ansible_host: 192.168.50.11
      ip: 192.168.50.11
      access_ip: 192.168.50.11
    node-1:
      ansible_host: 192.168.50.21
      ip: 192.168.50.21
      access_ip: 192.168.50.21
    node-2:
      ansible_host: 192.168.50.22
      ip: 192.168.50.22
      access_ip: 192.168.50.22
  children:
    kube-master:
      hosts:
        master-1:
    kube-node:
      hosts:
        node-1:
        node-2:
    etcd:
      hosts:
        master-1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
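Before moving on, it is worth confirming that Ansible can reach every node with this inventory; a quick sanity check using Ansible's built-in ping module:

gini@greenmango kubespray % ansible -i inventory/mycluster/hosts.yaml all -m ping -u devops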

Adjust group_vars

Now, review the group_vars files below and adjust the configuration as needed.

  • cat inventory/mycluster/group_vars/all/all.yml
  • cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

For a basic Kubernetes cluster, you can use these files as they are, without amending anything. In this case, we are making some small changes to enable some features in Kubernetes (a scripted version of the same edits follows the list below).

  • Edit inventory/mycluster/group_vars/all/all.yml and uncomment kube_read_only_port as below.
## The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
kube_read_only_port: 10255
  • Edit inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml and change the default network plugin to weave. (You can keep the default plugin, which is calico.)
# Choose network plugin (cilium, calico, contiv, weave or flannel. 
# Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider 
# setup appropriate routing
kube_network_plugin: weave
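If you prefer to script these two edits, something like the following should work. This is a sketch using GNU sed; it assumes the sample files contain the exact default lines shown above, so verify the variable names and defaults against your Kubespray version (on macOS, use sed -i ''):

gini@greenmango kubespray % sed -i 's/^# kube_read_only_port: 10255/kube_read_only_port: 10255/' inventory/mycluster/group_vars/all/all.yml
gini@greenmango kubespray % sed -i 's/^kube_network_plugin: calico/kube_network_plugin: weave/' inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml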

Step 5: Let's Deploy Kubernetes

Now we are ready to deploy Kubernetes using Kubespray; we will execute the Ansible playbook as below.

gini@greenmango kubespray % ansible-playbook \
  -i inventory/mycluster/hosts.yaml cluster.yml \
  -u devops -b

Where,

  • -i : the inventory file to be used by ansible-playbook
  • cluster.yml : the playbook that deploys the cluster
  • -u devops : the user account we created on all nodes for password-less SSH access
  • -b : enable become; sudo access is needed for installing packages, starting services, creating SSL certificates, etc.

The playbook will take several minutes to complete (just over nine minutes in this run, per the recap below), depending on your node specification and your network and internet speed.

PLAY RECAP **********************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
master-1                   : ok=532  changed=42   unreachable=0    failed=0    skipped=1119 rescued=0    ignored=0   
node-1                     : ok=325  changed=19   unreachable=0    failed=0    skipped=573  rescued=0    ignored=0   
node-2                     : ok=325  changed=19   unreachable=0    failed=0    skipped=572  rescued=0    ignored=0   

Monday 23 November 2020  18:51:17 +0800 (0:00:00.083)       0:09:26.945 ******* 
=============================================================================== 
download : download_container | Download image if required ------------------------------------- 125.29s
download : download_container | Download image if required -------------------------------------- 75.45s
download : download_container | Download image if required -------------------------------------- 35.34s
download : download_file | Download item -------------------------------------------------------- 32.35s
kubernetes/master : Master | wait for kube-scheduler --------------------------------------------- 8.96s
kubernetes/preinstall : Install packages requirements -------------------------------------------- 8.52s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources -------------------------------------- 5.74s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template ---------------------------- 5.10s
network_plugin/calico : Wait for calico kubeconfig to be created --------------------------------- 4.29s
kubernetes/preinstall : Get current calico cluster version --------------------------------------- 3.97s
Gather necessary facts --------------------------------------------------------------------------- 3.74s
network_plugin/calico : Calico | Create calico manifests ----------------------------------------- 3.48s
network_plugin/calico : Calico | Create Calico Kubernetes datastore resources -------------------- 2.96s
policy_controller/calico : Start of Calico kube controllers -------------------------------------- 2.87s
container-engine/docker : ensure docker packages are installed ----------------------------------- 2.86s
download : download | Download files / images ---------------------------------------------------- 2.79s
network_plugin/calico : Start Calico resources --------------------------------------------------- 2.71s
policy_controller/calico : Create calico-kube-controllers manifests ------------------------------ 2.13s
download : download_file | Download item --------------------------------------------------------- 1.96s
kubernetes/master : kubeadm | Check if apiserver.crt contains all needed SANs -------------------- 1.91s

Now you have your Kubernetes cluster deployed by Kubespray. Great!

Step 6: How to access your newly installed Kubernetes Cluster?

By default, Kubespray configures the kube-master hosts with insecure access to kube-apiserver on port 8080 (http://localhost:8080), which you can reach from the master node. But we want to access the cluster securely, so we will use the default kubeconfig created at /etc/kubernetes/admin.conf.

Note: For a production environment, follow best practices for accessing the cluster.

Access Kubernetes Cluster from Master node

We will take /etc/kubernetes/admin.conf on the master node and copy it to the user's home directory, as below.

Login to Master node

gini@greenmango virtualbox-iac-kubespray % ssh devops@192.168.50.11
Last login: Mon Nov 23 10:13:03 2020 from 192.168.50.1
[devops@master-1 ~]$

Copy /etc/kubernetes/admin.conf to the devops home directory (otherwise you will not have full permissions on that config file) and adjust its ownership accordingly.

[devops@master-1 ~]$ sudo cp /etc/kubernetes/admin.conf ~/
[devops@master-1 ~]$ USERNAME=$(whoami)
[devops@master-1 ~]$ sudo chown -R $USERNAME:$USERNAME ~/admin.conf

Now test the cluster access by pointing kubectl at the kubeconfig file.

[devops@master-1 ~]$ kubectl get nodes --kubeconfig=admin.conf
NAME       STATUS   ROLES    AGE    VERSION
master-1   Ready    master   3h6m   v1.19.3
node-1     Ready    <none>   3h5m   v1.19.3
node-2     Ready    <none>   3h5m   v1.19.3

Great! All good, and you can access the cluster now.
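To avoid passing --kubeconfig on every call from the master node, you can export the KUBECONFIG variable instead; a small sketch, assuming the devops user's shell is bash:

[devops@master-1 ~]$ echo 'export KUBECONFIG=$HOME/admin.conf' >> ~/.bashrc
[devops@master-1 ~]$ source ~/.bashrc
[devops@master-1 ~]$ kubectl get nodes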

Access Kubernetes Cluster from Workstation

Okay, what if you want to access it from outside, such as from the workstation where we ran Kubespray? Simple: just copy the admin.conf to your workstation and use it with kubectl.

Yes, you need kubectl installed on your workstation; you can refer to Install and Set Up kubectl for that.


Let's copy the admin.conf (which we prepared in the previous step) from the devops home directory on the master node.

gini@greenmango kubespray % scp devops@192.168.50.11:~/admin.conf ~/.kube/mycluster.conf
admin.conf                                100% 5577     8.6MB/s   00:00 

And verify access. (To avoid mentioning the kubeconfig every time, let us export the environment variable.)

gini@greenmango kubespray % export KUBECONFIG=~/.kube/mycluster.conf 
gini@greenmango kubespray % kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
master-1   Ready    master   3h6m   v1.19.3
node-1     Ready    <none>   3h5m   v1.19.3
node-2     Ready    <none>   3h5m   v1.19.3

That's it.

Now you can deploy your application and test other functionalities as needed.
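For example, a quick smoke test with a hypothetical nginx deployment, just to confirm that pods get scheduled on the worker nodes:

gini@greenmango kubespray % kubectl create deployment hello --image=nginx
gini@greenmango kubespray % kubectl expose deployment hello --port=80 --type=NodePort
gini@greenmango kubespray % kubectl get pods -o wide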



Gineesh Madapparambath is the founder of techbeatly and the author of the book Ansible for Real-Life Automation. He has worked as a Systems Engineer, Automation Specialist, and content author. His primary focus is on Ansible Automation, Containerisation (OpenShift & Kubernetes), and Infrastructure as Code (Terraform). (aka Gini Gangadharan - iamgini.com)

Comments



  1. vyomsoft says:

    Hello,
    I tried the same on RHEL 8.
    After deployment, etcd.service is stuck in the activating (start) state,
    and there is no admin.conf file present in the /etc/kubernetes folder on the master node. Could you suggest something?

    • If the file is missing, then the installation did not complete.

      Also, I didn't test this on RHEL 8 yet.

      Let me test this next week 😄.
      Feel free to remind me after 19th May.

      For quick discussions, please join t.me/techbeatly

  2. Zain says:

    Hi,
    I am trying to install Kubernetes v1.26.2 using Kubespray on RHEL 8.4.

    I am getting the error below: "FAILED - RETRYING: ensure docker packages are installed (1 retries left)."

    My configuration is as below, and I am using container_manager set to docker:

    git clone https://github.com/kubernetes-incubator/kubespray.git
    cd kubespray/
    python3 -m pip install -r requirements-2.11.txt
    ansible --version
    cp -rfp inventory/sample inventory/mycluster
    declare -a IPS=(192.168.232.120 192.168.232.121)
    CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

    vim inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
    container_manager: docker
    kube_version: v1.26.2
    kube_network_plugin: calico
    kube_pods_subnet: 10.233.64.0/18
    kube_service_addresses: 10.233.0.0/18
    vim inventory/mycluster/group_vars/k8s_cluster/addons.yml
    dashboard_enabled: true
    ingress_nginx_enabled: true
    ingress_nginx_host_network: true

    ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

    Any inputs on this?
    — Zain
