How to Configure Kubernetes Multimaster Cluster for High Availability

Mr.Zik
6 min read · Aug 1, 2021


Creating high availability in a Kubernetes environment is critical to keeping applications running through a disaster or hardware failure. One way to achieve this is to run multiple control-plane (master) nodes in a cluster.

In a previous post I explained how to install a Kubernetes cluster with containerd. This time we will again use containerd as the container runtime, because Kubernetes support for Docker (dockershim) is about to end.
The following is a list of servers used:

Hostname     IP Address     Role
kubmaster1   172.16.4.74    Control plane / master
kubmaster2   172.16.4.75    Control plane / master
kubmaster3   172.16.4.76    Control plane / master
kubworker1   172.16.4.77    Worker
kubworker2   172.16.4.78    Worker
kubworker3   172.16.4.79    Worker
kubvip       172.16.4.80    Keepalived virtual IP
kubclient    172.16.4.81    kubectl client

And the topology used is as follows:

[Topology diagram: clients reach the API server through the keepalived virtual IP (kubvip), which fronts HAProxy on the three master nodes; three worker nodes and a kubectl client complete the lab]

I will skip the containerd configuration; you can see how to configure containerd here: https://medium.com/@mrzik/how-to-create-kubernetes-cluster-with-containerd-90399ec3b810

Step 1. Configure /etc/hosts

Do this configuration on all master nodes.

Add the following entries to the hosts file so that each node can reach the others by name:

root@kubmaster1:~# vi /etc/hosts
127.0.0.1 localhost
127.0.1.1 kubmaster1
#Kubernetes Nodes
172.16.4.74 kubmaster1.mylab.local kubmaster1
172.16.4.75 kubmaster2.mylab.local kubmaster2
172.16.4.76 kubmaster3.mylab.local kubmaster3
172.16.4.77 kubworker1.mylab.local kubworker1
172.16.4.78 kubworker2.mylab.local kubworker2
172.16.4.79 kubworker3.mylab.local kubworker3
#Keepalived Virtual IP
172.16.4.80 kubvip.mylab.local kubvip
#Kubectl Client
172.16.4.81 kubclient.mylab.local kubclient
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@kubmaster1:~#
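As a quick sanity check you can ping every node by name from kubmaster1 (hostnames as in the table above; kubvip will only answer once keepalived is configured in the next step):

for host in kubmaster1 kubmaster2 kubmaster3 kubworker1 kubworker2 kubworker3 kubclient; do
  ping -c1 -W1 $host.mylab.local > /dev/null && echo "$host OK" || echo "$host unreachable"
done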

Step 2. Configure keepalived

Do this configuration on all master nodes

Install keepalived package

sudo apt -y install keepalived

Create check script for keepalived

sudo vi /etc/keepalived/check_apiserver.sh

#!/bin/sh
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:8443/ -o /dev/null || errorExit "Error GET https://localhost:8443/"
if ip addr | grep -q 172.16.4.80; then
    curl --silent --max-time 2 --insecure https://172.16.4.80:8443/ -o /dev/null || errorExit "Error GET https://172.16.4.80:8443/"
fi

Make the check script executable

sudo chmod +x /etc/keepalived/check_apiserver.sh

Create the keepalived configuration file

sudo vi /etc/keepalived/keepalived.conf

#Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    enable_script_security
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160 #customize this to match your interface
    virtual_router_id 100
    priority 255 #change the priority on the other nodes
    authentication {
        auth_type PASS
        auth_pass password #define your own password
    }
    virtual_ipaddress {
        172.16.4.80/24
    }
    track_script {
        check_apiserver
    }
}

Enable and start keepalived service

sudo systemctl enable --now keepalived
sudo systemctl status keepalived
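On the node holding MASTER state (kubmaster1, which has priority 255), the virtual IP should now be visible on the configured interface (ens160 in this lab; yours may differ):

ip a show ens160 | grep 172.16.4.80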

Repeat this step on the other master nodes (kubmaster2 and kubmaster3). On those nodes a few values need to be adjusted: set state to BACKUP and priority to 254 and 253 respectively, as shown in the snippet below.
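For example, on kubmaster2 only these lines of the vrrp_instance block change (kubmaster3 is the same, with priority 253):

state BACKUP      #MASTER only on kubmaster1
priority 254      #253 on kubmaster3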

Step 3. Configure haproxy

Do this configuration on all master nodes

Install haproxy package

sudo apt -y install haproxy

Add the following lines to the end of haproxy.cfg

sudo vi /etc/haproxy/haproxy.cfg

#Kubernetes APIServer Frontend
frontend apiserver
    bind 172.16.4.80:8443
    mode tcp
    option tcplog
    default_backend apiserver

#Kubernetes APIServer Backend
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option tcp-check
    balance roundrobin
    server kubmaster1 kubmaster1.mylab.local:6443 check
    server kubmaster2 kubmaster2.mylab.local:6443 check
    server kubmaster3 kubmaster3.mylab.local:6443 check

Start and enable haproxy service

sudo systemctl enable --now haproxy
sudo systemctl status haproxy
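To confirm that HAProxy is listening on the frontend port, check from the node that currently holds the virtual IP (the frontend binds to 172.16.4.80):

sudo ss -tlnp | grep 8443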

Repeat this step on the other master nodes.

Step 4. Configure Kubernetes prerequisites

Do this configuration on control plane/master nodes and worker nodes

Disable swap

sudo swapoff -a
sudo vi /etc/fstab

#comment out the swap entry
#/swap.img none swap sw 0 0
#save and exit with :wq
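If you prefer not to edit fstab by hand, a one-liner such as the following comments out any swap entry and keeps a backup of the original file (verify the result afterwards):

sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab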

Configure sysctl

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
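The net.bridge.* keys above only take effect when the br_netfilter kernel module is loaded. The containerd post already covers this, but if it is missing on a node you can load it persistently like so:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter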

Allow the required ports for Kubernetes (a sketch of ufw rules follows below), or simply disable the firewall

[Table: required Kubernetes ports]

sudo ufw disable
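If you would rather keep ufw enabled, rules along these lines should cover the standard kubeadm ports plus the 8443 HAProxy frontend used in this setup (a sketch only; keepalived's VRRP traffic and your CNI's own ports also need to pass, so adjust to your environment):

#on the master nodes
sudo ufw allow 6443/tcp        #kube-apiserver
sudo ufw allow 8443/tcp        #HAProxy frontend for the VIP
sudo ufw allow 2379:2380/tcp   #etcd
sudo ufw allow 10250/tcp       #kubelet
sudo ufw allow 10251/tcp       #kube-scheduler
sudo ufw allow 10252/tcp       #kube-controller-manager

#on the worker nodes
sudo ufw allow 10250/tcp       #kubelet
sudo ufw allow 30000:32767/tcp #NodePort services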

Set up the iptables backend to use iptables-legacy

sudo update-alternatives --config iptables

There are 2 choices for the alternative iptables (providing /usr/sbin/iptables).

  Selection    Path                        Priority   Status
------------------------------------------------------------
  0            /usr/sbin/iptables-nft       20        auto mode
* 1            /usr/sbin/iptables-legacy    10        manual mode
  2            /usr/sbin/iptables-nft       20        manual mode

Press <enter> to keep the current choice[*], or type selection number:
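Alternatively, the backend can be switched non-interactively (the same applies to ip6tables):

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy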

Add kubernetes repository

sudo wget https://packages.cloud.google.com/apt/doc/apt-key.gpg
sudo mv apt-key.gpg /etc/apt/trusted.gpg.d/
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt update
sudo apt install -y kubeadm kubelet

Enable the kubelet service, but don't start it yet

sudo systemctl enable kubelet
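Optionally, hold the package versions so an unattended apt upgrade does not move the cluster components unexpectedly:

sudo apt-mark hold kubeadm kubelet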

Step 5. Kubeadm Cluster Init

Do this configuration on kubmaster1

Run kubeadm init with the external load-balanced endpoint and containerd as the container runtime

sudo kubeadm init --control-plane-endpoint "kubvip.mylab.local:8443" --upload-certs --cri-socket /run/containerd/containerd.sock

Note the final output of kubeadm init, because we will use it to join the other control-plane and worker nodes.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kubvip.mylab.local:8443 --token x65nj4.l2vi5s9r8cmqofj5 \
    --discovery-token-ca-cert-hash sha256:859cf17f33574ad60b67b3694b2348c2c27d292ca597a1f3d5dba84948ac2b6b \
    --control-plane --certificate-key c999c6807438d200627b7d3b5d6e247441814949360226f6d5837dae4e6139a6

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubvip.mylab.local:8443 --token x65nj4.l2vi5s9r8cmqofj5 \
    --discovery-token-ca-cert-hash sha256:859cf17f33574ad60b67b3694b2348c2c27d292ca597a1f3d5dba84948ac2b6b

On kubmaster2 and kubmaster3, run the following command to join the cluster as control-plane nodes, again specifying containerd as the container runtime:

sudo kubeadm join kubvip.mylab.local:8443 --token x65nj4.l2vi5s9r8cmqofj5 \
--discovery-token-ca-cert-hash sha256:859cf17f33574ad60b67b3694b2348c2c27d292ca597a1f3d5dba84948ac2b6b \
--control-plane --certificate-key c999c6807438d200627b7d3b5d6e247441814949360226f6d5837dae4e6139a6 --cri-socket /run/containerd/containerd.sock

On kubworker1–3, run the following command to join the cluster as worker nodes, again specifying containerd as the container runtime:

sudo kubeadm join kubvip.mylab.local:8443 --token x65nj4.l2vi5s9r8cmqofj5 \
--discovery-token-ca-cert-hash sha256:859cf17f33574ad60b67b3694b2348c2c27d292ca597a1f3d5dba84948ac2b6b --cri-socket /run/containerd/containerd.sock
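If the bootstrap token from the init output has expired by the time you add a node (tokens are valid for 24 hours by default), you can print a fresh worker join command on kubmaster1, and re-upload the certificates for control-plane joins as noted in the init output:

sudo kubeadm token create --print-join-command
sudo kubeadm init phase upload-certs --upload-certs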

Step 6. Add Calico CNI

Do this configuration on kubclient node

Before adding the Calico CNI, we need to install and configure kubectl on the kubclient node.

sudo wget https://packages.cloud.google.com/apt/doc/apt-key.gpg
sudo mv apt-key.gpg /etc/apt/trusted.gpg.d/
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt update
sudo apt -y install kubectl

Create a .kube directory in your home directory

mkdir -p $HOME/.kube

SSH to kubmaster1 and scp the Kubernetes admin.conf to kubclient as the kubectl config file

sudo scp /etc/kubernetes/admin.conf youradminuser@kubclient:~/.kube/config

Back on kubclient, change the ownership of the config file

sudo chown $(id -u):$(id -g) $HOME/.kube/config
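At this point kubectl should already reach the API server through the virtual IP; the nodes will report NotReady until the CNI is installed in the next step:

kubectl get nodes -o wide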

Now download the Calico manifest

curl https://docs.projectcalico.org/manifests/calico.yaml -O

Apply the manifest using the following command

kubectl apply -f calico.yaml

Check all pods and wait until they are all in the Running state

kubectl get pods --all-namespaces
[kubectl get pods output: all kube-system and Calico pods in Running state]
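As an optional failover test, stop keepalived on the master that currently holds the virtual IP and confirm the address moves to another master while kubectl keeps working from kubclient (interface name as in the keepalived config):

#on kubmaster1 (current VIP holder)
sudo systemctl stop keepalived
#on kubmaster2 or kubmaster3, the VIP should appear
ip a show ens160 | grep 172.16.4.80
#on kubclient, the API should still respond through the VIP
kubectl get nodes
#start keepalived again on kubmaster1 when done
sudo systemctl start keepalived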

Your high-availability cluster is now ready to use.
For production-grade enterprise deployments, it is important to build the Kubernetes cluster on highly available infrastructure so that there is no single point of failure.

Cheers
