How to create a Kubernetes cluster with two nodes, part 1
Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications.
Here we are going to show how to create a Kubernetes cluster with two nodes.
There are two ways to try Kubernetes:
1- Test environment: simply use Minikube to set up and launch a single-node Kubernetes cluster. Minikube is the best choice for learning purposes.
2- Production environment: we set up the control plane and worker nodes separately, following best practices.
Here we are going to use the production approach.
We also skip extra and optional configuration and set up a cluster with a minimal configuration.
Full documentation is available at the official Kubernetes Documentation site (https://kubernetes.io/docs/).
Here is our environment:
Control Plane:
OS: Ubuntu server 20.04.2 on VMware workstation 16.0.1
CPU: 2 cores
RAM: 2 GB
Disk: 20 GB
Filesystem: ext4
IP Address: 192.168.137.41
Node 1:
OS: Ubuntu server 20.04.2 on VMware workstation 16.0.1
CPU: 2 cores
RAM: 2 GB
Disk: 20 GB
Filesystem: ext4
IP Address: 192.168.137.84
Node 2:
OS: Ubuntu server 20.04.2 on VMware workstation 16.0.1
CPU: 2 cores
RAM: 2 GB
Disk: 20 GB
Filesystem: ext4
IP Address: 192.168.137.217
1- Disable swap on all VMs
For Kubernetes to work properly, we MUST disable swap space on ALL VMs.
In our Ubuntu installation, swap space is a swap file rather than a swap partition,
so we comment out the related line in the /etc/fstab file and also issue the swapoff command to turn swap off immediately:
sudo sed -i 's/\/swap.img/#\/swap.img/' /etc/fstab
sudo swapoff -a
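To double-check that swap is really off, the following quick sanity check (not part of the official procedure) should report no active swap:
swapon --show
free -h
swapon --show prints nothing, and free -h shows 0B of swap, once the change has taken effect.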
2- Bridge traffic
First we have to load the “br_netfilter” module so that bridged traffic can be filtered, and the “overlay” module for the overlayfs filesystem:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Then we enable some kernel parameters to allow iptables to see bridged traffic:
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
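To verify that the modules are loaded and the parameters applied (a quick sanity check):
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables
All three sysctl values should be reported as 1.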
3- Open required ports
The required ports must be open on the control plane and worker nodes, so we do the following:
Control plane:
sudo ufw allow proto tcp from any to any port 2379:2380,6443,10250,10251,10252 comment 'k8s control plane'
Nodes:
sudo ufw allow proto tcp from any to any port 30000:32767,10250 comment 'k8s worker node'
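Note that these rules only take effect while ufw is enabled. To list the rules that were added (a quick check):
sudo ufw status verbose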
4- Install container runtime
Kubernetes supports three container runtimes (CRs):
Docker
containerd
CRI-O
Here we use containerd. Although containerd is now a CNCF project, its containerd.io package is distributed through Docker's repositories, so we need to add the Docker repository and install containerd on ALL VMs:
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install containerd.io
Then we generate containerd's default configuration:
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
and restart it:
sudo systemctl restart containerd
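To confirm that containerd is up and reachable through its socket (a quick sanity check):
sudo systemctl status containerd --no-pager
sudo ctr version
ctr version should print both client and server versions if the daemon is healthy.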
5- Install kubeadm, kubelet and kubectl
Now we have to install kubeadm, kubelet and kubectl on ALL VMs.
kubeadm: the tool that bootstraps and manages the cluster.
kubelet: the agent that runs on every node, talks to the API server on the control plane, and carries out its instructions, such as creating pods on worker nodes.
kubectl: the command-line tool that lets us manage the cluster through the API server.
5.1 Install required packages
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
5.2 Download the Google Cloud public signing key
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
5.3 Add Kubernetes repository
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
5.4 Install management tools
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
The last command prevents kubelet, kubeadm and kubectl from being upgraded automatically.
This matters because of the Kubernetes version skew policy: cluster components must stay within supported version ranges of each other, so upgrades should be deliberate.
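To confirm the tools are installed and pinned (a quick sanity check):
kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold
apt-mark showhold should list kubelet, kubeadm and kubectl.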
6- Configure the cgroup driver
Every container runtime manages container resources using two Linux kernel features: cgroups and namespaces.
The kubelet and the container runtime must use the same cgroup driver, so we have to configure it for both on ALL VMs:
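On a standard Ubuntu server install, systemd is the init system, which is why the systemd cgroup driver is the right choice here. You can confirm this with:
ps -p 1 -o comm=
which should print systemd.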
6.1 Cgroup driver for container runtime
In the /etc/containerd/config.toml file, we put the following line:
SystemdCgroup = true
under this line:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
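Instead of editing the file by hand, a sed one-liner can flip the value. This assumes the default configuration generated above already contains a SystemdCgroup = false entry under that section (true for recent containerd releases; older ones may need the line added manually):
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml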
then, restart containerd service:
sudo systemctl restart containerd
6.2 Cgroup driver for kubelet
We create a file called kubeadm-config.yaml with the following content:
cat << EOF | sudo tee /opt/kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
networking:
  podSubnet: "10.244.0.0/16" # --pod-network-cidr
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOF
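This file is consumed when the cluster is initialized; as a preview, the control plane will be bootstrapped with something like:
sudo kubeadm init --config /opt/kubeadm-config.yaml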
The initial configuration is now done. In the next article, we will show how to create a Kubernetes cluster with kubeadm.