This guide details the process for setting up a high-availability Kubernetes cluster using Kubeadm on CentOS servers. It is adapted from *Setup a multi-master Kubernetes cluster with kubeadm*, tailored for CentOS servers, and assumes root user access.
- Masters: 3 nodes, each with 2 CPUs and 2 GB RAM
- Workers: 3 nodes, each with 2 CPUs and 2 GB RAM
- Load Balancers: 2 nodes, each with 1 CPU and 1 GB RAM
- Clone the repository:
```bash
git clone https://github.com/lethanhson9901/k8s-centos-ha-clusters
cd k8s-centos-ha-clusters
sudo su
chmod +x ./*        # Grant execute permission to the shell scripts
./install_vq.sh     # Install vq to read the YAML config files
```
- Modify `lb_config.yml` to configure the IP addresses of your load balancers:

```bash
nano config/lb_config.yml
```
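The exact keys in `lb_config.yml` are defined by the repository's own template, so the snippet below is a purely hypothetical sketch of the kind of information the file holds (load-balancer node IPs and a virtual IP are assumptions here); prefer editing the shipped file rather than overwriting it:

```bash
# HYPOTHETICAL key names -- check the repo's shipped config/lb_config.yml
# template for the real layout before editing.
cat > config/lb_config.yml <<'EOF'
load_balancers:
  - 192.168.1.21
  - 192.168.1.22
virtual_ip: 192.168.1.20
EOF
```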
- Run the setup script on each load balancer node:
```bash
./setup_lb.sh
```
- Verify the HAProxy service status:
```bash
systemctl status haproxy
```
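If the service is active, you can also confirm that HAProxy is listening on the front-end port used for the Kubernetes API (this guide assumes 6443, matching the `kubeadm` commands later on; adjust if your `lb_config.yml` uses a different port):

```bash
# HAProxy should be bound to the port that fronts the API servers.
ss -tlnp | grep -w 6443
```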
- Modify `cluster_config.yml` to list the IP addresses of master and worker nodes:

```bash
nano config/cluster_config.yml
```
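As with the load-balancer config, the real key names come from the repository's own template; the sketch below uses invented, hypothetical keys purely to illustrate the kind of content (master and worker IP lists):

```bash
# HYPOTHETICAL key names -- consult the repo's shipped config/cluster_config.yml
# template for the real layout before editing.
cat > config/cluster_config.yml <<'EOF'
masters:
  - 192.168.1.11
  - 192.168.1.12
  - 192.168.1.13
workers:
  - 192.168.1.14
  - 192.168.1.15
  - 192.168.1.16
EOF
```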
- Execute the setup script on each master and worker node to install Kubeadm, Containerd, and other dependencies:
```bash
./install_kubeadm_containerd.sh
```
- In case of errors, you can reverse the setup by running:
```bash
./cleanup.sh
```
- **Initializing the First Master Node**
  After modifying `cluster_config.yml` and executing the setup scripts, the next step is to initialize the first master node in the Kubernetes cluster.
- **Choose a CNI Plugin**
  - Options: Choose a Container Network Interface (CNI) such as Calico, Flannel, etc., for network operations in your Kubernetes cluster.
  - Documentation: For more information, visit the Kubernetes CNI plugins documentation.
- **Initialize with Flannel**
  - If you opt for Flannel as your CNI, use `10.244.0.0/16` as the `pod-network-cidr`.
  - Initialization command (replace `<cluster-vip-ip>` with the virtual IP address of your cluster):

```bash
kubeadm init --control-plane-endpoint "<cluster-vip-ip>:6443" --pod-network-cidr="10.244.0.0/16"
```
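An optional variant worth knowing about (it is not part of the repository's scripted flow, so treat it as an assumption): adding `--upload-certs` at init time uploads the control-plane certificates and prints the `--certificate-key` you will need later when joining the other masters:

```bash
# Same init command, plus certificate upload so the control-plane join key
# is printed immediately.
kubeadm init --control-plane-endpoint "<cluster-vip-ip>:6443" \
    --pod-network-cidr="10.244.0.0/16" \
    --upload-certs
```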
- **Set Up Kubeconfig**
  - After initializing the first master node, configure kubeconfig to manage your Kubernetes cluster:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
- **Verify the Setup**
  - Check the setup by listing all pods in the `kube-system` namespace:

```bash
kubectl get pods --namespace=kube-system
```
After the first master node initialization, the next crucial step is to install the selected CNI plugin.
- **Install Flannel CNI**
  - Apply Flannel Configuration: Download and apply the Flannel configuration to your Kubernetes cluster, then restart containerd:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
systemctl restart containerd
```
  - Check Installation: Confirm the deployment of Flannel. Look for `kube-flannel-ds` pods in the `Running` state:

```bash
kubectl get pods --namespace=kube-system
```
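Rather than polling manually, you can wait on the DaemonSet rollout directly (assuming the manifest above creates a `kube-flannel-ds` DaemonSet in `kube-system`, which matches the pod names listed here):

```bash
# Blocks until every kube-flannel-ds pod is updated and available, or times out.
kubectl -n kube-system rollout status daemonset/kube-flannel-ds --timeout=120s
```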
- **Verify Node Network**
  - Ensure each node is `Ready` and communicating correctly:

```bash
kubectl get nodes
```
- **Notes**
  - Network Configuration: By default, Flannel uses the `10.244.0.0/16` subnet.
  - Compatibility: Ensure there are no conflicts with existing network infrastructure.
  - Alternatives: If you prefer a different CNI, apply that CNI's configuration manifest URL instead.
Following these steps will ensure the successful installation of Flannel as your Kubernetes cluster's CNI, enabling seamless pod-to-pod networking.
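For an extra, entirely optional sanity check of pod-to-pod networking, you can ping one throwaway pod from another; the pod names and busybox image below are illustrative choices, not part of the repository's scripts:

```bash
# Launch two throwaway pods and ping one from the other across the overlay.
# (ping needs the NET_RAW capability, which default runtimes normally grant.)
kubectl run net-test-a --image=busybox:1.36 --restart=Never -- sleep 600
kubectl run net-test-b --image=busybox:1.36 --restart=Never -- sleep 600
kubectl wait --for=condition=Ready pod/net-test-a pod/net-test-b --timeout=120s
kubectl exec net-test-b -- ping -c 3 "$(kubectl get pod net-test-a -o jsonpath='{.status.podIP}')"
kubectl delete pod net-test-a net-test-b
```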
- To join additional master nodes, use the `kubeadm join` command from the initial master node's output. Replace the tokens and addresses with your specific details. Example:

```bash
kubeadm join <cluster-vip-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key> \
    --apiserver-advertise-address <master-ip>
```
- Set up kubeconfig (repeat on each master node):

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
- (Optional) In case the token has expired or you've lost it, recreate the token and certificate key:

```bash
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
```
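The first command prints a fresh worker join command; to join a master, append the control-plane flags and the new certificate key printed by the second command, for example (placeholders as in the join example above):

```bash
# Placeholders only -- substitute the token, hash, and key from your own output.
kubeadm join <cluster-vip-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <new-certificate-key>
```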
- In case of errors, reset kubeadm:

```bash
kubeadm reset -f
rm -rf /var/lib/kubelet/* /etc/kubernetes/* /var/lib/containerd/* /etc/cni/net.d/* /var/log/containers/* /var/log/pods/*
```
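If Flannel had already been deployed on the node before the reset, its virtual interfaces can survive; removing them is a common optional extra step (the names below are the Flannel defaults; skip this if they don't exist on the node):

```bash
# Default Flannel/CNI interface names; errors are ignored if they are absent.
ip link delete cni0 2>/dev/null || true
ip link delete flannel.1 2>/dev/null || true
```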
- Use the `kubeadm join` command for worker nodes (similar to master nodes, but without the `--control-plane` and `--certificate-key` flags):

```bash
kubeadm join <cluster-vip-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```
- Run the following commands to verify the cluster setup:
```bash
kubectl cluster-info
kubectl get nodes
kubectl get cs
```
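Note that `kubectl get cs` (componentstatuses) is deprecated in recent Kubernetes releases and may not return meaningful data; a common alternative is to query the API server's health endpoints directly:

```bash
# Ask the API server for its readiness and liveness checks, with details.
kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez?verbose'
```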
Test Cluster Resilience with Chaos Mesh
After setting up your high-availability Kubernetes cluster, you might want to test its resilience against various types of failures. Chaos Mesh is an open-source chaos engineering platform that helps you simulate system conditions and find potential issues in your deployment.
To install Chaos Mesh in your Kubernetes cluster, refer to the detailed guide provided in the Chaos Mesh README. This guide covers the steps for adding the Chaos Mesh Helm repository, creating a namespace for Chaos Mesh, installing Chaos Mesh using Helm, and verifying the installation.
Once installed, you can begin experimenting with Chaos Mesh by creating chaos experiments. These experiments help you understand how your cluster responds to various failure scenarios, enabling you to improve its resilience.
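As a rough sketch of what such an experiment looks like (the `app: my-test-app` selector is a hypothetical label, and field names should be checked against your Chaos Mesh version), a minimal pod-kill experiment can be applied like this:

```bash
# Minimal PodChaos sketch: kill one pod matching a hypothetical label selector.
cat <<'EOF' | kubectl apply -f -
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-kill-example
spec:
  action: pod-kill
  mode: one
  selector:
    labelSelectors:
      app: my-test-app
EOF
```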
For more detailed instructions on installing and using Chaos Mesh, including setting up chaos experiments, please refer to the Chaos Mesh README.
Enjoy your Kubernetes Cluster!