This project demonstrates a simple Kubernetes setup with a backend and frontend application. The backend is a Flask API, and the frontend is a Flask application that communicates with the backend. This setup showcases Kubernetes load balancing and network policies to control traffic between services.
- Docker installed locally
- GitHub account and repository
- Minikube installed: follow the Minikube installation guide
- kubectl installed: follow the kubectl installation guide
- Backend: A Flask API that responds with a simple JSON message.
- Frontend: A Flask application that makes requests to the backend API and displays the responses.
- Network Policy: Restricts access to the backend service, allowing only the frontend to communicate with it.
Clone the repository to your local machine:
```bash
git clone https://github.com/your-username/your-repo.git
cd your-repo
```
- Start Minikube with Calico:

  Start Minikube with one control-plane node and three worker nodes, using the Calico CNI plugin:

  ```bash
  minikube start --network-plugin=cni --cni=calico --nodes=4
  ```

  Verify that Calico is running:

  ```bash
  kubectl get pods -n kube-system | grep calico
  ```
- Install MetalLB:

  Apply the MetalLB manifests:

  ```bash
  kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
  kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
  ```
- Configure MetalLB:

  Determine the IP address that your Minikube cluster is using:

  ```bash
  minikube ip
  ```

  Let's assume your Minikube IP is `192.168.49.2`. Use an IP range in the same subnet for MetalLB (e.g., `192.168.49.30-192.168.49.40`). Create a MetalLB configuration file:

  ```yaml
  # metallb-config.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - 192.168.49.30-192.168.49.40
  ```

  Note that this ConfigMap-based configuration matches the MetalLB v0.12.x manifests applied above; MetalLB v0.13 and later are configured with IPAddressPool and L2Advertisement custom resources instead.

  Apply the configuration:

  ```bash
  kubectl apply -f deployment/metallb-config.yaml
  ```
- Build and Push the Docker Image for the Backend:

  Build and push the Docker image for the backend application:

  ```bash
  cd backend
  docker build -t ghcr.io/your-username/backend-demo:latest .
  docker push ghcr.io/your-username/backend-demo:latest
  cd ..
  ```
- Build and Push the Docker Image for the Frontend:

  Build and push the Docker image for the frontend application:

  ```bash
  cd frontend
  docker build -t ghcr.io/your-username/frontend-demo:latest .
  docker push ghcr.io/your-username/frontend-demo:latest
  cd ..
  ```
- Apply the Namespace:

  ```bash
  kubectl apply -f deployment/app-namespace.yaml
  ```
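  The namespace manifest itself is not reproduced in this README. Since the later kubectl commands all target the demo-cni-app namespace, `deployment/app-namespace.yaml` is presumably a minimal manifest along these lines (a sketch, not the actual file):

  ```yaml
  # Assumed contents of deployment/app-namespace.yaml; the name demo-cni-app
  # is taken from the -n demo-cni-app flags used later in this guide.
  apiVersion: v1
  kind: Namespace
  metadata:
    name: demo-cni-app
  ```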
- Apply the Backend Deployment:

  ```bash
  kubectl apply -f deployment/backend-deployment.yaml
  kubectl apply -f deployment/backend-service.yaml
  ```
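  The backend manifests are not shown in this README either. Judging from the service name and the 10.244.0.3:80 endpoint that appear in the iptables output later in this guide, `deployment/backend-service.yaml` is presumably a ClusterIP service roughly like this sketch (the selector label `app: flask-api` is an assumption):

  ```yaml
  # Sketch of deployment/backend-service.yaml, not the actual file.
  # Name, namespace, and port 80 match the kubectl/iptables output in this guide;
  # the selector label app: flask-api is hypothetical.
  apiVersion: v1
  kind: Service
  metadata:
    name: flask-api-service
    namespace: demo-cni-app
  spec:
    selector:
      app: flask-api
    ports:
      - name: http
        port: 80
        targetPort: 80
  ```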
- Apply the Frontend Deployment:

  ```bash
  kubectl apply -f deployment/frontend-deployment.yaml
  kubectl apply -f deployment/frontend-service.yaml
  ```
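  Because MetalLB is installed, the frontend service is presumably of type LoadBalancer so that it receives an external IP from the configured address pool. A minimal sketch, assuming hypothetical names (the actual `deployment/frontend-service.yaml` may differ):

  ```yaml
  # Sketch of deployment/frontend-service.yaml, not the actual file.
  # type: LoadBalancer lets MetalLB assign an IP from 192.168.49.30-192.168.49.40;
  # the service name, selector label, and target port are assumptions.
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend-service
    namespace: demo-cni-app
  spec:
    type: LoadBalancer
    selector:
      app: frontend
    ports:
      - port: 80
        targetPort: 5000  # Flask's default development port; an assumption
  ```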
- Get the Cluster IP and Endpoint Addresses:

  ```bash
  # Get the Kubernetes service
  kubectl get svc -n demo-cni-app
  # Get the endpoints
  kubectl get ep -n demo-cni-app
  ```
- Get the PREROUTING Rules for KUBE-SERVICES:

  ```bash
  # Log in to the Minikube node
  minikube ssh
  ```

  ```bash
  sudo iptables -t nat -L KUBE-SERVICES
  ```

  Example output:

  ```
  Chain KUBE-SERVICES (2 references)
  target                     prot opt source    destination
  KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  anywhere  10.96.0.1      /* default/kubernetes:https cluster IP */ tcp dpt:https
  KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  anywhere  10.96.0.10     /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
  KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  anywhere  10.96.0.10     /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
  KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  anywhere  10.96.0.10     /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
  KUBE-SVC-6YNYFUIKGNIA7RFX  tcp  --  anywhere  10.108.198.28  /* demo-cni-app/flask-api-service cluster IP */ tcp dpt:http
  KUBE-NODEPORTS             all  --  anywhere  anywhere       /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
  ```

  The KUBE-SVC-6YNYFUIKGNIA7RFX chain handles traffic to the flask-api-service cluster IP (10.108.198.28): kube-proxy resolves a service address by jumping from KUBE-SERVICES to a KUBE-SVC-* chain and from there to a KUBE-SEP-* endpoint chain, which we inspect next.
- Get the NAT Rule for the ClusterIP:

  ```bash
  sudo iptables -t nat -L KUBE-SVC-6YNYFUIKGNIA7RFX
  ```

  Example output:

  ```
  Chain KUBE-SVC-6YNYFUIKGNIA7RFX (1 references)
  target                     prot opt source          destination
  KUBE-MARK-MASQ             tcp  --  !10.244.0.0/16  10.108.198.28  /* demo-cni-app/flask-api-service cluster IP */ tcp dpt:http
  KUBE-SEP-J7YQFRES3OILODCJ  all  --  anywhere        anywhere       /* demo-cni-app/flask-api-service -> 10.244.0.3:80 */
  ```
- Get the Rule for the Service Endpoint:

  ```bash
  sudo iptables -t nat -L KUBE-SEP-J7YQFRES3OILODCJ
  ```

  Example output:

  ```
  Chain KUBE-SEP-J7YQFRES3OILODCJ (1 references)
  target          prot opt source      destination
  KUBE-MARK-MASQ  all  --  10.244.0.3  anywhere  /* demo-cni-app/flask-api-service */
  DNAT            tcp  --  anywhere    anywhere  /* demo-cni-app/flask-api-service */ tcp to:10.244.0.3:80
  ```

  The DNAT rule rewrites packets addressed to the cluster IP so they are delivered to the backend pod at 10.244.0.3:80.

  ```bash
  # Exit minikube
  exit
  ```
In this demo we will work with a NetworkPolicy and see how it affects traffic between pods.
- Apply the Demo Pod:

  ```bash
  kubectl apply -f deployment/debug-pod-namespace.yaml
  kubectl apply -f deployment/debug-pod.yaml
  ```
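  The debug pod manifest is not reproduced here. Given that the next steps exec `sh` in it and use `wget`, `deployment/debug-pod.yaml` is presumably a minimal busybox-style pod like this sketch (the image choice is an assumption):

  ```yaml
  # Sketch of deployment/debug-pod.yaml, not the actual file.
  # Pod and namespace names match the kubectl commands in this guide;
  # busybox is an assumed image that provides the sh and wget used below.
  apiVersion: v1
  kind: Pod
  metadata:
    name: debug-pod
    namespace: debug-pod
  spec:
    containers:
      - name: debug
        image: busybox:1.36
        command: ["sleep", "3600"]
  ```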
- Test from the Demo Pod without the Policy:

  Execute a shell inside the demo pod to test connectivity to the backend service:

  ```bash
  kubectl exec -it debug-pod -n debug-pod -- sh
  ```

  Inside the shell, try to connect to the backend service:

  ```bash
  wget -qO- http://flask-api-service.demo-cni-app.svc.cluster.local/api
  ```

  You should see that the connection is successful and the backend's JSON response is printed.
- Apply the Network Policy:

  ```bash
  kubectl apply -f deployment/network-policy.yaml
  ```
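  The policy file is not reproduced in this README. To admit only the frontend, `deployment/network-policy.yaml` is presumably an ingress policy of roughly this shape (the pod-selector labels are assumptions; the actual file may differ):

  ```yaml
  # Sketch of deployment/network-policy.yaml, not the actual file.
  # Selects the backend pods and admits ingress only from frontend pods;
  # the app: flask-api and app: frontend labels are hypothetical.
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: backend-allow-frontend
    namespace: demo-cni-app
  spec:
    podSelector:
      matchLabels:
        app: flask-api
    policyTypes:
      - Ingress
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app: frontend
  ```

  Note that a bare podSelector in the `from` clause only matches pods in the policy's own namespace, which is why the debug pod in the debug-pod namespace is blocked in the next test.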
- Test from the Demo Pod with the Policy:

  Execute a shell inside the demo pod to test connectivity to the backend service:

  ```bash
  kubectl exec -it debug-pod -n debug-pod -- sh
  ```

  Inside the shell, try to connect to the backend service:

  ```bash
  wget -T5 -qO- http://flask-api-service.demo-cni-app.svc.cluster.local/api
  ```

  You should see that the connection is refused or times out, demonstrating that the network policy is effectively blocking traffic from the demo pod to the backend service.
To clean up, delete the created Kubernetes resources and namespaces:

```bash
kubectl delete -f deployment/backend-deployment.yaml
kubectl delete -f deployment/backend-service.yaml
kubectl delete -f deployment/frontend-deployment.yaml
kubectl delete -f deployment/frontend-service.yaml
kubectl delete -f deployment/network-policy.yaml
kubectl delete namespace demo-cni-app
kubectl delete namespace debug-pod
```
This project is licensed under the MIT License. See the LICENSE file for details.
This `README.md` includes instructions for:
- Cloning the repository.
- Building and pushing Docker images for both the backend and frontend applications.
- Deploying the applications and network policy to a Kubernetes cluster.
- Verifying the network policy.
- Cleaning up resources.
This should provide a comprehensive guide for anyone looking to understand and deploy the project.