Kamaji is the Kubernetes Control Plane Manager. It leverages the Hosted Control Plane concept, where the Control Plane is strongly decoupled from the worker nodes.
Kamaji's approach is based on running the Kubernetes Control Plane components in Pods on a management cluster instead of on dedicated downstream machines. This makes high-density Kubernetes Control Planes cheaper and easier to build, deploy, and operate.
With Kamaji, the worker nodes of the downstream clusters can be placed on any infrastructure, even remote from their Control Plane, and kept isolated from each other. Kamaji itself does not provide any helper for the creation of worker nodes; instead, it leverages the Cluster API. This allows you to create Tenant Clusters, including control plane and worker nodes, in a completely declarative way. Refer to the Cluster API guide to learn more about supported providers.
While Cluster API is a common approach to running Kubernetes infrastructure declaratively, there are many situations where it is not usable or desired: for example, when the management of the control planes must be kept separate from the management of the worker nodes, or when the infrastructure provider hosting the worker nodes is not natively supported by Cluster API.
In such cases, a different approach is to configure worker nodes with `yaki`, a wrapper around the standard `kubeadm` utility: Terraform provisions the machines, and `yaki` installs the Kubernetes dependencies and joins each machine to the control plane.
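As a rough illustration of this pattern, the sketch below runs `yaki` over SSH on a machine created elsewhere in the configuration. The `var.*` references, the download URL, and the `JOIN_*` environment variable names are assumptions made for illustration; the actual interface used by the examples in this repository may differ.

```hcl
# Hypothetical sketch: run yaki on a freshly provisioned machine so that it
# installs the Kubernetes dependencies and joins the Tenant Control Plane.
# The yaki URL and the JOIN_* environment variable names are assumptions.
resource "null_resource" "join_worker" {
  connection {
    type        = "ssh"
    host        = var.node_address          # assumed: address of the machine
    user        = var.ssh_user              # assumed: SSH user on the machine
    private_key = file(var.ssh_private_key) # assumed: path to the SSH key
  }

  provisioner "remote-exec" {
    inline = [
      "curl -sfL https://goyaki.clastix.io | sudo JOIN_URL='${var.join_url}' JOIN_TOKEN='${var.join_token}' JOIN_TOKEN_CACERT_HASH='${var.join_token_cacert_hash}' bash -s join",
    ]
  }
}
```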
This repository comes with example Terraform configurations that create the infrastructure needed to provision node pools for Kamaji Tenant Control Planes. Example configurations are currently available for the following infrastructure providers, and more will be added in the future:
- VMware vSphere
- VMware Cloud Director
- Proxmox PVE
- Canonical MAAS
These examples assume you have an already provisioned Tenant Control Plane running in Kamaji and accessible from the infrastructure hosting the worker nodes. To create a Tenant Control Plane in Kamaji, refer to the Getting Started Guide.
Make sure you are able to get the join command for the Tenant Control Plane, as stated in the guide above:
```bash
$ kubeadm --kubeconfig tenant.kubeconfig token create --print-join-command
```
From the join command it prints, extract the following values:
- `join_url`
- `join_token`
- `join_token_cacert_hash`
Make sure to fill the corresponding Terraform variables in your `main.auto.tfvars` file with these values.
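For example, with placeholder values taken from a join command of the form `kubeadm join <endpoint> --token <token> --discovery-token-ca-cert-hash sha256:<hash>`, the relevant part of `main.auto.tfvars` might look like the sketch below; check the variables definitions of each example for the exact expected formats.

```hcl
# Placeholder values extracted from the kubeadm join command output;
# the exact expected formats are defined by each example's variables.
join_url               = "192.168.1.10:6443"       # control plane endpoint
join_token             = "abcdef.0123456789abcdef" # bootstrap token
join_token_cacert_hash = "sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
```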
To install Terraform, check the Installation Guide.
You need to fill in your `main.auto.tfvars` file with your specific configuration values. This file contains all the variables required by the Terraform configuration.
Once you have configured the `main.auto.tfvars` file and exported the necessary environment variables, you can run the following Terraform commands to create the node pool:
```bash
terraform init
terraform validate
terraform plan
terraform apply
```
Assuming you have a tenant called `kamaji-tenant` and a Tenant Control Plane `tcp-charlie`, you can create several node pools, as shown in the following structure:
```text
kamaji-tenant
├── tcp-alpha
├── tcp-beta
└── tcp-charlie
    ├── application-pool
    │   ├── tcp-charlie-application-pool-node-00
    │   └── tcp-charlie-application-pool-node-01
    ├── default-pool
    │   ├── tcp-charlie-default-pool-node-00
    │   ├── tcp-charlie-default-pool-node-01
    │   └── tcp-charlie-default-pool-node-02
    └── system-pool
        ├── tcp-charlie-system-pool-node-00
        └── tcp-charlie-system-pool-node-01
```
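One declarative way to obtain such a naming scheme is a nested `for` expression over a map of pools. The sketch below is purely illustrative: the variable names and structure are assumptions, not the interface exposed by the examples in this repository.

```hcl
# Hypothetical sketch: derive node names such as
# "tcp-charlie-default-pool-node-00" from a map of pool sizes.
variable "tenant_control_plane" {
  type    = string
  default = "tcp-charlie"
}

variable "node_pools" {
  type = map(number) # pool name => number of nodes
  default = {
    "application-pool" = 2
    "default-pool"     = 3
    "system-pool"      = 2
  }
}

locals {
  node_names = flatten([
    for pool, n in var.node_pools : [
      for i in range(n) :
      format("%s-%s-node-%02d", var.tenant_control_plane, pool, i)
    ]
  ])
}
```

Each generated name can then drive a `for_each` over the machine resource of the chosen provider.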
By using `terraform` and `yaki`, you can easily provision and manage node pools for Kamaji Tenant Control Planes across various infrastructure providers. This approach provides flexibility and ease of management, especially in environments where Cluster API is not suitable or not desired. Feel free to explore the example configurations provided in this repository and adapt them to your specific needs.