This document provides guidance on setting up and managing Talos, including configuring extensions and using Terraform for deployment. Below are the steps and references needed to reproduce the functionality previously encapsulated in the script. You will need:
- Talos CLI installed on your machine.
- Docker installed and running for building images.
- Terraform installed and configured.
- Access to the relevant GitHub repositories for extensions and images.
Set the following environment variables in your shell:
```bash
export CHECKPOINT_DISABLE='1'            # Disable HashiCorp Checkpoint version checks
export TF_LOG='DEBUG'                    # Options: TRACE, DEBUG, INFO, WARN, ERROR
export TF_LOG_PATH='terraform.log'
export TALOSCONFIG=~/.talos/config
export KUBECONFIG=~/.kube/config-talos
export TALOS_VERSION='<talos_version>'
export K8S_CLUSTER_NAME='<cluster_name>'
export K8S_NAMESPACE='<namespace>'
```
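To avoid re-exporting these in every shell, you can keep the lines above in a file and source it. A minimal sketch, assuming a hypothetical file named `talos.env` (the name is arbitrary):

```bash
# talos.env is a hypothetical file containing the export lines above.
source ./talos.env

# Sanity check that the tools will pick up the right configs.
echo "TALOSCONFIG=$TALOSCONFIG KUBECONFIG=$KUBECONFIG"
```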
| Variable Name | Type | Default Value | Description |
|---|---|---|---|
| `proxmox_pve_node_name` | list(string) | `['pve01', 'pve02', 'pve03']` | List of Proxmox node names. |
| `talos_version` | string | `1.8.3` | Version of Talos to deploy. |
| `kubernetes_version` | string | `1.31.3` | Version of Kubernetes to deploy. |
| `cluster_name` | string | `example` | Name for the Talos cluster. |
| `cluster_vip` | string | `172.31.1.10` | Virtual IP (VIP) address of the Kubernetes API server. |
| `cluster_endpoint` | string | `https://172.31.1.10:6443` | Endpoint for the Kubernetes API server. |
| `cluster_node_network_gateway` | string | `172.31.1.1` | Gateway for the cluster node network. |
| `cluster_node_network` | string | `172.31.1.0/24` | CIDR block of the cluster node network. |
| `cluster_node_network_first_controller_hostnum` | number | `40` | Host number for the first controller. |
| `cluster_node_network_first_worker_hostnum` | number | `50` | Host number for the first worker node. |
| `cluster_node_network_load_balancer_first_hostnum` | number | `70` | Host number for the first load balancer. |
| `cluster_node_network_load_balancer_last_hostnum` | number | `80` | Host number for the last load balancer. |
| `ingress_domain` | string | `example.test` | DNS domain for ingress resources. |
| `controller_count` | number | `1` | Number of control plane nodes. Must be at least 1. |
| `worker_count` | number | `2` | Number of worker nodes. Must be at least 1. |
| `prefix` | string | `vm-talos` | Prefix for VM names. |
| `talos-iso-datastoreid` | string | `isoShare` | Datastore ID for Talos ISO images. |
| `talos-datastoreid-suffix` | string | `local-lvm` | Datastore suffix for Talos VMs. |
| `api_token` | string | `XXXXXXXXXXX` | Secret token for authenticating with Proxmox. |
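The `terraform plan` step later in this guide reads variables from an `integration.tfvars` file. Here is a minimal sketch populated with the defaults from the table above; all values are illustrative, the token is a placeholder, and any variable you omit falls back to its default:

```bash
cat > integration.tfvars <<'EOF'
# Illustrative values only; adjust to your environment.
proxmox_pve_node_name = ["pve01", "pve02", "pve03"]
talos_version         = "1.8.3"
kubernetes_version    = "1.31.3"
cluster_name          = "example"
cluster_vip           = "172.31.1.10"
cluster_endpoint      = "https://172.31.1.10:6443"
cluster_node_network_gateway = "172.31.1.1"
cluster_node_network         = "172.31.1.0/24"
controller_count = 1
worker_count     = 2
prefix           = "vm-talos"
api_token        = "XXXXXXXXXXX"   # never commit a real token
EOF
```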
Use the Talos Extensions documentation to fetch and set the appropriate tags for:
- QEMU Guest Agent
- DRBD
- Spin
Use `crane` (or an equivalent tool) to retrieve image tags and update your Talos extensions configuration; a sketch follows below. Then follow the instructions in the Extensions README to install or update extensions.
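For example, assuming the extension images live under `ghcr.io/siderolabs` as described in the extensions repository (repository names here are taken from that convention, not verified against your setup):

```bash
# List available tags for each extension image.
crane ls ghcr.io/siderolabs/qemu-guest-agent
crane ls ghcr.io/siderolabs/drbd
crane ls ghcr.io/siderolabs/spin

# Or resolve the digests pinned to a given Talos release via the
# extensions meta-image (tag format assumed to be v-prefixed).
crane export "ghcr.io/siderolabs/extensions:v${TALOS_VERSION#v}" \
  | tar x -O image-digests | grep -E 'qemu-guest-agent|drbd|spin'
```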
- Prepare a configuration file `talos-<version>.yml` based on your requirements (a sketch of this profile follows after this list).
- Use the Talos Imager Docker image to build the Talos disk image:
```bash
docker run --rm -i \
  -v $PWD/tmp/talos:/secureboot:ro \
  -v $PWD/tmp/talos:/out \
  -v /dev:/dev \
  --privileged \
  "ghcr.io/siderolabs/imager:<talos_version_tag>" \
  - < "tmp/talos/talos-<version>.yml"
```
- Convert the generated raw image to QCOW2 format for use with QEMU:
```bash
qemu-img convert -O qcow2 tmp/talos/nocloud-amd64.raw tmp/talos/talos-<version>.qcow2
qemu-img info tmp/talos/talos-<version>.qcow2
```
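The imager reads its profile from stdin. Below is a minimal sketch of what `talos-<version>.yml` might contain for a nocloud image, based on the profile format in the Talos imager documentation; the extension tags, version, and disk size are assumptions you should replace with the values resolved via `crane` above:

```bash
cat > "tmp/talos/talos-<version>.yml" <<'EOF'
# Minimal nocloud profile sketch; field names follow the Talos imager
# profile format. Version and extension tags are placeholders.
arch: amd64
platform: nocloud
secureboot: false
version: v1.8.3
input:
  systemExtensions:
    - imageRef: ghcr.io/siderolabs/qemu-guest-agent:<tag>
    - imageRef: ghcr.io/siderolabs/drbd:<tag>
output:
  kind: image
  imageOptions:
    diskSize: 4294967296   # 4 GiB; adjust as needed
    diskFormat: raw
  outFormat: raw
EOF
```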
- Initialize Terraform:
```bash
terraform init
```
- Plan the deployment:
```bash
terraform plan -out=tfplan -var-file=integration.tfvars
```
- Apply the deployment:
```bash
terraform apply tfplan
```
- Extract configuration files for Talos and Kubernetes:
```bash
terraform output -raw talosconfig > ~/.talos/config
terraform output -raw kubeconfig > ~/.kube/config-talos
```
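Both files contain cluster credentials, so it is worth restricting their permissions and confirming the clients can reach the cluster:

```bash
# Credentials live in these files; keep them readable only by you.
chmod 600 ~/.talos/config ~/.kube/config-talos

# Quick sanity checks; TALOSCONFIG and KUBECONFIG were exported earlier.
talosctl config info
kubectl cluster-info
```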
- Install the operator:
```bash
kubectl apply --server-side -k "https://github.com/piraeusdatastore/piraeus-operator//config/default?ref=v2.7.1"
```
- Wait for the operator to be ready:
```bash
kubectl wait pod --timeout=15m --for=condition=Ready -n piraeus-datastore -l app.kubernetes.io/component=piraeus-operator
```
- Configure the Piraeus cluster and storage class: refer to the examples in the Piraeus Talos Guide; a hedged sketch follows below.
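As a rough sketch of what that configuration might look like: the resource names, the `/dev/sdb` device, and the pool/class names below are assumptions, and the Piraeus Talos guide also applies Talos-specific satellite overrides that are not shown here. Treat the guide as authoritative.

```bash
cat <<'EOF' | kubectl apply -f -
# Create the LINSTOR cluster managed by the operator.
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstorcluster
spec: {}
---
# Define an LVM thin storage pool backed by a spare disk on each satellite.
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: storage-pool
spec:
  storagePools:
    - name: pool1
      lvmThinPool: {}
      source:
        hostDevices:
          - /dev/sdb          # assumed spare device; adjust per node
---
# Expose the pool to Kubernetes via a LINSTOR-backed StorageClass.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-storage
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  linstor.csi.linbit.com/storagePool: pool1
  linstor.csi.linbit.com/placementCount: "2"
EOF
```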
Verify the health of the cluster and its storage:
```bash
talosctl health --control-plane-nodes <controller_ips> --worker-nodes <worker_ips>
kubectl get nodes -o wide

# The linstor subcommands require the kubectl-linstor plugin.
kubectl linstor node list
kubectl linstor storage-pool list
kubectl linstor volume list
```
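To exercise the storage class end to end, you can create a throwaway PVC. The names here are hypothetical, and with `WaitForFirstConsumer` binding the claim stays Pending until a pod actually mounts it:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: piraeus-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: piraeus-storage   # assumed class from the sketch above
  resources:
    requests:
      storage: 1Gi
EOF

# Inspect the claim; delete it when done.
kubectl get pvc piraeus-test
kubectl delete pvc piraeus-test
```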
To clean up all resources:
```bash
terraform destroy -auto-approve
```
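If desired, also remove the locally generated artifacts; the paths below follow the conventions used earlier in this guide:

```bash
# Remove generated configs, logs, plans, and the image build directory.
rm -f ~/.talos/config ~/.kube/config-talos terraform.log tfplan
rm -rf tmp/talos
```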