diff --git a/docs/pages/admin-guides/infrastructure-as-code/teleport-operator/trusted-cluster.mdx b/docs/pages/admin-guides/infrastructure-as-code/teleport-operator/trusted-cluster.mdx
new file mode 100644
index 0000000000000..5ae9b9028a8a4
--- /dev/null
+++ b/docs/pages/admin-guides/infrastructure-as-code/teleport-operator/trusted-cluster.mdx
@@ -0,0 +1,407 @@
+---
+title: Deploy Trusted Clusters using Kubernetes Operator
+description: Use Teleport's Kubernetes Operator to deploy Trusted Clusters
+---
+
+
+Trusted clusters are only available for self-hosted Teleport clusters.
+
+
+This guide will explain how to use Teleport's Kubernetes Operator to deploy
+trusted clusters.
+
+## Prerequisites
+
+- Access to **two** Teleport cluster instances.
+
+  The two clusters must run the same major version of Teleport, or the leaf
+  cluster can be at most one major version behind the root cluster.
+
+- Read through the [Configure Trusted Clusters](../../management/admin/trustedclusters.mdx)
+  guide to understand how trusted clusters work.
+
+- Read through the [Looking up values from secrets](secret-lookup.mdx) guide
+ to understand how to store sensitive custom resource secrets in Kubernetes
+ Secrets.
+
+- [Helm](https://helm.sh/docs/intro/quickstart/)
+
+- [kubectl](https://kubernetes.io/docs/tasks/tools/)
+
+- Validate Kubernetes connectivity by running the following command:
+
+ ```code
+ $ kubectl cluster-info
+ # Kubernetes control plane is running at https://127.0.0.1:6443
+ # CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
+ ```
+
+
+ Users wanting to experiment locally with the Operator can use [minikube](https://minikube.sigs.k8s.io/docs/start/)
+ to start a local Kubernetes cluster:
+
+ ```code
+ $ minikube start
+ ```
+
+- Follow the [Teleport Operator guide](../teleport-operator.mdx)
+ to install the Teleport Operator in your Kubernetes cluster.
+
+ Confirm that the CRD (Custom Resource Definition) for trusted clusters has
+ been installed with the following command:
+
+ ```code
+ $ kubectl explain TeleportTrustedClusterV2.spec
+ GROUP: resources.teleport.dev
+ KIND: TeleportTrustedClusterV2
+ VERSION: v1
+
+ FIELD: spec
+
+
+ DESCRIPTION:
+ TrustedCluster resource definition v2 from Teleport
+
+ FIELDS:
+     enabled <boolean>
+ Enabled is a bool that indicates if the TrustedCluster is enabled or
+ disabled. Setting Enabled to false has a side effect of deleting the user
+ and host certificate authority (CA).
+
+ role_map <[]Object>
+ RoleMap specifies role mappings to remote roles.
+
+     token <string>
+ Token is the authorization token provided by another cluster needed by this
+ cluster to join.
+
+     tunnel_addr <string>
+       ReverseTunnelAddress is the address of the SSH proxy server of the cluster
+       to join. If not set, it is derived from `<proxy-addr>:<port>`.
+
+     web_proxy_addr <string>
+       ProxyAddress is the address of the web proxy server of the cluster to join.
+       If not set, it is derived from `<proxy-addr>:<port>`.
+ ```
+
+## Step 1/5. Prepare the leaf cluster environment
+
+This guide demonstrates how to enable users of your root cluster to access
+a server in your leaf cluster with a specific user identity and role.
+For this example, the user identity you can use to access the server in the leaf
+cluster is `visitor`. Therefore, to prepare your environment, you first need to
+create the `visitor` user and a Teleport role that can assume this username when
+logging in to the server in the leaf cluster.
+
+To add a user and role for accessing the trusted cluster:
+
+1. Open a terminal shell on the server running the Teleport agent in the leaf cluster.
+
+1. Add the local `visitor` user and create a home directory for the user by running the
+following command:
+
+ ```code
+ $ sudo useradd --create-home visitor
+ ```
+
+ The home directory is required for the `visitor` user to access a shell on the server.
+
+1. Sign out of all user logins and clusters by running the following command:
+
+ ```code
+ $ tsh logout
+ ```
+
+1. Sign in to your **leaf cluster** from your administrative workstation using
+your Teleport username:
+
+ ```code
+   $ tsh login --proxy=leafcluster.example.com --user=myuser
+ ```
+
+ Replace `leafcluster.example.com` with the Teleport leaf cluster domain and
+ `myuser` with your Teleport username.
+
+1. Create a role definition file called `visitor.yaml` with the following content:
+
+ ```yaml
+ kind: role
+ version: v7
+ metadata:
+ name: visitor
+ spec:
+ allow:
+ logins:
+ - visitor
+ node_labels:
+ '*': '*'
+ ```
+
+  The role must explicitly allow logins to nodes matching specific labels before
+  users can open SSH sessions on servers running the Teleport agent. In this
+  example, the `visitor` login is allowed access to any server.
+
+1. Create the `visitor` role by running the following command:
+
+ ```code
+ $ tctl create visitor.yaml
+ ```
+
+ You now have a `visitor` role on your leaf cluster. The `visitor` role allows
+ users with the `visitor` login to access nodes in the leaf cluster. In the next step,
+ you must add the `visitor` login to your user so you can satisfy the conditions of
+ the role and access the server in the leaf cluster.
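+
+   You can optionally confirm that the role was created by fetching it back
+   from the leaf cluster:
+
+   ```code
+   $ tctl get roles/visitor
+   ```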
+
+
+## Step 2/5. Prepare the root cluster environment
+
+Before you can test access to the server in the leaf cluster, you must have a
+Teleport user that can assume the `visitor` login. Because authentication is
+handled by the root cluster, you need to add the `visitor` login to a user in the
+root cluster.
+
+To add the login to your Teleport user:
+
+1. Sign out of all user logins and clusters by running the following command:
+
+ ```code
+ $ tsh logout
+ ```
+
+1. Sign in to your **root cluster** from your administrative workstation using
+your Teleport username:
+
+ ```code
+   $ tsh login --proxy=rootcluster.example.com --user=myuser
+ ```
+
+ Replace `rootcluster.example.com` with the Teleport root cluster domain and
+ `myuser` with your Teleport username.
+
+1. Open your user resource in your editor by running a command similar to the
+following:
+
+ ```code
+   $ tctl edit user/myuser
+ ```
+
+ Replace `myuser` with your Teleport username.
+
+1. Add the `visitor` login:
+
+ ```diff
+ traits:
+ logins:
+ + - visitor
+ - ubuntu
+ - root
+ ```
+
+1. Apply your changes by saving and closing the file in your editor.
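+
+You can confirm that the trait was added by printing your user resource again.
+This assumes your Teleport username is `myuser`:
+
+```code
+$ tctl get users/myuser
+```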
+
+## Step 3/5. Generate a trusted cluster join token
+
+Before users from the root cluster can access the server in the
+leaf cluster using the `visitor` role, you must define a trust relationship
+between the clusters. Teleport establishes trust between the root cluster and a
+leaf cluster using an **invitation token**.
+
+To set up trust between clusters, you must first create the invitation token using the
+Teleport Auth Service in the root cluster. You can then use the Teleport Auth Service
+on the leaf cluster to create a `trusted_cluster` resource that includes the invitation token,
+proving to the root cluster that the leaf cluster is the one you expect to register.
+
+To establish the trust relationship:
+
+1. Sign out of all user logins and clusters by running the following command:
+
+ ```code
+ $ tsh logout
+ ```
+
+1. Sign in to your **root cluster** from your administrative workstation using
+your Teleport username:
+
+ ```code
+   $ tsh login --proxy=rootcluster.example.com --user=myuser
+ ```
+
+ Replace `rootcluster.example.com` with the Teleport root cluster domain and
+ `myuser` with your Teleport username.
+
+1. Generate the invitation token by running the following command:
+
+ ```code
+ $ tctl tokens add --type=trusted_cluster --ttl=5m
+ The cluster invite token: (=presets.tokens.first=)
+ ```
+
+ This command generates a trusted cluster invitation token to allow an inbound
+ connection from a leaf cluster. The token can be used multiple times. In this
+ command example, the token has an expiration time of five minutes.
+
+ Note that the invitation token is only used to establish a
+ connection for the first time. Clusters exchange certificates and
+ don't use tokens to re-establish their connection afterward.
+
+ You can copy the token for later use. If you need to display the token again,
+ run the following command against your root cluster:
+
+ ```code
+ $ tctl tokens ls
+ Token Type Labels Expiry Time (UTC)
+ -------------------------------------------------------- --------------- -------- ---------------------------
+ (=presets.tokens.first=) trusted_cluster 28 Apr 22 19:19 UTC (4m48s)
+ ```
+
+1. Store the token in a Kubernetes Secret:
+
+ ```yaml
+ # secret.yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: teleport-trusted-cluster
+ annotations:
+ # This annotation allows any CR to look up this secret.
+ # You may want to restrict which CRs are allowed to look up this secret.
+ resources.teleport.dev/allow-lookup-from-cr: "*"
+ # We use stringData instead of data for the sake of simplicity, both are OK
+ stringData:
+ token: (=presets.tokens.first=)
+ ```
+
+ ```code
+ $ kubectl apply -f secret.yaml
+ ```
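+
+   Alternatively, you can create and annotate the Secret imperatively instead
+   of applying a manifest. This is a sketch; replace `<token>` with the token
+   you copied earlier:
+
+   ```code
+   $ kubectl create secret generic teleport-trusted-cluster --from-literal=token=<token>
+   $ kubectl annotate secret teleport-trusted-cluster resources.teleport.dev/allow-lookup-from-cr="*"
+   ```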
+
+## Step 4/5. Create a trusted cluster resource using `kubectl`
+
+1. Configure your trusted cluster resource in a file called `trusted-cluster.yaml`.
+
+ ```yaml
+ # trusted-cluster.yaml
+ apiVersion: resources.teleport.dev/v1
+ kind: TeleportTrustedClusterV2
+ metadata:
+ # The resource name must match the name of the trusted cluster.
+ name: rootcluster.example.com
+ spec:
+ # enabled enables the trusted cluster relationship.
+ enabled: true
+
+      # role_map maps roles in the root cluster to roles in the leaf cluster.
+ # In this case, users with the `access` role in the root cluster are granted
+ # the `visitor` role in the leaf cluster.
+ role_map:
+ - remote: access
+ local:
+ - visitor
+
+ # token specifies the join token.
+ # This value will be resolved from the previously stored secret.
+ # `teleport-trusted-cluster` is the secret name and `token` is the secret key.
+ token: "secret://teleport-trusted-cluster/token"
+
+ # tunnel_addr specifies the reverse tunnel address of the root cluster proxy.
+ tunnel_addr: rootcluster.example.com:443
+
+ # web_proxy_addr specifies the address of the root cluster proxy.
+ web_proxy_addr: rootcluster.example.com:443
+ ```
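+
+   The role mapping above is intentionally minimal. `role_map` accepts multiple
+   entries, so a sketch that also grants a second, hypothetical `auditor`
+   mapping (assuming an `auditor` role exists in both clusters) could look
+   like this:
+
+   ```yaml
+   role_map:
+     - remote: access
+       local:
+         - visitor
+     - remote: auditor
+       local:
+         - auditor
+   ```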
+
+1. Create the Kubernetes resource:
+ ```code
+ $ kubectl apply -f trusted-cluster.yaml
+ ```
+
+1. List the created Kubernetes resource:
+ ```code
+ $ kubectl get trustedclustersv2
+ NAMESPACE NAME AGE
+ default rootcluster.example.com 60s
+ ```
+
+1. Sign out of the leaf cluster and sign back in to the root cluster.
+
+1. Verify the trusted cluster configuration by running the following command in
+the root cluster:
+
+ ```code
+ $ tsh clusters
+ ```
+
+ This command should list the root cluster and the leaf cluster with output
+ similar to the following:
+
+ ```code
+ Cluster Name Status Cluster Type Labels Selected
+ --------------------------- ------ ------------ ------ --------
+ rootcluster.example.com online root *
+ leafcluster.example.com online leaf
+ ```
+
+
+## Step 5/5. Access a server in the leaf cluster
+
+With the `trusted_cluster` resource you created earlier, you can log in to the
+server in your leaf cluster as a user of your root cluster.
+
+To test access to the server:
+
+1. Verify that you are signed in as a Teleport user on the root cluster by
+running the following command:
+
+ ```code
+    $ tsh status
+ ```
+
+1. Confirm that the server running the Teleport agent is joined to the leaf cluster by
+running a command similar to the following:
+
+ ```code
+    $ tsh ls --cluster=leafcluster.example.com
+ ```
+
+ This command displays output similar to the following:
+
+ ```code
+ Node Name Address Labels
+ --------------- -------------- ------------------------------------
+ ip-172-3-1-242 127.0.0.1:3022 hostname=ip-172-3-1-242
+ ip-172-3-2-205 ⟵ Tunnel hostname=ip-172-3-2-205
+ ```
+
+1. Open a secure shell connection using the `visitor` login:
+
+ ```code
+ $ tsh ssh --cluster=leafcluster.example.com visitor@ip-172-3-2-205
+ ```
+
+1. Confirm that you are logged in as the user `visitor` on the server
+in the leaf cluster by running the following commands:
+
+ ```code
+ $ pwd
+ /home/visitor
+ $ uname -a
+ Linux ip-172-3-2-205 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
+ ```
+
+## Manage an existing trusted cluster with the Teleport Operator
+
+If you have an existing trusted cluster that you would like to manage with the
+Teleport Operator, you can do so by first setting the label
+`teleport.dev/origin: kubernetes` on the trusted cluster resource:
+
+ ```yaml
+ kind: trusted_cluster
+ metadata:
+ labels:
+ teleport.dev/origin: kubernetes
+ name: rootcluster.example.com
+ ...
+ ```
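+
+One way to apply this label is to export the existing resource from the leaf
+cluster, add the label, and re-create the resource with `tctl`. The backup
+filename here is only an example:
+
+```code
+$ tctl get trusted_cluster/rootcluster.example.com > trusted-cluster-backup.yaml
+# Edit the file to add the teleport.dev/origin: kubernetes label, then:
+$ tctl create -f trusted-cluster-backup.yaml
+```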