
Commit

Added required terraform configuration files and updated the steps in README
vanajamukkara authored and rjeberhard committed Oct 25, 2023
1 parent 0993fa1 commit 556fae8
Showing 13 changed files with 288 additions and 72 deletions.
85 changes: 83 additions & 2 deletions kubernetes/samples/scripts/terraform/README.md
@@ -29,7 +29,7 @@ Copy provided `oci.props.template` file to `oci.props` and add all required values
* `okeclustername` - The name for OCI Container Engine for Kubernetes cluster.
* `tenancy.ocid` - OCID for the target tenancy.
* `region` - name of region in the target tenancy.
* `compartment.ocid` - OCID for the target compartment.
* `compartment.ocid` - OCID for the target compartment. To find the OCID of a compartment, see https://docs.oracle.com/en-us/iaas/Content/GSG/Tasks/contactingsupport_topic-Finding_the_OCID_of_a_Compartment.htm
* `compartment.name` - Name for the target compartment.
* `ociapi.pubkey.fingerprint` - Fingerprint of the OCI user's public key.
* `ocipk.path` - API Private Key -- local path to the private key for the API key pair.
@@ -38,9 +38,52 @@ Copy provided `oci.props.template` file to `oci.props` and add all required values
* `nodepool.shape` - A valid OCI VM Shape for the cluster nodes.
* `k8s.version` - Kubernetes version.
* `nodepool.ssh.pubkey` - SSH public key (key contents as a string).
* `nodepool.imagename` - A valid image name for Node Pool creation.
* `nodepool.imagename` - A valid image OCID for Node Pool creation.
* `terraform.installdir` - Location to install Terraform binaries.
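
For reference, a fully populated `oci.props` might look like the following. These values are illustrative placeholders drawn from the commented examples in `oci.props.template`; substitute your own OCIDs, keys, and paths.
```
user.ocid=ocid1.user.oc1..xxxxyyyyzzzz
okeclustername=myokecluster
tfvars.filename=myokeclustertf
region=us-phoenix-1
tenancy.ocid=ocid1.tenancy.oc1..xxxxyyyyzzzz
compartment.ocid=ocid1.compartment.oc1..xxxxyyyyzzzz
compartment.name=<compartment_name>
ociapi.pubkey.fingerprint=c8\:b2\:da\:b2\:e8\:96\:7e\:bf\:a1\:ee\:ce\:bc\:a8\:7f\:07\:c5
ocipk.path=/scratch/<user>/.oci/oci_api_key.pem
vcn.cidr.prefix=10.1
vcn.cidr=10.1.0.0/16
nodepool.shape=VM.Standard2.1
nodepool.ssh.pubkey=<SSH public key contents as a string>
nodepool.imagename=<OCID of the node image>
k8s.version=v1.26.2
terraform.installdir=/scratch/<user>/myterraformtest
```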

Optional: to modify the shape of the nodes, edit `node-pool.tf`:
```
node_shape_config {
#Optional
memory_in_gbs = 48.0
ocpus = 4.0
}
```
Optional: to add more nodes to the cluster (by default, two worker nodes are created), modify `vcn.tf` to add worker subnets:
```
resource "oci_core_subnet" "oke-subnet-worker-3" {
availability_domain = data.oci_identity_availability_domains.ADs.availability_domains[2]["name"]
cidr_block = "${var.vcn_cidr_prefix}.12.0/24"
display_name = "${var.cluster_name}-WorkerSubnet03"
dns_label = "workers03"
compartment_id = var.compartment_ocid
vcn_id = oci_core_virtual_network.oke-vcn.id
security_list_ids = [oci_core_security_list.oke-worker-security-list.id]
route_table_id = oci_core_virtual_network.oke-vcn.default_route_table_id
dhcp_options_id = oci_core_virtual_network.oke-vcn.default_dhcp_options_id
}
```
Add corresponding `egress_security_rules` and `ingress_security_rules` for the new worker subnets:
```
egress_security_rules {
destination = "${var.vcn_cidr_prefix}.12.0/24"
protocol = "all"
stateless = true
}
```
```
ingress_security_rules {
stateless = true
protocol = "all"
source = "${var.vcn_cidr_prefix}.12.0/24"
}
```
Modify `subnet_ids` in `node-pool.tf` to add the new worker subnets to the pool:
```
subnet_ids = [oci_core_subnet.oke-subnet-worker-1.id, oci_core_subnet.oke-subnet-worker-2.id, oci_core_subnet.oke-subnet-worker-3.id]
```

To run the script, use the command:
```shell
$ kubernetes/samples/scripts/terraform/oke.create.sh oci.props
@@ -50,4 +93,42 @@ The script collects the values from the `oci.props` file and performs the following:
* Downloads and installs all needed binaries for Terraform and the Terraform OCI Provider, based on the operating system (macOS or Linux)
* Applies the configuration and creates the OKE cluster using Terraform (a simplified sketch of these steps is shown after this list)
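
Under the hood, provisioning amounts to roughly the following Terraform commands. This is a simplified sketch, not the literal script: the exact flags, working directory, and generated `<tfvars.filename>.tfvars` file come from `oke.create.sh` and your `oci.props` values.
```shell
# Simplified sketch of the steps oke.create.sh automates (assumed, not the literal script).
# ${terraform.installdir} is the Terraform install location from oci.props;
# the script appends .tfvars to the tfvars.filename property when generating the variables file.
${terraform.installdir}/terraform init                                       # install the OCI provider and initialize the working directory
${terraform.installdir}/terraform plan -var-file=<tfvars.filename>.tfvars    # preview the VCN, subnets, cluster, and node pool
${terraform.installdir}/terraform apply -var-file=<tfvars.filename>.tfvars   # create the resources
```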

Output of the `oke.create.sh` script: if there are errors in the configuration, output similar to the following is displayed:
```
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
╷
│ Error: Reference to undeclared resource
│
│ on node-pool.tf line 12, in resource "oci_containerengine_node_pool" "tfsample_node_pool":
│ 12: subnet_ids = [oci_core_subnet.oke-subnet-worker-1.id, oci_core_subnet.oke-subnet-worker-2.id, oci_core_subnet.oke-subnet-worker-3.id, oci_core_subnet.oke-subnet-worker-4.id, oci_core_subnet.oke-subnet-worker-5.id]
│
│ A managed resource "oci_core_subnet" "oke-subnet-worker-5" has not been declared in the root module.
╵
╷
│ Error: Reference to undeclared resource
│
│ on node-pool.tf line 12, in resource "oci_containerengine_node_pool" "tfsample_node_pool":
│ 12: subnet_ids = [oci_core_subnet.oke-subnet-worker-1.id, oci_core_subnet.oke-subnet-worker-2.id, oci_core_subnet.oke-subnet-worker-3.id, oci_core_subnet.oke-subnet-worker-4.id, oci_core_subnet.oke-subnet-worker-5.id]
│
│ A managed resource "oci_core_subnet" "oke-subnet-worker-5" has not been declared in the root module.
╵
```

If the cluster is created successfully, the following output is displayed:
```
Confirm access to cluster...
- able to access cluster
myokecluster cluster is up and running
```

To add new nodes to the cluster after it is created, update the `vcn.tf` and `node-pool.tf` files as described above and run the following commands:
```
${terraform.installdir}/terraform plan -var-file=<tfvars.filename>
${terraform.installdir}/terraform apply -var-file=<tfvars.filename>
```

To delete the cluster, run the `oke.delete.sh` script. It reads the `oci.props` file from the current directory and deletes the cluster.
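
For example, assuming the script takes no arguments and is run from the directory that contains `oci.props` (a minimal sketch):
```shell
# Run from the directory containing oci.props; the script reads it and deletes the OKE cluster.
$ kubernetes/samples/scripts/terraform/oke.delete.sh
```
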
34 changes: 2 additions & 32 deletions kubernetes/samples/scripts/terraform/cluster.tf
@@ -1,5 +1,5 @@
/*
# Copyright (c) 2018, 2021, Oracle and/or its affiliates.
# Copyright (c) 2020, 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
*/
variable "cluster_kubernetes_version" {
@@ -43,7 +43,7 @@ variable "node_pool_name" {
}

variable "node_pool_node_image_name" {
default = "Oracle-Linux-7.4"
default = "Oracle-Linux-7.6"
}

variable "node_pool_node_shape" {
@@ -68,11 +68,6 @@ resource "oci_containerengine_cluster" "tfsample_cluster" {
name = var.cluster_name
vcn_id = oci_core_virtual_network.oke-vcn.id

timeouts {
create = "60m"
delete = "2h"
}

#Optional
options {
service_lb_subnet_ids = [oci_core_subnet.oke-subnet-loadbalancer-1.id, oci_core_subnet.oke-subnet-loadbalancer-2.id]
@@ -86,31 +81,6 @@ resource "oci_containerengine_cluster" "tfsample_cluster" {
}
}

resource "oci_containerengine_node_pool" "tfsample_node_pool" {
#Required
cluster_id = oci_containerengine_cluster.tfsample_cluster.id
compartment_id = var.compartment_ocid
kubernetes_version = var.node_pool_kubernetes_version
name = var.node_pool_name
node_image_name = var.node_pool_node_image_name
node_shape = var.node_pool_node_shape
subnet_ids = [oci_core_subnet.oke-subnet-worker-1.id, oci_core_subnet.oke-subnet-worker-2.id, oci_core_subnet.oke-subnet-worker-3.id]

timeouts {
create = "60m"
delete = "2h"
}
node_shape_config {
#Optional
memory_in_gbs = 200.0
ocpus = 3.0
}


#Optional
quantity_per_subnet = var.node_pool_quantity_per_subnet
ssh_public_key = var.node_pool_ssh_public_key
}

output "cluster_id" {
value = oci_containerengine_cluster.tfsample_cluster.id
11 changes: 11 additions & 0 deletions kubernetes/samples/scripts/terraform/export.tf
@@ -0,0 +1,11 @@
/*
# Copyright (c) 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
*/
resource "oci_file_storage_export" "oketest_export1" {
#Required
export_set_id = oci_file_storage_export_set.oketest_export_set.id
file_system_id = oci_file_storage_file_system.oketest_fs1.id
path = "/oketest1"
}

10 changes: 10 additions & 0 deletions kubernetes/samples/scripts/terraform/export_set.tf
@@ -0,0 +1,10 @@
/*
# Copyright (c) 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
*/

resource "oci_file_storage_export_set" "oketest_export_set" {
# Required
mount_target_id = oci_file_storage_mount_target.oketest_mount_target.id
}

10 changes: 10 additions & 0 deletions kubernetes/samples/scripts/terraform/file_system.tf
@@ -0,0 +1,10 @@
/*
# Copyright (c) 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
*/

resource "oci_file_storage_file_system" "oketest_fs1" {
#Required
availability_domain = data.oci_identity_availability_domains.ADs.availability_domains[1]["name"]
compartment_id = var.compartment_ocid
}
15 changes: 15 additions & 0 deletions kubernetes/samples/scripts/terraform/mount_target.tf
@@ -0,0 +1,15 @@
/*
# Copyright (c) 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
*/
resource "oci_file_storage_mount_target" "oketest_mount_target" {
#Required
availability_domain = data.oci_identity_availability_domains.ADs.availability_domains[1]["name"]

compartment_id = var.compartment_ocid
subnet_id = oci_core_subnet.oke-subnet-worker-2.id

#Optional
display_name = "${var.cluster_name}-mt"
}

44 changes: 44 additions & 0 deletions kubernetes/samples/scripts/terraform/node-pool.tf
@@ -0,0 +1,44 @@
/*
# Copyright (c) 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
*/
resource "oci_containerengine_node_pool" "tfsample_node_pool" {
#Required
cluster_id = oci_containerengine_cluster.tfsample_cluster.id
compartment_id = var.compartment_ocid
kubernetes_version = var.node_pool_kubernetes_version
name = var.node_pool_name
node_shape = var.node_pool_node_shape
subnet_ids = [oci_core_subnet.oke-subnet-worker-1.id, oci_core_subnet.oke-subnet-worker-2.id]

timeouts {
create = "60m"
delete = "2h"
}

node_eviction_node_pool_settings {
is_force_delete_after_grace_duration = true
eviction_grace_duration = "PT0M"
}


# Using image Oracle-Linux-7.x-<date>
# Find image OCID for your region from https://docs.oracle.com/iaas/images/
node_source_details {
image_id = var.node_pool_node_image_name
source_type = "image"
boot_volume_size_in_gbs = "200"
}
node_shape_config {
#Optional
memory_in_gbs = 48.0
ocpus = 4.0
}
# Optional
initial_node_labels {
key = "name"
value = "var.cluster_name"
}
ssh_public_key = var.node_pool_ssh_public_key
}

17 changes: 0 additions & 17 deletions kubernetes/samples/scripts/terraform/oci.props.example

This file was deleted.

23 changes: 20 additions & 3 deletions kubernetes/samples/scripts/terraform/oci.props.template
@@ -1,45 +1,62 @@
# Copyright (c) 2018, 2021, Oracle and/or its affiliates.
# Copyright (c) 2018, 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.

# Properties to generate TF variables file for cluster creation from property file oci.props
#
# Copy this file to oci.props and update it with your own info, see oci.props.example as sample for values
# Copy this file to oci.props and update it with custom attribute values; see the example values provided for each property
#

# OCID can be obtained from the user info page in the OCI console
#user.ocid=ocid1.user.oc1..xxxxyyyyzzzz
user.ocid=

# name of OKE cluster
#okeclustername=myokecluster
okeclustername=

# name of tfvars file (no extension) to generate
#tfvars.filename=myokeclustertf
tfvars.filename=

# Required tenancy info
#region=us-phoenix-1
#tenancy.ocid=ocid1.tenancy.oc1..xxxxyyyyzzzz
#compartment.ocid=ocid1.compartment.oc1..xxxxyyyyzzzz
#compartment.name=<compartment_name>
region=
tenancy.ocid=
compartment.ocid=
compartment.name=


# API key fingerprint and private key location, needed for API access -- you should have added a public API key through the OCI console first; add an escape backslash \ before each colon sign
#ociapi.pubkey.fingerprint=c8\:b2\:da\:b2\:e8\:96\:7e\:bf\:a1\:ee\:ce\:bc\:a8\:7f\:07\:c5
ociapi.pubkey.fingerprint=

# path to private OCI API key
#ocipk.path=/scratch/<user>/.oci/oci_api_key.pem
ocipk.path=

# VCN CIDR -- must be unique within the compartment in the tenancy
# - assuming 1:1 cluster:vcn
# BE SURE TO SET BOTH VARS -- the first 2 octets for each variable have to match
#vcn.cidr.prefix=10.1
#vcn.cidr=10.1.0.0/16
vcn.cidr.prefix=
vcn.cidr=

# Node pool info
#nodepool.shape=VM.Standard2.1
#nodepool.ssh.pubkey=ssh-rsa AAAAB3NzaC1yc2EAAAAQAAAABAQC9FSfGdjjL+EZre2p5yLTAgtLsnp49AUVX1yY9V8guaXHol6UkvJWnyFHhL7s0qvWj2M2BYo6WAROVc0/054UFtmbd9zb2oZtGVk82VbT6aS74cMlqlY91H/rt9/t51Om9Sp5AvbJEzN0mkI4ndeG/5p12AUyg9m5XOdkgI2n4J8KFnDAI33YSGjxXb7UrkWSGl6XZBGUdeaExo3t2Ow8Kpl9T0Tq19qI+IncOecsCFj1tbM5voD8IWE2l0SW7V6oIqFJDMecq4IZusXdO+bPc+TKak7g82RUZd8PARpvYB5/7EOfVadxsXGRirGAKPjlXDuhwJYVRj1+IjZ+5Suxz user@slc13kef
#nodepool.imagename=<ocid of the image>
nodepool.shape=
nodepool.ssh.pubkey=
nodepool.imagename=

# K8S version
#k8s.version=v1.26.2
k8s.version=

#location for terraform installation
#terraform.installdir=/scratch/<user>/myterraformtest
terraform.installdir=
10 changes: 5 additions & 5 deletions kubernetes/samples/scripts/terraform/oke.create.sh
@@ -1,5 +1,5 @@
#!/bin/bash
# Copyright (c) 2018, 2022, Oracle and/or its affiliates.
# Copyright (c) 2018, 2023, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.

prop() {
@@ -76,7 +76,7 @@ createRoleBindings () {

checkClusterRunning () {

echo "Confirm we have ${KUBERNETES_CLI:-kubectl} working..."
echo "Confirm access to cluster..."
myline=`${KUBERNETES_CLI:-kubectl} get nodes | awk '{print $2}'| tail -n+2`
status="NotReady"
max=50
@@ -101,9 +101,9 @@ checkClusterRunning () {

NODES=`${KUBERNETES_CLI:-kubectl} get nodes -o wide | grep "${privateIP}" | wc -l`
if [ "$NODES" == "1" ]; then
echo '- looks good'
echo '- able to access cluster'
else
echo '- could not talk to cluster, aborting'
echo '- could not access cluster, aborting'
cd ${terraformVarDir}
terraform destroy -auto-approve -var-file=${terraformVarDir}/${clusterTFVarsFile}.tfvars
exit 1
@@ -159,4 +159,4 @@ createCluster
#check status of OKE cluster nodes, destroy if can not access them
export KUBECONFIG=${terraformVarDir}/${okeclustername}_kubeconfig
checkClusterRunning
echo "$okeclustername is up and running"
echo "$okeclustername cluster is up and running"
