- You need to provide some credentials for deployments to work.
- Create a file called env_secret_vars.yml and put it in the ./ansible/configs/CONFIGNAME/ directory. At this point the file has to exist even if no vars from it are used.
- If you prefer not to keep sensitive information in a file, you can provide these values as extra vars (-e "var=value") on the command line instead (see the sketch after the sample below).
## Logon credentials for Red Hat Network
## Required if using the subscription component
## of this playbook.
rhel_subscription_user: ''
rhel_subscription_pass: ''

## LDAP Bind Password
bindPassword: ''

## Desired openshift admin name and password
admin_user: ""
admin_user_password: ""

## AWS Credentials. This is required.
aws_access_key_id: ""
aws_secret_access_key: ""

## If using repo_method: satellite, you must set these values as well.
satellite_url: https://satellite.example.com
satellite_org: Sat_org_name
satellite_activationkey: "rhel7basic"

zabbix_auto_registration_pass: "XXXXX"
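If you would rather pass these on the command line, here is a minimal sketch; the variable names are the ones from the sample file above, and the values are placeholders, not real credentials:

ansible-playbook main.yml \
  -e "rhel_subscription_user=myuser" \
  -e "rhel_subscription_pass=CHANGEME" \
  -e "aws_access_key_id=AKIAEXAMPLE" \
  -e "aws_secret_access_key=CHANGEME"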
The file ./env_vars.yml contains all the variables you need to define to control the deployment of your environment.

For managing users on the bastion, you can override the mgr_users variable. The default is located in {{ ANSIBLE_REPO_PATH }}/configs/{{ env_type }}/mgr_users.yml and looks like:
mgr_users:
  - name: opentlc-mgr
    home: /home/opentlc-mgr
    authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4OojwKH74UWVOY92y87Tb/b56CMJoWbz2gyEYsr3geOc2z/n1pXMwPfiC2KT7rALZFHofc+x6vfUi6px5uTm06jXa78S7UB3MX56U3RUd8XF3svkpDzql1gLRbPIgL1h0C7sWHfr0K2LG479i0nPt/X+tjfsAmT3nWj5PVMqSLFfKrOs6B7dzsqAcQPInYIM+Pqm/pXk+Tjc7cfExur2oMdzx1DnF9mJaj1XTnMsR81h5ciR2ogXUuns0r6+HmsHzdr1I1sDUtd/sEVu3STXUPR8oDbXBsb41O5ek6E9iacBJ327G3/1SWwuLoJsjZM0ize+iq3HpT1NqtOW6YBLR [email protected]
If you want, for example, to add another user, just override the variable in env_secret_vars.yml:
mgr_users:
  - name: opentlc-mgr
    home: /home/opentlc-mgr
    authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4OojwKH74UWVOY92y87Tb/b56CMJoWbz2gyEYsr3geOc2z/n1pXMwPfiC2KT7rALZFHofc+x6vfUi6px5uTm06jXa78S7UB3MX56U3RUd8XF3svkpDzql1gLRbPIgL1h0C7sWHfr0K2LG479i0nPt/X+tjfsAmT3nWj5PVMqSLFfKrOs6B7dzsqAcQPInYIM+Pqm/pXk+Tjc7cfExur2oMdzx1DnF9mJaj1XTnMsR81h5ciR2ogXUuns0r6+HmsHzdr1I1sDUtd/sEVu3STXUPR8oDbXBsb41O5ek6E9iacBJ327G3/1SWwuLoJsjZM0ize+iq3HpT1NqtOW6YBLR [email protected]
  - name: fridim
    home: /home/fridim
    authorized_keys:
      - https://github.com/fridim.keys
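Note that an authorized_keys entry can be a URL instead of a literal public key: GitHub, for example, serves a user's public SSH keys at https://github.com/USERNAME.keys, which is what the second entry above relies on.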
You can run the playbook with the following arguments to override the default variable values:
REGION=us-east-1
KEYNAME=ocpkey
GUID=testocpworkshop1
ENVTYPE="ocp-workshop"
CLOUDPROVIDER=ec2
HOSTZONEID='Z186MFNM7DX4NF'
REPO_PATH='https://admin.example.com/repos/ocp/3.6/'
BASESUFFIX='.openshift.opentlc.com'
NODE_COUNT=2
REPO_VERSION=3.6
DEPLOYER_REPO_PATH=`pwd`
OSRELEASE=3.6.173.0.21
ansible-playbook main.yml -e "guid=${GUID}" -e "env_type=${ENVTYPE}" \
-e "osrelease=${OSRELEASE}" -e "repo_version=${REPO_VERSION}" \
-e "cloud_provider=${CLOUDPROVIDER}" -e "aws_region=${REGION}" \
-e "HostedZoneId=${HOSTZONEID}" -e "key_name=${KEYNAME}" \
-e "subdomain_base_suffix=${BASESUFFIX}" \
-e "bastion_instance_type=t2.large" -e "master_instance_type=c4.xlarge" \
-e "infranode_instance_type=c4.4xlarge" -e "node_instance_type=c4.4xlarge" \
-e "nfs_instance_type=m3.large" -e "node_instance_count=5" \
-e "[email protected]" \
-e "install_idm=htpasswd" -e "software_to_deploy=openshift" \
-e "ANSIBLE_REPO_PATH=${DEPLOYER_REPO_PATH}" -e "own_repo_path=${REPO_PATH}" --skip-tags=remove_self_provisioners
A variant using the satellite repo method and LDAP IdM, wrapped in a small provisioning script:

REGION=us-east-1
KEYNAME=ocpkey
GUID=dev-na1
ENVTYPE="ocp-workshop"
CLOUDPROVIDER=ec2
HOSTZONEID='Z186MFNM7DX4NF'
BASESUFFIX='.openshift.opentlc.com'
NODE_COUNT=2
REPO_VERSION=3.5
DEPLOYER_REPO_PATH=`pwd`
LOG_FILE=/tmp/${ENVTYPE}-${GUID}.log
IPAPASS=$5

if [ "$1" = "provision" ] ; then
  echo "Provisioning: ${STACK_NAME}" 1>> $LOG_FILE 2>> $LOG_FILE
  ansible-playbook ${DEPLOYER_REPO_PATH}/main.yml \
    -e "guid=${GUID}" -e "env_type=${ENVTYPE}" -e "key_name=${KEYNAME}" \
    -e "cloud_provider=${CLOUDPROVIDER}" -e "aws_region=${REGION}" -e "HostedZoneId=${HOSTZONEID}" \
    -e "subdomain_base_suffix=${BASESUFFIX}" \
    -e "bastion_instance_type=t2.large" -e "master_instance_type=c4.xlarge" \
    -e "infranode_instance_type=c4.4xlarge" -e "node_instance_type=c4.4xlarge" \
    -e "support_instance_type=c4.xlarge" -e "node_instance_count=${NODE_COUNT}" \
    -e "ipa_host_password=${IPAPASS}" -e "install_idm=ldap" \
    -e "repo_method=satellite" -e "repo_version=${REPO_VERSION}" \
    -e "[email protected]" \
    -e "software_to_deploy=openshift" -e "osrelease=3.5.5.15" -e "docker_version=1.12.6" \
    -e "ANSIBLE_REPO_PATH=${DEPLOYER_REPO_PATH}" 1>> $LOG_FILE 2>> $LOG_FILE
fi
Another variant, using the rhn repo method:

REGION=us-east-1
KEYNAME=ocpkey
GUID=rdu
ENVTYPE="ocp-workshop"
CLOUDPROVIDER=ec2
HOSTZONEID='Z186MFNM7DX4NF'
REPO_PATH='https://admin.example.com/repos/ocp/3.5/'
DEPLOYER_REPO_PATH=/opt/ansible_agnostic_deployer/ansible
BASESUFFIX='.openshift.opentlc.com'
REPO_VERSION=3.5
NODE_COUNT=2
ansible-playbook ${DEPLOYER_REPO_PATH}/main.yml \
-e "guid=${GUID}" \
-e "env_type=${ENVTYPE}" \
-e "cloud_provider=${CLOUDPROVIDER}" -e "aws_region=${REGION}" \
-e "HostedZoneId=${HOSTZONEID}" -e "key_name=${KEYNAME}" \
-e "subdomain_base_suffix=${BASESUFFIX}" \
-e "bastion_instance_type=t2.large" -e "master_instance_type=c4.xlarge" \
-e "infranode_instance_type=c4.4xlarge" -e "node_instance_type=c4.4xlarge" \
-e "nfs_instance_type=t2.large" -e "node_instance_count=${NODE_COUNT}" \
-e "install_idm=htpasswd" -e "software_to_deploy=openshift" \
-e "[email protected]" \
-e "own_repo_path=${REPO_PATH}" -e"repo_method=rhn" -e"ANSIBLE_REPO_PATH=${DEPLOYER_REPO_PATH}" \
-e "osrelease=3.5.5.31" -e "repo_version=${REPO_VERSION}" -e "docker_version=1.12.6" \
--skip-tags=remove_self_provisioners,opentlc-integration
You can provide either ipa_host_password or the pair ipa_kerberos_user/ipa_kerberos_password to register the host with the IPA server. See roles/bastion-opentlc-ipa.
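For example, either of the following extra vars would do (the values here are placeholders):

-e "ipa_host_password=CHANGEME"

or:

-e "ipa_kerberos_user=kerbuser" -e "ipa_kerberos_password=CHANGEME"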
If you set the following variable, 3 support nodes will be deployed and used for glusterfs:

-e install_glusterfs=true
Note: This will discard NFS PVs for logging (elasticsearch) and metrics (cassandra); storage for those pods will instead be EmptyDir. Proper persistent storage setup is left to the user as a post-install step.
Tested on OCP 3.7. See the examples in scripts/examples.
Use the scaleup.yml playbook. Increase node_instance_count and new_node_instance_count accordingly. For example, if your previous node_instance_count was 2:
REGION=us-west-1
KEYNAME=ocpkey
GUID=na1
ENVTYPE="ocp-workshop"
CLOUDPROVIDER=ec2
HOSTZONEID='Z186MFNM7DX4NF'
REPO_PATH='https://admin.example.com/repos/ocp/3.5/'
MINOR_VERSION="3.5.5.15"
INSTALLIPA=false
BASESUFFIX='.openshift.opentlc.com'
REPO_VERSION=3.5
NODE_COUNT=4
NEW_NODE_COUNT=2
DEPLOYER_REPO_PATH=`pwd`
ansible-playbook ./configs/${ENVTYPE}/scaleup.yml \
-e "ANSIBLE_REPO_PATH=${DEPLOYER_REPO_PATH}" \
-e "HostedZoneId=${HOSTZONEID}" \
-e "bastion_instance_type=t2.large" \
-e "cloud_provider=${CLOUDPROVIDER}" \
-e "guid=${GUID}" \
-e "infranode_instance_type=m4.4xlarge" \
-e "install_idm=htpasswd" \
-e "user_password=PASSWORD" \
-e "admin_password=PASSWORD" \
-e "admin_user=admin" \
-e "install_ipa_client=${INSTALLIPA}" \
-e "nfs_instance_type=m3.large" \
-e "osrelease=${MINOR_VERSION}" \
-e "own_repo_path=${REPO_PATH}" \
-e "[email protected]" \
-e "repo_method=file" \
-e "subdomain_base_suffix=${BASESUFFIX}" \
--skip-tags=remove_self_provisioners,install_zabbix \
-e "aws_region=${REGION}" \
-e "docker_version=1.12.6" \
-e "env_type=${ENVTYPE}" \
-e "key_name=${KEYNAME}" \
-e "master_instance_type=m4.xlarge" \
-e "node_instance_count=${NODE_COUNT}" \
-e "new_node_instance_count=${NEW_NODE_COUNT}" \
-e "node_instance_type=c4.4xlarge" \
-e "repo_version=${REPO_VERSION}"
To destroy the environment:

REGION=us-west-1
KEYNAME=ocp-workshop-openshift
GUID=na1
ENVTYPE="ocp-workshop"
CLOUDPROVIDER=ec2
HOSTZONEID='Z186MFNM7DX4NF'
BASESUFFIX='.openshift.opentlc.com'

ansible-playbook ./configs/${ENVTYPE}/destroy_env.yml \
-e "guid=${GUID}" \
-e "env_type=${ENVTYPE}" \
-e "cloud_provider=${CLOUDPROVIDER}" \
-e "aws_region=${REGION}" \
-e "HostedZoneId=${HOSTZONEID}" \
-e "key_name=${KEYNAME}" \
-e "subdomain_base_suffix=${BASESUFFIX}"