OpenShift Origin in TripleO with multiple masters and multiple nodes
Earlier I wrote about how to deploy vanilla Kubernetes as a TripleO service. This article follows suit and describes how to deploy OpenShift Origin the same way, again with 3 masters and 3 nodes, using TripleO Quickstart as the driving mechanism. Please be aware that these posts describe a work-in-progress development environment setup, not a polished final end user experience.
Architecture
We use exactly the same architectural approach as described earlier in the Kubernetes post. Please read the Architecture section of that post if you are interested in the general perspective.
Integration of openshift-ansible
We install OpenShift Origin by integrating with the openshift-ansible installer. We use TripleO’s external_deploy_tasks to generate the necessary input files for openshift-ansible, and then we execute it. The files that we generate are:
- an inventory (see the sketch below),
- a playbook, configuring NetworkManager and then including the byo/config.yml playbook from openshift-ansible (“byo” stands for “bring your own hosts”, TripleO can take care of provisioning the hosts),
- a file with Ansible variables for openshift-ansible.
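To give an idea of what the generated inventory is shaped like, here is a rough, hypothetical sketch (the hostnames and variables are illustrative only; the real file is produced by the external_deploy_tasks):

[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_user=heat-admin
openshift_deployment_type=origin

[masters]
overcloud-controller-0
overcloud-controller-1
overcloud-controller-2

[nodes]
overcloud-novacompute-0
overcloud-novacompute-1
overcloud-novacompute-2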
If you want to explore the code, see the service template openshift-master.yaml as of 12th January 2018. There’s also an openshift-worker.yaml service template, which tags nodes to be recognized by the inventory generator as workers, and sets up worker node firewall rules.
Deployment
Prepare the environment
The assumed starting point is a deployed undercloud with 6 virtual baremetal nodes defined and ready for use.
There are several paths to do this with TripleO Quickstart. The easiest one is probably to deploy a full 3 controller + 3 compute environment using --nodes config/nodes/3ctlr_3comp.yml, and then delete the overcloud stack. If your virt host doesn’t have enough capacity for that many VMs, you can use a smaller configuration, e.g. 1ctlr_1comp.yml or just 1ctlr.yml.
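For illustration, that path could look roughly like this (the exact quickstart.sh flags vary between Quickstart versions, so treat them as an assumption to adapt):

# on the host where you run Quickstart: deploy a full 3+3 environment
./quickstart.sh --release master --nodes config/nodes/3ctlr_3comp.yml $VIRTHOST
# then on the undercloud: delete just the overcloud stack, keeping
# the undercloud and the registered baremetal nodes around
source ~/stackrc
openstack stack delete overcloud --yes --wait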
For detailed information on how to deploy with Quickstart, please refer to the TripleO Quickstart docs.
Deploy the overcloud
Let’s prepare an extra-oooq-vars.yml file. It’s a file with Quickstart variables, so it will have to be on the host where you run Quickstart. The contents will be as follows:
# use t-h-t with our cherry-picks
overcloud_templates_path: /home/stack/tripleo-heat-templates
# use NTP, clustered systems don't like time skew
ntp_args: --ntp-server pool.ntp.org
# make validation errors non-fatal
validation_args: ''
# network config in the featureset is for CI, override it back to defaults
network_args: -e /home/stack/net-config-noop.yaml
# deploy with config-download mechanism, we'll execute the actual
# software deployment via ansible subsequently
config_download_args: >-
  -e /home/stack/tripleo-heat-templates/environments/config-download-environment.yaml
  --disable-validations
  --verbose
# do not run the workflow
deploy_steps_ansible_workflow: false
And the /home/stack/net-config-noop.yaml file (referenced above) will have to be on the undercloud; it has these contents:
resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-noop.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-noop.yaml
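Since Quickstart usually runs on a different host than the undercloud, the file needs to be copied over. A minimal sketch of one way to put it there (assuming $WORKSPACE points to your Quickstart workspace as exported below, and that the generated ssh.config.ansible defines an undercloud host alias):

ssh -F $WORKSPACE/ssh.config.ansible undercloud \
  "cat > /home/stack/net-config-noop.yaml" <<'EOF'
resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-noop.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-noop.yaml
EOF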
For OpenShift Origin specifically, it’s important to set the controllers’ NIC config to net-config-noop.yaml to avoid depending on OVS. (The default net-config-bridge.yaml would create a br-ex OVS bridge. Then openshift-ansible would stop OVS on baremetal and start OVS in a container. Given that the default route would already go through br-ex at that point, stopping baremetal OVS could effectively “brick” the controllers in terms of network traffic.)
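Once the deployment is done, you can sanity-check this on a controller. A rough sketch (it assumes the ovs-vsctl tool is even present on the node):

# on a controller: there should be no br-ex bridge on baremetal,
# and the default route should go via a plain interface
sudo ovs-vsctl list-br || echo "no baremetal OVS (expected)"
ip route show default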
Now let’s reuse the undercloud deployed previously by Quickstart, and deploy the overcloud Heat stack. This could be done with quickstart.sh too, but personally I prefer running ansible-playbook for more direct control:
# run this where you run Quickstart (likely not the undercloud)
# VIRTHOST must point to the machine that hosts your Quickstart VMs,
# edit this if necessary
export VIRTHOST=$(hostname -f)
# WORKSPACE must point to your Quickstart workspace directory,
# edit this if necessary
export WORKSPACE=$HOME/.quickstart
source $WORKSPACE/bin/activate
export ANSIBLE_ROLES_PATH=$WORKSPACE/usr/local/share/ansible/roles:$WORKSPACE/usr/local/share/tripleo-quickstart/roles
export ANSIBLE_LIBRARY=$WORKSPACE/usr/local/share/ansible:$WORKSPACE/usr/local/share/tripleo-quickstart/library
export SSH_CONFIG=$WORKSPACE/ssh.config.ansible
export ANSIBLE_SSH_ARGS="-F ${SSH_CONFIG}"
ansible-playbook -v \
    -i $WORKSPACE/hosts \
    -e local_working_dir=$WORKSPACE \
    -e virthost=$VIRTHOST \
    -e @$WORKSPACE/config/release/tripleo-ci/master.yml \
    -e @$WORKSPACE/config/nodes/3ctlr_3comp.yml \
    -e @$WORKSPACE/config/general_config/featureset033.yml \
    -e @extra-oooq-vars.yml \
    $WORKSPACE/playbooks/quickstart-extras-overcloud.yml
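Before moving on, it’s worth confirming that the overcloud Heat stack reached CREATE_COMPLETE. A quick check, for example:

# on the undercloud
source ~/stackrc
openstack stack list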
Now let’s install openshift-ansible on the undercloud. For development purposes I get the 3.6 branch from source, but it can also be installed via RPM:
sudo yum -y install centos-release-openshift-origin36
sudo yum -y install openshift-ansible-playbooks
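For the from-source variant mentioned above, something like this should do (release-3.6 matches openshift-ansible’s usual release branch naming, but treat the exact branch name as an assumption):

# on the undercloud, as an alternative to the RPMs
git clone -b release-3.6 https://github.com/openshift/openshift-ansible.git
# the RPM install would put the content under
# /usr/share/ansible/openshift-ansible instead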
With the overcloud Heat stack created and openshift-ansible present, we can fetch the overcloud software config definition and deploy it with Ansible. In real use cases this can be done together with Heat stack creation via the openstack overcloud deploy command, but we’re taking an explicit approach here:
# clean any previous config downloads
rm -rf ~/config-download/tripleo*
# produce Ansible playbooks from Heat stack outputs
tripleo-config-download -s overcloud -o ~/config-download
# skip this in case you want to manually check fingerprints
export ANSIBLE_HOST_KEY_CHECKING=no
# deploy the software configuration of overcloud
ansible-playbook \
    -v \
    -i /usr/bin/tripleo-ansible-inventory \
    ~/config-download/tripleo-*/deploy_steps_playbook.yaml
This applies the software configuration, including installation of OpenShift Origin via openshift-ansible.
Hello Origin in TripleO
At the current stage, it’s best to ssh to an overcloud controller node to manage the Origin cluster with oc or kubectl.
After smoke testing with e.g. oc status, you can try deploying something on the Origin cluster, e.g. according to the instructions in the OpenShift Origin CLI Walkthrough.
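For illustration, a minimal smoke test session could look like this (it assumes the admin kubeconfig is at /etc/origin/master/admin.kubeconfig, which is where openshift-ansible puts it, and that the openshift/hello-openshift image is pullable from the node):

# on an overcloud controller node
export KUBECONFIG=/etc/origin/master/admin.kubeconfig
oc status
oc get nodes
# deploy a trivial test app
oc new-app openshift/hello-openshift
oc get pods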