Building Execution Environments
As you probably know, we need execution environments to run our Ansible code. Sometimes we need a special execution environment and have to build it by hand, or do we?
When you build an execution environment, you need to install ansible-builder, create a venv, and do quite a few things more.
The process starts with the creation of an execution-environment.yml, which holds the requirements to be built into
the execution environment image.
To build the EE from code, we will be templating everything that is needed for the build.
Building an EE image requires a base image to build on top of, and most people use an existing EE image as base. We use a minimal container image instead and add ansible-runner ourselves; this way we have full control over what is in the image.
The first one you might build by hand, but as we like to do everything in code, we want to build the execution environment from code.
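For reference, a manual build boils down to something like this (a rough sketch; the venv path and image tag are just examples):

# Manual approach: install the tooling in a venv and build once
python3 -m venv builder-venv && source builder-venv/bin/activate
pip install ansible-builder
ansible-builder build -f execution-environment.yml --tag my-custom-ee:1.0

Automating these steps, and the upload to automation hub, is what the rest of this article is about.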
GitLab repository
To store the definition of an execution environment in code, we need a repository and a pipeline.
In this repository we store the following structure:
.
├── ee_vars.yml
├── files
│   ├── dummy.file
│   └── ca.crt
├── host_vars
│   ├── hub_dev
│   │   └── hub_dev.yml
│   └── hub_prod
│       └── hub_prod.yml
├── inventory.yml
├── main.yml
├── README.md
└── templates
    ├── bindep.txt.j2
    ├── create_image.sh.j2
    ├── execution-environment.yml.j2
    ├── requirements.txt.j2
    └── requirements.yml.j2
These are all the files we need in this repository. You might notice that the pipeline itself is missing; we store it in another location, both for security and to keep pipelines uniform.
We will go through the files and explain how to configure this to build an EE.
The EE that I will be building here has a number of collections in it to run a playbook.
The files
We now describe each file and its contents.
ee_vars.yml
The ee_vars.yml file describes everything we want added on top of the base image in the execution environment. It holds the variables used to fill the templates when the playbook is run by the pipeline.
---
# Put the contents of the files in here
# requirements.txt == ee_python
# requirements.yml == ee_collections
# bindep.txt == ee_system
#
use_ansible_cfg: true
ee_image_name: ee-demo-image
ee_python:
- requests
- python-gitlab
ee_collections:
- community.general;==8.5.2
- ansible.posix
- ansible.windows
ee_system:
- python3-systemd [platform:rpm]
- python3-pip [platform:rpm]
basic_image: quay.io/rockylinux/rockylinux:9.5-minimal
ee_version: 1.0
Above are the contents of the ee_vars.yml file; as you can see, all the variables needed to create an execution environment are collected in one file.
Note the slightly unusual notation for a collection version: there is a semicolon between the collection name and the version. This is done so the templating can detect whether a version number is given.
files/ca.crt
The files directory needs to be present, and git tends to remove it when there is nothing in it. This causes problems during recovery, so make sure there is always a dummy file in here. During the pipeline run, the ansible.cfg created by the base_config run is copied here, to be used during builds. I use a non-standard certificate authority, so the ca.crt is in here for my setup.
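For context, the ansible.cfg fetched during the pipeline run typically contains at least the galaxy server configuration pointing at your private hub. A minimal sketch (URLs and token are placeholders; your base_config may generate more):

[galaxy]
server_list = rh_certified, community

[galaxy_server.rh_certified]
url = https://<rhaap_fqdn_for_env>/api/galaxy/content/rh-certified/
token = <hub_api_token>

[galaxy_server.community]
url = https://<rhaap_fqdn_for_env>/api/galaxy/content/community/
token = <hub_api_token>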
host_vars/hub_<env>/hub_<env>.yml
This file holds the variables to connect with the RHAAP environment:
---
ee_ah_host: <rhaap_fqdn_for_env>
ee_validate_certs: false
registry_username: <redhat_account> # only needed when downloading from redhat.io
registry_password: <redhat_password> # only needed when downloading from redhat.io
ahub_username: ee_upload
ahub_password: <ee_upload_password>
These are used by the code to connect to the environment.
The user for automation hub is created using configuration as code; it is a member of the hub_ee group and has the rights to upload new execution environments.
If you are not using this user, you will need to use the admin account.
inventory.yml
The inventory to pass to the ansible playbook.
---
dev:
  hosts:
    hub_dev:
prod:
  hosts:
    hub_prod:
It simply maps each environment group to a host, which in turn pulls in the connection variables from host_vars.
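You can verify that the right variables are picked up for an environment with ansible-inventory:

ansible-inventory -i inventory.yml --host hub_dev

This prints the variables that will be used for the hub_dev host, including the ones from host_vars.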
templates/bindep.txt.j2
Template to create the file bindep.txt from the variables in ee_vars.yml if present.
{% for package in ee_system %}
{{ package }}
{% endfor %}
templates/requirements.txt.j2
Template to create the file requirements.txt from the variables in ee_vars.yml.
{% for package in ee_python %}
{{ package }}
{% endfor %}
templates/requirements.yml.j2
Template to create the file requirements.yml from the variables in ee_vars.yml.
---
collections:
{% for collection in ee_collections %}
{% set name = collection | split(';') %}
  - name: {{ name[0] }}
{% if name[1] is defined %}
    version: "{{ name[1] }}"
{% endif %}
{% endfor %}
Here you can see that the template checks whether there is a semicolon in the variable and adjusts the output accordingly.
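With the example ee_vars.yml shown earlier, the rendered requirements.yml comes out roughly like this:

---
collections:
  - name: community.general
    version: "==8.5.2"
  - name: ansible.posix
  - name: ansible.windows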
templates/create_image.sh.j2
Template to create the script that will build the EE in the end.
ansible-builder build --tag {{ ee_image_name }}
podman tag localhost/{{ ee_image_name }} {{ ee_ah_host }}/{{ ee_image_name }}:{{ ee_version }}
podman login --tls-verify=false -u {{ ahub_username }} -p {{ ahub_password }} {{ ee_ah_host }}
podman push --tls-verify=false {{ ee_ah_host }}/{{ ee_image_name }}:{{ ee_version }}
In most cases you will see that a virtual env is created before the build is started. We don't do this; we run the build in a Docker container that is created for this purpose. Because we use a container, everything is cleaned up when we stop the build and remove the container.
The definition of this container can be found later in this document.
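If you want to try the same build outside the pipeline, you can run the builder image locally. A sketch, assuming the image from the pipeline example, a checkout of this repository in the current directory, and a valid ansible.cfg already present in files/:

# Run the build container by hand (the /work mount path is just an example)
podman run --rm -it --privileged \
  -v "$(pwd)":/work:Z -w /work \
  docker.homelab:5000/ee-builder-image:1.0 \
  ansible-playbook main.yml -i inventory.yml -e instance=hub_dev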
templates/execution-environment.yml.j2
The template that defines the execution-environment from beginning to end.
---
version: 3
build_arg_defaults:
  ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: "-c"
{% if (ee_collections|length > 0) or (ee_python|length > 0) or (ee_system|length > 0) %}
dependencies:
  ansible_core:
    package_pip: ansible-core>=2.16,<=2.17
  ansible_runner:
    package_pip: ansible-runner
  python_interpreter:
    package_system: "python311"
    python_path: "/usr/bin/python3.11"
  exclude:
    system:
      - openshift-clients
    python:
      - systemd-python
{% endif %}
{% if ee_collections|length > 0 %}
  galaxy: requirements.yml
{% endif %}
{% if ee_python|length > 0 %}
  python: requirements.txt
{% endif %}
{% if ee_system|length > 0 %}
  system: bindep.txt
{% endif %}
images:
  base_image:
    name: {{ basic_image }}
options:
  container_init:
    package_pip: dumb-init>=1.2.5
    entrypoint: '["dumb-init"]'
    cmd: '["csh"]'
  package_manager_path: /usr/bin/microdnf
  relax_passwd_permissions: false
  skip_ansible_check: true
additional_build_files:
  - src: files/ansible.cfg
    dest: configs
additional_build_steps:
  prepend_base:
    - COPY _build/configs/ansible.cfg /etc/ansible/ansible.cfg
    - ADD _build/configs/ansible.cfg /home/runner/.ansible.cfg
  append_final:
    - RUN ls -la /etc
This template will take any RHEL-like container as base image and, when rendered together with ee_vars.yml, create an execution environment definition. This rendering is done by the playbook main.yml.
main.yml
The main playbook that will be run by the pipeline.
---
- name: Playbook to create custom EE
  hosts: "{{ instance | default('dummy') }}"
  connection: local
  gather_facts: false

  tasks:
    - name: Include the definition of the ee
      ansible.builtin.include_vars:
        file: ee_vars.yml

    - name: Copy ansible.cfg to home dir
      ansible.builtin.copy:
        src: ansible.cfg
        dest: ~/ansible.cfg
        mode: '0600'
      when: use_ansible_cfg

    - name: Template the execution-environment.yml
      ansible.builtin.template:
        src: execution-environment.yml.j2
        dest: execution-environment.yml
        mode: '0644'

    - name: Template the bindep.txt
      ansible.builtin.template:
        src: bindep.txt.j2
        dest: bindep.txt
        mode: '0644'
      when: (ee_system is defined) and (ee_system | length > 0)

    - name: Template the requirements.yml
      ansible.builtin.template:
        src: requirements.yml.j2
        dest: requirements.yml
        mode: '0644'
      when: (ee_collections is defined) and (ee_collections | length > 0)

    - name: Template the requirements.txt
      ansible.builtin.template:
        src: requirements.txt.j2
        dest: requirements.txt
        mode: '0644'
      when: (ee_python is defined) and (ee_python | length > 0)

    - name: Template the creation script
      ansible.builtin.template:
        src: create_image.sh.j2
        dest: create_image.sh
        mode: '0700'

    - name: Create the ee_image
      block:
        - name: Create the image
          ansible.builtin.command: ./create_image.sh
          register: _create_output
          changed_when: _create_output.rc == 0
      rescue:
        - name: Show the output if any error
          ansible.builtin.debug:
            var: _create_output.stdout_lines
      always:
        - name: Fail the play if any error
          ansible.builtin.fail:
            msg: "Build failed, read the error above to find why"
          when: _create_output.rc != 0
This playbook templates all files and runs the creation script, which builds the execution environment and uploads it into your automation hub. If any error occurs, it will show you the output in the pipeline job log in GitLab.
Build image
The Docker image to run this build in is defined as follows:
Dockerfile:
FROM registry.access.redhat.com/ubi9/python-311:latest
USER root
COPY files/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt
COPY files/requirements.yml /tmp/requirements.yml
COPY files/ansible.cfg /etc/ansible/ansible.cfg
RUN pip install ansible-core ansible-lint ansible-builder pyyaml && \
dnf -y install podman findutils fuse3-devel fuse-overlayfs && \
dnf clean all
RUN ansible-galaxy collection install -r /tmp/requirements.yml
RUN /usr/bin/chmod 777 -R /opt/ && \
/usr/bin/update-ca-trust
Build this image and upload it to your local image registry so you can pull it in your pipeline.
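Assuming the registry name used in the pipeline example below, that could look like this (the --tls-verify flag only matters for registries without a trusted certificate):

podman build -t docker.homelab:5000/ee-builder-image:1.0 .
podman push --tls-verify=false docker.homelab:5000/ee-builder-image:1.0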
Pipeline
For testing, you can place this .gitlab-ci.yml in your repository:
# Pull the ee-builder-image from local registry
image: docker.homelab:5000/ee-builder-image:1.0

# List of pipeline stages
stages:
  - build_ee_image
  - lint_and_merge

lint_after_commit:
  tags:
    - shared
  stage: lint_and_merge
  rules:
    - if: '$CI_COMMIT_REF_NAME != "dev"
        && $CI_COMMIT_REF_NAME != "test"
        && $CI_COMMIT_REF_NAME != "accp"
        && $CI_COMMIT_REF_NAME != "prod"'
  script:
    - echo "From pipeline - Start linting on '$CI_COMMIT_REF_NAME'"
    - HOST_VAR=$(echo "AUTOM_HOST_${CI_COMMIT_BRANCH}" | tr '[:lower:]' '[:upper:]')
    - sshpass -p "${PASSWORD}" scp -o StrictHostKeyChecking=no ansible@$(printenv $HOST_VAR):/etc/ansible/ansible.cfg files/ansible.cfg
    - ansible-lint

build_ee_image:
  tags:
    - shared
  stage: build_ee_image
  rules:
    - if: '($CI_COMMIT_BRANCH == "dev"
        || $CI_COMMIT_BRANCH == "test"
        || $CI_COMMIT_BRANCH == "accp"
        || $CI_COMMIT_BRANCH == "prod")
        && $CI_PIPELINE_SOURCE == "push"
        && $CI_COMMIT_MESSAGE =~ /Merge branch/i'
  script:
    - echo "From pipeline - Start build image on '$CI_COMMIT_REF_NAME' Environment"
    - HOST_VAR=$(echo "AUTOM_HOST_${CI_COMMIT_BRANCH}" | tr '[:lower:]' '[:upper:]')
    - sshpass -p "${PASSWORD}" scp -o StrictHostKeyChecking=no ansible@$(printenv $HOST_VAR):/etc/ansible/ansible.cfg files/ansible.cfg
    - ansible-playbook main.yml
        -i inventory.yml
        -e instance=hub_$CI_COMMIT_REF_NAME
NOTE: For this pipeline to work correctly, you should add some CI/CD variables to the repositories that use this pipeline. The variables are:
- PASSWORD # the password for the account that copies the ansible.cfg
- AUTOM_HOST_DEV # the fqdn for the development automation platform
- AUTOM_HOST_PROD # the fqdn for the production automation platform
If you have more environments, you'll need more variables.
The sshpass line ensures you have a current ansible.cfg during the build of your EE. The other option is to add an ansible.cfg to the repository, but then it has to be updated every time you run the base_config of the configuration as code.
This is all you need to create EEs from an automated pipeline.
Later, we will show the code to generate this repository from RHAAP, using a job template.