Creating the pipeline image

To run the pipeline from a GitLab runner or another git service, we need an image in which the required collections are installed.
This image must be available to the runner of the git service, which fetches and runs it.

We use a docker image which we built using the following configuration.

On a server with docker installed, we created a directory with the following content:

.
|-- Dockerfile
|-- files
|   |-- ansible.cfg
|   |-- ca.crt
|   `-- requirements.yml
`-- pm_build.sh

The files in this structure are as follows:

ca.crt

The certificate of your CA, if you run your own certificate setup such as easy-rsa. If you use publicly trusted certificates, this file is not needed.

Dockerfile

The Dockerfile tells the docker build engine what to build and how. This process is well documented, so I will just give you the files to build the image.
Tweak them at your convenience. This works for me and results in an image of manageable size; every addition will enlarge the image and may eventually hurt performance, depending on your configuration.

# FROM registry.access.redhat.com/ubi9/python-311:latest
FROM quay.io/rockylinux/rockylinux:9.6-minimal
USER root

COPY files/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt
COPY files/requirements.yml /tmp/requirements.yml
COPY files/ansible.cfg /etc/ansible/ansible.cfg
RUN microdnf -y install python3.11 podman python3-systemd python3.11-devel \
    python3-gssapi python3.11-requests python3.11-wheel krb5-libs openssh-clients \
    git-core wget findutils && \
    microdnf clean all && \
    rm /usr/bin/python && \
    ln -s /usr/bin/python3.9 /usr/bin/python && \
    rm /usr/bin/python3 && \
    ln -s /usr/bin/python3.11 /usr/bin/python3
RUN wget https://bootstrap.pypa.io/get-pip.py && \
    python3 ./get-pip.py && \
    pip3 install ansible-core ansible-lint pyyaml python-gitlab hvac
RUN ansible-galaxy collection install -r /tmp/requirements.yml && \
    ansible-galaxy collection list
RUN /usr/bin/chmod 777 -R /opt/ && \
    /usr/bin/update-ca-trust

pm_build.sh

This script does the hard work for me and ensures that the build is executed the same way every time.
As this comes from my personal environment, the account information is in here; replace the placeholders before use, or delete the script.

#!/bin/bash
version=1.0
# docker login -u {username} -p {password} registry.redhat.io
docker build -t cac-image .
docker tag cac-image {your-docker-registry-url}/cac-image:${version}
docker push {your-docker-registry-url}/cac-image:${version}
docker rmi cac-image
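
If a file under files/ is missing, docker build fails half-way through or bakes an incomplete image. A small guard at the top of pm_build.sh can catch this early; check_build_inputs is a hypothetical helper name, and the file names match the directory tree above:

```shell
# check_build_inputs: hypothetical pre-flight helper for pm_build.sh.
# Returns non-zero with a clear message when a build input is missing,
# instead of letting docker build fail somewhere in the middle.
check_build_inputs() {
    for f in Dockerfile files/ansible.cfg files/ca.crt files/requirements.yml; do
        if [ ! -f "$f" ]; then
            echo "missing build input: $f" >&2
            return 1
        fi
    done
    return 0
}
```

Calling check_build_inputs (and exiting on failure) right after the shebang makes the script fail fast before any docker command runs.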

If you need to rebuild the image from scratch, change the first docker command to: docker build --no-cache -t cac-image . This will completely rebuild all layers of your image, as small changes might otherwise not be picked up.

ansible.cfg

Assuming you first configured the Red Hat Ansible Automation Platform (RHAAP) service by hand, or have a previous installation, configure the ansible.cfg to point to your functional environment. This ensures you can pull collections from there. The collections mentioned in the requirements.yml should be available in this installation.

[galaxy]
server_list = community_repo,rh-certified_repo,published_repo,validated_repo
validate_certs=false
ignore_certs=true
galaxy_ignore_certs=true

[galaxy_server.community_repo]
url=https://{rhaap-fqdn}/api/galaxy/content/community
token={token}

[galaxy_server.rh-certified_repo]
url=https://{rhaap-fqdn}/api/galaxy/content/rh-certified
token={token}

[galaxy_server.published_repo]
url=https://{rhaap-fqdn}/api/galaxy
token={token}

[galaxy_server.validated_repo]
url=https://{rhaap-fqdn}/api/galaxy/content/validated
token={token}

requirements.yml

These are the collections we will need to run our pipelines.

---
collections:
  - infra.aap_configuration
  - ansible.controller
  - ansible.eda
  - ansible.hub
  - ansible.platform
  - community.general
  - hashicorp.vault
...
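
If you want reproducible image builds, the same requirements.yml format also accepts version constraints per collection. The constraints below are placeholders; pin to whatever versions your environment has validated:

```yaml
---
collections:
  - name: infra.aap_configuration
    version: ">=1.0.0"   # placeholder constraint
  - name: community.general
    version: ">=8.0.0"   # placeholder constraint
...
```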

As we ultimately want no secrets in the pipeline, we integrate the requirements for our vault solution into the image.

Build the image, upload it to the registry, and use it in the pipeline for the configuration as code.
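
As a sketch of that last step, a minimal .gitlab-ci.yml job that runs a configuration playbook from the image could look like this. The registry URL, image tag, job name and playbook name are placeholders:

```yaml
# Minimal, hypothetical pipeline job using the image built above.
stages:
  - configure

configure-aap:
  stage: configure
  image: {your-docker-registry-url}/cac-image:1.0
  script:
    - ansible-playbook configure.yml
```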
