Automating Config As Code changes

When you go the automation route, everything must be automated, even adding teams (organizations) to the automation platform. The same goes for deletions and for changes such as enabling EDA in config as code. In these pages you can find some examples of how to do this.

Building Execution environments

As you probably know, we need execution environments to run our ansible code. Sometimes we need a special execution environment and have to build it by hand. Or do we?

When you build an execution environment, you need to install ansible-builder, create a venv and many more things. The process starts with the creation of an "execution-environment.yml", which holds the requirements to be built into the execution-environment image. To build the EE from code, we will be templating everything that is needed for the build.

Building an EE image requires a base image to build on top of. Most people use an existing EE image as base, but we use a minimal container image and add ansible-runner ourselves. This way we have full control over what is in the image.

You might build the first one by hand, but as we like to do everything in code, we want to build the execution environment from code.

Gitlab repository

To store the definition of an execution environment in code, we need a repository and a pipeline.
In this repository we store the following structure:

.
├── ee_vars.yml
├── files
│   └── dummy.file
├── host_vars
│   ├── hub_dev
│   │   └── hub_dev.yml
│   └── hub_prod
│       └── hub_prod.yml
├── inventory.yml
├── main.yml
├── README.md
└── templates
    ├── bindep.txt.j2
    ├── create_image.sh.j2
    ├── execution-environment.yml.j2
    ├── requirements.txt.j2
    └── requirements.yml.j2

These are all the files we need in this repository. You might notice the pipeline itself is missing; we store it in another location for security and to keep pipelines uniform.

We will go through the files and explain how to configure them to build an EE. The EE built here contains a number of collections needed to run a playbook.

The files

We now describe each file and its contents.

ee_vars.yml

The ee_vars.yml file holds everything that will be added on top of the base image of the execution environment. It holds the variables used to fill the templates when the playbook is run by the pipeline.

---
# Put the contents of the files in here
# requirements.txt  == ee_python
# requirements.yml  == ee_collections
# bindep.txt        == ee_system
#
use_ansible_cfg: true
ee_image_name: ee-demo-image
ee_python:
  - requests
  - python-gitlab

ee_collections:
  - community.general;==8.5.2
  - ansible.posix
  - ansible.windows

ee_system:
  - python3-systemd [platform:rpm]
  - python3-pip [platform:rpm]

basic_image: quay.io/rockylinux/rockylinux:9.5-minimal

ee_version: 1.0

Above are the contents of the ee_vars.yml file; as you can see, all the variables needed to create an execution environment are collected in one file.
There is a slight change in the specification of a collection version: note the semicolon between the collection name and the version. This is done so the templating can detect whether a version number is present.
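The version detection that the templates rely on can be sketched in plain Python (an illustration of the split rule only, not the actual Jinja2 template; the function name is ours):

```python
def parse_collection(entry):
    """Split an ee_collections entry on the semicolon that separates
    the collection name from an optional version specifier."""
    name, _, version = entry.partition(";")
    item = {"name": name}
    if version:  # only add a version key when a specifier was given
        item["version"] = version
    return item

print(parse_collection("community.general;==8.5.2"))
# {'name': 'community.general', 'version': '==8.5.2'}
print(parse_collection("ansible.posix"))
# {'name': 'ansible.posix'}
```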

files/dummy.file

The files directory needs to be present, but git does not track empty directories, so it would disappear from the repository if left empty. This causes problems during recovery, so ensure there is always a dummy file in it. During the pipeline run, the ansible.cfg created during the base_config run is copied here, to be used during builds.

host_vars/hub_&lt;env&gt;/hub_&lt;env&gt;.yml

This file holds the variables to connect to the rhaap environment:

---
ee_ah_host: <rhaap_fqdn_for_env>
ee_validate_certs: false
registry_username: <redhat_account>  # only needed when downloading from redhat.io
registry_password: <redhat_password> # only needed when downloading from redhat.io
ahub_username: ee_upload
ahub_password: <ee_upload_password>

These are used by the code to connect to the environment. The user for automation hub is created using configuration as code; it is a member of the hub_ee group and has the rights to upload new execution environments. If you are not using such a user, you will need to use the admin account.

inventory.yml

The inventory to pass to the ansible playbook.

---
dev:
  hosts:
    hub_dev:
prod:
  hosts:
    hub_prod:

It simply maps the hosts to the variables for the connection.

templates/bindep.txt.j2

Template to create the file bindep.txt from the variables in ee_vars.yml if present.

{% for package in ee_system %}
{{ package }}
{% endfor %}

templates/requirements.txt.j2

Template to create the file requirements.txt from the variables in ee_vars.yml.

{% for package in ee_python %}
{{ package }}
{% endfor %}

templates/requirements.yml.j2

Template to create the file requirements.yml from the variables in ee_vars.yml.

---
collections:
{% for collection in ee_collections %}
{% set name = collection|split(';') %}
  - name: {{ name[0] }}
{% if name[1] is defined %}
    version: "{{ name[1] }}"
{% endif %}
{% endfor %}

Here we see that the template checks whether there is a semicolon in the variable and adjusts the output accordingly.

templates/create_image.sh.j2

Template to create the script that will build the EE in the end.

ansible-builder build --tag {{ ee_image_name }}
podman tag localhost/{{ ee_image_name }} {{ ee_ah_host }}/{{ ee_image_name }}:{{ ee_version }}
podman login --tls-verify=false -u {{ ahub_username }} -p {{ ahub_password }} {{ ee_ah_host }}
podman push --tls-verify=false {{ ee_ah_host }}/{{ ee_image_name }}:{{ ee_version }}

In most cases, you will see that a virtual env is created before the build is started. We don't do this; we run the build in a Docker container that is created for this purpose. And because we use a container, everything is cleaned up when we stop the build and remove the container. The definition of this container can be found later in this document.

templates/execution-environment.yml.j2

The template that defines the execution-environment from beginning to end.

---
version: 3
build_arg_defaults:
  ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: "-c"

{% if (ee_collections|length > 0) or (ee_python|length > 0) or (ee_system|length > 0) %}
dependencies:
  ansible_core:
    package_pip: ansible-core>=2.16,<=2.17
  ansible_runner:
    package_pip: ansible-runner
  python_interpreter:
    package_system: "python311"
    python_path: "/usr/bin/python3.11"
  exclude:
    system:
      - openshift-clients
    python:
      - systemd-python
{% endif %}
{% if ee_collections|length > 0 %}
  galaxy: requirements.yml
{% endif %}
{% if ee_python|length > 0 %}
  python: requirements.txt
{% endif %}
{% if ee_system|length > 0 %}
  system: bindep.txt
{% endif %}

images:
  base_image:
    name: {{ basic_image }}

options:
  container_init:
    package_pip: dumb-init>=1.2.5
    entrypoint: '["dumb-init"]'
    cmd: '["csh"]'
  package_manager_path: /usr/bin/microdnf
  relax_passwd_permissions: false
  skip_ansible_check: true

additional_build_files:
  - src: files/ansible.cfg
    dest: configs

additional_build_steps:
  prepend_base:
    - COPY _build/configs/ansible.cfg /etc/ansible/ansible.cfg
    - ADD _build/configs/ansible.cfg /home/runner/.ansible.cfg
  append_final:
    - RUN ls -la /etc

This template will take any RHEL-like container as base image and create an execution environment when rendered together with ee_vars.yml. This is done by the playbook main.yml.
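The conditional part of the template boils down to one decision per dependency type. A minimal Python sketch of the {% if %} logic (the function is ours, written for illustration):

```python
def dependency_files(ee_collections, ee_python, ee_system):
    """Mirror the {% if %} blocks: every non-empty variable list adds
    one requirements file reference to the dependencies section."""
    deps = {}
    if ee_collections:
        deps["galaxy"] = "requirements.yml"
    if ee_python:
        deps["python"] = "requirements.txt"
    if ee_system:
        deps["system"] = "bindep.txt"
    return deps

print(dependency_files(["ansible.posix"], [], ["python3-pip [platform:rpm]"]))
# {'galaxy': 'requirements.yml', 'system': 'bindep.txt'}
```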

main.yml

The main playbook that will be run by the pipeline.

---
- name: Playbook to create custom EE
  hosts: "{{ instance | default('dummy') }}"
  connection: local
  gather_facts: false

  tasks:
    - name: Include the definition of the ee
      ansible.builtin.include_vars:
        file: ee_vars.yml

    - name: Copy ansible.cfg to home dir
      ansible.builtin.copy:
        src: ansible.cfg
        dest: ~/ansible.cfg
        mode: '0600'
      when: use_ansible_cfg

    - name: Template the execution-environment.yml
      ansible.builtin.template:
        src: execution-environment.yml.j2
        dest: execution-environment.yml
        mode: '0644'

    - name: Template the bindep.txt
      ansible.builtin.template:
        src: bindep.txt.j2
        dest: bindep.txt
        mode: '0644'
      when: (ee_system is defined) and (ee_system | length > 0)

    - name: Template the requirements.yml
      ansible.builtin.template:
        src: requirements.yml.j2
        dest: requirements.yml
        mode: '0644'
      when: (ee_collections is defined) and (ee_collections | length > 0)

    - name: Template the requirements.txt
      ansible.builtin.template:
        src: requirements.txt.j2
        dest: requirements.txt
        mode: '0644'
      when: (ee_python is defined) and (ee_python | length > 0)

    - name: Template the creation script
      ansible.builtin.template:
        src: create_image.sh.j2
        dest: create_image.sh
        mode: '0700'

    - name: Create the ee_image
      block:
        - name: Create the image
          ansible.builtin.command: ./create_image.sh
          register: _create_output
          changed_when: _create_output.rc == 0

      rescue:
        - name: Show the output if any error
          ansible.builtin.debug:
            var: _create_output.stdout_lines

      always:
        - name: Fail the play if any error
          ansible.builtin.fail:
            msg: "Build failed, read the error above to find why"
          when: _create_output.rc != 0

This playbook templates all files and runs the creation script, which builds and uploads the execution environment into your automation hub. If any error occurs, it will show you the output in the pipeline job log in gitlab.

Build image

The Docker image to run this build in is defined as follows:

Dockerfile:

FROM registry.access.redhat.com/ubi9/python-311:latest
USER root

COPY files/requirements.yml /tmp/requirements.yml
COPY files/ansible.cfg /etc/ansible/ansible.cfg
RUN pip install ansible-core ansible-lint ansible-builder pyyaml && \
    dnf -y install podman findutils fuse3-devel fuse-overlayfs && \
    dnf clean all
RUN ansible-galaxy collection install -r /tmp/requirements.yml
RUN /usr/bin/chmod 777 -R /opt/ && \
    /usr/bin/update-ca-trust

Build this image and upload it to the local image registry so it can be pulled in your pipeline.

Pipeline

For testing you can place the .gitlab-ci.yml in your repository:

# Pull the ee-builder-image from local registry
image: docker.homelab:5000/ee-builder-image:1.0

# List of pipeline stages
stages:
  - build_ee_image
  - lint_and_merge

lint_after_commit:
  tags:
    - shared
  stage: lint_and_merge
  rules:
    - if: '$CI_COMMIT_REF_NAME != "dev"
           && $CI_COMMIT_REF_NAME != "test"
           && $CI_COMMIT_REF_NAME != "accp"
           && $CI_COMMIT_REF_NAME != "prod"'
  script:
    - echo "From pipeline - Start linting on '$CI_COMMIT_REF_NAME'"
    - HOST_VAR=$(echo "AUTOM_HOST_${CI_COMMIT_BRANCH}" | tr '[:lower:]' '[:upper:]')
    - sshpass -p "${PASSWORD}" scp -o StrictHostKeyChecking=no ansible@$(printenv $HOST_VAR):/etc/ansible/ansible.cfg files/ansible.cfg
    - ansible-lint

build_ee_image:
  tags:
    - shared
  stage: build_ee_image
  rules:
    - if: '($CI_COMMIT_BRANCH == "dev"
           || $CI_COMMIT_BRANCH == "test"
           || $CI_COMMIT_BRANCH == "accp"
           || $CI_COMMIT_BRANCH == "prod")
           && $CI_PIPELINE_SOURCE == "push"
           && $CI_COMMIT_MESSAGE =~ /Merge branch/i'
  script:
    - echo "From pipeline - Start build image on '$CI_COMMIT_REF_NAME' Environment"
    - HOST_VAR=$(echo "AUTOM_HOST_${CI_COMMIT_BRANCH}" | tr '[:lower:]' '[:upper:]')
    - sshpass -p "${PASSWORD}" scp -o StrictHostKeyChecking=no ansible@$(printenv $HOST_VAR):/etc/ansible/ansible.cfg files/ansible.cfg
    - ansible-playbook main.yml
      -i inventory.yml
      -e instance=hub_$CI_COMMIT_REF_NAME

NOTE: For this pipeline to work correctly, you should add some CI/CD variables to the repositories that use this pipeline:
- PASSWORD # the password for the account that copies the ansible.cfg
- AUTOM_HOST_DEV # the fqdn for the development automation platform
- AUTOM_HOST_PROD # the fqdn for the production automation platform

If you have more environments, you'll need more variables.

The 'sshpass' line ensures you have a current ansible.cfg during the build of your EE. The other option is to add an ansible.cfg to the repository, but then it has to be updated every time you run the base_config of the configuration as code.
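The lookup of the right host variable from the branch name can be illustrated in Python (a sketch of the `tr` line in the pipeline, not code that exists in the repository):

```python
def host_var_name(branch):
    """Python equivalent of the pipeline line:
    HOST_VAR=$(echo "AUTOM_HOST_${CI_COMMIT_BRANCH}" | tr [:lower:] [:upper:])"""
    return f"AUTOM_HOST_{branch}".upper()

print(host_var_name("dev"))   # AUTOM_HOST_DEV
print(host_var_name("prod"))  # AUTOM_HOST_PROD
```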

This is all you need to create EEs from an automated pipeline.
Later, we will show the code to generate this repository from rhaap, using a job template.

SSL issues running your EE

When using private CA certificates/authorities, you may run into SSL certificate verification issues. This is not caused by ansible-builder and can be solved quite easily:

Ensure the CA certificate is registered and trusted on all execution nodes of the ansible platform.
As an execution environment is a container, it depends on the host to provide the trusted certificates.

Adding custom collections to automation hub

In this chapter, we're going to talk about automatically uploading and publishing your own ansible collections within your organization. Publishing these collections outside the organization is beyond the scope of this chapter. What we will do, however, is automate the building and publication of the collection according to the GitOps methodology. We apply the solution described below for each collection, so each collection gets its own git repository and its own pipeline (centralized git location, the code is the same).

Index
- Conditions
- Namespace
- The git repository
- The Pipeline

Conditions

In order to automatically upload a collection to the automation hub part of rhaap 2.5, there are a number of conditions that must be met:
- A namespace must be available
- Your collection must have a name in the namespace
- Your collection is stored in a git repository
- The repository has a pipeline
- The pipeline uses its "own" user for the hub

Namespace

Create a namespace to store your collections, and do this via configuration as code (see base_configuration_gateway_and_hub); you should already know how to do that. Think carefully about this name: it must fit the naming conventions within your organization.

The git repository

As with all GitOps repositories, it will be stored in git and has a branch for each environment. The repository will have the following directory structure:

├── CHANGELOG.md
├── docs
├── galaxy.yml
├── group_vars
│   └── all
├── host_vars
│   ├── hub_dev
│   └── hub_prod
├── inventory.yaml
├── meta
├── plugins
├── README.md
├── roles
│   ├── role_1
│   │   ├── defaults
│   │   ├── files
│   │   ├── handlers
│   │   ├── meta
│   │   ├── README.md
│   │   ├── tasks
│   │   └── templates
│   ├── role_2
│   │   ├── defaults
│   │   ├── files
│   │   ├── meta
│   │   ├── README.md
│   │   ├── tasks
│   │   └── templates
└── upload_collection.yml

Most of the structure is imposed by the galaxy structure for collections. But a few additions are needed to take care of the automation for GitOps (you probably already recognize them if you've read the previous chapter carefully):
- group_vars
- host_vars
- inventory.yaml
- upload_collection.yml
- A .gitlab-ci.yml will be added for GitLab

galaxy.yml

### REQUIRED
# The namespace of the collection. This can be a company/brand/organization or product namespace under which all
# content lives. May only contain alphanumeric lowercase characters and underscores. Namespaces cannot start with
# underscores or numbers and cannot contain consecutive underscores
namespace: linux

# The name of the collection. Has the same character restrictions as 'namespace'
name: web

# The version of the collection. Must be compatible with semantic versioning
version: 1.0.5
# The path to the Markdown (.md) readme file. This path is relative to the root of the collection
readme: README.md

# A list of the collection's content authors. Can be just the name or in the format 'Full Name <email> (url)
# @nicks:irc/im.site#channel'
authors:
  - Your Name <your_email>


### OPTIONAL but strongly recommended
# A short summary description of the collection
description: Collection to deploy apache and ipvs loadbalancers

# Either a single license or a list of licenses for content inside of a collection. Ansible Galaxy currently only
# accepts L(SPDX,https://spdx.org/licenses/) licenses. This key is mutually exclusive with 'license_file'
license:
  - GPL-2.0-or-later

# The path to the license file for the collection. This path is relative to the root of the collection. This key is
# mutually exclusive with 'license'
license_file: ''

# A list of tags you want to associate with the collection for indexing/searching. A tag name has the same character
# requirements as 'namespace' and 'name'
tags:
  - linux
  - infrastructure

# Collections that this collection requires to be installed for it to be usable. The key of the dict is the
# collection label 'namespace.name'. The value is a version range
# L(specifiers,https://python-semanticversion.readthedocs.io/en/latest/#requirement-specification). Multiple version
# range specifiers can be set and are separated by ','
dependencies:
  'community.general': '>=6.5.0'
  'ansible.posix': '>=1.5.2'

# The URL of the originating SCM repository
repository: git@gitlab.homelab/collections/linux.web.git

# The URL to any online docs
documentation: https://gitlab.homelab/collections/linux.web/README.md

# The URL to the homepage of the collection/project
homepage: https://gitlab.homelab/collections/linux.web

# The URL to the collection issue tracker
issues: http://example.com/issue/tracker

# A list of file glob-like patterns used to filter any files or directories that should not be included in the build
# artifact. A pattern is matched from the relative path of the file or directory of the collection directory. This
# uses 'fnmatch' to match the files or directories. Some directories and files like 'galaxy.yml', '*.pyc', '*.retry',
# and '.git' are always filtered
build_ignore:
  - .gitlab-ci.yml
  - host_vars
  - inventory.yaml
  - upload_collection.yml
  - group_vars

inventory.yaml

The inventory tells the code where to find the automation hub to upload the collection into.

---
dev:
  hosts:
    hub_dev:
test:
  hosts:
    hub_test:
accp:
  hosts:
    hub_accp:
prod:
  hosts:
    hub_prod:

group_vars/all/ah_collections.yml

In this group_vars file, the variables needed for the playbook are generated at the start of the play.

---
ah_configuration_async_retries: 10
ah_configuration_async_delay: 2
ah_collections:
  - name: "{{ galaxy_vars.name }}"
    namespace: "{{ galaxy_vars.namespace }}"
    version: "{{ galaxy_vars.version }}"
    path: "{{ coll_file }}"
    wait: false
    overwrite_existing: false
    state: present
...

As you can see, there is nothing to configure in this file; the content is determined by variables. The origin of these variables is described below. The host_vars folder contains the files with the login details for the automation hub. These can be copied directly from the automation hub configuration as code repository, with one modification: the user is changed to the collection upload user.
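How the entry resolves at run time can be sketched like this (plain Python with hypothetical example values for galaxy_vars and coll_file):

```python
# Hypothetical values: galaxy_vars as loaded from galaxy.yml,
# coll_file as found by the playbook's find task.
galaxy_vars = {"namespace": "linux", "name": "web", "version": "1.0.5"}
coll_file = "./linux-web-1.0.5.tar.gz"

# The ah_collections entry after variable substitution:
ah_collection = {
    "name": galaxy_vars["name"],
    "namespace": galaxy_vars["namespace"],
    "version": galaxy_vars["version"],
    "path": coll_file,
    "state": "present",
}
print(ah_collection["namespace"], ah_collection["version"])
# linux 1.0.5
```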

host_vars/hub_dev/hub_auth.yml

In this file the credentials for logging in to the automation hub are set. We use a separate account to automate building and uploading the custom collections, so this will not disrupt things by invalidating tokens. This account is created using the base_config from the configuration as code. The team this account is a member of has the rights to upload collections into the hub. If you are not working with such an account, you will need to use the admin account.

---
ah_host: 'https://rhaap26.homelab'
ah_validate_certs: false
ah_username: <coll_upload_user>     # vaulted value
ah_password: <coll_upload_passwd>   # vaulted value

host_vars/hub_dev/hub_dev.yml

Additional vars for handling the collection upload through the collection.

---
hosts: localhost
ah_configuration_async_dir: /opt/app-root/src/.ansible_async/

This was needed to let the collection find the response file and report the correct exit code.

The Pipeline

How it works:
- With each new commit in the repository, the pipeline is triggered.
- The file .gitlab-ci.yml is read by the pipeline.
- The actions in this file are performed in order.
- Old files are deleted.
- A new version of the collection is built.
- The "upload_collection.yml" playbook starts.
- The playbook searches for the file containing the collection.
- It reads the galaxy.yml as galaxy_vars, populating the variables in ah_collections.yml.
- It starts the upload to the private automation hub.
- It publishes the new version (if any).

.gitlab-ci.yml

# Defaults
image: docker.homelab:5000/ansible-image:latest

# List of pipeline stages
stages:
  - linting
  - Build collection

linting:
  tags:
    - shared
  stage: linting
  rules:
    - if: '$CI_COMMIT_REF_NAME != "dev" 
           && $CI_COMMIT_REF_NAME != "test" 
           && $CI_COMMIT_REF_NAME != "accp" 
           && $CI_COMMIT_REF_NAME != "prod"'
  script:
    - echo "From pipeline - Start linting on '$CI_COMMIT_REF_NAME'"
    - wget -O ~/ansible.cfg http://web.dev.lab:81/dev_ansible.cfg
      # Role satellite is excluded for persistent module error
    - ansible-lint
      --exclude .gitlab-ci.yml
      --exclude host_vars/
      --exclude roles/role_infrastructure_satellite/tasks/main.yml

configure-automation-hub:
  tags:
    - gitlab-runner
  stage: Build collection
  rules:
    - if: '($CI_COMMIT_BRANCH == "dev" 
          || $CI_COMMIT_BRANCH == "test" 
          || $CI_COMMIT_BRANCH == "accp" 
          || $CI_COMMIT_BRANCH == "prod") 
           && $CI_PIPELINE_SOURCE == "push" 
           && $CI_COMMIT_MESSAGE =~ /Merge branch/i'
  script:
    - wget -O ~/ansible.cfg http://web.dev.lab:81/${CI_COMMIT_BRANCH}_ansible.cfg
    - echo "Remove old versions of the collection"
    - find . -name "*.tar.gz" -exec rm {} \;
    - echo "Build the collection"
    - ansible-galaxy collection build
    - echo "Push the collection to automationhub"
    - ansible-playbook upload_collection.yml
      -i inventory.yaml
      -e instance=hub_$CI_COMMIT_REF_NAME
      -e branch_name=$CI_COMMIT_REF_NAME
      --vault-password-file <(echo ${VAULT_PASSWORD})

The above code is triggered with every merge to the branches mentioned under "rules" and executes the code under "script". Here you can see that an ansible playbook is run to perform the configuration.
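The rules of the build job can be expressed as a small predicate (a Python sketch of the GitLab rule, written by us for illustration):

```python
def build_job_runs(branch, pipeline_source, commit_message):
    """Mirror the rules: the build only fires on a push to an
    environment branch whose commit message mentions a merge."""
    return (branch in {"dev", "test", "accp", "prod"}
            and pipeline_source == "push"
            and "merge branch" in commit_message.lower())

print(build_job_runs("dev", "push", "Merge branch 'feature-x' into 'dev'"))  # True
print(build_job_runs("feature-x", "push", "add new role"))                   # False
```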

There are a number of variables used in the call of the playbook. These do not come out of the blue; this is where they come from:

|Variable|Description|
|---|---|
|$CI_COMMIT_REF_NAME|This is an internal variable that gitlab gives to each pipeline task; its content is the branch for which the pipeline was started. By using it, we can magically tell the playbook which environment to configure.|
|$VAULT_PASSWORD|Of course, this is not a standard gitlab variable. We define it in gitlab with the project under "Settings \ CI/CD \ Variables", making sure it has "Masked and Expanded" as settings. This is where we store the vault password with which the passwords or files in ansible are encrypted.|

upload_collection.yml

---
- hosts: "{{ instance }}"
  connection: local
  gather_facts: false

  pre_tasks:
    - name: Find collection file
      ansible.builtin.find:
        paths: "."
        patterns: '*.tar.gz'
      register: _file

    - name: Load vars from galaxy.yml
      ansible.builtin.include_vars:
        file: galaxy.yml
        name: galaxy_vars

    - name: Set the automation hub vars
      ansible.builtin.set_fact:
        coll_file: "{{ _file.files[0].path }}"

  roles:
    - { role: infra.ah_configuration.collection, ignore_errors: true }

The ignore_errors here is unfortunately necessary for the playbook to run smoothly; this is caused by a bug in the infra collection.

meta/runtime.yml

This specifies your collection's dependency on the ansible version.

---
requires_ansible: ">=2.14.0"

In the roles directory you create the roles you want to include in this collection.
In the plugins directory you add the plugins to include in the collection.

The framework is now complete.

Add organization fully automated

23-04-2026 updated
Adapted the code for new repository layout.

Documentation

It is the purpose in life of any automation engineer to be able to put our feet on the desk and say "Look ma, no hands", as everything is automated.
So if my boss comes in saying we need to create a new team repository in rhaap, we just log in, click the appropriate rocket and let the automation do the work for us. In these pages I will describe how to enable this.

As I already described, when adding a team to this rhaap configuration, we need to create a new repository with all the needed files in there to configure the organization's templates and so on.
We could clone a repository from another team (or a template repository) and start configuring the team's properties. The better way is generating this from code and never touching the repository's content. We still need to give the target team access to the newly created repository in gitlab.

When the repository is created, the team must be able to log in to rhaap, so this needs to be configured too.

We created a playbook to handle it all for the installation that is described in these pages.

This playbook is hosted in a gitlab repository that has the following content:

.
├── env_vars.yml
├── files
│   ├── main.yml.txt
│   └── README.md
├── main.yml
├── get_gitlab_api_token.yml
├── other_vars.yml
├── README.md
└── templates
    ├── aap_auth.yml.j2
    ├── aap_env.yml.j2
    ├── controller_credential_input_sources.yml.j2
    ├── controller_credentials.yml.j2
    ├── controller_hosts.yml.j2
    ├── controller_inventory_sources.yml.j2
    ├── controller_inventories.yml.j2
    ├── controller_labels.yml.j2
    ├── controller_notifications.yml.j2
    ├── controller_projects.yml.j2
    ├── controller_roles.yml.j2
    ├── controller_schedules.yml.j2
    ├── controller_templates.yml.j2
    ├── controller_workflows.yml.j2
    └── repo_inventory.yaml.j2

As you can see, it has a number of templates (these are used for the new repository), a main.yml and a few support files. The README.md should be clear: it describes the working of the code and will not be listed here.

The contents and a description of each of these files can be found on their own pages.

If all of the above is copied into a repository, you should be able to create a project in rhaap to run this playbook. First, configure all variables for your environment in env_vars.yml and other_vars.yml. With the addition of the right credentials, this play will create a new organization on each run, given just 2 survey variables:
- organization_short_name
- team_password

For testing purposes, you might want to point the pipeline variable to a pipeline script that does nothing. This enables you to check the created files for errors.

Add EDA capability to an organization (automated)

Updated: 23-04-2026
Updated to latest code

As we have automated almost everything, we don't want to start adding files by hand now. We created a playbook to add this capability to an organization without lifting a finger.

All we need to do is run the playbook and tell it which organization the files must be added to.
The playbook does the heavy lifting.

The repository

The repository for this play looks like this:

.
├── env_vars.yml
├── gitlab_get_api_token.yml
├── main.yml
├── README.md
└── templates
    ├── eda_controller_tokens.yml.j2
    ├── eda_credentials.yml.j2
    ├── eda_decision_environments.yml.j2
    ├── eda_event_streams.yml.j2
    ├── eda_projects.yml.j2
    ├── eda_rulebook_activations.yml.j2
    ├── main.yml.j2
    └── stop_running_rulebooks.yml

Steps

When the main.yml is started, it will perform a number of actions:
- Check out the existing config as code repository for the organization
- Add new files for EDA config as code
- Replace the existing playbook main.yml
- Add a support playbook
- Create a new branch and push the repository
- Create a merge request and run the pipeline into development

After running this play, the organization can start adding their event driven automations as configuration as code into the rhaap platform.

Files

To make this automation possible, we need a number of variables defined; these are gathered in the file:

env_vars.yml

This file is a subset of the variable file used in the 'Add organization automated' chapter. The same variables are needed here; in a later stage we will look into reducing this duplication of variables. For now, we need them here.

The organization_short_name value needs to be passed to the play as 'extra_vars'.

---
organization_long_name: 'org_{{ organization_short_name }}'
gitlab_protocol: 'https://'
gitlab_url: 'gitlab.homelab/'
gitlab_group: 'cac_26'
gitlab_default_branch: dev
gitlab_validate_certs: false
team_project_name: "rhaap_cac_{{ organization_long_name | lower }}"
aap_env:
  dev:
    rhaap_hostname: rhaap_dev.homelab
  prod:
    rhaap_hostname: rhaap_prod.homelab
code_environment_vars:
  all:
  dev:
  prod:
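The two derived names in env_vars.yml resolve as follows (a plain-Python sketch of the Jinja2 expressions, using a hypothetical short name):

```python
def derived_names(organization_short_name):
    """Resolve the templated vars from the single extra var."""
    # organization_long_name: 'org_{{ organization_short_name }}'
    organization_long_name = f"org_{organization_short_name}"
    # team_project_name: "rhaap_cac_{{ organization_long_name | lower }}"
    team_project_name = f"rhaap_cac_{organization_long_name.lower()}"
    return organization_long_name, team_project_name

print(derived_names("WEB"))
# ('org_WEB', 'rhaap_cac_org_web')
```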

gitlab_get_api_token.yml

This play is used in several plays and obtains a session token from gitlab; this token is then used for checking the pipeline status.

- name: GitLab Post | Obtain Access Token
  ansible.builtin.uri:
    url: "{{ gitlab_protocol }}{{ gitlab_url }}oauth/token"
    method: POST
    validate_certs: false
    body_format: json
    headers:
      Content-Type: application/json
    body: >
      {
        "grant_type": "password",
        "username": "{{ gitlab_user_username }}",
        "password": "{{ gitlab_user_password }}"
      }
  register: gitlab_access_token
  no_log: true

- name: Store the token in var
  ansible.builtin.set_fact:
    token: "{{ gitlab_access_token.json.access_token }}"
  no_log: true
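The token request can also be sketched outside ansible. A minimal Python illustration of the JSON body that the uri task posts (the credentials are hypothetical):

```python
import json

def oauth_token_body(username, password):
    """Build the JSON body posted to <gitlab>/oauth/token by the
    uri task above (resource-owner password grant)."""
    return json.dumps({
        "grant_type": "password",
        "username": username,
        "password": password,
    })

body = oauth_token_body("ci_user", "secret")  # hypothetical credentials
print(json.loads(body)["grant_type"])  # password
```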

The main playbook that will add the files:
main.yml

The templates used to create the files: Will be added soon

Deleting organizations automated

Delete an existing organization from the automation platform.

Almost Automated Recovery

Keep Private Hub in sync with Galaxy

Here you will find a playbook you can schedule to sync your private hub to galaxy on a regular basis.