"Custom" execution environments

In this chapter, we'll show you how to create and upload your own custom execution environments to the private automation hub. The entire process runs automatically whenever the repository changes.

Index
- Conditions
- The GitLab runner
- The git repository
- The pipeline
- The playbook

Conditions

When you are going to create and upload an execution environment, a number of things must be in place for the process to be fully automated:

  • An automation hub must be available for the environment.
  • The pipeline must be able to build images (see the pipeline section).
  • The execution environment is defined in a git repository.
  • The repository has a pipeline.
  • The pipeline uses a separate user to upload the EE.

The GitLab runner

If you run the code included in this repository on a standard GitLab runner, it will likely fail. Most GitLab runners nowadays run on a container platform such as Docker or OpenShift (Kubernetes), and on these platforms things will go wrong if you don't take action. This is because most platforms do not, by default, allow containers to be built inside containers via podman; that requires special security settings.

But that is exactly what the code in this repository does, so how do we solve this?

  • We create a special group in GitLab.
  • All execution environment definition projects are placed in this group.
  • We create a dedicated GitLab runner for this group.
  • We change that runner's security settings so that building containers is possible (see below for Docker).

For a runner in a Docker environment, we change the config.toml in the runner container so that the build container runs in privileged mode:

  [runners.docker]
    tls_verify = false
    image = "gitlab-runner-image:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false

The privileged = true setting is the key: if it is missing from the runner configuration, the container build will fail with error messages. Because privileged mode is a potential security issue, make sure this runner is only available to this group and that not everyone in the group can create repositories.

The git repository

As with all GitOps repositories, this repository has a branch for each environment. It has the following directory structure:

├── files
│   └── ansible.cfg
├── host_vars
│   ├── hub_dev
│   └── hub_test
├── templates
│   ├── bindep.txt.j2
│   ├── create_image.sh.j2
│   ├── requirements.txt.j2
│   ├── requirements.yml.j2
│   └── execution-environment.yml.j2
├── ee_vars.yml
├── inventory.yaml
├── README.md
└── main.yml

Most of the structure shown is dictated by the environments; only a small number of files are actually needed to build an execution environment, namely:

  • main.yml: the playbook that does the work.
  • files/ansible.cfg: only needed if you have to pull custom collections from your hub.
  • The templates: if you read the README.md, you know what they are needed for.
  • For GitLab, a .gitlab-ci.yml for the pipeline.
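
The templates are explained in the README.md, but to give an idea of what the playbook renders: an execution-environment.yml.j2 for ansible-builder (version 1 format) might look roughly like the sketch below, wiring in only the dependency files that are actually generated. This is a possible template, not necessarily the one in this repository.

```yaml
---
version: 1
dependencies:
{% if ee_collections | length > 0 %}
  # rendered only when ee_vars.yml lists collections
  galaxy: requirements.yml
{% endif %}
{% if ee_python | length > 0 %}
  # rendered only when ee_vars.yml lists python packages
  python: requirements.txt
{% endif %}
{% if ee_system | length > 0 %}
  # rendered only when ee_vars.yml lists system packages
  system: bindep.txt
{% endif %}
```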

Clone this repository for every execution environment you want to build. The only configuration for your execution environment is done in the files listed below:

host_vars/hub_<environment>/hub_<environment>.yml

This is where the variables for the pipeline are placed. Keep them secure and encrypted with vault, and make sure they are never pushed to a public repository.

---
ee_ah_host: <hostname_and_port_for_automation_hub>
ee_validate_certs: <true_in_enterprise_may_be_false_in_a_lab>
registry_username: <redhat account name>
registry_password: <redhat account password, may be vaulted>
ahub_username: <admin user for EE uploads on hub>
ahub_password: <admin password for EE uploads on hub>
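
Individual values can be encrypted in place with ansible-vault encrypt_string, so the file never contains the plain-text secret. A hypothetical example (the ciphertext below is a placeholder, not real vault output):

```yaml
# created with: ansible-vault encrypt_string '<password>' --name 'ahub_password'
ahub_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  <encrypted_blob>
```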

inventory.yaml

Make sure there is a definition in inventory.yaml for each environment; see the example below.

---
dev:
  hosts: 
    hub_dev:
test:
  hosts:
    hub_test:
accp:
  hosts:
    hub_accp:
prod:
  hosts:
    hub_prod:

ee_vars.yml

This is where the execution environment is defined: the contents of the files that you would normally have to fill in by hand are grouped together here and put in the right place by the playbook.

---
use_ansible_cfg: false

ee_image_name: ee_cac_image

ee_python:
  - requests
  - python-gitlab

ee_collections:
  - community.general

ee_system: []

Description of variables:

| Variable | Description |
|---|---|
| use_ansible_cfg | If you created an ansible.cfg in the files directory that must be used to find the correct collections, set this to true. |
| ee_python | All records that would go into requirements.txt should be in this list. The code will create the file from this list. |
| ee_collections | All records that would go into requirements.yml should be in this list. The code will create the file from this list. Be sure to use the one-line syntax for collections and versions. |
| ee_system | All records that would go into bindep.txt should be in this list. The code will create the file from this list. |
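
To illustrate how these lists end up in the generated files, a requirements.yml.j2 could be as simple as the sketch below; with the ee_collections list from the example above it would render a requirements.yml containing community.general. This is a possible template, not necessarily the one in this repository.

```yaml
---
collections:
{% for collection in ee_collections %}
  # each entry uses the one-line collection syntax, e.g. community.general
  - {{ collection }}
{% endfor %}
```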

That's all it takes...

The pipeline

How it works:

  • Each new commit in the repository triggers the pipeline.
  • The file .gitlab-ci.yml is read by the pipeline.
  • The actions in this file are performed in order.
  • The "main.yml" playbook starts.
  • The playbook creates the files and builds the image.
  • The image is uploaded to the private automation hub.

The pipeline looks like this:

# in case of a container runner:
# Pull an image with ansible configured

image: <registry>/<image>:<version>

# in case of a shell runner, ensure that ansible is installed,
# otherwise the pipeline will fail


# List of pipeline stages 
stages: 
  - Build EE on merge 
  - Recover EE 

run_pipeline_after_merge: 
  tags: 
    - gitlab-ee-runner 
  stage: Build EE on merge 
  rules: 
    - if: '$CI_COMMIT_BRANCH == "dev" 
           && $CI_PIPELINE_SOURCE == "push" 
           && $CI_COMMIT_MESSAGE =~ /initial config/i' 
    - if: '$CI_COMMIT_BRANCH == "test" 
           && $CI_PIPELINE_SOURCE == "push" 
           && $CI_COMMIT_MESSAGE =~ /initial config/i' 
    - if: '$CI_COMMIT_BRANCH == "accp" 
           && $CI_PIPELINE_SOURCE == "push" 
           && $CI_COMMIT_MESSAGE =~ /initial config/i' 
    - if: '$CI_COMMIT_BRANCH == "prod" 
           && $CI_PIPELINE_SOURCE == "push" 
           && $CI_COMMIT_MESSAGE =~ /Merge branch/i' 
  script: 
    - echo "From pipeline - Start build image on '$CI_COMMIT_REF_NAME' Environment" 
    - ansible-playbook main.yml 
      -i inventory.yaml  
      -e instance=hub_$CI_COMMIT_REF_NAME  
      -e branch_name=$CI_COMMIT_REF_NAME  
      --vault-password-file <(echo ${VAULT_KEY}) 

# This runs in case of a trigger from another project ( like recovery process ) 
recover_hub_from_trigger: 
  tags: 
    - sscc-autom-ee-runner 
  stage: Recover EE 
  rules: 
    - if: '$CI_PIPELINE_SOURCE == "pipeline"' 
  script: 
    - echo "From pipeline - Start build image on '$CI_COMMIT_REF_NAME' Environment" 
    - ansible-playbook main.yml  
      -i inventory.yaml  
      -e instance=hub_$CI_COMMIT_REF_NAME  
      -e branch_name=$CI_COMMIT_REF_NAME  
      --vault-password-file <(echo ${VAULT_KEY})

Shown above is the definition of the pipeline that starts the build and upload of the execution environment. To avoid starting the pipeline for every feature branch as well, we use the "rules" keyword.

Keywords

rules
    rules:
        - if: '$CI_COMMIT_BRANCH == "dev" 
             && $CI_PIPELINE_SOURCE == "push" 
             && $CI_COMMIT_MESSAGE =~ /initial config/i' 

This limits the pipeline so that the listed tasks are only executed when the branch being worked on matches one of the listed branches. Since the branches represent environments, the tasks are only executed when we want to apply them to an environment.

tags
   tags:
     - builder-runner-name

A tag determines whether a runner will pick up a job: if the tag specified in the pipeline matches a tag of a runner, that runner picks up the job. So, in this case, we need to make sure that the runner is given a specific tag and that the pipeline uses a matching one. As a result, we can assume that the runner performing this job also has the rights to build the image.

script
   script:
     - echo "This is step 1 of the executed commands"
     - echo "this is step 2 ..."
     - echo "etc..."

The "script" keyword defines the steps to be performed if the pipeline actually runs for a branch. As you can see in the example at the beginning of this chapter, a playbook is started that receives a number of variables from the pipeline; by choosing your variables smartly and using them to steer your playbook, the same pipeline can work for any environment.

Passing a vault password is normally an interactive action when a playbook is not launched from a controller. Alternatively, you could put the vault password in a file, but that is not really safe in a git environment. So we pass the password via a trick in the pipeline: the vault password is stored in the repository settings as a masked CI/CD variable, which is passed through the pipeline and never written to any logging.

How do we manage multiple environments from one repository without targeting all environments at the same time? Because the playbook uses the "{{ instance }}" variable, matched against an entry in the inventory, it becomes child's play to run this pipeline for one particular environment.

The playbook

Below is the playbook that builds the execution environment from the pipeline. The image is built and then uploaded directly to the automation hub of that particular environment.

---
- name: Playbook to create custom EE
  hosts: "{{ instance }}"
  connection: local
  gather_facts: false

  tasks:
    - name: Include the definition of the ee
      ansible.builtin.include_vars:
        file: ee_vars.yml

    - name: Copy ansible.cfg to home dir
      ansible.builtin.copy:
        src: ansible.cfg
        dest: ~/ansible.cfg
        mode: '0600'
      when: use_ansible_cfg

    - name: Template the execution-environment.yml
      ansible.builtin.template:
        src: execution-environment.yml.j2
        dest: execution-environment.yml

    - name: Template the requirements.yml
      ansible.builtin.template:
        src: requirements.yml.j2
        dest: requirements.yml
      when: (ee_collections is defined) and (ee_collections|length > 0)

    - name: Template the requirements.txt
      ansible.builtin.template:
        src: requirements.txt.j2
        dest: requirements.txt
      when: (ee_python is defined) and (ee_python|length > 0)

    - name: Template the creation script
      ansible.builtin.template:
        src: create_image.sh.j2
        dest: create_image.sh
        mode: '0700'

    - name: Create the ee_image
      block:
        - name: Create the image
          ansible.builtin.shell:
            cmd: ./create_image.sh
          register: _create_output

      rescue:
        - name: Show the output if any error
          ansible.builtin.debug:
            var: _create_output

      always:
        - name: Remove ansible.cfg
          ansible.builtin.file:
            path: ~/ansible.cfg
            state: absent
          when: use_ansible_cfg

        - name: Fail the play if error
          ansible.builtin.fail:
            msg: "Build failed, read the error above to find why"
          when: _create_output.rc != 0
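
The actual build and upload are done by the generated create_image.sh, which is not shown here. Conceptually it performs the equivalent of the tasks below, sketched with ansible-builder and podman; this is an illustration of the idea, not the repository's actual script.

```yaml
- name: Build the image with ansible-builder
  ansible.builtin.command:
    cmd: >
      ansible-builder build
      -t {{ ee_image_name }}:latest
      -f execution-environment.yml

- name: Log in to the registry of the private automation hub
  ansible.builtin.command:
    cmd: podman login {{ ee_ah_host }} -u {{ ahub_username }} -p {{ ahub_password }}
  no_log: true

- name: Push the image to the hub
  ansible.builtin.command:
    cmd: podman push {{ ee_image_name }}:latest {{ ee_ah_host }}/{{ ee_image_name }}:latest
```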

Have you lost one of the environments? Recreate it and regenerate its content by running the pipeline for that environment.
