In-house "custom" collections
In this chapter, we're going to talk about automatically building and publishing your own Ansible collections within your own organization. Publishing these collections outside the organization is beyond the scope of this book. What we are going to do is automate the construction and publication of the collection according to the GitOps methodology. We apply the solution described below per collection, so each collection gets its own git repository and its own pipeline.
Index
- Conditions
- Namespace
- The git repository
- The Pipeline
Conditions
In order to be able to automatically upload a collection to the private automation hub, there are a number of conditions that must be met:

- A namespace must be available
- Your collection must have a name within that namespace
- Your collection is stored in a git repository
- The repository has a pipeline
- The pipeline uses its "own" user for the hub (a sketch of such a user definition follows below)
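If you manage the hub as configuration as code, this dedicated user can be defined there as well. A minimal sketch, assuming the infra.ah_configuration collection's user role and its ah_users variable (check the collection documentation for the exact field names in your version; all values below are placeholders):

---
# Hypothetical example: a dedicated upload user for this collection,
# managed via the infra.ah_configuration collection. Field names may
# differ between collection versions.
ah_users:
  - username: collection_uploader
    password: "{{ vault_uploader_password }}"  # vault-encrypted elsewhere
    groups:
      - collection_publishers                  # group with upload rights
    state: present
...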
Namespace
Create a namespace to store your collections, and do this via configuration as code (see chapter 6); you should already know how to do that by now. Think carefully about this name: it must fit in with the naming conventions within your organization.
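A minimal sketch of such a namespace definition, assuming the infra.ah_configuration collection's namespace role and its ah_namespaces variable (all values are placeholders):

---
# Hypothetical example: the namespace that will hold the in-house
# collections, managed as configuration as code.
ah_namespaces:
  - name: my_organization            # must match the namespace in galaxy.yml
    company: My Organization
    email: automation@example.com
    description: In-house custom collections
    groups:
      - collection_publishers        # group allowed to upload to this namespace
    state: present
...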
The git repository
As with all GitOps repositories, the collection is stored in git and has a branch for each environment. The repository has the following directory structure:
├── CHANGELOG.md
├── docs
├── galaxy.yml
├── group_vars
│   └── all
├── host_vars
│   └── hub_prod
├── inventory.yaml
├── meta
├── plugins
├── README.md
├── roles
│   ├── role_1
│   │   ├── defaults
│   │   ├── files
│   │   ├── handlers
│   │   ├── meta
│   │   ├── README.md
│   │   ├── tasks
│   │   └── templates
│   └── role_2
│       ├── defaults
│       ├── files
│       ├── meta
│       ├── README.md
│       ├── tasks
│       └── templates
└── upload_collection.yml
Most of the structure is imposed by the Galaxy structure for collections, but a few additions are needed to take care of the GitOps automation. These additions are as follows (and you probably already recognize them if you've read the previous chapter carefully):

- group_vars
- host_vars
- inventory.yaml
- upload_collection.yml
- For GitLab, a .gitlab-ci.yml will be added
The above files and folders should not be included in the collection building process,
so we'll take care of that in the galaxy.yml:
build_ignore:
  - .gitlab-ci.yml
  - host_vars
  - inventory.yaml
  - upload_collection.yml
  - group_vars
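For completeness: the rest of galaxy.yml is the standard collection metadata, and the upload playbook later reads the name, namespace, and version from it. A minimal sketch with placeholder values:

---
# Minimal galaxy.yml sketch; namespace, name, and version are placeholders
# and must match the namespace created in the hub.
namespace: my_organization
name: my_collection
version: 1.0.0
readme: README.md
authors:
  - Platform team <automation@example.com>
description: In-house custom collection
build_ignore:
  - .gitlab-ci.yml
  - host_vars
  - inventory.yaml
  - upload_collection.yml
  - group_vars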
In the folder group_vars/all there is only one file: ah_collections.yml. Its contents have been kept as simple as possible:
---
ah_configuration_async_retries: 10
ah_configuration_async_delay: 2
ah_collections:
  - name: "{{ galaxy_vars.name }}"
    namespace: "{{ galaxy_vars.namespace }}"
    version: "{{ galaxy_vars.version }}"
    path: "{{ coll_file }}"
    wait: false
    overwrite_existing: false
    state: present
...
As you can see, there is nothing to configure in this file; the content is determined by variables, and the origin of these variables is described below. The host_vars folder contains the files with the login details for the automation hub. These can be copied directly from the automation hub configuration-as-code repository, with one modification: the user is changed to the dedicated collection-upload user.
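To give an idea, a host_vars file could look like the sketch below, assuming the connection variables of the infra.ah_configuration collection (ah_host, ah_username, ah_password); the real password is of course vault-encrypted:

---
# host_vars/hub_prod/hub.yml - hypothetical example; in practice this file
# is copied from the configuration-as-code repository, with the user
# changed to the dedicated collection-upload user.
ah_host: https://hub-prod.example.com      # placeholder URL
ah_username: collection_uploader           # the collection's "own" hub user
ah_password: "{{ vault_ah_password }}"     # vault-encrypted value elsewhere in host_vars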
The Pipeline
How it works:

- With each new commit in the repository, the pipeline is triggered.
- The file .gitlab-ci.yml is read by the pipeline.
- The actions in this file are performed in order:
  - Old collection files are deleted.
  - A new version of the collection is built.
  - The upload_collection.yml playbook starts.
- The playbook searches for the file containing the collection.
- It reads the galaxy.yml as galaxy_vars, populating the variables in ah_collections.yml.
- It starts the upload to the private automation hub.
- It publishes the new version (if any).
.gitlab-ci.yml
# Defaults
image: image-registry.openshift-image-registry.svc:5000/images/rh-python-image:latest

# List of pipeline stages
stages:
  - linting
  - Build collection

linting:
  tags:
    - shared
  stage: linting
  rules:
    - if: '$CI_COMMIT_REF_NAME != "dev"
           && $CI_COMMIT_REF_NAME != "test"
           && $CI_COMMIT_REF_NAME != "accp"
           && $CI_COMMIT_REF_NAME != "prod"'
  script:
    - echo "From pipeline - Start linting on '$CI_COMMIT_REF_NAME'"
    # Role satellite is excluded for a persistent module error
    - ansible-lint
      --exclude .gitlab-ci.yml
      --exclude host_vars/
      --exclude roles/role_infrastructure_satellite/tasks/main.yml

configure-automation-hub:
  tags:
    - gitlab-runner
  stage: Build collection
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"
           && $CI_PIPELINE_SOURCE == "push"
           && $CI_COMMIT_MESSAGE =~ /Merge branch/i'
    - if: '$CI_COMMIT_BRANCH == "test"
           && $CI_PIPELINE_SOURCE == "push"
           && $CI_COMMIT_MESSAGE =~ /Merge branch/i'
    - if: '$CI_COMMIT_BRANCH == "accp"
           && $CI_PIPELINE_SOURCE == "push"
           && $CI_COMMIT_MESSAGE =~ /Merge branch/i'
    - if: '$CI_COMMIT_BRANCH == "prod"
           && $CI_PIPELINE_SOURCE == "push"
           && $CI_COMMIT_MESSAGE =~ /Merge branch/i'
  script:
    - echo "Remove old versions of the collection"
    - find . -name "*.tar.gz" -exec rm {} \;
    - echo "Build the collection"
    - ansible-galaxy collection build
    - echo "Push the collection to automationhub"
    - ansible-playbook upload_collection.yml
      -i inventory.yaml
      -e instance=hub_$CI_COMMIT_REF_NAME
      -e branch_name=$CI_COMMIT_REF_NAME
      --vault-password-file <(echo ${VAULT_PASSWORD})
The above code is triggered with every merge to the branches mentioned under "rules" and executes the commands under "script". Here you can see that an Ansible playbook is run to perform the actual upload.
There are a number of variables used in the call of the playbook. These do not come out of the blue; this is where they come from:

|Variable|Description|
|---|---|
|$CI_COMMIT_REF_NAME|An internal variable that GitLab passes to each pipeline job; its content is the branch for which the pipeline was started. By using it, we can tell the playbook which environment to configure.|
|$VAULT_PASSWORD|Of course, this is not a standard GitLab variable; we define it per project under "Settings \ CI/CD \ Variables", making sure it has "Masked and Expanded" as settings. This is where we store the vault password with which the passwords and files in Ansible are encrypted.|
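As an aside, the vault-encrypted values referred to above can be created with ansible-vault, for example (ah_password is the connection variable assumed in the host_vars sketch earlier):

# Generate a vault-encrypted value for the hub password; the plaintext is
# read from stdin so it does not end up in the shell history, and you are
# prompted for the vault password (the same one stored in $VAULT_PASSWORD).
# Paste the resulting block into the host_vars file.
ansible-vault encrypt_string --stdin-name 'ah_password'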
upload_collection.yml
---
- hosts: "{{ instance }}"
  connection: local
  gather_facts: false
  pre_tasks:
    - name: Find collection file
      ansible.builtin.find:
        paths: "."
        patterns: '*.tar.gz'
      register: _file

    - name: Load vars from galaxy.yml
      ansible.builtin.include_vars:
        file: galaxy.yml
        name: galaxy_vars

    - name: Set the automation hub vars
      ansible.builtin.set_fact:
        coll_file: "{{ _file.files[0].path }}"
  roles:
    - { role: infra.ah_configuration.collection, ignore_errors: true }
The ignore_errors here is unfortunately necessary for the playbook to run smoothly; it works around a bug in the infra collection.
inventory.yaml
---
dev:
  hosts:
    hub_dev:
test:
  hosts:
    hub_test:
accp:
  hosts:
    hub_accp:
prod:
  hosts:
    hub_prod:
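This is where everything comes together: the pipeline passes -e instance=hub_$CI_COMMIT_REF_NAME, so a merge to, say, the prod branch targets the host hub_prod, and Ansible automatically picks up the matching login details from host_vars/hub_prod.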