Adding custom collections to automation hub

In this chapter, we're going to look at automatically building and publishing your own Ansible collections within your own organization. Publishing collections outside the organization is beyond the scope of this chapter. What we will do is automate the building and publication of a collection according to the GitOps methodology. We apply the solution described below to each collection, so each collection gets its own git repository and its own pipeline (with a centralized git location, the code is the same for each).

Index
- Conditions
- Namespace
- The git repository
- The Pipeline

Conditions

In order to automatically upload a collection into the automation hub part of RHAAP 2.5, there are a number of conditions that must be met:
- A namespace must be available
- Your collection must have a name in that namespace
- Your collection is stored in a git repository
- The repository has a pipeline
- The pipeline uses its "own" user for the hub

Namespace

Create a namespace to store your collections, and do this via configuration as code (see base_configuration_gateway_and_hub); you should already know how to do that. Think carefully about this name: it must fit the naming conventions within your organization.
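As a minimal sketch, assuming you manage the hub with the infra.ah_configuration collection as in the previous chapters, the namespace definition in your configuration-as-code repository could look like this (the company, email, and description values are illustrative):

```yaml
---
# Illustrative configuration-as-code vars for the hub namespace role
# of the infra.ah_configuration collection.
ah_namespaces:
  - name: linux                 # must match the namespace key in galaxy.yml
    company: Homelab            # illustrative
    email: your_email@homelab   # illustrative
    description: Namespace for the linux collections
```

The namespace name has the same character restrictions as in galaxy.yml: lowercase alphanumerics and underscores, not starting with an underscore or a number.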

The git repository

As with all GitOps repositories, it will be stored in git and has a branch for each environment. The repository will have the following directory structure:

├── CHANGELOG.md
├── docs
├── galaxy.yml
├── group_vars
│   └── all
├── host_vars
│   ├── hub_dev
│   └── hub_prod
├── inventory.yaml
├── meta
├── plugins
├── README.md
├── roles
│   ├── role_1
│   │   ├── defaults
│   │   ├── files
│   │   ├── handlers
│   │   ├── meta
│   │   ├── README.md
│   │   ├── tasks
│   │   └── templates
│   └── role_2
│       ├── defaults
│       ├── files
│       ├── meta
│       ├── README.md
│       ├── tasks
│       └── templates
└── upload_collection.yml

Most of the structure is imposed by the galaxy structure for collections, but a few additions are needed to take care of the automation for GitOps. These additions are as follows (and you probably already recognize them if you've read the previous chapter carefully):
- group_vars
- host_vars
- inventory.yaml
- upload_collection.yml
- A .gitlab-ci.yml, which will be added for GitLab

galaxy.yml

### REQUIRED
# The namespace of the collection. This can be a company/brand/organization or product namespace under which all
# content lives. May only contain alphanumeric lowercase characters and underscores. Namespaces cannot start with
# underscores or numbers and cannot contain consecutive underscores
namespace: linux

# The name of the collection. Has the same character restrictions as 'namespace'
name: web

# The version of the collection. Must be compatible with semantic versioning
version: 1.0.5
# The path to the Markdown (.md) readme file. This path is relative to the root of the collection
readme: README.md

# A list of the collection's content authors. Can be just the name or in the format 'Full Name <email> (url)
# @nicks:irc/im.site#channel'
authors:
  - Your Name <your_email>


### OPTIONAL but strongly recommended
# A short summary description of the collection
description: Collection to deploy apache and ipvs loadbalancers

# Either a single license or a list of licenses for content inside of a collection. Ansible Galaxy currently only
# accepts L(SPDX,https://spdx.org/licenses/) licenses. This key is mutually exclusive with 'license_file'
license:
  - GPL-2.0-or-later

# The path to the license file for the collection. This path is relative to the root of the collection. This key is
# mutually exclusive with 'license'
license_file: ''

# A list of tags you want to associate with the collection for indexing/searching. A tag name has the same character
# requirements as 'namespace' and 'name'
tags:
  - linux
  - infrastructure

# Collections that this collection requires to be installed for it to be usable. The key of the dict is the
# collection label 'namespace.name'. The value is a version range
# L(specifiers,https://python-semanticversion.readthedocs.io/en/latest/#requirement-specification). Multiple version
# range specifiers can be set and are separated by ','
dependencies:
  'community.general': '>=6.5.0'
  'ansible.posix': '>=1.5.2'

# The URL of the originating SCM repository
repository: git@gitlab.homelab:collections/linux.web.git

# The URL to any online docs
documentation: https://gitlab.homelab/collections/linux.web/README.md

# The URL to the homepage of the collection/project
homepage: https://gitlab.homelab/collections/linux.web

# The URL to the collection issue tracker
issues: http://example.com/issue/tracker

# A list of file glob-like patterns used to filter any files or directories that should not be included in the build
# artifact. A pattern is matched from the relative path of the file or directory of the collection directory. This
# uses 'fnmatch' to match the files or directories. Some directories and files like 'galaxy.yml', '*.pyc', '*.retry',
# and '.git' are always filtered
build_ignore:
  - .gitlab-ci.yml
  - host_vars
  - inventory.yaml
  - upload_collection.yml
  - group_vars

inventory.yaml

The inventory tells the code where to find the automation hub to upload the collection to.

---
dev:
  hosts:
    hub_dev:
test:
  hosts:
    hub_test:
accp:
  hosts:
    hub_accp:
prod:
  hosts:
    hub_prod:

group_vars/all/ah_collections.yml

In this group_vars file, the variables needed for the playbook are populated at the start of the play.

---
ah_configuration_async_retries: 10
ah_configuration_async_delay: 2
ah_collections:
  - name: "{{ galaxy_vars.name }}"
    namespace: "{{ galaxy_vars.namespace }}"
    version: "{{ galaxy_vars.version }}"
    path: "{{ coll_file }}"
    wait: false
    overwrite_existing: false
    state: present
...

As you can see, there is nothing to configure in this file; the content is determined by variables, whose origin is described below. The host_vars folder contains the files with the login details for the automation hub. These can be copied directly from the automation hub configuration-as-code repository pipeline, with one modification: the user is changed to the collection upload user.

host_vars/hub_dev/hub_auth.yml

In this file the credentials for logging into the automation hub are set. We use a separate account to automate building and uploading the custom collections, so this will not disrupt other automation by invalidating tokens.
This account is created using the base_config from the configuration as code. The team this account is a member of has the rights to upload collections into the hub. If you are not working with such an account, you will need to do this using the admin account.
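As a sketch of what such an account could look like in the configuration as code, assuming the group and user roles of the infra.ah_configuration collection (the group name, permission string, and vault variable name are illustrative; check the collection's documentation for the exact permission names, and note that in RHAAP 2.5 user management may go through the gateway instead):

```yaml
---
# Illustrative configuration-as-code vars for a dedicated upload account
ah_groups:
  - name: collection_uploaders
    perms:
      - upload_to_namespace     # assumed permission name, verify against the docs
ah_users:
  - username: coll_upload_user
    password: "{{ vault_coll_upload_passwd }}"   # vaulted elsewhere
    groups:
      - collection_uploaders
```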

---
ah_host: 'https://rhaap26.homelab'
ah_validate_certs: false
ah_username: <coll_upload_user>    # vaulted value
ah_password: <coll_upload_passwd>   # vaulted value

host_vars/hub_dev/hub_dev.yml

Additional variables for handling the collection upload through the infra.ah_configuration collection.

---
hosts: localhost
ah_configuration_async_dir: /opt/app-root/src/.ansible_async/

This was needed to let the collection find the response file and report the correct exit code.

The Pipeline

How it works:
- With each new commit in the repository, the pipeline is triggered.
- The file .gitlab-ci.yml is read by the pipeline.
- The actions in this file are performed in order:
  - Old files are deleted.
  - A new version of the collection is built.
  - The "upload_collection.yml" playbook starts.
  - The playbook searches for the file containing the collection.
  - It reads galaxy.yml as galaxy_vars, populating the variables in ah_collections.yml.
  - It starts the upload to the private automation hub.
  - It publishes the new version (if any).

.gitlab-ci.yml

# Defaults
image: docker.homelab:5000/ansible-image:latest

# List of pipeline stages
stages:
 - linting
 - Build collection

linting:
  tags:
    - shared
  stage: linting
  rules:
    - if: '$CI_COMMIT_REF_NAME != "dev" 
           && $CI_COMMIT_REF_NAME != "test" 
           && $CI_COMMIT_REF_NAME != "accp" 
           && $CI_COMMIT_REF_NAME != "prod"'
  script:
    - echo "From pipeline - Start linting on '$CI_COMMIT_REF_NAME'"
    - wget -O ~/ansible.cfg http://web.dev.lab:81/dev_ansible.cfg
      # Role satellite is excluded for persistent module error
    - ansible-lint
      --exclude .gitlab-ci.yml
      --exclude host_vars/
      --exclude roles/role_infrastructure_satellite/tasks/main.yml

configure-automation-hub:
  tags:
    - gitlab-runner
  stage: Build collection
  rules:
    - if: '($CI_COMMIT_BRANCH == "dev" 
          || $CI_COMMIT_BRANCH == "test" 
          || $CI_COMMIT_BRANCH == "accp" 
          || $CI_COMMIT_BRANCH == "prod") 
           && $CI_PIPELINE_SOURCE == "push" 
           && $CI_COMMIT_MESSAGE =~ /Merge branch/i'
  script:
    - wget -O ~/ansible.cfg http://web.dev.lab:81/${CI_COMMIT_BRANCH}_ansible.cfg
    - echo "Remove old versions of the collection"
    - find . -name "*.tar.gz" -exec rm {} \;
    - echo "Build the collection"
    - ansible-galaxy collection build
    - echo "Push the collection to automationhub"
    - ansible-playbook upload_collection.yml
      -i inventory.yaml
      -e instance=hub_$CI_COMMIT_REF_NAME
      -e branch_name=$CI_COMMIT_REF_NAME
      --vault-password-file <(echo ${VAULT_PASSWORD})

The above code is triggered with every merge to the branches mentioned under "rules" and will execute the code under "script". Here you can see that an ansible playbook is being run to perform the configuration.

There are a number of variables used in the call of the playbook. These do not come out of the blue; this is where they come from:

|Variable|Description|
|---|---|
|$CI_COMMIT_REF_NAME|A built-in variable that GitLab provides to each pipeline job; its content is the branch for which the pipeline was started. By using it, we can tell the playbook which environment to configure.|
|$VAULT_PASSWORD|Not a standard GitLab variable; we define it in the project under "Settings \ CI/CD \ Variables", making sure it has "Masked and Expanded" as settings. This is where we store the vault password with which the passwords and files in Ansible are encrypted.|
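For debugging, the pipeline's upload step can be reproduced locally by simulating the CI variables; a sketch, assuming a local vault password file at ~/.vault_pass (that path is an assumption, not part of the pipeline):

```shell
# Simulate the CI variable for the dev branch and run the upload step locally.
# ~/.vault_pass is an assumed local file holding the vault password.
export CI_COMMIT_REF_NAME=dev
ansible-playbook upload_collection.yml \
  -i inventory.yaml \
  -e instance=hub_${CI_COMMIT_REF_NAME} \
  -e branch_name=${CI_COMMIT_REF_NAME} \
  --vault-password-file ~/.vault_pass
```

With CI_COMMIT_REF_NAME set to dev, the instance variable expands to hub_dev, so the play targets the development hub from the inventory.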

upload_collection.yml

---
- hosts: "{{ instance }}"
  connection: local
  gather_facts: false

  pre_tasks:
    - name: Find collection file
      ansible.builtin.find:
        paths: "."
        patterns: '*.tar.gz'
      register: _file

    - name: Load vars from galaxy.yml
      ansible.builtin.include_vars:
        file: galaxy.yml
        name: galaxy_vars

    - name: Set the automation hub vars
      ansible.builtin.set_fact:
        coll_file: "{{ _file.files[0].path }}"

  roles:
    - { role: infra.ah_configuration.collection, ignore_errors: true }

The ignore_errors here is unfortunately necessary for the playbook to run smoothly; this is caused by a bug in the infra collection.

meta/runtime.yml

This specifies the minimum ansible-core version your collection requires.

---
requires_ansible: ">=2.14.0"

In the roles directory you create the roles you want to include in this collection.
In the plugins directory you add the plugins you want to include in the collection.
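To add another role, scaffold the directory layout used in the tree above; a minimal sketch (role_3 is an illustrative name):

```shell
# Create the standard role skeleton inside the collection (illustrative name).
mkdir -p roles/role_3/defaults roles/role_3/files roles/role_3/handlers \
         roles/role_3/meta roles/role_3/tasks roles/role_3/templates
# Seed an empty tasks file so the role is loadable.
printf -- '---\n# tasks file for role_3\n' > roles/role_3/tasks/main.yml
```

Running `ansible-galaxy role init --init-path roles role_3` achieves the same with some extra boilerplate files.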

The framework is now complete.
