Ansible Automation Platform - Configuration As Code
If you're using Ansible, the term "infrastructure as code" is probably familiar: you
define a server in code and perform the installation and configuration from that
code, so that the result is exactly the same every time.
Configuration as code applies the same principle, and achieves the same result, for
Ansible itself, but does not install the server. Instead, the content of the automation
controller is loaded from code into the controller, so that the controller always
contains a correct (saved) configuration. All credentials, projects and job templates
can be recreated from that code as often as desired.
What is the advantage of CaC?
If you have fully implemented configuration as code, the benefit becomes clear when
a major incident hits the organization and people depend on the automation. If the
automation platform itself is also affected by the incident, you have a challenge. That
challenge becomes much smaller when you know that everything is captured in code
and that, within a predictable amount of time, you can perform a restore that brings
back the automation, including all playbooks. After that, the organization can restore
the rest of the affected systems.
Guess who will be the hero of the day...
Later in this book, we'll explain exactly what you need to do to achieve this.
Implement configuration as code
To implement configuration as code for the automation controller, follow the
description below and enjoy the benefits. The configuration for the controller is split
into two parts:
- A basic configuration for the controller itself and the environment in which it runs.
- An organization part that is maintained per team that uses AAP.
Why did we split it this way?
Mainly to let organizations (teams) take responsibility for their own configuration
and data; they also manage their own RBAC model within their own team. This
prevents many of the errors that can occur while loading the configuration when
everything sits in one big update. Recovery is also easier. But more on that later.
We're going to configure the controller first, then we can move on to the
organizations within AAP.
Prerequisites
This implementation of CaC relies on the following Ansible collections:
- ansible.controller (or awx.awx)
- infra.controller_configuration
All examples use a GitLab pipeline, because GitLab is a widely used Git
implementation within organizations. Of course, there are other Git implementations
with their own pipelines; feel free to modify the scripts listed here to suit the pipeline
of the Git implementation used in your own organization.
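The repository structure shown later in this chapter contains a collections/requirements.yml. As a minimal sketch (not a prescribed version set), it could list the collections this chapter depends on:

```yaml
---
# collections/requirements.yml - minimal sketch
collections:
  - name: ansible.controller
  - name: infra.controller_configuration
  - name: community.general   # provides the lists_mergeby filter used by the playbook
```

The community.general entry is our addition: the merge playbook shown later uses its lists_mergeby filter.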
The idea is that all configuration is stored in an inventory and that this inventory is
used to provide the system with all the necessary data. What we are about to build
looks like an inventory, but it is not quite one. We'll come back to this (if you've read
the chapter on the automation hub, you'll know what's coming).
The CaC pipeline
To set up a pipeline, we first need to create a Git repository, in this case a repository
in which the basic configuration of the automation controller will be stored. The name
doesn't matter, but the advice is to make it reasonably descriptive. Earlier in this book
we talked about the groups in which the repositories should be created in GitLab
(not every implementation supports groups; sometimes they are separate
organizations), and that should be the CaC group.
In the Git repository, a branch must be created for each environment. A standard
enterprise setup often contains the following environments:
- development
- test
- acceptance
- production
For each environment, we create a branch of the same name (abbreviated in the
examples below to dev, test, accp and prod); the first one created becomes the
default branch, and any other branches should be removed. For these branches, we
write a pipeline that handles the required actions whenever a merge is performed
into one of these environments.
When we change the configuration in the repository, we first create the change on a
new (feature) branch; this branch is then promoted through the environments in
order via merge requests, starting with development. How do we arrange this? We
create a piece of pipeline code that handles it completely for us, so that no manual
work is needed anymore.
# Pull the ansible config-as-code image
image: localhost:5000/ansible-image:1.0

# List of pipeline stages
stages:
  - Configure controller

Configure controller:
  tags:
    - shared
  stage: Configure controller
  rules:
    - if: '($CI_COMMIT_BRANCH == "dev" ||
           $CI_COMMIT_BRANCH == "test" ||
           $CI_COMMIT_BRANCH == "accp" ||
           $CI_COMMIT_BRANCH == "prod") &&
           $CI_PIPELINE_SOURCE == "push" &&
           $CI_COMMIT_MESSAGE =~ /Merge branch/i'
  script:
    - echo "Perform merge to '$CI_COMMIT_BRANCH' Environment"
    - TOKEN_VAR=$(echo "AUTOMATION_HUB_TOKEN_${CI_COMMIT_BRANCH}" | tr '[:lower:]' '[:upper:]')
    - >
      ansible-playbook main.yml
      -i inventory.yaml
      -e instance=controller_$CI_COMMIT_BRANCH
      -e ahub_token=$(printenv $TOKEN_VAR)
      -e branch_name=$CI_COMMIT_BRANCH
      --vault-password-file <(echo ${VAULT_PASSWORD})
The above pipeline starts running after a merge has been performed into one of the
mentioned branches. To ensure that faulty code cannot be merged into the next
environment/branch, the merge request settings on the repository must enable the
condition that the pipeline must succeed.
As you can see above, not many pipeline steps are needed to start the controller
configuration. The steps that actually configure the controller are performed by the
community collection. The only thing we are concerned with is filling the data files
with the configuration items for the controller. The trick we apply is that we keep a
separate set of files for each environment, plus a set that is valid for all
environments. Especially that shared set of files saves a lot of data duplication, as
we will see later.
Because we create branches with the names of the environments and use them to
trigger the pipeline, we can easily distinguish between those environments.
In case you are wondering what the function of TOKEN_VAR is: it passes the API
token of the automation hub for each environment. Should the token become invalid,
a new one can be applied to the controller without any modification to the files.
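To make the TOKEN_VAR construction concrete, here is a hedged shell sketch of what that pipeline step does; the branch name and token value below are invented for the illustration (in the real pipeline, GitLab supplies them):

```shell
# Invented example values; in the real pipeline GitLab provides
# CI_COMMIT_BRANCH and the per-environment token variables.
export CI_COMMIT_BRANCH=dev
export AUTOMATION_HUB_TOKEN_DEV=secret-dev-token

# Build the per-environment variable name and uppercase it,
# exactly as the pipeline script does.
TOKEN_VAR=$(echo "AUTOMATION_HUB_TOKEN_${CI_COMMIT_BRANCH}" | tr '[:lower:]' '[:upper:]')

# Resolve the indirect variable to the actual token value.
printenv "$TOKEN_VAR"   # prints: secret-dev-token
```

Because the lookup is indirect, adding a new environment only requires defining one more CI/CD variable; the script itself never changes.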
The repository
The repository contains options that may affect the operation of the
infra.controller_configuration collection; some are not necessary initially, others
prevent errors. Test for yourself which of them you need.
The group_vars folder is built like an Ansible inventory, but it is not a real inventory
for the code. This gives us the opportunity to unleash another piece of magic in our
code.
Below is an example of the folder structure of the group_vars:
.
├── collections
│ └── requirements.yml
├── group_vars
│ ├── accp
│ │ ├── credentials.yaml
│ │ ├── credential_types.yaml.example
│ │ ├── execution_environments.yaml
│ │ ├── instance_groups.yaml
│ │ ├── inventory_sources.yaml
│ │ ├── inventory.yaml
│ │ ├── notification_templates.yaml
│ │ ├── organization.yaml
│ │ ├── projects.yaml
│ │ ├── schedules.yaml
│ │ ├── settings.yaml
│ │ ├── team_roles.yaml
│ │ ├── teams.yaml
│ │ ├── user_roles.yaml
│ │ └── users.yaml
│ ├── prod
│ │ ├── credentials.yaml
│ │ ├── credential_types.yaml.example
│ │ ├── execution_environments.yaml
│ │ ├── instance_groups.yaml
│ │ ├── inventory_sources.yaml
│ │ ├── inventory.yaml
│ │ ├── notification_templates.yaml
│ │ ├── organization.yaml
│ │ ├── projects.yaml
│ │ ├── schedules.yaml
│ │ ├── settings.yaml
│ │ ├── team_roles.yaml
│ │ ├── teams.yaml
│ │ ├── user_roles.yaml
│ │ └── users.yaml
│ ├── test
│ │ ├── credentials.yaml
│ │ ├── credential_types.yaml.example
│ │ ├── execution_environments.yaml
│ │ ├── instance_groups.yaml
│ │ ├── inventory_sources.yaml
│ │ ├── inventory.yaml
│ │ ├── notification_templates.yaml
│ │ ├── organization.yaml
│ │ ├── projects.yaml
│ │ ├── schedules.yaml
│ │ ├── settings.yaml
│ │ ├── team_roles.yaml
│ │ ├── teams.yaml
│ │ ├── user_roles.yaml
│ │ └── users.yaml
│ ├── all
│ │ ├── credentials.yaml
│ │ ├── credential_types.yaml.example
│ │ ├── execution_environments.yaml
│ │ ├── instance_groups.yaml
│ │ ├── inventory_sources.yaml
│ │ ├── inventory.yaml
│ │ ├── notification_templates.yaml
│ │ ├── organization.yaml
│ │ ├── projects.yaml
│ │ ├── schedules.yaml
│ │ ├── settings.yaml
│ │ ├── team_roles.yaml
│ │ ├── teams.yaml
│ │ ├── user_roles.yaml
│ │ └── users.yaml
│ └── dev
│   ├── credentials.yaml
│   ├── credential_types.yaml.example
│   ├── execution_environments.yaml
│   ├── instance_groups.yaml
│   ├── inventory_sources.yaml
│   ├── inventory.yaml
│   ├── notification_templates.yaml
│   ├── organization.yaml
│   ├── projects.yaml
│   ├── schedules.yaml
│   ├── settings.yaml
│   ├── team_roles.yaml
│   ├── teams.yaml
│   ├── user_roles.yaml
│   └── users.yaml
├── host_vars
│ ├── controller_dev
│ │ └── controller_auth.yaml
│ └── controller_accp
│ └── controller_auth.yaml
├── inventory.yaml
├── main.yml
└── README.md
Ideally, the differences between the environments are minimal. Most of the
differences will be in the following areas:
- credentials
- inventories (especially the content)
- job template surveys
In the basic configuration, there won't be many differences. If there are major
differences, it might be advisable to take a critical look at the design of the
environments.
If you merge the configuration through the various branches, all controllers involved
are configured automatically and almost identically; any deviations between the
environments have been entered into the files themselves. Now, suppose someone
were to throw away one of the controllers... Imagine that after a standard installation,
you let the pipeline do its job and restore everything. Life can be so simple.
The content of the inventory used here is extremely simple:
---
dev:
  hosts:
    controller_dev:
test:
  hosts:
    controller_test:
accp:
  hosts:
    controller_accp:
prod:
  hosts:
    controller_prod:
The host_vars folder contains a folder for each environment's controller. Each of
these contains two files. For example, the folder host_vars/controller_dev/ contains:
controller_auth.yaml
controller_dev.yaml
The content of controller_auth.yaml:
---
controller_hostname: {aap_controller_url}
controller_validate_certs: false
controller_username: {aapcontroller_admin_username}
controller_password: {aapcontroller_admin_password}
Make sure this file is always encrypted with Ansible Vault when you push it to Git...
The content of controller_dev.yaml:
---
hostname: localhost
controller_configuration_async_dir: /opt/app-root/src/.ansible_async/
controller_configuration_async_retries: 50
controller_configuration_async_delay: 5
controller_request_timeout: 60
The playbook
The playbook for the configuration as code is almost too simple; there are only two
real actions:
- First, we merge the collections of variables for the right environment.
- Second, we start the run that uses those variables to perform the configuration.
It looks more difficult than it actually is. First, in the pre_tasks, the variables from the
various files are merged into the variables that the collection expects; a merge is
included for each file. All other variables are joined in the same way, with one
difference: an extra trick we played with the credentials means that the default merge
does not work for them, so they are simply concatenated per environment instead.
---
- hosts: "{{ instance }}"
  connection: local

  pre_tasks:
    - name: Set credentials_var
      ansible.builtin.set_fact:
        controller_credentials: "{{ controller_credentials_all + controller_credentials_dev }}"
      when: branch_name == 'dev'

    - name: Set credentials_var
      ansible.builtin.set_fact:
        controller_credentials: "{{ controller_credentials_all + controller_credentials_test }}"
      when: branch_name == 'test'

    - name: Set credentials_var
      ansible.builtin.set_fact:
        controller_credentials: "{{ controller_credentials_all + controller_credentials_accp }}"
      when: branch_name == 'accp'

    - name: Set credentials_var
      ansible.builtin.set_fact:
        controller_credentials: "{{ controller_credentials_all + controller_credentials_prod }}"
      when: branch_name == 'prod'

    - name: Set the application vars
      ansible.builtin.set_fact:
        controller_applications: >
          {{ controller_applications_all |
             community.general.lists_mergeby(vars['controller_applications_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the credential_sources vars
      ansible.builtin.set_fact:
        controller_credential_input_sources: >
          {{ controller_credential_input_sources_all |
             community.general.lists_mergeby(vars['controller_credential_input_sources_' + branch_name],
             'source_credential', recursive=true, list_merge='append') }}

    - name: Set the credential_types vars
      ansible.builtin.set_fact:
        controller_credential_types: >
          {{ controller_credential_types_all |
             community.general.lists_mergeby(vars['controller_credential_types_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the execution_environments vars
      ansible.builtin.set_fact:
        controller_execution_environments: >
          {{ controller_execution_environments_all |
             community.general.lists_mergeby(vars['controller_execution_environments_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the groups vars
      ansible.builtin.set_fact:
        controller_groups: >
          {{ controller_groups_all |
             community.general.lists_mergeby(vars['controller_groups_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the hosts vars
      ansible.builtin.set_fact:
        controller_hosts: >
          {{ controller_hosts_all |
             community.general.lists_mergeby(vars['controller_hosts_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the instance_groups vars
      ansible.builtin.set_fact:
        controller_instance_groups: >
          {{ controller_instance_groups_all |
             community.general.lists_mergeby(vars['controller_instance_groups_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the inventories vars
      ansible.builtin.set_fact:
        controller_inventories: >
          {{ controller_inventories_all |
             community.general.lists_mergeby(vars['controller_inventories_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the inventory_sources vars
      ansible.builtin.set_fact:
        controller_inventory_sources: >
          {{ controller_inventory_sources_all |
             community.general.lists_mergeby(vars['controller_inventory_sources_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the labels vars
      ansible.builtin.set_fact:
        controller_labels: >
          {{ controller_labels_all |
             community.general.lists_mergeby(vars['controller_labels_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the notification vars
      ansible.builtin.set_fact:
        controller_notifications: >
          {{ controller_notifications_all |
             community.general.lists_mergeby(vars['controller_notifications_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the organization vars
      ansible.builtin.set_fact:
        controller_organizations: >
          {{ controller_organizations_all |
             community.general.lists_mergeby(vars['controller_organizations_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the projects vars
      ansible.builtin.set_fact:
        controller_projects: >
          {{ controller_projects_all |
             community.general.lists_mergeby(vars['controller_projects_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the roles vars
      ansible.builtin.set_fact:
        controller_roles: >
          {{ controller_roles_all |
             community.general.lists_mergeby(vars['controller_roles_' + branch_name],
             'role', recursive=true, list_merge='append') }}

    - name: Set the schedules vars
      ansible.builtin.set_fact:
        controller_schedules: >
          {{ controller_schedules_all |
             community.general.lists_mergeby(vars['controller_schedules_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the settings vars
      ansible.builtin.set_fact:
        controller_settings: >
          {{ controller_settings_all |
             community.general.lists_mergeby(vars['controller_settings_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the teams vars
      ansible.builtin.set_fact:
        controller_teams: >
          {{ controller_teams_all |
             community.general.lists_mergeby(vars['controller_teams_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the templates vars
      ansible.builtin.set_fact:
        controller_templates: >
          {{ controller_templates_all |
             community.general.lists_mergeby(vars['controller_templates_' + branch_name],
             'name', recursive=true, list_merge='append') }}

    - name: Set the user vars
      ansible.builtin.set_fact:
        controller_user_accounts: >
          {{ controller_user_accounts_all |
             community.general.lists_mergeby(vars['controller_user_accounts_' + branch_name],
             'user', recursive=true, list_merge='append') }}

    - name: Set the workflows vars
      ansible.builtin.set_fact:
        controller_workflows: >
          {{ controller_workflows_all |
             community.general.lists_mergeby(vars['controller_workflows_' + branch_name],
             'name', recursive=true, list_merge='append') }}

  roles:
    - infra.controller_configuration.dispatch
This is all...
Base Configuration of the Automation Controller
As shown before, the configuration of the controller(s) is stored in an inventory structure. Below is a small basis for the structure in the repository:
.
├── collections
│ └── requirements.yml
├── group_vars
│ └── all
│   ├── credentials.yaml
│   ├── credential_types.yaml.example
│   ├── execution_environments.yaml
│   ├── instance_groups.yaml
│   ├── inventory_sources.yaml
│   ├── inventory.yaml
│   ├── notification_templates.yaml
│   ├── organization.yaml
│   ├── projects.yaml
│   ├── schedules.yaml
│   ├── settings.yaml
│   ├── team_roles.yaml
│   ├── teams.yaml
│   ├── user_roles.yaml
│   └── users.yaml
├── host_vars
│ ├── controller_dev
│ │ └── controller_auth.yaml
│ └── controller_accp
│ └── controller_auth.yaml
├── inventory.yaml
├── main.yml
└── README.md
Above you can already see that the inventory, the configuration and the playbook are
included in one repository.
In the example above, only the structure that is the same for all environments is
included in the all folder. For everything to work correctly, there must be a folder for
each environment that is being deployed (as shown at the beginning of this chapter),
and all the variable files should be present there as well. The content of the files is
slightly different for each folder and file. We will briefly show this below, and it is very
important: the playbook depends on it.
/group_vars/all/{file}.yaml
---
controller_{var_name}_all:
/group_vars/dev/{file}.yaml
---
controller_{var_name}_dev:
Above are two examples in which we only show the first two lines of those files;
that's where the difference lies. The variable name that you configure in each file
differs slightly per environment, making the name unique so that it can be merged in
the playbook. That is the trick behind this construction, and we apply it for every
environment.
To implement a basic configuration of a controller, we need to fill the files in
group_vars with the relevant information. We will go through all the files step by step;
some of them are simple, but we will cover them all, although only for the _all
variants. Environment-specific additions can then be made in the environment-specific
files.
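As a hedged illustration of this naming scheme, a pair of project files could look like this; the project name, Git host and branch values are invented for the example:

```yaml
# group_vars/all/projects.yaml - shared across every environment
---
controller_projects_all:
  - name: CaC_project
    organization: Default
    scm_type: git
    scm_url: git@gitlab.example.com:cac/controller-config.git

# group_vars/dev/projects.yaml - additions for dev only
---
controller_projects_dev:
  - name: CaC_project
    scm_branch: dev
```

Because the playbook merges these lists by name with lists_mergeby, the environment file only needs to carry the keys that deviate from the shared definition.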
execution_environments.yaml
Example:
---
controller_execution_environments_all:
  - name: Control Plane Execution Environment
    description:
    image: registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest
    pull:
    credential: Default Execution Environment Registry Credential
  - name: Default execution environment
    description:
    image: registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest
    pull:
    credential: Default Execution Environment Registry Credential
  - name: Minimal execution environment
    description:
    image: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest
    pull:
    credential: Default Execution Environment Registry Credential
  - name: Automation Hub Default execution environment
    description:
    image: privatehub.localdomain/ee-supported-rhel8:latest
    pull:
    credential: Default Execution Environment Registry Credential
  - name: Automation Hub Minimal execution environment
    description:
    image: privatehub.example.com/ee-minimal-rhel8:latest
    pull:
    credential: Default Execution Environment Registry Credential
...
Shown above is the configuration of the execution environments that are present on the controller by default after a clean install. In addition, two custom execution environments (the last two entries) have been added that are made available from the private automation hub. Any custom execution environments that should be present by default for the entire organization can be added here.
Data Structure
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| name | "" | yes | str | Name of the execution environment. |
| new_name | "" | no | str | Setting this option will change the existing name (looked up via the name field). |
| description | "" | no | str | Description to use for the execution environment. |
| image | "" | yes | str | Container image to use for the execution environment. |
| organization | "" | no | str | The organization the execution environment belongs to. |
| credential | "" | no | str | Name of the credential to use for the execution environment. |
| pull | "missing" | no | choice("always", "missing", "never") | Determine image pull behavior. |
| state | present | no | str | Desired state of the resource. |
credential_types.yaml
Example:
---
controller_credential_types_all:
  - name: example credential
    kind: cloud
    inputs:
      fields:
        - id: password
          label: Password
          help_text: A password
          type: string
          multiline: false
          secret: true
      required:
        - password
    injectors:
      env:
        EXAMPLE_CRED_PASSWD: "{{ password }}"
...
See the data structure below for more options when creating credential types. Since only system administrators can create these types, this way of using credentials is not preferred.
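To show what the injector buys you, a playbook task could read the injected environment variable like this; this task is a hypothetical sketch (and prints the secret only for illustration, which you should never do with a real secret):

```yaml
# Hypothetical task in a job that runs with the example credential attached:
# the env injector exposes the password as EXAMPLE_CRED_PASSWD.
- name: Use the password injected by the custom credential type
  ansible.builtin.debug:
    msg: "Password is {{ lookup('ansible.builtin.env', 'EXAMPLE_CRED_PASSWD') }}"
```

In real playbooks, you would pass the looked-up value to a module parameter instead of printing it.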
Data Structure
| Variable Name | Default Value | Required | Description |
|---|---|---|---|
| name | "" | yes | Name of Credential Type |
| new_name | "" | no | Setting this option will change the existing name (looked up via the name field). |
| description | False | no | The description of the credential type to give more detail about it. |
| injectors | "" | no | Enter injectors using either JSON or YAML syntax. Refer to the Ansible controller documentation for example syntax. See below on proper formatting. |
| inputs | "" | no | Enter inputs using either JSON or YAML syntax. Refer to the Ansible controller documentation for example syntax. |
| kind | "cloud" | no | The type of credential type being added. Note that only cloud and net can be used for creating credential types. |
| state | present | no | Desired state of the resource. |
instance_groups.yaml
Example:
---
controller_instance_groups_all:
  - name: controlplane
    policy_instance_minimum: 0
    policy_instance_percentage: 100
    instances:
      - controller_hostname
  - name: default
    policy_instance_minimum: 0
    policy_instance_percentage: 100
    instances:
      - controller_hostname
...
In this file we record the instance group and the resource usage of the controller and the execution nodes. In this example, only a single controller is present.
Data Structure
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| name | "" | yes | str | Name of this instance group. |
| new_name | "" | no | str | Setting this option will change the existing name (looked up via the name field). |
| credential | "" | no | str | Credential to authenticate with Kubernetes or OpenShift. Must be of type "Kubernetes/OpenShift API Bearer Token". Will make instance part of a Container Group. |
| is_container_group | False | no | bool | Signifies that this InstanceGroup should act as a ContainerGroup. If no credential is specified, the underlying Pod's ServiceAccount will be used. |
| policy_instance_percentage | "" | no | Int | Minimum percentage of all instances that will be automatically assigned to this group when new instances come online. |
| policy_instance_minimum | "" | no | Int | Static minimum number of Instances that will be automatically assigned to this group when new instances come online. |
| policy_instance_list | "" | no | list | List of exact-match Instances that will be assigned to this group. |
| max_concurrent_jobs | 0 | no | Int | Maximum number of concurrent jobs to run on this group. Zero means no limit. |
| max_forks | 0 | no | Int | Max forks to execute on this group. Zero means no limit. |
| pod_spec_override | "" | no | str | A custom Kubernetes or OpenShift Pod specification. |
| instances | "" | no | list | The instances associated with this instance_group. |
| state | present | no | str | Desired state of the resource. |
settings.yaml
Example:
---
controller_settings_all:
  - name: ACTIVITY_STREAM_ENABLED
    value: true
  - name: ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC
    value: false
  - name: AUTOMATION_ANALYTICS_GATHER_INTERVAL
    value: 14400
  - name: AUTOMATION_ANALYTICS_LAST_ENTRIES
    value: ''
  - name: DEFAULT_EXECUTION_ENVIRONMENT
    value: null
  - name: INSIGHTS_TRACKING_STATE
    value: true
  - name: INSTALL_UUID
    value: 6bab853f-19aa-481b-9c67-8ef191c665fb
  - name: GALAXY_IGNORE_CERTS
    value: true
  - name: MANAGE_ORGANIZATION_AUTH
    value: true
  - name: ORG_ADMINS_CAN_SEE_ALL_USERS
    value: true
  - name: PENDO_TRACKING_STATE
    value: detailed
  - name: PROXY_IP_ALLOWED_LIST
    value: []
  - name: REDHAT_PASSWORD
    value: ''
  - name: REDHAT_USERNAME
    value: ''
  - name: REMOTE_HOST_HEADERS
    value:
      - REMOTE_ADDR
      - REMOTE_HOST
  - name: SUBSCRIPTIONS_PASSWORD
    value: ''
  - name: SUBSCRIPTIONS_USERNAME
    value: ''
  - name: UI_NEXT
    value: true
...
This file contains the basic settings of the controller, including the license and the
user details of the licensee! So always make sure that these variables are encrypted
with a Vault password.
There are many options that can be set via settings.yaml; it would take too long to
list them all here. See the online documentation for all the options that can be set.
As you can see, the settings are name/value pairs. This enables you to differentiate the values per environment, by specifying the same variable in that environment's settings.yaml with a different value.
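For example, following the naming scheme from this chapter, a development environment could deviate from the shared list by declaring the same setting name with a different value in its own file (a hedged sketch; the chosen setting and value are just an example):

```yaml
# group_vars/dev/settings.yaml - dev-only deviation, merged by name
---
controller_settings_dev:
  - name: GALAXY_IGNORE_CERTS
    value: false
```

Because the playbook merges the _all and _dev lists on name, this single entry overrides only that one setting for dev.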
credentials.yaml
Example:
---
controller_credentials_all:
  - name: default_infra_vault
    description: vault credential for infra
    credential_type: Vault
    organization: Default
    inputs:
      vault_id: infra
      vault_password: password
  - name: automation_hub_token_published
    description:
    credential_type: Ansible Galaxy/Automation Hub API Token
    organization: Default
    inputs:
      auth_url: ''
      token: "{{ token }}"
      url: https://privatehub.example.com/api/galaxy/content/published/
    update_secrets: true
  - name: automation_hub_token_rh_certified
    description:
    credential_type: Ansible Galaxy/Automation Hub API Token
    organization: Default
    inputs:
      auth_url: ''
      token: "{{ token }}"
      url: https://privatehub.example.com/api/galaxy/content/rh_certified/
    update_secrets: true
  - name: ansible
    description:
    credential_type: Machine
    organization: Default
    inputs:
      become_method: sudo
      become_username: ''
      ssh_key_data: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        key-data
        -----END OPENSSH PRIVATE KEY-----
      username: ansible
  - name: Git
    description:
    credential_type: Source Control
    organization: Default
    inputs:
      ssh_key_data: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        key-data
        -----END OPENSSH PRIVATE KEY-----
      username: AAP_git_user
...
In this file, the secrets that apply to the "Default" organization are created. Usually
the Default organization is not used, and in that case no secrets are needed here. It
can be useful, though, to create the galaxy tokens here, for retrieving execution
environments and the like.
In the credentials.yaml above, you may notice that the update_secrets variable is
set to true for the various tokens. This is deliberate. The variable "{{ token }}" is
passed through the pipeline and entered into the configuration at execution time.
This ensures that the token can be updated easily, without having to change the
entire configuration; the token can sometimes become invalid, due to unknown
causes.
Data Structure
| Variable Name | Default Value | Required | Description |
|---|---|---|---|
| name | "" | yes | Name of Credential |
| new_name | "" | no | Setting this option will change the existing name (looked up via the name field). |
| copy_from | "" | no | Name or id to copy the credential from. This will copy an existing credential and change any parameters supplied. |
| description | False | no | Description of the credential. |
| organization | "" | no | Organization this Credential belongs to. If provided on creation, do not give either user or team. |
| credential_type | "" | no | Name of credential type. See below for list of options. More information in Ansible controller documentation. |
| inputs | "" | no | Credential inputs where the keys are var names used in templating. Refer to the Ansible controller documentation for example syntax. Individual examples can be found at /api/v2/credential_types/ on a controller. |
| user | "" | no | User that should own this credential. If provided, do not give either team or organization. |
| team | "" | no | Team that should own this credential. If provided, do not give either user or organization. |
| state | present | no | Desired state of the resource. |
| update_secrets | True | no | True will always change password if user specifies password, even if API gives encrypted for password. False will only set the password if other values change too. |
templates.yaml
Example:
---
controller_templates_all:
  - name: CaC_config_template
    description: Config As Code AAP
    organization: Default
    project: CaC_project
    inventory: CaC_inventory
    playbook: main.yml
    job_type: run
    fact_caching_enabled: false
    credentials:
      - ansible
      - infra_vault
    concurrent_jobs_enabled: false
    ask_scm_branch_on_launch: false
    ask_tags_on_launch: false
    ask_verbosity_on_launch: false
    ask_variables_on_launch: true
    extra_vars:
      instances: localhost
    execution_environment: Default execution environment
    survey_enabled: false
    survey_spec: {}
The only template currently created in the Default organization is the CaC template, so that the configuration updates can be performed from the controller itself. Of course, more templates can be added here where necessary.
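Once the template exists, a configuration run can also be triggered from code. Below is a hedged sketch using the job_launch module from the ansible.controller collection; the wait and extra_vars choices are ours, not prescribed by this chapter:

```yaml
# Hypothetical task that launches the CaC job template on the controller.
- name: Launch the configuration-as-code job template
  ansible.controller.job_launch:
    name: CaC_config_template
    extra_vars:
      instances: localhost
    wait: true
```

This assumes the usual controller connection variables (hostname, credentials) are available, for example from controller_auth.yaml.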
Data Structure
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| name | "" | yes | str | Name of the job template. |
| new_name | "" | no | str | Setting this option will change the existing name (looked up via the name field). |
| copy_from | "" | no | str | Name or id to copy the job template from. This will copy an existing job template and change any parameters supplied. |
| description | False | no | str | Description to use for the job template. |
| execution_environment | "" | no | str | Execution Environment to use for the job template. |
| job_type | run | no | str | The job type to use for the job template (run, check). |
| inventory | "" | no | str | Name of the inventory to use for the job template. |
| organization | "" | no | str | Organization the job template exists in. Used to help lookup the object, cannot be modified using this module. The Organization is inferred from the associated project |
| project | "" | no | str | Name of the project to use for the job template. |
| playbook | "" | no | str | Path to the playbook to use for the job template within the project provided. |
| credentials | "" | no | list | List of credentials to use for the job template. |
| forks | "" | no | Int | The number of parallel or simultaneous processes to use while executing the playbook. |
| limit | "" | no | str | A host pattern to further constrain the list of hosts managed or affected by the playbook |
| verbosity | "" | no | Int | Control the output level Ansible produces as the playbook runs. 0 - Normal, 1 - Verbose, 2 - More Verbose, 3 - Debug, 4 - Connection Debug. |
| extra_vars | "" | no | dict | Specify extra_vars for the template. |
| job_tags | "" | no | str | Comma separated list of the tags to use for the job template. |
| force_handlers | "" | no | bool | Enable forcing playbook handlers to run even if a task fails. |
| skip_tags | "" | no | str | Comma separated list of the tags to skip for the job template. |
| start_at_task | "" | no | str | Start the playbook at the task matching this name. |
| diff_mode | "" | no | bool | Enable diff mode for the job template |
| use_fact_cache | "" | no | bool | Enable use of fact caching for the job template. |
| host_config_key | "" | no | str | Allow provisioning callbacks using this host config key. |
| ask_scm_branch_on_launch | "" | no | bool | Prompt user for scm branch on launch. |
| ask_diff_mode_on_launch | "" | no | bool | Prompt user to enable diff mode show changes to files when supported by modules. |
| ask_variables_on_launch | "" | no | bool | Prompt user for extra_vars on launch. |
| ask_limit_on_launch | "" | no | bool | Prompt user for a limit on launch. |
| ask_tags_on_launch | "" | no | bool | Prompt user for job tags on launch. |
| ask_skip_tags_on_launch | "" | no | bool | Prompt user for job tags to skip on launch. |
| ask_job_type_on_launch | "" | no | bool | Prompt user for job type on launch. |
| ask_verbosity_on_launch | "" | no | bool | Prompt user to choose a verbosity level on launch. |
| ask_inventory_on_launch | "" | no | bool | Prompt user for inventory on launch. |
| ask_credential_on_launch | "" | no | bool | Prompt user for credential on launch. |
| ask_execution_environment_on_launch | "" | no | bool | Prompt user for execution environment on launch. |
| ask_forks_on_launch | "" | no | bool | Prompt user for forks on launch. |
| ask_instance_groups_on_launch | "" | no | bool | Prompt user for instance groups on launch. |
| ask_job_slice_count_on_launch | "" | no | bool | Prompt user for job slice count on launch. |
| ask_labels_on_launch | "" | no | bool | Prompt user for labels on launch. |
| ask_timeout_on_launch | "" | no | bool | Prompt user for timeout on launch. |
| prevent_instance_group_fallback | "" | no | bool | Prevent falling back to instance groups set on the associated inventory or organization. |
| survey_enabled | "" | no | bool | Enable a survey on the job template. |
| survey_spec | "" | no | dict | JSON/YAML dict formatted survey definition. |
| survey | "" | no | dict | JSON/YAML dict formatted survey definition. Alias of survey_spec |
| become_enabled | "" | no | bool | Activate privilege escalation. |
| allow_simultaneous | "" | no | bool | Allow simultaneous runs of the job template. |
| timeout | "" | no | Int | Maximum time in seconds to wait for a job to finish (server-side). |
| instance_groups | "" | no | list | list of Instance Groups for this Job Template to run on. |
| job_slice_count | "" | no | Int | The number of jobs to slice into at runtime. Will cause the Job Template to launch a workflow if value is greater than 1. |
| webhook_service | "" | no | str | Service that webhook requests will be accepted from (github, gitlab) |
| webhook_credential | "" | no | str | Personal Access Token for posting back the status to the service API |
| scm_branch | "" | no | str | Branch to use in job run. Project default used if blank. Only allowed if project allow_override field is set to true. |
| labels | "" | no | list | The labels applied to this job template. NOTE: Labels must be created with the labels role first, an error will occur if the label supplied to this role does not exist. |
| custom_virtualenv | "" | no | str | Local absolute file path containing a custom Python virtualenv to use. |
| notification_templates_started | "" | no | list | The notifications on started to use for this organization in a list. |
| notification_templates_success | "" | no | list | The notifications on success to use for this organization in a list. |
| notification_templates_error | "" | no | list | The notifications on error to use for this organization in a list. |
| state | present | no | str | Desired state of the resource. |
projects.yaml
Example:
---
controller_projects_all:
  - name: CaC_project
    organization: Default
    scm_branch: master
    scm_type: git
    scm_update_on_launch: true
    scm_credential: git
    scm_url: git@git.example.com:project/aap_cac_automation.git
  - name: inventory
    description: inventory project
    organization: Default
    scm_type: git
    scm_url: git@git.example.com:project/inventory_test.git
    scm_credential: git
    scm_branch: master
    scm_clean: false
    scm_delete_on_update: false
    scm_update_on_launch: true
    scm_update_cache_timeout: 0
    allow_override: false
    timeout: 0
...
Create the projects that are needed for, for example, inventories and job templates. Projects are the links to Git that allow the controller to retrieve the playbooks and run them.
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| name | "" | yes | str | Name of Project |
| new_name | "" | no | str | Setting this option will change the existing name (looked up via the name field). |
| copy_from | "" | no | str | Name or id to copy the project from. This will copy an existing project and change any parameters supplied. |
| description | False | no | str | Description of the Project. |
| organization | False | yes | str | Name of organization for project. |
| scm_type | "" | no | str | Type of SCM resource. |
| scm_url | "" | no | str | URL of SCM resource. |
| default_environment | "" | no | str | Default Execution Environment to use for jobs relating to the project. |
| local_path | "" | no | str | The server playbook directory for manual projects. |
| scm_branch | "" | no | str | The branch to use for the SCM resource. |
| scm_refspec | "" | no | str | The refspec to use for the SCM resource. |
| credential | "" | no | str | Name of the credential to use with this SCM resource. |
| signature_validation_credential | "" | no | str | Name of the credential to use for signature validation. if signature validation credential is provided, signature validation will be enabled. |
| scm_clean | "" | no | bool | Remove local modifications before updating. |
| scm_delete_on_update | "" | no | bool | Remove the repository completely before updating. |
| scm_track_submodules | "" | no | bool | Track submodules latest commit on specified branch. |
| scm_update_on_launch | "" | no | bool | Perform an update to the local repository before launching a job with this project. |
| scm_update_cache_timeout | "" | no | Int | Cache Timeout to cache prior project syncs for a certain number of seconds. Only valid if scm_update_on_launch is set to True, otherwise ignored. |
| allow_override | "" | no | bool | Allow changing the SCM branch or revision in a job template that uses this project. |
| timeout | "" | no | Int | The amount of time (in seconds) to run before the SCM Update is canceled. A value of 0 means no timeout. |
| custom_virtualenv | "" | no | str | Local absolute file path containing a custom Python virtualenv to use. |
| notification_templates_started | "" | no | list | The notifications on started to use for this organization in a list. |
| notification_templates_success | "" | no | list | The notifications on success to use for this organization in a list. |
| notification_templates_error | "" | no | list | The notifications on error to use for this organization in a list. |
| state | present | no | str | Desired state of the resource. |
| wait | "" | no | bool | Provides option to wait for completed project sync before returning. |
| update_project | False | no | bool | Force project to update after changes. Used in conjunction with wait, interval, and timeout. |
| interval | "" | no | float | The interval to request an update from controller. Requires wait. |
inventory.yaml
Example:
---
controller_inventories_all:
  - name: inventory_test
    description: Default inventory
    organization: Default
  - name: CaC_inventory
    description: CaC inventory
    organization: Default
controller_inventory_sources_all:
  - name: inventory
    description:
    organization: Default
    source: scm
    source_project: inventory
    source_path: hosts.yaml
    inventory: inventory_test
    update_on_launch: true
    overwrite: true
  - name: CaC_inventory
    description:
    organization: Default
    source: scm
    source_project: CaC_project
    source_path: inventory.yaml
    inventory: CaC_inventory
    update_on_launch: true
    overwrite: true
...
In the example, two inventories are created; both are populated from a project-based inventory source.
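For reference, the file that source_path points to is an ordinary Ansible inventory file inside the project repository. A minimal sketch of what hosts.yaml could contain — the host and group names are invented for illustration:

```yaml
---
# Example hosts.yaml in the inventory project repository (names are made up)
all:
  children:
    webservers:
      hosts:
        web01.example.com:
        web02.example.com:
      vars:
        ansible_user: deploy
```

When the inventory source syncs, these groups and hosts appear in the inventory_test inventory; with overwrite: true, hosts removed from the file are also removed from the inventory.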
Data Structure inventory
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| name | "" | yes | str | Name of this inventory. |
| new_name | "" | no | str | Setting this option will change the existing name (looked up via the name field). |
| copy_from | "" | no | str | Name or id to copy the inventory from. This will copy an existing inventory and change any parameters supplied. |
| description | "" | no | str | Description of this inventory. |
| organization | "" | yes | str | Organization this inventory belongs to. |
| instance_groups | "" | no | list | List of Instance Groups for this Inventory to run on. |
| input_inventories | "" | no | list | List of Inventories to use as input for Constructed Inventory. |
| variables | {} | no | dict | Variables for the inventory. |
| kind | "" | no | str | The kind of inventory. Currently choices are '' and 'smart' |
| host_filter | "" | no | str | The host filter field, useful only when 'kind=smart' |
| prevent_instance_group_fallback | False | no | bool | Prevent falling back to instance groups set on the organization |
| state | present | no | str | Desired state of the resource. |
Data Structure inventory_sources
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| name | "" | yes | str | The name to use for the inventory source. |
| new_name | "" | no | str | A new name for this asset (will rename the asset). |
| description | False | no | str | The description to use for the inventory source. |
| inventory | "" | yes | str | Inventory the group should be made a member of. |
| organization | "" | no | str | Organization the inventory belongs to. |
| source | "" | no | str | The source to use for this group. If set to constructed this role will be skipped as they are not meant to be edited. |
| source_path | "" | no | str | For an SCM based inventory source, the source path points to the file within the repo to use as an inventory. |
| source_vars | "" | no | dict | The variables or environment fields to apply to this source type. |
| enabled_var | "" | no | str | The variable to use to determine enabled state e.g., "status.power_state". |
| enabled_value | "" | no | str | Value when the host is considered enabled, e.g., "powered_on". |
| host_filter | "" | no | str | If specified, controller will only import hosts that match this regular expression. |
| limit | "" | no | str | Enter host, group or pattern match. |
| credential | "" | no | str | Credential to use for the source. |
| execution_environment | "" | no | str | Execution Environment to use for the source. |
| overwrite | "" | no | bool | Delete child groups and hosts not found in source. |
| overwrite_vars | "" | no | bool | Override vars in child groups and hosts with those from external source. |
| custom_virtualenv | "" | no | str | Local absolute file path containing a custom Python virtualenv to use. |
| timeout | "" | no | Int | The amount of time (in seconds) to run before the task is canceled. |
| verbosity | "" | no | Int | The verbosity level to run this inventory source under. |
| update_on_launch | "" | no | bool | Refresh inventory data from its source each time a job is run. |
| update_cache_timeout | "" | no | Int | Time in seconds to consider an inventory sync to be current. |
| source_project | "" | no | str | Project to use as source with scm option. |
| scm_branch | "" | no | str | Project scm branch to use as source with scm option. Project must have branch override enabled. |
| state | present | no | str | Desired state of the resource. |
| notification_templates_started | "" | no | list | The notifications on started to use for this inventory source in a list. |
| notification_templates_success | "" | no | list | The notifications on success to use for this inventory source in a list. |
| notification_templates_error | "" | no | list | The notifications on error to use for this inventory source in a list. |
schedules.yaml
Example:
---
controller_schedules_all:
  - name: Sync Private Hub
    description: Sync Private hub repos
    unified_job_template: sync_private_hub
    rrule: "DTSTART:20230711T110000Z RRULE:FREQ=DAILY;INTERVAL=1;BYDAY=TU,TH"
...
A task can be scheduled so that it runs at regular intervals. In the example above, the synchronization of the private automation hub is scheduled to run at 11:00 UTC every Tuesday and Thursday. The tricky part here is the content of the RRULE, which is not well explained in the documentation, but there is plenty of information about it online.
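A few more RRULE strings, in the single-line format the controller expects — the DTSTART value sets the start time in UTC. These are illustrative examples, not schedules from the original configuration:

```yaml
# Every day at 06:00 UTC
rrule: "DTSTART:20230711T060000Z RRULE:FREQ=DAILY;INTERVAL=1"
# Every Monday at 11:00 UTC
rrule: "DTSTART:20230711T110000Z RRULE:FREQ=WEEKLY;BYDAY=MO"
# The first day of every month at 02:00 UTC
rrule: "DTSTART:20230701T020000Z RRULE:FREQ=MONTHLY;BYMONTHDAY=1"
```

Note that the parts after RRULE: are separated by semicolons without spaces; the iCalendar grammar (RFC 5545) does not allow whitespace inside the rule.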
Data Structure
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| name | "" | yes | str | Name of the schedule |
| new_name | "" | no | str | Setting this option will change the existing name (looked up via the name field). |
| description | False | no | str | Description to use for the schedule. |
| rrule | "" | yes | str | A value representing the schedule's iCal recurrence rule. See the awx.awx.schedule plugin for help constructing this value |
| extra_data | {} | no | dict | Extra vars for the job template. Only allowed if prompt on launch |
| inventory | "" | no | str | Inventory applied to job template, assuming the job template prompts for an inventory. |
| credentials | "" | no | list | List of credentials applied as a prompt, assuming job template prompts for credentials |
| scm_branch | Project default | no | str | Branch to use in the job run. Project default used if not set. Only allowed if allow_override set to true on project |
| execution_environment | Job Template default | no | str | Execution Environment applied as a prompt. Job Template default used if not set. Only allowed if ask_execution_environment_on_launch set to true on Job Template |
| forks | Job Template default | no | str | Forks applied as a prompt. Job Template default used if not set. Only allowed if ask_forks_on_launch set to true on Job Template |
| instance_groups | Job Template default | no | str | List of Instance Groups applied as a prompt. Job Template default used if not set. Only allowed if ask_instance_groups_on_launch set to true on Job Template |
| job_slice_count | Job Template default | no | str | Job Slice Count to use in the job run. Job Template default used if not set. Only allowed if ask_job_slice_count_on_launch set to true on Job Template |
| labels | Job Template default | no | list | List of labels to use in the job run. Job Template default used if not set. Only allowed if ask_labels_on_launch set to true on Job Template |
| timeout | Job Template default | no | str | Timeout to use in the job run. Job Template default used if not set. Only allowed if ask_timeout_on_launch set to true on Job Template |
| job_type | Job template default | no | str | The job type used for the job template. |
| job_tags | "" | no | str | Comma separated list of tags to apply to the job |
| skip_tags | "" | no | str | Comma separated list of tags to skip for the job |
| limit | "" | no | str | A host pattern to constrain the list of hosts managed or affected by the playbook |
| diff_mode | Job template default | no | bool | Enable diff mode for the job template |
| verbosity | Job template default | no | Int | Level of verbosity for the job. Only allowed if configured to prompt on launch |
| unified_job_template | "" | no | str | The name of object that is being targeted by the schedule. Example objects include projects, inventory sources, and templates. Required if state=present. |
| organization | "" | no | str | The organization the unified job template exists in. Used for looking up the unified job template, not a direct model field. |
| enabled | true | no | bool | Enable processing of this schedule |
| state | present | no | str | Desired state of the resource. |
users.yaml
Example:
---
controller_user_accounts_all:
  - username: deploy
    password: password
    email:
    first_name: deploy
    last_name:
    auditor: false
    superuser: false
    update_secrets: false
  - username: super
    password: superpass
    email: su.per@example.com
    first_name: su
    last_name: per
    auditor: false
    superuser: true
    update_secrets: false
...
Create the users who have access to the automation controller here; the superusers can also be created here. In any case, create at least one backup superuser (as a fallback if the LDAP or AD link does not work), so that the management team always has access. This user's password should be stored in a password vault. The organization admin accounts are also created here and linked, via a role, to the organization for which they will be admins; we will discuss this in another chapter of this book. Make sure that the passwords in this file are always vault-encrypted.
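A password can be vault-encrypted in place with ansible-vault encrypt_string, so the plain-text value never appears in users.yaml. The resulting entry looks like the sketch below — the ciphertext shown is a placeholder, not real vault output:

```yaml
# Generated with: ansible-vault encrypt_string 'superpass' --name 'password'
- username: super
  password: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    62313365316532393733...   # placeholder ciphertext, not a real vault string
  email: su.per@example.com
  first_name: su
  last_name: per
  superuser: true
```

Ansible decrypts the !vault value transparently at runtime, as long as the vault password (or a vault password file) is supplied when the CaC playbook runs.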
Data Structure
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| username | "" | yes | str | The username of the user |
| new_username | "" | no | str | Setting this option will change the existing username (looked up via the username field). |
| password | "{{ controller_user_default_password }}" | no | str | The password of the user |
| "" | yes | st | The email of the user | |
| first_name | "" | no | str | The first name of the user |
| last_name | "" | no | str | The last name of the user |
| is_superuser | false | no | bool | Whether the user is a superuser |
| is_system_auditor | false | no | bool | Whether the user is an auditor |
| organization | "" | no | str | The name of the organization the user belongs to. Added in awx.awx >= 20.0.0 DOES NOT exist in ansible.controller yet. |
| state | present | no | str | Desired state of the resource. |
| update_secrets | true | no | bool | True will always change password if user specifies password, even if API gives encrypted for password. False will only set the password if other values change too. |
teams.yaml
Example:
---
controller_teams_all:
  - name: admins
    description: Admin users
    organization: Default
  - name: deploy
    description: deployment users
    organization: Default
...
Create teams in the default organization; there shouldn't be many, as most of the activity will take place in the organizations.
Data Structure
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| name | "" | yes | str | The desired team name to create or modify |
| new_name | "" | no | str | To use when changing a team's name. |
| description | omitted | no | str | The team description |
| organization | "" | yes | str | The organization in which team will be created |
| state | present | no | str | Desired state of the resource. |
roles.yaml
Example:
---
controller_roles_all:
  - user: deploy
    organizations:
      - Default
    role: member
  - user: deploy
    target_teams:
      - deploy
    role: member
  - team: deploy
    job_template: CaC_config_template
    role: execute
...
In this file, we record what a user can run, view, or manage as a member of a team. In the example above, a member of the deploy team can only run the mentioned template, nothing else.
Data Structure
| Variable Name | Default Value | Required | Type | Description |
|---|---|---|---|---|
| user | "" | no | str | The user for which the role applies |
| users | "" | no | list | The users for which the role applies |
| team | "" | no | str | The team for which the role applies |
| teams | "" | no | list | The teams for which the role applies |
| roles | "" | no | list | (see note below) The roles which are applied to one of {target_team, inventory, job_template, workflow, credential, organization, project} for either user or team |
| role | "" | no | str | (see note below) The role which is applied to one of {target_team, inventory, job_template, workflow, credential, organization, project} for either user or team |
| target_team | "" | no | str | The team the role applies against |
| target_teams | "" | no | list | The teams the role applies against |
| inventory | "" | no | str | The inventory the role applies against |
| inventories | "" | no | list | The inventories the role applies against |
| job_template | "" | no | str | The job template the role applies against |
| job_templates | "" | no | list | The job templates the role applies against |
| workflow | "" | no | str | The workflow the role applies against |
| workflows | "" | no | list | The workflows the role applies against |
| credential | "" | no | str | The credential the role applies against |
| credentials | "" | no | list | The credentials the role applies against |
| organization | "" | no | str | The organization the role applies against |
| organizations | "" | no | list | The organizations the role applies against |
| lookup_organization | "" | no | str | Organization the inventories, job templates, projects, or workflows the items exists in. Used to help lookup the object, for organization roles see organization. If not provided, will lookup by name only, which does not work with duplicates. |
| project | "" | no | str | The project the role applies against |
| projects | "" | no | list | The projects the role applies against |
| instance_groups | "" | no | list | The instance groups the role applies against |
| state | present | no | str | Desired state of the resource. |
For the basic configuration of the controller, nothing more is required than what is described above. There's not much the controller can do yet, but the real work is done in the organizations, which is the next step in the configuration journey.