Task:
Perform swing migration of AAP 2.5 RPM to 2.5 Podman Containers.
The idea of a swing migration is to install the same version of AAP on the new server as on the old server, and only then perform any software upgrades. So instead of upgrading from RPM AAP 2.5 to containerized AAP 2.6 directly, it is better to migrate at the same version. That way, migration issues and upgrade issues don't have to be tackled at the same time.
These notes follow the Red Hat documentation:
docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/ansible_automation_platform_migration/index
Ansible bundles:
(As of 2025/10)
- RPM: ansible-automation-platform-setup-bundle-2.5-20-x86_64
- Container: ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64.tar.gz
Source Lab Environment:
The RHEL 9.x AAP Gateway is: aapgwdev.mindwatering.net (10.0.35.150)
The RHEL 9.x AAP Controller is aapdev.mindwatering.net (10.0.35.153)
The RHEL 9.x AAP Hub is aaphubdev.mindwatering.net (10.0.35.156)
The RHEL 9.x AAP EDA is aapedadev.mindwatering.net (10.0.35.158)
The RHEL 9.x PostgreSQL server is aapdbdev.mindwatering.net (10.0.35.155)
Target Container Node Worker Hosts:
cksnode1.mindwatering.net
cksnode2.mindwatering.net
cksnode3.mindwatering.net
Working folder locations:
RPM installation setup directory: /home/myadminid/aapinstall/ansible-automation-platform-setup-bundle-2.5-n-x86_64/
Set-up backup directory target: /local/backup/
The /local/backup/ destination is an NFS-mounted disk shared by the controller, hub, gateway, the target host container node, and an administrative workstation.
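For reference, a minimal sketch of how the /local/backup/ NFS share is mounted on each host (the NFS server name and export path below are placeholders; substitute your own):
$ sudo mkdir -p /local/backup
$ sudo mount -t nfs nfsserver.mindwatering.net:/export/backup /local/backup
<for a persistent mount, add an equivalent /etc/fstab line, e.g. nfsserver.mindwatering.net:/export/backup /local/backup nfs defaults,_netdev 0 0>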
Overall Steps:
1. Perform Inventory, Cold Snapshots, and Backup
2. Gather DB version, Settings, and Secrets for Migration
3. Build the Artifact Folders and Files
4. Create the Artifact TAR Archive
5. Download RH AAP 2.5 Containerized
5a. Create AAP Administrative User on Target Container Nodes
6. Copy the Artifact.tar, Inventory, and Containerized Setup Bundle .tar.gz Files to the Target Container Node
7. Register Container Host Nodes with RedHat and Perform Manifest Download/Upload as Needed
8. Prepare the Inventory File for Installation and Run Install
9. Import Source Content to the Target Containerized Install
10. Reconcile/Clean-up the Target Containerized Environment
11. List-Instance Validation
12. Post Repairs and Fix-ups
13. Post 2.5 migration Validation
14. Post Migration Backup
Important Notes:
- It is difficult to complete this in an overnight maintenance window, even if you have all your custom EEs ready for the new environment. This migration tends to consume a weekend just in the waiting time.
- Before performing a swing migration, upgrade to latest patch release of AAP 2.5 first.
- Ensure the container version being installed is the same AAP 2.5 version.
- Ensure there is a large enough disk on the "primary" controller or an external NFS share to hold the backup which includes the PostgreSQL DB.
- Confirm your latest backup is good before starting, and confirm your new environment backups before killing the old AAP VMs.
- Run the swing on a non-HA lower AAP environment (e.g. development) before performing it on a production environment. I have never done an AAP upgrade where there were no unexpected issues/errors to resolve.
- AAP 2.5 has a scheduler bug - it is wise to upgrade to AAP 2.6 after migration
- RH AAP subscriptions are based on "managed nodes" count. According to RH documentation: "Ansible does not recycle node counts or reset automated hosts"
- SCA (Simple Content Access) subscriptions register the nodes (running the containers) with RHSM (Red Hat Subscription Management) or Satellite. subscription-manager attach is no longer required.
- Podman uses the host node's subscriptions; ensure the target container nodes are registered with: sudo subscription-manager register --username <rhloginuserid> --password <rhloginpwd>
- OCP 4.1+ registers OCM (OpenShift Cluster Manager) at the cluster level rather than the host level (Tech notes : 253273 and 4405041)
- Keep the source RPM-based deployment down once the migration is underway (i.e. after the artifact.tar is built)
- Delete the source RPM-based VMs only after validation of the new environment functionality is complete
Warning:
The following components will not be migrated and have to be manually re-deployed or re-imported:
- Non-default execution environments
- Instance Groups: Specifically, restored automation controller resources that were associated to instance groups likely need to be reassigned to instance groups present on the new automation controller cluster.
- Hub content
- Custom CA for receptor mesh
- EDA config
1. Perform Inventory, Cold Snapshots, and Backup
a. Bring up the AAP gateway UI and gather the version number:
web browser --> aapgwdev.mindwatering.net --> <login> --> About (page) --> write down version number (pop up, above the cow)
b. Bring up the AAP controller UI and gather the version number:
web browser --> aapdev.mindwatering.net --> <login> --> About (page) --> write down version number (pop up, above the cow)
c. Bring up the AAP hub UI and gather the version number:
web browser --> aaphubdev.mindwatering.net --> <login> --> About (page) --> write down version number (pop up, above the cow)
d. SSH into the controller and verify the original setup.sh install directory is still there, the inventory file is there, etc.
$ ssh myadminid@aapdev.mindwatering.net
$ cd /home/myadminid/aapinstall/ansible-automation-platform-setup-bundle-2.5-n-x86_64/
$ ls
<confirm installation and inventory files still located in install folder. Confirm the apitoken. Confirm if the installation was run by root or by the myadminid user, should be run by same user again>
e. Perform controller services stop:
$ sudo automation-controller-service stop
<wait - verify controller(s) stopped>
f. In vCenter, shutdown the AAP environment.
Note: Ensure the PostgreSQL VM shutdown is last.
- vCenter UI --> Inventory --> Templates and Folders (icon, 2nd one) --> vCenter (FQDN twistie) --> AAP Dev (folder twistie) --> select VM --> Actions --> Power --> Shut Down Guest OS
- repeat for all VMs, and DB VM last
g. Still in vCenter, perform the cold snapshots:
- vCenter UI --> Inventory --> Templates and Folders (icon, 2nd one) --> vCenter (FQDN twistie) --> AAP Dev (folder twistie) --> select VM --> Actions --> Snapshots --> Manage Snapshots --> Take Snapshot (button)
- repeat for all VMs
h. Startup the VMs with the PostgreSQL DB VM first:
- vCenter UI --> Inventory --> Templates and Folders (icon, 2nd one) --> vCenter (FQDN twistie) --> AAP Dev (folder twistie) --> select VM --> Actions --> Power --> Power On
- repeat for the controller, hub, gateway, etc.
i. Return to the Controller, shutdown the controller services again, and perform the backup:
- Login:
$ ssh myadminid@aapdev.mindwatering.net
- Stop controller (no new jobs during backup):
$ sudo automation-controller-service stop
<wait - verify controller(s) stopped>
- Get the backup size (need now and for the migration folder):
$ sudo su -
# su - postgres
$ psql -c '\l+'
<view result, ensure you have 3 or 4 times this amount of disk space somewhere. in our case it is /local/backup/ >
$ exit
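- Sanity-check free space on the backup destination against the database sizes reported above (path per our lab; adjust as needed):
$ df -h /local/backup/
<confirm available space is at least 3-4x the combined database sizes>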
- Perform backup:
Note: We will not actually use this backup for the migration. This is a backup for the existing VM-based RPM cluster.
$ cd /home/myadminid/aapinstall/ansible-automation-platform-setup-bundle-2.5-n-x86_64/
$ mkdir /local/backup/2025_aap25
$ ./setup.sh -e 'backup_dest=/local/backup/2025_aap25' -b
<wait>
2. Gather DB version, Settings, and Secrets for Migration
a. Verify PostgreSQL version is 15.x:
$ ssh myadminid@aapdbdev.mindwatering.net
$ sudo su -
# su postgres -
$ psql -c 'SELECT version();'
<verify database is version 15, and note the exact version>
$ exit
# exit
$ exit
b. Get Controller settings and secrets:
- The backup contains the controller settings and the databases. This step backs up copies of those settings, the database settings, and the various key and cert files, etc.
- In our case the /local/backup/ destination is a NFS mounted disk shared by the controller, hub, and gateway, the target host container node, and an administrative workstation
- Sudo or pbrun to root if AAP was installed as root (typical for our client management), then retrieve the controller settings and secrets.
- If not using a local mount, SCP these settings and password files to an admin workstation for the migration.
$ ssh myadminid@aapdev.mindwatering.net
$ sudo su -
# mkdir /local/backup/2025_aap25files
# chmod 777 /local/backup/2025_aap25files
# awx-manage print_settings | grep '^DATABASES' > /local/backup/2025_aap25files/controller-awx-settings-DATABASE.txt
# cp /etc/tower/SECRET_KEY /local/backup/2025_aap25files/controller-tower-SECRET_KEY
# cp /etc/tower/tower.cert /local/backup/2025_aap25files/controller-tower.cert
# cp /etc/tower/tower.key /local/backup/2025_aap25files/controller-tower.key
# cp /etc/tower/conf.d/postgres.py /local/backup/2025_aap25files/controller-postgres.py
# cp /etc/tower/conf.d/channels.py /local/backup/2025_aap25files/controller-channels.py
- copy the inventory file for migration
# cp /home/myadminid/aapinstall/ansible-automation-platform-setup-bundle-2.5-n-x86_64/inventory /local/backup/2025_aap25files/
- Dump copies of the databases that will be used in the migration
- confirm access and backup/dump the controller, hub, and gateway databases
- confirm the dumps have legitimate sizes
# su postgres -
$ psql -h aapdbdev.mindwatering.net -U awx -d awx -t -c 'SHOW server_version;'
$ pg_dump -h aapdbdev.mindwatering.net -U awx -d awx --clean --create -Fc -f /local/backup/2025_aap25files/controller-awx.pgc
$ psql -h aapdbdev.mindwatering.net -U automationhub -d automationhub -t -c 'SHOW server_version;'
$ pg_dump -h aapdbdev.mindwatering.net -U automationhub -d automationhub --clean --create -Fc -f /local/backup/2025_aap25files/automationhub.pgc
$ psql -h aapdbdev.mindwatering.net -U automationgateway -d automationgateway -t -c 'SHOW server_version;'
$ pg_dump -h aapdbdev.mindwatering.net -U automationgateway -d automationgateway --clean --create -Fc -f /local/backup/2025_aap25files/automationgateway.pgc
$ exit
# chown -R myadminid:myadminid /local/backup/2025_aap25files/
# exit
$ exit
Note:
- If you have any custom controller configuration files, back those up, as well. The following are managed/set-up by the setup.sh installation, and can be skipped: postgres.py, channels.py, caching.py, and cluster_host_id.py
$ ssh myadminid@aapdev.mindwatering.net
$ sudo su -
# cd /etc/tower/conf.d/
# ls -l
<view if any files not included above, if so copy them to the /local/backup/2025_aap25files folder>
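If the listing shows extra files, copy each one with a recognizable prefix. A hedged example, where custom_logging.py is a hypothetical file name:
# cp /etc/tower/conf.d/custom_logging.py /local/backup/2025_aap25files/controller-custom_logging.py   # custom_logging.py is illustrative only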
c. Get Hub settings and secrets:
- Retrieve the hub settings and secrets.
- Copy/scp the files to the controller /local/backup/2025_aap25files/ folder
$ ssh myadminid@aaphubdev.mindwatering.net
$ mkdir /home/myadminid/app25tmpfiles/
$ sudo su -
# grep '^DATABASES' /etc/pulp/settings.py > /home/myadminid/app25tmpfiles/hub-pulp-settings-DATABASE.txt
# grep '^SECRET_KEY' /etc/pulp/settings.py | awk -F'=' '{ print $2 }' > /home/myadminid/app25tmpfiles/hub-SECRET_KEY
# cp /etc/pulp/certs/pulp_webserver.crt /home/myadminid/app25tmpfiles/hub-pulp_webserver.crt_backup_yyyymmdd.crt
# cp /etc/pulp/certs/pulp_webserver.key /home/myadminid/app25tmpfiles/hub-pulp_webserver.key_backup_yyyymmdd.key
# cp /etc/pulp/certs/database_fields.symmetric.key /home/myadminid/app25tmpfiles/hub-pulp-database_fields.symmetric.key
# chown -R myadminid:myadminid /home/myadminid/app25tmpfiles/
# exit
$ scp /home/myadminid/app25tmpfiles/* myadminid@aapdev.mindwatering.net:/local/backup/2025_aap25files/
$ exit
d. Get Gateway settings and secrets:
- Retrieve the gateway settings and secrets.
- SCP settings and files below to an admin workstation for the migration.
$ ssh myadminid@aapgwdev.mindwatering.net
$ mkdir /home/myadminid/app25tmpfiles/
$ sudo su -
# grep '^DATABASES' /etc/ansible-automation-platform/gateway/settings.py > /home/myadminid/app25tmpfiles/gw-settings-DATABASE.txt
# cp /etc/ansible-automation-platform/gateway/SECRET_KEY /home/myadminid/app25tmpfiles/gw-SECRET_KEY
# chown -R myadminid:myadminid /home/myadminid/app25tmpfiles/
# exit
$ scp /home/myadminid/app25tmpfiles/* myadminid@aapdev.mindwatering.net:/local/backup/2025_aap25files/
$ exit
3. Build the Artifact Folders and Files
The folder structure is as follows (the .pgc file names below are the Red Hat documentation placeholders; in these notes the actual files are controller-awx.pgc, automationgateway.pgc, and automationhub.pgc):
artifact/
    manifest.yml
    secrets.yml
    sha256sum.txt
    controller/
        controller.pgc
        custom_configs/
            foo.py
            bar.py
    gateway/
        gateway.pgc
    hub/
        hub.pgc
a. Create the folders and empty files:
$ ssh myadminid@aapdev.mindwatering.net
$ mkdir /home/myadminid/artifact
$ mkdir /home/myadminid/artifact/controller
$ mkdir /home/myadminid/artifact/gateway
$ mkdir /home/myadminid/artifact/hub
$ mkdir /home/myadminid/artifact/controller/custom_configs
$ touch /home/myadminid/artifact/manifest.yml
$ touch /home/myadminid/artifact/secrets.yml
b. Move the individual AAP component database backups:
$ mv /local/backup/2025_aap25files/controller-awx.pgc /home/myadminid/artifact/controller/
$ mv /local/backup/2025_aap25files/automationhub.pgc /home/myadminid/artifact/hub/
$ mv /local/backup/2025_aap25files/automationgateway.pgc /home/myadminid/artifact/gateway/
c. Build the manifest.yml file, using the Red Hat AAP documentation example file:
Notes:
- The gateway version is also called the Unified UI version in the release notes.
- Versions syntax:
aap_version: x.y = 2.5
components --> controller --> version x.y.z
components --> gateway --> version x.y.z
components --> hub --> version x.y.z
Example:
$ vi /home/myadminid/artifact/manifest.yml
---
aap_version: 2.5
platform: rpm
components:
  - name: controller
    version: 4.6.21
  - name: hub
    version: 4.10.9
  - name: gateway
    version: 1.6.0
d. Build the secrets.yml file using the files we copied to /local/backup/2025_aap25files/ folder:
Notes:
- Paste each database name or secret. Get the database names from the inventory file or psql, and get the secrets from the backup files:
- - controller_pg_database: awx
- - gateway_pg_database: automationgateway
- - hub_pg_database: automationhub
- - controller_secret_key: Use this file's content --> /local/backup/2025_aap25files/controller-tower-SECRET_KEY
- - gateway_secret_key: Use this file's content --> /local/backup/2025_aap25files/gw-SECRET_KEY
- - hub_secret_key: Use this file's content --> /local/backup/2025_aap25files/hub-SECRET_KEY
- - hub_db_fields_encryption_key: Use this file's content --> /local/backup/2025_aap25files/hub-pulp-database_fields.symmetric.key
$ vi /home/myadminid/artifact/secrets.yml
awx_pg_database: awx
controller_pg_database: awx
controller_secret_key: <SECRET>
gateway_pg_database: automationgateway
gateway_secret_key: <SECRET>
hub_pg_database: automationhub
hub_secret_key: <SECRET>
hub_db_fields_encryption_key: <SECRET>
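Rather than pasting by hand, the secret values can be appended from the backed-up files. A minimal sketch, assuming the backup paths used above (review the resulting YAML afterwards and remove the placeholder lines from the template):
$ cd /home/myadminid/artifact
$ echo "controller_secret_key: $(cat /local/backup/2025_aap25files/controller-tower-SECRET_KEY)" >> secrets.yml
$ echo "gateway_secret_key: $(cat /local/backup/2025_aap25files/gw-SECRET_KEY)" >> secrets.yml
$ echo "hub_secret_key: $(cat /local/backup/2025_aap25files/hub-SECRET_KEY)" >> secrets.yml
$ echo "hub_db_fields_encryption_key: $(cat /local/backup/2025_aap25files/hub-pulp-database_fields.symmetric.key)" >> secrets.yml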
e. If you have custom configuration files, copy them into the artifact/controller/custom_configs subfolder.
4. Create the Artifact TAR Archive
Use the following commands to remove any existing checksum, and add the sha256sum.txt file to the artifact folder.
a. Login:
$ ssh myadminid@aapdev.mindwatering.net
$ cd /home/myadminid/artifact
b. Remove any existing file (if not first loop):
$ rm -f sha256sum.txt
c. Create the new database files checksum:
$ find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
d. Review checksum:
$ cat sha256sum.txt
e. Create the tar archive and its checksum (from the parent folder, so the archive contains the artifact/ directory):
$ cd /home/myadminid
$ tar cf artifact.tar artifact
$ sha256sum artifact.tar > artifact.tar.sha256
- Verify the archive:
$ sha256sum --check artifact.tar.sha256
$ tar tvf artifact.tar
<view output: the .pgc files should show significant sizes>
f. Copy the artifact.tar and its checksum to the /local/backup folder:
$ cp ./artifact.tar ./artifact.tar.sha256 /local/backup/2025_aap25files/
5. Download RH AAP 2.5 Containerized
a. Download AAP 2.5 containerized:
access.redhat.com/downloads/content/480/ver=2.5/rhel---9/2.5/x86_64/product-software
b. Choose online or offline:
(We prefer the full offline bundles.)
e.g. ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64.tar.gz
5a. Create AAP Administrative User on Target Container Nodes:
a. Create dedicated AAP user ID (as needed):
Notes:
- User requires sudo ability
- User is responsible for the installation of containerized AAP
- User's services must start at system start-up (linger enabled), since this user runs the containers
- Podman cannot store container images on an NFS share
- The instructions below assume the /etc/sudoers file has already been edited with visudo and the wheel group is uncommented and allowed to run all commands (e.g. %wheel ALL=(ALL) ALL).
- Login
$ ssh myadminid@cksnode1.mindwatering.net
- Add user
$ sudo adduser myaapadminid
$ sudo passwd myaapadminid
<enter the new password for myaapadminid>
- Add to sudoers:
$ sudo usermod --append -G wheel myaapadminid
- Verify myaapadminid is a sudoer:
$ su myaapadminid -
<enter myaapadminid pwd>
$ sudo whoami
<verify output says root>
$ exit
- Enable linger and login with system start-up
$ sudo loginctl enable-linger myaapadminid
<enter pwd and confirm>
$ sudo loginctl list-users
<verify id(s) include the new myaapadminid>
- Create the containers folder (not sure if needed):
$ su myaapadminid -
<enter myaapadminid pwd>
$ mkdir -p ~/.config/containers/systemd/
b. Repeat step a above, and create the user on the other container hosts/nodes
$ ssh myadminid@cksnode2.mindwatering.net
<repeat above>
$ ssh myadminid@cksnode3.mindwatering.net
<repeat above>
c. Create SSH keys for myaapadminid and copy to the other nodes:
Note:
- The ssh-keygen execution will place the key-pair in the home ~/.ssh folder
- Login
$ ssh myaapadminid@cksnode1.mindwatering.net
$ ssh-keygen -b 4096 -t rsa
<if prompt asked for passphrase, press <enter> for none>
- Copy to the other nodes:
$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub myaapadminid@cksnode2.mindwatering.net
<enter pwd>
$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub myaapadminid@cksnode3.mindwatering.net
<enter pwd>
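- Quick check that key-based login works before the installer needs it (hostnames per our lab):
$ ssh myaapadminid@cksnode2.mindwatering.net hostname
$ ssh myaapadminid@cksnode3.mindwatering.net hostname
<both should return the remote FQDN without prompting for a password>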
6. Copy the Artifact.tar, Inventory, and Containerized Setup Bundle .tar.gz Files to the Target Container Node
Notes:
- Red Hat recommends using the target container node that will host the gateway containers; if not, use the node that will host the PostgreSQL DB.
- If using a non-root-based (rootless) container engine (e.g. Podman), place the files in the engine user's home folder (e.g. /home/myaapadminid/aapinstalltmp/). If the containers are running under a root engine, place them wherever is convenient (e.g. /home/myadminid/aapinstalltmp/).
- If the files were transferred as root to the host node and the engine is non-root-based, perform a chown on the files in the engine user's home folder.
- If your /local/backup/ location is an NFS share like ours, then you can also just mount the backup. For these instructions, we'll be copying from a temporary mount: /local/backup/
a. If not mounting /local/backup on the host container node:
- Transfer the artifact.tar file to a host container node (e.g. cksnode1.mindwatering.net).
- Transfer the inventory file to the same host container node.
- Transfer ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64.tar.gz to the same host container node.
b. If mounted, switch to root, and copy to the administrative (non-root) user, or proper location:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ mkdir /home/myaapadminid/aapinstalltmp/
$ cd /home/myaapadminid/aapinstalltmp/
$ sudo su -
# cp /local/backup/2025_aap25files/artifact.tar /home/myaapadminid/aapinstalltmp/
# cp /local/backup/2025_aap25files/artifact.tar.sha256 /home/myaapadminid/aapinstalltmp/
# cp /local/backup/2025_aap25files/inventory /home/myaapadminid/aapinstalltmp/
# cp /local/backup/2025_aap25files/ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64.tar.gz /home/myaapadminid/aapinstalltmp/
# chown myaapadminid:myaapadminid /home/myaapadminid/aapinstalltmp/*
# exit
c. Extract the artifact.tar
- Check the checksum:
$ sha256sum --check artifact.tar.sha256
- Extract and verify the internal checksum:
$ tar xvf artifact.tar
$ cd ./artifact
$ sha256sum --check sha256sum.txt
d. Extract the AAP 2.5 containerized bundle:
$ cd /home/myaapadminid/aapinstalltmp/
$ tar xvzf ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64.tar.gz
<wait for extract>
$ cd /home/myaapadminid/aapinstalltmp/ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64/
$ ls -l
<view extracted files>
7. Register Container Host Nodes with RedHat and Perform Manifest Download/Upload as Needed
- If you are running OCP, you add the subscription w/in the OCP UI
- If you are running on RHEL (VMs or physical), the containers get their licensing through the host.
- With SCA (Simple Content Access), you no longer have to run subscription-manager attach (or auto-attach). SCA registration is the only step required to access Ansible Automation Platform content.
a. Login:
$ ssh myaapadminid@cksnode1.mindwatering.net
b. Verify hostname is set.
- View current hostname:
$ hostname -f
<view FQDN output>
- If not a FQDN, set with:
$ sudo hostnamectl set-hostname cksnode1.mindwatering.net
c. On the host container node, register access to AAP (if not already done):
$ sudo subscription-manager register --username <red_hat_cloud-userid> --password <red_hat_cloud_password>
d. Refresh and verify:
$ sudo subscription-manager refresh
$ sudo subscription-manager identity
e. Verify the BaseOS and AppStream repositories are enabled for the container host node:
$ sudo dnf repolist
<view output and confirm>
repo id repo name
rhel-10-for-x86_64-appstream-rpms Red Hat Enterprise Linux 10 for x86_64 - AppStream (RPMs)
rhel-10-for-x86_64-baseos-rpms Red Hat Enterprise Linux 10 for x86_64 - BaseOS (RPMs)
f. Verify ansible-core and other needed utilities are installed:
$ sudo dnf install -y ansible-core wget git-core rsync vim
<install if not already installed>
g. If you are air-gapped, and need to perform the manifest download manually and upload it back to portal manually, perform the following steps on the below documentation page:
- Obtaining a manifest file
- Creating a subscription allocation
- Adding subscriptions to a subscription allocation
- Downloading a manifest file
- Activating Red Hat Ansible Automation Platform (either Activate with credentials or Activate with a manifest file)
docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/containerized_installation/index
8. Prepare the Inventory File for Installation and Run Install
a. Move any existing inventory file out of the way:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ cd /home/myaapadminid/aapinstalltmp/ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64/
$ mv inventory inventory_backup
$ cp /home/myaapadminid/aapinstalltmp/inventory ./
Note:
At this point, we can add the secrets.yml contents to the inventory as extra variables, or we can just include them in the containerized install command.
b. Verify the inventory matches the topology of the source RPM deployment
- Compare the /home/myaapadminid/aapinstalltmp/ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64/inventory file with the /home/myaapadminid/aapinstalltmp/artifact/manifest.yml file
- Verify the database entries, etc.
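For comparison, a minimal sketch of the containerized inventory group layout for a single-node lab like ours (group names per the containerized installer documentation; hosts and variables shown are illustrative, so keep the values from your actual inventory):
[automationgateway]
cksnode1.mindwatering.net

[automationcontroller]
cksnode1.mindwatering.net

[automationhub]
cksnode1.mindwatering.net

[automationeda]
cksnode1.mindwatering.net

[database]
cksnode1.mindwatering.net

[all:vars]
postgresql_admin_username=postgres
postgresql_admin_password=<set per your environment>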
c. Run the installation:
Note:
This install creates the topology and structure, but does not restore the PostgreSQL databases.
$ cd /home/myaapadminid/aapinstalltmp/ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64/
$ ansible-playbook -i inventory ansible.containerized_installer.install -e @/home/myaapadminid/aapinstalltmp/artifact/secrets.yml -e "__hub_database_fields='{{ hub_db_fields_encryption_key }}'"
d. Backup the installation before we add the databases back:
$ ansible-playbook -i inventory ansible.containerized_installer.backup
e. Verify the fresh installation functions correctly:
- Services running on container host/node
- Containers ping on their FQDNs
- UI login pages display
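- Optional command-line spot checks (the ping endpoint path is an assumption based on the gateway API layout; fall back to the UI checks above if it differs):
$ curl -k -I https://cksnode1.mindwatering.net/
<expect an HTTP 200 or 30x response from the gateway>
$ curl -k https://cksnode1.mindwatering.net/api/gateway/v1/ping/
<expect JSON status output; adjust the path if your gateway exposes a different health endpoint>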
9. Import Source Content to the Target Containerized Install
Notes:
- If using a separate customer-provided PostgreSQL database, the data is actually still in that database, and your inventory file is probably still pointing to it.
- - The database restore may not be needed if the customer-provided PostgreSQL VM is not being migrated/containerized.
- - These instructions assume the databases are being migrated because the PostgreSQL source databases were external AAP-managed databases.
- This will require SSH access to the worker nodes where the components are running. They may not all be on one worker node (e.g. cksnode1.mindwatering.net).
- For the Redis service, verify the inventory where Redis is configured. (In standalone, Redis is on the gateway container, in HA, it is typically configured on both gateways, both hubs, and both edas.)
- For these instructions, we are going to pretend that all the services are running on cksnode1.
a. Verify what nodes are running which services:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ podman ps
<view running pods>
$ ssh myaapadminid@cksnode2.mindwatering.net
$ podman ps
<view running pods>
$ ssh myaapadminid@cksnode3.mindwatering.net
$ podman ps
<view running pods>
b. If Performance Co-Pilot is configured, stop it on all nodes:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user stop pcp
$ ssh myaapadminid@cksnode2.mindwatering.net
$ systemctl --user stop pcp
$ ssh myaapadminid@cksnode3.mindwatering.net
$ systemctl --user stop pcp
c. Stop the controller, hub, gateway, and eda containers on whichever nodes they are running (see step a above):
- On the node(s) running the controller(s):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user stop automation-controller-task automation-controller-web automation-controller-rsyslog
$ systemctl --user stop receptor
- On the node(s) running the gateway(s):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user stop automation-gateway automation-gateway-proxy
- On the node(s) running the hub(s):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user stop automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
- On the node(s) running the eda(s):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user stop automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2
- On the node(s) running Redis:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user stop redis-unix redis-tcp
d. Import the database dumps to the containerized environment:
Notes:
- When using AAP-managed databases, a temporary container must be spun up to perform the restore.
- The temporary container will require the registry.redhat.io/rhel8/postgresql-15 image.
- The temporary container will mount:
- - the artifact mounted at /var/lib/pgsql/backups.
- - the PostgreSQL certificates to ensure that you can resolve the correct certificates.
- Bash is appended to the temporary container launch so that the psql commands can be entered w/in the container.
- SSH into the node running PostgreSQL and verify the postgresql-15 image is available:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ podman images
<view images and check for the postgresql-15 image>
- Run the temporary container:
$ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume ~/aapinstalltmp/artifact:/var/lib/pgsql/backups:ro,z registry.redhat.io/rhel8/postgresql-15:latest bash
- Get the temporary container network info (from a second terminal session on the node, since the first session is now inside the container):
$ podman container inspect postgresql_restore_temp
<view output - especially the NetworkSettings section to get the container IP and or hostname>
- Now inside the container run the following:
bash-4.4$ psql -h aapdbdev.mindwatering.net -U postgres
postgres=# \l
<view output - see example below>
Name | Owner | Encoding | Collate | Ctype | ICU Locale | Locale Provider | Access privileges
-------------------------+-------------------------+----------+-------------+-------------+------------+-----------------+-------------------
automationedacontroller | automationedacontroller | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
automationhub | automationhub | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
awx | awx | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
gateway | automationgateway | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
...
- Temporarily add the CREATEDB role for each of the accounts used in the source backups:
postgres=# ALTER ROLE awx WITH CREATEDB;
postgres=# ALTER ROLE automationgateway WITH CREATEDB;
postgres=# ALTER ROLE automationhub WITH CREATEDB;
postgres=# ALTER ROLE automationedacontroller WITH CREATEDB;
postgres=# \q
- Restore the psql backups:
bash$ cd /var/lib/pgsql/backups
bash$ pg_restore --clean --create --no-owner -h aapdbdev.mindwatering.net -U awx -d template1 controller/controller-awx.pgc
<wait>
bash$ pg_restore --clean --create --no-owner -h aapdbdev.mindwatering.net -U automationhub -d template1 hub/automationhub.pgc
<wait>
bash$ pg_restore --clean --create --no-owner -h aapdbdev.mindwatering.net -U automationgateway -d template1 gateway/automationgateway.pgc
<wait>
- Revoke the temporary CREATEDB role:
bash-4.4$ psql -h aapdbdev.mindwatering.net -U postgres
postgres=# ALTER ROLE awx WITH NOCREATEDB;
postgres=# ALTER ROLE automationgateway WITH NOCREATEDB;
postgres=# ALTER ROLE automationhub WITH NOCREATEDB;
postgres=# ALTER ROLE automationedacontroller WITH NOCREATEDB;
postgres=# \q
e. If Performance Co-Pilot is configured, start it on all nodes:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user start pcp
$ ssh myaapadminid@cksnode2.mindwatering.net
$ systemctl --user start pcp
$ ssh myaapadminid@cksnode3.mindwatering.net
$ systemctl --user start pcp
f. Start the controller, hub, gateway, and eda containers on whichever nodes they are running:
- On the node(s) running the controller(s):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user start automation-controller-task automation-controller-web automation-controller-rsyslog
$ systemctl --user start receptor
- On the node(s) running the hub(s):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user start automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
- On the node(s) running the eda(s):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user start automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2
- On the node(s) running the gateway(s):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user start automation-gateway automation-gateway-proxy
- On the node(s) running Redis:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ systemctl --user start redis-unix redis-tcp
g. Check your pods
$ ssh myaapadminid@cksnode1.mindwatering.net
$ podman ps
<view status of running pods>
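- Optionally, confirm the user units on each node as well (unit name patterns match the services stopped/started above):
$ systemctl --user list-units 'automation-*' 'receptor*' 'redis-*' --no-pager
<all listed units should show active/running>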
10. Reconcile/Clean-up the Target Containerized Environment
a. Clean-up the gateway container(s):
Note:
The objects cleaned-up/deleted are automatically recreated when the upgraded/new controller node re-registers with the gateway.
- Login:
$ ssh myaapadminid@cksnode1.mindwatering.net
- Start a bash session inside the container:
$ podman exec -it automation-gateway bash
- Perform the clean-up in the container:
bash$ aap-gateway-manage migrate
bash$ aap-gateway-manage shell_plus
>>> HTTPPort.objects.all().delete(); ServiceNode.objects.all().delete(); ServiceCluster.objects.all().delete()
b. If there are additional configuration settings, add them to the inventory file on the node hosting the controller container.
- Edit the inventory file (as needed)
$ ssh myaapadminid@cksnode1.mindwatering.net
$ cd /home/myaapadminid/aapinstalltmp/ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64/
$ vi inventory
- - Add extra_settings for each component using component_extra_settings (e.g. controller_extra_settings, hub_extra_settings, eda_extra_settings, postgresql_extra_settings)
- - documentation: docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars
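- - A hedged example of the extra_settings structure, shown as YAML (e.g. in an extra-vars file passed with -e @file when re-running the installer, or in a YAML-format inventory); the setting shown is illustrative only:
controller_extra_settings:
  - setting: AWX_CLEANUP_PATHS
    value: true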
c. Fix/Update the container resource secrets for the gateway component:
- Login to the node for the component:
$ ssh myaapadminid@cksnode1.mindwatering.net
- Get the current secrets from the gateway:
$ podman exec -it automation-gateway bash -c 'aap-gateway-manage shell_plus --quiet -c "[print(cl.name, key.secret) for cl in ServiceCluster.objects.all() for key in cl.service_keys.all()]"'
- Compare the secrets using the following bash loop:
$ for secret_name in eda_resource_server hub_resource_server controller_resource_server
do
echo $secret_name
podman secret inspect $secret_name --showsecret | grep SecretData
done
<watch the output of each secret_name for one not to match>
- ONLY if any secrets do not match, run:
Note: <SECRET_NAME> is going to be one of the names in the loop above: eda_resource_server hub_resource_server controller_resource_server
$ podman secret rm <SECRET_NAME>
$ echo "<secret_value_from_gateway>" | podman secret create <SECRET_NAME> -
d. Pull the EEs (non-default) from the source environment into the new container environment:
- Recreate using Ansible Builder
- Delete any local or remotely pulled images from earlier runs
- The migration does not copy over Pulp content; the old image references take up space and won't be used
- Push replacement EEs to the new hub
- Documentation: docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/creating_and_consuming_execution_environments/index#populate-container-registry
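- A minimal sketch of rebuilding and pushing a custom EE (requires ansible-builder on a build host; the EE name, tag, and registry address are illustrative, so use your new hub's FQDN and credentials):
$ ansible-builder build -f execution-environment.yml -t cksnode1.mindwatering.net/myorg/custom-ee:1.0
$ podman login cksnode1.mindwatering.net
$ podman push cksnode1.mindwatering.net/myorg/custom-ee:1.0
<then add/verify the EE entry in the controller so job templates can select it>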
e. Re-run the installer, this time using ansible.containerized_installer.install (not .backup):
$ ssh myaapadminid@cksnode1.mindwatering.net
$ cd /home/myaapadminid/aapinstalltmp/ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64/
$ ansible-playbook -i inventory ansible.containerized_installer.install
<wait and watch for errors>
Important:
- If no errors, proceed to validation.
- If the installation playbook reported errors, fix each one and repeat.
11. List-Instance Validation
a. Login to the node for the controller-component:
$ ssh myaapadminid@cksnode1.mindwatering.net
b. Perform the standard awx-manage list_instances command w/in the container:
$ podman exec -it automation-controller-task bash
bash$ awx-manage list_instances
<view output - recent heartbeat timestamps are good; any nodes with capacity=0 are offline, likely because they were not migrated>
Healthy example:
... heartbeat="2025-11-25 10:11:12" ...
Missing node:
...
[ungrouped capacity=0]
[DISABLED] oldnode.mindwatering.net capacity=0 ...
c. Remove old/unused node(s):
bash$ awx-manage deprovision_instance --host=oldnode.mindwatering.net
bash$ exit
$ exit
12. Post Repairs and Fix-ups
a. Remove the orphaned automation hub pulp content links:
- Login to the node for the gateway-component:
$ ssh myaapadminid@cksnode1.mindwatering.net
- Run the repair:
$ curl -d '{"verify_checksums": true}' -H 'Content-Type: application/json' -X POST -k https://<gateway url>/api/galaxy/pulp/api/v3/repair/ -u <gateway_admin_user>:<gateway_admin_password>
Notes:
- The repair will return JSON. If the task JSON shows "state": "failed", then troubleshoot with:
- - error: The error message
- - traceback: The section of code where failed (may be helpful)
- - reserved_resources_record: The resource that is responsible for the problem/error message
- Fix any failed messages before continuing.
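- The repair POST normally returns a task href in the JSON. A hedged example of polling it until completion (the task UUID is a placeholder taken from the POST response):
$ curl -k -u <gateway_admin_user>:<gateway_admin_password> https://<gateway url>/api/galaxy/pulp/api/v3/tasks/<task_uuid_from_response>/
<re-run until "state" shows completed; if failed, troubleshoot per the error/traceback fields above>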
b. Fix the Instance Groups:
- Login to the gateway webUI for AAP
web browser --> aapgwdev.mindwatering.net --> login
- Update the Instance Groups:
Automation Execution (left menu twistie) --> Infrastructure (left menu sub-twistie) --> Instance Groups (left menu) --> <select group> --> Instances (tab) -->
- - Update by adding or removing instances
c. Fix the EEs:
- Update Decision Environment
Automation Decisions (left menu twistie) --> Decision Environments (left menu) --> <select decision environment> -->
- - Edit and update registry URLs, associated credential(s), etc. that are invalid or still point to the old source cluster so that they reference the new target environment
- Update the Infrastructure Execution Environments:
Automation Execution (left menu twistie) --> Infrastructure (left menu sub-twistie) --> Execution Environments (left menu) -->
- - Edit and update each EE image and verify addresses point to the new target environment
- Update the Infrastructure Credentials:
Automation Execution (left menu twistie) --> Infrastructure (left menu sub-twistie) --> Credentials (left menu) -->
- - Edit and verify/update EE-specific credential information so it still aligns with the new target environment
13. Post 2.5 migration Validation
a. Check Authentication and Role Access:
- Test user authentication with the local admin account and a few domain-based login accounts to confirm working.
- Test RBAC roles and their rights across Organizations, Projects, Inventories, Job Templates, etc.
- Confirm team (groups) memberships are intact.
- Test external API access. Look also for any old AAP 2.4 external apps/services that were never updated to point to the Gateway.
- If using SSO integration, verify SSO working.
b. Verify Gateway login again:
- Login to the gateway webUI for AAP
web browser --> aapgwdev.mindwatering.net --> login
- - Dashboard loads
- - No HTTP 500 errors
- - Controller connected
c. Verify Automation Controller:
- Login to the gateway webUI for AAP
web browser --> aapgwdev.mindwatering.net --> login --> Automation Execution -->
- - Verify Organizations all exist
- - Verify Projects all exist
- - Verify Projects Sync successfully
- - Verify Inventories all exist
- - Verify Credentials all exist
- - Verify Job Templates all exist
- - Run Jobs and confirm they finish successfully
- - Confirm non-default EEs have been migrated, and that Jobs using custom EEs run and finish successfully
- - Verify running jobs can access the target inventory servers, and that external systems (e.g. DNS, AD/LDAP, etc.) are reachable
d. Verify Automation Hub:
- Login to the gateway webUI for AAP
web browser --> aapgwdev.mindwatering.net --> login --> Automation Content -->
- - Confirm Collections exist
- - Confirm Namespaces appear and are correct
- - Test a Sync and a Publish as needed
e. Verify Event-Driven Ansible:
- Login to the gateway webUI for AAP
web browser --> aapgwdev.mindwatering.net --> login --> Automation Execution --> Decisions
- - Confirm Rulebooks migrated
- - Confirm Activations migrated
- - Confirm Rule Audits migrated
f. Scan the podman logs for each container:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ podman logs -f <aap-container>
<review logs>
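To scan all containers on a node in one pass rather than one at a time, a small loop like the following works (the time window and grep pattern are arbitrary choices):
$ for c in $(podman ps --format '{{.Names}}'); do echo "== $c =="; podman logs --since 1h "$c" 2>&1 | grep -iE 'error|fail|traceback'; done
<review any matches; no output for a container means nothing matched in the last hour>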
14. Post Migration Backup
a. Take a post database import backup:
$ ssh myaapadminid@cksnode1.mindwatering.net
$ cd /home/myaapadminid/aapinstalltmp/ansible-automation-platform-containerized-setup-bundle-2.5-20-x86_64/
$ ansible-playbook -i inventory ansible.containerized_installer.backup
<complete and wait>
b. Confirm configuration of your backup software to the new container environment
- Complete any set-up needed to backup the new AAP 2.5 container environment
- Discontinue back-ups of the old source environment
c. Confirm new backups running on schedule
d. Confirm a test restoration of the new AAP 2.5 containerized environment to an isolated network segment