Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.

This guide describes how to install Ansible Automation Platform self-service technology preview and connect it with an instance of Ansible Automation Platform.

Providing feedback on Red Hat documentation

If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.

Important

Ansible Automation Platform self-service technology preview is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

1. About Ansible Automation Platform self-service technology preview

Ansible Automation Platform self-service technology preview connects with Red Hat Ansible Automation Platform using an OAuth application for authentication. For the self-service technology preview release, the following restrictions apply:

  • You can only use one Ansible Automation Platform instance.

  • You can only use one Ansible Automation Platform organization.

1.1. Supported platforms

Self-service technology preview supports installation using a Helm chart on OpenShift Container Platform, and requires Ansible Automation Platform version 2.5.

2. Installation overview

You can deploy self-service technology preview from a Helm chart on OpenShift Container Platform.

Helm is a tool that simplifies deployment of applications on Red Hat OpenShift Container Platform clusters. Helm uses a packaging format called Helm charts. A Helm chart is a package of files that define how an application is deployed and managed on OpenShift. The Helm chart for self-service technology preview is available in the OpenShift Helm catalog.

2.1. Prerequisites

  • A valid subscription to Red Hat Ansible Automation Platform.

  • Red Hat Ansible Automation Platform 2.5.

  • An Ansible Automation Platform instance with the appropriate permissions to create an OAuth application.

  • An OpenShift Container Platform instance (Version 4.12 or newer) with the appropriate permissions within your project to create an application.

  • You have installed the OpenShift CLI (oc). See the Getting started with the OpenShift CLI chapter of the Understanding OpenShift Container Platform guide.

  • You have installed Helm 3.10 or newer. See the Installing Helm chapter of the OpenShift Container Platform Building applications guide.

3. Pre-installation configuration

3.1. Creating an OAuth application

To use the Helm chart to deploy self-service technology preview, you must have set up an OAuth application on your Ansible Automation Platform instance. However, you cannot run automation on your Ansible Automation Platform instance until you have deployed your self-service technology preview Helm chart, because the OAuth configuration requires the URL for your deployment.

Create the OAuth Application on your Ansible Automation Platform instance, using a placeholder name for the deployment URL.

After deploying self-service technology preview, you must replace the placeholder value with a URL derived from your deployment URL in your OAuth application.

The steps below describe how to create an OAuth application in the Ansible Automation Platform console.

Procedure
  1. Open your Ansible Automation Platform instance in a browser and log in.

  2. Navigate to Access Management → OAuth Applications.

  3. Click Create OAuth Application.

  4. Complete the fields in the form.

    • Name: Add a name for your application.

    • Organization: Choose the organization.

    • Authorization grant type: Choose Authorization code.

    • Client type: choose Confidential.

    • Redirect URIs: Add placeholder text for the deployment URL (for example https://example.com).

      Create OAuth application
  5. Click Create OAuth application.

    The Application information popup displays the clientId and clientSecret values.

  6. Copy the clientId and clientSecret values and save them.

    These values are used in an OpenShift secret for Ansible Automation Platform authentication.

3.2. Generating a token for user authentication

You must create a token in Ansible Automation Platform. The token is used in an OpenShift secret for Ansible Automation Platform authentication.

Procedure
  1. Log in to your instance of Ansible Automation Platform as the admin user.

  2. Navigate to Access Management → Users.

  3. Select the admin user.

  4. Select the Tokens tab.

  5. Click Create Token.

  6. Select your OAuth application. In the Scope menu, select Write.

    Create OAuth token
  7. Click Create Token to generate the token.

  8. Save the new token.

    The token is used in an OpenShift secret that is fetched by the Helm chart.

3.3. Generating GitHub and GitLab personal access tokens

3.3.1. Creating a Personal access token (PAT) on GitHub

  1. In a browser, log in to GitHub and navigate to the Personal access tokens page.

  2. Click Generate new token (classic).

  3. In the Select scopes: section, enable the following:

    • repo

    • read:org

    • workflow (as needed)

  4. Click Generate token.

  5. Save the Personal access token.

3.3.2. Creating a Personal access token (PAT) on GitLab

  1. In a browser, log in to GitLab and navigate to the Personal access tokens page.

  2. Click Add new token.

  3. Provide a name and expiration date for the token.

  4. In the Scopes: section, select the following:

    • read_repository

    • api

  5. Click Create personal access token.

  6. Save the Personal access token.

3.4. Setting up a project for self-service technology preview in OpenShift Container Platform

You must set up a project in OpenShift Container Platform for self-service technology preview. You can create the project from a terminal using the oc command. Alternatively, you can create the project in the OpenShift Container Platform console.

For more about OpenShift Container Platform projects, see the Building applications guide in the OpenShift Container Platform documentation.

3.4.1. Setting up an OpenShift Container Platform project using oc

  1. In a terminal, log in to OpenShift Container Platform using your credentials:

    oc login <OpenShift_API_URL> -u <username>

    For example:

    $ oc login https://api.<my_cluster>.com:6443 -u kubeadmin
    WARNING: Using insecure TLS client config. Setting this option is not supported!
    
    Console URL: https://api.<my_cluster>.com:6443/console
    Authentication required for https://api.<my_cluster>.com:6443 (openshift)
    Username: kubeadmin
    Password:
    Login successful.
    
    You have access to 22 projects, the list has been suppressed. You can list all projects with 'oc projects'
    
    Using project "default".
  2. Create a new project. Use a unique project name.

    $ oc new-project <self-service-tech-preview-project-name>

    Lowercase alphanumeric characters (a-z, 0-9) and the hyphen character (-) are permitted for project names. The underscore (_) character is not permitted. The maximum length for project names is 63 characters.

    Example:

    $ oc new-project my-project
    
    Now using project "my-project" on server "https://openshift.example.com:6443".
  3. Open your new project:

    $ oc project <self-service-tech-preview-project-name>
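The project-naming rules above can be checked before you run oc new-project. The following shell function is a hypothetical helper for illustration, not part of the oc CLI:

```shell
# Hypothetical helper: succeeds only if the name uses lowercase alphanumeric
# characters and hyphens, does not start or end with a hyphen, and is at
# most 63 characters long.
is_valid_project_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_valid_project_name "my-project" && echo "my-project: valid"
is_valid_project_name "My_Project" || echo "My_Project: invalid"
```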

3.4.2. Setting up a project in the OpenShift Container Platform web console

You can use the OpenShift Container Platform web console to create a project in your cluster.

  1. In a browser, log in to the OpenShift Container Platform web console.

  2. Choose the Developer perspective.

  3. Click the Project menu and select Create project.

    1. In the Create Project dialog box, enter a unique name in the Name field.

      • Lowercase alphanumeric characters (a-z, 0-9) and the hyphen character (-) are permitted for project names.

      • The underscore (_) character is not permitted.

      • The maximum length for project names is 63 characters.

    2. Optional: Add a display name and description for your project.

  4. Click Create to create the project.

3.5. Creating a plug-in registry in OpenShift

3.5.1. Downloading the TAR files

  1. Create a directory on your local machine to store the .tar files.

    $ mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme>
  2. Set an environment variable ($DYNAMIC_PLUGIN_ROOT_DIR) to represent the directory path.

    $ export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme>
  3. Download the latest .tar file for the plug-ins from the Red Hat Ansible Automation Platform Product Software downloads page.

    The format of the filename is ansible-backstage-rhaap-bundle-x.y.z.tar.gz.

    Substitute the Ansible plug-ins release version, for example 1.0.0, for x.y.z.

  4. Extract the ansible-backstage-rhaap-bundle-<version-number>.tar.gz contents to $DYNAMIC_PLUGIN_ROOT_DIR.

    $ tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C $DYNAMIC_PLUGIN_ROOT_DIR

    Substitute the Ansible plug-ins release version, for example 1.0.0, for x.y.z.

Verification

Run ls to verify that the extracted files are in the $DYNAMIC_PLUGIN_ROOT_DIR directory:

$ ls $DYNAMIC_PLUGIN_ROOT_DIR
ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz
ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz.integrity
ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz.integrity
ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz
ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz.integrity

The files with the .integrity file type contain the plugin SHA value.
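The .integrity files can be used to check the archives after download. The sketch below uses a dummy archive and assumes the SRI-style sha512-<base64> digest format; inspect your own .integrity files to confirm the format before relying on this check:

```shell
# Create a dummy archive standing in for a real plugin .tgz, plus a matching
# .integrity file in the assumed "sha512-<base64>" (SRI) format.
printf 'dummy plugin archive' > plugin.tgz
printf 'sha512-%s' "$(openssl dgst -sha512 -binary plugin.tgz | base64 -w0)" > plugin.tgz.integrity

# Recompute the digest of the archive and compare it with the stored value.
expected="$(cat plugin.tgz.integrity)"
actual="sha512-$(openssl dgst -sha512 -binary plugin.tgz | base64 -w0)"
[ "$expected" = "$actual" ] && echo "integrity OK"
```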

3.5.2. Setting up the plugin registry image

Set up a registry in your OpenShift cluster to host the plug-ins and make them available for installation.

Procedure
  1. Log in to your OpenShift Container Platform instance with credentials to create a new application.

  2. Open your OpenShift project for self-service technology preview.

    $ oc project <AAP-self-service-tech-preview-project-name>
  3. Run the following commands to create a plugin registry build in your OpenShift project.

    $ oc new-build httpd --name=plugin-registry --binary
    $ oc start-build plugin-registry --from-dir=$DYNAMIC_PLUGIN_ROOT_DIR --wait
    $ oc new-app --image-stream=plugin-registry
Verification

Verify that the plugin-registry was deployed successfully:

  1. Open the Topology view in the Developer perspective for your project in the OpenShift web console.

  2. Select the plugin registry icon to open the plugin-registry details pane.

  3. In the Pods section of the plugin-registry details pane, click View logs for the plugin-registry-#########-#### pod.

    Developer perspective

    (1) Plug-in registry

  4. Click the terminal tab and log in to the container.

  5. In the terminal, run ls to confirm that the .tar files are in the plugin registry.

    ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz
    ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
    ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz

    The version numbers and file names can differ.

3.6. Creating secrets in OpenShift for your environment variables

Before installing the chart, you must create a set of secrets in your OpenShift project. The self-service technology preview Helm chart fetches environment variables from OpenShift secrets.

3.6.1. Creating Ansible Automation Platform authentication secrets

  1. Log in to your OpenShift Container Platform instance.

  2. Open your OpenShift project for self-service technology preview in the Administrator view.

  3. Click Secrets in the side pane.

  4. Click Create to open the form for creating a new secret.

  5. Select the Key/Value option.

  6. Create a secret named secrets-rhaap-self-service-preview.

    Note

    The secret must use this exact name.

  7. Add the following key-value pairs to the secret.

    Note

    The secrets must use the exact key names specified below.

    • Key: aap-host-url

      Value needed: Ansible Automation Platform instance URL

    • Key: oauth-client-id

      Value needed: Ansible Automation Platform OAuth client ID

    • Key: oauth-client-secret

      Value needed: Ansible Automation Platform OAuth client secret value

    • Key: aap-token

      Value needed: Token for Ansible Automation Platform user authentication (must have write access).

  8. Click Create to create the secret.
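If you prefer the CLI to the web console, the same secret can be created from a manifest with oc apply -f. This is a sketch; all values are placeholders for your own environment:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhaap-self-service-preview  # the secret must use this exact name
type: Opaque
stringData:
  aap-host-url: https://aap.example.com     # Ansible Automation Platform instance URL
  oauth-client-id: <client-id>              # OAuth client ID
  oauth-client-secret: <client-secret>      # OAuth client secret value
  aap-token: <token>                        # AAP user token with write access
```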

3.6.2. Creating GitHub and GitLab secrets

  1. Log in to your OpenShift Container Platform instance.

  2. Open your OpenShift project for self-service technology preview.

  3. Click Secrets in the side pane.

  4. Click Create to open the form for creating a new secret.

  5. Select the Key/Value option.

  6. Create a secret named secrets-scm.

    Note

    The secret must use this exact name.

  7. Add the following key-value pairs to the secret. If you are only using one SCM, just add the key-value pair for that SCM.

    Note

    The secrets must use the exact key names specified below.

    • Key: github-token

      Value needed: GitHub Personal Access Token (PAT)

    • Key: gitlab-token

      Value needed: GitLab Personal Access Token (PAT)

  8. Click Create to create the secret.

4. Installing the self-service technology preview Helm chart

4.1. Configuring the self-service technology preview Helm chart from the OpenShift catalog

Prerequisites
  1. You have created a project for self-service technology preview in OpenShift.

  2. You have created a plugin registry in your project.

  3. You have set up secrets for Ansible Automation Platform authentication and SCM authentication.

Procedure
  1. In a browser, navigate to your OpenShift project for self-service technology preview that you created earlier.

  2. Select the Developer view.

  3. Click the Helm option in the OpenShift sidebar.

  4. In the Helm page, click Create and select Helm Release.

  5. Search for AAP in the Helm Charts filter, and select the AAP Technical Preview: Self-service automation chart.

  6. In the modal dialog on the chart page, click Create.

  7. Select the YAML view in the Create Helm Release page.

  8. Locate the clusterRouterBase key in the YAML file and replace the placeholder value with the base URL of your OpenShift instance.

    The base URL is the URL portion of your OpenShift URL that follows https://console-openshift-console, for example apps.example.com:

      redhat-developer-hub:
        global:
          clusterRouterBase: apps.example.com
  9. The Helm chart is set up for the Default Ansible Automation Platform organization.

    To update the Helm chart to use a different organization, update the value for the catalog.providers.rhaap.orgs key from Default to your Ansible Automation Platform organization name.

  10. Click Create to launch the deployment.
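For step 9, the override looks roughly like the following fragment. The exact nesting around the rhaap provider can differ between chart versions, and My-AAP-Organization is a placeholder for your organization name:

```yaml
catalog:
  providers:
    rhaap:
      # ...existing provider settings...
      orgs: My-AAP-Organization  # replaces the default value "Default"
```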

4.2. Verifying the installation

  1. In a browser, log in to your OpenShift instance.

  2. In the Developer view, navigate to the Topology view for the namespace where you deployed the Helm chart.

    The deployment appears: the label on the icon is D. The name of the deployment is <installation-name>-backstage, for example <my-aap-self-service-tech-preview-backstage>.

    While it is deploying, the icon is light blue. The color changes to dark blue when deployment is complete.

Deployment on OpenShift console

5. Installing self-service technology preview in air-gapped OpenShift Container Platform environments

You can install self-service technology preview in a disconnected OpenShift Container Platform environment.

5.1. Prerequisites

  • You have installed the OpenShift CLI (oc). See the Getting started with the OpenShift CLI chapter of the Understanding OpenShift Container Platform guide.

  • You have installed Helm 3.10 or newer. See the Installing Helm chapter of the OpenShift Container Platform Building applications guide.

  • You have installed and configured Podman for pulling and pushing container images.

  • You have internet access. This is required to pull images and charts from public repositories, including registry.redhat.io and https://charts.openshift.io/.

  • A Red Hat pull secret, for example pull-secret.json, or a similar credentials file that allows you to pull images from registry.redhat.io.

  • Sufficient disk space to store downloaded images and chart packages.

  • Access to the public registries (Docker Hub, quay.io, registry.redhat.io) and to your disconnected OpenShift cluster’s internal registry.

5.2. Preparing for air-gapped installation

Before you can install self-service technology preview in a disconnected OpenShift Container Platform environment, you must complete some processes on a connected bastion host.

5.2.1. Mirroring container images

  1. Log in to registry.redhat.io:

    $ podman login registry.redhat.io

    Enter your Red Hat username and password when prompted.

    Alternatively, you can use:

    $ podman login --authfile <path_to_pull_secret.json> registry.redhat.io
  2. Log in to your disconnected registry:

    $ podman login <disconnected_registry_url>
  3. Pull the original image from registry.redhat.io:

    $ podman pull registry.redhat.io/rhdh/rhdh-hub-rhel9:1.5.2
  4. Tag the image for your disconnected registry:

    $ podman tag registry.redhat.io/rhdh/rhdh-hub-rhel9:1.5.2 <disconnected_registry_url>/<your_namespace>/rhdh-hub-rhel9:1.5.2

    Example:

    $ podman tag registry.redhat.io/rhdh/rhdh-hub-rhel9:1.5.2 my-disconnected-registry.com/myproject/rhdh-hub-rhel9:1.5.2
  5. Push the tagged image to your disconnected registry:

    $ podman push <disconnected_registry_url>/<your_namespace>/rhdh-hub-rhel9:1.5.2

5.2.2. Downloading the helm chart package

  1. Add the OpenShift Helm charts repository:

    $ helm repo add openshift-helm-charts https://charts.openshift.io/
  2. Update your Helm repositories to fetch the latest chart information:

    $ helm repo update
  3. Pull the chart:

    $ helm pull openshift-helm-charts/redhat-rhaap-self-service-preview --version 1.0.1

    This command downloads the chart as a .tgz file (for example, redhat-rhaap-self-service-preview-1.0.1.tgz).

  4. Unpack the chart:

    $ tar -xvf redhat-rhaap-self-service-preview-1.0.1.tgz

    This creates a directory with a name similar to redhat-rhaap-self-service-preview-1.0.1/.

  5. Navigate to the unpacked chart directory (for example, cd redhat-rhaap-self-service-preview-1.0.1) and open the values.yaml file in a text editor.

  6. Find all the image: entries in values.yaml and replace the original image references with the full path to the image in your disconnected registry.

    For example, replace image: registry.redhat.io/rhdh/rhdh-hub-rhel9:1.5.2 with image: <disconnected_registry_url>/<your_namespace>/rhdh-hub-rhel9:1.5.2

  7. Repack the modified chart:

    $ helm package redhat-rhaap-self-service-preview-1.0.1

    This creates a new .tgz file with your changes (for example, redhat-rhaap-self-service-preview-1.0.1.tgz).
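Step 6 can also be scripted instead of edited by hand. The following is a minimal sketch using GNU sed and a throwaway one-line values.yaml; my-disconnected-registry.com/myproject is a placeholder for your registry path:

```shell
# Create a one-line values.yaml standing in for the chart's real values file.
printf 'image: registry.redhat.io/rhdh/rhdh-hub-rhel9:1.5.2\n' > values.yaml

# Rewrite the public registry reference to point at the disconnected registry.
sed -i 's|registry.redhat.io/rhdh|my-disconnected-registry.com/myproject|g' values.yaml

cat values.yaml
```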

5.2.3. Transferring assets to the disconnected environment

  • Copy the modified Helm chart .tgz file or files (for example, redhat-rhaap-self-service-preview-1.0.1.tgz) from your connected bastion host to a machine or jump box within your disconnected OpenShift network.

5.3. Installing the Helm chart in the disconnected OpenShift environment

5.3.1. Accessing the disconnected OpenShift environment

Prerequisites

Ensure you have the necessary kubeconfig and permissions (for example, cluster-admin for setting up image pull secrets or insecure registries).

Procedure
  1. In a terminal, log in to your disconnected OpenShift cluster using the oc CLI.

    oc login --token=<your_token> --server=<your_openshift_api_url>

    Use the following command if you have a kubeconfig:

    export KUBECONFIG=/path/to/your/kubeconfig
    oc login
  2. Ensure that your OpenShift cluster is configured to trust your disconnected registry:

    1. Use ImageContentSourcePolicy for mirroring.

    2. Use additionalTrustedCA in image.config.openshift.io/cluster for self-signed certificates.

    3. Use insecure-registries for plain HTTP.

5.3.2. Defining parameters and navigating to the chart location

  1. On the machine within your disconnected environment, navigate to the directory where you placed the transferred Helm chart .tgz file.

    cd /path/to/your/transferred/charts/

    Example:

    cd /opt/disconnected-assets/charts/
  2. Define your namespace and cluster router base as environment variables for easier use:

    export MY_NAMESPACE="<your_namespace_name>"
    export MY_CLUSTER_ROUTER_BASE="<your_cluster_router_base>"

    Example:

    export MY_NAMESPACE="rhdh-dev"
    export MY_CLUSTER_ROUTER_BASE="apps.yourcluster.example.com"
  3. If the namespace doesn’t exist, create it:

    oc new-project ${MY_NAMESPACE}

5.3.3. Installing the Helm chart

  • Install the chart using the helm install command, referencing the local .tgz file by its name and using --set flags to provide necessary overrides.

    Add more --set flags for any other values that were in your original values.yaml file.

    $ helm install redhat-rhaap-self-service-preview \
      redhat-rhaap-self-service-preview-1.0.1.tgz \
      --namespace ${MY_NAMESPACE} \
      --set redhat-developer-hub.global.clusterRouterBase=${MY_CLUSTER_ROUTER_BASE} \
      --set redhat-developer-hub.image.name=<disconnected_registry_url>/<your_namespace>/rhdh-hub-rhel9:1.5.2
    • redhat-rhaap-self-service-preview: the release name for your Helm deployment.

    • redhat-rhaap-self-service-preview-1.0.1.tgz: the local path/filename to your modified Helm chart .tgz file.

    • --namespace ${MY_NAMESPACE}: the OpenShift project (namespace) where the chart will be installed, using your defined variable.

    • --set redhat-developer-hub.global.clusterRouterBase=${MY_CLUSTER_ROUTER_BASE}: the cluster router base, using your defined variable.

5.4. Verifying the disconnected installation

  1. Check the Helm release status:

    $ helm list -n ${MY_NAMESPACE}
  2. Monitor the pods in your namespace to ensure they are running:

    $ oc get pods -n ${MY_NAMESPACE}
  3. Check for ImagePullBackOff or other errors in pod events:

    $ oc describe pod <pod_name> -n ${MY_NAMESPACE}
  4. If the chart uses routes to expose services, verify that the routes are created and accessible:

    $ oc get route -n ${MY_NAMESPACE}

6. Inspecting the deployment on OpenShift

You can inspect the deployment logs and ConfigMap on the OpenShift UI to verify that the deployment conforms with the settings in your Helm chart.

6.1. Viewing the deployment logs

  1. In a browser, log in to your OpenShift instance.

  2. In the Developer view, navigate to the Topology view for the namespace where you deployed the Helm chart.

  3. The deployment appears: the label on the icon is D.

    The name of the deployment is <installation-name>-backstage, for example <my-aap-self-service-tech-preview-backstage>.

  4. Click the icon representing the deployment.

  5. The Details pane for the deployment opens.

  6. Select the Resources tab.

  7. Click View logs for the deployment pod in the Pods section:

    Deployment on OpenShift console

    The Pod details page opens for the deployment pod.

  8. Select the Logs tab in the Pod details page.

  9. To view the install messages, select the install-dynamic-plugins container from the INIT CONTAINERS section of the dropdown list of containers:

    View install messages

    The log stream displays the progress of the installation of the plug-ins from the plug-in registry.

    The log stream for successful installation of the plug-ins resembles the following output:

     ======= Installing dynamic plugin http://plugin-registry:8080/ansible-backstage-plugin-catalog-backend-module-rhaap-dynamic-0.1.0.tgz
     ==> Grabbing package archive through `npm pack`
     ==> Verifying package integrity
     ==> Extracting package archive /dynamic-plugins-root/ansible-backstage-plugin-catalog-backend-module-rhaap-dynamic-0.1.0.tgz
     ==> Removing package archive /dynamic-plugins-root/ansible-backstage-plugin-catalog-backend-module-rhaap-dynamic-0.1.0.tgz
     -> Successfully installed dynamic plugin http://plugin-registry:8080/ansible-backstage-plugin-catalog-backend-module-rhaap-dynamic-0.1.0.tgz
  10. Select the Environment tab in the Pod details page to view the environment variables for the containers. If you set additional environment variables in your Helm chart, check that they are listed here.

    Pod environment variables

6.2. Viewing the ConfigMaps

  1. In a browser, open the project for your self-service technology preview in your OpenShift instance.

  2. In the Developer view, select ConfigMaps in the navigation pane.

  3. Select the <installation-name>-backstage-app-config ConfigMap, for example my-aap-self-service-tech-preview-backstage-app-config.

  4. Verify that the ConfigMap conforms with the values you updated in the Helm chart.

  5. Return to the list of ConfigMaps and select the <installation-name>-dynamic-plugins ConfigMap, for example my-aap-self-service-tech-preview-dynamic-plugins.

  6. Verify that the ConfigMap conforms with the expected plugin values.

7. Accessing the self-service technology preview deployment

7.1. Adding the deployment URL to the OAuth Application

When you set up your OAuth application in Ansible Automation Platform before deploying self-service technology preview, you added placeholder text for the Redirect URIs value.

You must update this value with the URL of the deployed application so that users can sign in to self-service technology preview and run automation from it.

  1. Determine the Redirect URI from your OpenShift deployment:

    1. Open the URL for the deployment from the OpenShift console to display the sign-in page for self-service technology preview.

    2. Copy the URL.

    3. To determine the Redirect URI value, append /api/auth/rhaap/handler/frame to the end of the deployment URL.

      For example, if the URL for the deployment is https://my-aap-self-service-tech-preview-backstage-myproject.mycluster.com, then the Redirect URI value is https://my-aap-self-service-tech-preview-backstage-myproject.mycluster.com/api/auth/rhaap/handler/frame.

  2. Update the Redirect URIs field in the OAuth application in Ansible Automation Platform:

    1. In a browser, open your instance of Ansible Automation Platform.

    2. Navigate to Access Management → OAuth Applications.

    3. In the list view, click the OAuth application you created.

    4. Replace the placeholder text in the Redirect URIs field with the value you determined from your OpenShift deployment.

    5. Click Save to apply the changes.
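The Redirect URI derivation in step 1 can be expressed in shell; the deployment URL below is the example from the text:

```shell
DEPLOYMENT_URL="https://my-aap-self-service-tech-preview-backstage-myproject.mycluster.com"

# Strip a trailing slash if present, then append the OAuth handler path.
REDIRECT_URI="${DEPLOYMENT_URL%/}/api/auth/rhaap/handler/frame"
echo "$REDIRECT_URI"
```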

7.2. Signing in to self-service technology preview

Prerequisites
  1. You have configured an OAuth application in Ansible Automation Platform for self-service technology preview.

  2. You have configured a user account in Ansible Automation Platform.

Procedure
  1. In a browser, navigate to the URL for self-service technology preview to open the sign-in page.

    Self-service sign-in page
  2. Click Sign In.

  3. The sign-in page for Ansible Automation Platform appears:

    Ansible Automation Platform sign-in page
  4. Enter your Ansible Automation Platform credentials and click Log in.

  5. The self-service technology preview UI opens.

  6. Click Templates to open a landing page where tiles are displayed, representing templates. When the page is populated with templates, the layout resembles the following screenshot:

    Templates view

7.3. Adjusting synchronization frequency between Ansible Automation Platform and self-service technology preview

The Helm chart defines how frequently users, teams and organization configuration information is synchronized from Ansible Automation Platform to self-service technology preview.

The frequency is set by the catalog.providers.rhaap.schedule.frequency key. By default, the synchronization occurs hourly.

  • To adjust the synchronization frequency, edit the value for the catalog.providers.rhaap.schedule.frequency key in the Helm chart.

            catalog:
              ...
              providers:
                rhaap:
                  '{{- include "catalog.providers.env" . }}':
                    schedule:
                      frequency: {minutes: 60}
                      timeout: {seconds: 30}
Note

Increasing the synchronization frequency generates extra traffic.

Bear this in mind when deciding the frequency, particularly if you have a large number of users.

8. Telemetry capturing

The telemetry data collection feature collects and analyzes telemetry data to improve your experience with Ansible Automation Platform self-service technology preview. This feature is enabled by default.

8.1. Telemetry data collected by Red Hat

Red Hat collects and analyzes the following data:

  • Events of page visits and clicks on links or buttons.

  • System-related information, for example, locale, timezone, user agent including browser and OS details.

  • Page-related information, for example, title, category, extension name, URL, path, referrer, and search parameters.

  • Anonymized IP addresses, recorded as 0.0.0.0.

  • Anonymized username hashes, which are unique identifiers used solely to identify the number of unique users of the RHDH application.

8.2. Disabling telemetry data collection

You can disable and enable the telemetry data collection feature for self-service technology preview by updating the Helm chart for your OpenShift Container Platform project.

  1. Log in to the OpenShift Container Platform console and open the project for self-service technology preview in the Developer perspective.

  2. Navigate to Helm.

  3. Click the More actions ⋮ icon for your self-service technology preview Helm chart and select Upgrade.

  4. Select YAML view.

  5. Locate the redhat-developer-hub.global.dynamic.plugins section of the Helm chart.

  6. To disable telemetry data collection, add the following lines to the redhat-developer-hub.global.dynamic.plugins section.

    redhat-developer-hub:
      global:
        ....
        dynamic:
          plugins:
            - disabled: true
              package: >-
                ./dynamic-plugins/dist/backstage-community-plugin-analytics-provider-segment

    To re-enable telemetry data collection, delete these lines.

  7. Click Upgrade to apply the changes to the Helm chart and restart the pod.