Author Archive: Martin Berger

Oracle Cloud Infrastructure – A small and secure Development Environment – Next Level: Terraform

In a previous blog post I wrote about how to build a small and secure development environment in Oracle Cloud Infrastructure with an OpenVPN entry point and a compute instance in a private setup. Now the Terraform code is available on GitHub to set it up in an easy and reusable way:

terraform-examples/oci/openvpnas at main · Trivadis/terraform-examples (github.com)

What you get

After executing the code, you will get this setup here:

  • an OpenVPN Access Server from OCI Marketplace
  • a Compute Instance

Prerequisites

  • Oracle OCI CLI installed and configured
  • Terraform up and running
  • Git client installed

SSH Key Access

An example private and public SSH key to get access to the compute instance in the private subnet is provided in the subdirectory SSH. If you want to use your own SSH key – which is highly recommended – just replace the public key variable in the file variables.tf with your own key:
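A minimal sketch for generating your own key pair – the key file name is just an example, and the exact variable name is defined in variables.tf:

```bash
# Generate a new SSH key pair (file name is an example)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/oci_dev_key -N ''

# Show the public key and paste it into the public key variable in variables.tf
cat ~/.ssh/oci_dev_key.pub
```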

Some Code Snippets

Terraform State File

In file backend.tf, the Terraform state is set to local. There is also an example of storing your state file in OCI Object Storage. Please prepare the bucket first according to the documentation here: Using Object Storage for State Files (oracle.com). Example:
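A hedged sketch of such a backend block using the S3-compatible Object Storage API – bucket name, namespace and credentials file are placeholders; take the exact values from the linked Oracle documentation:

```bash
# Write an example backend configuration (placeholders: bucket, namespace, credentials file)
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket                      = "terraform-states"
    key                         = "openvpnas/terraform.tfstate"
    region                      = "eu-frankfurt-1"
    endpoint                    = "https://<namespace>.compat.objectstorage.eu-frankfurt-1.oraclecloud.com"
    shared_credentials_file     = "~/.oci/terraform_s3_credentials"
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
  }
}
EOF
```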

Compute Instance Image

The compute instance as defined in compute.tf uses these images according to your location. For other data centers or images, here is the link where all images are listed: https://docs.us-phoenix-1.oraclecloud.com/images/
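If you need the image OCID for another region, a query like this with the OCI CLI lists the current platform images (the compartment OCID is a placeholder):

```bash
# List Oracle Linux platform images in your region, newest first
oci compute image list \
  --compartment-id ocid1.compartment.oc1..aaaaexample \
  --operating-system "Oracle Linux" \
  --sort-by TIMECREATED --sort-order DESC \
  --query 'data[*].{name:"display-name",ocid:id}' \
  --output table
```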

OpenVPN Marketplace Image

 

Let’s Terraform it

0: Clone the GitHub Repository

Then change to the openvpnas subdirectory:
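```bash
# Clone the examples repository and change to the OpenVPN Access Server example
git clone https://github.com/Trivadis/terraform-examples.git
cd terraform-examples/oci/openvpnas
```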

1st: Set Variables
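The variable names below are purely hypothetical – check variables.tf for the real ones. Tenancy and provider settings can also be passed as environment variables:

```bash
# Hypothetical example – the actual variable names are defined in variables.tf
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaexample"
export TF_VAR_compartment_ocid="ocid1.compartment.oc1..aaaaexample"
export TF_VAR_region="eu-frankfurt-1"
```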

2nd: terraform init, plan and apply
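The usual Terraform workflow:

```bash
terraform init    # download the OCI provider and initialize the backend
terraform plan    # review what will be created
terraform apply   # create the OpenVPN Access Server and the compute instance
```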

Login and Go!

After some minutes you can access the OpenVPN administrator dashboard or download your client or profile. All required information, like the OpenVPN Access Server public IP, URL, etc., is provided in the Terraform output.

Log in to the compute instance with the private key and the private subnet IP address once the VPN tunnel is up and running:
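For example (key path and private IP are placeholders):

```bash
# Connect to the compute instance in the private subnet through the VPN tunnel
ssh -i SSH/example_ssh_key opc@10.0.2.10
```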

Links and Documents

Summary

Setting up Oracle Cloud Infrastructure with Terraform is a good way to start in the IaC – Infrastructure as Code – world. Feel free to use this code as a base for your next project. What’s your next level? Mine is to integrate the code into Oracle Cloud Resource Manager – stay tuned!

OCI Cloud Performance Management for On-Premises Databases – Part 1 – Management Agent Installation

The OCI Management Agent service collects data from services and sources for monitoring and management in Oracle Cloud Infrastructure. In this blog post series I will show you how you can monitor and manage on-premises Oracle databases in OCI. The communication between an agent and OCI requires an Agent Install Key and is based on HTTPS. Service plugins extend a Management Agent, for example for Oracle Database performance monitoring and management or for log analytics.

This is the first post of a small blog post series.

My Setup

  • An OCI Tenant in datacenter EU-FRANKFURT-1
  • An OCI compartment called datacenter-kestenholz
  • An on-premises Container Database called CDB114, running on Oracle Linux 7
  • Three on-premises Pluggable Databases

The goal is to manage the on-premises database in Oracle Cloud Infrastructure (OCI). Output from the Trivadis TVD-Basenv(TM) framework which shows the database up and running:

Prerequisites for Management Agent Installation

  • Oracle Linux 6 or higher
  • Red Hat Enterprise Linux 6 or higher
  • CentOS 6 or 7
  • SUSE Linux Enterprise Server 12 or 15
  • Windows Server 2012 R2, 2016 or 2019

There are other prerequisites on the target server like the correct Java version (e.g. version 11 does not work) and sudo permissions. For the complete list of prerequisites, see here:

https://docs.oracle.com/en-us/iaas/management-agents/doc/perform-prerequisites-deploying-management-agents.html#GUID-BC5862F0-3E68-4096-B18E-C4462BC76271

Setup for OCI Management Agent

It is recommended to handle the agents with a separate user group and with policies. This allows us to manage the Management Agents at a fine-granular level.

Group AGENT_ADMINS

According to the documentation, I created a user group called AGENT_ADMINS.

Policy Datacenter_Kestenholz_Agent_Policy

A new policy is created that allows the admin group to interact with the management agents, handle keys etc.
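A sketch of what such a policy could look like, created with the OCI CLI – the statements use the documented management-agent resource types, and the compartment OCID is a placeholder:

```bash
# Allow the AGENT_ADMINS group to manage agents and install keys in the compartment
oci iam policy create \
  --compartment-id ocid1.compartment.oc1..aaaaexample \
  --name Datacenter_Kestenholz_Agent_Policy \
  --description "Management Agent administration for group AGENT_ADMINS" \
  --statements '["Allow group AGENT_ADMINS to manage management-agents in compartment datacenter-kestenholz","Allow group AGENT_ADMINS to manage management-agent-install-keys in compartment datacenter-kestenholz"]'
```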

Dynamic Group Management_Agent_Dynamic_Group

Newly added agents in the compartment automatically belong to this group. Replace the OCID with the OCID of your compartment.
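A hedged example of creating the dynamic group with its matching rule (compartment OCID is a placeholder):

```bash
# All Management Agents in the compartment automatically become members of this dynamic group
oci iam dynamic-group create \
  --name Management_Agent_Dynamic_Group \
  --description "All Management Agents in compartment datacenter-kestenholz" \
  --matching-rule "ALL {resource.type='managementagent', resource.compartment.id='ocid1.compartment.oc1..aaaaexample'}"
```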

Policy Datacenter_Kestenholz_Agent_Communication_Policy

A policy is required that allows the agents to communicate with the OCI endpoints. This policy is important, otherwise you run into a communication error (see the Troubleshooting section below).
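A sketch of the policy, with the exact statement list treated as an assumption – verify it against the prerequisites documentation linked above:

```bash
# Allow the agents (via the dynamic group) to talk to the OCI endpoints
oci iam policy create \
  --compartment-id ocid1.compartment.oc1..aaaaexample \
  --name Datacenter_Kestenholz_Agent_Communication_Policy \
  --description "Management Agent communication with OCI" \
  --statements '["Allow dynamic-group Management_Agent_Dynamic_Group to manage management-agents in compartment datacenter-kestenholz","Allow dynamic-group Management_Agent_Dynamic_Group to use metrics in compartment datacenter-kestenholz"]'
```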

Install On-Premises Management Agent

Create Agent Install Key

Go to the Management Agent menu / Downloads and Keys and create a new Agent Install Key. Set the compartment and how long the key is valid. In this example, I need to replace the key after one month.

When you click on Download Key to File, a text file is created with the ManagementAgentInstallKey and all other (optional) parameters that can be used for the installation. You can use this file as a response file template later.

Download the Software and Transfer it to the Target Server

I used the agent for Linux and transferred it to a staging directory on the target on-premises server as OS user root.
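For example (RPM file name, host and staging directory are placeholders):

```bash
# Copy the downloaded agent RPM to a staging directory on the target server
scp oracle.mgmt_agent.rpm root@onprem-db-server:/tmp/stage/
```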

Create a Local Response File

This is an example of a simple two-line response file for the agent installation, called input.rsp and located in the same folder as the RPM. The parameter ManagementAgentInstallKey is visible in the OCI web interface; the CredentialWalletPassword is your password for the wallet.
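A minimal sketch of such a response file (values are placeholders):

```bash
# Create the two-line response file next to the RPM
cat > /tmp/stage/input.rsp <<'EOF'
ManagementAgentInstallKey = <paste your Agent Install Key here>
CredentialWalletPassword = <your wallet password>
EOF
```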

RPM Installation

As OS user root (or a user with sudo permissions), install the RPM file. Here you can see that the minimum required Java version is not met.

After installing the jdk-8u281-linux-x64.rpm to update the server Java version, the installer runs fine.
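The second attempt, roughly (file names as used in this setup):

```bash
# Update Java first, then install the Management Agent RPM
rpm -Uvh /tmp/stage/jdk-8u281-linux-x64.rpm
rpm -ivh /tmp/stage/oracle.mgmt_agent.rpm
```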


2nd try – successful

Agent Configuration

Run the install script with the created response file as additional parameter. The agent will be started automatically.
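The configuration call looks roughly like this (response file path from the example above):

```bash
# Configure the agent with the response file; the agent is started automatically afterwards
/opt/oracle/mgmt_agent/agent_inst/bin/setup.sh opts=/tmp/stage/input.rsp
```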

A systemd service is created.
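You can verify it with systemctl (the service name shown here is the one created on my server):

```bash
# Check the Management Agent service created by the installer
systemctl status mgmt_agent
```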

Verify the Management Agent in Oracle Cloud Infrastructure

Immediately after the setup, the Management Agent is visible with status Active in OCI and starts uploading data.

Agent details:

Troubleshooting

Management Agent logfiles are located in directory /opt/oracle/mgmt_agent/agent_inst/log.
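For example (the log file name may vary):

```bash
# Follow the main agent log to spot upload or policy errors
tail -f /opt/oracle/mgmt_agent/agent_inst/log/mgmt_agent.log
```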

Example error when the policy for agent communication is not set properly:

From My Oracle Support: OCI : Management Agent Status Reporting As “Not Available” Post Installation (Doc ID 2745566.1)

Summary Part 1

The Management Agent installation and integration are easy to set up when all prerequisites are met. For troubleshooting you have full access to the agent logs. See you in blog post 2, where I try to integrate the on-premises Oracle databases into OCI.

Oracle Cloud Infrastructure – A short Blog Post about a secure and small Development Setup

For an internal project I had the pleasure of setting up a new Oracle Cloud Infrastructure environment for an APEX development team. Here is a short overview of the setup.

Requirements

  • VPN access from everywhere – at most two people work on the environment at the same time
  • Oracle Standard Edition 2 – no license available in project
  • Small monitoring to verify server stats
  • Instances can be started and stopped by the developers to save costs, for example overnight, on weekends, or during holidays

Architecture Diagram

| Resource | Network | Usage | Remarks |
| --- | --- | --- | --- |
| OpenVPN Access Server | Public subnet | VPN client access and traffic routing | OCI Cloud Marketplace image – OpenVPN Access Server (2 FREE VPN Connections) – OpenVPN Inc. – Oracle Cloud Marketplace |
| Management Server | Private subnet | OCI CLI, monitoring | Application server and database node start/stop with the OCI CLI, Grafana and Prometheus for monitoring |
| Application Server | Private subnet | Tomcat, ORDS, APEX | |
| Database System | Private subnet | OCI Database Standard Edition 2 | Backup to Object Store enabled |

Network Components

  • Regional private and public subnet
  • Security lists and network security groups
  • Private and public routing table
  • NAT gateway for regional private subnet

Monitoring

Grafana and Prometheus run on the management server. The free shape VM.Standard.E2.1.Micro fits perfectly for this small setup! The Prometheus node exporter runs on the database and the application server. I used this Grafana dashboard: Prometheus Node Exporter Full dashboard for Grafana | Grafana Labs
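A rough sketch of the scrape configuration on the management server – the IP addresses and the Prometheus service name are placeholders, and the snippet assumes scrape_configs is the last section of prometheus.yml:

```bash
# Add the database and application server node exporters as scrape targets
cat >> /etc/prometheus/prometheus.yml <<'EOF'
  - job_name: 'node'
    static_configs:
      - targets: ['10.0.1.20:9100', '10.0.1.30:9100']
EOF
systemctl restart prometheus
```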

Links

Next Steps

  • Adding InfluxDB for persistence
  • Adding the Oracle database to Grafana monitoring
  • Optimizing the shape size of the database server according to usage

Other Ideas

  • Create a blueprint for internal developer environments
  • Automate the setup with Terraform and Ansible

Summary

Setting up this infrastructure in Oracle Cloud Infrastructure was fun, and all developer requirements are fulfilled. I started with the network and OpenVPN configuration – I really like their Marketplace instance – and then moved on to the application and database servers, step by step. There are many other ideas for what we could do based on this setup; the work will not run out. #ilike

Oracle Cloud Infrastructure Data Safe – How to burn down 201.44 Swiss Francs in 30 Seconds…

Is Data Safe really for free?

Last autumn, the new Oracle Cloud Infrastructure feature called Data Safe was released. For sure, new features have to be tested. I tested the Data Safe feature too and added a cloud database to Data Safe. But in my enthusiasm about this cool feature – or maybe it was just too late in the evening – I made a mistake when adding the database target. Four days later, I noticed that Data Safe was being charged to my account. Mmm, but should it not be free? First reaction: I raised an SR and described the case. The nice guy from My Oracle Support recognized the situation quickly:

Dear Mister Berger, you have used the wrong target type when adding the Oracle Cloud Infrastructure database as a new Data Safe target.

From the Service Request:

  • B91632 – Oracle Cloud Infrastructure – Data Safe for Database Cloud Service – Each (Includes 1 million audit records per target per month) – Free
  • B91631 – Oracle Cloud Infrastructure – Data Safe for Database Cloud Service – Audit Record Collection Over 1 Million Records (over 1 million audit records per target per month) – 0.0800 / 10,000 Audit Records Per Target Per Month
  • B92733 – Oracle Cloud Infrastructure – Data Safe for On-Premises Databases – Target Database Per Month – 200.00 Target Database Per Month + Includes 1 million audit records per target per month (pre-requisite under B91632)

Indeed, indeed. According to My Oracle Support, I had used the wrong target type. Instead of Oracle Cloud Database, I used Oracle Database on Compute, did not realize the mistake, and ignored the text below the dropdown box. Shame on me 😉 – here is the small but important difference:

So far so good, the mistake was recognized. I deleted the target and added it from scratch with the correct target type. But this didn’t help, the charging went on.

Oracle Cloud Infrastructure Price List

Adding any target type other than Oracle Cloud Database is charged as a monthly fee, as described here: Cloud Price List | Oracle

Cost and Usage Report

In the detailed cost and usage report, the target is marked as deleted (suffix DELETED plus deletion date) and still charged.

All you can do is get angry about the mistake and wait. After a month, the money was burned and no more Oracle Cloud Infrastructure Data Safe costs were charged. As you can see, 201.44 CHF were charged for the month.

I don’t know which currency converter Oracle uses, but at the moment 200 USD is less than 180 CHF 😉

Lessons learned

Pity about the beautiful money – and for my next test run: RTFM.

Red Hat Ansible Tower Upgrade from 3.5 to 3.8 – when running setup.sh is not enough – or: I have made fire!

In a customer project I had to update an existing Red Hat Ansible Tower setup from version 3.5.1 to the newest available version 3.8. The upgrade scenario as described in 8. Upgrading an Existing Tower Installation — Ansible Tower Installation and Reference Guide v3.8.0 does not work here. For example, the delivered setup.sh is not able to restore data exported from Postgres 9.6 into a new Postgres 10 database. The upgrade scenario described here is the result of a long discussion with Red Hat support (thanks to Swati – he did a great job!) and an intensive test period on local virtual machines until the live system was ready.

Remark: The servers in this blog post are listed without domain names to make them easier to read.

The running Ansible Tower Setup

The setup contains three Red Hat Ansible Tower servers and the repository database. All servers can connect to each other, and the required firewall ports are open.

| Server | Tower Version | Operating System | PostgreSQL Version | Remarks |
| --- | --- | --- | --- | --- |
| ansible-tower-01 | 3.5.1 | RHEL 7.8 | – | no Internet access |
| ansible-tower-02 | 3.5.1 | RHEL 7.8 | – | no Internet access |
| ansible-tower-03 | 3.5.1 | RHEL 7.8 | – | no Internet access |
| ansible-db03 | – | RHEL 7.8 | 9.6 | no Internet access, runs on port 5432 |

Upgrade Path

This document describes the upgrade path: an installation cannot be more than two releases behind the version you upgrade to. In our case, Tower 3.5.1 needs to be updated to 3.7.4 first before 3.8 can be applied. Ansible Tower 3.8 requires Postgres 10.

What are the Recommended Upgrade Paths for Ansible Tower? – Red Hat Customer Portal

Bundles: We use bundles (aka offline installers) for versions 3.7.4 and 3.8 – the Ansible Tower servers don’t require internet access to upgrade and install the relevant packages, as they are included in the bundle. The bundles are accessible from the Ansible Tower server where the setup processes are executed.

| Bundle | URL | Note |
| --- | --- | --- |
| 3.7.4-1 | https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-3.7.4-1.tar.gz | free download |
| 3.8 | https://access.redhat.com/downloads/content/480/ver=1.2/rhel—7/1.2/x86_64/product-software | requires a Red Hat subscription to download |

Prerequisites

  • The software and the inventory file of the current Ansible Tower installation
  • Bundles of the future releases are downloaded
  • Postgres 10 installed on database server
  • A Red Hat Subscription Manifest File which contains the license information for Tower 3.8 – the Manifest file can only be created in the Red Hat account when a valid subscription is available

Upgrade Overview

  1. Shutdown all involved virtual machines properly, create a snapshot for fallback, restart all involved servers properly
  2. Install Postgres 10 on database server and create a new Postgres 10 database
  3. Manual export/import tower data into the new database with pg_dump/pg_restore
  4. Re-run Ansible Tower 3.5.1 setup against new database
  5. Upgrade to Ansible Tower 3.7.4
  6. Upgrade to Ansible Tower 3.8
  7. Add Red Hat Subscription Manifest
  8. I have made fire!

1. Shutdown Machines and create Snapshot of the Hosts

It is highly recommended to create a backup or snapshot of the existing environment in case of upgrade troubles.

Tower and Database Shutdown

ansible-tower-01 / ansible-tower-02 / ansible-tower-03: stop the tower service:
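For example:

```bash
# Stop all Ansible Tower services on the node
ansible-tower-service stop
```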

ansible-db03: Stop the database service:
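For example (the service name depends on how PostgreSQL 9.6 was installed):

```bash
# Stop the PostgreSQL 9.6 service
systemctl stop postgresql-9.6
```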

Create a backup or snapshot

In my case, we created VMware vRealize Automation snapshots after shutting down Tower and the database properly.

Restart machines

Restart all involved servers and verify in the Tower UI that everything works properly. Tower and the database are configured as OS services, so they are started automatically after the servers power on.

2. Install Postgres 10 on database server and create a new Postgres 10 database

Install Postgres 10

Connection details of the new repository database:

  • Host: ansible-db03
  • Port: 5433 (old Tower repository was 5432)

There is no internet access to the official PostgreSQL yum repository. Therefore, download the following files from https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7.8-x86_64/ and copy them to /tmp on the database server. Ansible Tower 3.7.x requires PostgreSQL 10; it does not work with a newer version! See here: https://docs.ansible.com/ansible-tower/latest/html/installandreference/requirements_refguide.html.
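Once the RPMs are in /tmp, an offline installation could look like this (the exact file names depend on the minor release you downloaded):

```bash
# Install the downloaded PostgreSQL 10 packages without internet access
yum localinstall -y /tmp/postgresql10-libs-10.*.rpm \
                    /tmp/postgresql10-10.*.rpm \
                    /tmp/postgresql10-server-10.*.rpm \
                    /tmp/postgresql10-contrib-10.*.rpm
```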

 

Create a new PostgreSQL Cluster with Trivadis pgOperate

On the database server ansible-db03, a new cluster database is required with PostgreSQL version 10. Therefore I used pgOperate. pgOperate is part of the Open Source tool pgBasEnv. My Trivadis colleagues have developed a really cool framework to manage PostgreSQL clusters. Both tools with the manuals and examples are available on GitHub:

GitHub – Trivadis/pgbasenv: pgBasEnv – PostgreSQL Base Environment Tool

GitHub – Trivadis/pgoperate: pgOperate – PostgreSQL Operation Tool

Example to create a new cluster when the tools are installed in the base directory /var/lib/pgsql/tvdtoolbox. You will find more information on how to configure and use the tool on the GitHub pages.

Set the cluster parameters to create a new cluster running on port 5433:

Set the alias and execute cluster creation script.

Run root.sh to enable automated startup and allow user postgres to start/stop the service.

Log in as user postgres and verify the newly created cluster, which is started automatically. In the status screen shown after login, you can see the old PostgreSQL 9.6 database running together with version 10.

Verify that the file pg_hba.conf allows traffic to the database, for example by allowing database connections from each server:
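For example (the data directory path comes from the pgBasEnv environment, so treat it as an assumption):

```bash
# Allow password-authenticated connections from any host to the new cluster
cat >> $PGDATA/pg_hba.conf <<'EOF'
host    all    all    0.0.0.0/0    md5
EOF
```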

0.0.0.0/0 ensures that all Ansible Tower servers can connect to the new database. This does not implicitly allow everybody to connect; for example, you can control access at the OS firewall level.

Create a new Database for Tower Repository

Log in as user postgres and start psql against the new cluster. In this case, we create a new database called tower10 with the same username and password as the existing database.
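A sketch of the psql commands – user name and password are placeholders; use the same ones as the existing repository:

```bash
# Create the repository owner and the new tower10 database in the PostgreSQL 10 cluster
psql -p 5433 -U postgres -c "CREATE USER tower WITH PASSWORD 'changeme';"
psql -p 5433 -U postgres -c "CREATE DATABASE tower10 OWNER tower;"
```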

Verification

Test the database connection from all Tower servers to avoid firewall or configuration issues, and verify that the newly created database tower10 is listed.
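For example, from each Tower node (user name is a placeholder):

```bash
# List the databases of the new cluster from a Tower server
psql -h ansible-db03 -p 5433 -U tower -l
```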

3. Manual Export / Import of Ansible Tower Repository Data

In this step we migrate the tower database to the new PostgreSQL 10 database. It’s recommended to stop the Ansible Tower before starting the export.

ansible-tower-01 / ansible-tower-02 / ansible-tower-03: stop the tower service:

Export Data
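A hedged example with pg_dump – database and user names are placeholders and must match your existing inventory settings:

```bash
# Dump the existing Tower repository from the PostgreSQL 9.6 cluster (custom format)
pg_dump -h ansible-db03 -p 5432 -U tower -Fc -f /var/lib/pgsql/tower_35.dump tower
```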

Import Data
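And the corresponding restore into the new database:

```bash
# Restore the dump into the new tower10 database in the PostgreSQL 10 cluster
pg_restore -h ansible-db03 -p 5433 -U tower -d tower10 /var/lib/pgsql/tower_35.dump
```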

Stop and disable existing Postgres 9.6 Database
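For example (the service name depends on your 9.6 installation):

```bash
# Make sure only the PostgreSQL 10 cluster is active from now on
systemctl stop postgresql-9.6
systemctl disable postgresql-9.6
```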

4. Re-run Ansible Tower 3.5.1 setup against new database

Adapt Inventory

In this step, the existing Tower setup is registered again against the new database. We have to change the inventory file of the existing 3.5.1 installation.

Example inventory file of the existing 3.5.1 installation with three Ansible Tower servers. Note: the [database] section is empty, so the setup procedure will not try to install a new database and instead uses the one we created above.
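A hedged sketch of such an inventory – passwords and the RabbitMQ values are placeholders, and the pg_* variables point to the new PostgreSQL 10 database created above:

```bash
# Example inventory for the 3.5.1 re-run: empty [database] section, external DB via pg_* variables
cat > inventory <<'EOF'
[tower]
ansible-tower-01
ansible-tower-02
ansible-tower-03

[database]

[all:vars]
admin_password='********'
pg_host='ansible-db03'
pg_port='5433'
pg_database='tower10'
pg_username='tower'
pg_password='********'
rabbitmq_username=tower
rabbitmq_password='********'
rabbitmq_cookie=cookiemonster
EOF
```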

Re-run setup.sh

As user root, execute the setup.sh script on ansible-tower-01:
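For example (the install directory path is a placeholder):

```bash
# Re-run the existing 3.5.1 installer so Tower registers against the new database
cd /opt/ansible-tower-setup-3.5.1-1
./setup.sh
```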

Verify the setup playbook result – no failed tasks should occur. Ansible Tower 3.5.1 now runs with PostgreSQL 10 and is ready to upgrade. Verify that all Tower nodes are running properly and log in to the user interface.

Verification

Log in to the Ansible Tower servers and verify that the version is still 3.5.1.

5. Upgrade to Ansible Tower 3.7.4

The software bundle is transferred to the target server. As I don’t have much free space, I moved the bundle to an NFS share that is attached to all Ansible Tower servers. It’s recommended to stop Ansible Tower before starting the upgrade process.

ansible-tower-01 / ansible-tower-02 / ansible-tower-03: stop the tower service:

As user root, go to the install directory and extract the bundle.
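For example (the NFS mount point is a placeholder):

```bash
# Extract the 3.7.4 offline bundle and change into the setup directory
cd /mnt/nfs/tower
tar xvzf ansible-tower-setup-bundle-3.7.4-1.tar.gz
cd ansible-tower-setup-bundle-3.7.4-1
```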

For the inventory file, the same settings as used in 3.5.1 can be used. The RabbitMQ settings are no longer required and can be removed – this component is removed from Ansible Tower during the upgrade process. Example inventory file:

Run setup.sh

As user root, execute the setup.sh script on ansible-tower-01.

Verify the setup playbook result – no failed tasks should occur.

Verification

Log in to the Ansible Tower servers and verify that the version is now 3.7.4.

Package verification on Ansible Tower servers:
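For example:

```bash
# Show the installed Ansible Tower package version on each node
rpm -qa | grep ansible-tower
```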

Output from the play where RabbitMQ is removed:

6. Upgrade to Ansible Tower 3.8

The software bundle is transferred to the target server. As I don’t have much free space, I moved the bundle to an NFS share that is attached to all Ansible Tower servers. It’s recommended to stop Ansible Tower before starting the upgrade process.

ansible-tower-01 / ansible-tower-02 / ansible-tower-03: stop the tower service:

As user root, go to the install directory and extract the bundle.

For the inventory file, the same settings as used in 3.7.4 can be used. Example inventory file:

Run setup.sh

As user root, execute the setup.sh script on ansible-tower-01.

Note: the installer verifies that the ansible RPM version is 2.4 or higher. Only on the node where the installer runs is the ansible package updated. If you want to update the package on the other Tower servers too, you can force the update of the package to 2.9.15 with the parameter upgrade_ansible_with_tower=1, as shown below. A manual upgrade by rpm -Uvh is possible too; the ansible package is available in the bundle.
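A hedged example of passing that parameter to the installer (setup.sh forwards -e options to ansible-playbook):

```bash
# Force the ansible package update on all Tower nodes during the upgrade
./setup.sh -e upgrade_ansible_with_tower=1
```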

Verify the setup playbook result – no failed tasks should occur.

Verification

Log in to the Ansible Tower servers and verify that the version is now 3.8.

Package verification on Ansible Tower servers:

7. Add Red Hat Subscription Manifest

In former versions, a license file was required; now this has changed to a subscription manifest. The file is generated in the Red Hat customer portal. After the first login to the 3.8 servers, you have to add the manifest. That’s all, folks.

8. I have made fire!

After spending a lot of time figuring out the correct way to upgrade, doing a lot of tests, and finally implementing it on the live system, I did it. When version 3.8 showed up, I felt like Tom Hanks in the movie Cast Away – I have made fire!

Summary

Upgrading Ansible Tower with the existing online documentation? No chance! I opened a support case at Red Hat to clarify a lot of things like changing the repository database, updating the ansible packages, etc. The procedure with setup.sh -b / setup.sh -r as described in the upgrade documentation did not work; it needs a manual data transfer. I really like the method of installing new Ansible Tower versions with a bundle, so no internet connection is required to keep the environment up to date. Hopefully Red Hat will update the documentation in the near future, for example with an upgrade cookbook section or whatever they want to call it.