Latest Posts

Red Hat Ansible Tower Upgrade from 3.5 to 3.8 – when running setup.sh is not enough – or: I have made fire!

In a customer project I had to upgrade an existing Red Hat Ansible Tower setup from version 3.5.1 to the newest available version 3.8. The upgrade scenario as described in 8. Upgrading an Existing Tower Installation — Ansible Tower Installation and Reference Guide v3.8.0 does not work here. For example, the delivered setup.sh is not able to restore data exported from Postgres 9.6 into a new Postgres 10 database. The upgrade scenario described here is the result of a long discussion with Red Hat support (thanks to Swati – he did a great job!) and an intensive test period on local virtual machines until it was the live system's turn.

Remark: The servers in this blog post are listed without domain names to make them easier to read.

The running Ansible Tower Setup

The setup consists of three Red Hat Ansible Tower servers and the repository database. All servers can reach each other; the required firewall ports are open.

Server           | Tower Version | Operating System | PostgreSQL Version | Remarks
ansible-tower-01 | 3.5.1         | RHEL 7.8         | –                  | no Internet access
ansible-tower-02 | 3.5.1         | RHEL 7.8         | –                  | no Internet access
ansible-tower-03 | 3.5.1         | RHEL 7.8         | –                  | no Internet access
ansible-db03     | –             | RHEL 7.8         | 9.6                | no Internet access, runs on port 5432

Upgrade Path

This document describes the upgrade path: an installation cannot be more than two minor releases behind the target release to be upgraded. In our case, Tower 3.5.1 needs to be updated to 3.7.x (here: 3.7.4) first before 3.8 can be applied. Ansible Tower 3.8 requires Postgres 10.

What are the Recommended Upgrade Paths for Ansible Tower? – Red Hat Customer Portal

Bundles: We use bundles (aka offline installers) for versions 3.7.4 and 3.8 – the Ansible Tower servers don't require internet access to upgrade and install the relevant packages; they are included in the bundle. The bundles must be accessible to the Ansible Tower server where the setup processes are executed.

Bundle  | URL                                                                                                | Note
3.7.4.1 | https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-3.7.4-1.tar.gz | free download
3.8     | https://access.redhat.com/downloads/content/480/ver=1.2/rhel—7/1.2/x86_64/product-software         | requires a Red Hat subscription to download

Prerequisites

  • The software and the inventory file of the current Ansible Tower installation
  • Bundles of the target releases are downloaded
  • Postgres 10 installed on database server
  • A Red Hat Subscription Manifest File which contains the license information for Tower 3.8 – the Manifest file can only be created in the Red Hat account when a valid subscription is available

Upgrade Overview

  1. Shut down all involved virtual machines properly, create a snapshot for fallback, then restart all involved servers properly
  2. Install Postgres 10 on database server and create a new Postgres 10 database
  3. Manually export/import the Tower data into the new database with pg_dump/pg_restore
  4. Re-run Ansible Tower 3.5.1 setup against new database
  5. Upgrade to Ansible Tower 3.7.4
  6. Upgrade to Ansible Tower 3.8
  7. Add Red Hat Subscription Manifest
  8. I have made fire!

1. Shutdown Machines and create Snapshot of the Hosts

It is highly recommended to create a backup or snapshot of the existing environment in case of upgrade trouble.

Tower and Database Shutdown

ansible-tower-01 / ansible-tower-02 / ansible-tower-03: stop the tower service:
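For example, with the service wrapper that ships with Tower:

  ansible-tower-service stop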

ansible-db03: stop the database service:
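A sketch, assuming the 9.6 cluster runs under the PGDG systemd unit name:

  systemctl stop postgresql-9.6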

Create a backup or snapshot

In my case, we took VMware vRealize Automation snapshots after shutting down Tower and the database properly.

Restart machines

Restart all involved servers and verify in the Tower UI that everything works properly. Tower and the database are configured as OS services and start automatically after power-on.

2. Install Postgres 10 on database server and create a new Postgres 10 database

Install Postgres 10

Connection details of the new repository database:

  • Host: ansible-db03
  • Port: 5433 (old Tower repository was 5432)

There is no internet access to the official PostgreSQL yum repository. Therefore, download the following files from https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7.8-x86_64/ and copy them to /tmp on the database server. Ansible Tower 3.7.x requires PostgreSQL 10. It does not work with a newer version! See here: https://docs.ansible.com/ansible-tower/latest/html/installandreference/requirements_refguide.html.
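A sketch of the offline installation; the exact RPM file names depend on the current 10.x minor release:

  # on ansible-db03, as root, after copying the RPMs to /tmp
  yum localinstall /tmp/postgresql10-libs-*.rpm /tmp/postgresql10-10.*.rpm \
    /tmp/postgresql10-server-*.rpm /tmp/postgresql10-contrib-*.rpm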

 

Create a new PostgreSQL Cluster with Trivadis pgOperate

On the database server ansible-db03, a new database cluster running PostgreSQL version 10 is required. For this I used pgOperate, which builds on the open-source tool pgBasEnv. My Trivadis colleagues have developed a really cool framework to manage PostgreSQL clusters. Both tools, with manuals and examples, are available on GitHub:

GitHub – Trivadis/pgbasenv: pgBasEnv – PostgreSQL Base Environment Tool

GitHub – Trivadis/pgoperate: pgOperate – PostgreSQL Operation Tool

Example of creating a new cluster when the tools are installed in the base directory /var/lib/pgsql/tvdtoolbox. You will find more information on how to configure and use the tools on the GitHub pages.

Set the cluster parameters to create a new cluster running on port 5433:

Set the alias and execute the cluster creation script.

Run root.sh to enable automated startup and allow user postgres to start/stop the service.

Log in as user postgres and verify the newly created cluster, which is started automatically. Here you can see the old PostgreSQL 9.6 database running alongside version 10 in the status output shown after login.

Verify that the file pg_hba.conf allows traffic to the database, for example database connections from each server:
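A minimal sketch of such an entry (the pg_hba.conf location depends on the cluster's data directory):

  host    all    all    0.0.0.0/0    md5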

0.0.0.0/0 ensures that all Ansible Tower servers can connect to the new database. This does not implicitly allow everybody to connect – you can control access at the OS firewall level, for example.

Create a new Database for Tower Repository

Log in as user postgres and start psql against the new cluster. In this case, we create a new database called tower10 with the same username and password as the existing database.
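A sketch of the psql session – the user name tower and its password are placeholders that must match the existing repository:

  psql -p 5433
  CREATE USER tower WITH PASSWORD '***';  -- hypothetical user name
  CREATE DATABASE tower10 OWNER tower;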

Verification

Test the database connection from all Tower servers to avoid firewall or configuration issues, and verify that the newly created database tower10 is listed.
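For example, from each Tower server (user name is a placeholder):

  psql -h ansible-db03 -p 5433 -U tower -l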

3. Manual Export / Import of Ansible Tower Repository Data

In this step we migrate the Tower database to the new PostgreSQL 10 database. It's recommended to stop Ansible Tower before starting the export.

ansible-tower-01 / ansible-tower-02 / ansible-tower-03: stop the tower service:

Export Data
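A sketch with pg_dump, assuming the 9.6 repository database is called tower (run as user postgres on ansible-db03):

  pg_dump -p 5432 -Fc -f /var/lib/pgsql/tower.dump tower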

Import Data
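The matching pg_restore into the new port-5433 cluster:

  pg_restore -p 5433 -d tower10 /var/lib/pgsql/tower.dump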

Stop and disable existing Postgres 9.6 Database
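Assuming the PGDG unit name again:

  systemctl stop postgresql-9.6
  systemctl disable postgresql-9.6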

4. Re-run Ansible Tower 3.5.1 setup against new database

Adapt Inventory

In this step, the existing Tower setup is registered again against the new database. We have to change the inventory file of the existing 3.5.1 installation.

Example inventory file of the existing 3.5.1 installation with three Ansible Tower servers. Note: the [database] section is empty, so the setup procedure will not try to install a new database and instead uses the one we created above.
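A trimmed sketch – passwords and the RabbitMQ variables that a real 3.5.1 inventory also contains are omitted; the pg_* variable names follow the Tower installer:

  [tower]
  ansible-tower-01
  ansible-tower-02
  ansible-tower-03

  [database]

  [all:vars]
  admin_password='***'
  pg_host='ansible-db03'
  pg_port=5433
  pg_database='tower10'
  pg_username='tower'
  pg_password='***'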

Re-run setup.sh

As user root, execute the setup.sh script on ansible-tower-01.
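For example (the setup directory path is a placeholder):

  cd /path/to/ansible-tower-setup-3.5.1-1
  ./setup.sh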

Verify the setup playbook result – no failed tasks should occur. Ansible Tower 3.5.1 now runs with PostgreSQL version 10 and is ready for the upgrade. Verify that all Tower nodes are running properly and log in to the user interface.

Verification

Log in to the Ansible Tower servers and verify that the version is still 3.5.1.

5. Upgrade to Ansible Tower 3.7.4

The software bundle is transferred to the target server. As I don't have much free space, I moved the bundle to an NFS share which is mounted on all Ansible Tower servers. It's recommended to stop Ansible Tower before starting the upgrade process.

ansible-tower-01 / ansible-tower-02 / ansible-tower-03: stop the tower service:

As user root, go to the install directory and extract the bundle.
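A sketch, with the NFS path as a placeholder:

  cd /mnt/nfs/tower-install
  tar xvzf ansible-tower-setup-bundle-3.7.4-1.tar.gz
  cd ansible-tower-setup-bundle-3.7.4-1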

For the inventory file, the same settings as used in 3.5.1 can be used. The RabbitMQ settings are not required anymore and can be removed. This component is removed from Ansible Tower during the upgrade process. Example inventory file:

Run setup.sh

As user root, execute the setup.sh script on ansible-tower-01.

Verify the setup playbook result – no failed tasks should occur.

Verification

Log in to the Ansible Tower servers and verify that the version is now 3.7.4.

Package verification on Ansible Tower servers:
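For example:

  rpm -qa | grep -i ansible-tower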

Output from the play where RabbitMQ is removed:

6. Upgrade to Ansible Tower 3.8

The software bundle is transferred to the target server. As I don't have much free space, I moved the bundle to an NFS share which is mounted on all Ansible Tower servers. It's recommended to stop Ansible Tower before starting the upgrade process.

ansible-tower-01 / ansible-tower-02 / ansible-tower-03: stop the tower service:

As user root, go to the install directory and extract the bundle.

For the inventory file, the same settings as used in 3.7.4 can be used. Example inventory file:

Run setup.sh

As user root, execute the setup.sh script on ansible-tower-01.

Note: The setup verifies that the ansible RPM version is 2.4 or higher. Only on the node where the installer runs is the ansible package updated. If you want to update the package on the other Tower servers too, you can force the update to 2.9.15 with the parameter upgrade_ansible_with_tower=1. A manual update with rpm -Uvh is possible too; the ansible package is available in the bundle.
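A sketch of the forced package update described above:

  ./setup.sh -e upgrade_ansible_with_tower=1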

Verify the setup playbook result – no failed tasks should occur.

Verification

Log in to the Ansible Tower servers and verify that the version is now 3.8.

Package verification on Ansible Tower servers:

7. Add Red Hat Subscription Manifest

In former versions, a license file was required; this has changed to a subscription manifest. The file is generated in the Red Hat customer portal. After the first login to the 3.8 servers, you have to add the manifest. That's all, folks.

8. I have made fire!

After spending a lot of time figuring out the correct way to upgrade, running a lot of tests, and finally implementing it on the live system, I did it. When version 3.8 showed up, I felt like Tom Hanks in the movie Cast Away – I have made fire!

Summary

Upgrading Ansible Tower with the existing online documentation? No chance! I opened a support case at Red Hat to clarify a lot of things, like changing the repository database, updating the ansible packages, etc. The procedure with setup.sh -b / setup.sh -r as described in the upgrade documentation did not work; a manual data transfer is needed. I really like the method of installing new Ansible Tower versions with a bundle, so no internet connection is required to keep the environment up to date. Hopefully Red Hat will update the documentation in the near future, for example with an upgrade cookbook section or whatever they want to call it.

Oracle OCI Data Transfer Service – A journey from Kestenholz/Jurasüdfuss/Switzerland to Frankfurt and back

The Oracle Data Transfer service is an offline data transfer method to migrate data to the Oracle Cloud Infrastructure. A transfer service is useful when your network bandwidth and connection are not sufficient to upload your migration data in a reasonable time. Oracle offers two methods: disk-based data transfer and appliance-based data transfer. The service is not only one-way; data can also be exported in an Oracle Cloud Infrastructure data center and shipped to your data center.

In line with one of my company Trivadis' cultural values, curiosity, I was wondering how this service works. This is the story of a tiny USB hard disk drive full of data, which went on a long journey from Kestenholz / Jurasüdfuss / Solothurn / Switzerland to the Oracle Cloud Infrastructure data center in Frankfurt and back.

Setup

  • The OCI Data Transfer utility is Linux based, the USB 3.0 HDD is attached to a VMware Virtual machine where Oracle Linux is running
  • The virtual machine has access to the Internet
  • Data is available – for this example I used some open data from the Swiss government (opendata.swiss)

Data Transfer Service Regions

Currently, data transfer is available in Frankfurt, Ashburn, Phoenix, London and Osaka. From Switzerland, Frankfurt is the nearest location.

How your data gets into the Oracle Cloud

  1. Enable Data Transfer Service – Data Transfer
  2. Prepare an Object Storage bucket
  3. Create a Transfer Job
  4. Attach a HDD to a Linux based host, use the Data Transfer Utility to create and encrypt the device
  5. Copy data to the HDD
  6. Create and upload the disk manifest
  7. Lock the disk
  8. Attach the Disk to the transfer label and create the package
  9. Shipping and update shipping information
  10. Tracking
  11. Data Processing
  12. Data Verification
  13. Object Storage Replication Policy (optional)
  14. Finally…

Note: Most of the steps above can also be done with the OCI CLI on the command line and are very well described in the Oracle documentation.

1. Enable Data Transfer Service – Entitlement

Before you can use this service, the Data Transfer service has to be enabled in general; you have to request it. The OCI tenant administrator receives a document that has to be signed digitally. It contains, for example, a description of how to bring data to OCI and, if you order an appliance, a 45-day maximum limit within which the appliance has to be returned to Oracle. A few days later, the service is ready to use. You now also have the permission to order a Data Transfer Appliance, but in this test I used the disk service.

2. Prepare an Object Storage Bucket

In the Frankfurt region, I created a new Object Storage bucket called data_transfer_usb. This is the bucket into which the shipped data will be transferred.

3. Create a Transfer Job

In Object Storage – Data Transfer Import, we create a new transfer job. It contains the upload bucket from above, and the transfer device type is disk. For further processing, we need the OCID of the job. As you can see, there is currently no transfer disk attached.

4. Attach a HDD to a Linux based host, use the Data Transfer Utility to create and encrypt the Device

Prerequisites for the Data Transfer Utility according the documentation:

  • An OCI account which has the IAM permissions for Data Transfer
  • A Linux machine with Oracle Linux 6 or greater, Ubuntu 14.04 or greater, SUSE 11 or greater
  • Java 1.8 or 11
  • hdparm 9.0 or later
  • Cryptsetup 1.2.0 or later

Package Installation for my Oracle Linux 7 Machine
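A sketch for Oracle Linux 7, assuming the standard repository package names:

  yum install -y java-1.8.0-openjdk hdparm cryptsetup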

Download and Installation of the Data Transfer Utility

The current link to the file is in the online documentation.

Test

Configure IAM Credentials for Data Transfer Actions

The configuration is analogous to configuring the Oracle Cloud Infrastructure CLI, with user, fingerprint, key_file, tenancy and region. Example configuration file:
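A minimal sketch (OCIDs, fingerprint and paths are placeholders):

  [DEFAULT]
  user=ocid1.user.oc1..aaaa...
  fingerprint=11:22:33:44:...
  key_file=/home/opc/.oci/oci_api_key.pem
  tenancy=ocid1.tenancy.oc1..aaaa...
  region=eu-frankfurt-1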

Verify Credentials

Show Data Transfer Job Details – Status is PREPARED

Here you can see the shipping address of the Oracle Cloud Infrastructure data center Frankfurt and the label. Both pieces of information are used later in the process.

Prepare USB Hard Disk Drive

The disk is attached as /dev/sdb – it is a Western Digital drive. Important: The disk needs no partition.

Create Transfer Disk for Data Copy

This command will set up the disk and mount it immediately. As additional information, we need the disk label for further processing.

Mount point is /mnt/orcdts_DAMOED7GH.

The Transfer Disk status has changed to PREPARING and the disk serial number is registered now.

5. Copy Data to HDD

For the test run I have copied some Open Data stuff, an Oracle Backup and Oracle Data Pump export files to the disk.
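For example (the source paths are placeholders):

  rsync -av /data/opendata /data/rman_backup /data/datapump /mnt/orcdts_DAMOED7GH/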

6. Generate Manifest File

This generates a local file which contains a list of the files and their MD5 checksums, like an inventory file. The disk label is required here.

The file:

7. Lock the Disk

8. Attach the Disk to the Transfer Label and create the Package

The status now changes to ACTIVE.

9. Shipping and Shipping Information Update

As shipping company I used DHL Switzerland; they have a pick-up point nearby in Langenthal. At this point, it's important to organize the return shipping too and put the return shipping label in the box. I didn't realize this and forgot to organize the return shipping, so the disk was stranded in the Frankfurt data center. And then the story began: DHL and UPS don't allow private persons to re-import packages from outside Switzerland without a customer number – but private persons don't get such a number. Finally, with FedEx I was able to organize the return shipping. Thanks to Andrew and Christos from Oracle's OCI Data Transfer team for their patience!

Note: Companies like DHL have templates to create pro-forma commercial invoices – https://www.dhl.ch/exp-de/express/zollabwicklung/zollpapiere/proforma_rechnung.html#invoice

This disk was sent to the Oracle Cloud Infrastructure data center Frankfurt.

Now the shipping information has to be updated with vendor and the tracking numbers.

10. Tracking

DHL required two days until delivery in Frankfurt. Oracle started one day later with the data import.

11. Data Processing

When Oracle attaches the disk and starts uploading the data, the transfer job status changes to PROCESSING.

12. Data Verification

Finally, the data has arrived in the Oracle Cloud Infrastructure Object Storage and is ready for use. The file processing is logged in the newly created file upload_summary.txt.

13. Object Storage Replication Policy (optional)

The files are now in the Frankfurt data center, but I want to have them in the Swiss data center region Zurich. Therefore I set a replication policy at the Object Storage level. In Zurich, a new bucket called data_transfer_usb_from_FRA is created. A few minutes later, the files were available in the Object Storage in Zurich. Sure, it depends on the file size 😉

Finally…

Detach the transfer disk so the data center guys can send it back to you.

And after a few days…welcome FedEx in Kestenholz / Jurasüdfuss / Solothurn / Switzerland!

Some words about Shipping and Costs

Shipping costs from DHL and FedEx:

Vendor | From                     | To         | Costs
DHL    | Langenthal / Switzerland | Frankfurt  | 79.50 CHF
FedEx  | Frankfurt                | Kestenholz | 130.45 CHF

Links

Summary

Watching nice marketing slides and documents about cool features is not enough; to find out how something works in the real world, a real test is required. How to migrate data into a data center of any cloud provider should be basic know-how for every consultant working with and on cloud topics. Moving data by a disk or an appliance opens a lot of possibilities for data migrations into the cloud. For example, a huge DWH: transfer the RMAN backup into the cloud, restore it, close the gap with an incremental backup and synchronize it with Oracle GoldenGate. #ilike

Oracle Cloud Infrastructure Classic Object Storage – Cleanup Day with FTM CLI

Yesterday I decided to clean up old Oracle Cloud Infrastructure Classic objects. There were a lot of files lying around in the Object Storage of a project from 2018. Cleaning up these files in the OCI console was no option – they can only be deleted one by one, and with over 1500 files, that's a bad idea. During the search for an Object Storage mass-deletion option, I found this tool: ftmcli – Object Storage Classic File Transfer Manager. The MOS note contains the script and a short manual on how to use it. It's a Java-based script, a perfect match for my Windows Subsystem for Linux (Ubuntu), which I often use for OCI actions.

OAC-Classic : How To Delete A Storage Container That has Multiple Objects. (Doc ID 2634021.1)

Link to the User Guide: https://docs.oracle.com/en/cloud/iaas-classic/storage-cloud/csclr/preparing-use-ftm-cli.html#GUID-5BB8647F-DDAD-4371-A519-1116402245FB

The Container List

Here is the content of my Object Storage Classic that I have to clean up – it contains five containers.

ftmcli – Installation

Via the WSL mount point under /mnt, where your local Windows disk drives are accessible, transfer the package to the home directory and extract it.
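A sketch, with the download file name as a placeholder:

  cp /mnt/c/Users/<user>/Downloads/ftmcli.zip ~
  cd ~ && unzip ftmcli.zip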

In the extracted subdirectory is a file called ftmcli.properties – there you have to set your OCI Object Storage information. Two parameters are used:

  • user=<OCI login>
  • rest-endpoint=<your Storage Classic Endpoint>

The Storage Classic Endpoint is visible in the Storage Classic Account tab. There is no need to set the password in the properties file. You are prompted for it when ftmcli is executed.
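A minimal sketch of ftmcli.properties (the endpoint format depends on your account):

  user=firstname.lastname@example.com
  rest-endpoint=https://<identity-domain>.storage.oraclecloud.com/v1/Storage-<identity-domain>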

ftmcli – Commands

Available commands – output from ftmcli:

upload Upload a file or a directory to a container.
download Download an object or a virtual directory from a container.
create-container Create a container.
restore Restore an object from an Archive container.
list List containers in the account or objects in a container.
delete Delete a container in the account or an object in a container.
describe Describes the attributes of a container in the account or an object in a container.
set Set the metadata attribute(s) of a container in the account or an object in a container.
set-crp Set a replication policy for a container.
copy Copy an object to a destination container.

 

ftmcli – List Object Storage Containers
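A minimal invocation, assuming you are in the extracted directory:

  java -jar ftmcli.jar list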

Here you will be prompted for your OCI password. Curious thing: my existing OCI password, which I had used for months, contained a lot of special characters (,LCaOQ3|~[PT”+x), and it was not accepted. I had to set a new OCI account password with less complexity and fewer special characters, and then it worked.

ftmcli – Delete Object Storage Containers

With the -f flag, the deletion of a container which contains objects can be forced. I did it for all my containers.
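For example, per container:

  java -jar ftmcli.jar delete -f <container-name>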

And finally all containers are removed. Job successfully done!

Summary

This is what I like about Oracle Cloud Infrastructure: for a lot of problems and technical questions there are a lot of scripts and tools available, like this one: ftmcli. But sometimes it's not so easy to find them.

MV2OCI – One-Click Move of your Data into Oracle Cloud Infrastructure Database

mv2oci is a tool which helps to migrate on-premise data to the Oracle Cloud Infrastructure. It is based on Oracle Data Pump and works as a data load tool. The local Data Pump export is transferred to and imported on the target cloud server automatically. There is no use of Oracle Cloud Object Storage; the dump files are transferred with rsync or scp to the target database node. This is the difference to mv2adb – see my blog post here – which uses the Object Storage. As an option, the data can be transferred via database link (mv2oci parameter --netlink).

Everything you need to know about mv2oci is written in the My Oracle Support note (OCI) MV2OCI: move data to Oracle Cloud Database in “one-click” (Doc ID 2514026.1). The newest version of the rpm package can be downloaded there. The package has to be installed on the source server.

Prerequisites

  • SQL*Net connection between the two databases
  • A Java executable – in my case I have installed jre (yum install jre)
  • Verify that the firewall to the VCN subnet is open for port 1521 – port 22 is open by default
  • Password of database user SYSTEM

The Use Case

Let’s move the database schema SOE from my on-premise Oracle Linux server into the cloud step by step. An Oracle Cloud Infrastructure database instance is already up and running, and the target tablespace is created. The data centers are connected by VPN.

 

Database Information

                 | Source                  | Target
CDB Name         | CDB118                  | CDB118
PDB Name         | pdb11801                | pdboci
Hostname         | heckenweg               | srv-cdb118
IP Address       | 192.168.1.184           | 172.16.0.8
PDB Service Name | pdb11801.kestenholz.net | pdboci.subnetvcnmohnwe.vcnmohnwegvpn.oraclevcn.com

1. Package Installation

Download and transfer the package to the on-premise server, for example in directory /tmp. As user root, install the package.
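For example (the exact file name depends on the downloaded version):

  yum localinstall /tmp/mv2oci-*.rpm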

Verify that the SSH private key used for the connection to the Oracle Cloud Infrastructure server is available and that the connection works. Here, the OCI SSH key is available in $HOME/.ssh.
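A quick test – the user opc is an assumption, the key name is taken from the configuration below:

  ssh -i /home/oracle/.ssh/id_rsa_oci_29012020 opc@172.16.0.8 hostname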

2. Encrypt the SYSTEM passwords for both databases – mv2oci encpass
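A sketch; the binary location is an assumption, and the tool prompts for the password to encrypt:

  /opt/mv2oci/mv2oci encpass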

3. Configuration File

A template of the configuration file is located in /opt/mv2oci/oci. I used the following parameters – other parameters, like ICHOME for the Instant Client configuration, are well described there.

Source DB Parameters

Parameter          | Value
DB_CONSTRING       | //heckenweg/pdb11801.kestenholz.net
SYSTEM_DB_PASSWORD | Encrypted SYSTEM password
SCHEMAS            | SOE
DUMP_FILES         | /tmp/exp_soe_18102020_01.dmp, /tmp/exp_soe_18102020_02.dmp
OHOME              | /u01/app/oracle/product/19.0.0/dbhome_1

Expdp/Impdp Parameters

Parameter | Value
Dump Name | exp_soe_18102020.dmp
DUMP_PATH | /tmp
PARALLEL  | 2 – creates two dump files called exp_soe_18102020_01.dmp and exp_soe_18102020_02.dmp

OCI Parameters

Parameter       | Value
OC_HOST         | 172.16.0.8
OC_SSHKEY       | /home/oracle/.ssh/id_rsa_oci_29012020
OC_DB_CONSTRING | //172.16.0.8/pdboci.subnetvcnmohnwe.vcnmohnwegvpn.oraclevcn.com
OC_DB_PASSWORD  | Encrypted SYSTEM password
OC_DUMP_LOC     | /tmp

 

4. Export Data – mv2oci expdp
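A sketch – the -conf flag and the configuration file path are assumptions modeled on the sibling tool mv2adb:

  /opt/mv2oci/mv2oci expdp -conf /opt/mv2oci/oci/mv2oci.conf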

The dump files are created in /tmp.

5. Transfer Data – mv2oci putdump
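Under the same assumptions:

  /opt/mv2oci/mv2oci putdump -conf /opt/mv2oci/oci/mv2oci.conf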

The files are now available on the target server.

6. Import Data

Tablespace SOEDATA exists on the target server; otherwise you can use the EXTRA_IMPDP parameters in the mv2oci configuration file to do a remapping, etc.
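The import itself presumably runs via the matching subcommand (the name impdp is an assumption based on the export step):

  /opt/mv2oci/mv2oci impdp -conf /opt/mv2oci/oci/mv2oci.conf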

Analysis of the error in SQL Developer – there is a missing execute permission on package DBMS_LOCK.

This is an easy thing:
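A sketch of the grant, run as a privileged user on the target PDB:

  -- in SQL*Plus or SQL Developer, e.g. as SYSTEM
  GRANT EXECUTE ON SYS.DBMS_LOCK TO SOE;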

7. Reporting – mv2oci report

The report compares the objects on source and target database.

8. All in One – mv2oci auto

We did the steps one by one; by using the parameter auto, the steps above are done automatically (except reporting).
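Under the same flag assumption as above:

  /opt/mv2oci/mv2oci auto -conf /opt/mv2oci/oci/mv2oci.conf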

9. Logfiles

Logfiles from the mv2oci actions are located in:

  • mv2oci: /opt/mv2oci/out/log
  • Data Pump: the directory set in parameter DUMP_PATH

Summary

mv2oci is another great tool to support the movement to Oracle Cloud Infrastructure. Easy to configure, easy to use. #ilike

Oracle Release Update 19.9 – Lab Update Time (Grid Infrastructure Standalone & RDBMS)

The Oracle Release Update 19.9 for Linux has been available for a few days. Time to upgrade my lab environment at home, which consists of the following components:

  • Oracle Grid Infrastructure Standalone 19.8.0 with ASM Normal Redundancy – +ASM
  • Oracle 19.8.0 RDBMS as Repository for Oracle Enterprise Manager – EMREPO

The running 19.8 Environment

Output from the Trivadis base environment tool TVD-Basenv(TM).

Patch Download, Transfer and Extract

I have downloaded the combo which contains the RUs for Grid Infrastructure and the Oracle Java Virtual Machine.

  • COMBO OF OJVM RU COMPONENT 19.9.0.0.201020 + GI RU 19.9.0.0.201020 (Patch 31720429)
    • OJVM RELEASE UPDATE 19.9.0.0.0 (Patch 31668882)
    • GI RELEASE UPDATE 19.9.0.0.0 (Patch 31750108)

The local stage directory with the extracted files:

OPatch

OPatch in Grid Infrastructure home directory has to be version 12.2.0.1.19 or later.
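A quick check in each home:

  $ORACLE_HOME/OPatch/opatch version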

Version Verification

+ASM – Grid Infrastructure Standalone

EMREPO – RDBMS

CheckConflictAgainstOHWithDetail
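A sketch of the check, run separately in the GI and RDBMS homes; the stage path and sub-patch directory are placeholders:

  $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <stage>/31750108/<sub-patch>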

+ASM – Grid Infrastructure Standalone

EMREPO – RDBMS

Take care: this line produces an error:

According to the My Oracle Support note opatch CheckSystemSpace Command For Grid Infrastructure RU Fails With: “This command doesn’t support System Patch” (Doc ID 2634165.1), this error can be ignored and the line can be removed in future patch apply actions.

CheckSystemSpace

To check for space, we create two files which contain the patch directories. The checks have to be successful.
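A sketch – the file lists the patch directories, one per line:

  $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt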

+ASM – Grid Infrastructure Standalone

EMREPO – RDBMS

Release Update Apply

As a first action I stop the OEM. Then I run opatchauto as user root. Grid Infrastructure and RDBMS components are stopped, started and patched automatically one by one. Here is the full output of the patch apply, where you can see the executed steps. In my lab environment, it took about 20 minutes.
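The invocation itself, as a sketch (GI home and stage path are placeholders):

  # as root
  <GI_HOME>/OPatch/opatchauto apply <stage>/31750108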

Version Verification

+ASM – Grid Infrastructure Standalone

EMREPO – RDBMS

As you can see, the components were updated successfully. Time to start Oracle Enterprise Manager 13c Release 4 Update.

OJVM Apply

At the end, the OJVM patch has to be applied. Set ORACLE_SID and ORACLE_HOME according to the RDBMS environment.
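For example (the RDBMS home path is an assumption):

  export ORACLE_SID=EMREPO
  export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
  export PATH=$ORACLE_HOME/bin:$PATH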

Stop the RDBMS with srvctl
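Assuming the database unique name matches the SID:

  srvctl stop database -db EMREPO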

Change to OJVM Patch Directory
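With the stage path as a placeholder:

  cd <stage>/31668882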

Apply the OJVM Patch
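From within the patch directory:

  $ORACLE_HOME/OPatch/opatch apply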

Startup Upgrade – Container Database and Pluggable Database – in SQL*Plus
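A sketch of the SQL*Plus session:

  sqlplus / as sysdba
  SQL> STARTUP UPGRADE;
  SQL> ALTER PLUGGABLE DATABASE ALL OPEN UPGRADE;
  SQL> EXIT;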

Run datapatch
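Datapatch runs from the OPatch directory:

  cd $ORACLE_HOME/OPatch
  ./datapatch -verbose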

Shutdown Database in SQL*Plus
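For example:

  sqlplus / as sysdba
  SQL> SHUTDOWN IMMEDIATE;
  SQL> EXIT;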

Start the RDBMS with srvctl
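And start it again:

  srvctl start database -db EMREPO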

Version Verification

EMREPO – RDBMS

Summary

There were no issues. Ok, it’s just a GI Standalone environment. But this was really a pleasure.