EM12c Cloud Control

Enterprise Manager 13c – Let’s use the Hybrid Agent for Amazon EC2 and Azure Instances

I like the concept behind the Oracle Enterprise Manager Hybrid Cloud Architecture for connecting my on-premise OMS with targets in the Oracle Cloud. The agent communicates with the target servers through an SSH tunnel, so no port other than SSH 22 is open to the internet. And I was interested to find out whether the installation of such an agent also works for cloud providers other than Oracle.

Create an Oracle Linux Instance in Amazon AWS

I have created a small Oracle Linux instance in Amazon AWS and inserted its public IP into the /etc/hosts file of the Oracle Management Server. Why did I use Oracle Linux? According to the documentation, at the moment only Oracle Linux x86-64 is supported for this hybrid feature.

On the Amazon instance, I installed the 12c prerequisite package (yum install oracle-rdbms-server-12cR1-preinstall) to be sure that the required libraries etc. are available and the user oracle is created. And finally, I added the public key to the authorized_keys file of the user oracle, so that SSH connections without a password are possible.
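The key setup on the AWS instance can be sketched like this (the public key string is a placeholder; run it as the oracle user):

```shell
# Sketch of the authorized_keys setup, run as the oracle user on the AWS
# instance. The key string below is a placeholder for your real public key.
pubkey="ssh-rsa AAAAB3NzaC1yc2E...replace-me oms-deploy-key"
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
echo "$pubkey" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```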


Hint: Test your passwordless SSH connection with a tool like PuTTY or MobaXterm.

Login Credential

For the login via the SSH tunnel, a named credential has to be created in Enterprise Manager 13c with the SSH keys which were used by the Amazon instance. This credential is used later in the agent deployment process. For further information on how to create such a credential, please take a look into the Hybrid Cloud documentation https://docs.oracle.com/cd/E24628_01/doc.121/e24473/hybrid-cloud.htm#BABJACHI.

Hybrid Agent Deployment – First Run

After setting the DNS information and SSH configuration, it’s time to start a Hybrid Agent deployment. EM13c – Setup – Add Target – Install Agent on Host. The most important thing is that the checkbox for the Hybrid Cloud Agent is enabled at the bottom of the browser window.

Host Name does not map to an Oracle Public Cloud Virtual Host – Investigation

The deployment has started. But in the prerequisites phase the remote validation fails with this message: The provided host name does not map to an Oracle Public Cloud virtual host. You can deploy Hybrid Cloud Agents only on Oracle Public Cloud virtual hosts.

It looks like the prerequisites check verifies the hostname. In the deployment log file from the Oracle Management Server, I found these lines:

Oracle runs a simple hostname -d command to verify whether the host is running in the Oracle Cloud. I verified the hostname -d command on an Oracle Cloud instance, and the output there is different:

But my Amazon instance has this output here:

Let's fake the Amazon instance hostname and try the deployment again. I added this line with the new domain name to the /etc/hosts file of the Amazon instance.

Now the hostname -d command shows the new name, matching the Oracle Cloud instances.
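What hostname -d reports is simply everything after the first dot of the fully qualified host name, so the fake boils down to a string trick. A small sketch of the idea (the Oracle-Cloud-style domain below is made up for illustration; real Oracle Cloud instances use their own domains):

```shell
# fqdn mimics the faked /etc/hosts entry; the domain is a made-up example
fqdn="awshost.compute-demo.oraclecloud.internal"
# hostname -d effectively strips the short host name from the FQDN
domain="${fqdn#*.}"
echo "$domain"   # prints compute-demo.oraclecloud.internal
```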

Hybrid Agent Deployment – Second Run

EM13c – Retry, using the same inputs. And the error is gone; the agent is installed successfully. After running the scripts

  • /u00/app/oracle/product/agent13c/agent_13.
  • /home/oracle/oraInventory/orainstRoot.sh

as user root, the Amazon instance is added as a new host.



With a simple change to the hostname -d output, you can install the Oracle Hybrid Agent on targets outside the Oracle Cloud. By the way, this works for local instances too. All ports other than SSH 22 stay closed. And that's an important thing when you work with cloud products.

Oracle Enterprise Manager 13c – KILL SESSION for Application Administrators – Part 1

Basically, to execute an ALTER SYSTEM KILL SESSION command you have to a) be a DBA or b) hold the ALTER SYSTEM privilege. Granting the ALTER SYSTEM privilege to a non-DBA carries big risks: such a user is now able to change a lot of parameters, like memory parameters, NLS settings etc.

In one of my projects, a small team of well-known application administrators has a read-only account in Enterprise Manager 12c to check the performance, view the user sessions and more on their subset of databases. And sometimes, they have to kill a hanging Oracle session. Until now, they called the DBA: "Please do it for me." Sure, we could build a small PL/SQL procedure on every database and give them the execution rights so they can kill a session in their terminal themselves. But this is not very user friendly.

Here is an approach to walk the narrow path between security and manageability. I am aware that this is – as we say in Switzerland – a "Kompromiss". But in fact, we implemented this solution in a production environment two months ago without any negative impact.

Note: All the steps which are shown below in Enterprise Manager 13c can be executed in 12c too.

The Concept

  • we create a new database user in the target databases
  • we create a new role with ALTER SYSTEM privilege in the target databases
  • we enable auditing for ALTER SYSTEM commands in the target databases
  • we create a new Enterprise Manager role for the application administrators
  • we create a new named credential with the new user and grant it to the application administrators
  • we build an Enterprise Manager report which shows us the ALTER SYSTEM actions based on a metric extension

The New Database User

This user has to be created in every target database.
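A minimal sketch of such a user (the password and tablespace settings are placeholders; adjust them to your standards):

```shell
# Write the statements to a script and run it in every target database,
# e.g.:  sqlplus / as sysdba @create_appl_admin.sql
cat <<'SQL' > create_appl_admin.sql
-- placeholder password, change it before running
CREATE USER appl_admin IDENTIFIED BY "ChangeMe#2016"
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;
GRANT CREATE SESSION TO appl_admin;
SQL
```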

The New Database Role

This role has to be created in every target database.

Grant role to the user:
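A sketch of both steps, the role creation and the grant (the role name APPL_KILL_ROLE is made up for illustration):

```shell
# Run this script in every target database, e.g. via sqlplus / as sysdba
cat <<'SQL' > create_appl_admin_role.sql
-- hypothetical role name; ALTER SYSTEM is the privilege needed
-- for ALTER SYSTEM KILL SESSION
CREATE ROLE appl_kill_role;
GRANT ALTER SYSTEM TO appl_kill_role;
GRANT appl_kill_role TO appl_admin;
SQL
```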

Enable Auditing for ALTER SYSTEM Commands


Verify the enabled audit settings:

Verify the audit parameter in the target database. If audit_trail is not set to EXTENDED, the SQL command which was executed is not recorded. How it works with Unified Auditing will be verified in a later blog post.
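The auditing part as a sketch (traditional auditing; the verification query against DBA_PRIV_AUDIT_OPTS is my assumption of how to check the setting):

```shell
# Run this script in every target database, e.g. via sqlplus / as sysdba
cat <<'SQL' > audit_alter_system.sql
-- audit the ALTER SYSTEM privilege for the new user
AUDIT ALTER SYSTEM BY appl_admin BY ACCESS;
-- verify the enabled audit setting
SELECT user_name, privilege, success, failure
  FROM dba_priv_audit_opts
 WHERE privilege = 'ALTER SYSTEM';
-- audit_trail must include EXTENDED so the SQL text is recorded
-- (instance restart required):
-- ALTER SYSTEM SET audit_trail=db,extended SCOPE=spfile;
SQL
```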

In the example below you can see the difference in the column SQL_TEXT.

Enterprise Manager Role

Setup – Security – Role – Create


Set name and description – Next

Activate the checkbox for the privilege Connect to any viewable target and scroll down


Add database targets and set the Manage Targets Privilege Grants

  • For EM12c use: View
  • For EM13c use: Manage Database Sessions

Select the user to grant the role – Next

The role is now created and granted to the user.


Enterprise Manager Named Credential

The application administrators don't have to know the password of the created user with the ALTER SYSTEM privilege. We create a named credential and give the admins permission to use it.

Setup – Security – Named Credentials – Create

Set Credential Name, Authenticated Target Type, Credential Type, and set the Scope to Global. The Credential Properties refer to our newly created user APPL_ADMIN.

Scroll down to set Access Control


Add Grant


Search for the user, in my case it is APPL_BERGER

Feel free to test it against a target which contains the APPL_ADMIN user.

Now we test the configuration. The role APPL_ADMIN was granted to my user APPL_BERGER. On the target database TVD12, user SCOTT has locked some data.
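To reproduce such a blocking situation, something like this classic two-session lock on SCOTT's EMP table can be used (demo schema only; leave session 1 uncommitted):

```shell
# Sketch of the lock demo; run the statements in two separate SCOTT sessions
cat <<'SQL' > lock_demo.sql
-- session 1 (SCOTT): take a row lock and do NOT commit
UPDATE emp SET sal = sal WHERE empno = 7839;
-- session 2 (SCOTT): the same update now blocks on the row lock
-- UPDATE emp SET sal = sal WHERE empno = 7839;
SQL
```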


On the target side, we go to the Blocking Sessions page


The Named Credential is already filled in.

Select the session – Kill Session


Confirm the action to kill the selected session immediately – Yes


Session has been killed.

On the target database, an audit record was generated. Log in as user SYS and execute this query. You can see the TIME, the USERNAME, the EM13c user name in the CLIENT_ID column, and the SQL statement which was executed in the background.
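The query could look like this sketch (DBA_AUDIT_TRAIL columns TIMESTAMP, USERNAME, CLIENT_ID and SQL_TEXT; the filter is an assumption, adapt it to your environment):

```shell
# Run as SYS on the target database, e.g. via sqlplus / as sysdba
cat <<'SQL' > check_kill_audit.sql
-- show who killed which session, including the EM user in CLIENT_ID
SELECT timestamp, username, client_id, sql_text
  FROM dba_audit_trail
 WHERE username = 'APPL_ADMIN'
   AND sql_text LIKE '%KILL SESSION%'
 ORDER BY timestamp;
SQL
```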

Summary – Part 1

As I said in the introduction, giving users other than DBAs the ALTER SYSTEM privilege is risky. But when the DBAs and application administrators work as a team, this can be a possible solution to make their daily business easier.

In the next blog post I will show how you can create an Enterprise Manager report based on a Metric Extension to produce daily reports of the ALTER SYSTEM actions.

EM12c Agent – java.lang.OutOfMemoryError: Java heap space

After an AIX 7.1 server reboot, there was one agent which did not start. The command emctl start agent resulted in a java.lang.OutOfMemoryError: Java heap space message. In the agent subdirectory, some dump files were created:

First I took a look in the agent logfiles to gather more information.


There are a lot of MOS notes available for the search term peer not authenticated, but none of them were helpful.


Here we see that the EM12c agent had a JVM memory problem at startup. Let’s try out the emctl clearstate agent command.

emctl clearstate agent

From the Oracle documentation about the clearstate command – http://docs.oracle.com/cd/E29597_01/doc.1111/e24473/emctl.htm#r21c1-t27

emctl clearstate Clears the state directory contents. The files that are located under $ORACLE_HOME/sysman/emd/state will be deleted if this command is run. The state files are the files which are ready for the agent to convert them into corresponding xml files.

The emctl clearstate agent command produced the same error:

After some research in My Oracle Support for OutOfMemoryError: Java heap space, I found this note:

Duplicate 1952593.1 – EM12c: emctl start agent Fails With ‘ Target Interaction Manager failed at Startup java.lang.OutOfMemoryError: Java heap space’ reported in gcagent_errors.log (Doc ID 1902124.1)

From the MOS note:

  1. Kill any leftover processes -> not applicable for me, the agent was not started at all
  2. Move old files from /agent_inst/sysman/emd/state/* to a new directory -> sounds very interesting
  3. Execute the clearstate command -> let's try it out

The directory /u00/app/oracle/product/agent12c/agent_inst/sysman/emd/state/

There was a lot of stuff in this directory – time to move:
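The move itself can be sketched like this (on a real host, point AGENT_INST at your agent instance home; the demo default keeps the sketch runnable anywhere):

```shell
# On a real host: AGENT_INST=/u00/app/oracle/product/agent12c/agent_inst
AGENT_INST=${AGENT_INST:-/tmp/demo_agent_inst}
STATE_DIR="$AGENT_INST/sysman/emd/state"
BACKUP_DIR="$AGENT_INST/sysman/emd/state.old"
mkdir -p "$STATE_DIR" "$BACKUP_DIR"
# move everything out of the state directory, hidden files included
find "$STATE_DIR" -mindepth 1 -maxdepth 1 -exec mv {} "$BACKUP_DIR" \;
# afterwards: emctl clearstate agent && emctl start agent
```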

emctl clearstate agent – 2nd run

Now the clearstate command has worked without errors.

emctl start agent

The start was successful – fine.


The cleanup of the state directory solved the problem. But why does an emctl clearstate agent command not clean up the state directory as described in the documentation? Aah, and don't forget to clean up the moved files. They are not needed anymore.

Oracle Enterprise Manager 12cR5 – Clone a PDB to the Oracle Cloud

Cloning an on-premise PDB to the Oracle Cloud is basically done in three steps: 1. Select your PDB – 2. Start the Clone Menu – 3. Verify the new PDB. In the basic clone menu, Oracle uses some default settings in the background; for example, the source datafiles are packed into a compressed file located in the /tmp directory of the source server. If your /tmp directory is too small, the clone process fails. Why is this directory used? I don't know. But it could be a problem when you clone bigger databases. Maybe we can edit the clone procedure, but that is a task for a day when it's raining outside… All cloning steps are well monitored; in case of an error you see a detailed error message, so you can continue from the step where the error occurred.

To clone a PDB, there are some prerequisites which have to be fulfilled, like the same character set, endianness and options. Don't worry: if your source database has a wrong character set, the clone procedure will be stopped.
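A quick sketch of how the source side can be checked up front (queries against standard views; the script name is made up):

```shell
# Run in the source database, e.g. via sqlplus / as sysdba
cat <<'SQL' > clone_prereq_check.sql
-- character set of the source database
SELECT value FROM nls_database_parameters
 WHERE parameter = 'NLS_CHARACTERSET';
-- platform (and therefore endianness) of the source database
SELECT platform_name FROM v$database;
SQL
```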

The source PDB

My source PDB is a small database called CRM02 located in a Trivadis Datacenter – the container database is called TVD12CDB. I want to clone the local PDB to the Cloud database TVDHIPER.


Start the Clone Procedure

In the menu Oracle Database – Cloning, select "Clone to Oracle Cloud".


Source and Destination Information

Enter the source and destination information like credentials, passwords etc. As destination Database Host Credentials, select the SSH credential. This credential will be used to copy the files to the new server. In the top right corner you have the buttons Advanced and Clone. Clone starts the process with default settings immediately; if you select Advanced, you can set the destination directory, schedule the execution etc. I select Clone.


The Process starts immediately

This is the main menu where all the steps are monitored. You see all the details which are executed in the background. It’s an excellent overview.


In one of the first steps, Oracle creates the PDB XML manifest and puts all datafiles into a compressed tar.gz file. As I said at the beginning of this blog post, the directory is /tmp…

The tar.gz file is transferred to the new target and deleted after the transfer. But you have to clean up the datafiles manually.

What happens on Target Server

The tar.gz file is extracted into a new folder and then the database is plugged in. Here is the tar.gz file on the target server. After the successful plug-in, the files are deleted at this location. Oracle uses OMF (Oracle Managed Files); the extracted files are then located in this directory structure.

If the plug-in action fails, you can see it immediately in EM12c. During my clone process, an error occurred when Oracle wanted to apply a patch to the new PDB.


In this case, the plugged-in PDB has a wrong state. Oracle was not able to apply the datapatch because the pluggable database was not in UPGRADE state.


As a workaround, I logged in as SYSDBA to the target database in the Oracle Cloud and changed the mode manually.
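A sketch of that manual change (the PDB name is the one from my clone, yours will differ):

```shell
# Run as SYSDBA in the cloud container database, e.g. sqlplus / as sysdba
cat <<'SQL' > open_pdb_upgrade.sql
-- close the PDB and reopen it in UPGRADE mode so the patch step can run
ALTER PLUGGABLE DATABASE crm02_cl CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE crm02_cl OPEN UPGRADE;
SQL
```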

Then I restarted the procedure from the position where the error occurred.


Welcome to the cloned PDB

After some minutes, all steps are done, no errors. The pluggable database is now cloned and up and running. The task has the status Succeeded.


EM12c Integration

EM12c target discovery is not required; the new pluggable database CRM02_CL is added automatically as a new target.



First I wanted to try to clone a PDB from one cloud database to another cloud database, but this is not implemented at the moment. So I tried cloning an on-premise database to the cloud. I have tested the basic functionality with a small database, and it works fine. There are some open questions, like why Oracle writes files to /tmp, or why the plugged-in PDB has the wrong state for applying the patches. But in general, when you fulfill all the prerequisites to clone a PDB, like the character set etc., as you have to on the conventional way too, it runs. One of my next tests will be how it works when the source PDB's version does not match the target. But I assume it works as designed: there will be a message in the PDB_PLUG_IN_VIOLATIONS view and I will have to solve it manually. Or do you think Oracle will upgrade my PDB automatically? We will see… 🙂

Oracle Enterprise Manager 12cR5 – Let’s connect to the Oracle Cloud

Since EM12c Release 5, Oracle has integrated the connection to the Oracle Cloud; databases can be monitored and handled very easily, and on-premise databases and cloud databases are now managed in one tool. Adding a cloud database from cloud.oracle.com to an on-premise EM12c is very easy and done in a few steps:

  • Create an SSH key
  • Create a DBaaS instance on cloud.oracle.com
  • Configure a local Linux agent as Hybrid Cloud Gateway Agent
  • Create an EM12c credential for SSH login
  • Install EM12c Agent via push method on the DBaaS machine
  • Discover and add the new targets
  • Enjoy

A few documents are available that describe how this setup has to be done. But the best way to get some experience is just to do it. We do not start from scratch with the EM12c integration; I have already created a new DBaaS instance which is up and running:


Configure a local Linux Agent as Hybrid Cloud Gateway Agent

A local agent has to be configured to act as a gateway to the Oracle Cloud. This has to be an existing agent. I decided to use the local OMS agent as the Hybrid Cloud Gateway agent. The concept behind the Hybrid Cloud Gateway and its configuration are well described here, including how to configure it for high availability, the ports used, etc.


Create an EM12c Credential for SSH Login

The credential contains a username and the private SSH key for the passwordless connection to the new DBaaS host. The scope is Global, so all new cloud connections can use it. You can create the credential in the menu Setup – Security – Named Credentials. A test run is not possible because no other cloud targets are configured at the moment.


Install EM12c Agent via Push Method on the DBaaS Machine

The installation of the agent is simple, it’s almost like local installations. The only two changes are a) to use the SSH credential and b) to choose the Hybrid Cloud Gateway Agent.

Setup – Add Target – Add Targets Manually – Add Host Targets – Add Host …

Information: During my tests I ran into errors when I tried to resolve the cloud hostname via the /etc/hosts file. I was able to install the agent, but not to configure the databases. So I used the IP address instead of the hostname, which works fine.


Select the Named Credential and activate the checkbox to enable the Hybrid Cloud Gateway Agent. The port will be set automatically.


Deploy the agent and run root.sh on cloud host.


In the agent details via Setup – Manage Cloud Control – Agents you can verify if it’s a cloud agent or not.


Discover and add new Targets

After the successful agent installation, the cloud database can be added. Target – Databases – Add – enter the hostname – Next.


The freshly added database is available after a few minutes. The look and feel is as usual; only the small cloud icon at the top shows you that the database is hosted at Oracle.



The integration of a cloud database into an on-premise EM12c is well done by Oracle. This is only the beginning; in the next days I will test more features, like cloning an on-premise pluggable database to the cloud, setting up a cloud database in the EM12c Self Service Portal and more. It's important to verify what is possible – and what is not. Have fun in the cloud!


  • https://cloud.oracle.com/database
  • http://docs.oracle.com/cloud/latest/dbcs_dbaas/index.html
  • http://docs.oracle.com/cd/E24628_01/doc.121/e24473/hybrid-cloud.htm#EMADM15141