Oracle RDBMS

How to build OCI Infrastructure Environments with Ansible

The Oracle-provided Ansible modules give us the opportunity to provision and configure Oracle Cloud Infrastructure resources in an automated way. The basic Ansible setup is very easy, and the example playbooks Oracle publishes on GitHub are a good base to start your infrastructure automation project. Oracle provides Ansible example playbooks for

  • Block Volumes
  • Compute 
  • Database
  • File Storage
  • IAM
  • Load Balancer
  • Private Subnets with VPN
  • Delete Objects
  • etc.

In this blog post, I will show you how easy it is to bring Ansible and the Oracle Cloud Infrastructure together. 

Requirements

  • A local machine to install Ansible and the required software and modules; in my case it’s an Oracle Linux 7 virtual machine with Internet access.
  • An Oracle Cloud Infrastructure account with permissions to create new resources.

Steps to configure Ansible and OCI

  1. Install and configure the Oracle Cloud Infrastructure Python SDK
  2. Install and configure Ansible
  3. Download and configure the OCI modules for Ansible
  4. OCI Test Run

Install and configure the Oracle Cloud Infrastructure Python SDK

In this step, the OCI Python SDK will be installed and configured. The newly created API public key has to be added to your user in the OCI console for further actions. As OS user root, we create a new operating system user called oci for the Oracle Cloud Infrastructure actions and grant it sudo privileges.

Create a User and SSH Keys

# groupadd oci
# useradd oci -g oci
# passwd oci

Add this line in /etc/sudoers.

oci ALL=(ALL) ALL

Log in as user oci, create a new RSA key pair for API access, and download and configure the OCI SDK. Protect your keys.

$ mkdir ~/.oci
$ openssl genrsa -out ~/.oci/oci_api_key.pem 2048
$ chmod go-rwx ~/.oci/oci_api_key.pem
$ openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
$ chmod go-rwx ~/.oci/oci_api_key_public.pem
$ chmod go-rwx ~/.oci/oci_api_key.pem

Display the public key and add it to your cloud account user in the OCI console (Identity – Users – API Keys).
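
For example, you can display the public key and compute the fingerprint that OCI shows for the uploaded API key – a quick check, assuming the key files created above:

$ cat ~/.oci/oci_api_key_public.pem
$ openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c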

 

The OCI Configuration File

As user oci, create the Oracle Cloud Infrastructure configuration file

$ vi  ~/.oci/config

Content of the file – for example for region Frankfurt and with the API key file created above.

[DEFAULT]
user=[OCID of your OCI User]
fingerprint=[Fingerprint of the added API public key]
tenancy=[Your tenancy id]
region=eu-frankfurt-1
key_file=~/.oci/oci_api_key.pem

Change the file permissions.

$ chmod 600 /home/oci/.oci/config

Install the Oracle Cloud Infrastructure Python SDK

$ sudo yum-config-manager --enable ol7_developer ol7_developer_epel
$ sudo yum -y install python-oci-sdk python-oci-cli

Test

Command to list all instances in the selected compartment.

$ oci compute instance list --compartment-id [OCID of your OCI Compartment]| grep display-name
     "display-name": "Instance-AS-Test-1"

Install and configure Ansible

As user oci, download and install Ansible and Git.

$ sudo yum -y install git ansible
$ sudo mkdir -p /usr/share/ansible/modules

Set up the module directory.

$ sudo vi /etc/ansible/ansible.cfg
library=/usr/share/ansible/modules

Install additional packages.

$ sudo yum install python-pip
$ sudo pip install --upgrade Jinja2

This upgrade step is required; otherwise the public key creation in the OCI Ansible module fails (for example, when you want to launch a new Compute instance).
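
To verify which Jinja2 version is installed, a quick check like this helps (just a sanity check):

$ pip show Jinja2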

Download and configure the OCI modules for Ansible

As user oci, download the Ansible modules from GitHub.

$ cd /usr/share/ansible/modules
$ sudo git clone https://github.com/oracle/oci-ansible-modules.git

Show the content.

$ ls -la
total 4
drwxr-xr-x. 3 root root 33 Mar 11 13:58 .
drwxr-xr-x. 3 root root 21 Mar 11 13:56 ..
drwxr-xr-x. 12 root root 4096 Mar 11 13:58 oci-ansible-modules

Change into the newly created directory and execute the installation script install.py.

$ cd oci-ansible-modules
$ sudo ./install.py
Copying documentation fragments from /usr/share/ansible/modules/oci-ansible-modules/module_docs_fragments to /usr/lib/python2.7/site-packages/ansible/utils/module_docs_fragments
Copying oracle utility files from /usr/share/ansible/modules/oci-ansible-modules/module_utils/oracle to /usr/lib/python2.7/site-packages/ansible/module_utils/oracle
Creating directory /usr/lib/python2.7/site-packages/ansible/modules/cloud/oracle
Copying OCI Ansible modules from /usr/share/ansible/modules/oci-ansible-modules/library to /usr/lib/python2.7/site-packages/ansible/modules/cloud/oracle
OCI Ansible modules installed successfully.
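
Optionally, you can check that Ansible now sees the new modules, for example by listing them with ansible-doc (a quick sanity check):

$ ansible-doc -l | grep ^oci_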

OCI Test Run

We copy the example playbook that launches a Compute instance into a local folder and run it. The Oracle-provided playbook requires three environment variables (a CLI hint for looking up the Availability Domain follows the list):

  • SAMPLE_AD_NAME – Availability Domain, e.g. EUZg:EU-FRANKFURT-1-AD-1
  • SAMPLE_IMAGE_OCID – OCID of the selected OS image – see https://docs.cloud.oracle.com/iaas/images/ to list all available images
  • SAMPLE_COMPARTMENT_OCID – OCID of your compartment – OCI > Identity > Compartments
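
If you are not sure about the Availability Domain name in your tenancy, you can look it up with the OCI CLI configured earlier (a quick helper, not part of the Oracle sample):

$ oci iam availability-domain list --compartment-id [OCID of your OCI Compartment]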

 

Create a working directory and copy the example playbook.

$ mkdir -p ~/ansible/playbooks
$ cd ~/ansible/playbooks
$ cp -r /usr/share/ansible/modules/oci-ansible-modules/samples/compute/launch_compute_instance ~/ansible/playbooks

$ cd ~/ansible/playbooks/launch_compute_instance

Set variables.

$ export SAMPLE_AD_NAME=EUZg:EU-FRANKFURT-1-AD-1
$ export SAMPLE_IMAGE_OCID=ocid1.image.oc1.eu-frankfurt-1.aaaaaaaan7ghs3nhu6bbujqnyukj755642xnmzshck5pm5svol6uigkxl2hq
$ export SAMPLE_COMPARTMENT_OCID=ocid1.compartment.oc1..aaaaaaaayc4703470347034703470347034703o3hx2exkz5pzi6kt4kunhiq

Run the playbook.

Attention: all OCI resources are created and afterwards immediately terminated again. If you don’t want them to be terminated, comment out this line in the file sample.yaml.

- import_tasks: teardown.yaml

Execute the Ansible playbook. The infrastructure will be created step by step. Key generation, network configuration, firewall rule setup, instance creation, etc. are all automated.

$ ansible-playbook sample.yaml
 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'


PLAY [Launch a compute instance and connect to it using SSH] *******************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************************************
ok: [localhost]

TASK [Check pre-requisites] ****************************************************************************************************************************************************
skipping: [localhost] => (item=SAMPLE_COMPARTMENT_OCID)
skipping: [localhost] => (item=SAMPLE_IMAGE_OCID)
skipping: [localhost] => (item=SAMPLE_AD_NAME)

TASK [Create a temporary directory to house a temporary SSH keypair we will later use to connect to instance] ******************************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [Generate a Private Key] **************************************************************************************************************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [Generate a Public Key] ***************************************************************************************************************************************************
changed: [localhost]

TASK [Create a VCN] ************************************************************************************************************************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [Create a new Internet Gateway] *******************************************************************************************************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [Create route table to connect internet gateway to the VCN] ***************************************************************************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [create ingress rules yaml body] ******************************************************************************************************************************************
ok: [localhost]

TASK [create egress yaml body] *************************************************************************************************************************************************
ok: [localhost]

TASK [load the variables defined in the ingress rules yaml body] ***************************************************************************************************************
ok: [localhost]

TASK [print loaded_ingress] ****************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "loaded ingress is {u'instance_ingress_security_rules': [{u'source': u'0.0.0.0/0', u'protocol': u'6', u'tcp_options': {u'destination_port_range': {u'max': 22, u'min': 22}}}]}"
}

TASK [load the variables defined in the egress rules yaml body] ****************************************************************************************************************
ok: [localhost]

TASK [print loaded_egress] *****************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "loaded egress is {u'instance_egress_security_rules': [{u'tcp_options': {u'destination_port_range': {u'max': 22, u'min': 22}}, u'destination': u'0.0.0.0/0', u'protocol': u'6'}]}"
}

TASK [Create a security list for allowing access to public instance] ***********************************************************************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [Create a subnet to host the public instance. Link security_list and route_table.] ****************************************************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [Launch an instance] ******************************************************************************************************************************************************
changed: [localhost]

TASK [Print instance details] **************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "Launched a new instance {u'added_instances': [{u'lifecycle_state': u'RUNNING', u'availability_domain': u'EUZg:EU-FRANKFURT-1-AD-1', u'display_name': u'my_test_instance', u'time_maintenance_reboot_due': None, u'compartment_id': u'ocid1.compartment.oc1..aaaaaaaayc5kgqshdb5g2mjg4bnt34htnybbho3hx2exkz5pzi6kt4kunhiq', u'defined_tags': {}, u'region': u'eu-frankfurt-1', u'freeform_tags': {}, u'time_created': u'2019-03-11T13:52:34.238000+00:00', u'launch_options': {u'remote_data_volume_type': u'PARAVIRTUALIZED', u'firmware': u'UEFI_64', u'boot_volume_type': u'PARAVIRTUALIZED', u'is_consistent_volume_naming_enabled': True, u'network_type': u'VFIO', u'is_pv_encryption_in_transit_enabled': True}, u'image_id': u'ocid1.image.oc1.eu-frankfurt-1.aaaaaaaan7ghs3nhu6bbujqnyukj755642xnmzshck5pm5svol6uigkxl2hq', u'source_details': {u'source_type': u'image', u'kms_key_id': None, u'boot_volume_size_in_gbs': None, u'image_id': u'ocid1.image.oc1.eu-frankfurt-1.aaaaaaaan7ghs3nhu6bbujqnyukj755642xnmzshck5pm5svol6uigkxl2hq'}, u'fault_domain': u'FAULT-DOMAIN-3', u'shape': u'VM.Standard1.1', u'launch_mode': u'NATIVE', u'agent_config': {u'is_monitoring_disabled': False}, u'extended_metadata': {}, u'ipxe_script': None, u'id': u'ocid1.instance.oc1.eu-frankfurt-1.abtheljslxpqenvnafkstq2s7hxqizyam7shjd6kihyg33i7w2geeynedotq', u'metadata': {u'ssh_authorized_keys': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAA4703470347034703470347034703470347034703470347034703470347034703XR6Zcv4sHKdMH/aDfZKFaLpqHtJ2P+JEC0ok2lWrPQSG0zPoSYWTrldllpgSfzOv1kQ1R/Uor+eedRz9x2DXtKl0anSXY6KNW1GVfOvoFk/c8AuaQy6Y7AZnJAculyF1pRJiHQyZEPuBL9E9EsiRqkQR18G9yewBApRZf/QGXZbLydG8vAa9T1DCYrODb3N2u73IIqbMFwr7sgQ8XQb7h9wlYq2mfUKxy+8H4w7S67'}}], u'instances': [{u'lifecycle_state': u'RUNNING', u'availability_domain': u'EUZg:EU-FRANKFURT-1-AD-1', u'display_name': u'my_test_instance', u'time_maintenance_reboot_due': None, u'compartment_id': u'ocid1.compartment.oc1..aaaaaaaayc5kgqshdb5g2mjg4bnt34htnybbho3hx2exkz5pzi6kt4kunhiq', u'defined_tags': {}, u'region': u'eu-frankfurt-1', u'freeform_tags': {}, u'time_created': u'2019-03-11T13:52:34.238000+00:00', u'launch_options': {u'remote_data_volume_type': u'PARAVIRTUALIZED', u'firmware': u'UEFI_64', u'boot_volume_type': u'PARAVIRTUALIZED', u'is_consistent_volume_naming_enabled': True, u'network_type': u'VFIO', u'is_pv_encryption_in_transit_enabled': True}, u'image_id': u'ocid1.image.oc1.eu-frankfurt-1.aaaaaaaan7ghs3nhu6bbujqnyukj755642xnmzshck5pm5svol6uigkxl2hq', u'source_details': {u'source_type': u'image', u'kms_key_id': None, u'boot_volume_size_in_gbs': None, u'image_id': u'ocid1.image.oc1.eu-frankfurt-1.aaaaaaaan7ghs3nhu6bbujqnyukj755642xnmzshck5pm5svol6uigkxl2hq'}, u'fault_domain': u'FAULT-DOMAIN-3', u'shape': u'VM.Standard1.1', u'launch_mode': u'NATIVE', u'agent_config': {u'is_monitoring_disabled': False}, u'extended_metadata': {}, u'ipxe_script': None, u'id': u'ocid1.instance.oc1.eu-frankfurt-1.abtheljslxpqenvnafkstq2s7hxqizyam7shjd6kihyg33i7w2geeynedotq', u'metadata': {u'ssh_authorized_keys': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAA4703470347034703470347034703470347034703470347034703470347034703XR6Zcv4sHKdMH/aDfZKFaLpqHtJ2P+JEC0ok2lWrPQSG0zPoSYWTrldllpgSfzOv1kQ1R/Uor+eedRz9x2DXtKl0anSXY6KNW1GVfOvoFk/c8AuaQy6Y7AZnJAculyF1pRJiHQyZEPuBL9E9EsiRqkQR18G9yewBApRZf/QGXZbLydG8vAa9T1DCYrODb3N2u73IIqbMFwr7sgQ8XQb7h9wlYq2mfUKxy+8H4w7S67'}}], u'changed': True, 'failed': False, u'instance': {u'lifecycle_state': u'RUNNING', u'fault_domain': u'FAULT-DOMAIN-3', u'extended_metadata': {}, u'time_maintenance_reboot_due': None, 
u'compartment_id': u'ocid1.compartment.oc1..aaaaaaaayc5kgqshdb5g2mjg4bnt34htnybbho3hx2exkz5pzi6kt4kunhiq', u'defined_tags': {}, u'region': u'eu-frankfurt-1', u'freeform_tags': {}, u'time_created': u'2019-03-11T13:52:34.238000+00:00', u'source_details': {u'source_type': u'image', u'image_id': u'ocid1.image.oc1.eu-frankfurt-1.aaaaaaaan7ghs3nhu6bbujqnyukj755642xnmzshck5pm5svol6uigkxl2hq', u'kms_key_id': None, u'boot_volume_size_in_gbs': None}, u'agent_config': {u'is_monitoring_disabled': False}, u'image_id': u'ocid1.image.oc1.eu-frankfurt-1.aaaaaaaan7ghs3nhu6bbujqnyukj755642xnmzshck5pm5svol6uigkxl2hq', u'shape': u'VM.Standard1.1', u'availability_domain': u'EUZg:EU-FRANKFURT-1-AD-1', u'launch_mode': u'NATIVE', u'ipxe_script': None, u'display_name': u'my_test_instance', u'launch_options': {u'remote_data_volume_type': u'PARAVIRTUALIZED', u'firmware': u'UEFI_64', u'boot_volume_type': u'PARAVIRTUALIZED', u'is_consistent_volume_naming_enabled': True, u'network_type': u'VFIO', u'is_pv_encryption_in_transit_enabled': True}, u'id': u'ocid1.instance.oc1.eu-frankfurt-1.abtheljslxpqenvnafkstq2s7hxqizyam7shjd6kihyg33i7w2geeynedotq', u'metadata': {u'ssh_authorized_keys': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAA4703470347034703470347034703470347034703470347034703470347034703XR6Zcv4sHKdMH/aDfZKFaLpqHtJ2P+JEC0ok2lWrPQSG0zPoSYWTrldllpgSfzOv1kQ1R/Uor+eedRz9x2DXtKl0anSXY6KNW1GVfOvoFk/c8AuaQy6Y7AZnJAculyF1pRJiHQyZEPuBL9E9EsiRqkQR18G9yewBApRZf/QGXZbLydG8vAa9T1DCYrODb3N2u73IIqbMFwr7sgQ8XQb7h9wlYq2mfUKxy+8H4w7S67'}}}"
}

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [Get the VNIC attachment details of instance] *****************************************************************************************************************************
ok: [localhost]

TASK [Get details of the VNIC] *************************************************************************************************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [Print the public ip of the newly launched instance] **********************************************************************************************************************
ok: [localhost] => {
    "msg": "Public IP of launched instance 130.61.40.231"
}

TASK [Wait (upto 5 minutes) for port 22 to become open] ************************************************************************************************************************
ok: [localhost]

TASK [Attempt a ssh connection to the newly launced instance] ******************************************************************************************************************
changed: [localhost]

TASK [Print SSH response from launched instance] *******************************************************************************************************************************
ok: [localhost] => {
    "msg": "SSH response from instance -> [u'Linux mytestinstance 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux']"
}

TASK [Terminate the instance] **************************************************************************************************************************************************
changed: [localhost]

TASK [Delete the subnet] *******************************************************************************************************************************************************
changed: [localhost]

TASK [Delete the security list] ************************************************************************************************************************************************
changed: [localhost]

TASK [Delete the route table] **************************************************************************************************************************************************
changed: [localhost]

TASK [Delete the Internet Gateway] *********************************************************************************************************************************************
changed: [localhost]

TASK [Delete the VCN] **********************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************************************************************************************************************
localhost                  : ok=38   changed=16   unreachable=0    failed=0


Ready to Use

After a few minutes, a complete infrastructure for an OCI compute instance is created and the instance is ready to connect. 

The required SSH keys for the terminal connection were generated in a subdirectory of /tmp with the prefix ansible. In my example, the private and the public SSH key are located in /tmp/ansible.v6ckX0cert.

$ pwd
/tmp/ansible.v6ckX0cert

$ ll
total 8
-rw-------. 1 oci oci 1708 Mar 11 14:03 private_key.pem
-rw-rw-r--. 1 oci oci  380 Mar 11 14:03 public_key.pem
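
If you keep the resources (teardown commented out), you can use this key to open an SSH session to the instance – a minimal example, assuming the default opc user of the Oracle Linux image and the public IP printed by the playbook:

$ ssh -i /tmp/ansible.v6ckX0cert/private_key.pem opc@130.61.40.231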

Summary

The Oracle-provided Ansible playbooks are a good entry point to start with OCI automation. I am already working on the next tasks to make my work easier with more variables and simplified playbooks. And finally, I want to integrate it into Ansible AWX. Well done, Oracle!

Oracle Autonomous Transaction Processing – Move your Data with Oracle Data Pump – Part 3

In this three-part blog post series, I describe how data can be uploaded from an on-premises environment into the Oracle Autonomous Transaction Processing database using Oracle Data Pump.

Oracle Import Prerequisites

Credentials

To get access to the dump file on the Oracle Object Storage, a credential has to be created in the Oracle Autonomous Transaction Processing database with the DBMS_CLOUD package. For more information about the package, see the blog post from Christian Antognini – DBMS_CLOUD Package – A Reference Guide.

DBMS_CLOUD.CREATE_CREDENTIAL needs a password value, which is an auth token of the user account. If you don’t know your token, create a new one: go to Identity – Users – your username and click on “Auth Tokens” on the left side. Create a new token by clicking on “Generate Token”. The randomly generated string is the value for the DBMS_CLOUD password.

Enter a name for the token and click on “Generate Token”.

Note your generated token and “Close” the window.

Log in to the Autonomous Transaction Processing database as the admin user and create a new credential called ATPCS_CRED.

C:\oracle\product\instantclient_18_3>sqlplus admin/***MY_PASSWORD***@atphrdata_high

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Oct 5 13:43:58 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Last Successful login time: Fri Oct 05 2018 10:47:04 +02:00

Connected to:
Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> BEGIN
2      DBMS_CLOUD.CREATE_CREDENTIAL(
3        credential_name => 'ATPCS_CRED',
4        username => 'martin.berger@trivadis.com',
5        password => '***MY_TOKEN***'
6      );
7 END;
8 /

PL/SQL procedure successfully completed.

A new Database User called HRATP

In the ATP database, we create a new user called HRATP. The tablespace DATA is the default tablespace in an Autonomous Transaction Processing database and does not have to be specified.

SQL> CREATE USER hratp IDENTIFIED BY "***MY_PASSWORD***";
SQL> GRANT connect,resource TO hratp;
SQL> ALTER USER hratp QUOTA UNLIMITED ON data;

Oracle Data Pump Import

The impdp tool is part of my Instant Client installation from Part 1 of this blog series. Oracle recommends setting parameters like

  • partition_options=merge
  • transform=segment_attributes:n
  • transform=dwcs_cvt_iots:y
  • transform=constraint_use_default_index:y
  • exclude=index, cluster, indextype, materialized_view, materialized_view_log, materialized_zonemap, db_link

Two new Oracle Data Pump parameters for working with the Oracle cloud databases are credential and dumpfile.

  • credential: the credential created with DBMS_CLOUD
  • dumpfile: the URL where the dump file is located

Attention

The URL provided by the Object Storage menu cannot be accessed directly by impdp:

ORA-31640: unable to open dump file "https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/<identity_domain>/b/datapump_transfer/o/expdpHR05102018.dmp" for read
ORA-17503: ksfdopn:11 Failed to open file https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/<identity_domain>/b/datapump_transfer/o/expdpHR05102018.dmp
ORA-17500: ODM err:ODM HTTP Unauthorized

The URL has to be mapped from objectstorage to swiftobjectstorage:

https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/<identity_domain>/b/datapump_transfer/o/expdpHR05102018.dmp

to

https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/<identity_domain>/datapump_transfer/expdpHR05102018.dmp
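
The rewrite can also be scripted; a minimal sed sketch (illustrative only, using the same placeholders as above):

$ echo "https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/<identity_domain>/b/datapump_transfer/o/expdpHR05102018.dmp" | \
  sed -E 's#//objectstorage\.([^/]+)/n/([^/]+)/b/([^/]+)/o/#//swiftobjectstorage.\1/v1/\2/\3/#'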

Data Pump Import Execution with REMAP of Schema and Tablespace

Start of the Oracle Data Pump job from my Windows client:

$ impdp admin/***ADMIN_PASSWORD***@atphrdata_high directory=data_pump_dir credential=ATPCS_CRED dumpfile=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/<identity_domain>/datapump_transfer/expdpHR05102018.dmp remap_schema=HR:HRATP remap_tablespace=HRDATA:DATA partition_options=merge transform=segment_attributes:n transform=dwcs_cvt_iots:y transform=constraint_use_default_index:y exclude=index, cluster, indextype, materialized_view, materialized_view_log, materialized_zonemap, db_link

Import: Release 18.0.0.0.0 - Production on Fri Oct 5 14:17:17 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "ADMIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_01": admin/********@atphrdata_high directory=data_pump_dir credential=ATPCS_CRED dumpfile=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/<identity_domain>/datapump_transfer/expdpHR05102018.dmp remap_schema=HR:HRATP remap_tablespace=HRDATA:DATA partition_options=merge transform=segment_attributes:n transform=dwcs_cvt_iots:y transform=constraint_use_default_index:y exclude=index, cluster, indextype, materialized_view, materialized_view_log, materialized_zonemap, db_link
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"HRATP" already exists

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HRATP"."EMPLOYEES" 17.08 KB 107 rows
. . imported "HRATP"."LOCATIONS" 8.437 KB 23 rows
. . imported "HRATP"."JOB_HISTORY" 7.195 KB 10 rows
. . imported "HRATP"."JOBS" 7.109 KB 19 rows
. . imported "HRATP"."DEPARTMENTS" 7.125 KB 27 rows
. . imported "HRATP"."COUNTRIES" 6.367 KB 25 rows
. . imported "HRATP"."REGIONS" 5.546 KB 4 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "ADMIN"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Fri Oct 5 12:19:29 2018 elapsed 0 00:00:27

The message about the existing user can be ignored. 

Data Pump Logfile

The logfile of the import process cannot be accessed directly; it has to be copied into the Object Storage with the DBMS_CLOUD package first.

SQL> BEGIN
2          DBMS_CLOUD.PUT_OBJECT(
3              credential_name => 'ATPCS_CRED',
4              object_uri => 'https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/<identity_domain>/datapump_transfer/import.log',
5              directory_name => 'DATA_PUMP_DIR',
6              file_name => 'import.log'
7          );
8    END;
9 /

Now the file can be accessed in the Object Storage menu.
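
Alternatively, you can list the bucket content directly from the database with DBMS_CLOUD.LIST_OBJECTS – a quick check, assuming the credential and bucket from above:

SQL> SELECT object_name, bytes
  2  FROM DBMS_CLOUD.LIST_OBJECTS('ATPCS_CRED',
  3       'https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/<identity_domain>/datapump_transfer/');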

Connect as new User HRATP to verify the Data

C:\oracle\product\instantclient_18_3>sqlplus hratp/***MY_PASSWORD***@atphrdata_high

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Oct 5 14:37:27 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to:
Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select tname from tab;

TNAME
--------------------------------------------------------------------------------
COUNTRIES
DEPARTMENTS
JOB_HISTORY
REGIONS
LOCATIONS
EMPLOYEES
JOBS
EMP_DETAILS_VIEW

8 rows selected.

Summary of Part 3

If all prerequisites are met, the data transfer with Oracle Data Pump is easy to configure and easy to handle. Take care of your token: only two tokens per user can be generated. If you lose it, you have to delete an existing token, generate a new one and re-create the credential before you can import data again.

Oracle Autonomous Transaction Processing – Move your Data with Oracle Data Pump – Part 2

In this three-part blog post series, I describe how data can be uploaded from an on-premises environment into the Oracle Autonomous Transaction Processing database using Oracle Data Pump.

 

Oracle Data Pump Export on On-Premises Database

Oracle recommends the following settings for the Oracle Data Pump export job. If you already use an 18c database, you have to set the version parameter to 12.2 to avoid this error during the ATP import process: ORA-39358: Export dump file version 18.0.0 not compatible with target version 12.2.0.

exclude=cluster, db_link
parallel=n
schemas=schema name
dumpfile=export%u.dmp
version=12.2

My expdp export job for the schema HR:

$ expdp system/***password***@PDBHR01.martinberger.local schemas=hr directory=expdp dumpfile=expdpHR05102018.dmp logfile=expdpHR05102018.log exclude=cluster,db_link version=12.2

The example schema contains these objects – output from the expdp job:

Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
. . exported "HR"."EMPLOYEES" 17.08 KB 107 rows
. . exported "HR"."LOCATIONS" 8.437 KB 23 rows
. . exported "HR"."JOB_HISTORY" 7.195 KB 10 rows
. . exported "HR"."JOBS" 7.109 KB 19 rows
. . exported "HR"."DEPARTMENTS" 7.125 KB 27 rows
. . exported "HR"."COUNTRIES" 6.367 KB 25 rows
. . exported "HR"."REGIONS" 5.546 KB 4 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/ORA18/expdp/expdpHR05102018.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Fri Oct 5 09:06:27 2018 elapsed 0 00:04:07

Transfer the Dump File Set into the Oracle Cloud

The export dump file has to be transferred into the Oracle Cloud Object Storage for later use. First we have to create an Object Storage bucket: log in to the Object Storage menu and click on the “Create Bucket” button.

Enter a Bucket Name and click on “Create Bucket”.

Open the newly created bucket. Click on the bucket name link or the “three dots” on the right side to view the details.

Upload Data

To upload the export dump file, click on “Upload Object”.

“Browse” locally for the Oracle Data Pump export file and click on “Upload Object”.

Object Details

Click on the object’s “Details” to verify the object in the Object Storage.

 

Note the URL. The URL will be used later for the import process.

Summary of Part 2

Uploading objects into the Oracle Cloud Object Storage is very easy. If you don’t want to use the browser functionality, there are other possibilities to upload files, such as the REST API or the OCI command line interface (see the example below).
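
A minimal sketch with the OCI command line interface, assuming the CLI is installed and configured and using the bucket name from above:

$ oci os object put --bucket-name datapump_transfer --file expdpHR05102018.dmp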

Now the export dump file is ready to import into the Autonomous Transaction Processing Database.

Oracle Autonomous Transaction Processing – Move your Data with Oracle Data Pump – Part 1

In this three-part blog post series, I describe how data can be uploaded from an on-premises environment into the Oracle Autonomous Transaction Processing database using Oracle Data Pump.

Basics about ATP from the Using Oracle Autonomous Transaction Processing User Guide:

Autonomous Transaction Processing is designed to support all standard business applications and deliver scalable query performance.
Autonomous Transaction Processing provides all of the performance of the market-leading Oracle Database in an environment that is tuned and optimized for transaction processing workloads.
As a service Autonomous Transaction Processing does not require database administration. With Autonomous Transaction Processing you do not need to configure or manage any hardware, or install any software. Autonomous Transaction Processing handles creating the database, backing up the database, patching and upgrading the database, and growing or shrinking the database.

This is my setup:

  • Oracle Cloud Infrastructure account with a dedicated compartment
  • On-premises Oracle RDBMS 18c EE with HR example schema installed on Oracle Linux 7.4
  • Windows 10 64bit client

Creation of the Autonomous Transaction Processing Database

Log in to Oracle Cloud Infrastructure, go to the ATP menu and click on the “Create Autonomous Transaction Processing Database” button.

Set DISPLAY NAME, DATABASE NAME, CPU CORE COUNT, STORAGE and set the Administrator Credentials. These credentials are used later to manage the Autonomous Transaction Processing database. Verify your license situation and click on the “Create Autonomous Transaction Processing Database” button. 

My ATP database is called ATPHRDATA; it will later contain the data from schema HR.

Some minutes later, the database has the state AVAILABLE and is ready to use. Click on the ATP database name link or the “three dots” on the right side to view the details.

Click on “Service Console”.

Enter admin as the username and the password from the creation step. Click on “Sign In”.

This is the ATP main dashboard. Here you can manage the ATP database. Click on the “Administration” link.

Download the login credentials by clicking on the “Download Client Credentials” link. ATP uses an SSL-encrypted connection. The provided zip file contains all required files and configurations to connect a client via OCI, ODBC or JDBC to the Autonomous Transaction Processing database. You have to protect this file with a password to prevent unauthorized database access.

 

The Client Credential Package

Content of the extracted “Client Credentials” package; it will be used later for the connection configuration and verification. The package can also be used to configure a connection to the database with Oracle SQL Developer 17.4 or later.

Oracle Instant Client

Download the newest Oracle Instant Client by clicking on “Download Oracle Instant Client”. Clients older than 12.2 do not support ATP connections.

In my example, I use the 18c “Instant Client Downloads for Microsoft Windows (x64)”  and the additional package with the SQL*Plus and Oracle Data Pump components. 

Client Setup

To verify the connection, I have installed the Oracle Instant Client from above and configured the Windows environment like this:

  • Oracle Instant Client installation directory: C:\oracle\product\instantclient_18_3
  • TNS_ADMIN (contains the extracted ATP database credentials / wallet): C:\oracle\network\admin
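
For a quick test in a command prompt, the environment can be set like this for the current session (as an alternative to system-wide environment variables):

C:\> set PATH=C:\oracle\product\instantclient_18_3;%PATH%
C:\> set TNS_ADMIN=C:\oracle\network\admin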

 

The sqlnet.ora file provided in the ATP wallet has to be modified to point to the real location of the configuration files. If your TNS_ADMIN is not located in a subdirectory of your ORACLE_HOME, change the directory path to the wallet from

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes

to your TNS_ADMIN path:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="C:\oracle\network\admin")))
SSL_SERVER_DN_MATCH=yes

If the path is not set correctly, you will get an ORA-28759: failure to open file error.

Connection Verification by SQL*Plus

SQL*Plus Connection Test with the user admin – the TNS alias is from the provided tnsnames.ora.
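
For example, using the high service from the wallet’s tnsnames.ora (the alias names depend on your database name, here atphrdata):

C:\oracle\product\instantclient_18_3>sqlplus admin@atphrdata_high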

Links

docs.oracle.com – https://docs.oracle.com/en/cloud/paas/atp-cloud/index.html
ATP User Guide – https://docs.oracle.com/en/cloud/paas/atp-cloud/atpug/using-oracle-autonomous-transaction-processing.pdf
ATP Introduction – http://www.oracle.com/us/products/database/atp-brief-5029003.pdf

Summary of Part 1

Now the autonomous database is ready to use and the client connection works fine. The next steps are to export the on-premises data and import it into the Autonomous Transaction Processing database.

Oracle Database Backup Service – Encrypt your 12.2 Database Backups to the Cloud

Oracle RMAN backup encryption is necessary if you want to back up your database into the Oracle cloud. In Oracle 12c, you have three methods available to encrypt an Oracle RMAN backup (see the command sketch after the list):

  • with a passphrase
  • with a master encryption key
  • hybrid with a passphrase and an encryption key
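
As a quick orientation: passphrase-only mode uses SET ENCRYPTION ON IDENTIFIED BY ... ONLY, transparent mode uses the wallet via CONFIGURE ENCRYPTION FOR DATABASE ON, and hybrid (dual) mode is the SET command without ONLY. A sketch with a placeholder password; this post uses the transparent, wallet-based mode:

RMAN> SET ENCRYPTION ON IDENTIFIED BY mypassword ONLY;
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> SET ENCRYPTION ON IDENTIFIED BY mypassword;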

On docs.oracle.com, the basic setup is described here: https://docs.oracle.com/en/cloud/paas/db-backup-cloud/csdbb/configuring-encryption-backups.html#GUID-4A1F5CF5-7EAF-4D71-9B7F-B46412F552CE

In this blog post, I show you how to configure your database environment with a master encryption key and a keystore. I use this solution for backup and recovery to and from the Oracle cloud, because in the cloud I don’t like to type in passwords manually for every action or to write passwords into backup and restore scripts.

There are also some issue reports, like My Oracle Support note TDE Wallet Problem in 12c: Cannot do a Set Key operation when an auto-login wallet is present (Doc ID 1944507.1).

Here are the steps to create an auto-login wallet.

Configure SQLNET.ora in $TNS_ADMIN to use a Keystore

ENCRYPTION_WALLET_LOCATION =
   (SOURCE =
     (METHOD = FILE)
     (METHOD_DATA =
       (DIRECTORY = /u00/app/oracle/tde_wallet)
     )
    )

Create Keystore as SYSDBA

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u00/app/oracle/tde_wallet' IDENTIFIED BY "my#wallet18";

Open Keystore

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "my#wallet18";

The status is set to OPEN_NO_MASTER_KEY.

SQL> SELECT wrl_parameter, wallet_type, status
  2  FROM v$encryption_wallet;

WRL_PARAMETER                       WALLET_TYPE     STATUS
----------------------------------- --------------- --------------------
/u00/app/oracle/tde_wallet/     PASSWORD        OPEN_NO_MASTER_KEY

Set Master Key

Now the master key has to be defined. If you have already defined a wallet earlier and deleted the keys, you have to set the undocumented parameter below to be able to set the master key again; otherwise you get an ORA-28374: typed master key not found in wallet error. See Master Note For Transparent Data Encryption (TDE) (Doc ID 1228046.1) for further information.

SQL> ALTER SYSTEM SET "_db_discard_lost_masterkey"=true;
SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY "my#wallet18" WITH BACKUP USING 'master_key_1';

Now the status is set to OPEN.

SQL> SELECT wrl_parameter, wallet_type, status
  2  FROM v$encryption_wallet;

WRL_PARAMETER                       WALLET_TYPE     STATUS
----------------------------------- --------------- --------------------
/u00/app/oracle/tde_wallet/     PASSWORD        OPEN

Activate Auto Login

SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u00/app/oracle/tde_wallet' IDENTIFIED BY "my#wallet18";

Restart the Database

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

Verify if the keystore is available and WALLET_TYPE is AUTOLOGIN.

SQL> SELECT wrl_parameter, wallet_type, status
  2  FROM v$encryption_wallet;

WRL_PARAMETER                       WALLET_TYPE     STATUS
----------------------------------- --------------- --------------------
/u00/app/oracle/tde_wallet/     AUTOLOGIN       OPEN

Configure RMAN for Encryption

RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
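
Optionally, verify the configuration and, if wanted, set an explicit algorithm (AES128 is the default; AES256 is shown here as a possible alternative):

RMAN> SHOW ENCRYPTION FOR DATABASE;
RMAN> SHOW ENCRYPTION ALGORITHM;
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256';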

RMAN Backup Test

A simple RMAN controlfile backup into the Oracle cloud (OPC Backup Module is already configured).

RUN {  
 allocate channel t1 type 'sbt_tape' parms='SBT_LIBRARY=libopc.so, SBT_PARMS=(OPC_PFILE=/u00/app/oracle/admin/OCIDB01/opc_config/opcOCIDB01.ora)';  
 backup current controlfile;  
 release channel t1;  
}

Error message if you want to back up into the Oracle cloud and the encryption is not configured correctly:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on t1 channel at 08/27/2018 18:48:27
ORA-19914: unable to encrypt backup
ORA-28365: wallet is not open

Backup Verification in V$BACKUP_PIECE – Column ENCRYPTED

SQL> SELECT start_time,handle,substr(media,1,30),encrypted
  2  FROM v$backup_piece;

START_TIME         HANDLE                                   SUBSTR(MEDIA,1,30)                  ENC
------------------ ---------------------------------------- ----------------------------------- ---
27-AUG-18          c-903044157-20180827-00                  eucom-north-1.stora..orage-tri      YES

Links

http://www.oracle.com/technetwork/database/security/twp-transparent-data-encryption-bes-130696.pdf

http://www.oracle.com/technetwork/database/security/index-095354.html