Latest Posts

Oracle OCI Data Transfer Service – A journey from Kestenholz/Jurasüdfuss/Switzerland to Frankfurt and back

The Oracle Data Transfer service is an offline data transfer method for migrating data to Oracle Cloud Infrastructure. A transfer service is useful when your network bandwidth and connection are not sufficient to upload your migration data in a reasonable time. Oracle offers two methods: disk-based data transfer and appliance-based data transfer. The service is not only one-way; data can also be exported in an Oracle Cloud Infrastructure data center and shipped to your data center.

In keeping with curiosity, one of my company Trivadis' cultural values, I was wondering how this service works. This is the story of a tiny USB hard disk drive full of data that went on a long journey from Kestenholz / Jurasüdfuss / Solothurn / Switzerland to the Oracle Cloud Infrastructure data center in Frankfurt and back.

Setup

  • The OCI Data Transfer Utility is Linux-based; the USB 3.0 HDD is attached to a VMware virtual machine running Oracle Linux
  • The virtual machine has access to the Internet
  • Data is available – for this example I used some open data from the Swiss government (opendata.swiss)

Data Transfer Service Regions

Currently, data transfer is available in Frankfurt, Ashburn, Phoenix, London and Osaka. From Switzerland, Frankfurt is the nearest location.

How your data gets into the Oracle Cloud

  1. Enable Data Transfer Service – Entitlement
  2. Prepare an Object Storage bucket
  3. Create a Transfer Job
  4. Attach an HDD to a Linux-based host, use the Data Transfer Utility to create and encrypt the device
  5. Copy data to the HDD
  6. Create and upload the disk manifest
  7. Lock the disk
  8. Attach the disk to the transfer label and create the package
  9. Shipping and shipping information update
  10. Tracking
  11. Data Processing
  12. Data Verification
  13. Object Storage Replication Policy (optional)
  14. Finally…

Note: Most of the steps above can be performed with the OCI CLI on the command line and are very well described in the Oracle documentation.

1. Enable Data Transfer Service – Entitlement

Before you can use this service, the Data Transfer service has to be enabled in general, so you have to request it. The OCI tenant administrator receives a document that has to be signed digitally. It contains, for example, a description of how to bring data to OCI and, if you order an appliance, a 45-day maximum limit within which the appliance has to be returned to Oracle. A few days later, the service is ready to use. You now have the permissions to order a Data Transfer Appliance, but for this test I used the disk service.

2. Prepare an Object Storage Bucket

In the Frankfurt region, I created a new Object Storage bucket called data_transfer_usb. This is the bucket into which the shipped data will be transferred.
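The bucket can also be created on the command line – a minimal sketch with the OCI CLI, where the compartment OCID is a placeholder:

  # Create the upload bucket in the Frankfurt region
  $ oci os bucket create \
      --name data_transfer_usb \
      --compartment-id ocid1.compartment.oc1..aaaa... \
      --region eu-frankfurt-1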

3. Create a Transfer Job

In Object Storage – Data Transfer Import, we create a new transfer job. It references the upload bucket from above, and the transfer device type is disk. For further processing, we need the OCID of the job. As you can see, no transfer disk is attached yet.
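The job can also be created with the OCI CLI dts commands – a sketch, assuming the flag names of a current CLI version (verify with oci dts job create --help):

  # Create a disk-based import transfer job (OCID is a placeholder)
  $ oci dts job create \
      --compartment-id ocid1.compartment.oc1..aaaa... \
      --bucket data_transfer_usb \
      --display-name dts_job_usb \
      --device-type disk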

4. Attach an HDD to a Linux-based host, use the Data Transfer Utility to create and encrypt the Device

Prerequisites for the Data Transfer Utility according to the documentation:

  • An OCI account which has the IAM permissions for Data Transfer
  • A Linux machine with Oracle Linux 6 or greater, Ubuntu 14.04 or greater, or SUSE 11 or greater
  • Java 1.8 or 11
  • hdparm 9.0 or later
  • Cryptsetup 1.2.0 or later

Package Installation for my Oracle Linux 7 Machine
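A minimal sketch for Oracle Linux 7, using the stock OL7 package names:

  # Install the Data Transfer Utility prerequisites
  $ sudo yum install -y java-1.8.0-openjdk hdparm cryptsetup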

Download and Installation of the Data Transfer Utility

The current link to the file is in the online documentation.

Test

Configure IAM Credentials for Data Transfer Actions

The configuration follows the Oracle Cloud Infrastructure CLI configuration, with user, fingerprint, key_file, tenancy and region. Example configuration file:
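A minimal ~/.oci/config sketch – OCIDs, fingerprint and key path are placeholders:

  [DEFAULT]
  user=ocid1.user.oc1..aaaa...
  fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
  key_file=/home/oracle/.oci/oci_api_key.pem
  tenancy=ocid1.tenancy.oc1..aaaa...
  region=eu-frankfurt-1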

Verify Credentials

Show Data Transfer Job Details – Status is PREPARED

Here you can see the shipping address of the Oracle Cloud Infrastructure data center in Frankfurt and the transfer label. Both pieces of information are used later in the process.
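The job details can also be queried on the command line – a sketch, assuming the OCI CLI dts syntax (the verb may be show or get depending on the CLI version; the job OCID is a placeholder):

  # Show the transfer job with status, shipping address and transfer label
  $ oci dts job show --job-id ocid1.datatransferjob.oc1..aaaa...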

Prepare USB Hard Disk Drive

The disk is attached as /dev/sdb – it is a Western Digital drive. Important: the disk needs no partition.
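A quick check – the drive has to show up as a whole disk, without partitions below it:

  # Verify the attached USB drive
  $ lsblk /dev/sdb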

Create Transfer Disk for Data Copy

This command sets up the disk and mounts it immediately. It also returns the disk label, which we need for further processing.

Mount point is /mnt/orcdts_DAMOED7GH.

The transfer disk status has changed to PREPARING, and the disk serial number is now registered.

5. Copy Data to HDD

For the test run, I copied some open data files, an Oracle backup, and Oracle Data Pump export files to the disk.
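The copy itself is a plain file copy to the mount point – for example with rsync (the source directory is a placeholder):

  # Copy the migration data onto the encrypted transfer disk
  $ rsync -av /data/opendata/ /mnt/orcdts_DAMOED7GH/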

6. Generate Manifest File

This generates a local file which contains a list of the files and their MD5 checksums – like an inventory. The disk label is required here.

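The utility builds this inventory itself; conceptually, the content is similar to checksumming every file on the mount point. Just an illustration of the idea, not the actual utility command:

  # Illustration: an MD5 inventory like the one in the manifest
  $ find /mnt/orcdts_DAMOED7GH -type f -exec md5sum {} +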

7. Lock the Disk

8. Attach the Disk to the Transfer Label and create the Package

The status now changes to ACTIVE.

9. Shipping and Shipping Information Update

As shipping company I used DHL Switzerland; they have a pickup point nearby in Langenthal. At this point, it's important to organize the return shipping too and to put the return shipping label in the box. I didn't realize that and forgot to organize the return shipping, so the disk was stranded in the Frankfurt data center. And then the story began: DHL and UPS don't allow private persons to re-import packages from outside Switzerland without a customer number – but private persons don't get such a number. Finally, with FedEx, I was able to organize the return shipping. Thanks to Andrew and Christos from Oracle's OCI Data Transfer team for their patience!

Note: Companies like DHL have templates to create pro-forma commercial invoices – https://www.dhl.ch/exp-de/express/zollabwicklung/zollpapiere/proforma_rechnung.html#invoice

This disk was sent to the Oracle Cloud Infrastructure data center Frankfurt.

Now the shipping information has to be updated with the vendor and the tracking numbers.

10. Tracking

DHL needed two days for the delivery to Frankfurt. Oracle started the data import one day later.

11. Data Processing

When Oracle attaches the disk and uploads the data, the transfer job status changes to PROCESSING.

12. Data Verification

Finally, the data has arrived in Oracle Cloud Infrastructure Object Storage and is ready for use. The file processing is logged in the newly created file upload_summary.txt.

13. Object Storage Replication Policy (optional)

The files are now in the Frankfurt data center, but I want to have them in the Swiss region Zurich as well. Therefore, I set a replication policy at the Object Storage level. In Zurich, a new bucket called data_transfer_usb_from_FRA is created. A few minutes later, the files were available in the Zurich Object Storage. Of course, this depends on the file size 😉
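A sketch of the policy via OCI CLI – the flag names are from memory, verify them with --help:

  # Replicate the Frankfurt bucket to the new bucket in Zurich
  $ oci os replication create-replication-policy \
      --bucket-name data_transfer_usb \
      --name fra_to_zrh \
      --destination-region-name eu-zurich-1 \
      --destination-bucket-name data_transfer_usb_from_FRA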

Finally…

Detach the transfer disk so the data center staff can send it back to you.

And after a few days…welcome FedEx in Kestenholz / Jurasüdfuss / Solothurn / Switzerland!

Some words about Shipping and Costs

Shipping costs from DHL and FedEx:

Vendor | From | To | Costs
DHL | Langenthal / Switzerland | Frankfurt | 79.50 CHF
FedEx | Frankfurt | Kestenholz | 130.45 CHF


Summary

Watching nice marketing slides and documents about cool features is not enough; to find out how something works in the real world, a real test is required. How to migrate data into a data center of any cloud provider should be basic know-how for every consultant working on cloud topics. Moving data by a disk or an appliance opens up a lot of possibilities for data migrations into the cloud. For example, for a huge DWH: transfer the RMAN backup into the cloud, restore it, close the gap with an incremental backup and synchronize it with Oracle GoldenGate. #ilike

Oracle Cloud Infrastructure Classic Object Storage – Cleanup Day with FTM CLI

Yesterday I decided to clean up old Oracle Cloud Infrastructure Classic objects. A lot of files from a 2018 project were lying around in Object Storage. Cleaning up these files in the OCI console was no option – they can only be deleted one by one, and with over 1500 files that is a bad idea. While searching for a way to mass-delete Object Storage objects, I found this tool: ftmcli – the Object Storage Classic File Transfer Manager. The MOS note contains the script and a short manual on how to use it. It's a Java-based tool, a perfect match for my Windows Subsystem for Linux (Ubuntu), which I often use for OCI actions.

OAC-Classic : How To Delete A Storage Container That has Multiple Objects. (Doc ID 2634021.1)

Link to the User Guide: https://docs.oracle.com/en/cloud/iaas-classic/storage-cloud/csclr/preparing-use-ftm-cli.html#GUID-5BB8647F-DDAD-4371-A519-1116402245FB

The Container List

Here is the content of my Object Storage Classic that has to be cleaned up – it contains five containers.

ftmcli – Installation

Via the WSL mount point /mnt, where you have access to your local Windows disk drives, transfer the package to the home directory and extract it.
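For example (the Windows download path and the archive name are placeholders):

  # Copy the FTM CLI package from the Windows drive and extract it
  $ cp /mnt/c/Users/<your_user>/Downloads/ftmcli.zip ~
  $ cd ~ && unzip ftmcli.zip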

In the extracted subdirectory there is a file called ftmcli.properties, where you have to set your OCI Object Storage information. Two parameters are used:

  • user=<OCI login>
  • rest-endpoint=<your Storage Classic Endpoint>

The Storage Classic Endpoint is visible in the Storage Classic Account tab. There is no need to set the password in the properties file. You are prompted for it when ftmcli is executed.
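A minimal ftmcli.properties sketch – the values are placeholders:

  user=first.last@example.com
  rest-endpoint=https://<identity-domain>.storage.oraclecloud.com/v1/Storage-<identity-domain>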

ftmcli – Commands

Available commands – output from ftmcli:

upload – Upload a file or a directory to a container.
download – Download an object or a virtual directory from a container.
create-container – Create a container.
restore – Restore an object from an Archive container.
list – List containers in the account or objects in a container.
delete – Delete a container in the account or an object in a container.
describe – Describes the attributes of a container in the account or an object in a container.
set – Set the metadata attribute(s) of a container in the account or an object in a container.
set-crp – Set a replication policy for a container.
copy – Copy an object to a destination container.


ftmcli – List Object Storage Containers

Here you will be prompted for your OCI password. A curious thing: my existing OCI password, which I had used for months, contained a lot of special characters ( ,LCaOQ3|~[PT”+x), and it was not accepted.

I had to set a new OCI account password with less complexity and fewer special characters, and then it worked.
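FTM CLI is started via java – a sketch:

  # List all containers in the account (prompts for the password)
  $ java -jar ftmcli.jar list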

ftmcli – Delete Object Storage Containers

With the -f flag, the deletion of a container that contains objects can be forced. I did this for all my containers.
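A sketch (the container name is a placeholder):

  # Force-delete a container including all objects in it
  $ java -jar ftmcli.jar delete -f <container_name>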

And finally all containers are removed. Job successfully done!

Summary

This is what I like about Oracle Cloud Infrastructure: for a lot of problems and technical questions there are plenty of scripts and tools available, like this one, ftmcli. But sometimes it's not so easy to find them.

MV2OCI – One-Click Move of your Data into Oracle Cloud Infrastructure Database

mv2oci is a tool which helps migrate on-premises data to an Oracle Cloud Infrastructure database. It is based on Oracle Data Pump and works as a data load tool: the local Data Pump export is transferred to the target cloud server and imported there automatically. It does not use Oracle Cloud Object Storage; the dump files are transferred with rsync or scp to the target database node. This is the difference to mv2adb – see my blog post here – which uses Object Storage. Optionally, the data can be transferred via database link (mv2oci parameter –netlink).

Everything you need to know about mv2oci is written in the My Oracle Support note (OCI) MV2OCI: move data to Oracle Cloud Database in “one-click” (Doc ID 2514026.1). The newest version of the RPM package can be downloaded there. The package has to be installed on the source server.

Prerequisites

  • SQL*Net connection between the two databases
  • A Java executable – in my case I have installed jre (yum install jre)
  • Verify that the firewall to the VCN subnet is open for port 1521 – port 22 is open by default
  • Password of database user SYSTEM

The Use Case

Let's move the database schema SOE from my on-premises Oracle Linux server into the cloud step by step. An Oracle Cloud Infrastructure database instance is already up and running, and the target tablespace is created. The data centers are connected by VPN.


Database Information

 | Source | Target
CDB Name | CDB118 | CDB118
PDB Name | pdb11801 | pdboci
Hostname | heckenweg | srv-cdb118
IP Address | 192.168.1.184 | 172.16.0.8
PDB Service Name | pdb11801.kestenholz.net | pdboci.subnetvcnmohnwe.vcnmohnwegvpn.oraclevcn.com

1. Package Installation

Download and transfer the package to the on-premises server, for example to directory /tmp. As user root, install the package.

Verify that the SSH private key used for the connection to the Oracle Cloud Infrastructure server is available and that the connection works. Here, the OCI SSH key is available in $HOME/.ssh.
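A quick connectivity test with that key – opc is the default OS user on OCI database nodes, adjust if yours differs:

  # Test the SSH connection to the OCI database node
  $ ssh -i /home/oracle/.ssh/id_rsa_oci_29012020 opc@172.16.0.8 hostname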

2. Encrypt the SYSTEM passwords for both databases – mv2oci encpass
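A sketch – the tool prompts for the password and prints the encrypted string, which goes into the configuration file:

  # Encrypt the SYSTEM password for the mv2oci configuration file
  $ mv2oci encpass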

3. Configuration File

A template of the configuration file is located in /opt/mv2oci/oci. I used the following parameters – other parameters, like ICHOME for the Instant Client configuration, are well described there. A combined sketch follows the tables below.

Source DB Parameters

Parameter | Value
DB_CONSTRING | //heckenweg/pdb11801.kestenholz.net
SYSTEM_DB_PASSWORD | Encrypted SYSTEM password
SCHEMAS | SOE
DUMP_FILES | /tmp/exp_soe_18102020_01.dmp, /tmp/exp_soe_18102020_02.dmp
OHOME | /u01/app/oracle/product/19.0.0/dbhome_1

Expdp/Impdp Parameters

Parameter | Value
DUMP_NAME | exp_soe_18102020.dmp
DUMP_PATH | /tmp
PARALLEL | 2 – creates two dump files called exp_soe_18102020_01.dmp and exp_soe_18102020_02.dmp

OCI Parameters

Parameter | Value
OC_HOST | 172.16.0.8
OC_SSHKEY | /home/oracle/.ssh/id_rsa_oci_29012020
OC_DB_CONSTRING | //172.16.0.8/pdboci.subnetvcnmohnwe.vcnmohnwegvpn.oraclevcn.com
OC_DB_PASSWORD | Encrypted SYSTEM password
OC_DUMP_LOC | /tmp
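Put together, a configuration file sketch assembled from the parameter tables above (the file name and the exact DUMP_NAME spelling are assumptions, check the template in /opt/mv2oci/oci):

  # mv2oci.conf – sketch based on the values above
  DB_CONSTRING=//heckenweg/pdb11801.kestenholz.net
  SYSTEM_DB_PASSWORD=<encrypted SYSTEM password>
  SCHEMAS=SOE
  DUMP_NAME=exp_soe_18102020.dmp
  DUMP_PATH=/tmp
  PARALLEL=2
  OHOME=/u01/app/oracle/product/19.0.0/dbhome_1
  OC_HOST=172.16.0.8
  OC_SSHKEY=/home/oracle/.ssh/id_rsa_oci_29012020
  OC_DB_CONSTRING=//172.16.0.8/pdboci.subnetvcnmohnwe.vcnmohnwegvpn.oraclevcn.com
  OC_DB_PASSWORD=<encrypted SYSTEM password>
  OC_DUMP_LOC=/tmp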


4. Export Data – mv2oci expdp

The dump files are created in /tmp.

5. Transfer Data – mv2oci putdump

The files are now available on the target server.

6. Import Data

Tablespace SOEDATA exists on the target server; otherwise you can use the EXTRA_IMPDP parameters in the mv2oci configuration file to do a remapping etc.

Analysis of the error in SQL Developer – there is a missing execute permission on package DBMS_LOCK.

This is an easy fix:
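The missing grant, as SYSDBA on the target PDB – a sketch:

  # Grant the missing execute permission on DBMS_LOCK to the imported schema
  $ sqlplus / as sysdba <<EOF
  alter session set container = pdboci;
  grant execute on dbms_lock to soe;
  EOF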

7. Reporting – mv2oci report

The report compares the objects on source and target database.

8. All in One – mv2oci auto

We did the steps one by one; with the parameter auto, the steps above are executed automatically (except reporting).
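A sketch – the -conf flag is an assumption borrowed from the mv2adb convention, check mv2oci --help:

  # Export, transfer and import in one run
  $ mv2oci auto -conf /opt/mv2oci/oci/mv2oci.conf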

9. Logfiles

Logfiles from the mv2oci actions are located in:

mv2oci | /opt/mv2oci/out/log
Data Pump | directory set in parameter DUMP_PATH

Summary

mv2oci is another great tool to support the movement to Oracle Cloud Infrastructure. Easy to configure, easy to use. #ilike

Oracle Release Update 19.9 – Lab Update Time (Grid Infrastructure Standalone & RDBMS)

The Oracle Release Update 19.9 for Linux has been available for a few days. Time to upgrade my lab environment at home, which consists of the following components:

  • Oracle Grid Infrastructure Standalone 19.8.0 with ASM Normal Redundancy – +ASM
  • Oracle 19.8.0 RDBMS as Repository for Oracle Enterprise Manager – EMREPO

The running 19.8 Environment

Output from Trivadis base environment tool TVD-Basenv(TM).

Patch Download, Transfer and Extract

I downloaded the combo patch which contains the RU for Grid Infrastructure and the Oracle Java Virtual Machine.

  • COMBO OF OJVM RU COMPONENT 19.9.0.0.201020 + GI RU 19.9.0.0.201020 (Patch 31720429)
    • OJVM RELEASE UPDATE 19.9.0.0.0 (Patch 31668882)
    • GI RELEASE UPDATE 19.9.0.0.0 (Patch 31750108)

The local stage directory with the extracted files:

OPatch

OPatch in Grid Infrastructure home directory has to be version 12.2.0.1.19 or later.
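The check itself:

  # Has to report 12.2.0.1.19 or later
  $ $ORACLE_HOME/OPatch/opatch version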

Version Verification

+ASM – Grid Infrastructure Standalone

EMREPO – RDBMS

CheckConflictAgainstOHWithDetail

+ASM – Grid Infrastructure Standalone

EMREPO – RDBMS
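The check looks like this per home – a sketch, where the sub-patch directory is a placeholder:

  # Conflict check of a patch directory against the Oracle home
  $ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
      -phBaseDir /u01/stage/31750108/<sub_patch_dir>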

Take care: one check produces an error. According to My Oracle Support note opatch CheckSystemSpace Command For Grid Infrastructure RU Fails With: “This command doesn’t support System Patch” (Doc ID 2634165.1), this error can be ignored, and the corresponding line can be removed in future patch apply actions.

CheckSystemSpace

To check for free space, we create two files which contain the patch directories. The checks have to be successful.

+ASM – Grid Infrastructure Standalone

EMREPO – RDBMS
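A sketch of the space check – the file simply lists the patch directories, one per line:

  # Space check with a file listing the patch directories
  $ $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace \
      -phBaseFile /tmp/patch_list_dbhome.txt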

Release Update Apply

As a first action, I stop the OEM. Then I run opatchauto as user root. Grid Infrastructure and RDBMS components are stopped, patched and restarted automatically one by one. Here is the full output of the patch apply where you can see the executed steps. In my lab environment, it took about 20 minutes.
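The call itself, as user root – the Grid Infrastructure home and the stage path are placeholders:

  # Patch Grid Infrastructure home and database home in one run
  $ sudo <GI_HOME>/OPatch/opatchauto apply /u01/stage/31750108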

Version Verification

+ASM – Grid Infrastructure Standalone

EMREPO – RDBMS

As you can see, the components were updated successfully. Time to start Oracle Enterprise Manager 13c Release 4 again.

OJVM Apply

At the end, the OJVM patch has to be applied. Set ORACLE_SID and ORACLE_HOME according to the RDBMS environment; a combined sketch of all steps follows below.

Stop the RDBMS with srvctl

Change to OJVM Patch Directory

Apply the OJVM Patch

Startup Upgrade – Container Database and Pluggable Database – in SQL*Plus

Run datapatch

Shutdown Database in SQL*Plus

Start the RDBMS with srvctl
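Put together, the whole OJVM sequence – paths from my lab, with the stage directory being where patch 31668882 was extracted:

  # Stop the database and apply the binary patch
  $ srvctl stop database -d EMREPO
  $ cd /u01/stage/31668882
  $ $ORACLE_HOME/OPatch/opatch apply
  # Open CDB and PDBs in upgrade mode
  $ sqlplus / as sysdba <<EOF
  startup upgrade
  alter pluggable database all open upgrade;
  exit
  EOF
  # Run datapatch, then restart the database
  $ $ORACLE_HOME/OPatch/datapatch -verbose
  $ sqlplus / as sysdba <<EOF
  shutdown immediate
  exit
  EOF
  $ srvctl start database -d EMREPO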

Version Verification

EMREPO – RDBMS

Summary

There were no issues. OK, it's just a GI standalone environment, but this was really a pleasure.

The Grafana Plugins for Oracle Cloud Infrastructure Monitoring are back!

In September 2019, I wrote a blog post on how to monitor an Oracle Cloud Infrastructure Autonomous Database with the Grafana plugin oci-datasource. But some weeks after publication, the plugin was no longer available on the Grafana page, and only Oracle and Grafana had a clue why.

Now everything is fine again. Since the 6th of October, two new Grafana plugins are available for download. Neither of them requires a Grafana Enterprise account.

The first one is the successor of the former oci-datasource plugin; the second one gets logs from OCI resources like Compute or Storage. As an infrastructure guy, let's install the Oracle Cloud Infrastructure Metrics plugin on a local Oracle Enterprise Linux 8 installation!

Install and configure the OCI CLI

Link: https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm

OS User oci and Installer

As OS user root, create a new user called oci, then switch to the newly created user oci.

Run the installer script.

In this demo case, I use the default settings and tab completion. After some seconds, all packages are installed and the OCI CLI is ready to be configured.

Configure the OCI CLI

If you already have an API signing key pair from a former OCI action, you can use it here. Otherwise, this setup process creates a new private and public key for you. Take care: the public key has to be in PEM format!

Required values to finish the setup:

config location | /home/oci/.oci/config
user OCID | OCI > Identity > Users > [YOUR_USER] > OCID
tenancy OCID | OCI > Administration > Tenancy Details > [YOUR_TENANCY] > OCID
region | choose your region, e.g. eu-zurich-1
generate a new API signing RSA key pair | Y – only if you don't already have an existing key pair
key directory | /home/oci/.oci
key name | oci_api_key_07102020


Run the setup.
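The setup is one command:

  # Interactive dialog asking for the values listed above
  $ oci setup config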

OCI Console API Key

The content of the created public key has to be added in the OCI Console as an API key – just copy and paste it. OCI Console >> Identity >> Users >> User Details >> API Keys >> Add Public Key.

How to: https://docs.cloud.oracle.com/Content/API/Concepts/apisigningkey.htm#How2

OCI CLI Configuration Test

Verify the configuration by executing a CLI command – for example, list images based on Oracle Linux.
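A sketch – the compartment OCID (here the tenancy root) is a placeholder:

  # List the available Oracle Linux platform images
  $ oci compute image list \
      --compartment-id ocid1.tenancy.oc1..aaaa... \
      --operating-system "Oracle Linux"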

OCI Console Group Policy

If your user is not part of the Administrators group, a new group and a group policy are needed with the permission to read tenancy metrics. OCI Console >> Identity >> Groups >> Create Group.

Create the policy in the root compartment of your tenancy. OCI Console >> Identity >> Policies >> Create Policy.
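The policy statement itself – the group name is an example:

  Allow group GrafanaMetricsGroup to read metrics in tenancy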

Install and configure Grafana and the Oracle Cloud Infrastructure Metrics Data Source Plugin

Grafana

Link: https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/grafana.htm

Start and enable the service.

Don’t forget to open the firewall port 3000 for the Grafana UI.
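A sketch of the service and firewall commands on Oracle Linux 8:

  # Enable and start the Grafana server
  $ sudo systemctl daemon-reload
  $ sudo systemctl enable --now grafana-server
  # Open the firewall for the Grafana UI
  $ sudo firewall-cmd --permanent --add-port=3000/tcp
  $ sudo firewall-cmd --reload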

Oracle Cloud Infrastructure Metrics Data Source Plugin

List the available OCI Grafana plugins.


Install the metric plugin.

Restart Grafana Server.
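The three steps as commands – the plugin ID is an assumption, verify it in the list-remote output:

  # List the remotely available plugins and filter for OCI
  $ grafana-cli plugins list-remote | grep -i oci
  # Install the OCI metrics data source plugin
  $ sudo grafana-cli plugins install oci-metrics-datasource
  # Restart Grafana to load the plugin
  $ sudo systemctl restart grafana-server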

Grafana Data Source Configuration

RSA Key Configuration

Grafana needs the configuration file and the RSA key of the user oci. One solution: as user root, copy the files and set the ownership to OS user grafana.

Then change the path to the key file in /usr/share/grafana/.oci/config so that it points to the copied key – see the sketch below.
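For example, with the key pair created earlier (a sketch):

  # Copy the oci user's configuration to the grafana user
  $ sudo cp -r /home/oci/.oci /usr/share/grafana/
  $ sudo chown -R grafana:grafana /usr/share/grafana/.oci
  # Point key_file at the new location
  $ sudo sed -i 's|/home/oci/.oci/oci_api_key_07102020.pem|/usr/share/grafana/.oci/oci_api_key_07102020.pem|' \
      /usr/share/grafana/.oci/config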

Add a new Data Source

Log in to the Grafana server at <server-ip>:3000. The initial username and password are both admin; it's recommended to change the password at the first login. Add a new data source: Configuration >> Data Sources >> Add data source.

Filter by oracle and select the Oracle Cloud Infrastructure Metrics plugin.

Set Tenancy OCID, select your Default Region and set the Environment to local. Press Save & Test to verify the functionality.

Create a new Dashboard and add a new panel.

Now you can query the data, for example the VPN bandwidth for region eu-zurich-1 in my personal compartment. Feel free to add new panels based on the available metrics.

Example

Summary

Great to have the Oracle Cloud Infrastructure Grafana plugins back. To get an idea of which metrics are available, check the OCI Console >> Monitoring >> Metrics Explorer. The free ADB is not available in the collected metrics, but this is a general issue.

This was a review of the first OCI plugin. Next week, I will take a deeper look into the Oracle Cloud Infrastructure Logging data source plugin.