Friday, October 14, 2016

What happens to attached volumes when snapshotting VMs on an OCM

Introduction

OCM provides the capability to take a "snapshot" of a virtual machine.  As described in the documentation, a snapshot takes a copy of the machine image boot disk.  There are essentially two purposes for doing this: firstly, the resulting machine image can be used as a template to create additional VMs from the copy; secondly, it can serve as a backup copy from which to recreate the VM if needed.  (Really just the same as the first, but using the copy to recreate rather than to clone.)

This leads me to a couple of questions that this blog posting will answer.

  1. What happens to storage volumes that have been added to the VM?  Are these copied as well?
  2. Is this a good mechanism for increasing the root volume size, to give more disk space to VMs that might want a bit more?

I have tested two specific scenarios, starting from a VM based on the OL6 template with an attached volume:

  1. use the new volume to extend the root disk logical volume
  2. create a new logical volume from it and mount it on the filesystem, say at /u01.
In both cases, snapshot the VM, create a new VM from the resulting machine image, and look at what happened.

Extending the root volume

In the first case I simply create a VM from the OL6 base template using a simple orchestration, create an additional volume, and attach the new volume to the VM.  Having created the VM I log on to it and use the Linux LVM commands to extend the size of the root disk.

The steps taken are:-
  1. Use fdisk to partition the attached volume and set the partition type to Linux LVM (see the sketch after this list).
  2. Use lvdisplay and vgdisplay to identify the current root volume group (probably VolGroup00).
  3. Use vgextend to extend the current volume group with the storage from the attached volume.
  4. Use lvextend to make the root logical volume larger.
  5. Use resize2fs on the device mapper path to make the extra space available to the filesystem.
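
For reference, steps 1 and 2 are not included in the output below; a minimal sketch of preparing the attached volume, assuming it appears as /dev/xvdb, would look something like this:

# fdisk /dev/xvdb        # create a single partition and set its type to 8e (Linux LVM)
# pvcreate /dev/xvdb1    # initialise the new partition as a physical volume
                         # (some LVM versions will do this implicitly when vgextend is run)
# vgdisplay              # confirm the existing volume group (VolGroup00 here)
# lvdisplay              # confirm the root logical volume (LogVol00 here)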

Some of the key commands and output are shown below.  The result of all these commands is that the root filesystem has grown from 11G to 61G, using all of the 50 GB in the attached volume.

# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   11G  3.3G  6.9G  33% /
tmpfs                            873M     0  873M   0% /dev/shm
/dev/xvda1                       239M   55M  168M  25% /boot
/dev/mapper/VolGroup00-LogVol02  2.0G  3.0M  1.9G   1% /opt/emagent_instance


# vgextend VolGroup00 /dev/xvdb1
  Volume group "VolGroup00" successfully extended
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/xvda2
  VG Name               VolGroup00
  PV Size               17.75 GiB / not usable 2.12 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              4543
  Free PE               191
  Allocated PE          4352
  PV UUID               VnZ5PZ-8IrP-nggg-B7Fo-9sog-g4bw-skbPBE
  
  --- Physical volume ---
  PV Name               /dev/xvdb1
  VG Name               VolGroup00
  PV Size               50.00 GiB / not usable 3.31 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              12799
  Free PE               12799
  Allocated PE          0
  PV UUID               c1m21x-2IBX-Qrsy-1AJI-3UQ7-apUe-weETeq


# lvextend -L+55G /dev/VolGroup00/LogVol00
  Extending logical volume LogVol00 to 66.00 GiB
  Insufficient free space: 14080 extents needed, but only 12990 available

# lvextend -l+12990 /dev/VolGroup00/LogVol00
  Extending logical volume LogVol00 to 61.74 GiB
  Logical volume LogVol00 successfully resized
 

# resize2fs /dev/mapper/VolGroup00-LogVol00
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/VolGroup00-LogVol00 is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 4
The filesystem on /dev/mapper/VolGroup00-LogVol00 is now 16185344 blocks long.

# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   61G  3.3G   55G   6% /
tmpfs                            873M     0  873M   0% /dev/shm
/dev/xvda1                       239M   55M  168M  25% /boot
/dev/mapper/VolGroup00-LogVol02  2.0G  3.0M  1.9G   1% /opt/emagent_instance

I now use the EMCC UI to create a snapshot of the VM.  This could also be done from the command line.
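
For the command-line route the oracle-compute CLI exposes a snapshot object; something along the lines of the sketch below should work, although the exact arguments and behaviour should be verified against the CLI help for your release (the instance name here is hypothetical):

# oracle-compute add snapshot /osc/public/my-vm001   # request a machine image of the instance's boot disk
# oracle-compute list snapshot /osc                  # check the state of the snapshot request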

Snapshotting a VM
The snapshot takes a few minutes to complete and once done there is a template available that will allow the creation of new VMs.

Snapshot appearing as a template in the OCM library

Creating a VM based on the snapshot

Once the new VM is created, just log on and have a look at the root disk size.  It is immediately clear that it has a root disk of 61GB and no attached volumes, so it appears that expanding the root LVM volume and then snapshotting will effectively increase the size of the disk in the machine image.

ERRATA - This approach does not work for increasing the root disk size.  Further investigation shows that while the OS reports the increased disk size and all looks good, issuing a pvdisplay command reports that a device is missing.  This was confirmed by simply filling the disk up: as soon as the used space reached the same level as on the original disk, warnings about possible loss of data were reported and some of the new writes failed to be persisted to disk.  The conclusion - the actual disk space available was not expanded.
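
For reference, the check that exposed the problem can be reproduced with something along these lines (the file name and count are hypothetical; the point is simply to push used space past the original 11G boot disk capacity):

# pvdisplay                                            # on the cloned VM this warned that a physical volume was missing
# dd if=/dev/zero of=/fill-test.dat bs=1M count=10000  # keep writing until usage passes the original disk size

At that point errors such as the following appeared: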

EXT4-fs error (device dm-2): ext4_wait_block_bitmap:448: comm flush-252:2: Cannot read block bitmap - block_group = 117, block_bitmap = 3670021
EXT4-fs (dm-2): delayed block allocation failed for inode 13345 at logical offset 16665 with max blocks 1 with error -5
EXT4-fs (dm-2): This should not happen!! Data will be lost

Adding the volume as a new partition/logical volume

The same process was completed for an attached volume, but this time rather than extending VolGroup00 I created a new physical volume, volume group and logical volume, which I mounted at /u01.  (fdisk to set the partition type to LVM [8e], pvcreate to create the physical volume, vgcreate to create a volume group using the new volume, lvcreate to create the logical volume, and then mount the logical volume at /u01 - see the sketch below.)  Exactly the same process was then used to create a snapshot and create a new VM from the resulting machine image.  This time round the disk space on the new VM was just 11GB - the disk size of the original template.  i.e. the snapshot ignored the additional volume and did what the docs say, specifically take an image of the machine's boot disk.
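
For reference, a minimal sketch of that sequence, again assuming the attached volume appears as /dev/xvdb and using hypothetical names for the new volume group and logical volume:

# fdisk /dev/xvdb                          # create one partition, /dev/xvdb1, and set its type to 8e (Linux LVM)
# pvcreate /dev/xvdb1                      # initialise it as a physical volume
# vgcreate VolGroupU01 /dev/xvdb1          # hypothetical volume group name
# lvcreate -l 100%FREE -n LogVolU01 VolGroupU01
# mkfs.ext4 /dev/VolGroupU01/LogVolU01     # put an ext4 filesystem on the new logical volume
# mkdir -p /u01
# mount /dev/VolGroupU01/LogVolU01 /u01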

Conclusion

As a mechanism to increase the root disk space of a template, the approach of creating a VM with an attached volume and using that volume to extend the root volume group/logical volume will NOT produce a new template with larger disk space.

If you are planning to create a VM with an attached volume and think that the snapshot will back up the entire VM, then think again.  To create a full snapshot of your virtual machine you will also need to snapshot (or otherwise back up) any volumes that are not part of the machine image boot disk.


Friday, September 23, 2016

Using Reporting in Enterprise Manager

Introduction

A common question that arises when talking to customers is how to get visibility of the quantity of compute resource (CPU, memory and storage) being used.  This information is fairly easy to extract from the oracle-compute command line, but it is also surfaced in Enterprise Manager.  As an exercise I wanted to try to produce a report from Enterprise Manager which gives the detail of the compute resource used.  This blog posting is just a capture of my experiences and not necessarily a best-practice approach to reporting using EM12c against an OCM.
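
For reference, the command-line equivalent is largely a matter of listing the relevant objects per tenancy; a rough sketch is shown below, noting that the field names passed to -F are assumptions and should be checked against the CLI help:

# oracle-compute list instance /osc -Fname,shape,state        # the shape implies the CPU/memory allocation
# oracle-compute list storagevolume /osc -Fname,size,state    # storage volumes owned by the tenancy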

EM12c Reporting Overview

Enterprise Manager, as a monitoring tool, captures a great deal of information about anything it is monitoring: both configuration details and, of course, some historical information on usage.  Provided you are logged in to the tool as a user with permissions to access reports and to use BI Publisher to create custom reports, you will find the reports under the Enterprise menu, which is typically in the top left corner of the screen.


There are two options under Reports: Information Publisher Reports, which is a list of pre-defined reports that can be run to pull out commonly needed information, and BI Publisher Enterprise Reports.  The BI Publisher approach is the preferred route, as this is now the report generator of choice for Enterprise Manager; the others are deprecated and may eventually be dropped.  Like the Information Publisher Reports there are a series of out-of-the-box reports you can utilise, but in many cases a custom report is the way to go.

Creating a BI Publisher Report

BI Publisher is an incredibly powerful reporting tool that can query any database (or indeed other sources of data) and push that data into an online report that can then be run on a regular basis, converted into a PDF and e-mailed out, or run ad hoc as needed.  With Enterprise Manager the main source of data is the underlying EM12c repository database.

Creating a report is essentially a two-step process.  First you must create a data model, where you specify which tables to query, define the associations between the tables, and add filters and conditions to extract the specific data of interest.  Once the model has been defined you can optionally add extra query parameters which can tune the report at run time.  Once that is done you build up a report based on the data in the model; the report is built using simple wizards to produce tables of data, total up columns, display details on various graph types and so on.

Building the DataModel for OCM

The Oracle Cloud Machine makes use of an EM12c Virtual Infrastructure plugin for most of the monitoring and management functionality.  This plugin stores much of its data in tables prefixed with "MGMT$VI", and using BI Publisher we can create a new data model that picks out the data we are interested in.  When creating a new data model the default data source is called EMREPOS, which is the database used by Enterprise Manager.  We can then simply type in the SQL if we know it in advance, or alternatively use the "Query Builder", which allows us to dynamically build up the query using a fairly intuitive web-based GUI.




The query builder allows us to drag and drop the tables onto a palette and select the fields we are interested in.  We can add conditions to the query, define the linkage between tables and so on.



In our specific use case we are looking to understand the resources used by the virtual machines, specifically the allocated CPU, memory and the storage volumes that have been added to the VMs.  This information is available in two tables, MGMT$VI_NM_OSV_CFG_DETAILS and MGMT$VI_NM_STORAGE_CFG.

BI Publisher has a mechanism to allow the report user to specify "parameters" which can be used to filter the data returned by the model.  Since it seems sensible to be able to query the model by tenancy, I have added a parameter to the data model which allows the user to specify one or more tenancies to report on.  For these tables the tenancy is effectively defined by the Quota (an alternative breakdown might be per orchestration).  To build up a parameter we have to create a "list of values" which the user can select from; as with the main data set this is defined via SQL queries against the database.  To show all tenancies I used the following SQL query:-


select "MGMT$VI_NM_OSV_CFG_DETAILS"."QUOTA" as "QUOTA" from "MGMT_VIEW"."MGMT$VI_NM_OSV_CFG_DETAILS" "MGMT$VI_NM_OSV_CFG_DETAILS" 
 where "MGMT$VI_NM_OSV_CFG_DETAILS"."QUOTA" !='RANDOMTEXT' 
   AND VNC_URL=(SELECT MAX(VNC_URL) from MGMT_VIEW.MGMT$VI_NM_OSV_CFG_DETAILS "B" WHERE b.QUOTA=MGMT$VI_NM_OSV_CFG_DETAILS.QUOTA)

This allows me to build up a list of tenancies (quotas) which has been de-duplicated via the where clause.  (I could not get SELECT DISTINCT to work....)  This value list (the tenancies) is used as the selection for the parameter, which is presented on the report to allow the user to narrow the report down to specific tenancies.



As shown in the screenshot I have chosen to allow the user to select multiple tenancies to report on, or all of the tenancies.  If all are selected then a comma-separated list of all tenancies on the rack is passed in as the parameter to the report.

Building the report

Once that is done we can turn our attention to the report.  The first thing to do is to click on the Data tab in the data model and press View to have a look at what data is actually returned by your data model.  If it looks like the correct information is being returned then click on "Save as Sample Data"; the data returned is saved and used as the basis for the data shown while the report is developed.

The easiest way to create the report from here is to click the "Create Report" button at the top right of the screen; this opens up a wizard that allows you to create the report and add charts and data tables to it.




Having completed the report design we can then run the report.  In the screenshot below I have picked out three of the tenancies I am interested in, and we can see at a glance that the JCS Demo tenancy is using the most memory and CPU while the DBCS demo account is using the most storage space.  Exactly as we would expect for a relatively small application.


Conclusion

Even though the OCM is managed by Oracle, as a tenant user of the OCM rack it is fairly easy to use Enterprise Manager to gain insight into the usage of the OCM, and BI Publisher provides a way to extract that data into useful management reports.

Tuesday, June 7, 2016

Encrypting "disks" on Oracle Cloud Machine

Introduction


The Oracle Cloud Machine, like the public cloud, is administered by Oracle.  While the Oracle staff who manage the rack are highly skilled professionals and all their actions are audited, there is an obvious concern about the security of customer data at rest.  On the OCM the administrators of the rack have no direct access to the customer's virtual machines.  This article demonstrates how storage volumes can be used by a tenant to mount block storage devices that are encrypted and hence further obscured from system administrators.

(As a side effect of demonstrating the security aspect this is also a useful reference for using cryptsetup to encrypt disks.)

Setup


In order to demonstrate that a storage volume is encrypted and hence not visible to cloud administrators we use a very simple setup in which two storage volumes are created, one to be encrypted and the other left in plain text.  These volumes are "attached" to a virtual machine, and within the virtual machine we use the Linux utility cryptsetup to encrypt one of the volumes while the other is simply mounted with an ext4 filesystem on it.  Plain text files are created in both volumes, and then we switch to the cloud administration side of things to see whether it is possible to read the content of the two volumes.

Virtual Machine Instance Creation



First of all we create two storage volumes.  This can be done from the command line easily.


# oracle-compute add storagevolume /osc/public/encrypt-storage-001 10G /oracle/public/storage/default --description "A test 10Gb storage volume that we will try to have encrypted" 

# oracle-compute add storagevolume /osc/public/plain-storage-001 10G /oracle/public/storage/default --description "A test 10Gb storage volume that will be left unencrypted"


Then we create a virtual machine via an orchestration defined in a json file

# cat simple_vm_with_storage.json
{
  "name": "/osc/public/encryption-vm",
  "oplans": [
    {
      "obj_type": "launchplan",
      "ha_policy": "active",
      "label": "encryption volume launch plan",
      "objects": [
        {
          "instances": [
            {
              "label": "encryption-vm001",
              "imagelist": "/oracle/public/linux6_16.1.2_64",
              "networking": {
                "net0": { "vnet": "/osc/public/vnet-eoib-1706" }
              },
              "storage_attachments": [
                { "volume": "/osc/public/encrypt-storage-001", "index": 1 },
                { "volume": "/osc/public/plain-storage-001", "index": 2 }
              ],
              "shape": "ot1",
              "sshkeys": ["/osc/public/labkey"],
              "attributes": {
                "userdata": {
                  "key1": "value 1",
                  "key2": "value 2"
                }
              }
            }
          ]
        }
      ]
    }
  ]
}



This JSON file will create a single instance called encryption-vm001 based on the OL6 base template, connect it to the EoIB public network and attach the two storage volumes that we created earlier.  (In this case the storage volumes were created independently of the orchestration.)

We upload the orchestration and start it.  Once it is up and running the instance will be listed as running and we can see the IP address assigned to it.

# oracle-compute add orchestration ./simple_vm_with_storage.json 


(see above for json)

# oracle-compute start orchestration /osc/public/encryption-vm

# oracle-compute list instance /osc -Fname,state,ip


Configuring volumes within instance


Having created and started our instance we can look at the attached volumes and run through the process of using Oracle Linux to set up one of the volumes as an encrypted one.  To see the volumes on the instance we use the fdisk command.



# fdisk -l

Disk /dev/xvda: 19.3 GB, 19327352832 bytes
255 heads, 63 sectors/track, 2349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c520c


    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          32      256000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/xvda2              32        2349    18611318+  8e  Linux LVM



Disk /dev/xvdb: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000



Disk /dev/xvdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup00-LogVol01: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup00-LogVol00: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup00-LogVol02: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


With the OCM each volume that is attached gets an index; in the orchestration above we use indexes 1 and 2.  These numbers equate to the xvd<char> devices that appear in the fdisk output, where 1 equates to b, 2 equates to c and so on.  Thus in the output above the two attached volumes are /dev/xvdb and /dev/xvdc.  The next step is to set up one of the volumes as an encrypted block device.  To do this I used the Linux command cryptsetup, defining the cipher information and so on.  In the example shown below I show it run twice, as the first time I answered the prompt with a lower case yes while the command mandates an uppercase YES as the answer.  Easy mistake to make!



# cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random luksFormat /dev/xvdb



WARNING!
========
This will overwrite data on /dev/xvdb irrevocably.


Are you sure? (Type uppercase yes): yes
Command failed with code 22: Invalid argument

# cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random luksFormat /dev/xvdb

WARNING!
========

This will overwrite data on /dev/xvdb irrevocably.
Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:
Command successful.




Now we can open the encrypted drive so that it appears as a normal block device.  This creates the /dev/mapper/<name> device file and allows it to be mounted by the OS.  The luksOpen command will prompt for the passphrase used earlier.

# cryptsetup luksOpen /dev/xvdb encrypted-drive

# cryptsetup -v status encrypted-drive
/dev/mapper/encrypted-drive is active.
  type:  LUKS1
  cipher:  aes-xts-plain64
  keysize: 512 bits
  device:  /dev/xvdb
  offset:  4096 sectors
  size:    20967424 sectors
  mode:    read/write
Command successful.


This is a new raw volume so we need to put some sort of filesystem onto it.  In this case I use the ext4 filesystem.


# mkfs.ext4 /dev/mapper/encrypted-drive
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2620928 blocks
131046 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Now simply create a directory where we can mount the encrypted drive and create a simple text file.

# mkdir /u01
# mount /dev/mapper/encrypted-drive /u01
# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   11G  3.3G  7.0G  32% /
tmpfs                            3.8G     0  3.8G   0% /dev/shm
/dev/xvda1                       239M   55M  168M  25% /boot
/dev/mapper/VolGroup00-LogVol02  2.0G  3.0M  1.9G   1% /opt/emagent_instance
/dev/mapper/encrypted-drive      9.8G   23M  9.2G   1% /u01


Having done this we can do a quick check to ensure that we can unmount and close the encrypted disk, then re-open it by providing the passphrase and mount it for use.


# umount /u01
# cryptsetup luksClose encrypted-drive
# mount /dev/mapper/encrypted-drive /u01
mount: you must specify the filesystem type



# cryptsetup luksOpen /dev/xvdb encrypted-drive
Enter passphrase for /dev/xvdb:
# mount /dev/mapper/encrypted-drive /u01
# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   11G  3.3G  7.0G  32% /
tmpfs                            3.8G     0  3.8G   0% /dev/shm
/dev/xvda1                       239M   55M  168M  25% /boot
/dev/mapper/VolGroup00-LogVol02  2.0G  3.0M  1.9G   1% /opt/emagent_instance
/dev/mapper/encrypted-drive      9.8G   23M  9.2G   1% /u01




Next we prepare the plain /dev/xvdc volume: create a file system on it, mount it in another directory and create a plain text file in this volume as well (a sketch is shown below).  If the encryption has all worked then cloud operations may be able to access the plain text volume and read its content, but the content of the encrypted volume is kept secret unless the passphrase is known.
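
A minimal sketch of preparing the plain volume (the /u02 mount point is hypothetical, and the filesystem is created directly on the device, which matches the straight loop-mount of the raw image later in this post):

# mkfs.ext4 /dev/xvdc        # plain, unencrypted ext4 filesystem
# mkdir -p /u02
# mount /dev/xvdc /u02
# echo "This text is in the unencrypted volume and hence should be readable by anyone....." > /u02/don-plain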

Testing

As a general rule cloud operations do not have access to the customer's virtual machines unless the customer shares the login credentials or the ssh keys with Oracle.  However, because the OCM stores the volumes as raw disk images on the internal ZFS storage appliance in the EPC_<rack>/storagepool1 filesystem it is possible for cloud operations to access these files and mount the images directly to access the content.


As a cloud operations user I have accessed the ZFS storage device and copied the storage volume disks off the rack.  On a Linux server I then attempt to mount these volumes to see their content.

# file plain_storage.raw
plain_storage.raw: Linux rev 1.0 ext4 filesystem data (extents) (large files) (huge files)
# mount -o loop ./plain_storage.raw /mnt/don
# cat /mnt/don/don-plain

This text is in the unencrypted volume and hence should be readable by anyone.....
# umount /mnt/don




So it is obviously fairly easy to access the unencrypted storage.  Now let's see what is involved in accessing the encrypted storage volume.

# file encrypted_storage.raw
encrypted_storage.raw: LUKS encrypted file, ver 1 [aes, xts-plain64, sha512] UUID: edff3d80-3813-4abc-a58c-e2f1862

# mount -o loop ./encrypted_storage.raw /mnt/don
mount: unknown filesystem type 'crypto_LUKS'

# losetup /dev/loop0 ./encrypted_storage.raw
# mount /dev/loop0 /mnt/don
mount: unknown filesystem type 'crypto_LUKS'


# cryptsetup luksOpen /dev/loop0 encrypted-dev
Enter passphrase for /dev/loop0:

# mount /dev/mapper/encrypted-dev /mnt/don

# cat /mnt/don/don

some text

#


In the above I attempt to mount the encrypted filesystem using the same mechanism that previously worked, but to no avail.  The only way to mount the disk is to make use of the cryptsetup command, which mandates entering the passphrase.  Obviously the passphrase is not something that is shared with cloud operations, so they would be unable to access the content of the raw file.

Conclusion

Using the standard Linux cryptsetup command it is certainly a relatively simple task to encrypt any storage volume that is mounted on a VM, such that the data is kept private to the end customer/tenant and cloud operations has no mechanism for seeing the content.

The downside of encrypting is that the administrator of the virtual machine (the end customer) has to log on and provide the passphrase to mount the volume.  This is not a major problem unless you are trying to start up the applications that use the encrypted volume automatically; in that case it becomes necessary to have a manual startup procedure.

Monday, June 6, 2016

Introducing Oracle Cloud Machine


As mentioned in my exablurb blog I am now working beyond the bounds of Oracle Engineered Systems to incorporate the Oracle Cloud Machine as introduced here.  As such this blog will include postings that apply to both Exalogic and the Cloud Machine.


I'll start by quoting one of the PMs from the Cloud Machine team to explain just what the Cloud Machine actually is:

"Oracle Cloud Machine is a cloud offering which gives you new choices for the Oracle Cloud Platform by bringing the Oracle Cloud to your data center. Leveraging our Public Cloud’s PaaS and IaaS capabilities, it enables the innovation that cloud provides, at the same time meeting the business and regulatory requirements behind your firewall. It provides a stepping-stone in the journey to cloud, as it allows you to get the advantages of cloud faster, easier and with less disruption. As an on- premises implementation of Oracle Cloud, Oracle Cloud Machine lets you run your applications seamlessly wherever you want, as workloads are completely portable between the public cloud and your data center. You can now leverage the latest innovations for rapid development that cloud provides, all while meeting any data sovereignty and residence requirements. It also provides subscription based pricing in your data center, managed by Oracle, with single vendor accountability."

Or to put it simply, the Cloud Machine is a set of compute services that Oracle comes along and installs in your data center and then runs as a service, so that you, as a customer, can consume IaaS and PaaS services without having to worry about building up the management infrastructure and the on-going operational management of the platform.


Lots more information can be found in the public documentation.