Tuesday, June 7, 2016

Encrypting "disks" on Oracle Cloud Machine

Introduction


The Oracle Cloud Machine, like the public cloud, is administered by Oracle.  While the Oracle staff who manage the rack are highly skilled professionals and all their actions are audited, there is an obvious concern about the security of customer data at rest.  On the OCM the administrators of the rack have no direct access to the customer's virtual machines.  This article demonstrates how storage volumes can be mounted by a tenant as encrypted block storage devices, and hence further obscured from system administrators.

(As a side effect of demonstrating the security aspect, this also serves as a useful reference for using cryptsetup to encrypt disks.)

Setup


To demonstrate that an encrypted storage volume is not visible to cloud administrators we use a very simple setup: two storage volumes are created, one to be encrypted and the other left in plain text.  These volumes are "attached" to a virtual machine, and within the virtual machine we use the Linux utility cryptsetup to encrypt one of the volumes; the other is simply mounted with an ext4 filesystem on it.  Plain text files are created in both volumes, and then we switch to the cloud administration side of things to see whether it is possible to read the content of the two volumes.

Virtual Machine Instance Creation



First of all we create two storage volumes.  This can be done from the command line easily.


# oracle-compute add storagevolume /osc/public/encrypt-storage-001 10G /oracle/public/storage/default --description "A test 10Gb storage volume that we will try to have encrypted" 

# oracle-compute add storagevolume /osc/public/plain-storage-001 10G /oracle/public/storage/default --description "A test 10Gb storage volume that will be left unencrypted"


Then we create a virtual machine via an orchestration defined in a JSON file.

# cat simple_vm_with_storage.json
{
  "name": "/osc/public/encryption-vm",
  "oplans": [
    {
      "obj_type": "launchplan",
      "ha_policy": "active",
      "label": "encryption volume launch plan",
      "objects": [
        {
          "instances": [
            {
              "label": "encryption-vm001",
              "imagelist": "/oracle/public/linux6_16.1.2_64",
              "networking": {
                "net0": { "vnet": "/osc/public/vnet-eoib-1706" }
              },
              "storage_attachments": [
                { "volume": "/osc/public/encrypt-storage-001", "index": 1 },
                { "volume": "/osc/public/plain-storage-001", "index": 2 }
              ],
              "shape": "ot1",
              "sshkeys": ["/osc/public/labkey"],
              "attributes": {
                "userdata": {
                  "key1": "value 1",
                  "key2": "value 2"
                }
              }
            }
          ]
        }
      ]
    }
  ]
}



This JSON file will create a single instance called encryption-vm001 based on the OL6 base template, connect it to the EoIB public network, and attach the two storage volumes created earlier.  (In this case the storage volumes were created independently of the orchestration.)

We upload the orchestration and start it.  Once it is up and running the instance will be listed as running and we can see the IP address assigned to it.

# oracle-compute add orchestration ./simple_vm_with_storage.json 


(see above for json)

# oracle-compute start orchestration /osc/public/encryption-vm

# oracle-compute list instance /osc -Fname,state,ip


Configuring volumes within instance


Having created and started our instance we can look at the attached volumes and run through the process of setting one of them up as an encrypted volume using Oracle Linux.  To see the volumes on the instance we use the fdisk command.



# fdisk -l

Disk /dev/xvda: 19.3 GB, 19327352832 bytes
255 heads, 63 sectors/track, 2349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c520c


    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          32      256000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/xvda2              32        2349    18611318+  8e  Linux LVM



Disk /dev/xvdb: 10.7 GB, 10737418240 bytes

255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000



Disk /dev/xvdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup00-LogVol01: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup00-LogVol00: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup00-LogVol02: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


On the OCM each attached volume gets an index; in the orchestration above we use indexes 1 and 2.  These numbers map to the xvd<char> devices that appear in the fdisk output, where index 1 equates to b, index 2 to c, and so on.  Thus in the output above the two attached volumes are /dev/xvdb and /dev/xvdc.  The next step is to set up one of the volumes as an encrypted block device.  To do this I used the Linux command cryptsetup, defining the cipher information etc.  In the example below the command is run twice, because the first time I answered the confirmation prompt with a lowercase yes; the command mandates uppercase YES as an answer.  An easy mistake to make!
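The index-to-letter convention can be expressed as a tiny shell helper (illustrative only; index_to_device is not an OCM tool, just the naming rule written out):

```shell
# Illustrative helper for the OCM attachment-index naming convention.
# /dev/xvda is the boot disk, so attachment index 1 maps to /dev/xvdb,
# index 2 to /dev/xvdc, and so on.
index_to_device() {
  letters="abcdefghijklmnopqrstuvwxyz"
  echo "/dev/xvd$(echo "$letters" | cut -c $(( $1 + 1 )))"
}

index_to_device 1   # /dev/xvdb
index_to_device 2   # /dev/xvdc
```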



# cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random luksFormat /dev/xvdb



WARNING!
========
This will overwrite data on /dev/xvdb irrevocably.


Are you sure? (Type uppercase yes): yes
Command failed with code 22: Invalid argument

# cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random luksFormat /dev/xvdb

WARNING!
========

This will overwrite data on /dev/xvdb irrevocably.
Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:
Command successful.




Now we can open the encrypted drive so that it appears as a normal block device.  This creates the /dev/mapper/<name> device file and allows it to be mounted by the OS.  The luksOpen command prompts for the passphrase set earlier.

# cryptsetup luksOpen /dev/xvdb encrypted-drive

# cryptsetup -v status encrypted-drive
/dev/mapper/encrypted-drive is active.
  type:  LUKS1
  cipher:  aes-xts-plain64
  keysize: 512 bits
  device:  /dev/xvdb
  offset:  4096 sectors
  size:    20967424 sectors
  mode:    read/write
Command successful.


This is a new raw volume so we need to put some sort of filesystem onto it.  In this case I use the ext4 filesystem.


# mkfs.ext4 /dev/mapper/encrypted-drive
mke2fs 1.43-WIP (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2620928 blocks
131046 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Now simply create a directory where we can mount the encrypted drive and create a simple text file.

# mkdir /u01
# mount /dev/mapper/encrypted-drive /u01
# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   11G  3.3G  7.0G  32% /
tmpfs                            3.8G     0  3.8G   0% /dev/shm
/dev/xvda1                       239M   55M  168M  25% /boot
/dev/mapper/VolGroup00-LogVol02  2.0G  3.0M  1.9G   1% /opt/emagent_instance
/dev/mapper/encrypted-drive      9.8G   23M  9.2G   1% /u01


Having done this we can do a quick check to ensure that we can unmount and close the encrypted disk, then re-open it by providing the passphrase and mount it for use.


# umount /u01
# cryptsetup luksClose encrypted-drive
# mount /dev/mapper/encrypted-drive /u01
mount: you must specify the filesystem type



# cryptsetup luksOpen /dev/xvdb encrypted-drive
Enter passphrase for /dev/xvdb:
# mount /dev/mapper/encrypted-drive /u01
# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   11G  3.3G  7.0G  32% /
tmpfs                            3.8G     0  3.8G   0% /dev/shm
/dev/xvda1                       239M   55M  168M  25% /boot
/dev/mapper/VolGroup00-LogVol02  2.0G  3.0M  1.9G   1% /opt/emagent_instance
/dev/mapper/encrypted-drive      9.8G   23M  9.2G   1% /u01




The /dev/xvdc volume gets the same treatment, minus the encryption: create a filesystem on the volume, mount it into another directory and create a plain text file in this volume as well.  If the encryption has worked then cloud operations may be able to access the plain text volume and read its content, but the encrypted volume content is kept secret unless the passphrase is known.
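The plain volume setup might look like the following (the mount point /u02 is illustrative; the file name and text match what is read back later, and command output is omitted):

```
# mkfs.ext4 /dev/xvdc
# mkdir /u02
# mount /dev/xvdc /u02
# echo "This text is in the unencrypted volume and hence should be readable by anyone....." > /u02/don-plain
```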

Testing

As a general rule cloud operations staff do not have access to the customer's virtual machines unless the customer shares the login credentials or the ssh keys with Oracle.  However, because the OCM stores the volumes as raw disk images on the internal ZFS storage appliance, in the EPC_<rack>/storagepool1 filesystem, it is possible for cloud operations to access these files and mount the images directly to read the content.


As a cloud operations user I have accessed the ZFS storage appliance and copied the storage volume disks off the rack.  On a Linux server I attempt to mount these volumes to see the content.

# file plain_storage.raw
plain_storage.raw: Linux rev 1.0 ext4 filesystem data (extents) (large files) (huge files)
# mount -o loop ./plain_storage.raw /mnt/don
# cat /mnt/don/don-plain

This text is in the unencrypted volume and hence should be readable by anyone.....
# umount /mnt/don




So it is obviously fairly easy to access the unencrypted storage.  Now let's see what is involved in accessing the encrypted storage volume.

# file encrypted_storage.raw
encrypted_storage.raw: LUKS encrypted file, ver 1 [aes, xts-plain64, sha512] UUID: edff3d80-3813-4abc-a58c-e2f1862

# mount -o loop ./encrypted_storage.raw /mnt/don
mount: unknown filesystem type 'crypto_LUKS'

# losetup /dev/loop0 ./encrypted_storage.raw
# mount /dev/loop0 /mnt/don
mount: unknown filesystem type 'crypto_LUKS'


# cryptsetup luksOpen /dev/loop0 encrypted-dev
Enter passphrase for /dev/loop0:

# mount /dev/mapper/encrypted-dev /mnt/don

# cat /mnt/don/don

some text

#


In the above I attempted to mount the encrypted filesystem using the same mechanism that previously succeeded, to no effect.  The only way to mount the disk is to use the cryptsetup command, which mandates entering the passphrase.  Obviously the passphrase is not something that is shared with cloud operations, so they are unable to access the content of the raw file.
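It is worth noting that the LUKS header itself is plain metadata: an administrator can still see how the volume was encrypted even though the data is unreadable, as the `file` output above already hints.  A luksDump against the loop device would show the cipher parameters (output abridged and illustrative, matching the options used at luksFormat time):

```
# cryptsetup luksDump /dev/loop0
LUKS header information for /dev/loop0

Version:        1
Cipher name:    aes
Cipher mode:    xts-plain64
Hash spec:      sha512
...
```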

Conclusion

Using the standard Linux cryptsetup utility it is a relatively simple task to encrypt any storage volume mounted on a VM, such that the data is kept private to the end customer/tenant and cloud operations has no mechanism for seeing the content.

The downside of encrypting is that the administrator of the virtual machine (the end customer) has to log on and provide the passphrase to mount the volume.  This is not a major problem unless you want the applications deployed on the encrypted volume to start up automatically; in that case a manual startup procedure becomes necessary.
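For completeness: Linux can open LUKS volumes automatically at boot via an /etc/crypttab entry pointing at a key file, but in this scenario that largely defeats the object, since the key file would have to live somewhere cloud operations could also reach.  A hypothetical entry would look like this (name, device and key path are assumptions):

```
# /etc/crypttab  (illustrative only)
encrypted-drive  /dev/xvdb  /root/luks.keyfile  luks
```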
