Friday, October 14, 2016

What happens to attached volumes when snapshotting VMs on an OCM

Introduction

With OCM there is the capability to take a "snapshot" of a virtual machine.  As described in the documentation, a snapshot takes a copy of the machine image boot disk.  There are essentially two purposes for this action: firstly, the resulting machine image can be used as a template to create additional VMs from the copy; secondly, it can serve as a backup copy from which to recreate the VM if needed.  (Really just the same as the first, but using the copy to recreate rather than clone.)

This leads me to a couple of questions that this blog post will answer.

  1. What happens to storage volumes that have been added to the VM?  Are these copied as well?
  2. Is this a good mechanism to increase the root volume size, to make more space for VMs that might want a bit more disk space?

I have tested two specific scenarios, starting from a VM based on the OL6 template with an attached volume:

  1. Use the new volume to extend the root disk logical volume.
  2. Create a new logical volume and mount it on the filesystem, say as /u01.

In both cases I then snapshot the VM, create a new VM from the resulting machine image, and look to see what happened.

Extending the root volume

In the first case I simply create a VM from the OL6 base template using a simple orchestration.  I create an additional volume and attach it to the VM.  Having created the VM, I log on to it and use the Linux LVM commands to extend the size of the root disk.

The steps taken are:
  1. Use fdisk to partition the attached volume and set the partition type to Linux LVM (8e).
  2. Use lvdisplay and vgdisplay to identify the current root volume group (probably VolGroup00).
  3. Use vgextend to add the storage from the attached volume to the current volume group.
  4. Use lvextend to make the root logical volume larger.
  5. Use resize2fs on the device-mapper device to make the extra space available to the filesystem.
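Step 1 is not shown in the output below, so here is a minimal sketch of preparing the attached volume for LVM.  The device name /dev/xvdb is an assumption taken from the pvdisplay output later in this post, and the scripted fdisk keystrokes are illustrative only; check them interactively before running anything like this on a real disk.

```shell
# Sketch only - assumes the attached volume appears as /dev/xvdb.
# Wrapped in a function so nothing destructive runs until called deliberately.
prepare_lvm_volume() {
  local dev=${1:-/dev/xvdb}
  # Create one partition spanning the disk and set its type to 8e (Linux LVM);
  # fdisk reads the keystrokes from stdin: new, primary, partition 1,
  # default start, default end, type, 8e, write.
  printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk "$dev"
  # Register the new partition as an LVM physical volume.
  pvcreate "${dev}1"
}
```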

Some of the key commands and output are shown below.  The result of all these commands is that the root filesystem has grown from 11 GB to 61 GB, using all 50 GB of the attached volume.

# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   11G  3.3G  6.9G  33% /
tmpfs                            873M     0  873M   0% /dev/shm
/dev/xvda1                       239M   55M  168M  25% /boot
/dev/mapper/VolGroup00-LogVol02  2.0G  3.0M  1.9G   1% /opt/emagent_instance


# vgextend VolGroup00 /dev/xvdb1
  Volume group "VolGroup00" successfully extended
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/xvda2
  VG Name               VolGroup00
  PV Size               17.75 GiB / not usable 2.12 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              4543
  Free PE               191
  Allocated PE          4352
  PV UUID               VnZ5PZ-8IrP-nggg-B7Fo-9sog-g4bw-skbPBE
  
  --- Physical volume ---
  PV Name               /dev/xvdb1
  VG Name               VolGroup00
  PV Size               50.00 GiB / not usable 3.31 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              12799
  Free PE               12799
  Allocated PE          0
  PV UUID               c1m21x-2IBX-Qrsy-1AJI-3UQ7-apUe-weETeq


# lvextend -L+55G /dev/VolGroup00/LogVol00
  Extending logical volume LogVol00 to 66.00 GiB
  Insufficient free space: 14080 extents needed, but only 12990 available

# lvextend -l+12990 /dev/VolGroup00/LogVol00
  Extending logical volume LogVol00 to 61.74 GiB
  Logical volume LogVol00 successfully resized
 

# resize2fs /dev/mapper/VolGroup00-LogVol00
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/VolGroup00-LogVol00 is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 4
The filesystem on /dev/mapper/VolGroup00-LogVol00 is now 16185344 blocks long.

# df -kh
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   61G  3.3G   55G   6% /
tmpfs                            873M     0  873M   0% /dev/shm
/dev/xvda1                       239M   55M  168M  25% /boot
/dev/mapper/VolGroup00-LogVol02  2.0G  3.0M  1.9G   1% /opt/emagent_instance

I now use the EMCC UI to create a snapshot of the VM.  This could also be done from the command line.

Snapshotting a VM
The snapshot takes a few minutes to complete and once done there is a template available that will allow the creation of new VMs.

Snapshot appearing as a template in the OCM library



Creating a VM based on the snapshot


Once the new VM is created, I log on and look at the root disk size.  It is immediately clear that it has a 61 GB root disk and no attached volumes, so it appears that expanding the root LVM volume and snapshotting will effectively increase the size of the disk in the machine image.

ERRATA - This approach does not work for increasing the root disk size.  Further investigation shows that while the OS reports the increased disk size and all looks good, issuing a pvdisplay command reports that a device is missing.  This was confirmed by simply filling the disk up: as soon as the used space reached the level of the original disk, warnings about possible loss of data were reported and some of the new writes failed to be persisted to disk.  The conclusion: the actual disk space available was not expanded.

EXT4-fs error (device dm-2): ext4_wait_block_bitmap:448: comm flush-252:2: Cannot read block bitmap - block_group = 117, block_bitmap = 3670021
EXT4-fs (dm-2): delayed block allocation failed for inode 13345 at logical offset 16665 with max blocks 1 with error -5
EXT4-fs (dm-2): This should not happen!! Data will be lost
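The original post does not show how the disk was filled to provoke these errors; one simple way (a sketch, not the method actually used) is to write zero-filled files until the filesystem runs out of space:

```shell
# Sketch: fill a filesystem with 1 GiB zero-filled chunks until dd fails
# with "No space left on device".  Target directory is an assumption.
fill_disk() {
  local target=${1:-/fill}
  mkdir -p "$target"
  local i=0
  while dd if=/dev/zero of="$target/chunk.$i" bs=1M count=1024 2>/dev/null; do
    i=$((i + 1))
  done
}
```

On the snapshotted VM described above, the writes start failing well before df reports the filesystem as full, which is what exposed the missing physical volume.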

Adding the volume as a new partition/logical volume

The same process was completed for an attached volume, but this time, rather than extending VolGroup00, I created a new physical volume, volume group and logical volume, which I mounted on /u01.  (fdisk to partition the volume with the Linux LVM type [8e], pvcreate to create the physical volume, vgcreate to create a volume group using the new volume, lvcreate to create the logical volume, and then mount the logical volume on /u01.)  Exactly the same process was used to create a snapshot and then create a new VM from the resulting machine image.  This time the root disk on the new VM was just 11 GB - the disk size of the original template.  i.e. the snapshot ignored the additional volume and did what the docs say: took an image of the machine's boot disk.
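The commands listed in parentheses above might look like the following.  This is a sketch: the device name, volume group name, logical volume name and filesystem type are all my own assumptions, not taken from the original run.

```shell
# Sketch of the /u01 scenario - device, VG and LV names are assumptions.
# Wrapped in a function so nothing destructive runs until called deliberately.
create_u01_volume() {
  local dev=${1:-/dev/xvdb}
  # Partition the volume and set the type to 8e (Linux LVM).
  printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk "$dev"
  pvcreate "${dev}1"                             # physical volume
  vgcreate VolGroupU01 "${dev}1"                 # new volume group (name assumed)
  lvcreate -l 100%FREE -n LogVolU01 VolGroupU01  # logical volume using all free space
  mkfs -t ext4 /dev/VolGroupU01/LogVolU01        # filesystem (type assumed)
  mkdir -p /u01
  mount /dev/VolGroupU01/LogVolU01 /u01          # mount on /u01
}
```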

Conclusion

As a mechanism to increase the root disk space of a template, creating a VM with an attached volume and using that volume to extend the root volume group/logical volume will NOT produce a new template with larger disk space.

If you are planning to create a VM with an attached volume and think that the snapshot will back up the entire VM, think again.  To take a full backup of your virtual machine you will also need to snapshot any volumes that are not part of the machine image boot disk.