LVM: Logical Volume Manager
LVM provides some great capabilities, such as creating a single logical volume out of multiple hard disks (similar to RAID 0) and letting you expand or shrink filesystems and logical volumes even while the disks are live and operational. But take care to understand these procedures before executing them, because a careless step can destroy data.
Here's some basic information to get you started, using the VM tau as the example:
Physical Volume - a physical (or virtual, in the case of VM images) disk drive or partition. LVM will not recognize an attached disk unless it is declared for LVM usage with the 'pvcreate' command.
Use the command 'pvs' to see the physical volumes that have been allocated to LVM:

[root@tau ~]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/vda2  vg_system lvm2 a--u 14.84g    0
  /dev/vdb1  vg_system lvm2 a--u 19.97g    0
Volume Group - a group of physical volumes that are linked together under a single name.
Use the command 'vgs' to see volume groups. The number of physical volumes is 2 because both /dev/vda2 and /dev/vdb1 are under this group, vg_system. Because vg_system is a grouping of /dev/vda2 and /dev/vdb1, the volume group's size is the sum of the sizes of the two physical disk partitions.

[root@tau ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_system   2   2   0 wz--n- 34.81g    0
Logical Volume - A logical partition of a volume group. All logical volumes are listed under the directory /dev/mapper.
Use the command 'lvs' to see logical volumes. Here, I have two logical volumes under volume group vg_system.

[root@tau ~]# lvs
  LV      VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_system -wi-ao---- 31.31g
  lv_swap vg_system -wi-ao----  3.50g
Rules to Remember about LVM
1) Think of a logical volume as a container and the filesystem as its contents. If you wish to reduce a logical volume's size, you must reduce the filesystem's size first. Reducing the logical volume before reducing the filesystem will lead to lost data! (It would be like having a full cup of water and then shrinking the cup.)
2) The opposite order applies for growing. If you wish to expand, extend the logical volume first and then expand the filesystem, or else the usable size will remain the same! For ext filesystems, use the command resize2fs (or pass -r to lvextend to resize both in one step).
3) To summarize LVM:
a) Physical volumes are disks or disk partitions that are initialized to be used by LVM.
b) Physical volumes form a volume group.
c) Logical volumes are partitions of volume groups.
d) Filesystems are placed in logical volumes.
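The summary above maps directly onto a short command sequence. Here is a minimal sketch of building the whole stack from scratch on a hypothetical spare partition /dev/vdc1 (the device, volume group, and logical volume names are illustrative, not from this machine). Because these commands require root and a real disk, the sketch only prints the commands unless DRY_RUN=0:

```shell
#!/bin/sh
# Sketch: build the LVM stack bottom-up. All device/VG/LV names are hypothetical.
# With DRY_RUN=1 (the default) the commands are printed, not executed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run pvcreate /dev/vdc1                   # a) initialize the partition for LVM
run vgcreate vg_example /dev/vdc1        # b) form a volume group from the physical volume
run lvcreate -L 5G -n lv_data vg_example # c) carve a logical volume out of the group
run mkfs.ext4 /dev/vg_example/lv_data    # d) place a filesystem in the logical volume
run mount /dev/vg_example/lv_data /mnt
```

Flip DRY_RUN off only after substituting names that exist on your system.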
Expanding a Logical Volume After Adding a New Physical/Virtual Disk
Here are the steps I took to increase the logical volume size on the VM tau:
1) Tau's root filesystem was previously 11.34 GB, which was too small and kept filling up. From the output below, I can see that / belongs to a logical volume because its filesystem device lives under /dev/mapper.
[root@tau usr]# df -hl
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_system-lv_root   12G   11G   41M 100% /
tmpfs                          1.9G     0  1.9G   0% /dev/shm
/dev/vda1                      146M  102M   37M  74% /boot
[root@tau usr]# lvs
  LV      VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_system -wi-ao---- 11.34g
  lv_swap vg_system -wi-ao----  3.50g
2) I listed the existing physical volumes and volume groups using the commands 'pvs' and 'vgs' to take stock of what already existed. I could see we were using one disk partition, /dev/vda2, as the physical volume, and that it belonged to the volume group vg_system.
[root@tau usr]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/vda2  vg_system lvm2 a--u 14.84g    0
[root@tau usr]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_system   1   2   0 wz--n- 14.84g    0
3) I had a disk partition, /dev/vdb1, which had been added but was unused. Since it was free, I first initialized it as an LVM physical volume.
[root@tau usr]# pvcreate /dev/vdb1
  Physical volume "/dev/vdb1" successfully created
[root@tau ~]# pvs
  PV         VG        Fmt  Attr PSize  PFree
  /dev/vda2  vg_system lvm2 a--u 14.84g    0
  /dev/vdb1  vg_system lvm2 a--u 19.97g    0
4) I added the new physical volume, /dev/vdb1, to the existing volume group vg_system. You can see the volume group's size grow after vgextend.
[root@tau usr]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_system   1   2   0 wz--n- 14.84g    0
[root@tau usr]# vgextend vg_system /dev/vdb1
  Volume group "vg_system" successfully extended
[root@tau usr]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_system   2   2   0 wz--n- 34.81g 19.97g
5) After the volume group was extended, next comes the logical volume. I issued the command to resize the logical volume lv_root to take up all the free space in the volume group vg_system.
[root@tau usr]# lvs
  LV      VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_system -wi-ao---- 11.34g
  lv_swap vg_system -wi-ao----  3.50g
[root@tau usr]# lvextend --verbose -l +100%FREE /dev/vg_system/lv_root
    Using volume group(s) on command line.
    Converted 100%FREE into at most 639 physical extents.
    Archiving volume group "vg_system" metadata (seqno 5).
    Extending logical volume vg_system/lv_root to up to 31.31 GiB
  Size of logical volume vg_system/lv_root changed from 11.34 GiB (363 extents) to 31.31 GiB (1002 extents).
    Loading vg_system-lv_root table (253:0)
    Suspending vg_system-lv_root (253:0) with device flush
    Resuming vg_system-lv_root (253:0)
    Creating volume group backup "/etc/lvm/backup/vg_system" (seqno 6).
  Logical volume lv_root successfully resized.
6) Lastly, we need to resize the filesystem that resides on logical volume lv_root.
[root@tau usr]# resize2fs /dev/mapper/vg_system-lv_root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg_system-lv_root is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/mapper/vg_system-lv_root to 8208384 (4k) blocks.
The filesystem on /dev/mapper/vg_system-lv_root is now 8208384 blocks long.
7) Then everything should be swell!
[root@tau usr]# df -h .
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_system-lv_root   31G   11G   19G  36% /
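Steps 3 through 6 above can be condensed into one sketch. The device and VG/LV names below match this example but are assumptions for any other machine; the helper prints the commands in dry-run mode rather than executing them, since they require root:

```shell
#!/bin/sh
# Sketch of the whole expansion: new PV -> extend VG -> extend LV -> grow filesystem.
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

grow_root_onto() {  # usage: grow_root_onto <new-partition> <vg> <lv-path>
  new_pv=$1 vg=$2 lv=$3
  run pvcreate "$new_pv"           # step 3: initialize the new partition for LVM
  run vgextend "$vg" "$new_pv"     # step 4: add it to the volume group
  run lvextend -l +100%FREE "$lv"  # step 5: grow the LV into all free space
  run resize2fs "$lv"              # step 6: grow the ext filesystem (online is fine)
}

grow_root_onto /dev/vdb1 vg_system /dev/vg_system/lv_root
```

Note the order: the container (LV) grows before the contents (filesystem), per rule 2 above.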
Reducing a Logical Volume
Remember the cardinal rule for reducing logical volumes: the filesystem must be shrunk first, prior to the reduction of the logical volume! Otherwise, you risk reducing the volume to a size that destroys filesystem data. Here are the steps:

1) Use df -hl to view the filesystems in use. Below, I intend to reduce lv_home to 20 GB.
[root@mk-1-d ~]# df -hl
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_mk1d-lv_root   50G   31G   16G  67% /
tmpfs                         32G     0   32G   0% /dev/shm
/dev/sda1                    477M  167M  286M  37% /boot
/dev/mapper/vg_mk1d-lv_home  156G  124M  148G   1% /home
2) An ext filesystem can only be shrunk while it is unmounted, so we unmount /home first:
[root@mk-1-d ~]# umount /home
3) Prior to an offline resize, resize2fs requires that we run a filesystem check:
[root@mk-1-d ~]# e2fsck -f /dev/vg_mk1d/lv_home
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg_mk1d/lv_home: 931/10371072 files (0.1% non-contiguous), 715184/41476096 blocks
4) After the filesystem check, we can proceed with resizing the filesystem.
[root@mk-1-d ~]# resize2fs /dev/vg_mk1d/lv_home 20G
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/vg_mk1d/lv_home to 5242880 (4k) blocks.
The filesystem on /dev/vg_mk1d/lv_home is now 5242880 blocks long.
5) With the filesystem reduced, we can now safely reduce the logical volume to the same size.
[root@mk-1-d ~]# lvreduce -L 20G /dev/vg_mk1d/lv_home
  WARNING: Reducing active logical volume to 20.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg_mk1d/lv_home? [y/n]: y
  Size of logical volume vg_mk1d/lv_home changed from 158.22 GiB (40504 extents) to 20.00 GiB (5120 extents).
  Logical volume lv_home successfully resized.
6) Check your logical volumes to see that LVM recognizes the change:
[root@mk-1-d ~]# lvs
  LV      VG      Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_home vg_mk1d -wi-a----- 20.00g
  lv_root vg_mk1d -wi-ao---- 50.00g
  lv_swap vg_mk1d -wi-ao---- 23.19g
7) Remount the resized logical volume to its original location and check that everything is sized correctly:
[root@mk-1-d ~]# mount /dev/vg_mk1d/lv_home /home
[root@mk-1-d ~]# df -hl
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_mk1d-lv_root   50G   31G   16G  67% /
tmpfs                         32G     0   32G   0% /dev/shm
/dev/sda1                    477M  167M  286M  37% /boot
/dev/mapper/vg_mk1d-lv_home   20G  108M   19G   1% /home
8) Note the additional free space now available in the volume group because of the reduction. Find a use for it!
[root@mk-1-d ~]# pvs
  PV         VG      Fmt  Attr PSize   PFree
  /dev/sda2  vg_mk1d lvm2 a--u 231.41g 138.22g
[root@mk-1-d ~]# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  vg_mk1d   1   3   0 wz--n- 231.41g 138.22g
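Because the shrink procedure is order-sensitive, it's worth scripting the sequence so the steps can't be done out of order. A minimal sketch (using this example's names; dry-run by default, since the real commands need root and an unmounted volume):

```shell
#!/bin/sh
# Sketch: safe shrink order -- unmount, fsck, shrink filesystem, THEN shrink LV.
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

shrink_lv() {  # usage: shrink_lv <lv-path> <mountpoint> <new-size>
  lv=$1 mnt=$2 size=$3
  run umount "$mnt"             # ext filesystems can only be shrunk offline
  run e2fsck -f "$lv"           # resize2fs demands a fresh check before shrinking
  run resize2fs "$lv" "$size"   # shrink the contents first...
  run lvreduce -L "$size" "$lv" # ...then the container, to the same size
  run mount "$lv" "$mnt"
}

shrink_lv /dev/vg_mk1d/lv_home /home 20G
```

Passing the same size to resize2fs and lvreduce keeps the container no smaller than its contents.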
LVM Live Snapshot
Live logical volume snapshots are a way to back up files that are constantly changing (such as VM images and databases). A snapshot preserves the logical volume's data as it was at the moment the snapshot was created: the snapshot's view does not change, while the volume it is based on is allowed to keep changing. When an LV snapshot is created, a new logical volume becomes available for you to mount. Once the snapshot is mounted, all the data can be accessed and backed up, separate from the original data, which may continue to change.
Example: I store my virtual machine images on /var/lib/libvirt/images. In this example, I have many image files. One of these image files is constantly updated because the VM is actively running, which changes the image file's contents. I would like to back up the VM images directory, but directly copying image files while they are active can corrupt the copies. So we will use a logical volume snapshot to copy the contents from a view that is not being continuously updated.
1) Identify the logical volume you want to snapshot. Use 'lvs' to list all the logical volumes. Below, I want to snapshot the lv_images logical volume where my VM image files reside.
[root@vav ~]# lvs
  LV         VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_images  vg_data   owi-aos---   2.95t
  lv_root    vg_system -wi-ao----  34.19g
  lv_scratch vg_system -wi-ao---- 229.16g
2) Find the logical volume in /dev/<volume_group_name>/<logical_volume_name>.
[root@vav ~]# ls /dev/vg_data/lv_images
/dev/vg_data/lv_images
3) Use lvcreate to begin creating a logical volume snapshot.
[root@vav ~]# lvcreate -L 2G -s -n lv_images_snap /dev/vg_data/lv_images
  Logical volume "lv_images_snap" created.

-L 2G: limits the differences tracked between the snapshot and the origin volume to 2 GB. After 2 GB of changes, the snapshot becomes unusable.
-s: creates a snapshot.
-n lv_images_snap: the snapshot's name.
The final argument is the path of the logical volume to snapshot (/dev/vg_data/lv_images).
4) Use 'lvs' to list logical volumes again. The new snapshot should appear.
[root@vav ~]# lvs
  LV             VG        Attr       LSize   Pool Origin    Data%  Meta%  Move Log Cpy%Sync Convert
  lv_images      vg_data   owi-aos---   2.95t
  lv_images_snap vg_data   swi-a-s---   2.00g       lv_images 0.72
  lv_root        vg_system -wi-ao----  34.19g
  lv_scratch     vg_system -wi-ao---- 229.16g
5) Now that the snapshot volume exists, we can mount it somewhere to access its files. They mirror the contents of the logical volume at the time the snapshot was created; any changes made in the original location afterward will not appear in the snapshot.
[root@vav ~]# mkdir image-backups
[root@vav ~]# mount /dev/vg_data/lv_images_snap /root/image-backups/
[root@vav ~]# ls -lh /root/image-backups/
total 67G
drwxr-xr-x. 4 root root 4.0K Sep 12 11:21 backups
-rwxr--r--. 1 root root 7.2G Jun 27 21:46 chronos-disk1
drwxr-xr-x. 2 root root 4.0K Oct  9 15:30 inactive
-rw-r--r--. 1 root root  14G Jun 27 21:46 kappa-disk1.qcow2
drwxr-xr-x. 2 root root 4.0K Jun 29 14:41 livesnaps
drwx------. 2 qemu qemu  16K Apr 24 15:52 lost+found
-rwxr-----. 1 qemu qemu  16G Oct  9 16:28 nu-disk1.qcow2
-rwxr-----. 1 root root  16G Oct  5 16:20 nu-disk1.qcow2~2017-10-05
-rwxr-----. 1 qemu qemu  15G May 22 16:06 zeta-disk1
6) While the snapshot is mounted, back up the snapshot's images with your backup tool of choice.
[root@vav backups]# tar -czvf /tmp/nu_vm_images-20171009.tar /root/image-backups/nu-disk1.qcow2
tar: Removing leading `/' from member names
/root/image-backups/nu-disk1.qcow2
-sh-4.1$ ls /tmp
BES  hsperfdata_root  nu_vm_image-2017.tar  orbit-root  orbit-s_bwong1
7) Once the backup is done, we no longer need the live snapshot. Unmount it and remove the snapshot logical volume we made earlier with lvremove.
[root@vav ~]# umount /root/image-backups/
[root@vav ~]# lvremove /dev/vg_data/lv_images_snap
Do you really want to remove active logical volume lv_images_snap? [y/n]: y
  Logical volume "lv_images_snap" successfully removed
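The whole snapshot workflow can be put together into one sketch: snapshot, mount, archive, unmount, remove. The VG/LV names and mount point follow this example, while the archive path and the 2G snapshot size are assumptions you would tune to your change rate. Dry-run by default, since the real commands need root:

```shell
#!/bin/sh
# Sketch: back up a busy LV via a snapshot. Names beyond this article's example
# (archive path, snapshot size) are illustrative assumptions.
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

backup_via_snapshot() {  # usage: backup_via_snapshot <origin-lv-path> <archive>
  origin=$1 archive=$2
  vg_dir=$(dirname "$origin")   # e.g. /dev/vg_data
  snap=${origin##*/}_snap       # e.g. lv_images_snap
  run lvcreate -L 2G -s -n "$snap" "$origin"  # snapshot fills after 2G of origin changes
  run mkdir -p /root/image-backups
  run mount "$vg_dir/$snap" /root/image-backups
  run tar -czf "$archive" /root/image-backups # frozen view: safe to archive
  run umount /root/image-backups
  run lvremove -y "$vg_dir/$snap"             # -y skips the confirmation prompt
}

backup_via_snapshot /dev/vg_data/lv_images /tmp/images-backup.tar.gz
```

Removing the snapshot promptly matters: a snapshot that fills its change allowance becomes invalid, and an active one costs write performance on the origin.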
Other Reference: http://www.tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html