
0 votes
861 views

I have a hypervisor (KVM) that uses LVM to provide volumes to its virtual machines. Within the virtual machine I also use LVM for partitioning the disk (to avoid rendering the system unusable when all disk space on the root volume gets used up, e.g. by excessive logging).

I currently have a situation where the VM doesn't start, and I need to access the nested logical volumes for debugging purposes. However, LVM on the host only sees the "outer" logical volume.

How do I get access to the "inner" logical volumes?

in Sysadmin


1 Answer

0 votes
 

LVM has a three-tiered structure: at the bottom are physical volumes (commonly whole disks or disk partitions) that are combined into one or more volume groups (the middle tier), which provide an abstract storage layer to the OS. Each volume group can then be divided into logical volumes (the top tier), similar to partitioning a physical disk.
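
For illustration, this is how the three tiers are typically created; the device and names below (/dev/sdb1, vg_example, lv_data) are made up for this sketch:

root@localhost:~# pvcreate /dev/sdb1                      # bottom tier: physical volume
root@localhost:~# vgcreate vg_example /dev/sdb1           # middle tier: volume group
root@localhost:~# lvcreate -n lv_data -L 10G vg_example   # top tier: logical volume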

For LVM on the host to recognize the nested volumes, it must first see the underlying physical volume, which is most likely a partition inside the outer logical volume. You can use a tool like kpartx to create the proper device mappings for the partitions inside the logical volume:

root@localhost:~# kpartx -av /dev/vg_host/vm_name
add map vg_host-vm_name1 (252:84): 0 2048 linear 252:82 2048
add map vg_host-vm_name2 (252:85): 0 41936896 linear 252:82 4096

The new device files (in fact symlinks with descriptive names pointing to the underlying device-mapper nodes) can be found under /dev/mapper. kpartx constructs their names from the "parent" device name with the partition number appended.
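
You can verify this with ls; the listing below is illustrative, using the names from the kpartx run above (the dm-84/dm-85 targets correspond to the minor numbers 252:84 and 252:85):

root@localhost:~# ls -l /dev/mapper/vg_host-vm_name*
lrwxrwxrwx 1 root root 8 Aug 10 12:00 /dev/mapper/vg_host-vm_name1 -> ../dm-84
lrwxrwxrwx 1 root root 8 Aug 10 12:00 /dev/mapper/vg_host-vm_name2 -> ../dm-85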

LVM should then automatically recognize the new devices. If not, run pvscan to make it re-read the physical volumes, then activate the new volume group:

root@localhost:~# pvscan
root@localhost:~# vgchange -ay vg_vm

Now you should be able to see the nested volume group along with the volume groups from the host:

root@localhost:~# vgs
  VG          #PV #LV #SN Attr   VSize   VFree  
  vg_host       1   2   0 wz--n-   1.83t 831.61g
  vg_vm         1   9   0 wz--n-  20.00g   1.51g
  ...
root@localhost:~# lvs vg_vm
  LV     VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  boot   vg_vm -wi-a----- 500.00m
  root   vg_vm -wi-ao----   4.00g
  ...

Then mount the nested logical volume like any other logical volume:

root@localhost:~# mount /dev/vg_vm/root /mnt
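
If you only need to read data off the volume, mounting it read-only is a safer variant of the same command:

root@localhost:~# mount -o ro /dev/vg_vm/root /mnt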

To clean up after you have finished, reverse the commands:

root@localhost:~# umount /mnt
root@localhost:~# vgchange -an vg_vm
  0 logical volume(s) in volume group "vg_vm" now active
root@localhost:~# kpartx -dv /dev/vg_host/vm_name
del devmap : vg_host-vm_name2
del devmap : vg_host-vm_name1

One problem can arise if the nested volume group has the same name as the "outer" volume group. But since LVM internally uses UUIDs to identify its objects, you can work around the clash by (temporarily) renaming the group:

root@localhost:~# vgdisplay vg_duplicate_name
  --- Volume group ---
  VG Name               vg_duplicate_name
  ...
  VG Size               2.00 TiB           # ← larger VG is the "outer" one
  ...
  VG UUID               998b3c2a-cbaa-458a-8d47-526547bd8bdb

  --- Volume group ---
  VG Name               vg_duplicate_name
  ...
  VG Size               20.00 GiB          # ← smaller VG is the "inner" one
  ...
  VG UUID               bc450626-0ede-4e4b-9f29-38e747716667
root@localhost:~# vgrename bc450626-0ede-4e4b-9f29-38e747716667 vg_temp
...
root@localhost:~# vgchange -ay vg_temp

Note that the rename is stored persistently in the volume group's metadata, so remember to rename the group back before booting the VM again; otherwise references to the old name inside the guest (e.g. in /etc/fstab or the initramfs) will break.