LVM - Logical volume management

LVM Howto

Overview of the elements of an LVM system

PV   PV   PV    (e.g., /dev/hda, /dev/hdb5, /dev/sdb3)
  \   |   /
     VG         (e.g., MYLVMVOLUME1)
  /   |   \
LV   LV   LV    (e.g., /mnt/mydata, /mnt/moredata)

Prepare PVs (disks or partitions)

# pvcreate /dev/sdb1
# pvdisplay -m /dev/sdb1

--- Physical volume ---
PV Name /dev/sdb1
...
Allocated PE 54206

Create new VG with the prepared PV

# vgcreate MYLVMVOLUME1 /dev/hda /dev/hdb5
# vgchange -a y MYLVMVOLUME1

Remove VG

# vgchange -a n MYLVMVOLUME1
# vgremove MYLVMVOLUME1

Add new PV to VG

# vgextend MYLVMVOLUME1 /dev/sdb3

Remove PV from VG

(e.g., when you want to remove an old disk). First check whether there is enough space even without this PV:

# pvscan
# pvdisplay /dev/hda
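The decision boils down to extent arithmetic: the rest of the VG must have at least as many free extents as the PV you want to remove has allocated. A minimal sketch with hypothetical numbers (take the real values from vgdisplay and pvdisplay):

```shell
# Hypothetical numbers; read the real ones from vgdisplay / pvdisplay.
PV_ALLOCATED_PE=54206    # "Allocated PE" of the PV you want to remove
VG_FREE_PE=60000         # "Free PE" of the whole VG
PV_FREE_PE=5794          # "Free PE" on that same PV

# Free space that remains in the VG once this PV is gone.
REMAINING_FREE=$((VG_FREE_PE - PV_FREE_PE))

if [ "$REMAINING_FREE" -ge "$PV_ALLOCATED_PE" ]; then
    echo "enough space: pvmove can relocate all $PV_ALLOCATED_PE extents"
else
    echo "not enough space: add another PV first"
fi
```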

If not, first add a new disk as described above:

  • Prepare disks / partitions
  • Add new PV to VG

Then make a backup! Note that the following requires the kernel option

CONFIG_DM_MIRROR

Move everything off the old disk to the other PVs

# pvmove /dev/hda

Or specify to which PV the data should be moved

# pvmove /dev/hda /dev/hde3

Now remove the PV

# vgreduce MYLVMVOLUME1 /dev/hda
# pvremove /dev/hda

pvmove

The pvmove command obviously requires enough free space in the VG that is not yet occupied by an LV. But even when there is enough free space in total, pvmove will need your help if that space is not available on a single PV. If you try, for example, to move this sdb1

# pvcreate /dev/sdb1
# pvdisplay -m /dev/sdb1

--- Physical volume ---
PV Name /dev/sdb1
...
Allocated PE 54206

but no other single PV can carry it, the move will fail

# pvmove /dev/sdb1
Insufficient suitable contiguous allocatable extents for logical volume pvmove0: 43955 more required

Just help it with some manual intervention: use pvdisplay to find the PV with the largest Free PE value. In this example it is sda6

# pvdisplay

--- Physical volume ---
PV Name /dev/sda6
...
Free PE 33267
Allocated PE 0
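If you script this search, the Free PE value can be pulled out of the pvdisplay output with awk. A sketch using a captured sample of the output above (the real command needs root):

```shell
# Sample of the pvdisplay output shown above; in practice:
# sample=$(pvdisplay /dev/sda6)
sample='  --- Physical volume ---
  PV Name               /dev/sda6
  Free PE               33267
  Allocated PE          0'

# Grab the number after "Free PE".
free_pe=$(printf '%s\n' "$sample" | awk '/Free PE/ {print $3}')
echo "$free_pe"
```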

Now you can ask pvmove to move 33267 PEs onto this PV

# pvmove /dev/sdb1:0-33267 /dev/sda6

Check whether the destination is now full; if not, repeat the last command with an increased upper limit. When it is finally full, move the next area

# pvmove /dev/sdb1:33268-40000 /dev/hda6

to the next PV with some free space. If you specify an area of which parts have already been moved, these parts are automatically ignored. So this

# pvmove /dev/sdb1:0-40000 /dev/hda6

would be equivalent in our case, as we have already moved the part at the beginning. However, this method may lead to fragmented LVs. A better way is to first make a backup of the LVM structure

vgcfgbackup

change into this directory

/etc/lvm/backup

and read the information about your LVM. Let's assume you have the following output

segment2 {
        start_extent = 1
        extent_count = 250      # 1000 Megabytes

        type = "striped"
        stripe_count = 1        # linear

        stripes = [
                "pv2", 25346
        ]
}

The source device and the start point can be found in the stripes section

pv2
25346

This shows how long it is

extent_count = 250

Now we only have to find the mapping from pv2 to a real device in the same file. In this example we assume it is sda6. Now the segment can be moved completely to another device

pvmove -v /dev/sda6:25346-`echo 25346 - 1 + 250 | bc` /dev/sdb6

If you move the segments of an LV consecutively to the same device, they will merge automatically.
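The backtick expression above just computes the end of the range: since pvmove extent ranges are inclusive, the last extent of the segment is start_extent + extent_count - 1. The same arithmetic in plain shell, with the numbers from the example:

```shell
# Numbers from the backup file above.
START=25346   # start extent on the source PV ("stripes" section)
COUNT=250     # extent_count of the segment

# Ranges are inclusive, so the last extent is start + count - 1.
END=$((START + COUNT - 1))
echo "pvmove -v /dev/sda6:$START-$END /dev/sdb6"
```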

Create LV on a VG

How much space is left?

# vgdisplay MYLVMVOLUME1

Create LV with 1.5GB size, linear

# lvcreate -L1500 -n MYDATA MYLVMVOLUME1

Create LV with 500MB size, striped over 2 PVs with a 4KB stripe size

# lvcreate -i2 -I4 -L500 -n MOREDATA MYLVMVOLUME1

Create LV with 1.5GB size, linear and use only /dev/hdb5

# lvcreate -L1500 -n FASTDATA MYLVMVOLUME1 /dev/hdb5

Now create a filesystem on it

# mkfs.ext3 -L NAME -O dir_index -T news -m0 -N 250000 /dev/MYLVMVOLUME1/FILESYSTEM1
# mkfs.ext4 -L NAME -O dir_index,extent -T largefile -m0 /dev/MYLVMVOLUME1/FILESYSTEM1

Optionally set the maximum mount count between filesystem checks

# tune2fs -c 31 /dev/MYLVMVOLUME1/FILESYSTEM1

Remove LV

# lvremove /dev/MYLVMVOLUME1/FASTDATA

Extend LV

How much space is left?

# vgdisplay MYLVMVOLUME1

Extend by 1 GB (negative numbers are discussed below, dangerous!)

# lvextend -L+1G /dev/MYLVMVOLUME1/MYDATA

Now extend the filesystem (ext2 / ext3)

# umount /dev/MYLVMVOLUME1/MYDATA
# resize2fs -p /dev/MYLVMVOLUME1/MYDATA
# mount /dev/MYLVMVOLUME1/MYDATA

Extend the filesystem (reiserfs; unmounting may be optional)

# umount /dev/MYLVMVOLUME1/MYDATA
# resize_reiserfs /dev/MYLVMVOLUME1/MYDATA
# mount -treiserfs /dev/MYLVMVOLUME1/MYDATA

Shrink LV

Use LVM to shrink the LV and the filesystem in one step

lvresize --resizefs --size 40G /dev/mapper/foo

Do it by yourself:

First reduce the filesystem's size, then the LV! This reduces the filesystem to a size of 20GB

resize2fs -p /dev/MYLVMVOLUME1/MYDATA 20G

Then shrink the LV to a size of 21GB (a bit larger, for paranoia)

lvreduce -L21G /dev/MYLVMVOLUME1/MYDATA
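The order matters because lvreduce happily cuts off the end of the LV whether or not the filesystem still uses it. A trivial sanity check of the two sizes (in whole gigabytes, matching the example above):

```shell
# Sizes from the example above, in whole gigabytes.
FS_SIZE_GB=20    # size the filesystem was shrunk to with resize2fs
LV_SIZE_GB=21    # size passed to lvreduce (one GB of paranoia margin)

if [ "$FS_SIZE_GB" -lt "$LV_SIZE_GB" ]; then
    echo "safe: the ${LV_SIZE_GB}G LV still contains the whole ${FS_SIZE_GB}G filesystem"
else
    echo "DANGER: reducing the LV below the filesystem size destroys data"
fi
```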

If you get this error

Volume group mapper doesn't exist

you probably tried to use the /dev/mapper/... device instead of the normal device.

Now you can extend the filesystem up to the maximum size which is possible with your LV, removing the gap we introduced for paranoia.

resize2fs -p /dev/MYLVMVOLUME1/MYDATA

Forgot to issue vgreduce

When you have moved all data off a PV with pvmove but forgot to use vgreduce to remove it from the VG, the next LVM start will fail with this error

Couldn't find device with uuid ...
Couldn't find all physical volumes for volume group ...

If you are sure that there is no more data on the PV, you can activate the VG without it and remove the missing PV

vgchange -ay --partial
vgreduce --removemissing

Move VG to another system

Unmount everything, then disable the VG

# vgchange -an MYLVMVOLUME1

Export

# vgexport MYLVMVOLUME1

On the other system

# vgimport MYLVMVOLUME1
# vgchange -ay MYLVMVOLUME1

Mount again

Partition table has been deleted

LVM fails to start with:

Couldn't find device with uuid ...
Couldn't find device with uuid ...
...
PV unknown device VG foo lvm2 [186,26 GiB / 158,33 GiB free]
PV unknown device VG foo lvm2 [372,13 GiB / 0 free]
...
PV /dev/sda6 VG foo lvm2 [212,16 GiB / 212,16 GiB free]
Total: 6 [1,57 TiB] / in use: 6 [1,57 TiB] / in no VG: 0 [0 ]

In this case the partition table of /dev/sdb had been deleted.

Try to guess the old partition table (this will take some time ...)

# gpart /dev/sdb
Begin scan...
Possible partition(Linux swap), size(486mb), offset(0mb)
...

If it succeeds:

# gpart -W /dev/sdb /dev/sdb