
Logical Volume Management basics

June 3, 2006

What is Logical Volume Management?

Table of Contents

2.1. Why would I want it?

2.2. Benefits of Logical Volume Management on a Small System

2.3. Benefits of Logical Volume Management on a Large System

Logical volume management provides a higher-level view of the disk storage on a computer system than the traditional view of disks and partitions. This gives the system administrator much more flexibility in allocating storage to applications and users.

Storage volumes created under the control of the logical volume manager can be resized and moved around almost at will, although this may need some upgrading of file system tools.

The logical volume manager also allows management of storage volumes in user-defined groups, allowing the system administrator to deal with sensibly named volume groups such as "development" and "sales" rather than physical disk names such as "sda" and "sdb".

 

Why would I want it?

Logical volume management is traditionally associated with large installations containing many disks but it is equally suited to small systems with a single disk or maybe two.

 

Benefits of Logical Volume Management on a Small System

One of the difficult decisions facing a new user installing Linux for the first time is how to partition the disk drive. The need to estimate just how much space is likely to be needed for system files and user files makes the installation more complex than is necessary and some users simply opt to put all their data into one large partition in an attempt to avoid the issue.

Once the user has guessed how much space is needed for /home, /usr and / (or has let the installation program decide), it is quite common for one of these partitions to fill up even though there is plenty of disk space left in one of the others.

With logical volume management, the whole disk would be allocated to a single volume group and logical volumes created to hold the /, /usr and /home file systems. If, for example, the /home logical volume later filled up but there was still space available on /usr, it would be possible to shrink /usr by a few megabytes and reallocate that space to /home.

Another alternative would be to allocate minimal amounts of space for each logical volume and leave some of the disk unallocated. Then, when the volumes start to fill up, they can be expanded as necessary.
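As a rough sketch of what such an expansion looks like, assuming a volume group named vg00, a logical volume named home and an ext2/ext3 file system (the names here are illustrative only):

# lvextend -L +500M /dev/vg00/home   # grow the logical volume into 500 MB of unallocated VG space
# resize2fs /dev/vg00/home           # then grow the file system to fill the enlarged volume

Depending on the file system and kernel version, the file system resize may require the volume to be unmounted first.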

As an example: Joe buys a PC with an 8.4 gigabyte disk and installs Linux using the following partitioning scheme:

 
/boot    /dev/hda1     10 Megabytes
swap     /dev/hda2    256 Megabytes
/        /dev/hda3      2 Gigabytes
/home    /dev/hda4      6 Gigabytes
        

This, he thinks, will maximize the amount of space available for all his MP3 files.

Sometime later Joe decides that he wants to install the latest office suite and desktop UI available, but realizes that the root partition isn't large enough. However, having archived all his MP3s onto a new writable DVD drive, he has plenty of space on /home.

His options are not good:

1.      Reformat the disk, change the partitioning scheme and reinstall.

2.      Buy a new disk and figure out some new partitioning scheme that will require the minimum of data movement.

3.      Set up a symlink farm on / pointing to /home and install the new software on /home

With LVM this becomes much easier:

Jane buys a similar PC but uses LVM to divide up the disk in a similar manner:

 
/boot     /dev/hda1         10 Megabytes
swap      /dev/vg00/swap   256 Megabytes
/         /dev/vg00/root     2 Gigabytes
/home     /dev/vg00/home     6 Gigabytes
 
         

 

Note

/boot is not included on an LV because bootloaders don't understand LVM volumes yet. It may be possible to put /boot on LVM, but you run the risk of having an unbootable system.

Warning

root on LV should be used by advanced users only

 

root on LVM requires an initrd image that activates the root LV. If a kernel is upgraded without building the necessary initrd image, that kernel will be unbootable. Newer distributions support lvm in their mkinitrd scripts as well as their packaged initrd images, so this becomes less of an issue over time.

When she hits a similar problem, she can reduce the size of /home by a gigabyte and add that space to the root volume.
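A hedged sketch of that resize, using the vg00 names from the table above and assuming ext2/ext3 file systems (a mounted root file system may need to be resized from a rescue environment):

# umount /home                    # a file system must be offline before it can be shrunk
# resize2fs /dev/vg00/home 5G     # shrink the file system to 5 GB first
# lvreduce -L 5G /dev/vg00/home   # then shrink the logical volume to match
# lvextend -L +1G /dev/vg00/root  # give the freed gigabyte to the root volume
# resize2fs /dev/vg00/root        # finally grow the root file system
# mount /home

The order matters: shrink the file system before the volume, but grow the volume before the file system.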

Suppose that Joe and Jane then manage to fill up the /home partition as well and decide to add a new 20 Gigabyte disk to their systems.

Joe formats the whole new disk as one partition (/dev/hdb1), moves his existing /home data onto it and uses the new disk as /home. But now the 6 gigabytes of the old /home partition sit unused, unless he uses symlinks to make that space appear as an extension of /home, say /home/joe/old-mp3s.

Jane simply adds the new disk to her existing volume group and extends her /home logical volume to include the new disk. Or, in fact, she could move the data from /home on the old disk to the new disk and then extend the existing root volume to cover all of the old disk.
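Jane's steps might look roughly like this, assuming the new disk appears as /dev/hdb and again using the vg00 names from above (the extent count is illustrative; the real figure comes from vgdisplay):

# pvcreate /dev/hdb                  # initialise the whole new disk as a physical volume
# vgextend vg00 /dev/hdb             # add it to the existing volume group
# vgdisplay vg00 | grep 'Free  PE'   # see how many extents are now free in the VG
# lvextend -l +4700 /dev/vg00/home   # grow /home by that many extents
# resize2fs /dev/vg00/home           # and grow the file system to match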

 

Benefits of Logical Volume Management on a Large System

The benefits of logical volume management are more obvious on large systems with many disk drives.

Managing a large disk farm is a time-consuming job, made particularly complex if the system contains many disks of different sizes. Balancing the (often conflicting) storage requirements of various users can be a nightmare.

User groups can be allocated to volume groups and logical volumes, and these can be grown as required. It is possible for the system administrator to "hold back" disk storage until it is required; it can then be added to the volume (user) group that has the most pressing need.

When new drives are added to the system, it is no longer necessary to move users' files around to make the best use of the new storage; simply add the new disk to an existing volume group or groups and extend the logical volumes as necessary.

It is also easy to take old drives out of service by moving the data from them onto newer drives – this can be done online, without disrupting user service.
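A minimal sketch of retiring a drive this way, assuming the outgoing disk is /dev/sdb1, it belongs to a volume group called dev_vg, and the other PVs in the group have enough free extents to absorb its data (both names are made up for illustration):

# pvmove /dev/sdb1            # migrate all allocated extents off the old PV onto other PVs in the VG
# vgreduce dev_vg /dev/sdb1   # remove the now-empty PV from the volume group

pvmove works while the logical volumes stay mounted, which is what allows the migration to happen online.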

 

Chapter 3. Anatomy of LVM

Table of Contents

3.1. volume group (VG)

3.2. physical volume (PV)

3.3. logical volume (LV)

3.4. physical extent (PE)

3.5. logical extent (LE)

3.6. Tying it all together

3.7. mapping modes (linear/striped)

3.8. Snapshots

This diagram gives an overview of the main elements in an LVM system:

 
+-- Volume Group --------------------------------+
|                                                |
|    +----------------------------------------+  |
| PV | PE |  PE | PE | PE | PE | PE | PE | PE |  |
|    +----------------------------------------+  |
|      .          .          .        .          |
|      .          .          .        .          |
|    +----------------------------------------+  |
| LV | LE |  LE | LE | LE | LE | LE | LE | LE |  |
|    +----------------------------------------+  |
|      .          .          .        .          |
|      .          .          .        .          |
|    +----------------------------------------+  |
| PV | PE |  PE | PE | PE | PE | PE | PE | PE |  |
|    +----------------------------------------+  |
|                                                |
+------------------------------------------------+
 
         

Another way to look at it is this (courtesy of Erik Bågfors on the linux-lvm mailing list):

 
    hda1   hdc1      (PVs on partitions or whole disks)
       \   /
        \ /
       diskvg        (VG)
       /  |  \
      /   |   \
  usrlv rootlv varlv (LVs)
    |      |     |
 ext2  reiserfs  xfs (filesystems)
 
         

 

volume group (VG)

The Volume Group is the highest level abstraction used within the LVM. It gathers together a collection of Logical Volumes and Physical Volumes into one administrative unit.

physical volume (PV)

A physical volume is typically a hard disk, though it may well just be a device that 'looks' like a hard disk (e.g. a software RAID device).

logical volume (LV)

The equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such the LV can contain a file system (e.g. /home).

physical extent (PE)

Each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents for the volume group.

logical extent (LE)

Each logical volume is split into chunks of data, known as logical extents. The extent size is the same for all logical volumes in the volume group.

Tying it all together

A concrete example will help:

Let's suppose we have a volume group called VG1 with a physical extent size of 4 MB. Into this volume group we introduce two hard disk partitions, /dev/hda1 and /dev/hdb1. These partitions become physical volumes PV1 and PV2 (more meaningful names can be given at the administrator's discretion). The PVs are divided into 4 MB chunks, since this is the extent size for the volume group. The disks are different sizes and we get 99 extents in PV1 and 248 extents in PV2. We can now create a logical volume of any size between 1 and 347 (248 + 99) extents. When the logical volume is created, a mapping is defined between logical extents and physical extents, e.g. logical extent 1 could map onto physical extent 51 of PV1, so data written to the first 4 MB of the logical volume would in fact be written to the 51st extent of PV1.
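Expressed as commands, that example might look like this (a sketch only; the LV name mylv is made up, and 4 MB happens to be the default extent size anyway):

# pvcreate /dev/hda1 /dev/hdb1             # turn the two partitions into physical volumes
# vgcreate -s 4M VG1 /dev/hda1 /dev/hdb1   # create the volume group with a 4 MB extent size
# lvcreate -l 347 -n mylv VG1              # create an LV using all 347 available extents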

mapping modes (linear/striped)

The administrator can choose between a couple of general strategies for mapping logical extents onto physical extents:

1.      Linear mapping will assign a range of PEs to an area of an LV in order, e.g. LEs 1 – 99 map onto PV1 and LEs 100 – 347 map onto PV2.

2.      Striped mapping will interleave the chunks of the logical extents across a number of physical volumes, e.g.:

1st chunk of LE[1] -> PV1[1],
2nd chunk of LE[1] -> PV2[1],
3rd chunk of LE[1] -> PV3[1],
4th chunk of LE[1] -> PV1[2],

and so on. In certain situations this strategy can improve the performance of the logical volume.
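To request striped mapping, lvcreate takes a stripe count and a stripe size; here is a sketch using the VG1 example from above (the LV name and the sizes are illustrative):

# lvcreate -i 2 -I 64 -l 90 -n stripedlv VG1   # stripe across 2 PVs with a 64 KB stripe size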

Warning

LVM 1 Caveat

 

LVs created using striping cannot be extended past the PVs they were originally created on in LVM 1.
In LVM 2, striped LVs can be extended by concatenating another set of devices onto the end of the first set. So you can get into a situation where your LV is a 2 stripe set concatenated with a linear set concatenated with a 4 stripe set. Are you confused yet?

 

LVM 2 FAQ

4.1.1. I have LVM 1 installed and running on my system. How do I start using LVM 2?

4.1.2. Do I need a special lvm2 kernel module?

4.1.3. I get errors about /dev/mapper/control when I try to use the LVM 2 tools. What's going on?

4.1.4. Which commands and types of logical volumes are currently supported in LVM 2?

4.1.5. Does LVM 2 use a different format from LVM 1 for its on-disk representation of Volume Groups and Logical Volumes?

4.1.6. Does LVM 2 support VGs and LVs created with LVM 1?

4.1.7. Can I upgrade my LVM 1 based VGs and LVs to LVM 2 native format?

4.1.8. I've upgraded to LVM 2, but the tools keep failing with out of memory errors. What gives?

4.1.9. I have my root partition on an LV in LVM 1. How do I upgrade to LVM 2? And what happened to lvmcreate_initrd?

4.1.10. How resilient is LVM to a sudden renumbering of physical hard disks?

4.1.11. I'm trying to fill my vg, and vgdisplay/vgs says that I have 1.87 GB free, but when I do an lvcreate vg -L1.87G it says "insufficient free extents". What's going on?

4.1.12. How are snapshots in LVM2 different from LVM1?

4.1.13. What is the maximum size of a single LV?

4.1.1. I have LVM 1 installed and running on my system. How do I start using LVM 2?

Here are the Quick Start instructions :)

1.      Start by removing any snapshot LVs on the system. These are not handled by LVM 2 and will prevent the origin from being activated when LVM 2 comes up.

2.      Make sure you have some way of booting the system other than from your standard boot partition. Have the LVM 1 tools, standard system tools (mount) and an LVM 1 compatible kernel on it in case you need to get back and fix some things.

3.      Grab the LVM 2 tools source and the device mapper source and compile them. You need to install the device mapper library using "make install" before compiling the LVM 2 tools. Also copy the dm/scripts/devmap_mknod.sh script into /sbin. I recommend only installing the 'lvm' binary for now so you have access to the LVM 1 tools if you need them. If you have access to packages for LVM 2 and device-mapper, you can install those instead, but beware of them overwriting your LVM 1 tool set.

4.      Get a device mapper compatible kernel, either built in or as a kernel module.

5.      Figure out where LVM 1 was activated in your startup scripts. Make sure the device-mapper module is loaded by that point (if you are using device mapper as a module) and add '/sbin/devmap_mknod.sh; lvm vgscan; lvm vgchange -ay' afterward.

6.      Install the kernel with device mapper support in it. Reboot. If all goes well, you should be running with lvm2.

4.1.2. Do I need a special lvm2 kernel module?

No. You need device-mapper. The lvm2 tools use device-mapper to interface with the kernel and do all their device mapping (hence the name device-mapper). As long as you have device-mapper, you should be able to use LVM2.
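A quick way to check that device-mapper is actually available (assuming the dmsetup utility from the device-mapper package is installed):

# dmsetup version   # reports both the userspace library and the kernel driver version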

4.1.3. I get errors about /dev/mapper/control when I try to use the LVM 2 tools. What's going on?

The primary cause of this is not having run "dmsetup mknodes" after rebooting into a dm capable kernel. This command generates the control node for device mapper.

If you don't have "dmsetup mknodes" available, don't despair! (Though you should probably upgrade to the latest version of device-mapper.) It's pretty easy to create the /dev/mapper/control file on your own:

1.      Make sure you have the device-mapper module loaded (if you didn't build it into your kernel).

2.      Run

# cat /proc/misc | grep device-mapper | awk '{print $1}'

and note the number printed. (If you don't get any output, refer to step 1.)

3.      Run

# mkdir /dev/mapper

If you get an error saying /dev/mapper already exists, make sure it's a directory and move on.

4.      Run

# mknod /dev/mapper/control c 10 $number

where $number is the number printed in step 2.

You should be all set now!
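The same steps can also be run together as a small sketch in a root shell (this simply combines the commands above):

# minor=$(grep device-mapper /proc/misc | awk '{print $1}')   # the misc minor number from step 2
# mkdir -p /dev/mapper                                        # -p avoids an error if the directory exists
# mknod /dev/mapper/control c 10 $minor                       # misc devices use major number 10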

4.1.4. Which commands and types of logical volumes are currently supported in LVM 2?

If you are using the stable 2.4 device mapper patch from the lvm2 tarball, all the major functionality you'd expect using lvm1 is supported with the lvm2 tools. (You still need to remove snapshots before upgrading from lvm1 to lvm2)

If you are using the version of device mapper in the 2.6 kernel.org kernel series the following commands and LV types are not supported:

·         pvmove

·         snapshots

The beginnings of support for these features are in the unstable device mapper patches maintained by Joe Thornber.

4.1.5. Does LVM 2 use a different format from LVM 1 for its on-disk representation of Volume Groups and Logical Volumes?

Yes. LVM 2 uses lvm 2 format metadata. This format is much more flexible than the LVM 1 format metadata, removing or reducing most of the limitations LVM 1 had.

4.1.6. Does LVM 2 support VGs and LVs created with LVM 1?

Yes. LVM 2 will activate and operate on VGs and LVs created with LVM 1. The exception to this is snapshots created with LVM 1 – these should be removed before upgrading. Snapshots that remain after upgrading will have to be removed before their origins can be activated by LVM 2.

4.1.7. Can I upgrade my LVM 1 based VGs and LVs to LVM 2 native format?

Yes. Use vgconvert to convert your VG and all LVs contained within it to the new lvm 2 format metadata. Be warned that it's not always possible to revert back to lvm 1 format metadata.
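A minimal sketch of the conversion, assuming a volume group called vg00 whose logical volumes have been deactivated first:

# vgchange -an vg00    # deactivate the VG's logical volumes before converting
# vgconvert -M2 vg00   # rewrite the metadata in the lvm2 format (not always reversible)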

4.1.8. I've upgraded to LVM 2, but the tools keep failing with out of memory errors. What gives?

One possible cause of this is that some versions of LVM 1 (the user who originally reported this bug was using Mandrake 9.2, but it is not necessarily limited to that distribution) did not put a UUID into the PV and VG structures as they were supposed to. The most current versions of the LVM 2 tools automatically fill in missing UUIDs, so you should grab a more current version and your problem should be solved. If not, post to the linux-lvm mailing list.

4.1.9. I have my root partition on an LV in LVM 1. How do I upgrade to LVM 2? And what happened to lvmcreate_initrd?

Upgrading to LVM 2 is a bit trickier with root on LVM, but it's not impossible. You need to install a kernel with device-mapper support and install the lvm2 tools (you might want to make a backup of the LVM 1 tools, or find a rescue disk with the LVM tools built in, in case you need them before you're done). Then find a mkinitrd script that has support for your distro and LVM 2.

Currently, this is the list of mkinitrd scripts that I know support lvm2, sorted by distro:

mkinitrd scripts with lvm 2 support

Fedora

The latest Fedora Core 2 mkinitrd handles lvm2, but it relies on a statically built lvm binary from the latest LVM 2 tarball.

Red Hat 9 users may be able to use this as well.

Debian

There is an unofficial version here

Generic

There is a version in the lvm2 source tree under scripts/lvm2_createinitrd/. See the documentation in that directory for more details.

4.1.10. How resilient is LVM to a sudden renumbering of physical hard disks?

It's fine – LVM identifies PVs by UUID, not by device name.

Each disk (PV) is labeled with a UUID, which uniquely identifies it to the system. 'vgscan' identifies this after a new disk is added that changes your drive numbering. Most distros run vgscan in the lvm startup scripts to cope with this on reboot after a hardware addition. If you're doing a hot-add, you'll have to run this by hand I think. On the other hand, if your vg is activated and being used, the renumbering should not affect it at all. It's only the activation that needs the identifier, and the worst case scenario is that the activation will fail without a vgscan with a complaint about a missing PV.
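Running the scan by hand after a hot-add would look something like this:

# vgscan         # rescan all block devices for PVs and VGs (identified by UUID)
# vgchange -ay   # activate any volume groups that are not already active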

Note

The failure or removal of a drive that LVM is currently using will cause problems with current use and future activations of the VG that was using it.

4.1.11. I'm trying to fill my vg, and vgdisplay/vgs says that I have 1.87 GB free, but when I do an lvcreate vg -L1.87G it says "insufficient free extents". What's going on?

The 1.87 GB figure is rounded to 2 decimal places, so it's probably 1.866 GB or something. This is a human-readable output to give you a general idea of how big the VG is. If you want to specify an exact size, you must use extents instead of some multiple of bytes.

In the case of vgdisplay, use the Free PE count instead of the human readable capacity.

 
              Free  PE / Size          478 / 1.87 GB
                                       ^^^
              

So this indicates that you should run:

 
# lvcreate vg -l478 

Note that instead of an upper-case 'L', we used a lower-case 'l' to tell lvm to use extents instead of bytes.

In the case of vgs, you need to instruct it to tell you how many extents are available:

 
# vgs -o +vg_free_count,vg_extent_count
              

This tells vgs to add the free extents and the total number of extents to the end of the vgs listing. Use the free extent number the same way you would in the vgdisplay case above.

4.1.12. How are snapshots in LVM2 different from LVM1?

In LVM2 snapshots are read/write by default, whereas in LVM1 snapshots were read-only. See Section 3.8 for more details.

4.1.13. What is the maximum size of a single LV?

The answer to this question depends upon the CPU architecture of your computer and the kernel you are running:

·         For 2.4 based kernels, the maximum LV size is 2TB. For some older kernels, however, the limit was 1TB due to signedness problems in the block layer. Red Hat Enterprise Linux 3 Update 5 has fixes to allow the full 2TB LVs. Consult your distribution for more information in this regard.

·         For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.

·         For 64-bit CPUs on 2.6 kernels, the maximum LV size is 8EB. (Yes, that is a very large number.)

 
