Archive for the ‘LVM’ Category


Resize an ext2 or ext3 filesystem on Red Hat Enterprise Linux 4?

June 3, 2006

Is it possible to resize an ext2 or ext3 filesystem on Red Hat Enterprise Linux 4?

Resolution:

It is possible to resize an ext2 or ext3 filesystem in Red Hat Enterprise Linux 4.

Currently the only way to do so is with the ext2online command. This can only be done while the filesystem is online (mounted) and when the filesystem resides on a resizable logical volume. Red Hat Enterprise Linux 4 sets up the root filesystem (/) on LVM2 logical volumes by default during system installation.

It may be necessary to add physical volumes to the volume group and to extend the logical volume before resizing the filesystem.

Also note that there are restrictions on what you are able to do with the ext2online program:

  • Currently you can only increase the size of the filesystem; it is not possible to reduce a filesystem's size.
  • The filesystem may currently be increased in size by up to a factor of 1000. For example, a filesystem originally created at 100MB can only be increased to 1000 times its original size: 100MB x 1000 = 100000MB.
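Putting the pieces together, a typical online grow on Red Hat Enterprise Linux 4 might look like the following; this is a hedged sketch, and the device and volume names (/dev/sdb, VolGroup00, LogVol00) are illustrative assumptions, not names from the article:

```shell
# Assumed names: /dev/sdb is a spare disk, VolGroup00/LogVol00 holds the filesystem
pvcreate /dev/sdb                          # initialize the new disk as a physical volume
vgextend VolGroup00 /dev/sdb               # add the new PV to the volume group
lvextend -L+10G /dev/VolGroup00/LogVol00   # grow the logical volume by 10GB
ext2online /dev/VolGroup00/LogVol00        # grow the mounted ext3 filesystem to fill the LV
```

Note that the filesystem grow step comes last: the logical volume must already be large enough before ext2online is run.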

Moving a physical volume to another volume group

June 3, 2006

I created a physical volume and now I want to move it to another volume group, but LVM is not allowing me. How can I do this?

Resolution:

When initializing physical devices for use with the Logical Volume Manager (LVM), the pvcreate command writes metadata to roughly the first 100K of the physical device being activated. If you later remove this physical volume from a volume group, modify the existing partition structure, and then run pvcreate on the new partitions, you may see errors. Some of these errors report that the physical device belongs to an old volume group or that the device already contains metadata.

To get around this, the metadata on the physical device needs to be overwritten so that the new metadata written by pvcreate can take effect. Doing this is, in effect, equivalent to a pvremove command.

Below are two example scenarios in which you might have used the pvcreate command, and how to wipe the metadata off those particular physical devices.

  1. Initializing an entire disk:

     pvcreate /dev/hda

     To delete the metadata created by initializing this physical volume, use the following command:

     dd if=/dev/zero of=/dev/hda bs=512 count=1

  2. Initializing a partition with the pvcreate command:

     pvcreate /dev/hda2

     To delete the metadata created by initializing this physical volume, use the following command:

     dd if=/dev/zero of=/dev/hda2 bs=512 count=1

This command writes zeros to the first sector of the physical volume you are trying to initialize. It does not wipe all the LVM metadata, but it wipes the entry point to the metadata, allowing you to run pvcreate on the physical device successfully.

LVM2 contains a pvremove command, removing the need to run the dd command whenever you run into this sort of issue with LVM. Red Hat Enterprise Linux 4 ships with this version of LVM.
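With the LVM2 tools the same cleanup is a single command; a minimal sketch (the device name is illustrative):

```shell
# LVM2: remove the LVM label from a device that no longer belongs to a VG,
# instead of zeroing the first sector with dd
pvremove /dev/hda2
```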

The information provided in this document is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.

© 2003-2006 Red Hat, Inc. All rights reserved. This article is made available for copying and use under the Open Publication License, v1.0 which may be found at http://www.opencontent.org/openpub/.


Logical Volume Management basics

June 3, 2006

What is Logical Volume Management?

Table of Contents

2.1. Why would I want it?

2.2. Benefits of Logical Volume Management on a Small System

2.3. Benefits of Logical Volume Management on a Large System

Logical volume management provides a higher-level view of the disk storage on a computer system than the traditional view of disks and partitions. This gives the system administrator much more flexibility in allocating storage to applications and users.

Storage volumes created under the control of the logical volume manager can be resized and moved around almost at will, although this may need some upgrading of file system tools.

The logical volume manager also allows management of storage volumes in user-defined groups, allowing the system administrator to deal with sensibly named volume groups such as "development" and "sales" rather than physical disk names such as "sda" and "sdb".

 

Why would I want it?

Logical volume management is traditionally associated with large installations containing many disks but it is equally suited to small systems with a single disk or maybe two.

 

Benefits of Logical Volume Management on a Small System

One of the difficult decisions facing a new user installing Linux for the first time is how to partition the disk drive. The need to estimate just how much space is likely to be needed for system files and user files makes the installation more complex than is necessary and some users simply opt to put all their data into one large partition in an attempt to avoid the issue.

Once the user has guessed how much space is needed for /home, /usr, and / (or has let the installation program do it), it is quite common for one of these partitions to fill up even if there is plenty of disk space in one of the other partitions.

With logical volume management, the whole disk would be allocated to a single volume group, with logical volumes created to hold the /, /usr, and /home file systems. If, for example, the /home logical volume later filled up but there was still space available on /usr, it would be possible to shrink /usr by a few megabytes and reallocate that space to /home.

Another alternative would be to allocate minimal amounts of space for each logical volume and leave some of the disk unallocated. Then, when the partitions start to fill up, they can be expanded as necessary.
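The "allocate minimally, grow later" strategy described above might be set up like this; a sketch in which the volume group name vg0 and all sizes are assumptions for illustration:

```shell
# Create deliberately small LVs, leaving the rest of vg0 unallocated
lvcreate -n root -L 2G vg0
lvcreate -n usr  -L 1G vg0
lvcreate -n home -L 1G vg0

# Later, when /home starts to fill up, grow it from the unallocated pool
lvextend -L+500M /dev/vg0/home
# ...then grow the filesystem on it with the appropriate resize tool
```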

As an example: Joe buys a PC with an 8.4 Gigabyte disk on it and installs Linux using the following partitioning system:

 
/boot    /dev/hda1     10 Megabytes
swap     /dev/hda2    256 Megabytes
/        /dev/hda3      2 Gigabytes
/home    /dev/hda4      6 Gigabytes
        

This, he thinks, will maximize the amount of space available for all his MP3 files.

Sometime later Joe decides that he wants to install the latest office suite and desktop UI available, but realizes that the root partition isn't large enough. But, having archived all his MP3s onto a new writable DVD drive, there is plenty of space on /home.

His options are not good:

1. Reformat the disk, change the partitioning scheme, and reinstall.

2. Buy a new disk and figure out some new partitioning scheme that will require the minimum of data movement.

3. Set up a symlink farm on / pointing to /home and install the new software on /home.

With LVM this becomes much easier:

Jane buys a similar PC but uses LVM to divide up the disk in a similar manner:

 
/boot     /dev/hda1        10 Megabytes
swap      /dev/vg00/swap   256 Megabytes
/         /dev/vg00/root     2 Gigabytes
/home     /dev/vg00/home     6 Gigabytes
 
         

 

Note

/boot is not included on an LV because bootloaders don't understand LVM volumes yet. It may be possible to make /boot on LVM work, but you run the risk of having an unbootable system.

Warning

root on LV should be used by advanced users only

 

root on LVM requires an initrd image that activates the root LV. If a kernel is upgraded without building the necessary initrd image, that kernel will be unbootable. Newer distributions support LVM in their mkinitrd scripts as well as in their packaged initrd images, so this becomes less of an issue over time.

When she hits a similar problem she can reduce the size of /home by a gigabyte and add that space to the root partition.
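Jane's shrink-and-grow could be sketched as follows. This is a hedged illustration: the volume names (vg00/home, vg00/root) are assumptions, resize2fs stands in for whichever offline resizer your distribution ships, and the ordering matters — when shrinking, the filesystem must be resized before the LV, and the filesystem must be unmounted; getting this wrong destroys data:

```shell
# Shrink /home by a gigabyte (offline), then give that space to root
umount /home
e2fsck -f /dev/vg00/home         # resize2fs requires a clean filesystem
resize2fs /dev/vg00/home 5G      # shrink the filesystem first...
lvreduce -L 5G /dev/vg00/home    # ...then shrink the LV to match
mount /home

lvextend -L+1G /dev/vg00/root    # grow the root LV
ext2online /dev/vg00/root        # grow the mounted root filesystem
```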

Suppose that Joe and Jane then manage to fill up the /home partition as well and decide to add a new 20 Gigabyte disk to their systems.

Joe formats the whole disk as one partition (/dev/hdb1) and moves his existing /home data onto it and uses the new disk as /home. But he has 6 gigabytes unused or has to use symlinks to make that disk appear as an extension of /home, say /home/joe/old-mp3s.

Jane simply adds the new disk to her existing volume group and extends her /home logical volume to include the new disk. Or, in fact, she could move the data from /home on the old disk to the new disk and then extend the existing root volume to cover all of the old disk.
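Jane's disk addition might be sketched as below; the device and volume names are assumptions for illustration:

```shell
pvcreate /dev/hdb                 # initialize the new 20 Gigabyte disk as a PV
vgextend vg00 /dev/hdb            # add it to the existing volume group
lvextend -L+19G /dev/vg00/home    # let /home spill onto the new disk
                                  # (slightly under 20G, leaving room for LVM metadata)
# ...then grow the filesystem on /home with the appropriate resize tool
```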

 

Benefits of Logical Volume Management on a Large System

The benefits of logical volume management are more obvious on large systems with many disk drives.

Managing a large disk farm is a time-consuming job, made particularly complex if the system contains many disks of different sizes. Balancing the (often conflicting) storage requirements of various users can be a nightmare.

User groups can be allocated to volume groups and logical volumes, and these can be grown as required. It is possible for the system administrator to "hold back" disk storage until it is required. It can then be added to the (user) volume group that has the most pressing need.

When new drives are added to the system, it is no longer necessary to move users files around to make the best use of the new storage; simply add the new disk into an existing volume group or groups and extend the logical volumes as necessary.

It is also easy to take old drives out of service by moving the data from them onto newer drives – this can be done online, without disrupting user service.

 

Chapter 3. Anatomy of LVM

Table of Contents

3.1. volume group (VG)

3.2. physical volume (PV)

3.3. logical volume (LV)

3.4. physical extent (PE)

3.5. logical extent (LE)

3.6. Tying it all together

3.7. mapping modes (linear/striped)

3.8. Snapshots

This diagram gives an overview of the main elements in an LVM system:

 
+-- Volume Group --------------------------------+
|                                                |
|    +----------------------------------------+  |
| PV | PE |  PE | PE | PE | PE | PE | PE | PE |  |
|    +----------------------------------------+  |
|      .          .          .          .        |
|      .          .          .          .        |
|    +----------------------------------------+  |
| LV | LE |  LE | LE | LE | LE | LE | LE | LE |  |
|    +----------------------------------------+  |
|      .          .          .          .        |
|      .          .          .          .        |
|    +----------------------------------------+  |
| PV | PE |  PE | PE | PE | PE | PE | PE | PE |  |
|    +----------------------------------------+  |
|                                                |
+------------------------------------------------+
 
         

Another way to look at it is this (courtesy of Erik Bågfors on the linux-lvm mailing list):

 
    hda1   hdc1      (PV:s on partitions or whole disks)                        
       \   /                                                                    
        \ /                                                                     
       diskvg        (VG)                                                       
       /  |  \                                                                  
      /   |   \                                                                 
  usrlv rootlv varlv (LV:s)
    |      |     |                                                              
 ext2  reiserfs  xfs (filesystems)                                        
 
         

 

volume group (VG)

The Volume Group is the highest level abstraction used within the LVM. It gathers together a collection of Logical Volumes and Physical Volumes into one administrative unit.

physical volume (PV)

A physical volume is typically a hard disk, though it may well just be a device that 'looks' like a hard disk (eg. a software raid device).

logical volume (LV)

The equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such the LV can contain a file system (eg. /home).

physical extent (PE)

Each physical volume is divided into chunks of data, known as physical extents. These extents have the same size as the logical extents for the volume group.

logical extent (LE)

Each logical volume is split into chunks of data, known as logical extents. The extent size is the same for all logical volumes in the volume group.

Tying it all together

A concrete example will help:

Let's suppose we have a volume group called VG1 with a physical extent size of 4MB. Into this volume group we introduce two hard disk partitions, /dev/hda1 and /dev/hdb1. These partitions become physical volumes PV1 and PV2 (more meaningful names can be given at the administrator's discretion). The PVs are divided up into 4MB chunks, since this is the extent size for the volume group. The disks are different sizes, giving us 99 extents in PV1 and 248 extents in PV2. We can now create a logical volume of any size between 1 and 347 (99 + 248) extents. When the logical volume is created, a mapping is defined between logical extents and physical extents: e.g. logical extent 1 could map onto physical extent 51 of PV1, so data written to the first 4MB of the logical volume would in fact be written to the 51st extent of PV1.
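The arithmetic in this example is easy to check; a small shell sketch using the extent counts from the text above:

```shell
# VG1 example: PE size 4MB, 99 extents on PV1, 248 extents on PV2
pe_size_mb=4
pv1_extents=99
pv2_extents=248

total_extents=$(( pv1_extents + pv2_extents ))   # largest possible LV, in extents
max_lv_mb=$(( total_extents * pe_size_mb ))      # the same limit, in MB
echo "largest possible LV: $total_extents extents ($max_lv_mb MB)"
```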

mapping modes (linear/striped)

The administrator can choose between a couple of general strategies for mapping logical extents onto physical extents:

1. Linear mapping will assign a range of PEs to an area of an LV in order, e.g., LE 1 – 99 map onto PV1 and LE 100 – 347 map onto PV2.

2. Striped mapping will interleave the chunks of the logical extents across a number of physical volumes, e.g.,

   1st chunk of LE[1] -> PV1[1],
   2nd chunk of LE[1] -> PV2[1],
   3rd chunk of LE[1] -> PV3[1],
   4th chunk of LE[1] -> PV1[2],

   and so on. In certain situations this strategy can improve the performance of the logical volume.

Warning

LVM 1 Caveat

 

LVs created using striping cannot be extended past the PVs they were originally created on in LVM 1. In LVM 2, striped LVs can be extended by concatenating another set of devices onto the end of the first set. So you can get into a situation where your LV is a 2-stripe set concatenated with a linear set concatenated with a 4-stripe set. Are you confused yet?

 

LVM 2 FAQ

4.1.1. I have LVM 1 installed and running on my system. How do I start using LVM 2?

4.1.2. Do I need a special lvm2 kernel module?

4.1.3. I get errors about /dev/mapper/control when I try to use the LVM 2 tools. What's going on?

4.1.4. Which commands and types of logical volumes are currently supported in LVM 2?

4.1.5. Does LVM 2 use a different format from LVM 1 for its on-disk representation of Volume Groups and Logical Volumes?

4.1.6. Does LVM 2 support VGs and LVs created with LVM 1?

4.1.7. Can I upgrade my LVM 1 based VGs and LVs to LVM 2 native format?

4.1.8. I've upgraded to LVM 2, but the tools keep failing with out of memory errors. What gives?

4.1.9. I have my root partition on an LV in LVM 1. How do I upgrade to LVM 2? And what happened to lvmcreate_initrd?

4.1.10. How resilient is LVM to a sudden renumbering of physical hard disks?

4.1.11. I'm trying to fill my vg, and vgdisplay/vgs says that I have 1.87 GB free, but when I do an lvcreate vg -L1.87G it says "insufficient free extends". What's going on?

4.1.12. How are snapshots in LVM2 different from LVM1?

4.1.13. What is the maximum size of a single LV?

4.1.1. I have LVM 1 installed and running on my system. How do I start using LVM 2?

Here's the Quick Start instructions :)

1.      Start by removing any snapshot LVs on the system. These are not handled by LVM 2 and will prevent the origin from being activated when LVM 2 comes up.

2.      Make sure you have some way of booting the system other than from your standard boot partition. Have the LVM 1 tools, standard system tools (mount) and an LVM 1 compatible kernel on it in case you need to get back and fix some things.

3.      Grab the LVM 2 tools source and the device mapper source and compile them. You need to install the device mapper library using "make install" before compiling the LVM 2 tools. Also copy the dm/scripts/devmap_mknod.sh script into /sbin. I recommend only installing the 'lvm' binary for now so you have access to the LVM 1 tools if you need them. If you have access to packages for LVM 2 and device-mapper, you can install those instead, but beware of them overwriting your LVM 1 tool set.

4.      Get a device mapper compatible kernel, either built in or as a kernel module.

5.      Figure out where LVM 1 was activated in your startup scripts. Make sure the device-mapper module is loaded by that point (if you are using device mapper as a module) and add '/sbin/devmap_mknod.sh; lvm vgscan; lvm vgchange -ay' afterward.

6.      Install the kernel with device mapper support in it. Reboot. If all goes well, you should be running with lvm2.

4.1.2. Do I need a special lvm2 kernel module?

No. You need device-mapper. The lvm2 tools use device-mapper to interface with the kernel and do all their device mapping (hence the name device-mapper). As long as you have device-mapper, you should be able to use LVM2.

4.1.3. I get errors about /dev/mapper/control when I try to use the LVM 2 tools. What's going on?

The primary cause of this is not having run "dmsetup mknodes" after rebooting into a dm-capable kernel. This command generates the control node for device mapper.

If you don't have "dmsetup mknodes", don't despair! (Though you should probably upgrade to the latest version of device-mapper.) It's pretty easy to create the /dev/mapper/control file on your own:

1. Make sure you have the device-mapper module loaded (if you didn't build it into your kernel).

2. Run

   # cat /proc/misc | grep device-mapper | awk '{print $1}'

   and note the number printed. (If you don't get any output, refer to step 1.)

3. Run

   # mkdir /dev/mapper

   If you get an error saying /dev/mapper already exists, make sure it's a directory and move on.

4. Run

   # mknod /dev/mapper/control c 10 $number

   where $number is the number printed in step 2.

You should be all set now!
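The steps above can be collapsed into a short script; a sketch that assumes a Linux system with the device-mapper module already loaded and root privileges:

```shell
# Create /dev/mapper/control by hand (run as root)
number=$(grep device-mapper /proc/misc | awk '{print $1}')  # misc minor number
mkdir -p /dev/mapper                       # -p: no error if it already exists
mknod /dev/mapper/control c 10 "$number"   # character device, major 10 (misc)
```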

4.1.4. Which commands and types of logical volumes are currently supported in LVM 2?

If you are using the stable 2.4 device mapper patch from the lvm2 tarball, all the major functionality you'd expect using lvm1 is supported with the lvm2 tools. (You still need to remove snapshots before upgrading from lvm1 to lvm2)

If you are using the version of device mapper in the 2.6 kernel.org kernel series the following commands and LV types are not supported:

·         pvmove

·         snapshots

The beginnings of support for these features are in the unstable device mapper patches maintained by Joe Thornber.

4.1.5. Does LVM 2 use a different format from LVM 1 for its on-disk representation of Volume Groups and Logical Volumes?

Yes. LVM 2 uses lvm 2 format metadata. This format is much more flexible than the LVM 1 format metadata, removing or reducing most of the limitations LVM 1 had.

4.1.6. Does LVM 2 support VGs and LVs created with LVM 1?

Yes. LVM 2 will activate and operate on VGs and LVs created with LVM 1. The exception is snapshots created with LVM 1 – these should be removed before upgrading. Snapshots that remain after upgrading will have to be removed before their origins can be activated by LVM 2.

4.1.7. Can I upgrade my LVM 1 based VGs and LVs to LVM 2 native format?

Yes. Use vgconvert to convert your VG and all LVs contained within it to the new lvm 2 format metadata. Be warned that it's not always possible to revert back to lvm 1 format metadata.

4.1.8. I've upgraded to LVM 2, but the tools keep failing with out of memory errors. What gives?

One possible cause of this is that some versions of LVM 1 did not put a UUID into the PV and VG structures as they were supposed to. (The user who originally reported this bug was using Mandrake 9.2, but it is not necessarily limited to that distribution.) The most current versions of the LVM 2 tools automatically fill in missing UUIDs for these structures, so grab a more current version and your problem should be solved. If not, post to the linux-lvm mailing list.

4.1.9. I have my root partition on an LV in LVM 1. How do I upgrade to LVM 2? And what happened to lvmcreate_initrd?

Upgrading to LVM 2 is a bit trickier with root on LVM, but it's not impossible. You need to queue up a kernel with device-mapper support and install the lvm2 tools (you might want to make a backup of the lvm 1 tools, or find a rescue disk with the lvm tools built in, in case you need them before you're done). Then find a mkinitrd script that has support for your distro and lvm 2.

Currently, this is the list of mkinitrd scripts that I know support lvm2, sorted by distro:

mkinitrd scripts with lvm 2 support

Fedora

The latest Fedora Core 2 mkinitrd handles lvm2, but it relies on a statically built lvm binary from the latest lvm 2 tarball.

Red Hat 9 users may be able to use this as well.

Debian

There is an unofficial version here

Generic

There is a version in the lvm2 source tree under scripts/lvm2_createinitrd/. See the documentation in that directory for more details.

4.1.10. How resilient is LVM to a sudden renumbering of physical hard disks?

It's fine – LVM identifies PVs by UUID, not by device name.

Each disk (PV) is labeled with a UUID, which uniquely identifies it to the system. 'vgscan' identifies this after a new disk is added that changes your drive numbering. Most distros run vgscan in the lvm startup scripts to cope with this on reboot after a hardware addition. If you're doing a hot-add, you'll have to run this by hand I think. On the other hand, if your vg is activated and being used, the renumbering should not affect it at all. It's only the activation that needs the identifier, and the worst case scenario is that the activation will fail without a vgscan with a complaint about a missing PV.

Note

The failure or removal of a drive that LVM is currently using will cause problems with current use and future activations of the VG that was using it.

4.1.11. I'm trying to fill my vg, and vgdisplay/vgs says that I have 1.87 GB free, but when I do an lvcreate vg -L1.87G it says "insufficient free extends". What's going on?

The 1.87 GB figure is rounded to 2 decimal places, so it's probably 1.866 GB or something. This is a human-readable output to give you a general idea of how big the VG is. If you want to specify an exact size, you must use extents instead of some multiple of bytes.

In the case of vgdisplay, use the Free PE count instead of the human readable capacity.

 
              Free  PE / Size          478 / 1.87 GB
                                       ^^^
              

So, this would indicate that you should run:

 
# lvcreate vg -l478 

Note that instead of an upper-case 'L', we used a lower-case 'l' to tell lvm to use extents instead of bytes.

In the case of vgs, you need to instruct it to tell you how many extents are available:

 
# vgs -o +vg_free_count,vg_extent_count
              

This tells vgs to append the free extent count and the total extent count to the end of the vgs listing. Use the free extent number the same way you would in the vgdisplay case above.
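The same idea can be scripted so the extent count is never retyped by hand; a sketch in which the VG name "vg" and the LV name "filler" are assumptions:

```shell
# Create an LV that consumes every free extent in VG "vg"
free=$(vgs --noheadings -o vg_free_count vg | tr -d ' ')  # free PE count, whitespace stripped
lvcreate -n filler -l "$free" vg                          # lower-case -l: size in extents
```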

4.1.12. How are snapshots in LVM2 different from LVM1?

In LVM2, snapshots are read/write by default, whereas in LVM1 they were read-only. See Section 3.8 for more details.

4.1.13. What is the maximum size of a single LV?

The answer to this question depends upon the CPU architecture of your computer and the kernel you are running:

·         For 2.4 based kernels, the maximum LV size is 2TB. For some older kernels, however, the limit was 1TB due to signedness problems in the block layer. Red Hat Enterprise Linux 3 Update 5 has fixes to allow the full 2TB LVs. Consult your distribution for more information in this regard.

·         For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.

·         For 64-bit CPUs on 2.6 kernels, the maximum LV size is 8EB. (Yes, that is a very large number.)

 


The Linux Logical Volume Manager

June 3, 2006

The Linux Logical Volume Manager

by Heinz Mauelshagen and Matthew O'Keefe

Storage technology plays a critical role in increasing the performance, availability, and manageability of Linux servers. One of the most important new developments in the Linux 2.6 kernel—on which the Red Hat® Enterprise Linux® 4 kernel is based—is the Linux Logical Volume Manager, version 2 (or LVM 2). It combines a more consistent and robust internal design with important new features including volume mirroring and clustering, yet it is upwardly compatible with the original Logical Volume Manager 1 (LVM 1) commands and metadata. This article summarizes the basic principles behind the LVM and provides examples of basic operations to be performed with it.

Introduction

Logical volume management is a widely used technique for deploying logical rather than physical storage. With LVM, "logical" partitions can span multiple physical hard drives and can be resized (unlike traditional ext3 "raw" partitions). A physical disk is divided into one or more physical volumes (PVs), and volume groups (VGs) are created by combining PVs, as shown in Figure 1, "LVM internal organization". Notice that a VG can be an aggregate of PVs from multiple physical disks.

Figure 1. LVM internal organization

Figure 2, "Mapping logical extents to physical extents", shows how logical volumes are mapped onto physical volumes. Each PV consists of a number of fixed-size physical extents (PEs); similarly, each LV consists of a number of fixed-size logical extents (LEs). (LEs and PEs are always the same size; the default in LVM 2 is 4 MB.) An LV is created by mapping logical extents to physical extents, so that references to logical block numbers are resolved to physical block numbers. These mappings can be constructed to achieve particular performance, scalability, or availability goals.

Figure 2. Mapping logical extents to physical extents

For example, multiple PVs can be connected together to create a single large logical volume, as shown in Figure 3, "LVM linear mapping". This approach, known as linear mapping, allows a file system or database larger than a single volume to be created using two physical disks. An alternative approach is striped mapping, in which stripes (groups of contiguous physical extents) from alternate PVs are mapped to a single LV, as shown in Figure 4, "LVM striped mapping". Striped mapping allows a single logical volume to nearly achieve the combined performance of two PVs and is often used to achieve high-bandwidth disk transfers.

Figure 3. LVM linear mapping

Figure 4. LVM striped mapping (4 physical extents per stripe)

Through these different types of logical-to-physical mappings, LVM can achieve four important advantages over raw physical partitions:

  1. Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server
  2. Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible
  3. Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing)
  4. Logical volume snapshots can be created to represent the exact state of the volume at a certain point-in-time, allowing accurate backups to proceed simultaneously with regular system operation

Basic LVM commands

Initializing disks or disk partitions

To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) using the pvcreate command. For example, to convert /dev/hda and /dev/hdb into PVs use the following commands:


pvcreate /dev/hda
pvcreate /dev/hdb

If a Linux partition is to be converted, make sure that it is given partition type 0x8E using fdisk, then use pvcreate:


pvcreate /dev/hda1

Creating a volume group

Once you have one or more physical volumes created, you can create a volume group from these PVs using the vgcreate command. The following command:


vgcreate  volume_group_one /dev/hda /dev/hdb

creates a new VG called volume_group_one with two disks, /dev/hda and /dev/hdb, and 4 MB PEs. If both /dev/hda and /dev/hdb are 128 GB in size, then the VG volume_group_one will have a total of 2**16 physical extents that can be allocated to logical volumes.
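The 2**16 figure can be verified with a little arithmetic; a quick shell sketch using the sizes from the example:

```shell
# Two 128GB PVs with 4MB extents: how many PEs does the VG hold?
pv_size_gb=128
pe_size_mb=4
extents_per_pv=$(( pv_size_gb * 1024 / pe_size_mb ))   # extents on each PV
total_extents=$(( 2 * extents_per_pv ))                # 2**16 for the whole VG
echo "$total_extents"
```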

Additional PVs can be added to this volume group using the vgextend command. The following commands convert /dev/hdc into a PV and then add that PV to volume_group_one:


pvcreate /dev/hdc
vgextend volume_group_one /dev/hdc

This same PV can be removed from volume_group_one by the vgreduce command:


vgreduce volume_group_one /dev/hdc

Note that any logical volumes using physical extents from PV /dev/hdc will be removed as well. This raises the issue of how we create an LV within a volume group in the first place.

Creating a logical volume

We use the lvcreate command to create a new logical volume using the free physical extents in the VG pool. Continuing our example using VG volume_group_one (with two PVs /dev/hda and /dev/hdb and a total capacity of 256 GB), we could allocate nearly all the PEs in the volume group to a single linear LV called logical_volume_one with the following LVM command:


lvcreate -n logical_volume_one   --size 255G volume_group_one 

Instead of specifying the LV size in GB we could also specify it in terms of logical extents. First we use vgdisplay to determine the number of PEs in the volume_group_one:


vgdisplay volume_group_one | grep "Total PE"

which returns


Total PE   65536

Then the following lvcreate command will create a logical volume with 65536 logical extents and fill the volume group completely:


lvcreate -n logical_volume_one  -l 65536 volume_group_one

To create a 1500MB linear LV named logical_volume_one and its block device special file /dev/volume_group_one/logical_volume_one use the following command:


lvcreate -L1500 -n logical_volume_one volume_group_one

The lvcreate command uses linear mappings by default.

Striped mappings can also be created with lvcreate. For example, to create a 255 GB large logical volume with two stripes and stripe size of 4 KB the following command can be used:


lvcreate -i2 -I4 --size 255G -n logical_volume_one_striped volume_group_one

If you want the logical volume to be allocated from a specific physical volume or volumes in the volume group, specify the PV or PVs at the end of the lvcreate command line. For example, this command:


lvcreate -i2 -I4 -L128G -n logical_volume_one_striped volume_group_one /dev/hda /dev/hdb 

creates a 128 GB striped LV named logical_volume_one_striped that is striped across two PVs (/dev/hda and /dev/hdb) with a stripe size of 4 KB.

An LV can be removed from a VG through the lvremove command, but first the LV must be unmounted:


umount /dev/volume_group_one/logical_volume_one
lvremove /dev/volume_group_one/logical_volume_one

Note that LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout:


/dev/<volume group name>/<logical volume name>

so that if we had two volume groups myvg1 and myvg2 and each with three logical volumes named lv01, lv02, lv03, six device special files would be created:


/dev/myvg1/lv01
/dev/myvg1/lv02
/dev/myvg1/lv03
/dev/myvg2/lv01
/dev/myvg2/lv02
/dev/myvg2/lv03
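The naming convention is purely mechanical; this small loop generates exactly the six example paths above:

```shell
# Generate the /dev/<vg>/<lv> device special file paths for the
# example volume groups and logical volumes.
for vg in myvg1 myvg2; do
  for lv in lv01 lv02 lv03; do
    echo "/dev/$vg/$lv"
  done
done
```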

Extending a logical volume

An LV can be extended by using the lvextend command. You can specify either an absolute size for the extended LV or how much additional storage you want to add to the LV. For example:


lvextend -L120G /dev/myvg/homevol

will extend LV /dev/myvg/homevol to 120 GB, while


lvextend -L+10G /dev/myvg/homevol

will extend LV /dev/myvg/homevol by an additional 10 GB. Once a logical volume has been extended, the underlying file system can be expanded to exploit the additional storage now available on the LV. With Red Hat Enterprise Linux 4, it is possible to expand both the ext3 and GFS file systems online, without bringing the system down. (The ext3 file system can also be shrunk or expanded offline using the ext2resize command.) To resize ext3 online, the following command


ext2online /dev/myvg/homevol

will extend the ext3 file system to completely fill the LV, /dev/myvg/homevol, on which it resides.

The file system specified by device (partition, loop device, or logical volume) or by mount point must currently be mounted, and by default it will be enlarged to fill the device. If an optional size parameter is specified, that size is used instead.
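Putting the two commands together, a typical online-grow sequence looks like the sketch below. The VG/LV names are the examples from the text, and the commands are echoed as a dry run, since running them for real requires root and an actual LVM2 setup:

```shell
# Dry-run sketch of growing an LV and its mounted ext3 filesystem
# online on RHEL 4. Remove the echo wrappers to run for real.
LV=/dev/myvg/homevol
echo "lvextend -L+10G $LV"   # grow the logical volume by 10 GB
echo "ext2online $LV"        # grow the mounted ext3 fs to fill the LV
```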

Differences between LVM1 and LVM2

The new release of LVM, LVM 2, is available only on Red Hat Enterprise Linux 4 and later kernels. It is upwardly compatible with LVM 1 and retains the same command-line interface structure. However, it uses a new, more scalable and resilient metadata structure that allows for transactional metadata updates (for quick recovery after server failures), very large numbers of devices, and clustering. For Enterprise Linux servers deployed in mission-critical environments that require high availability, LVM2 is the right choice for Linux volume management. Table 1 summarizes the differences between LVM1 and LVM2 in features, kernel support, and other areas.

Feature                                    LVM1               LVM2
RHEL AS 2.1 support                        No                 No
RHEL 3 support                             Yes                No
RHEL 4 support                             No                 Yes
Transactional metadata for fast recovery   No                 Yes
Shared volume mounts with GFS              No                 Yes
Cluster Suite failover supported           Yes                Yes
Striped volume expansion                   No                 Yes
Max number of PVs, LVs                     256 PVs, 256 LVs   2**32 PVs, 2**32 LVs
Max device size                            2 terabytes        8 exabytes (64-bit CPUs)
Volume mirroring support                   No                 Yes, in Fall 2005

Table 1. A comparison of LVM 1 and LVM 2

Summary

The Linux Logical Volume Manager provides increased manageability, uptime, and performance for Red Hat Enterprise Linux servers. You can learn more about LVM by visiting the following websites:

About the authors

From 1990 to May 2000, Matthew O'Keefe taught and performed research in storage systems and parallel simulation software as a professor of electrical and computer engineering at the University of Minnesota. He founded Sistina Software in May of 2000 to develop storage infrastructure software for Linux, including the Global File System (GFS) and the Linux Logical Volume Manager (LVM). Sistina was acquired by Red Hat in December 2003, where Matthew now directs storage software strategy.

Starting in 2000, Heinz Mauelshagen began working on device mapper and LVM at Sistina Software. Sistina was acquired by Red Hat in December 2003. Heinz Mauelshagen is currently continuing his work on clustering and storage as a Red Hat developer in Germany. Before joining Sistina, Heinz was a senior system administrator at T-Systems for a decade.

h1

LVM: What steps are needed to resize an existing disk partition when the Logical Volume Manager (LVM) is not being used?

June 3, 2006

Issue:

What steps are needed to resize an existing disk partition when the Logical Volume Manager (LVM) is not being used?

Resolution:

Release Found: Red Hat Enterprise Linux version 3

Symptom:
You do not have enough space on one of your filesystems, but do on a filesystem located on another partition. You are not using the Logical Volume Manager (LVM) to "virtualize" your physical storage.

Solution:
Note: Resizing filesystems and their underlying partitions can be VERY dangerous. You can only resize partitions at their end position on the disk; you cannot move partitions or resize them from their beginning. While this is possible in most situations, it is not a practice Red Hat can support in any way. That said, proceed at your own risk!

Please be sure to read through all the steps first before executing the provided commands.

Here are the steps involved for SHRINKING a filesystem & partition:

  1. Backup your data.
  2. Print the output from the fdisk -l command on a printer, or write down the details by hand. This is in case you need to restore your partition table to its previous state.
  3. Make note of the cylinder size of the disk in bytes, and call this number C. For example, from the following output you would note C = 8225280
    Disk /dev/hda: 40.0 GB, 40000000000 bytes
    255 heads, 63 sectors/track, 4863 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
  4. Run the command tune2fs -l <device> where <device> represents the device file of the partition which contains the filesystem you are going to resize.
  5. Make note of the following values from the above output: Block count, Block Size, and Free blocks. Label the Block Count T, the Block Size K, and Free Blocks F.
  6. Reboot the system into rescue mode. Do this by booting off of the first CD and typing linux rescue at the boot: prompt.
  7. DO NOT mount any partitions, especially those on the device containing filesystems you are going to resize.
  8. Take the number of bytes by which you would like to shrink the filesystem, and label it Z.
  9. Divide Z by C, rounding any fraction up to the next whole number (use 1 if the result would otherwise be zero). Call this resulting number N.
  10. Calculate ((N*C)/K)+1 and call the result X.
  11. Make sure that X is not greater than F. If it is, then you will need to reduce Z and repeat the last 2 steps.
  12. Subtract X from T calling the result R.
  13. Execute the command e2fsck -fy <device>.
  14. Execute the command resize2fs <device> <R>.
  15. Assuming the resize was successful, run the command e2fsck -y <device> to verify that the filesystem is still intact. If it checks out, then continue. If it fails, then you have most likely lost the data on this filesystem.
  16. Enter the fdisk utility with the command fdisk <device>.
  17. Display the current partition table with the p command.
  18. Use the d command to delete the partition the resized filesystem is on.
  19. Use the n command to create a new partition.
  20. For the starting cylinder, specify the same value as before. (You did print out the original partition table in step 2, right?)
  21. For the ending cylinder, specify the original ending cylinder minus N.
  22. Use the w command to save the new partition table.
  23. Run the command e2fsck -y <device> to verify that the filesystem is still intact.
  24. If it checks out, then you're done. If the check fails to find a valid filesystem, then repeat the last 8 steps with a smaller value for N.
  25. If everything fails, you can usually recover by re-creating the partition table EXACTLY as it was before. Then resize the partition back to its original size with resize2fs <device> <T>. Again, use the e2fsck -y <device> command to verify that the filesystem is intact.
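The arithmetic in steps 8 through 12 can be checked with plain shell arithmetic. In this sketch, C is the cylinder size from the example fdisk output above; K, T, and Z are illustrative sample values, not figures from the article:

```shell
# Worked shrink arithmetic (steps 8-12). C is from the fdisk example;
# K, T and Z are assumed sample values for illustration.
C=8225280        # cylinder size in bytes (fdisk)
K=4096           # filesystem block size (tune2fs "Block size")
T=9765625        # filesystem block count (tune2fs "Block count")
Z=1000000000     # bytes to shrink the filesystem by
N=$(( (Z + C - 1) / C ))     # step 9: ceil(Z / C)
X=$(( (N * C) / K + 1 ))     # step 10 (step 11: X must not exceed F)
R=$(( T - X ))               # step 12: new size in blocks for resize2fs
echo "N=$N X=$X R=$R"        # prints N=122 X=244992 R=9520633
```

R is then the value passed to resize2fs in step 14, and N is the number of cylinders trimmed from the partition's end in step 21.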

Here are the steps involved for EXPANDING a filesystem & partition:

  1. Backup your data.
  2. Print the output from the fdisk -l command on a printer, or write down the details by hand. This is in case you need to restore your partition table to its previous state.
  3. Make note of the cylinder size of the disk in bytes, and call this number C. For example, from the following output you would note C = 8225280
    Disk /dev/hda: 40.0 GB, 40000000000 bytes
    255 heads, 63 sectors/track, 4863 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
  4. Run the command tune2fs -l <device> where <device> represents the device file of the partition which contains the filesystem you are going to resize.
  5. Make note of the following values from the above output: Block count, Block Size, and Free blocks. Label the Block Count T, the Block Size K, and Free Blocks F.
  6. Reboot the system into rescue mode. Do this by booting off of the first CD and typing linux rescue at the boot: prompt.
  7. DO NOT mount any partitions, especially those on the device containing filesystems you are going to resize.
  8. Take the number of bytes you would like to add to the filesystem and label it Z.
  9. Divide Z by C, rounding any fraction up to the next whole number (use 1 if the result would otherwise be zero). Call this resulting number N.
  10. Calculate ((N*C)/K) and call the result X.
  11. Add X to T calling the result R.
  12. Execute the command e2fsck -fy <device>.
  13. Enter the fdisk utility with the command fdisk <device>.
  14. Display the current partition table with the p command.
  15. Use the d command to delete the partition the resized filesystem is on.
  16. Use the n command to create a new partition.
  17. For the starting cylinder, specify the same value as before. (You did print out the original partition table in step 2, right?)
  18. For the ending cylinder, specify the original ending cylinder plus N.
  19. Use the w command to save the new partition table.
  20. Run the command e2fsck -y <device> to verify that the filesystem is intact.
  21. Execute the command resize2fs -f <device> <R>
    Note: you must use -f to grow the filesystem; otherwise resize2fs will refuse the command.
  22. Assuming the resize was successful, run the command e2fsck -y <device> to verify that the filesystem is still intact. If it checks out, you're done! If it fails, then you have most likely lost the data on this filesystem.
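The expansion arithmetic in steps 8 through 11 can be checked the same way. C is the cylinder size from the example fdisk output above; K, T, and Z are illustrative assumptions. Note the formula for X has no "+1" when expanding, and R is T plus X rather than minus:

```shell
# Worked expand arithmetic (steps 8-11). C is from the fdisk example;
# K, T and Z are assumed sample values for illustration.
C=8225280        # cylinder size in bytes (fdisk)
K=4096           # filesystem block size (tune2fs "Block size")
T=9765625        # filesystem block count (tune2fs "Block count")
Z=500000000      # bytes to add to the filesystem
N=$(( (Z + C - 1) / C ))   # step 9: ceil(Z / C)
X=$(( (N * C) / K ))       # step 10: no "+1" when expanding
R=$(( T + X ))             # step 11: new size in blocks for resize2fs
echo "N=$N X=$X R=$R"      # prints N=61 X=122495 R=9888120
```

R is then the value passed to resize2fs -f in step 21, and N is the number of cylinders added to the partition's end in step 18.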

The information provided in this document is for your information only. The origin of this information may be internal or external to Red Hat. While Red Hat attempts to verify the validity of this information before it is posted, Red Hat makes no express or implied claims to its validity.

© 2003-2006 Red Hat, Inc. All rights reserved. This article is made available for copying and use under the Open Publication License, v1.0 which may be found at http://www.opencontent.org/openpub/.
