Linux Logical Volume Manager (LVM) on Software RAID

November 2002

Logical Volume Manager is now included with most Linux distributions. The RedHat 8.0 installer even allows you to create LVM volumes during initial install. LVM offers capabilities previously only found in expensive products like Veritas.

If you plan on using LVM, I strongly recommend doing so on a RAID system, either hardware or software. Disks are dirt cheap nowadays, so there's really no excuse not to have mirroring (or RAID 5 if that suits your usage better). The RedHat installer will let you create LVM volumes on top of RAID volumes (it's a bit confusing, but it works), or you can do it later. Also remember that RAID is not a substitute for regular and reliable backups!

Before we get to how you'd do this, let's go over what LVM gives you. First, LVM combines physical devices (partitions on disks in this case) into what it calls Volume Groups. Filesystems are then built on the Volume Group or groups. You can have a Volume Group that has only one disk partition in it, or several partitions on one or more disks. Each Volume Group can contain multiple filesystems. With this, you gain a lot of new capabilities.

  • Increase or decrease filesystem size

    Decreasing the size of a filesystem returns space to the Volume Group. Increasing draws space from it. If you have too much space allocated for a file system, you can decrease it, and use that space somewhere else that needs it.

    Yes, you can do that with "parted", but it's not at all the same. You aren't dealing with contiguous disk blocks with LVM - all you need is free space in the Volume Group.

  • Add more physical storage

    You can add more physical drives to an existing Volume Group, which of course immediately gives you more room to extend file systems.

  • Create "snapshots" of filesystems.

    This is a great feature, but it requires a bit of explanation, so we'll leave the details for later. Snapshots let you freeze a filesystem in time and go on using it while you leisurely back up the frozen data. What's wonderful about this is that it is NOT necessarily a full copy of your original filesystem - more on that later.

  • Move Volume Groups to new physical storage

    I'm not going to cover this here, but it's a great feature, and you can even do it while the filesystems are in use!

  • Striping

    I'm not going to cover that here (there are some references at the bottom of this article that do cover it), but you can get the performance benefit of striping as part of LVM. If you do this, it is even more critical that you mirror your drives. If you used RAID 5, there would be no point in striping in LVM, as you would already have that benefit (see RAID if you don't understand that).
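For the curious, striping is done at logical volume creation time with lvcreate's -i (number of stripes) and -I (stripe size) options. A minimal sketch with made-up volume group and volume names, since striping isn't covered further here:

```shell
# Create a 4GB logical volume striped across 2 physical volumes,
# with a 64KB stripe size. "myvg" and "stripedlv" are hypothetical names.
lvcreate -i 2 -I 64 -L 4G -n stripedlv myvg
mkfs -t ext3 /dev/myvg/stripedlv
```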

Configuring LVM on mirrored drives during RedHat Install

You might want to review Software Mirroring if you have never done that. We start by creating RAID devices, upon which we will then create LVM volumes. For this test machine, I created a layout like this:

Primary disk     Size             Partition type
/dev/hda1        102 MB           Software RAID (primary partition)
/dev/hda2        128 MB           Swap
/dev/hda3        (rest of disk)   Software RAID

Secondary disk   Size             Partition type
/dev/hdb1        102 MB           Software RAID (primary partition)
/dev/hdb2        128 MB           Swap
/dev/hdb3        (rest of disk)   Software RAID

Note that there are two swap partitions, and that they will not be mirrored.

I then made two RAID devices, choosing "Physical Volume (LVM)" as the filesystem type for the second.

Raid device   Partitions used           Filesystem type         Mount point
/dev/md0      /dev/hda1 and /dev/hdb1   ext3                    /boot
/dev/md1      /dev/hda3 and /dev/hdb3   Physical Volume (LVM)   N/A

Let's stop for a second and go over this carefully, because it can be confusing. We've created mirrors of our boot partition and of what would ordinarily become root. But there is no root partition yet: the place where it will go is mirrored, but we haven't created a filesystem on it.

The next step is to click LVM to create a Volume Group. A Volume Group needs one or more Physical Volumes, and we have one. Note that this is very different from RAID, where you need at least two Software RAID partitions to do anything; for an LVM Volume Group, one Physical Volume is enough.

It is here, in the Volume Group, that we create filesystems - note the plural. With RAID, your partition is one filesystem, but with LVM you can create multiple filesystems within one Volume Group. To do that, just click the LVM button and it will bring up a new Volume Group. As we only have one Physical Volume here, that's what it will use, and it's ready to let you add filesystems.
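If you skipped the installer and wanted to build the same thing by hand later, the command-line equivalent would look roughly like this - a sketch only; the logical volume names are made up, and Volume00 is the group name used later in this article:

```shell
# Turn the second RAID device into an LVM Physical Volume,
# build a Volume Group from it, and carve out logical volumes.
pvcreate /dev/md1                  # mark /dev/md1 as an LVM physical volume
vgcreate Volume00 /dev/md1         # one PV is enough for a volume group
lvcreate -L 4G -n rootlv Volume00  # hypothetical LV for /
lvcreate -L 4G -n varlv Volume00   # hypothetical LV for /var
```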

Note: although I put / into the LVM for this test setup, it may not be a good idea to do so because emergency boot media probably won't understand LVM filesystems. Supposedly RedHat 8.0 does support / on LVM (but not /boot).

So I added two filesystems, each 4GB in size, one mounted at / and the other at /var. I had much more space available (this was a 30GB drive), but I didn't bother to use it at this time. I then proceeded with the install as usual. Well, almost as usual: several times in testing, the root password field would not get focus. The solution is to right-click over the field, choose "Select All", and then click in the field again.

Extending File Systems

After booting up, I had the expected 4GB / and 4GB /var. As you will remember, the actual disk is much larger, so let's add another filesystem:

lvcreate -L 4G -n mylv Volume00      # create a 4GB logical volume named mylv
mkfs -t ext3 /dev/Volume00/mylv      # put an ext3 filesystem on it
mkdir /mylv
mount /dev/Volume00/mylv /mylv

That added a 4GB filesystem. Unfortunately, I made a mistake - I needed 6 GB. Well, at this point I could just blow it away (lvremove /dev/Volume00/mylv), but suppose I'd already added a lot of data? No problem, just unmount it and extend it:

umount /mylv
e2fsadm -L+2G /dev/Volume00/mylv
mount /dev/Volume00/mylv /mylv

The "ef2fsadm" actually runs lower level commands for you: it fscks the file system, runs lvextend to extend the logical volume, and then resizes the filesystem to fit. Much easier to let e2fsadm handle the whole job.

Adding more disks

If I really needed a lot more space, I could add a hard drive, use "pvcreate" to turn it into a physical volume, and then use "vgextend" to add that space to the Volume Group:

pvcreate /dev/sda1
vgextend Volume00 /dev/sda1

It just can't get much easier! Remember though: those disks should be mirrored too.
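Since those added disks should be mirrored too, a more complete sketch would build an md mirror first and hand that to LVM instead of a bare partition. Device names here are hypothetical:

```shell
# Mirror two new disks, then add the resulting RAID device to the
# volume group, keeping everything in Volume00 redundant.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md2
vgextend Volume00 /dev/md2
```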


Snapshots

This is a very useful feature. Many of us have the situation where important data needs to be backed up, but it cannot be used while the backup is running, because then the backed-up files would be out of sync with each other. For example, suppose you have an accounting system that is recording orders. The accounts receivable file gets backed up now, and you take an order. Both a/r and the customer file get updated to reflect the new order, but a/r has already been backed up. When the customer file finally makes it to tape, it's not consistent with a/r, and of course it needs to be. Without snapshots, your only recourse is to stop taking orders while the backup runs. If you have lots of disk space, you could copy the whole accounting system and back up the copy, but that can take a lot of time too, and you may not have the space. Snapshots are the solution.

Before you do the next step, make sure you've put a few files in /mylv, and make at least one of them unimportant. Then create the snapshot.

lvcreate --size 200M --snapshot -n mysnap /dev/Volume00/mylv
mkdir /mylvsnap
mount /dev/Volume00/mysnap /mylvsnap

Right off the bat you should notice something strange. We created mysnap very specifically with a size of 200MB, and trust me, that's all it took away from us, but df shows it being the same size (6GB) as mylv. We'll get back to why that is in a minute, but first take a look at the files in /mylvsnap. They are identical to the files in /mylv, right? OK, now go edit a file in /mylv. Does it change in /mylvsnap? No, it does not. Remove a file in /mylv - it's still there in /mylvsnap. Add a new file to /mylv, and it does NOT appear in /mylvsnap. How is this done, and most especially, how is it done in 200MB?

It's not magic

OK, it is magic. What is going on is that /mylvsnap contains absolutely nothing UNLESS something changes back at /mylv. If you ask for a file from /mylvsnap that has not changed, the data is read right from /mylv. But if a file IS changed, then before the change is written, the data blocks that don't yet have the changes are copied to the snapshot. Note that entire files are NOT written, just the data blocks that are about to change. So, as long as we don't change more than 200MB worth of data in /mylv, we can have our cake and eat it too. Our procedure will be:

  • Stop using the filesystem, shut down any databases that need to be shutdown etc.
  • Create the snapshot
  • Start up our databases, go back to work.
  • Start backing up /mylvsnap

Our time without access is minutes or seconds - just however long it takes to stop the processes and restart them. The backup can take its sweet time - as long as it doesn't take so long that more than 200MB of data changes underneath it. That does mean that the size of mysnap has to be a bit of an educated guess. It also means that as soon as you are done with the backup, mysnap should be removed:
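The whole procedure might look something like this as a script - a sketch only; the database service name and backup destination are placeholders for whatever your system actually uses:

```shell
# Quiesce the application so the data on disk is consistent.
service mydb stop                    # placeholder for your shutdown command

# Freeze the filesystem in time - this takes only a moment.
lvcreate --size 200M --snapshot -n mysnap /dev/Volume00/mylv

# Back in business almost immediately.
service mydb start

# Now back up the frozen view at leisure.
mkdir -p /mylvsnap
mount /dev/Volume00/mysnap /mylvsnap
tar czf /backup/mylv.tar.gz -C /mylvsnap .   # placeholder backup target

# Done - tear the snapshot down so it stops tracking changes.
umount /mylvsnap
lvremove -f /dev/Volume00/mysnap
```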

umount /mylvsnap
lvremove /dev/Volume00/mysnap

If you don't remove it, it will go on copying data as it is changed and eventually it will run out of room. You can't just leave it there for next time!
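While the snapshot exists, you can keep an eye on how much of its copy-on-write space has been used. lvdisplay reports the allocation - the exact wording of the output varies between LVM versions, so treat this as a sketch:

```shell
# Show snapshot details, including how much of the 200MB
# copy-on-write area has been consumed so far.
lvdisplay /dev/Volume00/mysnap | grep -i allocated
```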



© November 2002 Tony Lawrence All rights reserved






---February 6, 2005

Tony, is there a reason why you recommend RAID in conjunction with an LVM setup? I just created a new LVM, which is going to be used primarily as my video work area. I need lots of storage for editing video, creating ISOs to burn to DVDs, etc., so reliability does not have to be a priority. Are LVM setups less stable? I set mine up as striping, and when using hdparm -T to do some performance testing, I was getting about 250MB/s on average! Using hdparm -t, I would get about 51MB/s. That is pretty damn fast! My main disk with the OS gets about 45MB/s, and I thought that was fast.

I think my next project will be to create a LVM for my main OS partition, and see what kind of speed I can get out of that.


---February 6, 2005

It's just reliability. If you don't need it, and can rely on backups, fine.


Sat Apr 30 06:23:48 2005: 414   anonymous

Hello Tony. Previously I've used LVM on HPUX machines. There was an add-on product called MirrordiskUX that allowed native LVM to support mirroring.

Two questions:

1. Does Linux LVM support mirroring?

2. If so, is it more performant to place a "software RAID" md device in a volume group -or- make an LVM mirror out of two physical devices? (Ex. hda3-hdc3.)

Sat Apr 30 09:55:25 2005: 415   TonyLawrence

1. Does Linux LVM support mirroring?

2. If so, is it more performant to place a "software RAID" md device in a volume group -or- make an LVM mirror out of two physical devices? (Ex. hda3-hdc3.)

You can mirror by putting LVM on top of an MD as discussed here. But if your concern is performance, you should probably be looking at hardware raid.

Thu Jul 7 07:06:45 2005: 756   anonymous

I'm used to the HP-UX LVM, which uses the same commands and options as the Linux LVM. But the HP-UX LVM had an extension called "MirrorDisk/UX" which allowed mirroring at the logical volume level, seamlessly integrated into the LVM management commands. I miss the MirrorDisk features on Linux machines, because I'm not able to add and remove mirrors in an existing LVM environment online. The ability to do this with Linux too is an essential feature for me. Which tools are able to do this? Do I have to buy Veritas Foundation Suite for Linux and work with Veritas instead of the Linux LVM?
Thanks, Petr

Thu Jul 7 09:52:33 2005: 757   TonyLawrence

I'd suggest checking with the LVM mailing lists and developers sites. That may be a feature under consideration.

Thu Jul 14 12:53:39 2005: 801   anonymous

The kernel LVM code supports mirroring. The userspace tools that I have don't support mirroring (I haven't checked the latest ones), but you could set it up manually using dmsetup if you like.

Sat May 23 04:54:46 2009: 6391   Diego

I haven't fully understood that about snapshots, I think.
If it works like this:

A program loads a file into memory.
It makes some changes.
It writes them out.
The old blocks are read and written to the snapshot.
Then the new blocks are written to the real volume.

Is there a way to do this the other way around?
I may need to make some changes which I will most likely want to undo.
I want to "freeze" the volume, and the changed blocks should be written to the snapshot, as if it were a binary diff patch.
If I am unhappy with the changes, I delete the snapshot; if I am happy, I write it down.

Ah, and in the first case (your case): suppose a massive disk crash on a non-RAID volume - how do I recover a snapshot? Does it require a previous complete backup to be restored on another disk, or not? In that case it would be better to mirror the data periodically.

Sat May 23 09:50:20 2009: 6392   TonyLawrence

I think you need to read the section about snapshots again. You might also read
(link) and (link)

Sat May 23 10:22:48 2009: 6393   TonyLawrence

I may need to do some changes which mostly sure i will want to undo.

You could use a snapshot for that - it's just a matter of running the app against the snapshot rather than its source.

Sat May 23 22:11:29 2009: 6397   Diego

You may also need to read my comments again, because you didn't answer. Of course, you are not forced to.

"However, if something is changed in your original source, the filesystem copies data to the /snap are before overwriting the new data."

First of all, a write operation will not overwrite the new data; it will overwrite the old data with the new data. But I get the point.
And what you say is that the old blocks are copied to the snapshot before writing to the real destination: "copy before write".
And a copy requires a read and a write operation, so it is:
read old from destination, write old to snap, write new to destination.
Which is not very good for my disk's life or for performance.

What I need is more like how unionfs works: an overlay. But I wanted to know if LVM provides that too.
You have a destination folder and make an overlay over it.
When you write to the destination, the difference between new and old is stored in the overlay and the destination is not touched. When you read from it, it first reads from the overlay and then from the destination if necessary.
If you delete the overlay, you delete all the changes you have made since you created it, and the system "goes back in time", which is what I and many people call a freeze. And it is much better for the disk's life, and requires no additional write operation.
Writes are always more expensive than reads.

So. Do you know if LVM support that?

Sun May 24 10:08:36 2009: 6398   TonyLawrence

Which is not very good for my disk life and for performance.

I think you are still misunderstanding the purpose of this, which is why I asked you to re-read it and the other links. The *purpose* is to allow you to back up data somewhere else in relative leisure while allowing the system to run. This is not something that is constantly in place - you stop work, initiate the snap, let work begin and do the backup. When it's done, you destroy the snap.

I did answer your question: yes, you can do what you want with a snapshot if you test from the snap data rather than the "frozen" data. Example: you are concerned that a patch will break the app. You set up the snap, and then start up against the snap data. If it crashes, you just remove it and go back to your safe data.

Mon Mar 29 23:37:57 2010: 8301   Rejean


This is very interesting. I like reading pertinent information on topics similar to those I have experienced. I am glad to see that LVM has been out there a long time. If I may also suggest some reading that I wrote about a similar issue, with LVM v2...

It's a howto on creating a RAID 5 under Linux RHEL 5.4 using md, LVM and the ext4 filesystem. (look at
(link) )

and I also wrote the experience on Testing for a Raid5 failure (with LVM and MD) on Linux RHEL (


Mon Mar 29 23:40:42 2010: 8302   TonyLawrence


Thanks! I barely remember writing this :-)

Tue May 25 03:55:15 2010: 8633   anonymous


Hello Tony,
I want to have a high availability setup. I have shared storage, /dev/sdb, and I want to create a disk group, volume group and logical volume out of this shared storage. I need your help in doing this. Please advise.


Tue May 25 11:03:55 2010: 8635   TonyLawrence


That kind of help doesn't usually come free. You can visit my Services and Rates page or find other consultants at

Tue Nov 23 19:09:30 2010: 9131   Karthik


> But if a file IS changed, before the change is written, the data blocks that don't yet have the changes are written to /snap

This is not quite right. The blocks that are referenced from /snap are left alone (but will be copied to somewhere else & then changed)... otherwise it will involve changing the inodes in /snap & will interfere with backups on /snap.

Fri Jan 7 09:25:56 2011: 9214   Noor


How can I perform software mirroring in Solaris 10? Do I need two Solaris machines, or can we make another Solaris machine from one by software mirroring?

Fri Jan 7 10:37:45 2011: 9215   TonyLawrence


You are confused about something. Mirroring is not clustering or HA - it's just disks.
