Understanding RAID

Just a few years ago, RAID was an expensive option. That's all changed: today, anyone with high disk performance needs or concerns about data reliability should consider some sort of RAID configuration.

Hardware RAID is transparent to the operating system. There will be some utility either in the BIOS or installed as an application that lets you see through that transparency, but the operating system itself only sees the logical drive that the RAID presents.

This might mean, for example, that three physical disks look like one to your operating system. Anything that references disks (/etc/conf/cf.d/mscsi on SCO Unix, /etc/fstab on any Unix/Linux) will show only ONE disk drive.

Often RAID controllers present each array as the ID of its first member, or they simply number arrays in sequence: the first array seen becomes ID 0, the second ID 1, and so on, though yours could be different. So if you looked in /dev, you'd see one entry, as though there were one physical drive. If your OS creates entries that represent SCSI IDs and your first physical disk was ID 0, that's what you'd apparently see in /dev: a drive that looks like a physical ID 0 device as far as the OS knows, although of course it really isn't.

Software RAID is slightly different, but in the end it comes to the same place: your OS sees what the RAID software wants it to see. With software RAID, you can get at the physical disks without any special utility, and the device node used for the RAID will be different from any physical pointer. See Linux Logical Volume Manager (LVM) on Software RAID for an example.
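
As a quick illustration of that visibility difference, here is a minimal Python sketch, assuming a Linux system and a hypothetical software RAID array assembled as /dev/md0, that reads /proc/partitions. With software RAID, both the md device and its physical member disks show up; a hardware RAID controller would expose only the single logical drive:

    # Minimal sketch: list block devices from /proc/partitions (Linux only).
    # The md0 device and its member disks (sda, sdb, ...) are hypothetical;
    # with hardware RAID, only the logical drive would appear in this list.
    with open("/proc/partitions") as f:
        lines = f.read().splitlines()

    # Skip the header line and the blank line that follows it.
    for line in lines[2:]:
        major, minor, blocks, name = line.split()
        kind = "software RAID device" if name.startswith("md") else "disk or partition"
        print(f"{name:10s} {int(blocks):>12} blocks  ({kind})")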

RAID means "Redundant Array of Inexpensive Disks". There are basically 5 defined levels of RAID:

  • RAID 0

    Striped disks. Highest performance, but no redundancy, and therefore really isn't RAID at all. Very seldom used because of the reduced reliability: if one drive fails, the entire array fails.

    However, it is the concept of striping that is important to understand: rather than writing data to sequential blocks on one drive, each subsequent block is written to the next physical drive (see the sketch after this list). This gives great read performance because multiple drives work on getting their portion of the data, delivering it back to the computer much faster than one drive alone could.

    To the OS and to the user, a striped RAID drive looks like ONE larger drive. Indeed, in hardware RAID implementations, you wouldn't know at all that this was not just one big disk without special software provided by the RAID manufacturer. Note that there is nothing special about the disk drives themselves: it is the disk controller that provides the abstraction that makes the striped drives look like one drive.

  • RAID 1

    Mirrored disks. Each drive has a twin: all data is written to both drives and can be read from either. This increases read performance and, if accomplished by hardware rather than software (see below), has no adverse effect on write performance. There's no striping here; the read gain comes only from the fact that the controller can read from whichever drive is less busy or whose heads are closer to the data. In the event of one drive failing, the twin is immediately available with little performance degradation. When the failed drive is replaced, the new drive is rebuilt by reading data from the good drive.

    On very high end configurations, the twin can also provide a "snap-shot" ability, where writes to one of the twins are temporarily turned off so that a backup can be done as of that moment in time while continuing to allow writes to the other drive. This feature is often found in software configurations (see Veritas Volume Manager) or high end hardware/software combinations, but not in the inexpensive segment of the market.

    In both hardware and software implementations, it is possible to have more than one mirror for data that is really critical. While expensive, this provides even greater security against catastrophic failure and (with properly designed software/firmware) continues to increase read performance with each mirror added.

    This mirroring can be combined with RAID 0, giving striped disks that are mirrored. Although this offers the high performance of RAID 0 and redundancy, it is much more expensive and typically not seen except in environments where cost just doesn't matter.

  • RAID 2

    Seldom implemented. This is striping, but with some drives storing ECC data. As all modern drives implement ECC information themselves, there's no particular advantage to this configuration.

  • RAID 3

    Stripes data across multiple drives, but dedicates one drive to parity information. Data is read from all drives at once. This uses the embedded ECC data to detect errors and recovers data using the parity information. RAID 3 can give high performance in dedicated situations where large amounts of data need to be read quickly. It requires synchronized-spindle drives and really doesn't perform well in multi-user situations, so it is seldom seen outside of very specialized circumstances.

    The concept of parity utilizes the mathematical properties of XOR (Exclusive OR). You can see how this works by using the Javascript Bit Twiddler. For example, if you XOR the values 12 and 15, you'll get 3. If you XOR 3 (the result of your first XOR) with either 12 or 15, you'll get the other value. That "3" would be the parity information used to reconstruct the "12" or the "15" if each of those represented data stored on a different disk (the sketch after this list works through exactly this arithmetic). The XOR is a very quick operation, handled directly by the CPU, but it does involve some overhead: writing requires both an extra calculation (the XOR) and another disk write (though the parity write happens concurrently with the other writes, so that really doesn't hurt anything).

    If a drive fails, the controller provides the "missing" data by calculating it from the data it does have: the other data drive(s) and the parity drive. This is, of course, slower than reading it from the disk would be.

    This also suffers badly on write performance because the parity drive will need to be written constantly, making it impossible to overlap multiple writes.

  • RAID 4

    Very similar to RAID 3, but allows individual drives to be separately read. It has no particular advantage over RAID 5 and the disadvantage of a dedicated parity drive.

  • RAID 5

    This is a popular configuration offering excellent read performance and high reliability. The concept here is to have parity data, but to spread it over all the drives. This lets writes overlap because typical small writes access one data drive and one parity drive. If another write is accessing a different set of drives, the two writes can be done in parallel, which is not possible with a dedicated parity drive as described above. This requires a minimum of 3 disk drives, and more is better. RAID 5 is less expensive than mirroring (for equivalent storage), can provide very fast reads, particularly with more than 3 drives, and can survive a single drive failure. The disadvantage is that write performance is not as good (but most applications do much more reading than writing) and that in the event of failure, both read and write performance suffer due to the overhead involved in reconstructing data from parity information.

    Recently, unofficial designations such as RAID 6 have been offered at the very high end of the market. These are really just RAID 5 implementations with multiple parity writes, so that the array can withstand the simultaneous failure of more than one drive. Obviously only very high end systems need such redundancy.
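
To pull the striping and parity ideas together (this is the sketch referred to above), here is a short, purely illustrative Python example. Three simulated "drives" hold one-byte blocks; data is striped RAID 5-style with the parity block rotating among the drives, and a "failed" drive is rebuilt by XORing the surviving blocks, exactly as in the 12/15/3 example. The drive count and data values are invented for the demonstration:

    # Illustrative sketch of RAID 5-style striping with rotating XOR parity.
    # For simplicity, assumes the data fills a whole number of stripes.

    N_DRIVES = 3  # RAID 5 needs at least three drives

    def write_array(data_blocks):
        """Stripe data across the drives, rotating the parity block per stripe."""
        drives = [[] for _ in range(N_DRIVES)]
        for stripe, i in enumerate(range(0, len(data_blocks), N_DRIVES - 1)):
            chunk = data_blocks[i:i + N_DRIVES - 1]
            parity = 0
            for b in chunk:
                parity ^= b                    # XOR of this stripe's data blocks
            parity_drive = stripe % N_DRIVES   # the parity location rotates
            it = iter(chunk)
            for d in range(N_DRIVES):
                drives[d].append(parity if d == parity_drive else next(it))
        return drives

    def reconstruct(drives, failed):
        """Rebuild a failed drive by XORing the same block on all survivors."""
        rebuilt = []
        for blk in range(len(drives[0])):
            value = 0
            for d in range(N_DRIVES):
                if d != failed:
                    value ^= drives[d][blk]
            rebuilt.append(value)              # XOR of survivors == missing block
        return rebuilt

    data = [12, 15, 7, 9]          # 12 XOR 15 == 3, the parity value from the text
    drives = write_array(data)
    rebuilt = reconstruct(drives, failed=1)
    assert rebuilt == drives[1]    # the lost drive's blocks are fully recovered
    print("drive 1 rebuilt from parity:", rebuilt)

The reconstruction loop is the same XOR a real controller performs on every read once a drive has failed, which is why a degraded array reads more slowly.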


As alluded to above, RAID can be implemented in hardware or software. There can also be configurations that are really both: Sun's high end RAID products are tightly coupled software and hardware.

It used to be that RAID was always SCSI based. That's no longer true; inexpensive IDE RAID configurations are now available. They are not going to have the performance characteristics of a SCSI design, but they cost less, and certainly would give good value for the money.

© March 1999 Anthony Lawrence. All rights reserved.




1 comment:

Fri Sep 18 17:03:27 2009: 6933   TonyLawrence



http://www.enterprisestorageforum.com/technology/features/article.php/3839636 "RAID's Days May Be Numbered"

Quote:
The hard error rate for disk drives has not, for the most part, improved with the density. For example, the hard error rate for 9GB drives was 10E14 bits, and that error rate has increased an order of magnitude to 10E15 for the current generation of enterprise SATA and 10E16 for the current generation of FC/SAS drives. The problem is that the drive densities have increased at a faster rate.


What this means for you is that even for enterprise FC/SAS drives, the density is increasing faster than the hard error rate. This is especially true for enterprise SATA, where the density increased by a factor of about 375 over the last 15 years while the hard error rate improved only 10 times. This affects the RAID group, making it less reliable given the higher probability of hitting the hard error rate during a rebuild.
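
To see why that matters, here is a rough back-of-the-envelope Python sketch, with invented drive capacities, of the chance of hitting an unrecoverable read error (URE) somewhere while reading an entire array's worth of data during a rebuild:

    import math

    # Rough sketch: probability of at least one unrecoverable read error
    # while reading `bytes_read` bytes from drives rated at one error
    # per `bits_per_error` bits. The capacities below are invented examples.
    def p_ure_during_rebuild(bytes_read, bits_per_error):
        bits = bytes_read * 8
        # P(no error) = (1 - 1/rate)^bits; log1p/expm1 keep this numerically stable
        return -math.expm1(bits * math.log1p(-1.0 / bits_per_error))

    # Rebuilding from three surviving 2 TB drives rated at 1 error per 1e14 bits:
    print(f"{p_ure_during_rebuild(3 * 2e12, 1e14):.0%} chance of a URE")

With bigger drives and an unchanged error rate, that probability climbs quickly, which is exactly the comment's point.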





