Linux Software RAID

Common Commands

# Viewing Status
cat /proc/mdstat                  # overall status of all arrays; also try 'watch cat /proc/mdstat'
mdadm --detail /dev/mdX           # detailed status of a single array (replace mdX with the array name)

LVM RAID Support

RHEL 6.3 adds support for creating real RAID LVM volumes (RAID 1, 4, 5, 6). It sounds like LVM essentially creates multiple logical volumes and then uses MD to build the RAID on top of them, while hiding that complexity.

See the "Logical Volume Manager Administration" document for RHEL 6.3.

I'm not sure when this was added to other Linux distributions.
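A minimal sketch of what this looks like from the LVM side (the volume group name, LV names, and sizes below are made up; see the RHEL documentation for the full syntax):

# RAID1 LV with one mirror (two copies of the data), 10 GiB, in volume group vg0
lvcreate --type raid1 -m 1 -L 10G -n lv_mirror vg0

# RAID5 LV striped across 3 data devices plus parity, 30 GiB
lvcreate --type raid5 -i 3 -L 30G -n lv_r5 vg0

# Watch the initial sync progress of the RAID LVs
lvs -a -o name,copy_percent,devices vg0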

Linux RAID10

mdadm has the option to create RAID10 with a so-called "far" layout. Compared to a single drive or the standard RAID1+0 layout, this appears to roughly double read speed, while write speed stays about the same (it may drop slightly). Unlike standard RAID1+0, the far layout also works across just 2 drives, where it gives a similar read-speed increase over RAID1.

When creating the array, use the "--layout=f2" option (i.e. the far layout with 2 copies of the data). "n2" is the default (near) layout. RAID10 with layout=n2 on 2 disks is effectively the same as RAID1 and offers no advantage, while the f2 layout gives improved read performance.
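For instance, a 2-disk far-layout array might be created like this (the device names are placeholders for whatever partitions you are actually using):

mdadm --create /dev/md0 --level=raid10 --layout=f2 --raid-devices=2 /dev/sdX1 /dev/sdY1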

Example throughput figures (approximate averages; actual performance may vary):

RAID Level    # of drives    Read Speed    Write Speed
RAID1         2              50 MB/s       50 MB/s
RAID10 f2     2              100 MB/s      50 MB/s
RAID0         2              100 MB/s      100 MB/s
RAID10 n2     4              100 MB/s      100 MB/s
RAID10 f2     4              200 MB/s      100 MB/s

These are just general conclusions from looking at a couple of benchmarks, and as such should be taken with a grain of salt.
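A rough way to sanity-check read throughput on your own array (read-only, so safe to run on a live array; /dev/md0 is a placeholder for your md device):

hdparm -t /dev/md0                                      # buffered sequential read timing
dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct   # sequential read bypassing the page cache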

Recovery

RAID10,f2 + LVM, 4 drives

For a test, I installed Ubuntu 10.10 on 4 drives using RAID10 in the f2 layout, with LVM on top. I used the alternate install CD. With grub2, I was able to remove 2 non-adjacent disks without losing any data, so it seems that the f2 layout doesn't cause any problems with that. After removing some of the disks, I was not able to boot in degraded mode, but I was able to restore using a recovery CD.

Recovery (a consolidated command sketch follows this list):

  • Boot to CD
    • If mdadm and lvm2 are not installed (e.g. with Ubuntu CD), install them, then scan for RAID arrays (mdadm --assemble --scan)
  • Check RAID status (cat /proc/mdstat)
  • Create new unformatted partitions on the new drive(s) to add to the array
  • Add the new partitions (e.g. mdadm --add /dev/md1 /dev/sda1 where /dev/md1 is the RAID device and /dev/sda1 is the newly-created partition)
  • If the first drive was damaged, reinstall grub:
    • I tried "Method 2" at Grub2 on the Ubuntu wiki:
      • Boot from Live CD
      • Install mdadm and lvm2, scan for arrays (probably not necessary with e.g. SystemRescueCD, but I'm not sure whether it has grub2)
      • Mount root drive
      • sudo grub-setup -d /media/XXXX/boot/grub /dev/sda (note: this is for Grub 2)
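Put together, a recovery session looks roughly like this (the md number, device names, and volume group/LV names are from my setup or placeholders; adjust them to match yours):

# From the live/recovery CD
sudo apt-get install mdadm lvm2    # only needed if the CD doesn't already ship them
sudo mdadm --assemble --scan       # find and assemble the existing arrays
cat /proc/mdstat                   # confirm which array is degraded

# Partition the replacement drive, then add the new partition back in
sudo mdadm --add /dev/md1 /dev/sda1

# If the drive holding the bootloader was replaced, reinstall grub2
sudo vgchange -ay                  # activate the LVM volume group(s)
sudo mount /dev/vg0/root /mnt      # root LV name is an assumption
sudo grub-setup -d /mnt/boot/grub /dev/sda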

Local Setup

# Create a 4-drive RAID10 array with the far (f2) layout
mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 --layout=f2 /dev/sda2 /dev/sdc1 /dev/sdb2 /dev/sdd2
# Use the array as an LVM physical volume and put a volume group on it
pvcreate /dev/md0
vgcreate vg0 /dev/md0
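From here, the logical volumes themselves go on vg0; the LV names and sizes below are only illustrative:

lvcreate -L 20G -n root vg0
lvcreate -L 4G -n swap vg0
mkfs.ext4 /dev/vg0/root
mkswap /dev/vg0/swap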