Linux RAID 5 – Redundant Array of Inexpensive Disks

Discussion in 'all things UNIX' started by lotuseclat79, Jan 31, 2014.

Thread Status:
Not open for further replies.
  1. lotuseclat79

    lotuseclat79 Registered Member

    Joined:
    Jun 16, 2005
    Posts:
    5,096
  2. mirimir

    mirimir Registered Member

    Joined:
    Oct 1, 2011
    Posts:
    6,029
    Friends don't let friends use RAID5 ;)

    Here's what I said in https://www.ivpn.net/privacy-guides/advanced-privacy-and-anonymity-part-4/:

    Cites:

    http://www.reddit.com/r/sysadmin/comments/ydi6i/dell_raid_5_is_no_longer_recommended_for_any/

    http://www.standalone-sysadmin.com/blog/2012/08/i-come-not-to-praise-raid-5/
     
  3. NGRhodes

    NGRhodes Registered Member

    Joined:
    Jun 23, 2003
    Posts:
    2,331
    Location:
    West Yorkshire, UK
    We don't have a problem with RAID5. We have approx 50 servers, each with a minimum of 4 and a maximum of 16 physical drives, in arrays of approx 1 to 15 TB, and these machines are getting towards 5 years old. We see about 2 drive failures a month. According to various people's claims about RAID, some machines should in theory have suffered rebuild failure, but our real-world experience is that the oft-touted "RAID5 is now useless" appears to be scaremongering. The only single RAID failure we have had was not at rebuild time!
    Maybe drives often perform better than their quoted error rates suggest (e.g. hardware error correction mitigates things), and maybe the calculations were done for consumer-grade drives, where we use enterprise-grade drives.
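    The "should have suffered rebuild failure in theory" claims come from the drives' quoted unrecoverable read error (URE) spec. Here's a minimal sketch of that back-of-envelope calculation; the drive sizes and URE rates below are illustrative assumptions (the commonly quoted 10^14 bits for consumer drives vs 10^15 for enterprise), not measurements from our fleet:

    ```python
    # Sketch: probability of hitting at least one unrecoverable read error
    # (URE) while rebuilding a degraded RAID5 array, where every surviving
    # drive must be read in full.  Figures are illustrative assumptions.
    import math

    def rebuild_failure_probability(surviving_drives, drive_bytes, ure_rate_bits):
        """P(at least one URE) given 1 error per ure_rate_bits bits read."""
        bits_read = surviving_drives * drive_bytes * 8
        # Poisson approximation: each bit fails with prob 1/ure_rate_bits.
        return 1 - math.exp(-bits_read / ure_rate_bits)

    # Example: 4-drive RAID5 of 2 TB drives; rebuild reads the 3 survivors.
    consumer = rebuild_failure_probability(3, 2e12, 1e14)    # consumer spec
    enterprise = rebuild_failure_probability(3, 2e12, 1e15)  # enterprise spec
    print(f"consumer:   {consumer:.0%}")
    print(f"enterprise: {enterprise:.0%}")
    ```

    On these assumptions the spec-sheet numbers predict roughly a one-in-three rebuild failure for consumer drives but only a few percent for enterprise drives, which is consistent with the point above: both the drive grade and real-world error rates being better than spec can explain why the doom predictions don't match experience.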
    Then there is the fact that we use hot spares, which rebuild a lot faster, with fewer reads, if the controller swaps the drive in pre-failure. Our controllers often sense this, and adding a new drive to RAID5 in that case is as fast as in RAID10, because parity does not need to be rechecked/recalculated (which would involve reading all drives across each stripe).

    Write caching makes a more significant difference to RAID5 than to RAID10; it brought latency down to a level where it was no longer a significant bottleneck for our workloads.

    Yes, RAID5 is more error-prone on rebuild than RAID10, but RAID5, and RAID6 even more so, are more resistant to errors than RAID10 during normal operation: parity offers error protection that RAID10 cannot.
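    For anyone unfamiliar with how single parity buys that protection: the parity block is the XOR of the data blocks in a stripe, so any one missing block can be reconstructed by XORing everything that survives. A self-contained sketch (the block contents are made-up test data, not a real stripe layout):

    ```python
    # Sketch of RAID5-style single-parity recovery: parity = XOR of the
    # data blocks, so any one lost block is the XOR of all the others.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across 3 data drives
    parity = xor_blocks(data)            # stored on the parity drive

    # Drive holding data[1] fails: rebuild from the survivors plus parity.
    recovered = xor_blocks([data[0], data[2], parity])
    assert recovered == data[1]
    ```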

    From what I understand, RAID6 will always be more resilient to errors/failures than the same number of drives in RAID5, hot spare or not, due to the double parity, but at a greater performance penalty.
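    As a rough comparison table for the levels mentioned here, for the same drive count n (a simplified sketch: RAID10 is modelled as n/2 mirrored pairs, and its "up to" tolerance depends on which drives fail):

    ```python
    # Simplified fault-tolerance / usable-capacity comparison for n drives.
    def raid_summary(n):
        return {
            "raid5":  {"tolerates": 1, "usable_drives": n - 1},
            "raid6":  {"tolerates": 2, "usable_drives": n - 2},
            # RAID10 survives 1 failure guaranteed, up to n/2 if each
            # failure lands in a different mirror pair.
            "raid10": {"tolerates": "1 to n/2", "usable_drives": n // 2},
        }

    for level, info in raid_summary(8).items():
        print(level, info)
    ```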

    You need to decide on an individual basis what your priorities are: rebuild time, online data integrity, performance, storage size, whether you are running enterprise-grade drives, and what performance and reliability features your controller offers.
     
    Last edited: Feb 2, 2014
  4. mirimir

    mirimir Registered Member

    Joined:
    Oct 1, 2011
    Posts:
    6,029
    @NGRhodes

    Thanks for your professional perspective.

    Your RAID5 setup using controllers that replace pre-failure with hot spares is very cool. You get the speed and parity checking of RAID5, with reliability more like RAID10.

    What RAID controllers are you using?

    I wonder if something similar is possible in Linux software RAID.
     
  5. NGRhodes

    NGRhodes Registered Member

    Joined:
    Jun 23, 2003
    Posts:
    2,331
    Location:
    West Yorkshire, UK
    We use mostly HP P400 controllers (on HP servers), they call it "Predictive failure".
     
  6. Durad

    Durad Registered Member

    Joined:
    Aug 13, 2005
    Posts:
    591
    Location:
    Canada
    What filesystem do you guys use?

    Has anybody tested RAID-Z and the ZFS filesystem on Ubuntu?
     
  7. NGRhodes

    NGRhodes Registered Member

    Joined:
    Jun 23, 2003
    Posts:
    2,331
    Location:
    West Yorkshire, UK
    OS default, for best support.
     
  8. mirimir

    mirimir Registered Member

    Joined:
    Oct 1, 2011
    Posts:
    6,029
    I use ext4.

    I've been meaning to look into that.

    I found this: <https://help.ubuntu.com/community/encryptedZfs>.
     