questions re new VM host

Discussion in 'all things UNIX' started by mirimir, Oct 26, 2014.

  1. mirimir

    mirimir Registered Member

    Joined:
    Oct 1, 2011
    Posts:
    9,252
    I finally got around to setting up a new VM host (Debian 7.7 x64) with SSD-based Linux RAID. I'm using four inexpensive 120 GB SSDs. The Disk Utility read benchmark for the individual SSDs is 0.2-0.6 GB/s (average 0.5 GB/s). There are two RAID10 arrays. One (md0, at the beginning of the SSDs) holds /boot. Its read benchmark is 0.4-12.8 GB/s (average 1.1 GB/s). The other (md1) holds a dm-crypt/LUKS volume, which is used for LVM (swap, the root filesystem and /home). Its read benchmark is 0.3-1.0 GB/s (average 0.8 GB/s).
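
    For reference, the setup was along these lines. The device names and LV sizes are just illustrative, not necessarily what I used, and I've left out the mdadm.conf/crypttab/fstab housekeeping:

        # md0: small RAID10 across the first partition of each SSD, for /boot
        mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

        # md1: RAID10 across the larger second partitions
        mdadm --create /dev/md1 --level=10 --raid-devices=4 \
            /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

        # dm-crypt/LUKS on md1, then LVM inside the encrypted volume
        cryptsetup luksFormat /dev/md1
        cryptsetup luksOpen /dev/md1 md1_crypt
        pvcreate /dev/mapper/md1_crypt
        vgcreate vg0 /dev/mapper/md1_crypt
        lvcreate -L 8G -n swap vg0       # sizes here are placeholders
        lvcreate -L 30G -n root vg0
        lvcreate -l 100%FREE -n home vg0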

    That all seems vaguely reasonable, although the shapes of the read rate vs % capacity curves are rather strange. The machine isn't at all remarkable (3 GHz i5 x4 with 8 GB RAM). But the improved disk performance is obvious.

    Still, I'm left with some questions. My main goal here is getting speed and capacity from small, inexpensive SSDs. I'll be picking good models from good manufacturers, but focusing on small ones that are being phased out and selling at a discount.

    I've read that SSDs generally have much lower URE rates than HDDs. And so I'm tempted to experiment with large RAID6 arrays. I'd love to try RAID50 or RAID60, but I'm not sure that's doable (or wise) with Linux software RAID.
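
    From what I've read, mdadm has no single RAID50/RAID60 level, but nesting arrays is supposed to work, roughly like this for a RAID50 (all device names made up):

        # two 4-disk RAID5 arrays, then a RAID0 stripe across them
        mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[a-d]1
        mdadm --create /dev/md3 --level=5 --raid-devices=4 /dev/sd[e-h]1
        mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/md2 /dev/md3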

    What am I missing, that might come around to bite me?
     
  2. deBoetie

    deBoetie Registered Member

    Joined:
    Aug 7, 2013
    Posts:
    1,832
    Location:
    UK
    I'll try to dredge up a more detailed analysis I read about failure modes of SSDs vs HDDs (not as simple as it sounds). I've had some fairly catastrophic no-warning failures with multi-TB HDDs; one advantage of at least the older HDDs is that you usually get some warning. My experience with SSDs so far is that dodgy firmware causes most of the errors!

    I think you'd have to be careful about the throughput available from the controllers with some of the more extreme configurations; you might then have to go for dedicated hardware, which tends to be awkward and expensive, although it's likely to be well supported in the server space.

    There's also some partition planning you might need to do, to ensure there's enough room for wear levelling; I haven't researched how to arrange that with Linux. There are also some alignment details that have to be right on install.
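
    From what I recall, recent parted can at least check the alignment for you. Something along these lines, where the trailing numbers are just example partition numbers:

        # reports whether each partition starts on an optimal boundary
        parted /dev/sda align-check optimal 1
        parted /dev/sda align-check optimal 2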

    My immediate thought, though, is to avoid SandForce-style compression, since much of what you'll be writing is dm-crypt output, which won't compress.

    Finally, my instinct here would be to bite the bullet and go for higher-capacity SSDs, even though their price per GB is higher. It's at least not as eye-watering as it used to be, and it's really simple.
     
  3. mirimir

    mirimir Registered Member

    Joined:
    Oct 1, 2011
    Posts:
    9,252
    @deBoetie

    Thanks :)

    The possibility of catastrophic SSD failure (perhaps firmware related) pushes me toward RAID10, because RAID10 arrays rebuild fast. I already managed to firmly wedge the new box (thanks to the LibreOffice graphics bugs around hardware acceleration and anti-aliasing) and had to hard reboot. The 4x 120 GB SSD RAID10 array rebuilt in about 15 minutes, and was usable while rebuilding. In my experience, 4x 1 TB HDD RAID10 arrays rebuild in about 6-10 hours after replacing a failed HDD. For SSDs, I suspect that CPU is the limiting resource.
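
    FWIW, I just watched /proc/mdstat during the rebuild. The md resync also throttles itself; the limits (in KB/s) can be raised if you want to let SSDs run flat out:

        # show rebuild/resync progress
        cat /proc/mdstat

        # default floor and ceiling for resync speed, per device
        sysctl dev.raid.speed_limit_min
        sysctl dev.raid.speed_limit_max

        # e.g. raise the ceiling (value is KB/s)
        sysctl -w dev.raid.speed_limit_max=2000000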

    With more SSDs, what I meant to say was RAID15 or RAID16, not RAID50 or RAID60 (or even RAID51 or RAID61). RAID5 and RAID6 arrays take much longer to rebuild, and striping two of them (RAID50/RAID60) or even mirroring them (RAID51/RAID61) seems iffy. I don't see much about RAID15/RAID16, but it seems safer, more like RAID10. It's a RAID5/RAID6 array of mirrors, just as RAID10 is a stripe of mirrors. If one SSD fails, there's no need to recalculate parity, just a plain copy from its mirror.
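
    If I do try it, I'd expect the mdadm side to look roughly like this: eight hypothetical SSDs, paired into RAID1 mirrors, with RAID6 across the pairs (RAID6 needs at least four members):

        mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
        mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
        mdadm --create /dev/md13 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1

        # RAID6 across the four mirror pairs = "RAID16"
        mdadm --create /dev/md20 --level=6 --raid-devices=4 \
            /dev/md10 /dev/md11 /dev/md12 /dev/md13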

    I'm assuming that the SSDs handle their own wear leveling, but I'll look into that. I'll also explore and test SSDs with and without SandForce controllers.
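
    One related thing I need to check is TRIM. dm-crypt blocks discards by default, and whether they pass all the way down through LUKS and md on this kernel is something I'd have to verify. If they do, it would be roughly:

        # open the LUKS volume with discards allowed (a privacy trade-off)
        cryptsetup luksOpen --allow-discards /dev/md1 md1_crypt

        # then trim mounted filesystems periodically
        fstrim -v /home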

    You may be right about just using large SSDs. But I like to play. And given the dynamic SSD market, there seems to be an opportunity for RAID (with "I" actually being "Inexpensive").
     
  4. deBoetie

    deBoetie Registered Member

    Joined:
    Aug 7, 2013
    Posts:
    1,832
    Location:
    UK
    I see you do like to play! I guess if you can get hold of a job lot of drives being price-dumped, then some of the exotic configurations make a lot of sense, because it doesn't matter if they go wrong. But I think you have to factor in the cost of a SATA port too.

    One thing that occurs to me for the future, though, is what to do after a few years: ideally any replacements would be the same type/capacity, which won't be available by then (unless you bought spares from the outset, but those might as well be part of the array; I guess that's what you're thinking in going RAID16, for instance). How many SATA ports do you have?!

    The SSDs do their own wear levelling, but they need spare space to do it; that's the purpose of keeping some unallocated space around (especially when the whole disk is going to be encrypted, so that all the blocks get written to).
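
    As a rough illustration of what I mean (the numbers are made up, just to show the idea), you could partition each SSD short and leave the tail unallocated as extra over-provisioning:

        parted -s /dev/sda mklabel gpt
        parted -s /dev/sda mkpart primary 1MiB 513MiB     # small /boot slice
        parted -s /dev/sda mkpart primary 513MiB 100GiB   # main slice
        # the rest of the 120 GB drive (roughly 10%) stays unallocated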

    I'd recommend doing some careful checks for SMART and other disk errors (event log stuff), both on install and periodically afterwards; that tends to highlight firmware problems.
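
    With smartmontools, the manual checks would be something like this; smartd can automate the periodic part and mail you warnings:

        # overall health, attributes and the drive's error log
        smartctl -a /dev/sda

        # run a short self-test, then read back the results
        smartctl -t short /dev/sda
        smartctl -l selftest /dev/sda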

    I suppose my attitude to redundancy is slightly different: I focus on the data, I'm not too fussed about the OS, and I also want the data to be redundant offline. It's all very well having mirroring/RAID, but if the user securely deletes a file, it's still deleted! So I have the OS plus frequently used stuff on a single SSD, plus a large mirrored pair of HDDs for everything else. I've been using the Crucial M4/M5s, and these have been fine after the initial firmware problems a few years back. The M5s support AES hardware encryption, but I don't know whether Linux/dm-crypt supports that.
     
  5. mirimir

    mirimir Registered Member

    Joined:
    Oct 1, 2011
    Posts:
    9,252
    Thanks. I do like to play :D Also, I have an old R710 around that I'm not using, and I've been wondering what to do with it. The machine itself is rather energy-efficient, but the six 15K RPM SAS HDDs were not, and they're also quite noisy :( I see that the internal RAID controller has two 4-port (3 Gbps per port) connectors, and can handle up to 10 disks (SAS or SATA) plus two hot spares. R710s will accept non-Dell disks, so maybe I can drop in 120 GB (or maybe 240 GB) SSDs. However, the controller only does RAID0 and RAID1. I could do RAID0 in the controller, with a hot spare for each array, and RAID1 in Linux. That would make a trippy MySQL server ;)
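
    If the controller presents its two RAID0 virtual disks to Linux as, say, /dev/sda and /dev/sdb (just guessing at the names), the Linux half would simply be:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb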

    For RAID16 in my play box, I'd need to add a SATA port card. For RAID16 in the R710, I'd need a RAID card that at least does RAID6, with RAID1 in Linux. Maybe I can find one used.

    As far as I know, Linux does not yet support secure hardware encryption (BitLocker-style) for the Crucial M5s. Sad :(
     