Advice for Linux Software RAID 1

Discussion in 'Acronis True Image Product Line' started by Dustyb, Sep 28, 2004.

Thread Status:
Not open for further replies.
  1. Dustyb

    Dustyb Registered Member

    Joined:
    Sep 28, 2004
    Posts:
    7
    What is the best strategy for imaging (and restoring) individual ext3 partitions of type fd (Linux software RAID) in a RAID 1 array? Or a whole drive made up of fd RAID 1 partitions? Is it necessary to break the mirror(s) first, especially for the restore phase, and then add the RAID disk back into the array after the restore? I was thinking of a raidsetfaulty/raidhotremove on the second member of each array, or even just disconnecting the second drive. I'm not sure what would happen if one partition in a fully functional RAID 1 array were restored: which copy would sync to which when the array rebuilds on boot? Or is Linux software RAID not supported at all? Any workarounds?
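    To be concrete, the sequence I had in mind before imaging would be something like the following (raidtools syntax; the md and partition names are only placeholders, and I haven't actually tried this around an ATI restore yet):

        # mark the second mirror member as failed, then pull it out of the array
        raidsetfaulty /dev/md0 /dev/hdb1
        raidhotremove /dev/md0 /dev/hdb1

        # the mdadm equivalent, if you use that instead of raidtools
        mdadm /dev/md0 --fail /dev/hdb1
        mdadm /dev/md0 --remove /dev/hdb1

        # confirm the array is now running degraded on the remaining member
        cat /proc/mdstat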
     
  2. Devinco

    Devinco Registered Member

    Joined:
    Jul 2, 2004
    Posts:
    2,524
    Welcome to Wilders, Dustyb.

    This thread discusses using hardware RAID 1, but the concept is similar. I haven't tried it yet, but it should work.
     
  3. Dustyb

    Dustyb Registered Member

    Joined:
    Sep 28, 2004
    Posts:
    7
    Thanks. Image-less backup using the RAID controller, I like it!

    I guess, sticking to the idea of using the Acronis product, I was hoping some Linux software RAID experts would weigh in with a sequence of steps. The basic question is how best to rebuild the software RAID 1 arrays after a restore. I'm still hung up on the idea that you'd need to break the software RAID 1 mirror before you restore, then reboot in degraded mode and add the disabled mirror disk back in. I'll have to look up how to fake out the kernel with a raidsetfaulty and have it persist across a restart (when the disk isn't really faulty). If you can't do that, I don't know exactly what happens when a Linux software RAID 1 starts up and the two mirrored partitions are out of sync (restored versus old): which would rebuild from which? I'm also not sure whether there would be a step to reset the superblock on the mirrored one.
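    If it does turn out that the superblock on the stale mirror half has to be dealt with before it will rejoin cleanly, I believe mdadm can inspect and wipe it; something like this (the device name is only an example, and I haven't verified this against a restored image):

        # dump the md superblock (UUID, event counter, update time, state)
        mdadm --examine /dev/hdb1

        # wipe the superblock so the partition is treated as a fresh member when re-added
        mdadm --zero-superblock /dev/hdb1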
     
  4. wdormann

    wdormann Registered Member

    Joined:
    Jun 27, 2004
    Posts:
    480
    My NAS machine uses software RAID1 and RAID5. I haven't tried backing it up with ATI, though.

    I'll play around with my virtual NAS machine (VMWare) to see what I can figure out, and I'll post my findings here. For what it's worth, both machines (virtual and real) are running Gentoo, but it really depends on whether ATI has md support built in.

    I'm relatively green with the Linux RAID stuff, but it's pretty slick. I've simulated a drive failure with VMWare and it behaved as expected, and it also rebuilt the arrays properly when I introduced a new drive. I'm not quite as daring with my "production" machine, though...
     
  5. Dustyb

    Dustyb Registered Member

    Joined:
    Sep 28, 2004
    Posts:
    7
    Great, thanks! I look forward to your findings. It isn't immediately clear to me either whether ATI supports md devices. Certainly, it "sees" the software RAID partitions and identifies them properly (which is more than I can say for Ghost), and it seems to have no complaints imaging them. I'm sure the Acronis Linux Image Server product has documentation on this; I'll take a look. If Linux software RAID (RAID 1) is supported, there has to be a recommended procedure for how to image and restore... whether you have to degrade the array to a single disk, how you rebuild the array after the restore, etc.
     
  6. Dustyb

    Dustyb Registered Member

    Joined:
    Sep 28, 2004
    Posts:
    7
    Can't get it to work; it seems the restored partition contains a filesystem larger than the superblock says it should be. It still isn't clear whether Linux software RAID is supported, and I wish I could get an answer. Here is my email to Support:

     
  7. wdormann

    wdormann Registered Member

    Joined:
    Jun 27, 2004
    Posts:
    480
    I had started to look into this issue, but I was going to wait until I had found something more conclusive. Since you've brought it up, though, here are my findings thus far:

    I have a 3-drive array with various RAID levels:
    BOOT partition ext3 on RAID1 with hot spare
    SWAP partitions distributed across the 3 drives (not sure if this is the best, but I've heard of potential issues with having SWAP on RAID)
    ROOT partition xfs on RAID1 with hot spare
    STORE partition xfs on RAID5

    The drives in the RAID array are hda, hdb, and hdd.
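    For reference, a raidtools-style /etc/raidtab entry for one of the mirrors would look roughly like this (the partition numbers here are just placeholders, not necessarily my exact layout):

        raiddev /dev/md1
            raid-level              1
            nr-raid-disks           2
            nr-spare-disks          1
            persistent-superblock   1
            device                  /dev/hda3
            raid-disk               0
            device                  /dev/hdb3
            raid-disk               1
            device                  /dev/hdd3
            spare-disk              0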

    I backed up the entire hdb drive to an image. The backup process seemed to go fine.

    Restoring my single BOOT partition (hdb1) to the existing hda1 partition seemed to clobber the partition table; Linux fdisk complained about not detecting a valid partition table. I have not yet looked into this issue and the circumstances that cause it, though. When I restored the entire drive image instead, things were fine.

    One thing I have noticed, however, is that restoring seems to be a little tricky with software RAID1. If I have a working RAID1 array and restore one drive from an ATI image, the system will sync the drives up on next boot, essentially overwriting the drive that I restored the image to with the data on the remaining drive from the mirror.

    I tried removing a drive from the array first. I have a mirror on hda1 and hdb1. I used raidsetfaulty and raidhotremove to remove hdb1 from the array (so the array was just hda1). I then restored hda with the ATI boot CD. Surprisingly, though, when I next booted the system, the data from hdb1 was synced over to hda1.

    So in both of the above cases, restoring a single drive for the purpose of a "rollback" failed. The drive with the data that was pulled from the ATI image was overwritten by the data from the remaining drive from the array.
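    One thing I still want to check is whether the sync direction can be predicted (or influenced) from the md superblocks. As I understand it, each member carries an event counter and update timestamp, so comparing them on the two halves should at least show which copy the kernel considers current (device names are just examples):

        # compare the event counters / update times on the two mirror halves
        mdadm --examine /dev/hda1
        mdadm --examine /dev/hdb1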

    Questions that I have not answered yet are:
    1. Will a drive that has been removed via raidhotremove stay removed after a reboot? That might explain why the second option above (removing a drive from the array and then restoring the TIB to the remaining drive) didn't work as I had expected.
    2. Since I haven't had much luck with restoring a single drive of a RAID 1 array, how about restoring both of the drives?
    3. Can RAID5 arrays be backed up? If so, it would seem that you would need to image n-1 drives, where n is the total number of drives in your array.
    4. Is it possible to manually specify or override which direction the data flows when an array is rebuilt?
    5. What is the cause of the partition table anomaly that I described above, when restoring a single partition rather than the whole drive?

    It's not that I can't answer the above questions, but rather that I haven't found the time to do the appropriate testing yet. :)
     
  8. Dustyb

    Dustyb Registered Member

    Joined:
    Sep 28, 2004
    Posts:
    7
    It should. Do a raidsetfaulty and then reboot. The kernel will then remove the "faulty" disk from the array entirely, i.e. you won't see it at all in /proc/mdstat. That's what you want. If the disk is just flagged failed but still part of the array, it is probably just going to sync right up with its RAID 1 pair on reboot, because there is nothing truly wrong with it. Oldest syncs with newest. That sounds pretty much like what you were experiencing. I'm pretty convinced that a RAID 1 needs to be completely broken down in order to restore.

    Once the disk is removed from the array, you add it back in with mdadm or a raidhotadd. I'm almost positive that the disk you add will always sync from the existing array members.
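    In other words, once the restore is done and the machine is back up in degraded mode, something like this should kick off the rebuild (raidtools first, then the mdadm equivalent; device names are placeholders and I haven't walked this through end to end yet):

        # add the removed partition back into the degraded mirror
        raidhotadd /dev/md0 /dev/hdb1

        # or, with mdadm
        mdadm /dev/md0 --add /dev/hdb1

        # watch the resync progress
        cat /proc/mdstat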

    Anyway, I think we both envision the basic procedure, but I'm still not convinced ATI is able to restore an entire software RAID partition without partition table errors or superblock inconsistencies.
     