Alternative to True Image (nervous nellie)

Discussion in 'backup, imaging & disk mgmt' started by bellgamin, Jul 18, 2006.

Thread Status:
Not open for further replies.
  1. Pedro

    Pedro Registered Member

    Joined:
    Nov 2, 2006
    Posts:
    3,502
    Peter2150: I'm thinking the same. Keeping it simple.
    For a home user some extra tasks can be handy, but if you back up files and folders regularly (or even on a schedule with another program, possibly freeware) there is a great advantage in simplifying things. With today's HDDs, where are the space problems?

    I imagine that an enterprise values these extra functions, since it can manage multiple computers. grnxnm made some strong points in this thread about the reliability of SP.
    That's why I'd like to see the other perspective/counter-argument, since there's always at least one. I can't remember exactly what he wrote, but it referred to the process of taking a snapshot, the "phylock" function. It sounded something like a "best practice"...
     
  2. Peter2150

    Peter2150 Global Moderator

    Joined:
    Sep 20, 2003
    Posts:
    20,590
    I believe that ShadowProtect uses the Microsoft VSS service. I've noticed, monitoring the Acronis threads, that a lot of the problems come from trying to use all the bells and whistles. Granted, if they are there they should work, but buying a program like Acronis just to back up Outlook emails? Duh.
     
  3. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    With ShadowProtect, the only true weakness I know of when it comes to incrementals is that more files are needed to restore a particular point-in-time. If you lose just one of these files, you lose the point-in-time captured by that lost file, as well as all points-in-time represented by any incrementals that depend on the lost file. It's a file management issue. Most users set up a backup directory, specify a retention policy (whereby ShadowProtect's job will automatically, and safely, delete older image chains for them to conserve space), and don't touch the files in the backup directory. In this most common use case it's generally unlikely that an individual incremental will somehow be deleted.
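
    To make the chain dependency concrete, here's a minimal sketch (Python, purely illustrative; the file names and chain layout are assumptions, not ShadowProtect internals) of why losing one incremental also loses every later point-in-time in the same chain:

    ```python
    # Illustrative model only: a chain is a base image plus ordered incrementals,
    # and restoring point-in-time i requires every file up to and including i.
    chain = ["vol_base.spf", "vol_inc1.spi", "vol_inc2.spi", "vol_inc3.spi"]

    def restorable_points(chain, missing):
        points = []
        for i, image in enumerate(chain):
            if any(f in missing for f in chain[:i + 1]):
                break  # this point, and all later ones, depend on a lost file
            points.append(image)
        return points

    # Losing vol_inc2.spi also loses the point captured by vol_inc3.spi.
    print(restorable_points(chain, missing={"vol_inc2.spi"}))
    # -> ['vol_base.spf', 'vol_inc1.spi']
    ```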

    I know many of you feel more comfortable only using base/full images. For home user needs this is just fine. Just realize that it's not sufficient for enterprises. Enterprises need to minimize the resources used by backup software, and maximize uptime. ShadowProtect provides enterprises with the ability to maintain 100% uptime and minimize resource utilization through fast incrementals. The difference in time it takes to make an incremental image, vs. a full image, of a multi-terabyte volume, is huge. ShadowProtect incremental images are often taken in a matter of mere seconds. A full image of a terabyte volume can take quite a long time, even with highly-optimized code.
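
    As a rough back-of-the-envelope illustration (the throughput and change-rate figures below are assumed for the sake of the example, not measurements of any product):

    ```python
    # Assumed numbers, for illustration only -- not product benchmarks.
    volume_gb = 1024          # a 1 TB volume
    throughput_mb_s = 150     # assumed sustained imaging throughput
    changed_mb = 500          # assumed data changed since the last snapshot

    full_seconds = volume_gb * 1024 / throughput_mb_s
    incremental_seconds = changed_mb / throughput_mb_s

    print(f"full image:        ~{full_seconds / 3600:.1f} hours")    # ~1.9 hours
    print(f"incremental image: ~{incremental_seconds:.0f} seconds")  # ~3 seconds
    ```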

    As far as trusting incrementals goes, the only way you can learn to trust incrementals in a particular product is to try them out. Let me give you a vivid case-in-point of my trust for our incrementals. At StorageCraft our primary file server, despite having TBs of disk space, had reached its limit, and we needed to double its storage capacity. Like all our production servers, our primary file server has ShadowProtect installed on it and an automatic backup job configured to make bases and incrementals on a specified schedule. Unfortunately its RAID controller didn't allow us to dynamically grow the RAID array, so we simply turned it off, threw in a bunch of larger disks, configured a larger array, booted our ShadowProtect Recovery Environment CD, and restored the last incremental that was taken. At the time we did this, there was one base image and 7 incremental images in the chain that we restored. We've tested and used our own product enough that we didn't hesitate a moment to trust it with our own production data (and believe me, it would be devastating if we lost this data) when restoring an incremental.

    The core code that deals with our image file manipulation for bases and/or incrementals was polished off about two years ago and hasn't been altered since then. This code has proven, over time, to be quite robust (we haven't yet found a single bug related to this code). The great thing about this code is that it's written in a platform/environment-independent way, such that this same, unmodified, core image-file manipulation code can be incorporated in other projects/components with ease. For instance, this code is shared by the imaging engine (which makes the full and incremental images), as well as by the mount driver, and also by our image access API (licensed to VMware). We have not received a single bug report related to this functionality in any of these three components. If we had received bug reports then I would feel less confident in incrementals, but as it stands I'm pretty confident that this technology is solidly implemented.

    As far as the KISS principle goes, yes, we have a major incentive to KISS, namely that it also minimizes support issues with confused customers. It's a very tricky balancing act trying to KISS and also to expose advanced functionality.
     
  4. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    Oh, hey, I just remembered another weakness of incremental imaging - multi-boot environments.

    In order to generate fast incremental images, it's necessary to know exactly which sectors have changed on a volume since the last image was taken. To do this, a filter device driver monitors I/O and maintains a map (we use a highly-optimized compressed bitmap similar to the one used by NTFS) of the sectors that have changed. When the next backup is made, this map is exposed to the backup image engine so that it backs up only the sectors that have changed since the previous backup. Because the incremental tracking done by this driver uses an entirely in-memory structure, and an efficient one at that, it imposes negligible overhead (the real bottleneck in disk I/O is the disk, not memory).
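
    A very simplified sketch of the idea (this is not the actual driver code, which lives in kernel mode and uses a compressed bitmap; the plain bytearray below is just to illustrate the bookkeeping):

    ```python
    # Simplified model of sector-change tracking: one bit per sector,
    # set in memory whenever a write passes through the (simulated) filter.
    SECTOR_SIZE = 512

    class ChangeMap:
        def __init__(self, volume_bytes):
            sectors = volume_bytes // SECTOR_SIZE
            self.bits = bytearray((sectors + 7) // 8)   # in-memory bitmap

        def record_write(self, offset, length):
            first = offset // SECTOR_SIZE
            last = (offset + length - 1) // SECTOR_SIZE
            for s in range(first, last + 1):
                self.bits[s // 8] |= 1 << (s % 8)       # memory only, no disk I/O

        def changed_sectors(self):
            return [s for s in range(len(self.bits) * 8)
                    if self.bits[s // 8] >> (s % 8) & 1]

    cm = ChangeMap(volume_bytes=1 * 1024 * 1024)        # tiny 1 MB "volume"
    cm.record_write(offset=4096, length=1024)           # touches sectors 8 and 9
    print(cm.changed_sectors())                         # -> [8, 9]
    ```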

    Now, suppose you have ShadowProtect installed on your C: Windows volume, but you also have a bootable D: Windows volume, and D: does not contain ShadowProtect. Say you create a backup job on C:, it has made a base and some incrementals, and its incremental tracking is therefore active. If you reboot into D: and, from the D: Windows system, make changes to C:, there will be no driver active to track the changes you made to C:, so when you boot back into C:, the next incremental will not capture the changes you made from the other boot environment.

    The same issue can occur if you boot a bootable CD and make changes to a system on which incremental tracking normally occurs (but won't occur from your alternate boot environment). ShadowProtect's recovery environment CD will turn off incremental tracking for this very reason, so if you boot our CD it will make sure that you won't be able to corrupt incrementals (if you for instance make changes to an incrementally-tracked volume during your CD-boot session).

    Finally, fast incremental tracking requires that very little overhead is imposed, which is why it maps changes only in memory. On shutdown the change map is serialized to disk, and then reloaded on the next boot. If your system crashes ungracefully, the map will be lost, and in this case ShadowProtect will make a new base image for the next backup, rather than an incremental image.
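
    A sketch of that shutdown/boot behavior under the same simplifying assumptions (the file name and on-disk format here are invented; the real driver's serialization is not public):

    ```python
    import os

    MAP_FILE = "changemap.bin"   # hypothetical location for the serialized bitmap

    def on_clean_shutdown(bitmap_bytes):
        # Persist the in-memory change map so tracking can resume after reboot.
        with open(MAP_FILE, "wb") as f:
            f.write(bitmap_bytes)

    def on_boot():
        # If the map survived (clean shutdown), the next backup can be an
        # incremental; if the system crashed, the map is gone, so fall back
        # to taking a fresh base image.
        if os.path.exists(MAP_FILE):
            with open(MAP_FILE, "rb") as f:
                return "incremental", bytearray(f.read())
        return "new_base", None

    on_clean_shutdown(bytearray(16))   # simulate a clean shutdown
    print(on_boot()[0])                # -> incremental
    os.remove(MAP_FILE)                # simulate a crash (map never persisted)
    print(on_boot()[0])                # -> new_base
    ```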
     
  5. Howard Kaikow

    Howard Kaikow Registered Member

    Joined:
    Apr 10, 2005
    Posts:
    2,802
    I do not have the time to read thru recent posts.

    The issue is simple.

    Users need to be able to maintain multiple backup sets.
    In SP, independent FULL backup sets can be obtained by NOT enabling sector tracking.

    If sector tracking is enabled, it does not work as expected on a multiboot system, or when using multiple backup sets, as the sector tracking gets out of sync.
     
  6. Howard Kaikow

    Howard Kaikow Registered Member

    Joined:
    Apr 10, 2005
    Posts:
    2,802
    We went through this a number of months ago.
    The issue is the algorithm used.

    TI stores the sector info with each archive, rather than burdening the system with the overhead of sector tracking. This is one of the reasons that might explain why, on my system, a TI backup takes longer than either a Ghost or SP backup. However, I sure would rather have a longer backup time than the overhead of the monitoring of sector tracking, not to mention the issues related to multiboot and multiple backup sets.
     
  7. Howard Kaikow

    Howard Kaikow Registered Member

    Joined:
    Apr 10, 2005
    Posts:
    2,802
    That's the problem.

    Users need to maintain independent backups.

    Tying each to the sector tracking does not really work.

    Users need to be able to create backup set A on Monday, backup set B on Tuesday, then go back and incrementally update each backup set on, say, alternate days. Once backup set B is created, the sector tracking for backup set A is gone.

    And on a multiboot system, this just doesn't work.

    SP is fine for creating multiple backup sets if sector tracking is not enabled.
    My recollection is that SP was faster and did a better job of compression than Ghost.
     
  8. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    Well, I'm sure you'll get to it eventually. It did take me a couple of days to digest your statements. The important thing in a dialog is that we do actually listen to each other, and carefully consider each other's words.

    I don't believe this is the case in ShadowProtect. Can you please, for ShadowProtect, specify in exacting detail the evidence that you have to make this statement? I would like the exact repro steps, and an explanation of the evidence, that you used to arrive at this conclusion. If you can provide such steps and evidence, then there is indeed a glaring bug and I would like to make sure that it is addressed immediately.

    Of course we also store a map of the stored sectors within each archive. Regarding the "burden" you speak of for incremental tracking, have you actually taken metrics on this overhead? We have. It's not even measurable. In other words, when we perform an identical prolonged I/O operation with, and then without, incremental sector-change tracking occurring, we are not able to measure any meaningful hit from enabling sector-change tracking. How can this be explained? Our incremental sector tracking does not need to perform a copy-on-write operation. All it has to do is mark off a bit of memory, in a highly-optimized bitmap data structure which is entirely resident in memory, whenever a sector is written. A memory operation takes many orders of magnitude less time than a disk operation. This mapping only occurs when an actual write is being made to the disk, and the disk, relative to memory, is so amazingly slow that doing a little work in memory before the data hits the disk poses essentially no overhead.
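
    For anyone who wants to get a feel for the disparity themselves, a crude user-mode measurement along these lines (nothing like a proper kernel-level benchmark, and the absolute numbers will vary by machine) shows in-memory bit-sets costing nanoseconds while synced disk writes take orders of magnitude longer:

    ```python
    import os, time

    bitmap = bytearray(1 << 20)        # 1 MiB in-memory bitmap
    block = os.urandom(4096)           # one 4 KiB "sector write"

    t0 = time.perf_counter()
    for s in range(100_000):           # 100k simulated bit-sets
        bitmap[s // 8] |= 1 << (s % 8)
    mem_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open("bench.tmp", "wb") as f:
        for _ in range(1_000):         # 1k real disk writes, forced to media
            f.write(block)
            f.flush()
            os.fsync(f.fileno())
    disk_time = time.perf_counter() - t0
    os.remove("bench.tmp")

    print(f"per bit-set:    {mem_time / 100_000 * 1e9:,.0f} ns")
    print(f"per disk write: {disk_time / 1_000 * 1e6:,.0f} µs")
    ```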

    I don't recall discussing any algorithms. Can you please point me to the post to which you're referring?

    Without making an overly-complicated schedule, we've determined (through user case study) that the schedule types that ShadowProtect provides satisfy the majority of users. Even many of this forum's participants are asking for simplicity, not more complexity. We are adding some additional flexibility to the scheduling, but nothing on the order of multi-snap support, and I doubt we'll ever expose the multi-snap capability (it would just make things way too complicated for most users). If nothing short of multi-snap will satisfy you, then ShadowProtect is not for you.

    Howard, I want to reiterate that I truly value your opinions. I am doing my utmost to understand your claims and if there is a bug in ShadowProtect related to this topic then it is a critical bug and I definitely want to resolve it.

    Frankly, StorageCraft makes almost no money from the home user market. Our real profits are from our enterprise customers. Therefore, one might wonder why a senior engineer focused on enterprise solutions would spend his time discussing issues in a forum composed primarily of home users. I am not here to try to sell you on ShadowProtect. I frankly don't care if anyone here is influenced to purchase ShadowProtect. My objective in participating in this forum is to gather feedback from experienced power users who delight in pushing products to the breaking point. These forums are filled, as you know, with exactly this type of user. If I can incorporate such feedback into ShadowProtect, then it will be a better product, and that's my goal. So, in light of this explanation, please know that I do fully value your opinions and feedback. They're the only reason I'm here. Although it takes time to participate here, I feel that it has been a productive investment. Feedback from this thread alone has directly contributed to three bug fixes and one enhancement.
     
    Last edited: Jan 28, 2007
  9. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    Wishlist

    I'm curious to know if there are any features that you guys would really like to see in future releases of ShadowProtect.
     
  10. Pedro

    Pedro Registered Member

    Joined:
    Nov 2, 2006
    Posts:
    3,502
    No offense, but why do you want this? You're way ahead of me, years probably, but I can't see the benefit of being able to do this. o_O

    So it won't drag the computer any more than some other program with a similar memory footprint. Is that correct?

    In my case, that is correct. Keeping it simple minimizes errors from the user, and from the program itself, since the programmers have fewer functions to deal with. This is the kind of product that can't fail.

    I think we understood you. You come here to see what these guys do with your product. They can try things no other user would remember, or accomplish, and find new uses too. They stress it so far that if no bugs are found, SP has reached maturity.
    And I appreciate you coming here and writing all this. Impressive :thumb:

    But, even if you don't earn that much from home users, I must say two things:
    (sorry, it's my background :) )

    1- You have the potential to grow in this market segment, so this visibility can only do you good.
    2- Even if you never earn much from the "home segment", there is an indirect effect. The home user uses it, and if impressed, and fully confident that the product will always perform, he can implement it in his company, or recommend it (if he doesn't own the company).

    Thank you all for this discussion :thumb:
     
  11. Huupi

    Huupi Registered Member

    Joined:
    Sep 2, 2006
    Posts:
    2,024
    Do you like backup/recovery outside Windows with the recovery disk by SP? It's fast and reliable, and I never met any of the troubles that burden the other major player in the market. I image very different configurations, like external SATA and ATA, dual-core, single-core, etc. SP takes them all! I trialled Acronis, Paragon, even O&O, but no joy, mostly not recognising the external drive. But SP is Windows PE; maybe that makes the difference?!
     
  12. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    I'm sorry if I was confusing on this point. "Memory footprint" usually refers to the amount of memory used by a particular component, including its own data segments, to perform its given function. With that definition, I wasn't really referring to memory footprint, per se. The reason I pointed out memory usage is to highlight the fact that the only thing being done by StorageCraft's snapshot driver while it is actively performing incremental tracking is setting bits in a memory-resident bitmap (this bitmap is itself a highly-optimized data structure, implemented very similarly to the bitmaps used within the NTFS driver) to indicate which sectors have changed on the volume from one snapshot to the next. The salient point here is that incremental tracking doesn't do any disk I/O in the normal flow of traffic (it does flush this map to disk on shutdown, and read it in on boot, but during normal I/O traffic it doesn't use the disk at all). Because incremental tracking is a very simple and small operation, taking very few instructions and using memory alone (not disk), it is many, many orders of magnitude faster than the write operation itself, which ultimately does go to a disk, and the disk is phenomenally slow compared to memory.

    You also have to consider how deep the Windows storage stack really is. It's not unusual for a write operation, from the time it leaves your app to the time it hits the disk media, to pass through ten or so kernel-mode components, each of them adding some kind of value to the operation. The incremental tracking is a very lightweight operation that occurs only in memory and only at one layer of this large stack (where our driver is filtering), so its overhead (if you can even call it that) is basically insignificant, especially since, as I said, the write ultimately hits the disk, which sometimes takes hundreds of thousands of times longer to complete than it took for the write IRP (I/O Request Packet) to make its way down the storage stack to the media.
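
    To picture where the filter sits, here's a toy model of a layered write path (the layer names are invented; the point is only that the tracking layer does a single in-memory operation before handing the write down the stack):

    ```python
    # Toy model of a write descending a layered storage stack.
    changed_bits = set()

    def track_changes(offset, data, lower):
        changed_bits.add(offset // 512)   # one in-memory operation...
        return lower(offset, data)        # ...then pass the write down

    def volume_manager(offset, data, lower):
        return lower(offset, data)        # stand-in for the other stack layers

    def disk_driver(offset, data):
        return len(data)                  # stand-in for the (slow) media write

    def write(offset, data):
        return track_changes(offset, data,
                             lambda o, d: volume_manager(o, d, disk_driver))

    write(8192, b"x" * 512)
    print(changed_bits)                   # -> {16}
    ```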

    The bottom line is that the incremental tracking poses negligible overhead.
     
  13. Howard Kaikow

    Howard Kaikow Registered Member

    Joined:
    Apr 10, 2005
    Posts:
    2,802
    If you have only 1 backup archive, you are dead in the water if the backup media becomes unusable or the archive becomes corrupt.

    More backup archives reduce the chance of such a disaster.

    I have 4 USB drives:

    O has a FULL backup and several differential backups.
    P has a FULL backup and several differential backups.
    I have two drives that swap as N; each has a FULL backup and several incremental backups.

    I do FULL/incremental/differential backups nearly every day, sometimes more than once per day, alternating amongst the drives.
     
  14. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    Hmm, let's see here...

    If I make sure my C: drive is always backed up to my D: drive (where D: is on a different physical disk), then the only time I have a catastrophe is if *both* physical disks fail at the same time. This would be an extremely rare event.

    Howard's contention is that you can avoid such a rare event by making sure that C: is backed up to not one, but two or more other physical disks, say D: and E:, each of which resides on a separate physical disk. However, Howard, there's always the incredibly ultra-rare possibility that all of these disks will fail. You can extend this argument on and on.
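
    Treating drive failures as independent (a simplifying assumption) and plugging in a purely illustrative per-day failure probability, the chance of every copy dying in the same window falls off geometrically with each extra disk:

    ```python
    # Illustrative only: independent failures and a made-up daily failure rate.
    p = 1e-4                      # assumed per-disk, per-day failure chance

    for n_disks in (1, 2, 3):     # the source disk alone, then with 1 or 2 backup disks
        print(f"{n_disks} disk(s) all failing on the same day: {p ** n_disks:.0e}")
    # 1 disk(s):  1e-04
    # 2 disk(s):  1e-08
    # 3 disk(s):  1e-12
    ```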

    You just have to draw a line somewhere. Generally home users are pretty safe as long as their backup is on a different physical disk. In version 3 we've added a feature that enables you to have your backup files duplicated on multiple targets, so this should (I hope) satisfy those like Howard who want the additional security.

    This gets into the realm of image-file management, about which not much has been done yet by any of the major Windows image-backup vendors. I'm pretty excited about the stuff we're working on in this area but can't disclose anything about it. I know that our enterprise guys are gonna love it.
     
  15. ChairmanMeow

    ChairmanMeow Registered Member

    Joined:
    Jan 3, 2007
    Posts:
    49
    I back up my entire HD0 disk alternately to files on E: (a separate drive, HD1) and F: (an external USB drive) using ATI 10 - image size 9 GB.

    As I'm worried about the possibility of a failure with the ATI software (backup or restore) I also use BootIt NG to make image copies of C: to different partitions on HD1 and the USB drive. As these partitions are BING image partitions they are not mounted by Win XP so are protected from accidental or malicious deletion.

    I think the only danger is if some virus/malware started deleting partitions off my disks. In case this happens I've printed off all the partinfo details to allow me to manually rebuild the partition structure and undelete the partitions.

    So my risk is spread over 3 physical disks and 2 images are not available to XP.
     
  16. aigle

    aigle Registered Member

    Joined:
    Dec 14, 2005
    Posts:
    11,164
    Location:
    UK / Pakistan
    How to do this? Any info? Thanks
     
  17. Howard Kaikow

    Howard Kaikow Registered Member

    Joined:
    Apr 10, 2005
    Posts:
    2,802
    If D is not an external drive, there is a higher probability that both C and D could be affected by a single power glitch.

    It makes no sense to not use multiple backup sets. The cost of losing a single backup set is large. The cost of a 2nd, or more, external hard drive is trivial compared with the loss/grief of having a backup archive go belly up.

    Duplicating backup sets is not what users want. They can do that themselves.

    Rather there is a need to maintain backups with different file content in case one needs to go back to earlier versions of files.
    In any case, it is the user's choice as to what is in each backup set, and how many backup sets are required.

    Ghost at least provides the simple option of using an Independent Recovery Set, which allows multiple full backups anytime/anywhere the user chooses.

    This can also be done with SP, but the interface is more complex. SP should add an explicit menu option to create "independent recovery sets".

    When I first used SP, I remembered to select the appropriate options; the next time, my few brain cells did not. The GUI leaves something to be desired; otherwise SP appears to be a good product.

    I do not like TI, but it does allow one to make all the archives one needs using full/incremental/differential backups.

    If all one wants to do is make FULL backups, then I would suggest SP.
    But for incremental/differential backups, I would use TI.

    Note: Up until 3 Nov, I used Independent Recovery Sets in Ghost 10. Doing a full backup with Ghost, at least on my system, did not take much longer than doing an incremental/differential backup with TI. Each includes the time for the verify.

    My recollection is that SP was even faster than Ghost and had better compression.

    So, if SP 3 works on my system, I plan on using it for full backups, but no incremental/differential backups.
     
  18. Peter2150

    Peter2150 Global Moderator

    Joined:
    Sep 20, 2003
    Posts:
    20,590
    Howard

    You speak from what you want, but not all users agree with you. I, for one, would love it if I could image to both my internal second drive and my external at the same time.

    Also, I couldn't care less about images for earlier versions of files, at least when talking about imaging. I do keep several older images just in case... but I rotate them out. For earlier versions of critical files there are much better solutions, at least for me, than an image. I can restore earlier versions of a critical file with a right click in Explorer, and keep many more versions than I'd ever want to keep as images.

    Pete
     
  19. ChairmanMeow

    ChairmanMeow Registered Member

    Joined:
    Jan 3, 2007
    Posts:
    49
    Go to the free software page on the terabyte site and download the partinfo utility.

    Link: http://www.terabyteunlimited.com/utilities.html

    Using BootIt NG you can back up the MBR and track 0 (which contains the EMBR) to the boot floppy (mine is 32K). I have done this and then copied it as a file onto both E: and F: drives (in case the floppy goes lala). Therefore, using BootIt NG I can restore the whole of track 0 on drive HD0 if something has altered the partition info.
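
    Incidentally, the ~32K size lines up with classic disk geometry, assuming the traditional 63-sector first track and 512-byte sectors:

    ```python
    SECTOR_BYTES = 512
    TRACK0_SECTORS = 63      # classic CHS geometry: MBR plus the rest of track 0

    print(TRACK0_SECTORS * SECTOR_BYTES)   # -> 32256 bytes, i.e. roughly 32 KB
    ```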

    The printed partinfo is just a backup in case all else fails.
     
  20. aigle

    aigle Registered Member

    Joined:
    Dec 14, 2005
    Posts:
    11,164
    Location:
    UK / Pakistan
    Thanks.
     
  21. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    Oh, I just thought of an additional disadvantage (albeit not a major one) of ShadowProtect. Its partitioning capability, within its recovery environment, is basic. It's sufficient for the needs of general restore, but is not on par with a heavy-weight partitioning tool such as PartitionMagic. IIRC True Image ships with a pretty decent partitioning tool that supports partition shrinking/moving/etc. Is this right? If so, and if you demand heavy-weight partitioning capability within the recovery environment, then True Image is probably your best bet (at this time...).

    Don't get me wrong though. ShadowProtect's partitioning capability is just fine for restore purposes.
     
    Last edited: Jan 31, 2007
  22. Howard Kaikow

    Howard Kaikow Registered Member

    Joined:
    Apr 10, 2005
    Posts:
    2,802
    A tenet of my religion is that heavy-duty partitioning stuff should be done with a product such as Partition Magic.
    This can be done from the PM recovery CD or floppies.
    Then, a backup program can be used to restore.
    And one can repartition after the restore with PM.

    Some folks like Acronis Disk Director better than PM, but I've not tried it.
     
  23. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    Yeah, that's kind of my feeling as well (which is why I don't feel that it's much of an issue, really).
     
  24. Howard Kaikow

    Howard Kaikow Registered Member

    Joined:
    Apr 10, 2005
    Posts:
    2,802
    Uh, oh!
    We agree, should that make us worry?
     
  25. grnxnm

    grnxnm Registered Member

    Joined:
    Sep 1, 2006
    Posts:
    391
    Location:
    USA
    LOL - Right, sorry, I'll try to be more disagreeable in future posts. :)
     