Confirming my impression that HD Manager Pro lacks source verification at backup?

Discussion in 'Paragon Drive Backup Product Line' started by Stilez, Apr 28, 2012.

Thread Status:
Not open for further replies.
  1. Stilez

    Stilez Registered Member

    Joined:
    Apr 25, 2012
    Posts:
    19
    Hi,

    I've been considering Hard Drive Manager Pro 12 but have grave concerns because it doesn't seem to have, or perform, source verification on backup.

     By this I mean an option to perform, once a backup is done, a verification pass that re-reads the source drive or files and compares them against the image contents.

     I can see there is "archive verification", but a careful look shows this isn't source verification: it only verifies that the archive is readable and that its contents match what was hashed at the point of backup.

     Why is this important? Because a backup involves two steps: (1) source data (i.e. disks/files) -> Paragon, and (2) Paragon -> backup archive. Hashes stored in the image, and checking the image against those internal hashes, can only verify step 2, not step 1. If a read error occurred, the read was somehow not faithful, "shadow copy" wasn't working correctly for some file, or some other condition interfered, then the image content may well match the bytes that Paragon read - but what Paragon read is not a faithful, 100% consistent copy of the true source data. Only a second verification pass that re-reads the source data and checks it matches what would be restored from the archive can catch that.
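
     To make the distinction concrete, here's a rough sketch in Python - purely illustrative, nothing to do with how Paragon is actually implemented - of the two different checks:

     import hashlib

     def back_up(bytes_read_from_source: bytes) -> dict:
         # The archive stores whatever was READ, plus a hash of it.
         # If the read itself was faulty, the faulty bytes get hashed just as faithfully.
         return {"data": bytes_read_from_source,
                 "stored_hash": hashlib.sha256(bytes_read_from_source).hexdigest()}

     def verify_archive(archive: dict) -> bool:
         # "Archive verification" (step 2): is the archive still what was written at backup time?
         return hashlib.sha256(archive["data"]).hexdigest() == archive["stored_hash"]

     def verify_against_source(archive: dict, fresh_reread_of_source: bytes) -> bool:
         # "Source verification" (step 1): does the archive match a fresh re-read of the source?
         return archive["data"] == fresh_reread_of_source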

     The option doesn't seem to exist, and the manual and KB have nothing under "verify" suggesting it does. Can someone confirm my impression that Paragon's software does not have this capability?
     
  2. Robin A.

    Robin A. Registered Member

    Joined:
    Feb 25, 2006
    Posts:
    2,557
     AFAIK, the option isn't available in Paragon programs.

    I think if the backup process is well designed and implemented, this option is not required. The backup process must be inherently reliable and must have its own built-in checks. In the case of Paragon, user experience proves that this is the case, because restore errors are not common.

     Besides, “source validation” significantly increases the time needed to create an image, and can only be performed once, as part of the process of creating the image.
     
  3. Stilez

    Stilez Registered Member

    Joined:
    Apr 25, 2012
    Posts:
    19
     Thanks. I don't necessarily agree that the rest of what you say carries weight, though. Here is why.

     Backups get checked, and most often used, because something else that was relied upon has failed or might fail - the whole point is to mitigate data loss, whatever the cause. Even "well designed and implemented" software gets bug fixes and updates all the time; untested edge cases, new updates, or subtle incompatibilities in software and hardware all mean that even "well designed" software can have errors sometimes. Are VSS or Paragon's own shadower perfect, with no possible situation where they hand Paragon data that isn't 100% self-consistent because of other things going on in the system? Some may not wish to take that risk. I think a truly "well designed" backup solution anticipates this possibility and at least allows the option, rather than assuming it cannot matter.

     Almost all the time it probably does work, because edge cases are few and common failures (hardware, OS or software) get reported and fixed quickly. But user experience does not prove it won't happen: a failure might be rare, or simply go undetected by the user.

     The checksums catch the error "what is in the image now isn't what we wrote to the image at the time". But only a second pass that re-reads the source and compares it against an expansion of the just-written archive truly proves that the archive, when expanded, will give back the correct source data. Some are fine with the extra time and some aren't worried, in the same way that some value a quick BIOS memory check at POST and some value booting 10 seconds faster. Of course this only needs checking once.

     Example: suppose there is a power blip during backup, so my SATA interface returns a sector of 00's yet reports the hardware checksum as OK, so the OS believes this is the correct data. Or some software race condition or edge case in VSS makes a shadow copy inconsistent. The backup program is told "the data for this file is (a bunch of 00's)" and naturally backs up that value and its checksum. Checksums will tell you if that value later changes, and will confirm the file restored is truly the file written. But only a verify-to-source pass - NOT a checksum test - will warn that the just-written data, when expanded, does NOT match what's on the disk at the time of imaging, because it re-reads the source a second time to check that the data on disk truly matches the data in the saved archive, so the user can act.
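
     To spell that out as a toy script (Python, made-up data, nothing to do with Paragon's internals):

     import hashlib

     true_sector = b"\x11" * 512     # what is really on the disk
     faulty_read = b"\x00" * 512     # what the glitched read handed to the backup program

     # the archive faithfully stores what was read, plus a checksum of it
     archive = {"data": faulty_read,
                "hash": hashlib.sha256(faulty_read).hexdigest()}

     # archive verification passes: the archive matches what was written into it
     print(hashlib.sha256(archive["data"]).hexdigest() == archive["hash"])  # True

     # verify-to-source: a second read of the (healthy) disk exposes the mismatch
     print(archive["data"] == true_sector)                                  # False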
     
    Last edited: Apr 28, 2012
  4. seekforever

    seekforever Registered Member

    Joined:
    Oct 31, 2005
    Posts:
    4,751
     You are correct that a compare of the source and the archive is the best verification, but with live imaging it can't be done, and live imaging is how these programs are intended to be used. The source will very likely change from OS activity, and possibly user activity, as the image is being created. Paragon is not alone in this method. The program could be made to do it from the rescue media, which operates on a static disk, but it doesn't.

    It is easy to speculate about what might go wrong but in reality I don't think the history of these programs supports "getting the source wrong without knowing it" as an issue.

     The power-blip concern can be mitigated with a UPS, but modern power supplies smooth blips very well, and a significant dropout will cause the PC to shut down.

    Regardless of imaging program, it is a good idea to run chkdsk or equivalent from time to time before creating the image. The imaging programs are pretty tolerant of structure problems though.

    If you want to get real anal about the integrity of your image you can do what a person using Acronis does. He makes the image and then immediately restores it to another drive and puts that drive into service. He knows for certain that both the image and the restore worked since it now is his working version. To facilitate this he uses drive caddies for convenience.

     Personally, I take the split approach. The files I have concern for are my personal data files that aren't available anywhere else at any price. I use a different program for my data files, which I never keep on C. This program keeps the files in their native format, not a proprietary container file (the main reason I use it), and it can also do source comparisons. I only image my C drive, and if it gets lost I can always rebuild it from scratch if necessary. Of course, it has never been necessary.

     I also keep a history of archives so that if the last one should be bad for any reason I can fall back to an earlier one. In all my years of imaging with Acronis and now Paragon, the only image problem I had was an Acronis image on a laptop HD going bad because the disk storing it developed some bad sectors.
     
    Last edited: Apr 29, 2012
  5. Stilez

    Stilez Registered Member

    Joined:
    Apr 25, 2012
    Posts:
    19
    Good points, thank you.

     The underlying concerns are bugs and edge conditions, and below that, subtle hardware issues. I had a bad RAM module a month ago, which caused intermittent problems for a month, affecting just a few "rows" and only under load (I think), before it got worse and pointed to a hardware issue. A backup made in that time might have saved faulty data for some file(s) and calculated from it a verifiable but dud checksum. Bug fixes get released for races and other edge conditions, and there is plenty of discussion about whether a given program is or isn't VSS-aware (noting that Paragon and others include their own VSS handler as an option, or suggest some handlers won't be reliable with some programs - IANAE). Put together, these suggest a non-trivial risk that just checking an archive's internal self-consistency may not be sufficient to confirm it's a faithful reflection of the source.

     What all of these have in common is that they probably cause intermittent errors, where two distinct reads a while apart give different results, so a check-to-source eliminates most of that (slim) risk.

     But I absolutely disagree that "it seems to work" is good evidence that no file had a bit flipped in some crucial part. Some files, and some parts of files, are accessed only rarely, maybe not for years. Last month I was asked to dig up an email from 2001 for a legal query about T&C. I doubt I'll ever need my Win98 installer, but one never knows. There are thousands of files on my system whose corruption would only show up in narrow circumstances. Only source verification at imaging confirms that the data held is the data imaged. Only checksums (or multiple copies) can confirm the image is still unchanged since creation. "It restored and seemed to work" means nothing.

     Where you say it "can't be done" or is somehow outdated, I have doubts. Often one is imaging, copying, or restoring non-system disks where a full lock can be obtained. If a disk can't be locked, the VSS snapshot can be retained after the initial imaging pass; the same files re-read within the same VSS session should give the same results. (If they don't, the original image would also be unreliable.) So it is always possible to re-read the source and verify the new image against it, provided a lock is achieved or shadowing is working correctly.
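
     As a rough sketch of what I mean (Python, and purely hypothetical: it assumes the shadow copy has been kept exposed, e.g. via diskshadow, at the device path below, and that the just-written archive has been extracted somewhere for comparison - both paths are made up):

     import hashlib
     from pathlib import Path

     SNAPSHOT_ROOT = Path(r"\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy3")  # hypothetical snapshot device path
     EXTRACTED_ROOT = Path(r"D:\verify\extracted_image")                       # hypothetical extraction folder

     def sha256_of(path: Path) -> str:
         h = hashlib.sha256()
         with open(path, "rb") as f:
             for chunk in iter(lambda: f.read(1 << 20), b""):
                 h.update(chunk)
         return h.hexdigest()

     # compare every extracted file against a fresh re-read of the same file from the shadow copy
     mismatches = [p for p in EXTRACTED_ROOT.rglob("*") if p.is_file()
                   and sha256_of(p) != sha256_of(SNAPSHOT_ROOT / p.relative_to(EXTRACTED_ROOT))]

     print("source-verify OK" if not mismatches else f"{len(mismatches)} file(s) differ")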

    And yeah, chkdsk before any backup - always!
     
    Last edited: Apr 29, 2012
  6. Robin A.

    Robin A. Registered Member

    Joined:
    Feb 25, 2006
    Posts:
    2,557
     I think it can be done in any kind of imaging process, but only as part of the process of creating the image. During this process, the partition that is being imaged is "static". This is the method used in Terabyte programs, according to the documentation (I haven't used them).
     
    Last edited: Apr 29, 2012
  7. seekforever

    seekforever Registered Member

    Joined:
    Oct 31, 2005
    Posts:
    4,751
     OK, I shouldn't have said it couldn't be done - you can do pretty well anything if the effort is put into it. However, this is the way it is in Paragon, Acronis and, IIRC, Ghost, so you may have to look further.

     No argument about the comparison being the best method. As far as bad RAM goes, the checksum method tends to pick it up - many of the verification problems on the Acronis forum are due to flakey RAM. If the RAM is flakey, it is very improbable that the same "flakeys" will show up again when the verification pass is done, and even more unlikely that the same RAM locations would be used for each byte.

     Paragon actually has a slightly bigger issue with verification from the user's perspective, because there is no obvious verification that can be run automatically when the image is created. It either has to be done as a separate task or the script for the task has to be fudged, AFAIK. I'd say most casual Paragon users never verify.

     A possible method that would improve confidence in the archive's integrity is to create it from the Paragon rescue CD, which does not use VSS or any form of live imaging and deals with a static disk. Create the archive twice and then do a CRC compare of the two archives, which should be identical. If any scratch data gets written to the partition between the two passes, though, this method won't work.
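
     For the compare itself, something as simple as this little Python helper would do (the archive names are just placeholders; CRC32, MD5 or SHA-256, whatever is handy):

     import hashlib

     def file_sha256(path: str) -> str:
         h = hashlib.sha256()
         with open(path, "rb") as f:
             for chunk in iter(lambda: f.read(1 << 20), b""):
                 h.update(chunk)
         return h.hexdigest()

     # hypothetical archive names from the two backup passes
     a, b = "backup_pass1.pbf", "backup_pass2.pbf"
     print("archives match" if file_sha256(a) == file_sha256(b) else "archives DIFFER")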

     I can't give any absolutes, of course, but it doesn't appear that this is a real source of problems. Like I said, my one problem was an archive going bad due to a disk problem; my other was a flakey SATA cable causing an Acronis verification error. Neither was because the wrong data was recorded.
     