Discussion in 'backup, imaging & disk mgmt' started by beethoven, May 18, 2008.
You couldn't be worse than me.
That's the fine touch of auto verify running as a background task. It's for lazy people: you do nothing. It's all well and good not running verify on decent hardware, but when the hardware is failing, or on lesser hardware, it becomes more of an issue. The Acronis forum is awash with failed restores. At the end of the day, when it's a good day and the hardware is sound, a good image is the result, but on that one-off day when things are not well, a bad image will happen because conditions are not perfect. When you're sitting there thinking about your lost data, you will suddenly think: why didn't I do the verify? If only; things could have been different. Ah well, you live and learn. I think over-confidence in skipping verification is fine if the hardware is sound, but you can't rely on that always being the case.
I'm also lazy. As long as I can use my pointing finger, it's OK, but when I have to use all 10 fingers, I think twice about it.
Erik, when run on a schedule you don't even need to use your pointing finger. The ultimate in laziness.
I don't backup my actual system partition anymore, it might be infected.
Can we please get off the idea that SP with no verify is the be-all and end-all? It's just over-confidence, and you know full well the hardware can go bad. I am glad you are happy with SP. There will come a time when skipping verification will let you down. It's rare, but it happens.
OK. Back to the "old" and "fast" Terabyte.
Why do you say it's old, Erik, when it has had more updates than SP? It came out with 2.00 and, lo and behold, is now at 2.09. I suspect you ridicule it because your no. 1 no-verify wonder is living up to your expectations. IFW already has the superior byte-for-byte verify, yet you don't want to embrace its superior features because it's a little too much work for you, as it uses the command line. You have to click on Verify with SP, yet IFW can already do it blindfolded, since it's automatic. Can you not see that IFW is less tied to corporate interests and can do more? A lot more. You just don't want to admit it. It takes pride in being reliable. At the end of the day, SP is a good product and looks after its customers' needs, which is more than can be said of Acronis.
I agree with those who feel that verification is a good idea. Even if we assume that your backup program has no bugs (not a good assumption IMHO), hardware and other software issues can result in a corrupt backup. Memory can work fine today but be flaky someday in the future. Disk sectors that work fine today might not work so well someday in the future. A failed verification is far from useless - it's an indication that you need to take action, whether it's diagnosing and replacing faulty hardware, making a new backup, using a different backup program, etc. The fact that a backup that failed verification can sometimes be successfully restored anyway and appear to work doesn't make verification useless, for the same reason that changing a few random bits on your hard disk might not result in readily apparent problems.
Because nontrivial software, backup programs included, can be expected to have bugs, I recommend using more than one backup program, at least initially until you have confidence in one of the backup programs. At least one of these backup programs should be run with the system partition being backed up not in use, to eliminate the possibility of partition in-use bugs for at least one of your backups.
You should run a verification a few times from the same environment you would restore from, because the restore program sometimes runs under a different operating system than the one the backup program used.
It is theoretically possible for a backup to verify successfully but nonetheless be corrupt. This can happen if the data was read incorrectly when the backup was made, or if the verification algorithm in the backup program is weak, or due to bugs in the backup program. Thus, if you use differential or incremental backups, I recommend that you start a brand new full backup once in a while, keeping at least one previous backup.
You should also once in a while, or at least once, try to restore a few files from a backup to a different location, if your restore program is capable of this, to give yourself confidence that a full restore would hopefully also work. Better yet, restore the system partition to a different drive, or a different partition on the same drive. Tips on how to do this for Acronis True Image are found here. You can use TestPath to check if files of interest in different locations are identical.
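If you don't have a dedicated comparison tool handy, a short script can do the same identical-contents check. This is only a generic sketch (the paths in the comment are made up), not TestPath itself:

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def files_identical(original, restored):
    """True if the two files have byte-identical contents."""
    return file_sha256(original) == file_sha256(restored)

# Example (hypothetical paths): compare a file restored to a
# different location against its original.
# files_identical(r"C:\data\report.doc", r"D:\restore_test\report.doc")
```

A matching pair of hashes is strong evidence the restored file came through intact; a mismatch tells you exactly which file to investigate.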
In theory you are RIGHT, and when a user asks me "Do I have to verify my images?", I always say "YES", because that is the right way of doing backup and there is no discussion about it, regardless of what other users tell me.
When I had ATI, Acronis Support advised me to verify both backup AND restore, which means double verification. They explained to me why, but I have forgotten the reason.
Verifying the backup is acceptable, but verifying the restore makes me impatient, because I have to wait longer and I want my computer back. So I stopped verifying restores. As long as I used ATI, I kept verifying my backups.
Then I installed SPv2, and I knew already from Peter that SP was very reliable. Peter is like me: he likes robust and reliable software. Peter has more reasons than me to want such software, because he runs a business and has several computers, while I don't and have only one.
The only thing I didn't know was: "Is SP good enough for MY computer?"
I tortured SP as long as possible and it never failed, so it was OK.
Because SP was so good and, above all, fast, I forgot to check one thing in SP: verifying images. There was no option in the screens to verify an image. I wasn't complaining about it, because verification takes more time, and meanwhile I had done so many successful backups and restores that I lost count.
The way I use SP now, forces me to restore each image and each restore proves my backup was reliable without needing verification.
I also asked myself "When do I really need SP to restore my system partition ?"
Almost NEVER, because FDISR cleans and repairs my computer every day, and more than once. I reboot at least 3 times a day; that means 1000 times a year without a problem.
How would you VERIFY this boot-to-restore, while FDISR is completely changing my system partition during that time? I don't even see it, because it happens while I'm looking at the Windows Welcome screen during reboot.
The only thing I can do is reading the FDISR Activity Log AFTER the boot-to-restore, which is already too late.
FDISR is so robust and reliable that it doesn't need verification and I have to trust FDISR, otherwise I better ditch it.
If I can't even verify FDISR, why would I verify SP?
SP works even in a much better and safer environment than FDISR and is therefore even more reliable, because nothing can go wrong with a Recovery CD, external harddisk and a tested restorable image.
My conclusion is : verification is more a personal matter, than a technical matter.
If you don't trust your hardware, verify it with the appropriate tools.
If you don't trust your software, verify as much as possible.
I trust my software, because it has done its job very well numerous times, and that is enough for me.
I'm not going to waste my time and my computer time on something that might never happen.
The risk is as good as nil, and when it goes wrong, I consider that bad luck, nothing more.
And I have more than just my Recovery CD and external harddisk to get back in business. It only will take a little longer.
ANYTHING can go wrong in hardware and software, and the older your hardware is, the greater the risk.
That is NORMAL, nothing lasts forever in life.
A good well thought out personal opinion about the realm of imaging and related issues.
My thinking is along the same lines.
I would add one point: since both software and hardware can go bad, it's advisable to keep images on at least 2 different external/internal disks, and also to save multiple copies of your favorite software; it sometimes happens that a vendor goes out of business.
Good point.
I have two internal drives in my computer, with one just for backups. This protects against hard disk failure for the main drive. However, it doesn't protect against malware or errant software that could affect both hard disks, and also doesn't protect against local disasters (lightning, flood, theft, etc) that could affect both drives. In addition, failure issues with the backup drive could result in loss of backups. Thus, I also periodically copy the backups to DVD, making two DVD copies of everything for redundancy. One DVD copy stays at my place and the other DVD copy goes to an alternate location. I always verify the backup image on the hard disk before copying the files to DVD, but I don't verify every time I backup. I tend to verify when enough effort has gone into the work being backed up. I never verify the DVD backups with the backup program, because having 2 copies of every DVD provides good redundancy (but I do of course verify each DVD with the burning program as it's made). I also keep at least one previous DVD backup at both locations, in case there is a problem with the most recent backup, and also to be able to retrieve older files from an older backup if necessary. Using incremental backup lessens the burden of copying backup files to DVD. I also occasionally make a backup of the system partition with an alternate (free) program, DriveImage XML, run from Ultimate Boot CD for Windows, in order to not put all my "backup eggs" in one basket, so to speak.
I also keep copies of all my smaller-sized software setup programs on DVD-RAM, as a record of what software I have, and also to have the same version of the software available if needed for reinstallation. I also recently started keeping the prior version of each setup program, in case I need to revert back to the prior version if the current version of a program is problematic.
I also don't do a full restore of my system partition just for testing, except when I first installed the backup program. Because of that, and also because I've had to resort to a full system restore a few times out of necessity, I have some confidence that my backup program works well on my computer. For testing purposes I do occasionally restore a few random files from my separate data partition backup to a different location.
If you update to a newer version of a backup program, IMHO you need to consider it untested. I've stuck to an older version of my main backup program that's worked for me without fail. Newer isn't always better, IMHO, because nontrivial software can be expected to have bugs.
That is very possible, and that goes for any software.
It wouldn't be the first time that software stopped working properly after an upgrade. I have experienced this myself several times.
I always keep my older installation file(s) when I do an upgrade, certainly with software like IB and ISR.
After all, software is written by programmers, and they make mistakes like anybody else.
My 2 cents...
Restorations are the only true way to know if an image is good. That said, verification is useful as an alternative. But it's not a restoration.
I own (and like) BOTH Image for DOS and ShadowProtect. I do not verify images made with either one. They have both proven to me that they are rock solid reliable. I have never had a failed restoration. So I no longer bother with verification...
Do memory tests and hard disk tests. If both pass, your reliable image backup/restore will be fine without verification; if NOT, your image backup/restore can't be trusted either, due to the memory/hard disk problems. In that case, fix your hardware problems.
In other words: do what has to be done first, don't put the cart before the horse, and get back on the horse.
A restoration overwriting the existing source data is a good idea to try at least once, but I'd be a bit cautious about doing frequent restoration overwrites of source data simply for testing purposes. Here's why: during backup, 1) data was read from the source, and 2) written to the backup. During the restore, 3) data was read from the backup, and 4) written over the source. Corruption could happen during any of these 4 steps. If I understand correctly, verification in most backup programs aims to provide detection of corruption in steps 2 and 3. However, for most backup programs, if my understanding is correct, verification provides no detection of corruption in steps 1 and 4. Thus, things are not perfect even if you use verification, and even worse without it. My conclusion is that frequent restoration overwriting of source data may expose you to a higher risk of source data corruption.
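One way to cover steps 1 and 4, which most backup programs' built-in verification misses, is to record checksums of the source files yourself before the backup and compare them against the restored copies afterwards. A minimal sketch, assuming the directory paths in the comments are placeholders:

```python
import hashlib
import os

def sha256_of(path, chunk_size=1 << 20):
    """Chunked SHA-256 so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """Map each file under root to its checksum, keyed by relative path."""
    sums = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            sums[os.path.relpath(full, root)] = sha256_of(full)
    return sums

def compare(before, after):
    """Return relative paths whose contents differ or are missing."""
    return sorted(k for k in before if before[k] != after.get(k))

# Before the backup:  manifest = snapshot(r"C:\data")
# After the restore:  bad = compare(manifest, snapshot(r"C:\data"))
# An empty list means every file survived the full round trip.
```

Because the first snapshot is taken from the live source and the second from the restored data, a clean comparison exercises all four steps, not just the middle two.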
This would indicate to me a degree of lack of confidence in the backup software. I've probably done over 500 restores on the drives I've imaged with ShadowProtect. No issues at all.
All the more, reading this thread (and it shows up in other threads as well), I get the feeling that people who argue against testing their images by an immediate restore are driven by their own innate fear of doing so!
I have no English substitute, but in Dutch we have a saying, "Als een kat om de hete brij draaien" (literally, to circle around the hot porridge like a cat, i.e. to beat around the bush), that explains it all.
"arguments against testing......own innate fear" very good. Perhaps we should start a thread on the necessity of watching the little blocks move when defragging ? Remember hardware might fail and software conflict and just because it has worked 10,000 times before doesn't mean that it will work this time - unless you watch it might fail !!!!!!!
Verification is something that should be done when setting up a new system or making changes to an existing one (a new external drive, say). After that, although it does no harm, it cannot be described as essential. Perhaps those who support the verification view could report when one last failed? What is the frequency of failure? Did all or any of the failed images work? And so on.
I've tried just about all the imaging programs and I've restored several thousand images. Created, a lot more than that. I can honestly say I've never had an image fail to create, verify or restore. Even with Acronis True Image. You can put your own interpretation on those figures.
Thanks, Brian K. I'm not sure whether I should be surprised or not, and I'm certainly not sure what interpretation I should put on your figures. "Even with Acronis True Image" - a very much over-criticized program, in my view. Yes, recent versions are bloated and many of the silly options just don't work, but since version 6 I have found Acronis to be both stable and fast (faster than SP3 on very small drives). It has never failed for me, nor has SP.
So the only interpretation I can put on your figures is: you do things properly (the vast majority of complaints with all programs come from people who have not read "the manual").
The imaging software isn't clever at all at detecting a bad image. Change 1 byte in the image and the software won't know that a byte has changed or been corrupted. There is no error checking during the backup; it's only when the restore gets to a certain point that it knows the image is faulty. Verify is sophisticated error checking: it does all the checking the software never does during the backup, which is why certain imaging software can create an image so fast. The imaging software doesn't know your hard drive is failing or your memory is going bad, and there are other variables that can result in a corrupt image. It isn't a big deal for software with auto-verify, since you're backing up in Windows as a background task and can get on with other things.
I have never had an image that didn't pass verification with my current backup program version, to the best of my recollection, which could be faulty. Having acknowledged that, let's take a look at some research related to this area.
Q) What are the causes of data loss?
A) According to data recovery firm Kroll Ontrack:
Hardware or system problem - 56%
Human error - 26%
Software corruption or problem - 9%
Computer viruses - 4%
Disaster - 1 to 2%
Q) What percentage of disk drives have problems with sector errors that are undetected until they are accessed?
A) The paper 'An Analysis of Latent Sector Errors in Disk Drives' analyzes data collected from production storage systems over 32 months across 1.53 million disks. The term 'latent sector errors' refers to sector errors that are undetected until they are accessed and that result in loss of all data in the sector.
About 1 in 12 of consumer-class disk drives were affected by latent sector errors during the 32 month period in the study. This is something you may wish to consider if you save your backups to only one hard disk. The way that latent sector errors are discovered is by accessing the sectors. One way to do this is to verify your backups, since the verification causes the backup sectors to be accessed. An additional advantage of verifying your backups is that the hard drive itself will transparently relocate data from troubled sectors that have correctable errors to spare sectors, before the sector reaches the point of becoming unreadable (i.e. of having a latent sector error).
If you use programs such as QuickPar, ICE ECC, or Dvdisaster to create redundant information for your backup files, you may be able to recover your backup files even if latent sector errors affect them.
Q) I ran a utility program that accessed all my backup files, and no read errors were reported, so I have nothing to worry about, right?
A) Wrong. The paper 'An Analysis of Data Corruption in the Storage Stack' analyzes data corruption in production storage systems containing a total of 1.53 million disk drives over a period of 41 months.
We see that over the 41 month period, about 1 in 115 consumer-grade hard disks suffered from checksum silent data corruption errors. The authors also considered two other types of silent data corruption errors, but focused on checksum errors because they found that checksum corruptions occur "much more frequently" than the other types.
CERN, the world’s largest particle physics lab, also did an analysis of data corruption in their data center. First they ran a test program:
Note that one in 30 of CERN's machines tested by the program exhibited silent data corruption in the 5 week period.
CERN also tested for corruption of user data:
A way to detect silent data corruption errors in backup data is to use checksums, which is exactly what your backup program verification does.
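As a toy illustration of that idea (not any particular backup program's actual format; the block size and file names are arbitrary choices), a tool can store a checksum per fixed-size block when the archive is written, then re-read and recompute them later. Any mismatch flags silent corruption in that block:

```python
import hashlib

BLOCK = 64 * 1024  # 64 KiB blocks; an arbitrary size for this sketch

def block_checksums(path):
    """Checksum each fixed-size block, as a backup tool might at write time."""
    sums = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            sums.append(hashlib.sha256(block).hexdigest())
    return sums

def verify(path, recorded):
    """Re-read the file and report block indexes that no longer match."""
    current = block_checksums(path)
    if len(current) != len(recorded):
        return ["length-mismatch"]
    return [i for i, (a, b) in enumerate(zip(recorded, current)) if a != b]

# recorded = block_checksums("backup.img")   # at backup time
# bad = verify("backup.img", recorded)       # at verify time; [] means clean
```

Per-block checksums also tell you *where* the damage is, which is what lets parity tools like QuickPar repair a file rather than just condemn it.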
Q) How are the various elements of a storage system affected by partial disk failures, the disk failure types that are discussed above?
A) According to Dr. Lakshmi Narayanan Bairavasundaram:
In conclusion, I stand by the advice I gave in my previous posts. I also recommend considering using QuickPar, ICE ECC, or Dvdisaster to protect your backup files if you have only one copy of the files in a backup set.
I like DVDDisaster, but who would have only one copy in a backup set? I would trade DVDDisaster and verification for extra backups every time.