I have been trying the CrashPlan backup client for backups to a NAS and a USB drive. Overall I was happy with how the software works until I triggered a "deep maintenance" cycle. The problem is that deep maintenance, specifically the "deep pruning" phase, takes forever, even though CPU and network usage both seem very low while it is running. For my testing I was backing up to a mapped network drive from a machine with a 2.66 GHz Core2Quad. Actual backup speed was very good; it is the maintenance performance that I find unacceptable.

As a test, I created a fresh backup archive about 20 GB in total size. Immediately after creating this archive, I used the "Compact" option in the GUI to trigger deep maintenance. It took about an hour, even though this archive should have had no previous versions that needed pruning. There were long periods when the progress counter would increment by only 0.1%, and during this time both CPU and network usage were close to zero. When I increased the backup archive size to 200+ GB, deep maintenance took several hours, again with long periods during which the process didn't actually seem to be doing anything.

According to the documentation (http://support.code42.com/Administrator/3.6_And_4.0/Monitoring_And_Managing/Archive_Maintenance), the difference between "deep" and "shallow" maintenance is that deep maintenance verifies block checksums and compacts archives, whereas shallow maintenance only checks for file corruption. By default, shallow maintenance runs every 7 days and deep maintenance runs every 28 days, though these intervals can be increased.

My question is: if I am not backing up to the cloud, what is the risk to my backups if the deep maintenance interval is increased to the point that it is effectively disabled? Would shallow maintenance be enough to catch potential file corruption?
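For context, here is a minimal Python sketch of the difference between the two checks as I understand it from the documentation. The function names, block size, and hash choice are purely illustrative and are not CrashPlan's actual implementation; the point is only that a per-block verification has to read every byte of the archive, while a file-level check can be much cheaper:

```python
# Illustrative sketch only -- not CrashPlan's real code or file format.
import hashlib
import os

BLOCK_SIZE = 64 * 1024  # assumed block size, for illustration


def deep_verify(archive_path, expected_block_hashes):
    """Deep maintenance (as I understand it): re-hash every data block
    and compare against stored checksums. This must read the entire
    archive, so it is I/O-bound on large archives."""
    with open(archive_path, "rb") as f:
        for i, expected in enumerate(expected_block_hashes):
            block = f.read(BLOCK_SIZE)
            if hashlib.sha256(block).hexdigest() != expected:
                print(f"block {i} of {archive_path} is corrupt")


def shallow_verify(archive_path, expected_size):
    """Shallow maintenance (as I understand it): file-level checks only,
    e.g. that the archive file exists and has the expected size. Cheap,
    but it would miss silently flipped bits inside a block."""
    if (not os.path.exists(archive_path)
            or os.path.getsize(archive_path) != expected_size):
        print(f"{archive_path} looks corrupt")
```

If that mental model is right, it would explain why I am reluctant to disable deep maintenance entirely: the shallow pass might never notice bit rot inside otherwise intact archive files.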