Best SSD defraggers?

Discussion in 'backup, imaging & disk mgmt' started by pajenn, Jun 13, 2012.

Thread Status:
Not open for further replies.
  1. pajenn

    pajenn Registered Member

    Joined:
    Oct 26, 2009
    Posts:
    930
    I'm not talking about traditional defragging, but rather the SSD-specific defragmentation utilities or algorithms that many defraggers have been introducing recently (PerfectDisk, Auslogics, etc.). I think they mainly do free-space consolidation.

    For my old HDD I used MyDefrag, which has options for 'Consolidate free space' and 'Flash memory disks.' The author recommends using the 'Flash memory disks' option for SSDs, but only rarely (once a month). The algorithm defrags the files and consolidates free space by moving everything to the beginning of the disk, but it moves as little data as possible to minimize the erase-write cycles used. Note that it does no file-placement optimization as it would on a regular HDD. My question is whether this is as good as it gets for SSDs, or do some of the other defraggers offer something more? What's the best SSD defragger?
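    To illustrate the "move as little data as possible" idea (a hypothetical sketch, not MyDefrag's actual algorithm): consolidating free space to the front of the disk reduces to relocating only the blocks that sit past the compacted boundary, since everything already inside that boundary can stay put.

```python
def consolidate(disk):
    """Compact used blocks to the front of the disk, moving as few
    blocks as possible. `disk` is a list: None = free slot, anything
    else = a used block. Returns the number of blocks moved."""
    used = sum(1 for b in disk if b is not None)
    # Blocks already inside the first `used` slots never move;
    # only blocks beyond that boundary are relocated into gaps.
    gaps = [i for i in range(used) if disk[i] is None]
    moves = 0
    for i in range(used, len(disk)):
        if disk[i] is not None:
            disk[gaps.pop(0)] = disk[i]
            disk[i] = None
            moves += 1
    return moves

layout = ['a', None, 'b', None, 'c', 'a', None, None]
moved = consolidate(layout)   # only 2 of the 4 used blocks move
```

    A naive compactor that slid every block left would rewrite almost the whole used area; this version touches only what crossed the boundary, which is the point of minimizing erase-write cycles.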
     
  2. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    I've never seen anything showing that those have any effect at all on performance. If anyone can link me to an independent benchmark (i.e., not from Auslogics, PD, etc.) showing improvements, I'd appreciate it.
     
  3. Triple Helix

    Triple Helix Specialist

    Joined:
    Nov 20, 2004
    Posts:
    13,362
    Location:
    Ontario, Canada
  4. TheRollbackFrog

    TheRollbackFrog Imaging Specialist

    Joined:
    Mar 1, 2011
    Posts:
    5,083
    Location:
    The Pond - USA
    Don't even think about it. If your SSD is really old (a very early release), something like what's described above may help a little. Almost all SSDs today, especially the ones that use a SandForce internal controller, have an ever-improving garbage collection system that removes any need to be careful with your writes.

    The problem with HyperFast and other similar technologies is that they only have access to the Windows file allocation tables and disk maps. This information is almost useless with current SSDs, because they manage their own internal space maps and usage tables, which are not accessible to outside processes. They have their own FAT, in effect; it's required for them to optimally manage their internal writes (especially the internal 7-10% "spare" space they use for the garbage collection function).
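    A toy model of why OS-level layout information is useless here (illustrative names, not any vendor's design): the controller's flash translation layer remaps every write, so the logical order the OS sees says nothing about where data physically lives.

```python
class ToyFTL:
    """Minimal flash translation layer. The OS addresses logical
    blocks (LBAs); the controller silently maps each one to
    whatever physical page it prefers."""
    def __init__(self, n_pages):
        self.lba_to_phys = {}
        self.free_pages = list(range(n_pages))

    def write(self, lba):
        phys = self.free_pages.pop(0)       # controller's choice, not the OS's
        old = self.lba_to_phys.get(lba)
        if old is not None:
            self.free_pages.append(old)     # stale page, reclaimed by GC later
        self.lba_to_phys[lba] = phys

ftl = ToyFTL(8)
ftl.write(0)
ftl.write(1)
ftl.write(0)   # a "defragger" rewriting LBA 0 "in place"...
# ...actually lands on a brand-new physical page the OS cannot see.
```

    So an external defragger shuffling LBAs just generates more writes; it cannot influence the physical placement the controller chooses.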
     
  5. treehouse786

    treehouse786 Registered Member

    Joined:
    Jun 6, 2010
    Posts:
    1,411
    Location:
    Lancashire
    I will go one step further than Hungry Man and say that all SSD defraggers/optimizers are snake oil.

    According to an Intel SSD engineer, the firmware takes care of data allocation and so on. Software solutions (not from the SSD manufacturer) cannot benefit an SSD in any way unless they have access to all the 'blueprints' of the SSD in question, which no one but the manufacturer has.

    Don't know about you guys, but I would rather trust that Intel engineer than a company whose best interest is for me to believe they can write software better than the SSD manufacturer can write firmware.

    My own personal opinion is that all SSD optimizers/defraggers actually hurt the SSD rather than help it.
     
  6. napoleon1815

    napoleon1815 Registered Member

    Joined:
    Sep 9, 2010
    Posts:
    734
    Agreed. We've actually tested this where I work. Do not waste money on any defrag program for an SSD.
     
  7. Keatah

    Keatah Registered Member

    Joined:
    Jan 13, 2011
    Posts:
    1,029
    First of all, remember that flash memory is completely alien to the file systems in use today. To make them into a nice polished consumer product, SSDs require sophisticated controllers that didn't exist when they first hit the market. When companies and engineers received a steady stream of complaints about lackluster write performance, they upped their game and got to work for real.

    A cutting-edge modern SSD manages its free space in its own way. It is a black box and you should treat it as such. A modern SSD sees any defrag attempt simply as a compulsive user copying and deleting tons of data for fun. All you are doing is rearranging the internal lookup tables, the table of contents if you will, and that table is a huge mess right from day one.

    That, and you're using up write cycles in a chip that may only handle 2,800 write/erase operations to begin with!

    In fact, all this is so complex that some SSDs now use multi-core processors to manage themselves and their free space. To even *think* about trying to outguess that with a one-shot defrag every month is utterly ridiculous. It would be like trying to unscramble a Rubik's Cube with the color labels on the inside; that's how a defragger program sees the drive. A black box for sure. Furthermore, do you believe any SSD defragger publisher has all the inside trade secrets of the controller and its firmware? Do you think they're going to sell it to you for $29.95? If you do, I have a nuclear-powered shuttlecraft bound for a diamond asteroid. Want a ride? On the way there we can discuss DRAM memory optimizers from the Windows 3.1 and 95 era.

    Now, understand that the very first SSDs would experience a massive slowdown once capacity was reached. The same thing would happen once the amount of data written and erased equaled the capacity of the drive. There was no support for free-space consolidation anywhere. Early drives were pretty dumb (like any first run of a new technology); then we got support from the OS in the form of TRIM to help make free space available. And now drives are shipping with multi-core controllers to manage all this in the background. One model is even aware of the NTFS metafiles: it reads them and manages space accordingly. That is nice.
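    The benefit of TRIM can be shown with a toy garbage-collection example (an illustrative sketch, not real firmware): a flash block can only be erased whole, so live pages must first be copied elsewhere, and without TRIM the drive has no idea which OS-deleted pages it can safely skip.

```python
def pages_to_copy(block_pages, os_deleted, trim_enabled):
    """Pages the controller must copy out before erasing a block.
    Without TRIM, pages deleted by the OS still look valid to the
    drive and get copied anyway, wasting write cycles."""
    if trim_enabled:
        return [p for p in block_pages if p not in os_deleted]
    return list(block_pages)   # drive assumes every written page is live

block = ['f1.dat', 'f2.dat', 'f3.dat', 'f4.dat']
deleted = {'f2.dat', 'f3.dat'}    # files the OS has deleted
without_trim = pages_to_copy(block, deleted, trim_enabled=False)  # 4 copies
with_trim = pages_to_copy(block, deleted, trim_enabled=True)      # 2 copies
```

    Halving the copied pages in this toy case is exactly the kind of internal write amplification TRIM was introduced to cut down.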

    Only consider using a free-space consolidator on older SSDs, and only if you have to. You will know you have an old SSD when it becomes slower than a mechanical spinner. If you got stuck with one of those, your best bet is to suck up the $499.95 "investment" and get a newer drive.

    If you are defragging a modern-day SSD, you deserve this!

    ~ Copyrighted Image Removed ~
     
    Last edited by a moderator: Jun 20, 2012
  8. pajenn

    pajenn Registered Member

    Joined:
    Oct 26, 2009
    Posts:
    930
    Thanks for the feedback.

    My SSD is a 256 GB Samsung 830, which I believe is a modern SSD with TRIM support. It got good reviews, was cheap for an SSD, and my laptop is also a Samsung, so that's why I bought it. I have it split into two partitions: 120 GB for the OS (Windows 7 64-bit, currently 75 GB in use) and 121 GB for data. My original idea was to make backups of the OS partition to the data partition, which I thought would be super fast, but for some reason it's no faster than making those backups to my second internal HDD. A differential backup takes about 7 minutes, so it's still fast.

    In any case, I've decided not to defrag based on the comments here, although from reading a separate thread by Isso I'm not ruling it out completely.

    As for registry defrags, I assume those are still considered good practice even on an SSD?
     
  9. Keatah

    Keatah Registered Member

    Joined:
    Jan 13, 2011
    Posts:
    1,029
    Hold up, I have some comments, but I can't make them until tonight.
     
  10. Keatah

    Keatah Registered Member

    Joined:
    Jan 13, 2011
    Posts:
    1,029
    A consumer shouldn't ever have to worry about the write/erase life of their SSD, nor wonder why it slows down. They should not have to run utilities and fuss over this. This tells me SSDs are incomplete products, still in the beta-testing stage. Purchasers of these devices are effectively early adopters and experience all the pitfalls that come with that.

    With a spinner there is no write/erase lifespan, other than the mechanics wearing out. And the whole computing industry, experts and novices alike, has known about defragging requirements since the early DOS days.

    People understand HDD spinners reasonably well. SSDs are still a mystery. We know why HDDs become slow (or are slow to begin with), whereas SSDs are touted as the latest and greatest ultra-fast storage devices. But how much fiddling and hemming and hawing is required to make them live up to that claim?

    Whatever solutions present themselves, they should be 100% contained within the black-box device. Until the problem is solved (by whatever method, I really don't care which), all I can say is sorry, SSDs are not really ready for prime time. We're getting there. Perhaps by next year.

    The Samsung 830 is about as modern a consumer-level drive as one can get. I find it quite fascinating that it is aware of NTFS metafiles and optimizes accordingly.

    I read through the other thread by Isso and agree more or less with what was said. SSDs and free-space management are still rough spots on the way to the ideal black-box storage device. The industry is only now beginning rapid changes in that direction.

    The SSD is still in diapers, and what is in vogue today will be outmoded in three months. A number of "features", or changes if you will, are still not being implemented, and one trend is actually going backwards.

    0- Something manufacturers won't tell you: if you get a 256 GB drive, you should really only use perhaps 60-70% of its capacity, especially if your cheap-o SSD has no over-provisioning. This helps accommodate incomplete GC and TRIM operations. Maybe by next year this won't be an issue, if manufacturers get their act together.

    1- The free-space fragmentation problem. For a third-party "defrag" utility to be effective, it needs knowledge of how the controller works and access to the controller's block-mapping logic. I'm not aware of any product that can do this.

    2- The short life of NAND. The amount of gymnastics a controller needs to go through to wear-level storage elements rated for fewer than 3,000 P/E cycles is ridiculous. The industry is going as low as 2,800 cycles and wouldn't mind doing 2,000 if they could get away with it. This downward trend is the big negative, but it is hidden from the consumer.

    2a- As far as I'm concerned, just make them with a 1,000-cycle lifespan and over-provision the drive by 4x! This development trend will reverse itself, but not before it goes lower and problems start showing up.

    3- A proper consumer SSD *absolutely must be* a black box to the consumer. There should be no utilities to run, other than perhaps an initial setup. The Samsung 830 has three ARM cores going; let's add a fourth and dedicate it to properly managing free-space consolidation. A properly engineered drive will manage itself, completely, 100%, with no external utilities to run. Put it all in firmware.

    4- The granularity and reliability of flash need to improve. Ideally you could erase one bit at a time. You'd be appalled by the amount of internally generated errors! Might as well just reassign the ECC bits to data at a 50/50 ratio and call it a day! This goes for solids and spinners alike, though spinners use PRML, an analog technique for storing data, and the ECC is different there.

    5- Perhaps the industry will turn away from the crappy multi-level-cell NAND and come up with something better.




    Now: backups tend to be sequential, so the power of the SSD is more or less limited by the SATA interface. Not only that, when backing up to another partition on the same drive, the SSD is doing double duty and not streaming as fast as if it were acting purely as a read-only source device. When you read from the SSD and write to a spinner, the spinner's performance appears to improve: the whole system tends to read from one device and write to the other, not reversing direction for every file. That's it in a nutshell. Perhaps someone can chime in with other comments if this is an important sub-topic.


    Regarding backups, it's good practice to back up to a separate device not attached to the system; that way you are better protected. But "backing up" to an in-use, in-system drive is all right if its purpose is to be a ready reference and you have other copies elsewhere.
     
  11. Keatah

    Keatah Registered Member

    Joined:
    Jan 13, 2011
    Posts:
    1,029
    Registry defrags and cleaners and such: in the old days of 386s/486s, perhaps. With today's i7 chips, you won't be able to see any improvement.

    I don't use any registry cleaners or defraggers unless there is a very specific problem that needs attention, or to help ensure a complete uninstall of something.
     
  12. 2YsUR

    2YsUR Registered Member

    Joined:
    Jun 3, 2012
    Posts:
    61
    Love it.:thumb:
     