Help me make conclusions for my defrag tool review...

Discussion in 'other software & services' started by NGRhodes, Mar 18, 2008.

Thread Status:
Not open for further replies.
  1. NGRhodes

    NGRhodes Registered Member

    Jun 23, 2003
    I have run a series of tests that have taken a LONG time to do. I hope to start writing up my little paper tonight, but I'm not really sure what to put as my conclusions.

    I concentrated on refragmentation rates and performance loss.
    Drives were defragmented beforehand using DEFAULT settings only (apart from the first run, where no defragging takes place).
    The command line was used for the defragmentation timings (I also measured CPU usage), so that GUI performance did not skew the results.
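    For anyone reproducing this, timing a tool's command-line run can be as simple as wrapping the process in a wall-clock timer. A minimal sketch in Python — the defrag command shown in the comment is a hypothetical placeholder (the XP built-in defragmenter's CLI), not any of the tools actually tested:

    ```python
    import subprocess
    import time

    def time_command(cmd):
        """Run a command and return the wall-clock seconds it took."""
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    # Hypothetical usage -- substitute your defragmenter's own CLI:
    # elapsed = time_command(["defrag", "C:", "-f"])
    # print(f"Time to defrag: {elapsed / 60:.0f}:{elapsed % 60:04.1f}")
    ```

    Because the timer brackets only the child process, none of the GUI front-end's overhead is counted, which matches the approach described above.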

    I tested 4 defrag tools (as well as a no-defrag baseline).
    I installed 3 apps and updated 2 folders from zips and a website from source control, as these were examples of real-life operations I had noted over the past month that were 100% repeatable.

    I benchmarked the data files only (2 zips and the website) - I figured prefetch might skew executable benchmarks.
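    The read benchmark itself can be sketched like this (an assumption on my part, not the exact harness used in the thread). Note that unless the OS file cache is cold for each run, later reads no longer reflect on-disk layout:

    ```python
    import time

    def benchmark_read(path, chunk=1024 * 1024):
        """Sequentially read a file and return elapsed wall-clock seconds.

        A fragmented file forces extra head seeks, so a cold-cache read
        should take measurably longer than reading a contiguous copy.
        """
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(chunk):
                pass
        return time.perf_counter() - start
    ```

    Averaging several runs (as done in the thread) smooths out background noise, but rebooting or flushing the cache between runs keeps the measurement tied to fragmentation rather than RAM.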

    Before-and-after analysis by 2 of the tools concurs with my findings.

    What I would like help with is making some conclusions from the results below:

    Which would you pick and why?
    There is a trend between defrag time and refragmentation - which would you prefer: quicker defrags but faster refragmentation, or longer defrags and slower refragmentation?
    Any comments on performance loss and fragment increase (eg significance to machine performance)?

    Any other comments (eg suggestions for further testing)?

    Here are the results:

    None (note: this system was left for approx 1 month of daily use without defragging, and provided the disk image from which the other tools were tested).

    Before: 22090 fragments, 0.88s
    After: 23605 fragments, 1.0s

    Tool A

    Before: 2 fragments, 0.85s
    After: 5098 fragments, 0.93s
    Time to defrag: 3:38

    Tool B

    Before: 2 fragments, 0.85s
    After: 4739 fragments, 0.90s
    Time to defrag: 3:19

    Tool C

    Before: 2 fragments, 0.87s
    After: 2232 fragments, 0.87s
    Time to defrag: 10:27

    Tool D

    Before: 2 fragments, 0.85s
    After: 2247 fragments, 0.85s
    Time to defrag: 11:25
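    To make the trade-off concrete, here are the thread's own numbers tabulated in a small Python snippet: fragments gained back after the repeatable set of operations, versus minutes spent defragging (all figures copied from the results above):

    ```python
    # (tool, fragments before, fragments after, defrag time in minutes)
    results = [
        ("Tool A", 2, 5098, 3 + 38 / 60),
        ("Tool B", 2, 4739, 3 + 19 / 60),
        ("Tool C", 2, 2232, 10 + 27 / 60),
        ("Tool D", 2, 2247, 11 + 25 / 60),
    ]

    for tool, before, after, minutes in results:
        gained = after - before
        print(f"{tool}: +{gained} fragments, {minutes:.1f} min to defrag")
    ```

    The pattern is clear: the ~3.5-minute tools (A, B) let roughly 4700-5100 fragments back in, while the ~11-minute tools (C, D) held that to about 2200-2250 — roughly half the refragmentation for three times the defrag time.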
    Last edited: Mar 18, 2008
  2. Mrkvonic

    Mrkvonic Linux Systems Expert

    May 9, 2005

    I'd go with either C or D.

    Now, what you did not tell us is - what filesystem did you check this on...
    I believe you'll get a noticeable difference if you test this on a system with a non-journaled versus a journaled filesystem, or with a different inode size...

    Furthermore, did you perform the defrag in vivo, or was the filesystem unmounted? Lastly, what platform did you run on, how many processes were running... and what were they... some processes might interfere with the defragmentation, especially if there's some sort of indexing, caching etc...

    Sorry for asking too many questions.

    If performance / time is not an issue, I'd go for a slower, more thorough job. If you can afford it, run the defrag with the OS inactive etc...

  3. NGRhodes

    NGRhodes Registered Member

    Jun 23, 2003

    I deliberately tried to provide minimum info... no worries with the questions, it's what I hoped for - it helps me think about things, after spending a few days running the tests (did I mention the benchmarks were run 14 times for before and after, 5 times each... yes, 140 runs!!!)

    It was NTFS under XP SP2, fully patched.

    Services = default + IIS.

    All the defrag tools defragged the same set of files (all but one file left in 2 fragments). I didn't bother checking which process was locking that file, as it's a snapshot of a real system and something I would encounter outside testing, but it's consistent across all tests.