Core i5 and Core i7: is there a difference in a single PC for home use?

Discussion in 'hardware' started by blacknight, Sep 16, 2015.

Thread Status:
Not open for further replies.
  1. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    @wshrugged - thanks for that. Good read.
    It was produced before W7 was released to the public, but it does refer to W7. And note it says,
     
  2. Kobayashi maru

    Kobayashi maru Registered Member

    Joined:
    Nov 7, 2009
    Posts:
    124
    Location:
    Drivin' all night my hands wet on the wheel....
    You always need faster hardware for Windows. Unfortunately, that's just the way it is. I use W7 on an i3 and it's anything but snappy. Ten-year-old XP on a now-aging Core 2 Duo walks all over it.

    Win 7's defrag is still not optimal, and NTFS placement is poor. Run a check-only pass with any decent defragger and you'll see a total mess of immovable files, some placed right at the end of the partition.
    Deliberately leaving free space early in the layout, so the OS can move files closer to where all the main files are, reaps no benefit; it all seems rather haphazard. It's a dire situation.
     
  3. RJK3

    RJK3 Registered Member

    Joined:
    Apr 4, 2011
    Posts:
    862
    Well, you can choose to read it in the way he intended, or you can continue to be confused by a misinterpretation. At a certain point it becomes a deliberate choice.

    As I've already stated, I've set it manually as a dynamic pagefile with a low minimum/initial setting and a much higher maximum setting. I see no purpose in permanently wasting 4-6 GB allocated to a pagefile when there's no measurable performance difference.

    I set it to allow for:
    1. the creation of crash dumps (not really an issue for me);
    2. any old programs that may be hardcoded to use a pagefile;
    3. to grow as necessary if I do a rare task that exceeds my RAM.

    If a larger pagefile were routinely needed, then logically Windows would have grown the pagefile to accommodate this - and so far I've not seen that happen. The pagefile would remain expanded until the next reboot, so excessive paging would be easy to spot in retrospect. Even loading every application on this machine doesn't lead to a corresponding rise in the initial pagefile size; ergo, the Windows-recommended 6 GB pagefile isn't necessary for me.
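The grow-then-check reasoning above can be sketched as a toy model (a Python illustration with made-up numbers, not anything Windows actually exposes):

```python
def pagefile_after_session(initial_mb, maximum_mb, peak_demand_mb):
    """Toy model of a dynamic pagefile: it stays at its initial size
    unless demand exceeds it, then grows up to the configured maximum."""
    if peak_demand_mb <= initial_mb:
        return initial_mb                      # never needed to grow
    return min(peak_demand_mb, maximum_mb)     # grew, capped at the maximum

# If the pagefile is still at its initial size after a heavy session,
# the larger Windows-recommended allocation was never actually needed.
print(pagefile_after_session(initial_mb=512, maximum_mb=8192,
                             peak_demand_mb=300))   # 512: no growth
```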

    You have no clue what my needs are, nor my ability to manage space. Giving such general and unnecessary advice isn't constructive. If it came to it, I'm certainly not going to spend money on a larger SSD just to accommodate your superstitious views on pagefiles.

    There is no need to have a pagefile permanently allocated 4-6 GB worth of disk space. If there is a justification for this, then by all means make the argument and I'll give it due consideration. So far you've only really made an appeal to authority, a logical fallacy I automatically distrust.
     
  4. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    By you, I meant the crowd.

    My question would be, where is the documented justification to change the default settings in the first place?

    Windows performance is a top priority at Microsoft. Why? Because users, the IT press, bloggers and everyone else will relentlessly bash Microsoft to no end if they do something that bogs down performance - even in the name of increased security! Windows knows how much RAM is installed and how much is being used. It would be a piece of cake to code Windows to minimize or even disable the PF if it would improve performance. But they don't. That tells me to leave it alone! And I've been tweaking computers for better performance since before there was Windows!

    That's my justification to leave it. What is your justification to change the defaults? What says changing from the defaults improves performance?

    Are you so low on free disk space that 6 GB dedicated to the PF is actually affecting your computer's performance? Because being critically low on disk space is the only justification I know of for setting a small fixed PF on a modern version of Windows. But even then, that is a Band-Aid patch. The proper fix is to uninstall unneeded programs, delete clutter, and move large user files to a different drive to free up space, or buy more space.

    The exception would be if you have two physical drives (not one drive partitioned) on the computer. Then a small fixed page file on the boot drive for dump files makes sense, as long as you then put a system managed page file on the secondary drive. But if the primary is a SSD and the secondary a HD, leave the system managed on the SSD.

    W7 is 6 years old too. But it is a fallacy to think one defragger is better than another because this one achieves a more "optimal" defrag than that one. Why? Because almost the second you start to use the computer after defragging, the fragmentation process starts all over again. Windows system, temporary, and user files are immediately opened and, if modified by just one byte, stored in a different location; the previous location is marked as free. Then other files (or fragments of files) are stored in those now-free locations.

    With today's large, fast, big-buffered, and affordable drives, and with Windows' vastly improved management capabilities, it is no longer necessary to consolidate free disk space. It is important that file segments be together, but files can be scattered across the disk. Any excess "seek" time is only apparent when locating the first file segment, and with today's drives that's a couple of milliseconds difference, worst case, depending on where that first segment is. Once the first segment is located, there's no difference in load times from there.
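The claim above can be put into rough numbers (illustrative figures for a typical 7200 rpm drive, not a measurement): where a file starts only shifts the first seek, while each additional fragment adds roughly a whole extra seek.

```python
# Illustrative HDD timing figures for a typical 7200 rpm drive (assumed).
AVG_SEEK_MS = 9.0                       # average seek time
HALF_ROTATION_MS = 60_000 / 7200 / 2    # rotational latency, ~4.17 ms

def access_time_ms(fragments):
    """First segment costs one seek + latency wherever it sits;
    every additional fragment costs roughly one more."""
    return fragments * (AVG_SEEK_MS + HALF_ROTATION_MS)

contiguous = access_time_ms(1)      # file in one piece
fragmented = access_time_ms(10)     # same file in ten pieces
print(round(contiguous, 2))                # ~13.17 ms either way
print(round(fragmented - contiguous, 1))   # ~118.5 ms extra for fragments
```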

    So any defragging efficiency advantage a fancy 3rd party defragger provides is only realized during the first session or two after defragging - and even then, with today's fast drives, humans could not "see" that advantage - except, maybe, on paper through benchmarking programs.

    Plus, Windows has its prefetch features, which work with Windows' own Disk Optimize (the real name of the Windows defrag program) to move the user's most-used programs so they load faster. 3rd party defraggers don't.

    Even the "best" (whatever that means) 3rd party defragger can't move or defrag all opened files. The OS will not allow it.

    And again, with more and more computers using SSDs instead of spinners, defragging is a moot point. Defragging is disabled automatically when the OS detects the drive is a SSD.

    What does that mean? And what is the alternative?
     
  5. Rasheed187

    Rasheed187 Registered Member

    Joined:
    Jul 10, 2004
    Posts:
    14,569
    Location:
    The Netherlands
    I have yet to play modern video games on my Core i5, but I think the GPU is probably more important. Like I said, in day-to-day usage you won't feel any difference between the i5 and i7 - that is, if you do stuff like browsing or playing music. I can imagine that with things like photo and video editing it does matter.
     
  6. Kobayashi maru

    Kobayashi maru Registered Member

    Joined:
    Nov 7, 2009
    Posts:
    124
    Location:
    Drivin' all night my hands wet on the wheel....
    Still in wide use, so I don't see what you're getting at.
    Windows defrag can't do an offline (boot-time) defrag. It keeps 'the old baggage' mixed in with the main files.
    In any case, we can agree to disagree over the merits of third-party defrag if you wish, as I believe differently.

    Why is NTFS poor? You answered the question yourself, almost:
    That is sub-optimal placement, not fragmentation. Windows splits the file into umpteen pieces to fit where it can and keep the writing speed up - that's fragmentation. If the background defrag service later optimized the positions/fragments of the files, I could live with it, but it doesn't.
    As I said, leaving a space of over a gigabyte near the most-used files had no effect. The system doesn't optimize the position of files, nor apparently place them efficiently, even when provision is made to allow it at high speed.
    I consider that poor.
    It's a reasonable thing to do, though, isn't it? Quite why it's the rule rather than the exception is beyond me.
    And yet, optimizing an SSD is something that can speed up a well-used one.

    This isn't Linux, so there is no alternative to the old clunker that is NTFS version whatever. It far surpasses Windows 7 in age, going right back to Win 2K (edit: NT was before that), albeit with some token changes. That's Windows for you, merging the old with the 'more recent'....
    MS still includes a defrag tool in Win 10, which is likely still the Win 7 one underneath it all, and still related to Win 95 in its methodology.
     
    Last edited: Sep 26, 2015
  7. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    That has nothing to do with the file system being NTFS. So I ask again, what is the alternative?

    Old baggage? No it doesn't. Offline? Again, no defragging program can defrag a file that is locked by the OS. And while some programs can defrag during boot before some files are locked, so what? Drives are huge these days, and with Windows Update, critical system files - those that might be locked - are frequently updated anyway. So again, any advantage is quickly negated.

    You are right, the fact W7 is 6 years old has little bearing on this. What does matter is the rest of the paragraph you extracted that from.

    What? That is exactly what it does. And more than that, it uses Windows prefetch routines to optimize positions based on use. That is something 3rd party apps don't do.

    Plus 3rd party apps take up space, consuming extra resources. Windows Optimizer is already in there.

    What? Sorry, but it seems you don't understand the difference between optimizing a SSD and optimizing (defragging) a HD. Two totally different things!
     
  8. Kobayashi maru

    Kobayashi maru Registered Member

    Joined:
    Nov 7, 2009
    Posts:
    124
    Location:
    Drivin' all night my hands wet on the wheel....
    Nothing to do with NTFS.... If you think so, no problem then, I'm not arguing with you.
    Drive size is of no consequence.

    But this:
    I think I do.
    Same outcome, different technology. Exactly the same to the end user.
    Random defrags do no good, and eventually will do harm, but an optimize once a year will effect a benefit on a well-used drive.
    For all its fast seeking, eventually the overall seek time will increase. Again, if you're happy with sub-par performance, then go for it.

    On the contrary, optimize is not defrag. There is disk position in relation to other files, and there are files split into pieces, a.k.a. non-contiguous, or fragmented.

    Third-party tools give the option of a plain defrag, which doesn't move files to a more suitable area and so minimises shuffling, whereas optimize will move things around to gain optimum performance. So the built-in routines are sorely lacking.

    What about this:
    http://www.hanselman.com/blog/TheRealAndCompleteStoryDoesWindowsDefragmentYourSSD.aspx

    There are others.
     
  9. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    You lost me there. Drive size has a huge significance. The larger the drive, the less likely you will run low on disk space.

    No, it is not the same outcome. SSDs are in no way affected by file fragmentation. It takes exactly the same amount of time to read in a file segment if located in the adjacent storage location or on the total other side of the SSD.
    Not on a SSD. In Windows, the Windows Disk Optimize program is a defrag program for HDs. Don't take my word for it - use Google and look it up.
    No one is talking about "random" defrags. And there is no evidence it will do any harm - got a link? In fact, on an HD, if the file segments are NOT fragmented, the R/W head has to jump around less, saving wear and tear and time to read in the entire file. It only has to "step" to the adjacent sector.

    First you said,
    Yes it does, but then you said,
    You are not making sense and seem to be flipping back and forth. You need to specify what you mean by optimize.

    And defragging does indeed minimize shuffling. That is exactly its purpose - to minimize shuffling thus wear and tear on the R/W actuator motor and to speed up time because of less shuffling to find the next fragment.

    As for your SSD, read it, in particular the first sentence in the last paragraph. It says SSDs sometimes need "a kind of defragmentation" (if volume snapshots are enabled).
     
  10. Rolo42

    Rolo42 Registered Member

    Joined:
    Jan 22, 2012
    Posts:
    571
    Location:
    USA
    Photo editing wouldn't matter, but with video editing it may - though only insofar as which integrated GPU is included with the CPU, in which case you would look at transcoding benchmarks for those CPUs (which outperform discrete GPUs at that task).

    Yes, they do. MyDefrag is one of them.

    Yeah that whopping 3.3 megabytes MyDefrag gobbles up makes it a real hog! (You're really basing technical decisions on that?)

    Some do, but the tradeoff may be worth it as said resources are pretty minimal (like, a handful of CPU cycles per day), and they typically run the same way Windows defrag does: via Task Scheduler.

    Incorrect. Learn how SSDs work and how files spanning multiple blocks can lead to faster wear. Writing or modifying a file involves reading an entire erase block (often hundreds of KB, varying by drive), changing its contents, and rewriting the entire block - and there are only so many write cycles before the medium wears out. Technically, that would affect performance, but not perceptibly.
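The read-modify-write cost being described can be sketched in a few lines (the erase-block size here is an assumption for illustration; real values vary by SSD model):

```python
import math

ERASE_BLOCK_KB = 512   # assumed erase-block size; varies by drive

def blocks_rewritten(file_kb, fragments):
    """A file scattered into more fragments can span more erase blocks,
    so modifying it rewrites more flash (more wear per change)."""
    per_fragment_kb = file_kb / fragments
    return fragments * math.ceil(per_fragment_kb / ERASE_BLOCK_KB)

print(blocks_rewritten(2048, 1))   # contiguous 2 MB file: 4 blocks
print(blocks_rewritten(2048, 8))   # same file in 8 fragments: 8 blocks
```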

    Additionally, having data sprawled across the entire drive (which Windows loves to do, especially if you have partitions) hinders the SSD's efficiency at wear-levelling. Over time, this can have a perceptible effect, especially with Samsung SSDs.

    Normally, I don't have a problem with this but I do when it comes from a person demanding links to expert sources from everyone else.

    ...especially when it's two sentences later. Where are your links?
    (That was rhetorical; my point is to suggest dropping the "put up or shut up" routine--it's neither nice nor productive.)


    My take on SSDs:
    • If you don't have one, get one; you'll enjoy using your PC much more but you'll hate using a PC that doesn't have an SSD (Samsung 850 EVO 256GB cost me ~$86 on Amazon)
    • Don't defrag SSDs except to consolidate space once/year, depending on your usage
    • Overprovision it ~20% if you have the space; ~10% otherwise if you can, especially Samsung SSDs
    • Leave the pagefile on the SSD and managed by Windows (dynamic)
     
    Last edited: Sep 26, 2015
  11. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    Well, at least we agree on this. And if you believe Kobayashi's link, Windows will defrag the SSD once a month, so no need for the yearly one. That said, the new Samsungs are smarter than the old ones too.
     
  12. Kobayashi maru

    Kobayashi maru Registered Member

    Joined:
    Nov 7, 2009
    Posts:
    124
    Location:
    Drivin' all night my hands wet on the wheel....
    I make perfect sense, but as you wish, Bill. You're not going to agree with anything I say or show to support me.

    Agree to disagree, as I said. Let the reader decide what fits for them personally.

    Let's end it amicably... Works for me.
     
  13. Rolo42

    Rolo42 Registered Member

    Joined:
    Jan 22, 2012
    Posts:
    571
    Location:
    USA
    I disabled Windows defrag as part of the build process as it, frankly, sucks. Among many reasons is that it doesn't count any fragment larger than 64 MB as a fragment. This will do absolutely nothing for games with 3D assets.
    I write my own MyDefrag scripts for maximum performance (games and some applications are still on the platter drive until larger SSDs come down in price).

    IIRC, Windows defrag only became SSD-aware at a later point. I think it skips them now but it wasn't always the case. Either way: trust but verify--it's your hardware!**

    The out-of-the-box MyDefrag scripts are written using Microsoft's research on fragmentation and performance; the author, J. Kessels, wrote a good article about it. He let his site expire a few days ago (as announced in advance), so you may be able to find a cached copy.

    ** I verified. In Windows 10, Defrag runs with the -o option which only retrims SSDs and does not defragment them:

    Code:
    C:\WINDOWS\system32>defrag c: /o
    Microsoft Drive Optimizer
    Copyright (c) 2013 Microsoft Corp.
    
    Invoking retrim on System10 (C:)...
    
    
    The operation completed successfully.
    
    Post Defragmentation Report:
    
            Volume Information:
                    Volume size                 = 118.45 GB
                    Free space                  = 35.51 GB
    
            Retrim:
                    Total space trimmed         = 32.94 GB
    
    It still has its 18% fragmentation afterwards:
    Code:
    C:\WINDOWS\system32>defrag c: /a
    Microsoft Drive Optimizer
    Copyright (c) 2013 Microsoft Corp.
    
    Invoking analysis on System10 (C:)...
    
    
    The operation completed successfully.
    
    Post Defragmentation Report:
    
            Volume Information:
                    Volume size                 = 118.45 GB
                    Free space                  = 35.51 GB
                    Total fragmented space      = 18%
                    Largest free space size     = 24.51 GB
    
            Note: File fragments larger than 64MB are not included in the fragmentation statistics.
    
            It is recommended that you defragment this volume.
    
     
    Last edited: Sep 26, 2015
  14. RJK3

    RJK3 Registered Member

    Joined:
    Apr 4, 2011
    Posts:
    862
    Well those are very general reasons, and aren't a technical justification for anything.

    Changing from defaults doesn't improve performance and nor does it need to, but it does rationalise what are some irrational default settings in Windows 7.

    As even Mark Russinovich acknowledges (since you brought him up), Windows' recommended pagefile sizes appear to have been influenced by the outdated rule of thumb of setting the pagefile to 1.5 times physical RAM. Even he recommends setting the initial size of the pagefile to something that reflects actual usage. In my case, my initial pagefile setting is actually larger than it strictly needs to be - but it's significantly smaller than the wasteful 4-6 GB that Windows would allocate by default. Since I prefer to run a lean machine, and have a reasonable understanding of how paging works, I can comfortably make that choice.
     
  15. Rolo42

    Rolo42 Registered Member

    Joined:
    Jan 22, 2012
    Posts:
    571
    Location:
    USA
    Paging was created when RAM was expensive ($90/megabyte in 1990, around Windows 3.0 - you can get a full 16 GB now for under that). RAM is no longer expensive.

    If you have enough RAM to where you're not regularly paging due to exceeding installed RAM or you're content with paging regularly, great. Wonderful. Your PC is 'adequate'.

    If you don't have enough RAM to support processes regularly without paging, you would certainly benefit from installing more RAM; the pagefile is there to keep the OS from crashing.

    It isn't rocket surgery.
     
  16. blacknight

    blacknight Registered Member

    Joined:
    Sep 25, 2007
    Posts:
    3,087
    Location:
    Europe, UE citizen

    Does this affect the reliability and durability of SSDs? Some other questions:

    - I often try software in virtual mode (Acronis Try and Decide); this shouldn't cause write cycles on the SSD, should it?

    - And what about virtual machines and SSDs (again regarding reliability and durability)?

    - On my traditional HD, I have a second partition in which I move or delete many files: can I safely do the same on an SSD?
     
    Last edited: Sep 27, 2015
  17. Rolo42

    Rolo42 Registered Member

    Joined:
    Jan 22, 2012
    Posts:
    571
    Location:
    USA
    I'm not sure which point you're referring to, but SSDs have a finite number of write cycles: the more frequently you write to them, the shorter their lifespan. Defrag = lots of write cycles per run. Heavy fragmentation = more blocks affected per file change = more write cycles.

    Note, however, that SSD longevity is measured in petabytes written, so it's not as if you need to baby the drive to get years of service out of it (I've used this m4 for 4 years and it reports 85% life left).
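That 85%-after-4-years figure extrapolates roughly like this (a linear sketch; real wear isn't perfectly linear and workloads change):

```python
# Linear wear extrapolation from the drive's own health reading.
years_used = 4
life_left_pct = 85                                   # reported by the SSD

wear_per_year = (100 - life_left_pct) / years_used   # 3.75 % per year
remaining_years = life_left_pct / wear_per_year

print(wear_per_year)              # 3.75
print(round(remaining_years, 1))  # ~22.7 more years at the same workload
```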

    For sandboxing/virtualisation, you'll have to see how your product works, but I imagine, like Shadow Defender, it redirects writes to its own file structure on the disk. However, I wouldn't worry about it, as we're splitting hairs here. Your hardware serves you, not the other way around... (except "In Soviet Russia...")

    Also note that HDDs have a finite lifespan as well; it's just a probability rather than clearly defined inevitability. Don't let foreknowledge skew your perception and judgement. It's like the difference between knowing you'll likely live 80-90 years versus having a death date of 83 years from birth.
     
  18. blacknight

    blacknight Registered Member

    Joined:
    Sep 25, 2007
    Posts:
    3,087
    Location:
    Europe, UE citizen
    Thank you. Another question - the last one in my previous post, which I had to edit. I don't know how SSDs work: if I create, for example, a second partition for data on an SSD, and then often move or delete the files in this partition, will the wear affect the whole SSD or only this partition? (I don't know if this would be useful for me, but I ask.)
     
  19. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    How is that not a contradiction?

    If changing from the defaults does not improve performance, and if changing is not needed, then how can the defaults be irrational?

    That's hardly justification. 64 MB is HUGE! NTFS uses 4K clusters, so even if your monster file is fragmented, the drive head is hardly bouncing around to excess. And compared to all the other files on your system, any that are larger than 64 MB will be extremely few.

    It is most likely you have well over 200,000 files on your C drive (I have 282,333 on C, and another 47,480 on my D drive). You are letting exceptions rule your world. I recommend using a program like WinDirStat to view your drive; you will surely discover you have very few files larger than 64 MB.

    Even if you have 1,000 3D asset files, that is a drop in the bucket.

    Open a command prompt and navigate to your \windows\system32 folder. Then enter: dir *.* /os

    That will list all the files in that folder by size. You should have well over 3000 files in there, and I bet only a handful at most are over 64 MB. I have 3705 files and only one, MRT.exe, is over 64 MB, coming in at 134 MB.

    07/10/2015 06:01 AM 28,851,224 WindowsCodecsRaw.dll
    08/25/2015 01:38 PM 42,840,184 nvcompiler.dll
    07/10/2015 06:00 AM 46,214,656 imageres.dll
    08/26/2015 06:37 PM 134,753,440 MRT.exe
    3705 File(s) 1,799,155,842 bytes
    115 Dir(s) 182,082,899,968 bytes free​
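The same spot-check can be scripted instead of eyeballing `dir` output (a portable sketch; point it at any folder you like):

```python
import os

def files_over(path, threshold=64 * 1024 ** 2):
    """Return names of files in `path` larger than `threshold` bytes
    (64 MB by default, matching the defrag report's cutoff)."""
    big = []
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full) and os.path.getsize(full) > threshold:
            big.append(name)
    return sorted(big)

# e.g. files_over(r"C:\Windows\System32") on the machine discussed above
```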

    :( To keep it from crashing is but a small function of the PF.

    It is important to note that today's generation of SSDs do not suffer from the durability problems of first generation SSDs. Note many SSDs now come with 10 year warranties and I note SSDs are ideally suited for Page Files (see SSD FAQs, Should the pagefile be placed on SSDs?).

    If you want to dink with your Page File, I have no problem with that. Just understand what you are doing. Don't guess and in particular, do not think what was necessary with XP is now necessary with W7/W8/W10. That would be wrong.

    And I say again, it is NOT a set-and-forget thing (unless you let Windows manage it!). If you do not take the time to study and understand commit rates, and if you don't regularly go back and re-evaluate them, then it is a mistake to change from the default settings.

    And above all else, do not assume what is right for your system is right for some one else's unless you have done a full analysis of their commit rate too AND will continue to re-analyze those rates regularly!

    I wish everyone (professionals, enthusiasts, hobbyists, and the "just curious") who is, or feels they are, technically inclined and likes to give technical advice or opinions had to sit through a technical "peer review". I've been in many. They are brutal. Every bit, byte, nut, bolt, and word of your work is torn apart for critical review by your peers and superiors. You grow up fast. You learn to be thick-skinned and not to take criticisms and differing opinions personally. And you learn not to stoop to puerile personal criticisms. You keep it professional, and in the end the customer has a better product.

    Sadly, in forums like this, too many take different views, or presented evidence that counters their belief as some sort of personal affront, and then they get emotional and angry and lash out with a tirade of insults and whining and crying and other personally derogatory comments. :(

    Heated discussions between mature people can still be amicable - and highly productive and informative. We are good! :thumb: :)

    Now, this really has nothing to do with the OP's questions about i5s and i7s, and I suggest we return control of this thread back to him.
     
  20. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    Don't worry about it. Again, today's generation of SSDs doesn't suffer from this problem. Your SSD will not wear out before you are ready to replace it with whatever technologies have superseded it.

    Unless it's in an extremely busy server in a data center, no home user needs to worry about wearing out one of today's generation of SSDs.
     
  21. blacknight

    blacknight Registered Member

    Joined:
    Sep 25, 2007
    Posts:
    3,087
    Location:
    Europe, UE citizen
    Thank you ! :thumb:
     
  22. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    No problem. And just for the record, my main computers are very busy systems nearly all day, every day, almost 365 days a year. I've been using (now older-generation) Samsung 840 Pro 256 GB SSDs on this system for 3 years now with no issues. I will NEVER go back to HDs (except for networked backup storage of all my systems).

    In other words, I have no reservations whatsoever in recommending anyone buying or building or upgrading a computer today to go SSD. I note even major data centers are going SSDs and it is not just for speed. SSDs consume less power, generate less heat, and the latest generation SSDs have extremely long life expectancies with no moving parts. So even their higher initial costs ultimately pay off and result in savings over the long run. Unless you are on a really tight budget and need a computer right now with no time to save up to increase the budget, I see no reason not to go SSD.

    In fact, if you have to scrimp on the budget, I would recommend buying a less capable CPU in favor of a SSD over a HD!
     
  23. blacknight

    blacknight Registered Member

    Joined:
    Sep 25, 2007
    Posts:
    3,087
    Location:
    Europe, UE citizen
    As I opened this thread, I want to say what I decided to buy after all the advice I got here :) . So, I'll get an i5 at 3.2 GHz and an SSD. The only doubt is the RAM: I think 8 GB (that should be enough even if I want to install a virtual machine). I had no problems in the past with VirtualBox on an old PC with 3 GB of RAM.
     
  24. Bill_Bright

    Bill_Bright Registered Member

    Joined:
    Jun 29, 2007
    Posts:
    3,836
    Location:
    Nebraska, USA
    I think those are wise choices, and I recommend at least a 250-256 GB SSD, as it will hold all your applications too and give your page file plenty of wiggle room, while still keeping a nice chunk of free space for the various leveling features to do their thing.
    For sure, I recommend no less than 8 GB as, again, that is the sweet spot. Some people might see performance gains if you install more, but it will not be nearly as much as going from 4 GB to 8 GB. And regardless, even with "only" :rolleyes: 8 GB of RAM and an SSD, this system will be no slouch by any means! :)

    And the nice thing about upgrading RAM down the road is it usually is not very expensive (as long as you don't wait until after it is obsolete).
     
  25. Rolo42

    Rolo42 Registered Member

    Joined:
    Jan 22, 2012
    Posts:
    571
    Location:
    USA
    SSDs remap logical/physical blocks to spread the wear evenly. A nearly full disk (like, under 10% contiguous free space) will make this regular remapping take longer. Partitioning won't affect this beyond leaving a portion unallocated (called "overprovisioning"); Samsung Magician leaves 10% unallocated by default when you turn on overprovisioning. I currently have it set to 20% since I'm nowhere near filling the disk, and it can be changed at any time.
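The overprovisioning arithmetic is straightforward (capacity figure illustrative):

```python
def overprovision(capacity_gb, percent):
    """Split a drive into space left unallocated for wear-levelling
    and space usable for partitions."""
    reserved = capacity_gb * percent / 100.0
    return reserved, capacity_gb - reserved

reserved, usable = overprovision(256, 20)
print(round(reserved, 1), round(usable, 1))   # 51.2 GB reserved, 204.8 usable
```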

    re: Bill's points.
    • Definitely a 250 GB SSD or larger. Windows Side-by-Side (the WinSxS directory) will grow like The Blob (the B-movie), and it's nice to have \Users on it too (move Downloads, Videos, et al. to platter storage if you have a large library in those; they don't need speed anyway)
    • 8 GB or more, definitely (though I would strongly recommend considering 16... my moderate browsing, PDF viewing, and file exploring used 10 GB yesterday... and I barely have anything installed on this, since it was an Insider Preview build I was supposed to rebuild at launch... been busy, heh)
    • Adding RAM later isn't quite the same as buying it outright, specifically two sticks versus four. Heat and timing issues may come into play
     