Frankly, I see no reason to go with hard drives anymore for the vast majority of users. When you spread the cost of an SSD over its lifetime (many are warrantied for 10 years!) it's a no-brainer, especially when you factor in the lower energy costs to power and cool the drives.
The only parity we're nearing is in adoption rates from the manufacturers. A 1 TB SSD will cost around $170 in 2017, while a 1 TB HDD runs about $50-70. Panagiotis
And yet you can still expect an SSD to last many years longer, consume less energy and produce less heat, making them still a bargain. There's a reason more and more data centers are moving to SSDs: it isn't just for better performance, it's because they pay off in the long run. But note your analogy won't hold true indefinitely. In a few years, more computers will be using SSDs than hard drives. That will drive down the cost of SSDs further and slow, or even reverse, the falling cost of hard drives as they become less popular. Parity will be achieved, then reversed, just as it has with every other technology that supplanted a legacy one.
When I last looked into this (a year ago), SSDs offered a poor cost-to-performance-to-capacity ratio compared to 10K RPM drives in RAID10 (we were using hundreds of terabyte drives) for 5-year-supported enterprise drives. In some locations energy costs can be significant; if your workload is very bursty, SSDs can be a lot more energy efficient on average (I've yet to encounter an enterprise drive that can safely spin down in a RAID configuration), not forgetting that less heat generated also means lower air-conditioning costs.
I really hope this will soon become a reality; I freaking hate stupid HDDs. I currently have an SSD + HDD combo, and I just hate hearing my HDD spin. My dream is to rely completely on SSDs. It would be cool to have three: one for Windows and apps, one for data storage, and one for games.
I've noticed in just the past couple of months that the cost of SSDs, including the mSATA variety, has gone down quite a bit. When I first bought a Samsung 850 EVO mSATA, they were so new there were zero reviews on Amazon about them; they may have just come out that day. It was about $125 for a 256 GB one. Now they're going for about $75, so I grabbed two more, as I own two mobile workstations that each have two slots for mSATA cards. Regular SSDs are already lauded for being energy efficient, cool and quiet. mSATA drives are those things to, like, the 3rd power. I think most laptops will have them within the next two years, and even most PCs. They'll inevitably build slots into motherboards where you can install them just as you would sticks of RAM, and/or just slide them into a card slot. As of right now you can get adapters like Icy Dock's and Syba's mSATA-to-SATA converters to use them in towers. I won't use anything else anymore. Save even more energy, not to mention protect your equipment, by getting an APC Back-UPS.
We use two 2 TB HDDs per computer for data and backups, and SSDs for the OS. SSDs are too expensive for large amounts of home storage.
M.2 is a different standard and in many ways even better than mSATA. If the motherboard has M.2 (NGFF) slots, I would use them in preference to any other drive interface. mSATA is pin-compatible with Mini PCIe, and a lot of newer laptops with a Mini PCIe slot for a WWAN card can use an mSATA drive in that slot. I just got one with that feature as one of the selling points. It isn't even that new, 3 years old at this point, another ex-corporate Lenovo castaway. The market is flooded with them right now; I saw several complete ones with the mSATA-compatible WWAN slot for under $100 yesterday.
True, though I wasn't suggesting they were the same. I don't follow the laptop scene much at all, but for desktops, mSATA doesn't seem to be all that popular. M.2 (and mainly NVMe) is definitely the future of SSD connectivity. I don't see mSATA growing much considering what M.2/NVMe has to offer.
This box has six inexpensive 240GB SATA SSDs in Linux software RAID10. That gives me 720GB of very fast storage. I have another box with four 120GB SATA SSDs in Linux software RAID10. I was messing around, and one of the SSDs got disconnected. By the time I noticed, it was totally out of sync. But the array rebuilt in about 20-40 minutes.
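For a rough sense of why that rebuild finished so quickly: a RAID10 resync only re-copies the out-of-sync member from its mirror partner, so the data moved is one drive's worth, not the whole array. A minimal back-of-the-envelope sketch (the 50-100 MB/s sustained resync rate is an assumption for illustration, not a measured figure):

```python
# Rough estimate of RAID10 resync time when one member falls out of sync.
# Only the failed member's mirror partner is re-copied, so the data moved
# is one drive's capacity.

def resync_minutes(drive_gb: float, resync_mb_s: float) -> float:
    """Minutes to re-copy one drive at a sustained resync rate."""
    return drive_gb * 1000 / resync_mb_s / 60

# A 120 GB member at an assumed sustained 50-100 MB/s resync rate:
print(round(resync_minutes(120, 100)))  # → 20 (minutes, fast case)
print(round(resync_minutes(120, 50)))   # → 40 (minutes, slow case)
```

That 20-40 minute range lines up with the rebuild time reported above for the 120 GB SSD array.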
That's an advantage they have, and one I like about them too. Not to mention that, being older, they're going down in price more quickly and will continue to do so. And I don't really see any advantage with M.2 drives. I've compared them in benchmarks and see no difference... they're just bulkier and less practical. I think they'll be the ones phased out and mSATA will stand the test of time instead. Only time will tell.
The main benefit of M.2 is that it allows the use of the NVMe protocol instead of AHCI, since M.2 slots are essentially PCIe slots. However, your SSD, motherboard, and OS all need to support it. Here are two 1 TB NVMe M.2 SSDs in RAID 0: https://scontent-ord1-1.xx.fbcdn.net/hphotos-xtp1/v/t1.0-9/12316501_916068608481400_7211240022508779422_n.png?oh=ab23e7567bc0ed80b5566a46d862135f&oe=571EB3E3
In a normal home computer setup, almost no one uses any RAID setup. Here comes the first disadvantage of SSDs: once one goes bad, all your data is gone, with little hope of recovery. And most of the time there is no warning at all before your SSD dies. HDDs, by contrast, normally give you some warning signs before they go bad, and there is usually a pretty good chance you can recover your data before the drive becomes completely unreadable.
That's a very good reason to use one of the redundant RAID options with SSDs. The problem is that there's no decent software RAID in Windows, so you either need an expensive RAID controller card, or you're stuck with glitchy motherboard RAID. Also, if the RAID controller fails, you're often hosed unless you can find an exact replacement (driver revision too) and are lucky. It's much easier in Linux using software RAID.

RAID0 (striping on at least two drives) gives you 100% of total capacity and roughly additive speed, but no redundancy. All data is gone if any drive fails.

RAID1 (mirroring on at least two drives) gives you 50% of total capacity, total redundancy, and roughly additive read speed, but no increase in write speed.

RAID10 (striping of mirrors on at least four drives) is the best compromise between speed, capacity, reliability and rebuild time. You get 50% of total capacity, you can lose one drive (even one from each mirror pair) with no data loss, and speed is about midrange between RAID0 and RAID1.

RAID6 (striping with distributed parity on at least four drives) gives you N-2 drives' worth of capacity, is faster than RAID10 for reads (especially on arrays with many drives) and has about the same redundancy, but rebuilding after a drive failure and replacement takes much longer.
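The capacity rules above can be condensed into a few lines. This is a toy calculator, assuming equal-size drives and ignoring metadata overhead; the function name is mine:

```python
# Usable capacity for the RAID levels discussed above.
# Assumes equal-size drives; ignores filesystem and metadata overhead.

def usable_gb(level: str, drives: int, drive_gb: float) -> float:
    if level == "raid0":   # striping: all capacity, no redundancy
        return drives * drive_gb
    if level == "raid1":   # mirroring: one drive's worth survives
        return drive_gb
    if level == "raid10":  # striped mirrors (>= 4 drives): half of total
        return drives * drive_gb / 2
    if level == "raid6":   # double distributed parity (>= 4 drives): N - 2
        return (drives - 2) * drive_gb
    raise ValueError(f"unknown RAID level: {level}")

# Six 240 GB SSDs, as in the RAID10 box mentioned earlier:
print(usable_gb("raid10", 6, 240))  # → 720.0
```

Note how the same six drives would yield 960 GB under RAID6 at the cost of the much longer rebuilds mentioned above.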
The point of my posting the benchmark was to show that NVMe is significantly faster than SATA; I wasn't suggesting everyone run a RAID array. And it isn't entirely true that no one runs RAID at home: some of the dual-bay laptops come pre-configured with a RAID 0 array.
Really? RAID0? That's crazy! For persistent storage, you want RAID1. I never use RAID0 for anything except tempfiles, fast editing space, etc.
I was working on my roommate's laptop (a DV7) once and was surprised to see that it had two 640 GB HDDs in RAID 0.
RAID 0 is the default configuration, I would suspect. If the laptop has the hardware capacity for two drives, RAID 0 is more than likely the default. I've taken a look at a couple and that was the way they were set up. Speed has its selling points, even if it is living dangerously.
Maybe that type of laptop accounts for 0.01% of the laptop market? The majority of consumer laptops sold nowadays have only one drive bay, or ship with only one drive installed even when there are two bays. My laptop, a Toshiba P75-A7100, is such a configuration.