I have to say...
...I always stick another heatsink on the controller of any NVMe drive I install in a PC. The ones for the Raspberry Pi or similar work quite well.
Samsung has updated its 970 EVO gumstick SSD to boast numbers that beat Western Digital's fancy heatsink-sporting SN750. The Plus version of the 970 EVO has the same capacity levels as the original: 250GB, 500GB, 1TB and 2TB. It has a higher 3D NAND layer count than the 970 EVO's 64, but Samsung wouldn't provide the actual …
Is anyone else looking at "The EVO Plus can endure up to 1,200TB written and has a five-year warranty. All models are available now except the 2TB" and thinking, "That's only 600 full-drive writes"? The Samsung specs actually put the warranty at 150TBW: https://www.samsung.com/uk/business/memory-storage/970-evoplus-nvme-m2-ssd/mz-v7s250bw/ That might be plenty for a laptop, but as cache for a storage box it could work out fairly low.
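The "600 writes" figure is easy to verify: it's simply endurance divided by capacity, and spreading it over the five-year warranty gives a drive-writes-per-day (DWPD) figure. A quick sketch using only the numbers quoted above:

```python
# Endurance figures quoted above: 1,200 TBW on the 2 TB model,
# with a five-year warranty.
tbw = 1200             # total terabytes written before wear-out
capacity_tb = 2        # 2 TB model
warranty_days = 5 * 365

full_drive_writes = tbw / capacity_tb        # how many times you can fill it
dwpd = full_drive_writes / warranty_days     # drive writes per day over warranty

print(full_drive_writes)   # 600.0
print(round(dwpd, 3))      # 0.329
```

So the 2TB model can be completely filled roughly once every three days for the full warranty period before hitting the endurance limit.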
Cache is low-write. It's literally picking up "things you use often", writing them once and then reading them lots. That's the job.
Few tasks actually require huge write lifetimes. Active storage in a large array that does nothing but absorb constant, heavy writes: that's about it.
For ordinary servers, computers, etc. you just aren't generating anywhere NEAR enough data to write that amount over any reasonable lifetime.
1,200TB written means that if you can write at 3Gbit a second... what's that? An entire month of constant, full-on, maxed-out, nothing-but-writing, never-reading. It would literally take you a month or more to kill it, on average, in the worst possible kind of destruct test: generating data direct from a CPU and writing it straight to disk, nothing else, at that sustained speed (which is an entirely unrealistic scenario).
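The "entire month" claim checks out at the stated 3Gbit/s rate. A quick sanity check, using decimal units (1TB = 10^12 bytes) as drive vendors do:

```python
# Writing 1,200 TB non-stop at 3 Gbit/s, as described above.
tbw_bytes = 1200 * 10**12        # 1,200 TB endurance, in bytes
rate_bits_per_s = 3 * 10**9      # 3 Gbit/s sustained write rate

seconds = tbw_bytes * 8 / rate_bits_per_s
days = seconds / 86400

print(round(days, 1))   # 37.0 -- just over a month of continuous writing
```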
Nobody's generating data that needs to be written constantly at 3Gbit/s onto one of these. If they are, these things aren't suitable, and nor is anything except hardware made specially for that (IBM enterprise SATA SSDs for huge blade storage are in the 3 - 30PB range with 5-year life designs; this is 1.2PB, so approaching what you'd get in a datacentre SSD, for a few hundred dollars). Most places will be bursting writes, swamping the drive constantly with READS (which are "free"), and anything serious will be spreading the load over hundreds or thousands of such devices.
Honestly, I wouldn't worry about SSD/NVMe lifetime anymore. It's been outside any average person's or IT department's range of concern for a few years now.
I replaced all my client machines' drives with SSDs. They are literally not even hitting 1% of their write-life, many years later. I have an EVO 850 in my laptop that gets 24/7 use. It's just hit 3-4% of its lifetime after nearly four years. In that time, I've replaced DOZENS of hard drives because of failure: RAID sets, high-write surveillance sets, and just ordinary server drives. I've not yet replaced a single SSD that has been around for as long, if not longer. And I literally bought the cheapest, junkiest SSDs I could, expecting to just bin them as they failed and budget for replacements when I had more money available. I've not had to.
I can't see me ever buying a hard drive again, unless it's to conform to manufacturer's specifications (e.g. storage arrays that don't "officially" support SSDs, CCTV sets, etc.)
I agree for everyday use, and 1,200TBW (from the article, though I couldn't find it in a quick check of Samsung's tech specs) seems hard to hit. But the 150TBW warranty figure, in 5 years? What if it's the cache on the front of some kind of NAS? We deal with data that comes in by the GB, gets churned for a bit and then sat on. Without knowing in detail how the caching algorithms on a storage box work, I could imagine all of that data getting written to the cache layer at some point. Once is not a problem: with a few SSDs in a device doing cache, 150TB is slightly more than I currently expect to generate in 5 years. However, if that data ends up going through the cache a few times in the process, it starts to get close.
"Nobody's generating data that needs to be written constantly at 3Gbit/s onto one of these...."
Are you kidding me? 1200 TB? I literally read AND write that every HOUR! Our video file libraries are now into the EXABYTES of data! Just ONE video project at 60fps, 8192 by 4320 pixels, at 16 bits per RGBA channel, is 16,986,931,200 bytes PER SECOND, or about 17 gigabytes per second, or 61 terabytes per HOUR!
I use 20 to 30 of those clips at a time, which is 1.8 petabytes per one-hour project! AND we have like 20 projects on the go, so we are doing 50 petabytes PER DAY just in our department! This drive would literally LAST ONE SINGLE HOUR in our facility!
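For what it's worth, the per-second figure in that rant does check out for fully uncompressed frames (whether anyone actually stores 8K video uncompressed is another matter):

```python
# Uncompressed data rate for the format described above:
# 8192x4320 pixels, 4 channels (RGBA), 16 bits (2 bytes) each, 60 fps.
width, height = 8192, 4320
channels, bytes_per_channel = 4, 2
fps = 60

bytes_per_second = width * height * channels * bytes_per_channel * fps
tb_per_hour = bytes_per_second * 3600 / 10**12

print(bytes_per_second)       # 16986931200 (~17 GB/s)
print(round(tb_per_hour, 1))  # 61.2 TB per hour
```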
I run two NVMe cards in my PC, using an Asus NVMe PCIe x4 expansion card for the second. Works a treat. I'm running an X99 setup, so that helps with the lanes a little.
And yes, NVMe has hit the buffers with PCIe 3 tech. I think PCIe 4 is largely being passed over in favour of PCIe 5 later this year.
From the review I read over at RPS (https://www.rockpapershotgun.com/2019/01/22/samsung-970-evo-plus-review/), they actually found the 970 Plus to be slower than the 970.
That was the version of the 970 Plus which doesn't come with a heatsink, and they had planned to rerun the tests once they had an appropriate heatsink fitted (I'm at work so can't check whether they've updated the review or not). But if you're planning to get the non-heatsink one, then, at least according to RPS, you're advised to stick with the regular 970.
Will probably be downvoted here, but just what is the point of listing IOPS? 500,000 IOPS on one drive doesn't mean it's the same speed as another when it comes to read/write, surely? If one drive uses a 1KB block size and the other 2KB, surely the second would move twice as much data per second?
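That's exactly right: IOPS only translates to throughput once you multiply by the block size used in the benchmark. A minimal sketch, using the 500,000 IOPS figure from the comment and the 1KB/2KB block sizes as hypotheticals:

```python
# Same IOPS figure, different block sizes -> different throughput.
iops = 500_000

for block_bytes in (1024, 2048):
    mb_per_s = iops * block_bytes / 10**6   # decimal MB/s
    print(f"{block_bytes} B blocks: {mb_per_s} MB/s")
# 1024 B blocks -> 512.0 MB/s
# 2048 B blocks -> 1024.0 MB/s
```

This is why drive spec sheets normally pin the IOPS figure to a specific block size (typically 4KB random), queue depth and thread count; the headline number is meaningless without them.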