Flash had a fantastic year in 2013 with an enormous number of developments. It was a year of generally positive flash transitions, with cell geometry shrinking, all-flash arrays springing up, flash companies being bought, flash companies crashing back to earth after inflated IPOs or just crashing, and happiness spraying out …
TLC is present in Samsung's SSDs
"TLC flash remains relegated to thumb drives, camera flash cards and the like."
I believe that is not true.
Samsung 840 EVO SSDs (as well as their predecessor, the 840) use TLC.
Since they are currently the best value on the market for non-pro SSDs, I believe they hold quite a chunk of the consumer SSD market.
(And I would be surprised if the Samsung-supplied 1TB SSDs in current MacBook Pros contained anything other than TLC.)
The biggest innovation this year is the large-capacity Samsung 840 EVO.
I just bought a 1TB version of this for our backup server for less than 500 quid!
Now that is revolutionary!!
Using cheaper flash for servers
Due to their lower life expectancy, consumer-grade SSDs are generally not recommended for servers and critical applications.
However, their prices and capacities are many times more attractive, so if you cannot use enterprise grade, is there a way to make consumer grade work...
How about running 2 x SSD in RAID 1 with staggered initial usage? If the MTBF is two years for your current usage profile, you put two SSDs in the machine and, after a year, move the second drive to a new machine with a fresh SSD in a new RAID 1 (or keep it in the same machine to provide an extra volume, or to build a RAID 10 array). Obviously this could be extended to other RAID levels, such as a RAID 10 array of many disks, or you could start with RAID 6.
This minimises the chance that both disks will fail at the same time (probably lower than with two enterprise units installed at exactly the same moment). As long as you have hot spares, the data should remain safe, and the array will be orders of magnitude faster than HDDs while staying within a sensible budget.
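The staggered-rotation idea can be checked with a quick back-of-envelope simulation. This is a minimal sketch using the figures assumed in the comment (a two-year wear-out under the current workload and a one-year rotation interval); the function name and structure are my own illustration:

```python
DRIVE_LIFE_YEARS = 2.0   # assumed wear-out for this usage profile
ROTATION_YEARS = 1.0     # swap interval suggested in the comment

def wear_gap(years=5, step=0.25):
    """Simulate two RAID 1 members where the more-worn drive is rotated
    out to a fresh array every ROTATION_YEARS. Returns the smallest
    difference in accumulated wear (in years) after the first rotation."""
    a = b = 0.0              # wear (years of use) of the two mirror members
    next_swap = ROTATION_YEARS
    gaps = []
    t = 0.0
    while t < years:
        t += step
        a += step
        b += step
        if t >= next_swap:
            # replace the older drive; its partner keeps running
            if a >= b:
                a = 0.0
            else:
                b = 0.0
            next_swap += ROTATION_YEARS
        if t > ROTATION_YEARS:   # only measure after the first rotation
            gaps.append(abs(a - b))
    return min(gaps)

# After the first rotation the mirror members always differ by about a
# year of wear, so both should never hit end-of-life in the same window.
```

The point the simulation makes is the one in the comment: once the rotation is running, the two mirror members are never the same age, so simultaneous wear-out failures become much less likely than with two identical drives installed on the same day.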
Re: Using cheaper flash for servers
Feasible, though a test rig / lab experiment seems to be called for. I am not sure the RAID hardware knows about TRIM commands, for instance.
Re: Using cheaper flash for servers
"I am not sure the RAID hardware knows about TRIM commands, for instance."
I've found that consumer drives actually work fine. Use RAID 1/10, increase over-provisioning (I just double whatever the drive shipped with), and back up to spinning disk (preferably in another chassis/location). Samsung drives have great garbage collection which does the job nicely. All this assumes you have periods of idle time (at night usually) for rsync and garbage collection though. You could even use a clustered file system if you need realtime multibox redundancy at SSD speeds, though you'd still have to have some time each day for garbage collection. Also 10Gb/sec networking probably. That's all a little over my pay grade though.
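Doubling the factory over-provisioning, as suggested above, just means leaving a slice of the drive unpartitioned. A minimal sketch of the arithmetic (the ~7% factory figure is an assumption typical of consumer drives, not a quoted spec, and the function is my own illustration):

```python
def effective_op(user_gib, unpartitioned_gib, factory_op=0.07):
    """Effective over-provisioning fraction when `unpartitioned_gib` of a
    drive's user-visible `user_gib` capacity is left unpartitioned.
    Assumes physical flash ~= user_gib * (1 + factory_op)."""
    physical = user_gib * (1 + factory_op)
    used = user_gib - unpartitioned_gib
    return (physical - used) / physical

# On a nominal 1000 GiB drive, leaving ~70 GiB unpartitioned roughly
# doubles the over-provisioning the drive shipped with.
```

Leaving the space unpartitioned (rather than just empty in a filesystem) matters because the controller can then treat it as spare area regardless of whether TRIM reaches it through the RAID layer.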
Right. And what the writer is pleased to call a "ridiculously short write endurance level" works out to 31+ years (840 EVO 1 TiB drive, 100 GiB written per day). Some notes on that number: it is based on Samsung's conservative p/e cycle rating, and tests on the drive have shown it will likely last twice as long as Samsung promises. Typical consumer workloads are estimated at 10-30 GiB per day. So six times the cited figure (180+ years) does not sound improbable. Ridiculously short endurance indeed.
On the one hand, it is quite a bit higher than my expected endurance, let alone that of my PC. On the other hand, let's try to map it to enterprise usage. I'd say 5 years is on the high side for how long a server is used in an enterprise before being upgraded. Taking the lowest life estimate and compressing it into those 5 years gives a write rate of over 600 GiB per day, which I suggest is a pretty high number for a single drive even in an enterprise setting.
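The arithmetic behind those two figures is worth spelling out, using the commenter's own numbers (31 years of endurance at 100 GiB/day):

```python
YEARS_RATED = 31     # endurance claimed above at the rate below
GIB_PER_DAY = 100    # assumed daily write volume

# Total write volume the rating implies the drive can absorb:
total_gib = YEARS_RATED * 365 * GIB_PER_DAY      # 1,131,500 GiB, ~1.08 PiB

# Compress that same endurance into a 5-year enterprise refresh cycle:
enterprise_gib_per_day = total_gib / (5 * 365)   # 620 GiB per day
```

So burning through the drive's rated endurance inside a 5-year server lifetime would require sustained writes of roughly 620 GiB every single day, which supports the comment's "over 600 GiB per day" figure.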
Need more pics
of the HGST SSD1000MR.
Only spotted two.
Disappointed that capacities at the sub-£100 price level haven't gone up in the past year or two.
Patents more harm than good
How can there possibly be a patent covering the idea of Flash memory on DIMM? Ever since flash memory was invented, it has been placed on the memory bus to provide persistent storage. Linux has an entire device subsystem devoted to that sort of device, the MTD subsystem.
I concede that putting Flash memory on a DIMM is a new invention. I claim that it is an obvious invention to anybody skilled in the art, that nobody did it before because it wasn't economical, so it shouldn't have been patented. Beyond the technical slog of actually making it work, the main barrier I saw was the software to use it effectively. I seriously doubt that Netlist had anything to contribute to Diablo and SMART's product.
Re: Patents more harm than good
You're joking, right? Gluing NAND onto a DDR3 interface is just a matter of a couple of alligator clips and spit, right?
There is significant innovation involved in creating a device that physically speaks the load/store hardware protocol off a memory controller at 1600 MT/s, yet is represented in the OS as a block device.
I guess there is a significant amount of disrespect for the difficult process of innovation and R&D required to make something like this go from an idea to product.
Diablo's strength is its ridiculously low-latency write performance. Read performance is pretty much on par with any NAND flash. Flash is flash, after all.
I take my hat off to the amazing team at Diablo.
Enterprise drives are quicker than consumer drives. They also differ in how they deal with non-compressible files, encryption, monitoring / reporting and other enterprise features, and of course warranty.
In terms of write cycles, techreport has been running a long-term test on consumer drives.
You can see that TLC starts to show increased failures past 200TB of writes to a 250GB drive, while MLC is still going strong past 300TB of writes. And as long as wear leveling is working, you can scale these figures by 4x for a 1TB drive, so expect at least 800TB of writes.
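The 4x scaling works because wear leveling spreads writes across all cells, so total endurance grows with capacity at a fixed per-cell p/e cycle count. A quick sketch of that arithmetic (the 200TB-on-250GB observation is the techreport figure cited above; the function is my own illustration):

```python
def scaled_endurance_tb(observed_tb, observed_capacity_gb, target_capacity_gb):
    """Scale an observed write endurance to a larger drive, assuming wear
    leveling keeps per-cell p/e cycles the limiting factor."""
    # Full-drive write cycles the smaller drive survived:
    pe_cycles = observed_tb * 1000 / observed_capacity_gb
    # Same number of full-drive writes on the bigger drive:
    return pe_cycles * target_capacity_gb / 1000

# 200 TB observed on a 250 GB TLC drive implies ~800 full-drive writes,
# which projects to 800 TB on a 1 TB drive of the same flash.
```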
As for the 840 EVO, the software is interesting. The RAPID driver uses RAM for caching, potentially giving some PCIe-SSD-level performance depending on what apps you are using. I'm running a MySQL app on a development PC and seeing some pretty decent performance improvements. It also potentially reduces writes to the drive. Shame the driver doesn't work with Linux or Windows Server.
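RAPID itself is proprietary, but the principle behind the "reduces writes" claim, a RAM write-back cache that coalesces repeated writes before they reach the drive, can be illustrated with a toy sketch (everything here is my own illustration, not Samsung's implementation):

```python
class WriteBackCache:
    """Toy write-back cache: repeated writes to the same block are
    coalesced in RAM, and only the final version reaches the device."""
    def __init__(self, device):
        self.device = device      # anything with a write(block, data) method
        self.dirty = {}           # block number -> latest data

    def write(self, block, data):
        self.dirty[block] = data  # overwrite in RAM; no device I/O yet

    def flush(self):
        for block, data in self.dirty.items():
            self.device.write(block, data)
        issued = len(self.dirty)
        self.dirty.clear()
        return issued             # device writes actually issued

class CountingDevice:
    """Stand-in for the SSD that just counts writes it receives."""
    def __init__(self):
        self.writes = 0
    def write(self, block, data):
        self.writes += 1

dev = CountingDevice()
cache = WriteBackCache(dev)
for i in range(1000):
    cache.write(i % 10, b"x")     # 1000 logical writes to 10 hot blocks
cache.flush()
# dev.writes is 10: the other 990 writes were absorbed in RAM
```

The flip side, of course, is that anything sitting in RAM is lost on power failure, which is one reason a cache like this suits a development PC better than a server.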