Move over Violin, OCZ, Seagate, Intel, et al. SMART Modular Technologies is crowing that it's now number one in the notebook-format flash drive space, having just introduced its new Optimus solid state drive, which it calls "the world's fastest multi-level cell and highest capacity SAS SSD." The 2.5-inch Optimus uses 2-bit multi …
Holy hell 1.6TB in a 2.5in?
Bugger me, an SSD that holds 1.6TB in a 2.5in form factor; that's going to make most people remortgage or sell elderly relatives...
I really want one.
How quickly will this baby burn out under write load?
NAND is great, it really is, and as a database professional I totally embrace it - it's a game changer. But they've still not sorted the write ware problem out, MLC especially; we have PCM, which may be the fix, but time will tell.
I've been doing a series of blog articles (http://tinyurl.com/3u2ycpc) on SSD and how it changes how we think in the database world, only on part 3 of what will probably end up a 10 parter :)
Not so much of a problem
These days wear isn't so much of an issue. For example, they say these drives should be able to support 10 full writes a day for five years. With the 1.6TB model, my back-of-the-envelope calculation puts that at nearly 30PB! I think 30PB of writes is enough to cover 99.99% of users. Only really big enterprise is going to go beyond that.
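That back-of-the-envelope figure is easy to check; here's a quick sketch using the endurance spec quoted above (10 full drive writes a day for five years on the 1.6TB model):

```python
# Back-of-envelope endurance for the 1.6TB Optimus, assuming the
# quoted spec of 10 full drive writes per day for 5 years.
capacity_tb = 1.6
writes_per_day = 10
years = 5

total_tb = capacity_tb * writes_per_day * 365 * years
total_pb = total_tb / 1000  # decimal units throughout

print(f"{total_pb:.1f} PB of writes over the drive's rated life")
```

It comes out at 29.2PB, which is where the "nearly 30PB" above comes from.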
Amen to that..
I think it's more than 3 years since the last time I heard that wear was an issue.
The real problem now is that those buggers are near impossible to "clean" if you want to get rid of them and private data is stored on them...
Beer for all those who sell porn-bloated SSDs to female coworkers...
It would be good if that is the case.
Just checked one of my client's servers - a call centre, just a medium-sized business - and one of there databases has done 4TB of writes since the middle of April (5 months). Another has done 14.5TB since the middle of May (3 months); those aren't enterprise installations either. Another, 17TB since the end of June...
So, my point is that in a database environment ware is a consideration; the biggest worry I have is that there are absolutely no tests publicised in this area to see how long they live.
@Tony: write wear
Since your TB written increased as your time range decreased, making your info fairly hard to follow, let's go off the assumption that you write 14.5TB over a 3-month period...
14.5TB x 1.6 (for write amplification) x 1000 (convert to GB) / 120 (120GB Vertex 2 drive size) = 193.33 full drive writes every 3 months. 3000 (expected cell endurance) / 193.33 (full writes from above) * 3 (months: time period) = 46.55 months (3.88 years) expected life span on a 120GB Vertex 2 SSD, not accounting for overprovisioning. Put a couple of 240GB drives in a RAID 1 and you can double that and have redundancy. If you're waiting more than 5 years to replace your primary DB server's disk drives, you don't care enough to bother with SSDs.
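The arithmetic above checks out; a minimal sketch, using the same assumed figures (1.6x write amplification, 3000 P/E cycles per MLC cell):

```python
# Lifespan estimate for a 120GB Vertex 2 under the workload quoted
# above: 14.5TB written over 3 months. Write amplification (1.6x)
# and cell endurance (3000 cycles) are the assumptions from the post.
tb_written = 14.5      # TB over the observation window
write_amp = 1.6
drive_gb = 120
cycles = 3000
months_window = 3

full_writes = tb_written * write_amp * 1000 / drive_gb   # full drive writes per window
months_life = cycles / full_writes * months_window

print(f"{full_writes:.2f} full writes per {months_window} months")
print(f"~{months_life:.2f} months ({months_life / 12:.2f} years) expected life")
```

Doubling `drive_gb` to 240 roughly doubles the result, which is the basis of the RAID suggestion (though see the RAID 1 vs RAID 0 point below).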
Lessons about "wear" and "ware" and "their" and "there" are for another post.
Hard to clean?
It *ought* to be simplicity itself. SATA has boasted a drive-erase command for quite some time, which logically writes the whole drive to zero in one command. Because of the nature of the beast, it takes the drive firmware quite a while to actually do that ... but it can't be interrupted by cutting the power - the erasure resumes as soon as power comes back. Also, one has to take it on trust that the drive firmware really does also zero the bad sectors that have been reallocated (or not care, since it's beyond most ordinary hackers to read them).
So implement that command on SSD firmware (or use it, if it's there). I'd expect that drive-erase on flash memory ought to be an awful lot faster. If I remember right, a flash erase operation is much faster than either read or write.
Using RAID 1 isn't going to double the life, RAID 1 is mirroring so both drives will wear (sorry ware :)) out equally, perhaps you were thinking RAID 0 but with that there is no redundancy.
I've a test running now I've stopped the drive timeouts I was getting, I'll leave it for a few weeks and see what the results are. Am capturing all the logical disk through perfmon so should see any degradation.
Also - they were two entirely different systems I was looking at, neither with enterprise-level writes, which was my point!
"""So implement that command on SSD firmware (or use it, if it's there). I'd expect that drive-erase on flash memory ought to be an awful lot faster. If I remember right, a flash erase operation is much faster than either read or write."""
As far as I've seen, every SSD from the last few years has this, though I haven't looked at any of the really low end consumer controllers. And it is fast. Some SSDs take /up to/ 2 minutes. Some return in seconds. From what I can tell, the speed depends on the architecture of the drive, whether they can use the bulk NAND erase commands or they have to do it page by page.
Plus, as far as I've heard, NAND cells don't have memory like a magnetic bit (Where you can tell that a 1 used to be a 0 because it's slightly less magnetic than a 1 that was a 1 before) so an erase should ultimately destroy everything, without requiring multiple overwrite passes.
Also, the drives which store data encrypted on the NAND chips (If you securely lock the controller, people could pull the chips off and read directly... so nice controllers store things encrypted) generally toss the old key and generate a new one on secure erase. So even if someone read the data, they'd be hard pressed to decrypt it.
I think that the original complaint about securely deleting info is that if you don't want to wipe the whole drive, just a file or two, you're a bit out of luck, since those over writes will land on entirely different blocks. (Store your sensitive files encrypted, maybe?)
If it's under $200, I'm in. If not... meh.
How much for the 1.6TB baby? We're not saying
Because the price would likely trigger instantaneous cardiac arrest in many of El Reg's readers.
Replying to two posts.
@Tony Rogerson: if you do use an SSD you might be able to give a bit of insight into how long it would last. Given it can take 10 full writes a day for 5 years, that's 18,250 full writes, so if you're writing the full capacity of 1600GB that's a total of 29,200,000GB of writes before it reaches a point that could raise the wear issue. I would be interested to see the results.
@Joe User - yeah, it would, but I think I have a few elderly relatives I can sell off; that should make it not so wallet-destroying.
SSDs are catching up
Granted, they are still expensive per MB, but it goes to show it is indeed happening, and in a couple of years we may indeed start to see SSDs replacing HDDs *IF* the price-per-MB ratio is right. Just look at how fast SSDs have increased in capacity compared to the now sluggish and modest increases in their spinning cousins.
Bring it on I say.
I've kicked off a test using IOMeter with a 54GB file, 8KB 100% sequential writes with 8 outstanding IOs, on an OCZ Agility 3 60GB; it's on Windows 2008 R2 and I'm logging the logical disk each minute so I can see any degradation over time. I'll leave it running, come back next weekend and let you know the results.
Yer yer, my spelling is crap, but I more than make up for it technically in SQL Server ;)
Does anybody have any figures in this area? I'm actually doing a project on using commodity SSDs for my masters BI project.
Won't take a weekend
If you're writing data which the drive cannot magic too much (IE don't write blocks of zeros, or repeating patterns) then, under a random (sequential too, probably, but I've never bothered running quite such a useless test) write test, from a fresh drive, you'll inevitably see a drop in performance once you've written one drive capacity (In your case 60GB.) Since the drive should be quite fast while it's fresh (secure erase to re-freshen) this does not take long at all. And since your drive doesn't have much spare capacity (60GB advertised, 64GiB actual, call that 14%, which is low for a SandForce drive,) it should look quite ugly once you've run it out of already-erased blocks. If you run it for a weekend, I imagine the performance you end up with will make you cry a little bit.
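The "call that 14%" spare-area figure above follows from the GiB/GB mismatch; a quick sketch, assuming 64GiB of raw NAND behind the 60GB (decimal) advertised capacity as stated:

```python
# Rough overprovisioning estimate for a 60GB consumer SSD,
# assuming 64GiB raw NAND vs 60GB (decimal) advertised capacity.
raw_gib = 64
advertised_gb = 60

raw_gb = raw_gib * 2**30 / 10**9                      # ~68.7 decimal GB
spare_fraction = (raw_gb - advertised_gb) / advertised_gb

print(f"raw capacity ~{raw_gb:.1f} GB")
print(f"~{spare_fraction * 100:.1f}% overprovisioning")
```

That lands at roughly 14.5%, which, as noted, is on the low side for a SandForce drive; enterprise parts reserve far more, which is exactly why they degrade less once the fresh erased blocks run out.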
This is why we use TRIM. Without TRIM, you'll fill up all but the 14% spare blocks, even at low write speed. With TRIM, any empty space on your filesystem should be available as erased blocks, though if you're writing full speed, the drive probably won't have time to process the TRIMs.
As they say, Pobody's Nerfect - I am kinda cack-handed with my spelling too.
But thanks for posting that reply with the IOMeter details.
Performance gone - knackered in less than a fortnight
Had a sudden drop from 1ms per IO to 11ms per IO (60MB/sec to 5.5MB/sec); another 24 hours later and it's 15ms per IO (4MB/sec).
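Those latency and throughput figures are consistent with each other; a sketch of the conversion, assuming the test parameters from the earlier post (8KB IOs, 8 outstanding):

```python
# Convert per-IO latency to throughput for the IOMeter test above,
# assuming 8KB IOs with a queue depth of 8 (per the earlier post).
def throughput_mb_s(latency_ms, io_kb=8, queue_depth=8):
    iops = queue_depth / (latency_ms / 1000)  # IOs completed per second
    return iops * io_kb / 1000                # MB/sec (decimal)

for lat in (1, 11, 15):
    print(f"{lat}ms/IO -> ~{throughput_mb_s(lat):.1f} MB/sec")
```

This gives ~64, ~5.8 and ~4.3 MB/sec respectively, close to the 60/5.5/4 quoted - a roughly 15x collapse once the drive runs out of pre-erased blocks.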
Going to keep it running - it lasted longer than I thought - but it proves that in a database setting with lots of write activity, wear rate is an issue. Anyway, I'll blog sometime next week once I've got all the data collated.
drives are exciting again.
About time too; CPUs have been meh since they went multi-core instead of high-GHz, and HDDs stopped evolving 15 years ago.
At least there are some exciting and fast-paced improvements to look forward to in IT.