Dara?
What should we 'Reference dara' for?
FFS - PROOF READ
The all-flash data centre: does it exist? Will it ever? Can we really imagine a data centre with no spinning disks and all the executing apps relying on solid-state storage? Such is the vision put forward by Violin Memory in its "Disk is Dead" campaign. But is this the future of the data centre? Violin chief marketeer Amy …
I don't have an email program set up on this laptop, so the Tips and Corrections link is useless; pity it isn't a form instead. I've sent corrections from my main PC in the past which got ignored, so what's the point? El Reg has taken a dip in the quality of its stories recently, which isn't helped by sloppy typing or a lack of checking. A pendant? Yes, a bit. An ass? Hee-haw, hee-haw, he always calls me donkey.
While most customers may not "require" flash for all of their online data, the economics of the media will dictate what is available.
cMLC NAND is reaching a $/TB price parity with 2.5"/SFF 15K RPM SAS drives. 3D cMLC NAND will likely reach $/TB price parity with 2.5"/SFF 10K RPM SAS drives. 3D TLC NAND will likely reach $/TB price parity with 2.5"/SFF 7.2K RPM SATA drives.
At the same time, 3.5"/LFF SATA will continue to increase in density via SMR. But SMR kills random write performance, which will limit SMR to sequential data access pattern apps.
Because of these trends, I believe storage will bifurcate, with a high performance tier of 3D cMLC and TLC media, and a cold/object tier of SMR LFF SATA media. The 2.5"/SFF 7.2K RPM SATA may emerge as part of a hybrid middle tier.
Object storage systems could be built with metadata stored on SSD, and data stored on SMR SATA.
Additionally, an aggressively hybrid storage system with a large flash tier and SMR back end might be useful for file shares and similar types of use cases.
The next battle will be for the middle performance tier. Can flash players create systems using high density 3D TLC (or perhaps QLC) NAND and aggressive efficiencies to squeeze capacity out of flash to provide storage cheap enough for fileshares? Or will hybrid players create storage systems which can squeeze performance out of SMR SATA to support fileshares?
Chris must be feeling a need to defend the drive vendors after the huge drop in enterprise disk demand this last quarter. 15K RPM drives are dead already and 10K seems destined for the same fate perhaps by the end of this year, though we'll always find people with nostalgic desires to use spinning rust. Violin is essentially right for primary tier storage!
That leaves the thorny question of what happens to the bulk storage tier. The "media evolution" comment above is close to the mark: economics will decide, and if there is a sure profit, the market will fund the foundries. With prices closing fast, the economics start to become compelling even before price parity is reached.
But will we need Chris's trillions? Data compression and deduplication will reduce space demand by as much as 5x. And once 3D NAND gets out of the lab properly, the incremental cost of adding 2x or 4x the layer count will be small. New error correction will move the sweet spot to TLC.
That $1 trillion now looks like $40 billion, and possibly less! That's not much given the size of the market.
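The arithmetic behind that jump can be made explicit. A minimal sketch, taking the commenter's own assumptions (a ~5x saving from data reduction and roughly another ~5x from cheap 3D layers plus the move to TLC; neither is a measured figure):

```python
# Back-of-envelope check of the $1 trillion -> $40 billion claim above.
# Both 5x factors are the commenter's assumptions, not measured data.
raw_bill = 1e12          # $1 trillion at today's raw flash $/TB
data_reduction = 5       # compression + deduplication
cost_decline = 5         # 3D NAND layer scaling + TLC sweet spot
effective_bill = raw_bill / (data_reduction * cost_decline)
print(f"${effective_bill / 1e9:.0f} billion")  # -> $40 billion
```

The two 5x factors multiply, which is why a modest-sounding pair of assumptions shrinks the bill by 25x.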
Don't drink the one that smells like methanol. It's having some negative cognitive effects, mate.
Bulk storage is not going to be "flash" in our lifetimes. A post-flash technology may displace disk eventually, but we don't have the planetary capacity to replace even tier 1 storage today, and probably won't for decades.
So, no. Bulk storage is magnetic. Forever and ever, until a radically new technology that isn't so damned hard to make is found, amen.
When I read this
"Violin chief marketeer Amy Love blogs a colourful metaphor. “HDDs are being replaced as the primary IT data storage medium.”"
I see the words "primary IT data storage medium", and to me that doesn't mean archive or nearline; it means transaction processing. I think everyone agrees flash has already taken over a lot of that today, and it won't be long until it takes over the rest (the only thing holding it back right now, I think, is simple refresh cycles on acquiring new equipment).
You clearly didn't get the propaganda emails proclaiming the death of disk and the rise of the all-flash datacenter. You would have to be a political consultant to read those and come away with the sense that they were saying anything other than that the death of disk everywhere, in all datacenters, was both inevitable and would occur soon.
Really, the propaganda e-mails and press releases were that bad.
It's misleading to compare the price/GB of compressed and deduped flash with the price of raw 15K disk and say it's the same.
Especially at the figures marketing tend to pull out of their asses with regard to the compressibility and deduplicability (might be a word) of data. You can compress and dedupe on any medium, so let's just say that flash is still significantly more expensive, eh?
Flash isn't more expensive than 15K hard drives. It's most definitely the other way round. A 1TB flash drive is around $360 today. Sure, it isn't the fastest "enterprise" flash drive, but it's still 1000x faster on random IOPS and 5x on sequential.
A 500GB 15K HDD is around $650.
Puts things in a different perspective!
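Taking the two street prices quoted above at face value (they are this commenter's figures, not benchmarked quotes), the raw $/GB gap works out like this:

```python
# Raw $/GB from the prices quoted in the comment above.
flash_per_gb = 360 / 1000      # $360 for a 1TB flash drive
hdd_15k_per_gb = 650 / 500     # $650 for a 500GB 15K HDD
print(f"flash:   ${flash_per_gb:.2f}/GB")      # $0.36/GB
print(f"15K HDD: ${hdd_15k_per_gb:.2f}/GB")    # $1.30/GB
print(f"HDD premium: {hdd_15k_per_gb / flash_per_gb:.1f}x")  # 3.6x
```

On those numbers the 15K drive costs roughly 3.6x more per gigabyte before any compression or dedupe is applied to either side.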
>Flash isn't more expensive than 15K hard drives.
I'm quoting the article:
"We should note that Violin's raw $/GB cost is higher than 15K disk, but after deduplication and compression are applied, the effective $/GB cost is similar to 15K disk, even below it, depending upon the data reduction ratio. Intuitively, these customers are storing any immediate-access data, like mail, on flash too."
Also, a comparison of the cost of a random 15K drive vs a random "non-enterprise" flash drive means nothing; you have to look at a bunch of factors: MTBF, write endurance, power usage, warranty and so on.
For instance, the Samsung 850 Pro 1TB drive, which is about £350, "has enhanced endurance, built to handle a 40GB workload daily". Which is fine if you only intend to write 40GB daily to it, but what if you want to write 400GB? Then you'll need an actual enterprise SSD, and those are many times more expensive.
Regular hard drives don't have a write endurance rating, just an MTBF, so that isn't a factor for them.
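The 40GB/day figure above translates into drive-writes-per-day (DWPD), the usual SSD endurance metric. A minimal sketch (the 5-year term and the 400GB/day workload are illustrative assumptions drawn from the comment, not vendor specs):

```python
# Endurance arithmetic for the "40GB workload daily" figure above.
capacity_gb = 1000          # 1TB drive
rated_daily_gb = 40         # vendor's "40GB workload daily"
dwpd = rated_daily_gb / capacity_gb
print(f"rated endurance: {dwpd:.2f} drive writes per day")  # 0.04 DWPD

years = 5                   # illustrative warranty/service term
lifetime_tbw = rated_daily_gb * 365 * years / 1000
print(f"~{lifetime_tbw:.0f} TB written over {years} years")  # ~73 TBW

heavy_daily_gb = 400        # the hypothetical 400GB/day workload
print(f"400GB/day needs {heavy_daily_gb / capacity_gb:.1f} DWPD")  # 0.4 DWPD
```

Enterprise SSDs are typically rated at 1 DWPD or more, which is why a tenfold jump in daily writes pushes you out of the consumer bracket.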
Except regular hard drives do seem to wear out after around four to five years. The failure rate tends to climb, sometimes steeply. So much for being better than flash!
Please put all your vital data on consumer SSDs. Pretty please. Then just wait for it. I want to see how happy you are after the first power event.
You *might* have to replace your HDD after 4 years, but you *will* have to replace your SSD when the write endurance is up, and SSD warranties are being given in "data written", vs "years" for a traditional HDD.
What's your source for 15k enterprise drive failure rates?
Fundamentally, the article is artificially reducing the price of the Violin SSD storage by factoring in dedupe and compression, then comparing it to (unnamed, undetailed) 15k disk. Given that nothing much is detailed by the article in terms of comparative setups (EMC array with 15k disk vs Violin array with SSD for instance?) it isn't exactly heavy on facts and just seems to be a puff piece on Violin.
How does the fact that scarcity tends to drive up prices, that "nobody" would buy 15k disks anymore, and that flash can only supply 12% of storage demand fit into your world view?
I applaud the Register for calling out this debate about the all-flash data center replacing disk. It never ceases to amaze me that we continue to debate technology instead of how to solve customer problems. In my opinion, two things drive changes in the market: customer needs and economics. Customers need different media for different types of problems, and they adopt different media at different rates due to the pain of the issue and the cost of the solution. Marketing the disk-free data center is purely a vendor ploy and not in the interests of the customer.
Gareth Taube, VP Marketing at Infinidat
- an all-flash data centre would save energy, right?
- it consolidates storage arrays; no extra arrays for dev, QA etc. because of the IOPS headroom
- you only need to address geo risk and media breakline redundancy
- a flash unit is at least (!!!) 3.3 times more cost-effective for storing tier 1 data (average data reduction factor of 5; $1K per 1TB eMLC vs $650 per 0.6TB 15K HDD)
- economics and the investment life cycle set the pace
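The "3.3 times" bullet can be reconstructed from the figures given in that list (the 5x reduction factor is the commenter's assumption). Note that if you also credit the HDD's smaller 0.6TB capacity, the gap comes out wider than 3.3x:

```python
# Working out the "at least 3.3x" cost-effectiveness claim above.
flash_per_tb_raw = 1000    # $1K per 1TB eMLC (commenter's figure)
reduction = 5              # assumed average data reduction factor
flash_per_tb_eff = flash_per_tb_raw / reduction   # $200 per effective TB

hdd_price, hdd_tb = 650, 0.6                      # $650 per 0.6TB 15K HDD
hdd_per_tb = hdd_price / hdd_tb                   # ~$1083/TB

# Comparing the HDD's sticker price against effective flash $/TB:
print(f"{hdd_price / flash_per_tb_eff:.2f}x")     # 3.25x ("at least 3.3x")
# Comparing true $/TB on both sides widens the gap:
print(f"{hdd_per_tb / flash_per_tb_eff:.1f}x")    # 5.4x
```

So "at least 3.3x" appears to be the conservative reading; normalising both sides to $/TB gives closer to 5.4x on the same inputs.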
1) Tier 1 data is not "the whole datacenter"
2) Good luck finding enough flash to do even tier 1 in a large enterprise.
3) Most "Tier 1" workloads are fine on 15K SAS, so "Tier 0" is now the new term for "needs flash".
And you still haven't addressed how, exactly, we're supposed to physically manufacture enough flash to meet the world's tier 1 storage demands. We simply can't do it today. We cannot do it. And that's even if we turned all our fabs over to 100% flash production and completely ceased manufacturing new DRAM!
Sorry mate, but you're living in a fantasy world. Some companies can get away with flash across the entire Tier 1. Most can't. And only the smallest of the small will ever have a 100% all-flash datacenter.
This isn't going to get solved until after we're well into post-flash technologies.