NetApp: Flash as a STORAGE tier? You must be joking

Completing its array-to-the-server flash vision, NetApp is rolling out server flash caching software, reselling Fusion-io server flash cards, and validating seven third-party server flash products with its ONTAP arrays. The company emphatically disagrees with flash as a storage tier, saying it is both simpler and more efficient …

COMMENTS

This topic is closed for new posts.

Flash Accel availability

"Flash Accel is free to download by NetApp customers" - it will be, actually, quite likely in Dec 2012.

Anonymous Coward

Aww how cute!

Problems with terrible IO performance? Shouldn't have bought NetApp then. Get a BlueArc/HDS box and you won't need to worry about some extra acceleration hack.


Re: Aww how cute!

@AC, shame on you!

Server-based flash/SSD cache is about solving a problem that can't be solved in the array - yup, HDS included - the latency created by the interconnects and layers. This is about putting the data as close as possible to the application that needs it, with little or nothing in between. And as much as I love HDS and BlueArc (HDS), I've also seen plenty of issues with IO performance and latency there too, but the reality is that can't be helped; it's the architecture. The AMS/HUS and VSP are just that little bit too big to fit inside a normal-sized computer, damn designers..... mumble.... mumble... can't get the data close enough.... grumble... Flash-based cache for all, then.
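The idea the poster is describing - a read cache living in the server itself, in front of a comparatively slow, remote array - can be sketched as a simple write-through LRU cache. This is purely illustrative (a hypothetical `ServerSideReadCache` class), not how NetApp's Flash Accel or any vendor product is actually implemented:

```python
from collections import OrderedDict

class ServerSideReadCache:
    """Illustrative write-through read cache sitting between an
    application and a (slow, remote) storage array."""

    def __init__(self, backing_store, capacity_blocks=4):
        self.backing = backing_store          # dict-like: block id -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()            # LRU order: oldest entry first
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)  # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]         # slow path: go to the array
        self._insert(block_id, data)
        return data

    def write(self, block_id, data):
        # Write-through: the array stays authoritative, so losing the
        # server-side cache never loses an acknowledged write.
        self.backing[block_id] = data
        self._insert(block_id, data)

    def _insert(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used

array = {n: f"block-{n}" for n in range(10)}
cache = ServerSideReadCache(array)
cache.read(1); cache.read(2); cache.read(1)   # second read of block 1 is a hit
print(cache.hits, cache.misses)               # → 1 2
```

The write-through choice is what makes server-side caching safe for reads: only the read path gets the short-circuit, while writes still land on the array before being acknowledged.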

Anonymous Coward

Re: Aww how cute!

Agree that NetApp is still a nice (if costly) filer with additional features, not an enterprise array... its day in the sun is waning. But what is so great about HDS? HUS looks like a standard dual-controller array with standard RAID, standard management features and standard replication. I am sure it is fine, but not substantially better than, say, VNX. The V7000 from IBM is more differentiated than either from a software perspective (hardware being a wash), as it includes real-time, in-line compression and third-party virtualization with VMotion (or whatever the IBM term is) in a mid-range array. That is interesting. VSP is, again, fine, and there is nothing wrong with it, but there is nothing out of the ordinary either: a standard monolithic array, like VMAX or DS8. HDS has a loyal install base. Why? I seriously want to know.

J.T

"The company emphatically disagrees with flash as a storage tier"

They mean they can't get SSD to work in their arrays in any meaningful performance way. WHOOPS

Serious bonus points for leading with an API, though - but every single all-flash array out there is going to leverage it to be the flash tier that NetApp can't get to work.


Revlon

That'll be the old glam slap flash cache hash, stash & dash.


that'll help

my 88-90% write workload..

or not..

hmm..

I wish my 3PAR had the ability to send all writes to flash by default and tier it down like Compellent !


This post has been deleted by its author

Anonymous Coward

Re: that'll help

It can - you just need some SSD and Adaptive Optimization. All new writes will go to wherever the volume was last located; you can tune it in to SSD and then enable AO. This can be handled on a per-application basis, so it doesn't need to be system-wide, but your small quantity of SSD drives had better outperform your FC drives for writes.

No disrespect to Dell, but Compellent is mid-range at best, so comparing it to 3PAR is not helpful to either platform. You should check out the Dell best practice; they're not so keen on you using the default tiering model once SSD is in the box. The default profile is system-wide, so you have to start changing profiles and manually assigning volumes etc. to avoid the SSD filling up. The devil's always in the detail....

Anonymous Coward

Tiering is overrated

IBM XIV, and similar designs, is the storage array of the future. It's the same approach the Googles, Yahoos and FBs of the world use. Take a bunch of Linux x86 servers, each with its own disk/SSD, cache and CPU. Cluster those servers (or controllers, if you prefer) so that I/O is processed by every node in the array at the same time. Use micro-caching tech so the I/O blocks are sized to the workload or job. Wide-stripe the data across every drive in the array so that it is inherently load-balanced. Done. There is less to argue about and administer, and it works really well. It might not be the answer for ultra-high-IOPS workloads, but those will be run in-memory on the server in the future anyway (e.g. SAP HANA).
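The wide-striping the poster describes - every volume's blocks spread pseudo-randomly across all nodes so load balances itself - can be sketched with simple hash-based placement. This is an illustration of the general idea only, not IBM's actual distribution algorithm (which, among other things, also keeps a mirror copy of each block on a second node):

```python
import hashlib
from collections import Counter

NODES = [f"node-{i}" for i in range(6)]   # hypothetical 6-node cluster

def place(volume, block_no, nodes=NODES):
    """Deterministically map a (volume, block) pair to a node."""
    key = f"{volume}:{block_no}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return nodes[digest % len(nodes)]

# Even one busy volume spreads its blocks across every node,
# so no single controller becomes the hot spot.
load = Counter(place("vol-1", b) for b in range(60_000))
print(sorted(load.values()))   # roughly 10,000 blocks per node
```

Because placement is a pure function of (volume, block), any node can compute where a block lives without consulting a central directory - which is what lets every node service I/O at the same time.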

Anonymous Coward

See comments here for some XIV reality

http://forums.theregister.co.uk/forum/1/2012/05/03/hp_3par_gvgp/


This post has been deleted by its author

This post has been deleted by its author

Anonymous Coward

"See comments here for some XIV reality

http://forums.theregister.co.uk/forum/1/2012/05/03/hp_3par_gvgp/"

What's the point? That XIV doesn't have an SPC-1 benchmark? I wrote that it isn't going to compete for the highest-performance workloads, but most people don't need that kind of performance.

Anonymous Coward

Looking Through the Rear View Mirror

I agree with the earlier poster who suggests they can't get flash as a primary tier to work properly. WAFL was never designed for the kind of latency possible with modern hardware. Time to admit defeat and roll out something new. Proclaiming that flash is not a primary tier won't make it go away!

What about the write-biased workloads out there (VDI, anyone)?


About NetApp performance

Hi all, Dimitris from NetApp here.

It would be nice if the various anonymous posters who seem to speak with such authority divulged who they are and where they work (and even nicer if it were true).

NetApp systems run the largest DBs on the planet, including a bunch of write-intensive and latency-sensitive workloads.

ONTAP/WAFL holds its own just fine: http://bit.ly/Mp4uu1

In addition, it's all about maintaining performance without giving up advanced data management features and tight application integration.

Just as dragster cars are made to race in a straight line for a short period of time, there are arrays that are super-fast but not reliable, and/or lacking rich data management features.

Flash is just another way to get more speed for some scenarios.

Where it's located matters, too.

I don't care what array you have at the back end, at some point it runs out of controller steam.

What if you could have 2000 application hosts each with their own augmented cache that is aware of what's happening in the back-end storage?

Would those 2000 hosts not have more aggregate performance potential than any array can handle?
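The back-of-the-envelope arithmetic behind that claim is simple: even a modest per-host cache hit rate absorbs I/O that never has to reach the array's controllers. The numbers below are made up purely for illustration:

```python
hosts = 2000
iops_per_host = 5_000        # hypothetical per-host demand
hit_rate = 0.8               # assumed fraction served from local flash cache

total_demand = hosts * iops_per_host
absorbed = int(total_demand * hit_rate)
reaches_array = total_demand - absorbed

print(f"total demand:   {total_demand:,} IOPS")   # 10,000,000 IOPS
print(f"served locally: {absorbed:,} IOPS")       # 8,000,000 IOPS
print(f"hits the array: {reaches_array:,} IOPS")  # 2,000,000 IOPS
```

With those assumed numbers, the 2,000 hosts present an aggregate demand no single controller pair could serve, yet the array only ever sees the cache-miss residue.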

D
