Don't bother competing with ViPR, NetApp - it's not actually that relevant

NetApp doesn't need to compete with EMC's all-singing all-dancing ViPR product. Why not? NetApp only has one storage array product – ONTAP – whereas EMC has an entire family of arrays, which ViPR virtualises into a single resource and delivers to server apps needing storage. Look at it like this: the EMC storage array product …


This topic is closed for new posts.

That's not my interpretation of ViPR

My interpretation of ViPR is that it's the equivalent of self-service for internal infrastructure organizations. If the VMware guys need more storage, they don't have to talk to the storage guys; they can just do it via a self-service portal (self-service for other infrastructure guys, not end users). If the Windows admin guys need more storage for a server, they can do it via the self-service portal; unix guys, etc. So whether you have one storage platform or 50x, it doesn't matter, as it's the self-service portal portion that they are bringing to the table.

It's not really that new a concept (VMware and Oracle have had the ability to carve up storage from an array for years now), but it's something I believe most places have been very wary of allowing non-storage guys to do... I'll have to wait and see whether it brings the proper controls to let me feel comfortable allowing non-storage guys to provision directly from the arrays.
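Conceptually, the kind of guardrails that would make self-service palatable to a storage team look something like the toy sketch below. Everything here is hypothetical (the `QUOTAS`/`ALLOWED_TIERS` tables and `provision()` function are my own illustration, not any vendor's API); it only shows the shape of the controls, policy checks before any array call is made.

```python
# Toy sketch of self-service provisioning with guardrails.
# All names here are illustrative, not ViPR's (or anyone's) real API.

QUOTAS = {"vmware-team": 500, "windows-team": 200}                 # GB per team
ALLOWED_TIERS = {"vmware-team": {"sas", "nl-sas"}, "windows-team": {"nl-sas"}}

used = {"vmware-team": 350, "windows-team": 50}                    # GB already consumed

def provision(team, size_gb, tier):
    """Validate a self-service request before touching the array."""
    if tier not in ALLOWED_TIERS.get(team, set()):
        return (False, f"tier '{tier}' not permitted for {team}")
    if used.get(team, 0) + size_gb > QUOTAS.get(team, 0):
        return (False, "quota exceeded - escalate to storage team")
    used[team] += size_gb
    # only now would the portal call the array's southbound API to create the volume/LUN
    return (True, f"provisioned {size_gb} GB of {tier} for {team}")

print(provision("vmware-team", 100, "sas"))   # within the 500 GB quota -> allowed
print(provision("vmware-team", 100, "sas"))   # would exceed the quota -> rejected
```

The point is that the policy check runs before the array ever sees the request, which is what would let a storage team hand the button to the VMware or Windows guys without losing control.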


I don't see how in the world NetApp thinks ONTAP can do what they want under a single OS. The thought of all these new features being bolted on, patched in, etc. onto an existing platform is a recipe for disaster IMO. It won't perform, won't scale, and won't be easy to manage.

There is a reason Isilon is so good at what it does. There is also a reason VMAX/Symmetrix, HDS VSP, HP 3PAR, etc. are so good at what they do: purpose-built technologies. The thought that a single set of controllers will do all of the tasks mentioned under ONTAP, as a single point of failure (like how they try to sell me on using snapshots for backups), just scares me, and it should scare anyone listening to their fodder.


As opposed to trying to apply an Elastoplast to make EMC's product range look like a unified solution (like VNX)? Or perhaps the terrible SMB implementation on Isilon?

Sorry, but your comment betrays your lack of knowledge and experience of OnTap. It works, and works well, because of its relatively simple architecture. Oh, and snapshots work fine (and not just for NetApp - ask Dell, HP, Amazon and others).

You argue single point of failure; I'd argue less to go wrong. Having been on the sharp end of both, believe me: at 2am, OnTap is easier to get up and running.


Speaking in tongues

EMC's issue is that they speak a range of protocols, storage modes and interfaces, as well as management options. ViPR is an attempt to converge that management, filling a huge void in their offering.

NetApp has one solution, but it is clearly the leader in its class. Adding object storage will make them really solid, though StorageGRID may not be the answer.

In both cases, they are a big step away from Unified Storage on the Ceph model. This is the real future of storage, and I understand that EMC has announced a Unified platform just today. I think they perceive a real risk in the blockIO business and are protecting themselves against a relatively rapid transition to more scalable solutions.


Actually, NetApp has done quite well with the concept of a 'single' OS. The OS isn't really that big of a deal; it's WAFL that allows them to do what they do, and do it very efficiently. With the new version of Clustered Data ONTAP, the OS is more like VMware than any typical OS. Now it runs virtual versions of the OS across physical servers, making it even more stable than ever before.

I also don't think anything is 'bolted on', as you imply. The features are actually services/daemons, which is no different than a very powerful UNIX server running Oracle and web software at the same time. The OS is optimized to run those services extremely efficiently and is able to scale incredibly well.

While you say that Isilon is so good, I would argue that NetApp has done incredibly well, given the success of the company as the second-largest storage company in the world. Clustered Data ONTAP is exactly like VMware ESX, on which many businesses run 20-40 virtual operating systems (Windows, Linux, UNIX, etc.) under a single set of controllers, and I'm sure you're using ESX.


Hi unredeemed,

Can you please advise which specific workloads HDS VSP, HP 3Par etc are good for?


You realize that NetApp does exactly what you're saying, right? And that there are more copies of Data ONTAP installed in the world than any other storage OS?

I'm *pretty sure* this means they know what they're doing.


Respectfully, disagree.

Disclosure, EMCer here.

Chris - you probably would expect this from me, but I disagree. Let me make my argument, and let's see what people think. I ask for some patience from the reader, and an open mind. I'm verbose, and like to explore ideas completely, so this won't be short; but just because something isn't trite doesn't make it less accurate.

Read on and consider!

The choice of "multiple architectures to reflect workload diversity" vs. "try to serve as many workloads as you can with one core architecture" is playing out in the market. Ultimately, while we all have views - the customers/marketplace decides what is the right trade off.

a) EMC is clearly in one camp.

We have a platform which is designed to "serve many workloads well - but none with the pure awesomeness of a platform designed for specific purpose". That's a VNX. VNX and NetApp compete in this space furiously.

BUT we came to the conclusion a long time ago that if you tried to make VNX fit the space that VMAX serves (maniacal focus on reliability, performance, availability DURING failure events) you end up with a bad VMAX. Likewise, if we tried to have VNX fit the space Isilon fits (petabyte-level scale out NAS which is growing like wildfire in genomics, media, web 2.0 and more) - you end up with a bad Isilon. Why? Because AT THE CORE, you would still have a clustered head. Because AT THE CORE, file/data objects would be behind ONE head, on ONE volume. Because, AT THE CORE, you would still have RAID constructs. Are those intrinsically bad? Nope - but when a customer wants scale-out NAS, that's why Isilon wins almost overwhelmingly over NetApp cluster mode - when THOSE ARE THE REQUIREMENTS.

b) NetApp (a respected competitor, with a strong architecture, happy customers and partners) seems to me to be in the other camp. They are trying to stretch their single product architecture as far as it can go.

They finally seem to be "over the hump" of the core Spinnaker integration with ONTAP 8.2. Their approach of federating a namespace over a series of clustered FAS platforms has some arguments in its favour, to be sure. The code path means they can serve a transactional IO in a clustered model at lower latency than Isilon (but not as fast as in simple scale-up or VNX, and certainly not the next-generation VNX). They can have multiple "heads" for a "scale out" block proposal to try to compete with HDS and VMAX. In my experience (again, MY EXPERIENCE, surely biased) the gotchas are profound. Consider:

- With a scale-out NAS workload: under the federation layer (vServers, "Infinite Volumes"), there are still aggregates, flexvols, and a clustered architecture. This means that when a customer wants scale-out NAS, those constructs manifest: a file is ultimately behind one head. Performance is non-linear (if the IO follows the indirect path). Balancing capacity and performance means moving data and vServers around. Yup, NetApp in cluster mode will have lower latency than Isilon, but for that workload, latency is not the primary design center; simplicity and the core scaling model are.

- Look at the high-end Reliability/Serviceability/Availability workload: In the end, for better or worse, NetApp cluster mode is not a symmetric model, with shared memory space across all nodes (the way all the platforms that compete in that space have been architected). That is at the core of why 3PAR, HDS, VMAX all have "linear performance during a broad set of failure behaviours". Yup, NetApp can have a device appear across different pairs of brains (i.e. across a cluster), but it's non-linear from port to port, and failure behavior is also non-linear. Is that OK? Perhaps, but that's a core design center for those use cases.

- And when it comes to the largest swath of the market, the "thing that does lots of things really well": I would argue that the rate of innovation in VNX has been faster over the last 3 years (due to focus, and not getting distracted by trying to be things it is not, and was never fundamentally designed to do). We have extended the places where we were ahead (FAST VP, FAST Cache, SMB 3.0, active/active behaviors, overall system envelope), we have filled in the places where we were behind (snapshot behaviors, thin device performance, block-level dedupe, NAS failover, virtualized NAS servers - VDM in EMC-speak, Multistore/vServers in NetApp-speak), and we are accelerating where there are still places to run (the extreme low-end VNXe vs. FAS 2000, larger filesystem support).

Look - whether you agree with me or not as readers - it DOES come down to the market and customers. IDC is generally regarded as the trusted cross-vendor slice of the market - and the Q2 2013 results are in, and public, here: http://www.idc.com/getdoc.jsp?containerId=prUS24302513

Can a single architecture serve a broad set of use cases? Sure. That's the NetApp and EMC VNX sweet spot. NetApp has chosen to try to expand it differently than EMC. EMC's view is that you can only stretch a core architecture so far before you get into strange, strange places.

This is fundamentally reflected in NetApp's business strategy over the last few years. They themselves recognize that a single architecture cannot serve all use cases. Like EMC, they are trying to branch out organically and inorganically. That's why EMC and NetApp fought so furiously for Data Domain (the B2D and cold storage use case does best with that architecture). I suspect that's why NetApp acquired Engenio (to expand into high-bandwidth use cases, like behind HDFS, or some of the video editing that DDN, VNX, and others compete in). The acquisition of Bycast to push into the exa-scale object store space (which biases towards simple no-resiliency COTS hardware) is another example.

On the organic front, while I have ZERO insight into NetApp's R&D - I would suggest that their architectures to enter into the all-flash array space (FlashRay?) would really be best served with a "clean sheet of paper" approach of the startups (EMC XtremIO, Pure Storage, etc) rather than trying to jam that into the "single architecture" way. If they choose to stick with a single architecture for this new "built for purpose" space - well - we'll see - but I would expect a pretty mediocre solution relative to the competition.

Closing my argument....

It is accurate to say that EMC needs ViPR more than NetApp does. Our portfolio is already broader. Our revenue base, and more importantly our customer base, is broader.

NetApp and NetApp customers can also benefit now - and we appreciate their support in developing ViPR's southbound integration into the ONTAP APIs (and I think their customers will appreciate it too). NetApp is already more than a single-stack company. Should they continue to grow and expand into other use cases, they will also need to continue to broaden their IP stacks.

Lastly - ViPR is less about EMC or NetApp than it is a recognition that customers need abstraction and decoupling of the storage control plane and policy REGARDLESS of whom they choose - and that many customers whose needs are greater than the sweet spots of "mixed workload" (VNX and NetApp) have diverse workloads, and diverse architectures supporting them (often multi-vendor).

This is why ViPR is adjacent to, not competitive with, SVC (array in front of array), NetApp V-Series (array in front of array), HDS (array in front of array), and EMC VPLEX and VMAX FTS (array in front of array). These are all valid - but very different - forms of traditional storage virtualization, where they: a) turn the disk from the old thing into just raw storage (which you format before using); b) re-present it out for use. All of these end up changing (for worse or for better) the characteristics of the back-end architecture into the characteristics of the front-end architecture. ViPR DOES NOT DO THAT.
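To make that control-plane-vs-data-path distinction concrete, here is a toy sketch (my own illustration, not ViPR's actual object model): the controller calls vendor-specific southbound drivers only to provision, then hands the host the array's own native endpoint, so IO never flows through the abstraction layer and the back-end array keeps its own characteristics.

```python
# Toy model of a control-plane-only storage abstraction.
# Class and endpoint names are hypothetical, for illustration only.

class SouthboundDriver:
    """One driver per array family; only control operations live here."""
    def create_volume(self, size_gb):
        raise NotImplementedError

class OntapDriver(SouthboundDriver):
    def create_volume(self, size_gb):
        # a real driver would call the array's APIs; it returns the NATIVE endpoint
        return {"protocol": "nfs", "endpoint": "filer1:/vol/vol_new", "size_gb": size_gb}

class VmaxDriver(SouthboundDriver):
    def create_volume(self, size_gb):
        return {"protocol": "fc", "endpoint": "wwn:50000972c0012345", "size_gb": size_gb}

class ControlPlane:
    """Picks a driver by policy; never sits in the data path."""
    def __init__(self, drivers):
        self.drivers = drivers
    def provision(self, policy, size_gb):
        return self.drivers[policy].create_volume(size_gb)

cp = ControlPlane({"file-tier": OntapDriver(), "block-tier": VmaxDriver()})
vol = cp.provision("file-tier", 100)
# the host mounts filer1:/vol/vol_new directly; the control plane is out of the IO path
```

Contrast this with "array in front of array" virtualization, where every IO passes through the front device and inherits its behavior.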

Remember - ultimately the market decides. I could be completely wrong, but hey - innovation and competition is good for all!

THANKS for investing the time to read and consider my argument!


EMC may have good kit

But they are expensive and the sales team don't listen.

I'm currently in the middle of a "storage off" with 6 vendors; only one will succeed. EMC failed at the first hurdle.

The brief is/was clear: 150TB of unified storage, 1TB or thereabouts of flash, 15TB of 15K (currently we have 75 spindles of 15K), the rest NL-SAS. NFS/CIFS at 10Gb, block on 8Gb FC.

What was the spec they recommended? 3x100GB SSDs and 15x600GB 15K drives. The rest of the space with ~40 spindles of 3TB NL-SAS.

The cost? ~£300k

plus I have to buy their stupid branded rack, even after I expressly told them no.

When I quizzed them, they effectively shrugged.



NetApp makes some great storage servers, at a high price. NetApp servers run FreeBSD.

There are ZFS-based storage servers running OpenSolaris that are much cheaper than NetApp. Sure, ZFS is not clustered, but if you don't need clustered storage, then ZFS will provide ample resources at a very low price. Check these ZFS benchmarks vs. NetApp servers. ZFS is 32% faster, and NetApp is 10x more expensive:


And also, ZFS beats EMC / Isilon, NetApp, etc:



There are other ZFS vendors as well: Tegile, GreenByte, Nexenta, etc



My links are messed up; check them to find NetApp 10x more expensive, for slower performance.


Benefits and Tradeoffs to tightly coupled scale-out vs Distributed archs

Without a doubt there are benefits and tradeoffs between these types of architectures. Let's examine a few, from someone who is NOT a vendor, although I am biased.

1) I don't know about you, but I'd prefer my failure domain to absolutely be "non-linear", as is the case for NetApp's Clustered ONTAP (tightly coupled) vs. Isilon's (distributed). In the former case a dramatic failure will be isolated to the specific domain; in the latter case it will affect the entire cluster. Conceptually, this is similar to creating FC zones. Did you dump everything into a single zone, or did you isolate and logically partition the fabric? Why? Because we wanted smaller logical failure domains.

2) Performance - Aggregating performance across multiple nodes with file striping and load balancing is certainly important; however, I would claim that this was an essential component 10-12 years ago, with older nodes with limited CPU power, limited amounts of memory and slower buses. That's not the case any longer, as is evident from the VNX 2 benchmark. With the advent of multi-core CPUs, memory in the hundreds of GBs per node, multiple PCIe buses and flash, a single node will rarely be the bottleneck for a LUN or a file.

3) Is striping the answer to good performance? - Well, VMware taught us that it's not, and certainly I don't see anyone complaining that a single VM is not striped across multiple vSphere nodes. What VMware has taught us is that intelligence, data mobility, fine-grained control, QoS and workload isolation provide more benefits and better performance than just cross-node striping. How different is Clustered ONTAP from this concept? I'd say it's not.

4) HW upgrade path - What does it take to do a HW upgrade on a distributed scale-out architecture? Well, cash. You need to upgrade all the nodes that comprise it, in one shot! Do tightly coupled architectures have this requirement? They don't.

5) SW upgrade path - What does it take to do a SW upgrade? See #4.

6) Node failure - Can a node failure impact performance on the entire distributed architecture? It sure can. How about a SW failure? It can as well.

7) Isolation - Do distributed architectures provide complete workload isolation (SW and HW)? Unfortunately they can't.

Does this mean distributed architectures such as Isilon are not good? Absolutely not. What it means is that they need to be positioned for the environments for which they were initially developed.
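The failure-domain argument in point 1 can be sketched with a back-of-envelope toy model (my own simplification, not either vendor's numbers): in an HA-pair design a node failure degrades only that pair's share of the data, while with wide striping any node's failure can touch every file whose stripe crosses it.

```python
# Toy blast-radius model: fraction of data potentially affected by one node failure.
# Pure illustration of the tightly-coupled vs. distributed tradeoff, not vendor math.

def blast_radius_ha_pairs(num_pairs):
    """Tightly coupled HA pairs: a failed node degrades only its pair's share."""
    return 1.0 / num_pairs

def blast_radius_striped(stripe_width, num_nodes):
    """Wide striping: chance a given file's stripe (placed uniformly) includes
    the failed node, i.e. the fraction of files potentially affected."""
    return min(1.0, stripe_width / num_nodes)

print(blast_radius_ha_pairs(4))      # 4 HA pairs: a quarter of the data affected
print(blast_radius_striped(8, 8))    # stripe spans all 8 nodes: every file touched
print(blast_radius_striped(4, 16))   # narrower stripes on more nodes shrink the radius
```

The flip side, of course, is that wide striping is exactly what buys aggregate throughput and rebalancing, which is the tradeoff the whole list above is describing.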

VNX-VMAX - The Symm came into the picture in the early '90s: a wonderful, rock-solid array with thousands of satisfied customers, well ahead of its time, and the company's cash cow for years. If I were EMC, would I consider replacing the current version with a VNX? Would you do it? I don't think so. Instead, I'd milk that cow until she was bone dry. Customers are happy with it, EMC's happy, and, with the company being publicly traded and a for-profit organization, so are its shareholders.

VNX 2 - Active-Active - I think this option is great, and I'd have liked to see more vendors go down that route (do you hear, NetApp?), but what does this mean if I'm a VNX customer? Does it mean I have to bypass my pools and use FLARE LUNs only? And what about the space efficiencies tied to pools? Do I lose them?

EMC can't on one hand beat the drum of scale-out with Isilon, XtremIO, VMAX and ScaleIO and at the same time extol the benefits of the VNX 2 scale-up architecture. Why not have them both? Why not provide customers with the option of a dual-controller system that can scale out? You can still sell them 2 controllers if that's what they need.

Does anyone doubt that, if EMC were to truly rewrite the VNX codeline and not just update 10% of it, the VNX would make the transition to scale-out? I don't.

As far as ViPR is concerned, what actual need does it serve? Well, if you're an EMC customer there's a good chance you will need ViPR. EMC can position it any way they want to, but at the end of the day, the primary need it serves is data services enablement and management across several different platforms. If I'm an EMC customer in this position, I will, without a doubt, consider ViPR to centralize data services and management functions, and that's a good thing.


Re: Benefits and Tradeoffs to tightly coupled scale-out vs Distributed archs

Disclaimer: I work for NetApp, but these are my opinions.

EMC needs ViPR because their customers need it. When you have as broad and immiscible a product portfolio as EMC does, there needs to be at least some common ground for management and administration. Frankly, I think ViPR's use cases would be a lot clearer had EMC not chosen to go with the industry's latest buzzword - i.e., "software-defined" - but they're certainly not the only storage vendor out there using that moniker.

Your quote that "[at] the end of the day, the primary need it serves is data services enablement and management across several different platforms" is the most concise description I've read for ViPR's existence.


Good Response From virtualgeek

A well thought through and eloquent response from VirtualGeek, not something you often see on here.

I’m interested to see which way the market goes. If you can use ViPR to provision pools of storage you have created yourself using commodity hardware, will organizations do it?

Personally, I wouldn't run my T1 apps on commodity hardware that hasn't had the rigorous testing that large storage vendors put their hardware through, but I can definitely see it being used for test and dev, or by small organizations with in-house IT and the time to tinker.


ZFS more common than NetApp and EMC Isilon combined:


"...We [Nexenta] alone have half as much storage, we figure, under management as NetApp claims. Add Oracle and you’re already bigger than any one-storage file system. Add all Solaris and illumos deployments on top of that and you are 3-5x larger than NetApp’s OnTap. In fact, the number of ZFS users is larger than those using NetApp’s OnTap file system and EMC’s Isilon file system combined...."
