11 posts • joined Thursday 1st July 2010 13:53 GMT
Disclosure, EMCer here.
Chris - you probably would expect this from me, but I disagree. Let me make my argument, and let's see what people think. I ask for some patience from the reader, and an open mind. I'm verbose, and I like to explore ideas completely - so this won't be short, but just because something isn't trite doesn't make it less accurate.
Read on and consider!
The choice of "multiple architectures to reflect workload diversity" vs. "try to serve as many workloads as you can with one core architecture" is playing out in the market. Ultimately, while we all have views - the customers/marketplace decide what the right trade-off is.
a) EMC is clearly in one camp.
We have a platform which is designed to "serve many workloads well - but none with the pure awesomeness of a platform designed for specific purpose". That's a VNX. VNX and NetApp compete in this space furiously.
BUT we came to the conclusion a long time ago that if you tried to make VNX fit the space that VMAX serves (maniacal focus on reliability, performance, availability DURING failure events) you end up with a bad VMAX. Likewise, if we tried to have VNX fit the space Isilon fits (petabyte-level scale out NAS which is growing like wildfire in genomics, media, web 2.0 and more) - you end up with a bad Isilon. Why? Because AT THE CORE, you would still have a clustered head. Because AT THE CORE, file/data objects would be behind ONE head, on ONE volume. Because, AT THE CORE, you would still have RAID constructs. Are those intrinsically bad? Nope - but when a customer wants scale-out NAS, that's why Isilon wins almost overwhelmingly over NetApp cluster mode - when THOSE ARE THE REQUIREMENTS.
b) NetApp (a respected competitor, with a strong architecture, happy customers and partners) seems to me to be in the other camp. They are trying to stretch their single product architecture as far as it can go.
They finally seem to be "over the hump" of core Spinnaker integration with ONTAP 8.2. Their approach of federating a namespace over a series of clustered FAS platforms has some arguments in its favour, to be sure. Their code path means they can serve a transactional IO in a clustered model at lower latency than Isilon (but not as fast as in simple scale-up or VNX, and certainly not the next-generation VNX). They can have multiple "heads" for a "scale-out" block proposition to try to compete with HDS and VMAX. In my experience (again, MY EXPERIENCE, surely biased) - the gotchas are profound. Consider:
- With a scale-out NAS workload: Under the federation layer (vServers, "Infinite Volumes"), there are still aggregates, flexvols, and a clustered architecture. This means that when a customer wants scale-out NAS, those constructs manifest - a file is ultimately behind one head. Performance is non-linear (if the IO follows the indirect path). Balancing capacity and performance means moving data and vServers around. Yup, NetApp in cluster mode will have lower latency than Isilon, but for that workload, latency is not the primary design center - simplicity and the core scaling model are.
- Look at the high-end Reliability/Serviceability/Availability workload: In the end, for better or worse, NetApp cluster mode is not a symmetric model with a shared memory space across all nodes (the way all the platforms that compete in that space have been architected). That is at the core of why 3PAR, HDS, and VMAX all have "linear performance during a broad set of failure behaviours". Yup, NetApp can have a device appear across different pairs of brains (i.e. across a cluster), but it's non-linear from port to port, and failure behaviour is also non-linear. Is that OK? Perhaps, but that's a core design center for those use cases.
- And when it comes to the largest swath of the market - the "thing that does lots of things really well" - I would argue that the rate of innovation in VNX has been faster over the last 3 years (due to focus, and to not getting distracted by trying to be things it is not, and was never fundamentally designed to do). We have extended the places where we were ahead (FAST VP, FAST Cache, SMB 3.0, active/active behaviours, overall system envelope), we have filled in the places where we were behind (snapshot behaviours, thin device performance, block-level dedupe, NAS failover, virtualized NAS servers - VDM in EMC-speak, Multistore/vServers in NetApp-speak), and we are accelerating where there are still places to run (the extreme low-end VNXe vs. FAS 2000, larger filesystem support).
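To make the "indirect path" point concrete, here's a toy model - entirely my own sketch, with made-up latency numbers, not any vendor's measured figures. In a federated namespace, an IO that lands on the node that owns the data is served directly; an IO that lands anywhere else pays an extra interconnect hop first. As the cluster grows, the odds of a direct hit shrink, so port-to-port latency becomes non-uniform:

```python
import random

# Assumed toy values - NOT real NetApp/Isilon numbers.
DIRECT_LATENCY_MS = 0.5   # local service time when the target node owns the data
HOP_PENALTY_MS = 0.4      # extra cost of one cluster-interconnect hop

def io_latency_ms(owner: int, target: int) -> float:
    """Latency for an IO sent to node `target` when node `owner` holds the data."""
    if target == owner:
        return DIRECT_LATENCY_MS                 # direct path
    return DIRECT_LATENCY_MS + HOP_PENALTY_MS    # indirect path: one extra hop

def mean_latency_ms(nodes: int, trials: int = 100_000) -> float:
    """Average latency when clients spray IOs evenly across all nodes."""
    rng = random.Random(42)  # fixed seed for repeatability
    total = 0.0
    for _ in range(trials):
        owner = rng.randrange(nodes)
        target = rng.randrange(nodes)
        total += io_latency_ms(owner, target)
    return total / trials

for n in (2, 4, 8, 24):
    print(f"{n:2d} nodes: mean {mean_latency_ms(n):.3f} ms")
```

Only 1/n of IOs take the direct path, so the mean creeps toward the full indirect cost as the cluster grows - which is the non-linearity I'm describing above, and why the architecture you pick should match the requirement.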
Look - whether you agree with me or not as readers - it DOES come down to the market and customers. IDC is generally regarded as the trusted cross-vendor slice of the market - and the Q2 2013 results are in, and public, here: http://www.idc.com/getdoc.jsp?containerId=prUS24302513
Can a single architecture serve a broad set of use cases? Sure. That's the NetApp and EMC VNX sweet spot. NetApp has chosen to try to expand it differently than EMC. EMC's view is that you can only stretch a core architecture so far before you get into strange, strange places.
This is fundamentally reflected in NetApp's business strategy over the last few years. They themselves recognize that a single architecture cannot serve all use cases. Like EMC, they are trying to branch out organically and inorganically. That's why EMC and NetApp fought so furiously for Data Domain (the B2D and cold storage use case does best with that architecture). I suspect that's why NetApp acquired Engenio (to expand into high-bandwidth use cases - like behind HDFS, or some of the video editing that DDN, VNX, and others compete in). The acquisition of Bycast to push into the exa-scale object store space (which biases towards simple no-resiliency COTS hardware) is another example.
On the organic front, while I have ZERO insight into NetApp's R&D - I would suggest that their architectures to enter into the all-flash array space (FlashRay?) would really be best served with a "clean sheet of paper" approach of the startups (EMC XtremIO, Pure Storage, etc) rather than trying to jam that into the "single architecture" way. If they choose to stick with a single architecture for this new "built for purpose" space - well - we'll see - but I would expect a pretty mediocre solution relative to the competition.
Closing my argument....
It is accurate to say that EMC needs ViPR more than NetApp does. Our portfolio is already broader. Our revenue base - and more importantly, our customer base - is broader.
NetApp and NetApp customers can also benefit now - and we appreciate their support in developing ViPR's southbound integration into the ONTAP APIs (I think their customers will appreciate it too). NetApp is already more than a single-stack company. Should they continue to grow and expand into other use cases, they will also need to continue to broaden their IP stacks.
Lastly - ViPR is less about EMC or NetApp, and more a recognition that customers need abstraction and decoupling of the storage control plane and policy REGARDLESS of whom they choose - and that many customers whose needs are greater than the sweet spots of "mixed workload" (VNX and NetApp) have diverse workloads, and diverse architectures supporting them (often multi-vendor).
This is why ViPR is adjacent to, not competitive with, SVC (array in front of array), NetApp V-Series (array in front of array), HDS (array in front of array), and EMC VPLEX and VMAX FTS (array in front of array). These are all valid - but very different - traditional storage virtualization approaches, where they: a) turn the disk from the old thing into just raw storage (which you format before using); b) re-present it out for use. All of these end up changing (for worse or for better) the characteristics of the architecture in the back into the characteristics of the architecture in the front. ViPR DOES NOT DO THAT.
Remember - ultimately the market decides. I could be completely wrong, but hey - innovation and competition is good for all!
THANKS for investing the time to read and consider my argument!
Oracle support position is actually clear, and positive.
@ICS - Disclosure - EMCer here.
I know there's a lot of FUD out there (often from Oracle) re their support and licensing stances when it comes to virtualization.
The formal policy is actually pretty darn reasonable - but isn't what people are TOLD it is.
I did a detailed post (including a screenshot of the authoritative Metalink article) here: http://virtualgeek.typepad.com/virtual_geek/2010/11/oracle-and-vmware-a-major-milestone.html
There's also a lot of confusion re: licensing (claims that you need to license any core the software could run on, and that Oracle won't honour things like DRS Host Affinity settings). Done right, there is absolutely no "virtualization tax" when virtualizing Oracle on VMware, and we're finding people are saving boatloads of money and getting BETTER performance.
Again, I don't want this to seem like an ad, but I also did a video at OOW where we discuss the things that are used to scare customers away from doing the right thing: Performance, Support, Licensing - and of course "are other people doing it?" (answer = YES, about 50% of Oracle users according to the Oracle User Groups). That video is on YouTube here: http://youtu.be/gHyIA454YbQ
Resolving support issues.
Disclosure, EMCer here.
@Alain - to double-up on J.T.'s comment - please escalate.
Actually, let me apologize first - you shouldn't be having support issues, and I'm sincerely sorry to hear that.
If you don't know how, or where, to escalate with your account team - you can contact me. The easiest way to do this while remaining anonymous is to post a comment on my blog (http://virtualgeek.typepad.com). I won't publish your comment, but will direct it internally post-haste. If you can also get me the SR (service request) numbers, I can follow up with the people that gave you unsatisfactory service.
BTW guys - most CLARiiONs are now 3-5 years old, and are pretty aged. And JT has obviously been around the block (not saying there is any issue with any specific component), but as with anything mass-manufactured, when a part fails, it tends to fail in many places/customers at once (if there was a manufacturing issue).
LOL - no spelling mistakes = edit?
Disclosure - EMCer here.
@VeeMan - trust me - that's just how I write and speak :-) Had it come through any approval by marketing - all the technical info would have been cut. And YES, I'm **THAT** verbose (much to my wife's chagrin). If you want evidence of the type of person I am - just read my blog (easy - google "virtual geek"). I've been out there for a while, so who I am is no secret.
I'd quit before being censored in the way you suggest. Coaching, guidance - man, I need that constantly. But changing what I say? Never. My comments are my own - my blog is my own - for better or for worse....
Also - FWIW - there's a lot of "old tapes" in your response. We're pretty active in the benchmarking game - and have been through 2010 (and will continue to be). We've learned and adapted. True - almost all benchmarks (at least in storage land) don't reflect the bizarro-world that is the real world - shared storage subsystems very rarely serve a single type of workload at a given time. That said - the lesson was learnt. People like "record breakers", so - we're doing it constantly now.
Thank you Chris!
Disclosure - EMCer here.
Chris - thank you for posting the comment; it was honourable of you to post it, in my view.
FWIW - while I disagree with the original article, I do think Nexenta did well in their initial participation in the HoL. As per my first comment - these sorts of live mass events are full of danger and problems, and they're a real test of tech and people.
With that said, back to the marketplace battlefield - where there is enough room for broad competition, and broad choice.
(the author of the response) - Chad Sakac (aka virtual geek - http://virtualgeek.typepad.com)
PS, if it seems erudite, overly polite, low on swear count - that's purely because I'm Canadian. Trust me - where I come from, that was a full out furious flame :-)
Disclosure - EMCer here.
@frunkis - indeed, that's your choice - and every customer does indeed make a choice. I don't dispute the validity of a broad set of solutions on the market.
Every customer makes a choice. From just the last month, here's a short set of customers who have shared their public choice of EMC (many of them publicly including which competitors they evaluated):
- English Premier League Club: http://www.emc.com/about/news/press/2011/20110914-01.htm
- Columbia Sportswear: http://www.emc.com/about/news/press/2011/20110830-03.htm
- KPIT: http://www.ciol.com/Storage/Cloud-and-Virtualization/News-Reports/VMware-Cisco-EMC-deploy-Vblock-at-KPIT/153897/0/
- Northrop Grumman, Lone Star College, Northern Hospital of Surrey County: http://www.emc.com/about/news/press/2011/20110831-01.htm
- Heritage Auctions: http://www.emc.com/about/news/press/2011/20110825-01.htm
- Washington Trust: http://www.emc.com/about/news/press/2011/20110823-02.htm
- Texas School District: http://www.emc.com/about/news/press/2011/20110817-02.htm
- Curtin University of Technology, SPAR Group, Elliot Health Systems: http://www.emc.com/about/news/press/2011/20110824-01.htm
- Columbia University: http://www.emc.com/about/news/press/2011/20110816-01.htm
Every customer is unique - so the reasons for every choice are almost as unique.
Look - the point here (at least my point :-) is not "Nexenta bad, NetApp bad (though they seem to have been that way in your view), EMC good". I'm clearly biased. That choice is for every customer to make, and I respect their choices (how can you not?). I'm purely disputing the facts in the article that are incorrect.
As you note - they have a nice UI for OpenSolaris ZFS. And they have ported parts of it to Ubuntu to deal with the outstanding ZFS legal issues and Oracle basically killing OpenSolaris - which is sad, because ZFS (like many things formerly Sun) is, IMO, good technology.
Competition = good. Good for customers, good for everyone.
If there is ever an opportunity for your business, I hope that you'll consider EMC (at least give us a chance to win your former NetApp infrastructure, now Nexenta infrastructure). There's no harm in looking at options, right?
1PB in a rack - good but not great.
Disclosure - EMCer here.
Also - missed it on my earlier comment.
@LarryRAguilar - your point is a good one, and it highlights mine. 1PB in a standard 42U rack is good, and hey - congrats to Aberdeen.
EMC's current shipping dense config is 1.8PB in a standard 42U rack.
And, as per my earlier comment - I'd encourage customers to get multiple quotes on configs - we're all subject to the same market forces :-)
Oh, and BTW, we're not stopping there. While our stuff is based on the same commodity components as the other guys' - customers demand infrastructure with certain capabilities.
When that need stops, we won't need to engineer our own storage processors and enclosures (all built from commodity components). Today, the integrated value that drives customer choice (far more in the software than in the hardware, but some still in the hardware) is something customers value - and the market votes.
Disclosure - EMCer here.
Chris - the VM Volume "advanced prototype" (shown in VSP3205 at VMworld) was a technology preview of this idea, and yeah, it's an important idea, and a disruptive idea.
Anyone who has managed a moderate to large deployment of virtualization knows that the "datastore" construct (on block or NAS storage) is not ideal - the properties of that datastore tend to be shared by ALL the things in it. It would be better if the level of granularity was a VM, but WITHOUT the management scale problem. That's what was shown.
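The granularity point is easy to see in a toy data model - my own illustration, not VMware's actual VM Volume design, with all names made up. With a datastore-level policy, every VM in the datastore inherits the same properties; with per-VM volumes, each VM carries its own policy:

```python
# Datastore model: one policy shared by EVERYTHING placed in the datastore.
datastore_model = {
    "datastore1": {
        "replication": "sync", "tier": "gold",            # one policy...
        "vms": ["prod-db", "test-web", "batch"],          # ...for all three VMs
    },
}

# Per-VM volume model: each VM gets exactly the service level it needs.
vm_volume_model = {
    "prod-db":  {"replication": "sync", "tier": "gold"},
    "test-web": {"replication": "none", "tier": "silver"},
    "batch":    {"replication": "none", "tier": "bronze"},
}

def policy_for(vm: str) -> dict:
    """Look up a VM's own policy - no shared-datastore side effects."""
    return vm_volume_model[vm]

print(policy_for("batch"))
```

In the datastore model, "batch" is stuck paying for sync replication and gold tiering it doesn't need, because the policy lives on the container; in the per-VM model the policy follows the VM, which is the disruptive part of the idea.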
Today, the storage industry (and of course, I personally think that EMC does this more than anyone, and can prove it) is doing all sorts of things to be more integrated (vCenter plugins, making the arrays "aware" of VM objects through bottom-up vCenter API integration, VASA, VAAI, etc.) - but unless something changes, we're stuck with this core problem: VMs are the target object, but LUNs and filesystems kind of "get in the way".
I'm sure that VMware will run it like all the storage programs they have run. The APIs are open, and available to all - but of course, the early work tends to focus on the technology partners supporting the largest number of customers.
More customers use EMC storage with VMware than any other type; and EMC invests more resources and R&D (both by a long shot) - so it's no surprise that the demonstration in the session featured EMC storage so prominently. Pulling something like that off is NOT easy, and a lot of people put a lot of work into it.
For what it's worth - VMware is simply CHANGING what is important to customers and valuable from storage. Certain data services are moving up (policy-driven placement of VMs), certain ones are pushing down (offload of core data movement), and "intelligent pool" models (auto-tiering, dedupe) become more valuable as they map to simpler policy-driven storage use models.
While this was just a technology preview - if it comes to pass - vendors who are able to deliver strong VM Volume implementations, with VM-level policy and automation will become even more valuable.
Just my 2 cents.
EMC - supports SIOC
Disclosure - EMCer here.
Chris, FYI - EMC supports SIOC (in fact, any block storage target on VMware's HCL supports it, as it's a VMware feature, not an array feature - hence our focus on VAAI, which actually requires us to do something to support it).
SIOC is aligned with auto-tiering (from us and others) - SIOC resolves instantaneous contention through prioritized throttling, while auto-tiering resolves the ongoing issue of VMs with different IO loads sharing the same datastore.
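To illustrate the share-based throttling idea, here's a deliberately simplified sketch - my own toy model, not VMware's actual SIOC algorithm, and the threshold/queue numbers are assumed. When observed datastore latency crosses a congestion threshold, each VM's device queue depth is cut back in proportion to its configured shares:

```python
# Assumed values for illustration - not VMware defaults.
CONGESTION_THRESHOLD_MS = 30   # latency trigger for throttling
TOTAL_QUEUE_SLOTS = 64         # total device queue depth to divide up

def throttled_queue_depths(shares: dict, observed_latency_ms: float) -> dict:
    """Per-VM queue depth: full queue when uncontended, share-proportional when not."""
    if observed_latency_ms < CONGESTION_THRESHOLD_MS:
        # No contention: nobody is throttled.
        return {vm: TOTAL_QUEUE_SLOTS for vm in shares}
    # Contention: divide the queue in proportion to shares (minimum 1 slot each).
    total_shares = sum(shares.values())
    return {vm: max(1, TOTAL_QUEUE_SLOTS * s // total_shares)
            for vm, s in shares.items()}

vms = {"prod-db": 2000, "test-web": 1000, "batch": 1000}
print(throttled_queue_depths(vms, observed_latency_ms=12))  # calm: all get 64
print(throttled_queue_depths(vms, observed_latency_ms=45))  # contention: 32/16/16
```

The point of pairing this with auto-tiering: throttling only arbitrates the instantaneous squeeze, while tiering moves the persistently hot data somewhere it stops causing the squeeze in the first place.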
A bit more commentary
Disclosure - EMCer here...
Chris - thanks for the article.
I did a followup post here: http://virtualgeek.typepad.com/virtual_geek/2010/07/understanding-what-were-doing-with-atmos.html
(and also an update to the original post you linked to).
Hope it clarifies what we're thinking. Everyone makes boo-boos, I think this is a move in the right direction.