20 posts • joined 1 Jul 2010
Re: virtualgeek What did you just say about vSAN 2.0?
Disclosure - EMCer here (Chad)
Matt - I absolutely agree that first and foremost is what are WE doing.
Our 100% focus has been the customer through this. Again, I'm not going to convince any haters, but this is the essence:
a) We have a rapidly growing user base of XtremIO.
b) We had features we knew would require more metadata than we had in the current generation of hardware (compression, and performance improvements).
c) The internal debate was long, and hard. Should we deliver the new capabilities to the existing install base, or only to customers who buy future hardware with more RAM?
XtremIO's architecture of always storing metadata in DRAM (vs. paging to disk or storing on SSDs) is an important part of its always-linear behavior. Conversely, it does mean that total X-Brick capacity and features are directly related to the DRAM capacity and the on-disk structure (which determines the amount of metadata).
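To make that trade-off concrete, here's a back-of-envelope sketch (entirely hypothetical entry and block sizes - not XtremIO's actual internals) of how an always-in-DRAM metadata design couples addressable capacity to RAM:

```python
# Back-of-envelope sketch (hypothetical numbers, not XtremIO internals):
# if every logical block needs an in-DRAM metadata entry, addressable
# capacity is capped by DRAM size -- and richer features (compression,
# dedupe pointers) mean bigger entries, so less addressable capacity.

def max_addressable_capacity(dram_bytes, entry_bytes, block_bytes=4096):
    """Capacity addressable when each block's metadata must fit in DRAM."""
    entries = dram_bytes // entry_bytes   # how many entries DRAM can hold
    return entries * block_bytes          # each entry maps one logical block

# 256 GiB of DRAM, 32-byte metadata entry per 4 KiB logical block:
cap = max_addressable_capacity(256 * 2**30, 32)
print(cap / 2**40, "TiB")   # → 32.0 TiB

# Grow the entry to 48 bytes (e.g. extra feature metadata) and the same
# DRAM addresses less capacity:
cap2 = max_addressable_capacity(256 * 2**30, 48)
```

Bigger entries per block directly shrink what a given DRAM footprint can address, which is why new features either need new hardware with more RAM or a change to the on-disk structures.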
People can (and are absolutely entitled to!) second-guess our decision. We decided the right call was to:
1) make the capability available to all (existing customers and those in the future) - which requires a persistence layout change.
2) to do it quickly, as this is a very (!) rapidly growing installed base.
3) to ensure that this change would carry us through all upcoming roadmapped releases.
4) to build into the plan (seriously - we have) budget for field swing units (capacity to be deployed to assist on migrations), as well as for EMC to absorb the services cost, and wherever possible help with non-disruptive svmotion at scale (where the workloads are vSphere VMs)
5) to commit to support the happy 2.4 customers (which are legion) for years to come if they want to stay there.
In spite of some of the earlier comments, this is the first disruptive upgrade of GA code. I agree, we should have changed the on-disk structures prior to releasing the GA code - that's 100% on us.
That all said - I'm proud of how the company that I work for is dealing with this: actively, quickly, and with the customer front and center in the thinking.
Now - on to the "never go negative" point, what I was saying Matt was this: anyone who has been around the block has seen difficult moments in every piece of the tech stack, from every vendor. As important as anything else is: *how the vendor deals with it*. This is true in the storage domain, the networking domain, the PaaS domain, the cloud domain - whatever.
If any vendor feels they are immune to issues, and makes their primary argument about "the other guy" - customers generally (in my view) have a negative reaction - because they know better. That's it.
Re: What did you just say about vSAN 2.0?
Disclosure - EMCer here (Chad)
No - that's not what I said (and people can go read the blog for themselves to verify).
VSAN can be a non-disruptive upgrade to the underlying persistence layer BECAUSE you can use svmotion and vmotion to vacate workloads non-disruptively. Aka: create a small amount of "swing capacity", vacate and clear the data, lay down the new structure, then swing the workloads back.
BTW - the "hyper converged players" (Nutanix, Simplivity, EVO:RAIL partners) do this as well. It's handy (and frankly an approach that can be used broadly to avoid what would otherwise be disruptive).
Why can this always be used in those models? Well - because all their workloads are VMs.
You **CAN** version metadata (this raises other engineering tradeoffs), but when you change the format/structure of on-disk structures, it involves vacating data. VSAN 2.0 will have some on-disk structure changes, but I would wager (I'll defer to VMware) it will use this "rolling workload move" to make it non-disruptive (although data is getting vacated through the process).
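The "rolling workload move" idea can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API - `migrate` stands in for svmotion/vmotion, and `reformat` stands in for rewriting the persistence layer on an empty node:

```python
# Minimal sketch of a rolling evacuation upgrade: vacate one node onto
# swing capacity, reformat its on-disk layout while empty, and the
# freshly-reformatted node becomes the target for the next move.
# Function names are illustrative, not a real product API.

def rolling_upgrade(nodes, swing, migrate, reformat):
    """nodes: node ids carrying workloads; swing: an empty swing node.
    migrate(src, dst) moves workloads non-disruptively (e.g. svmotion);
    reformat(node) rewrites the persistence layer on an empty node."""
    empty = swing
    for node in nodes:
        migrate(node, empty)   # vacate workloads onto the empty node
        reformat(node)         # node is now empty: safe to change layout
        empty = node           # reformatted node becomes the next target
    return empty               # last vacated node ends up as spare capacity
```

The key property: at every step exactly one node's worth of capacity is out of service, so only a small amount of swing capacity is needed regardless of cluster size - but it only works when every workload (VMs, in the hyper-converged case) can be moved non-disruptively.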
Anyone on here claiming they are beautiful and flawless is a joke (take 30 seconds and google "disruptive" + "vendor name" - I'm not going to wade in the muck by posting links like that for every one of the vendors piling on here) - so I'd encourage all vendors to have a little humility. Customers don't like people who go negative.
The trade off for us was: "do we save the new features for customers with new hardware" (aka more RAM for more metadata), or "do we give the features to all". We chose the latter. Hence why we continue to support 2.4 for years to come. AND we also chose to plan for swing hardware and services to help customers in the migrations. Frankly, I'm pretty proud of how EMC is approaching this difficult decision, and thinking of the customer first.
I'm sure the haters out there and competitors will disagree - but hey - so be it.
Don't be a hater :-)
Disclosure, EMCer here.
We are absolutely helping customers (along with our partners) through exactly that (in addition to svmotion). All noted in my public blog, which I'd encourage you to read (and comments welcome - though I'd suggest disclosure).
The commitment to support people on 2.4 you may not agree with, but some customers are electing to stay there for the foreseeable future. Our commitment to support them is certainly not bogus in their eyes.
Re: Customer success stories - Due diligence
Disclosure EMCer here.
I guess I'm tilting at windmills on the "anonymous posting" topic, and hey - it's a free world. I think the strength of an argument is proportional to someone's willingness to personally stand with it (nothing to do with who you are, or degrees as someone suggested). I just think an argument doesn't make as much sense without context (does the person making it have an agenda?) That's why personally - I think disclosure is right.
Re this comment on benchmarking, I personally completely agree. In fact, Vaughn Stewart (Pure) and I did a joint session on this set of topics (trying to be vendor neutral) at VMworld (and will repeat in Barcelona in Oct), and in essence outlined the same point:
1) Don't trust any vendor claims. Benchmark.
2) Don't let any vendor steer you in benchmarking. Even if their bias is non-malicious, they will have a bias.
3) We warned the audience - good benchmarking is NOT EASY. Sadly most people take a single host/VM, load up Iometer with a small number of workers and just run for a few hours. While that's data - that ain't a benchmark.
Some of the steps needed to benchmark properly:
a) run for a long time (all storage targets have some non-linearity in their behaviors). As in days, not hours.
b) a broad set of workloads, at all sorts of IO profiles - aiming for the IO blender. Ideally you don't use a workload generator, but can actually use your data and workloads in some semi-real capacity.
c) you need to drive the array to a moderate/large utilization factor - not a tiny bit of the capacity you are targeting, and all AFAs should be loaded up, and then tested. Garbage collection in flash (done at the system level or the drive level) is a real consideration.
d) you need to do the benchmark while pressing on the data services you'll use in practice.
e) ... and frankly, doing it at a scale that actually discovers the "knee" in a system is pretty hard in the flash era (whether it's AFAs or software stacks on all-SSD configs). It's hard to drive a workload generator reliably past around 20K IOps. That means a fair number of workload generators and a reliable network.
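Step e) - stepping up offered load until throughput stops scaling - can be sketched like this. `run_step` here is a stand-in for a real workload generator (fio, Iometer, etc.), and the thresholds are illustrative:

```python
# Tiny sketch of "finding the knee": double the offered load (workers),
# record achieved IOPS, and stop when doubling the load yields
# diminishing returns -- the system is saturated at that point.
# run_step is a placeholder for driving a real generator like fio.

def find_knee(run_step, max_workers=64, growth_floor=0.05):
    """run_step(workers) -> achieved IOPS at that worker count.
    Returns (workers, iops) where scaling flattens out."""
    prev = run_step(1)
    workers = 1
    while workers < max_workers:
        nxt = run_step(workers * 2)
        if nxt < prev * (1 + growth_floor):   # <5% gain from doubling load
            return workers, prev              # knee: adding load won't help
        workers *= 2
        prev = nxt
    return workers, prev

# Example against a synthetic target that saturates at 100K IOPS:
knee = find_knee(lambda w: min(w * 10_000, 100_000))
```

In practice each `run_step` would itself be a long-duration run (days, per point a), against a well-filled array (point c), with data services enabled (point d) - which is exactly why doing this right is so hard.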
Now - I feel confident (not arrogant) saying this, and have been through enough customer cases of all shape and size to willingly invite that opportunity.
... but, I'll note that very, very few customers have the capacity or the time to benchmark right. Some partners do. The feedback Vaughn and I gave to the audience was "if you can't do the above right, you're better off talking to a trusted partner, or talk to other customers like you at things like a VMUG".
Now changing gears - one thing in this comment put a huge smile on my face :-)
I can tell you for a *FACT* that EMC SEs are NOT all running around with a "fake workload generator" trying to deviously get customers to test to our spec... LOL! While there are ~3500 EMC SEs, only a tiny fraction (55) are set up to support a customer that wants to do a PoC. Most PoCs are supported by our partners. I can tell you that we are not organized enough, or well led enough to have all 3500 able to set the proverbial table and execute PoCs with a fake workload generator. And frankly those 3500 have a hard (but fun!) job. They need to cover the whole EMC portfolio (and be knowledgeable on the EMC Federation of VMware and Pivotal at the same time) as well as know the customer... Phew!
Wow - if that's what our success in the marketplace is being ascribed to - well, go right ahead and think that :-)
...if 3500 people sounds like a big number - when you overlay the world with it, it's not. Heck the 55 people able to support a PoC barely covers 1 per state, and there are 298 countries in the world we cover! Thank goodness for our partners!
I'll say it again - the EMC SEs are AWESOME humans, but they are not organized enough, or well led enough to be that devious - ** and that's coming from the person singularly responsible to organize them and lead them ** :-)
What **IS** true is that we found people really struggling to benchmark. We wanted to OPENLY, TRANSPARENTLY share how we do it, and welcome feedback (crowdsourcing it) if there was input, or a better way. This tool (which aligns pretty well with the IDC AFA testing framework) is here: https://community.emc.com/docs/DOC-35014
If people can point to a better benchmark, I'm all ears!
Disclosure - EMCer here.
That's exactly what we (and our partners together) are doing. See the followup to the original post in Australia here: http://www.itnews.com.au/News/392118,extreme-upgrade-pain-for-xtremio-customers.aspx
Customers and partners first, always!
Disclosure - EMCer here (Chad). Chris, thanks for the article (though seriously, that headline!) - and FWIW, it's nice that you linked to my post. I'm a big believer in transparency and disclosure.
Commenters (and readers)... man - I wouldn't put much personal trust in people without enough confidence to share their identity, and if they share their name, if they don't have the confidence to disclose any affiliations - I think the same applies.
I have no such reservation. I have the data on the XtremIO customer base. XtremIO customers are happy. Our customers and partners are giving us great feedback (including healthy critiques). Availability stats across the population (far more than 1000 X-Bricks out there and growing unbelievably fast) are amazingly high. We are far from perfect, and always looking to improve - so feedback welcome. I guess for some there's no way to compete without being a hater.
Yes, this is a disruptive upgrade (certainly not a Data Loss scenario as one commenter notes), but I've detailed the "why", and the "how" on my blog post that is linked to in the article. If you want to see disclosure, and transparency, there you have it.
It's notable how much commentary is one vendor going at the other here in the comments, and how little there is of the customer voice. Seems like in the industry we all like to navel gaze - but I suppose that's the way. At least we're passionate :-)
To the NetApp commenters in here - I'm frankly a little shocked. Is the 7-mode to C-mode migration a "data vacate and migrate event"? It is. You are in the middle of a huge migration of your user base - with a fraction of the FAS user base on the clear future target and everyone else navigating a disruptive upgrade which is exactly analogous (and triggered by the same cause that I note in my blog post). Further, when you point to ONTAP for this story which is about AFAs... I have to presume (assume - making an ass out of me) that customers looking at AFA solutions from NetApp are directed to the E-Series (with the Engenio storage stack until FlashRay GAs) - neither of which is an NDU from ONTAP. I have a great deal of respect for NetApp as a company and for their IP - this isn't about "NetApp sux" (they don't) - rather I'm scratching my head how you could make those comments without your heads exploding from cognitive dissonance, but hey - that's just my opinion :-)
And, to one of the many "anonymous cowards" - in particular the one that commented on my blog post being "misdirection"... that's not misdirection, I'm **REALLY THAT VERBOSE AND LONG WINDED** - that's just me :-)
Best comment IMHO was from MB, likely a customer - good backups are ALWAYS a good idea, and should be done before any upgrade, NDU or not.
Disclosure - EMCer here:
No, it isn't. "Real-time" refers to "very fast analytics" and "historical" refers to "very large datasets accumulated over time".
Put it together, and DSSD has as one of its targets applications that need to do extremely fast analytics over a very large dataset (much larger than you can fit on locally attached PCIe).
Re: Linux Controllers don't add any latency?
... Disclosure, EMCer here - while Chris does his usual work of ferreting out good deets, there are some errors in here (which is fine), and one of the errors is the data path for IOs.
Post acquisition, we disclosed that this was an early-stage startup, similar to when we acquired XtremIO (small number of customers, but pre-GA product). Just like with XtremIO, it was an extremely compelling technology - ahead of where we saw the market (and our organic work down similar paths - there was an organic project similar to DSSD codenamed "Project Thunder" - google it).
Re: organic (internal) vs. inorganic (acquisition/venture funding), it's almost exactly a 50/50 split.
My own opinion (surely biased), thank goodness EMC does a lot on BOTH sides of the equation.
Time has shown again and again that without healthy internal innovation (C4, ViPR control/data services, MCx, Isilon over the last 2 years) **AND** inorganic innovation (VPLEX, DSSD, etc) - all high-tech companies ultimately struggle.
My opinion? Thinking anyone can out-innovate all the startups, all the people in schools, the entire venture ecosystem is arrogant. This is why it's such a head scratcher to me when people say it's a "bad thing" to acquire - frankly customers like that we have good products and that they know we will continue to bring good ones to market (both organically and inorganically). IMO, it's a smarter move to play in the whole innovation ecosystem in parallel to organic internal-only activity.
Disclosure, EMCer here.
Chris - you probably would expect this from me, but I disagree. Let me make my argument, and lets see what people think. I ask for some patience from the reader, and an open mind. I'm verbose, and like to explore ideas completely - so this won't be short, but just because something isn't trite doesn't make it less accurate.
Read on and consider!
The choice of "multiple architectures to reflect workload diversity" vs. "try to serve as many workloads as you can with one core architecture" is playing out in the market. Ultimately, while we all have views - the customers/marketplace decides what is the right trade off.
a) EMC is clearly in one camp.
We have a platform which is designed to "serve many workloads well - but none with the pure awesomeness of a platform designed for specific purpose". That's a VNX. VNX and NetApp compete in this space furiously.
BUT we came to the conclusion a long time ago that if you tried to make VNX fit the space that VMAX serves (maniacal focus on reliability, performance, availability DURING failure events) you end up with a bad VMAX. Likewise, if we tried to have VNX fit the space Isilon fits (petabyte-level scale out NAS which is growing like wildfire in genomics, media, web 2.0 and more) - you end up with a bad Isilon. Why? Because AT THE CORE, you would still have a clustered head. Because AT THE CORE, file/data objects would be behind ONE head, on ONE volume. Because, AT THE CORE, you would still have RAID constructs. Are those intrinsically bad? Nope - but when a customer wants scale-out NAS, that's why Isilon wins almost overwhelmingly over NetApp cluster mode - when THOSE ARE THE REQUIREMENTS.
b) NetApp (a respected competitor, with a strong architecture, happy customers and partners) seems to me to be in the other camp. They are trying to stretch their single product architecture as far as it can go.
They finally seem to be "over the hump" of core Spinnaker integration with ONTAP 8.2. Their approach of federating a namespace over a series of clustered FAS platforms has some arguments to be sure. The code path means that their ability to serve a transactional IO in a clustered model is lower latency than Isilon (but not as fast as it was in simple scale-up or VNX, and certainly not the next-generation VNX). They can have multiple "heads" for a "scale out" block proposal to try to compete with HDS and VMAX. In my experience (again, MY EXPERIENCE, surely biased) - the gotchas are profound. Consider:
- With a Scale-Out NAS workload: Under the federation layer (vServers, "Infinite Volumes"), there are still aggregates, flexvols, and a clustered architecture. This means that when a customer wants scale-out NAS, those constructs manifest - a file is ultimately behind one head. Performance is non-linear (if the IO follows the indirect path). Balancing capacity and performance means moving data and vServers around. Yup, NetApp in cluster mode will have lower latency than Isilon, but for that workload - latency is not the primary design center. Simplicity and the core scaling model are the core design center.
- Look at the high-end Reliability/Serviceability/Availability workload: In the end, for better or worse, NetApp cluster mode is not a symmetric model, with shared memory space across all nodes (the way all the platforms that compete in that space have been architected). That is at the core of why 3PAR, HDS, VMAX all have "linear performance during a broad set of failure behaviours". Yup, NetApp can have a device appear across different pairs of brains (i.e. across a cluster), but it's non-linear from port to port, and failure behavior is also non-linear. Is that OK? Perhaps, but that's a core design center for those use cases.
- And when it comes to the largest swath of the market: the "thing that does lots of things really well", I would argue that the rate of innovation in VNX has been faster over the last 3 years (due to focus, and not getting distracted by trying to be things it is not, and was never fundamentally designed to do). We have extended the places where we were ahead (FAST VP, FAST Cache, SMB 3.0, active/active behaviors, overall system envelope), we have filled places we were behind (snapshot behaviors, thin device performance, block level dedupe, NAS failover, virtualized NAS servers - VDM in EMC speak, Multistore/vServers in NetApp-speak), and are accelerating where there are still places to run (the extreme low-end VNXe vs. FAS 2000, larger filesystem support)
Look - whether you agree with me or not as readers - it DOES come down to the market and customers. IDC is generally regarded as the trusted cross-vendor slice of the market - and the Q2 2013 results are in, and public, here: http://www.idc.com/getdoc.jsp?containerId=prUS24302513
Can a single architecture serve a broad set of use cases? Sure. That's the NetApp and EMC VNX sweet spot. NetApp has chosen to try to expand it differently than EMC. EMC's view is that you can only stretch a core architecture so far before you get into strange, strange places.
This fundamentally is reflected in NetApp's business strategy over the last few years. They themselves recognize that a single architecture cannot serve all use cases. Like EMC, they are trying to branch out organically and inorganically. That's why EMC and NetApp fought so furiously for Data Domain (the B2D and cold storage use case does best with that architecture). I suspect that's why NetApp acquired Engenio (to expand into the high-bandwidth use cases - like behind HDFS, or some video editing that DDN, VNX, and others compete in). The acquisition of Bycast to push into the exa-scale object store space (which biases towards simple no-resiliency COTS hardware) is another example.
On the organic front, while I have ZERO insight into NetApp's R&D - I would suggest that their architectures to enter into the all-flash array space (FlashRay?) would really be best served with a "clean sheet of paper" approach of the startups (EMC XtremIO, Pure Storage, etc) rather than trying to jam that into the "single architecture" way. If they choose to stick with a single architecture for this new "built for purpose" space - well - we'll see - but I would expect a pretty mediocre solution relative to the competition.
Closing my argument....
It is accurate to say that EMC needs ViPR more than NetApp. Our portfolio is already more broad. Our revenue base, and more importantly customer base is broader.
NetApp and NetApp customers can also benefit now - and we appreciate their support in the ViPR development of their southbound integration into the ONTAP APIs (and I think their customers will appreciate it too). NetApp is already more than a single stack company. Should they continue to grow, and expand into other use cases - they will need to also continue to broaden their IP stacks.
Lastly - ViPR is less about EMC or NetApp - rather a recognition that customers need abstraction and decoupling of the storage control plane and policy REGARDLESS of who they choose - and that many customers whose needs are greater than the sweet spots of "mixed workload" (VNX and NetApp) have diverse workloads, and diverse architectures supporting that (often multi-vendor).
This is why ViPR is adjacent to, not competitive with SVC (array in front of array), NetApp vSeries (array in front of array), HDS (array in front of array), and EMC VPLEX and VMAX FTS (array in front of array). These are all valid - but very different - traditional storage virtualization where they: a) turn the disk from the old thing into just raw storage (and you format it before using); b) re-present it out for use. All these end up changing (for worse or for better) the characteristics of the architecture in the back into the characteristics of the architecture in the front. ViPR DOES NOT DO THAT.
Remember - ultimately the market decides. I could be completely wrong, but hey - innovation and competition is good for all!
THANKS for investing the time to read and consider my argument!
Oracle support position is actually clear, and positive.
@ICS - Disclosure - EMCer here.
I know there's a lot of FUD out there (often from Oracle) re their support and licensing stances when it comes to virtualization.
The formal policy is actually pretty darn reasonable - but isn't what people are TOLD it is.
I did a detailed post (including a screenshot of the authoritative Metalink article) here: http://virtualgeek.typepad.com/virtual_geek/2010/11/oracle-and-vmware-a-major-milestone.html
There's also a lot of confusion re: licensing (need to license any core it could run on, and not honouring things like DRS Host Affinity settings). Done right, there is absolutely no "virtualization tax" when virtualizing Oracle on VMware, and we're finding people are saving boatloads of money and getting BETTER performance.
Again, I don't want this to seem like an ad, but I also did a video at OOW where we discuss those things that are used to scare customers from doing the right thing: Performance, Support, Licensing - and of course "are other people doing it" (answer = YES, about 50% of Oracle users according to the Oracle User Groups). That video is on youtube here: http://youtu.be/gHyIA454YbQ
Resolving support issues.
Disclosure, EMCer here.
@Alain - to double-up on J.T.'s comment - please escalate.
Actually, let me apologize first - you shouldn't be having support issues, and I'm sincerely sorry to hear that.
If you don't know how, or where, to escalate with your account team - you can contact me. The easiest way to do this while remaining anonymous is to post a comment on my blog (http://virtualgeek.typepad.com). I won't post your comment, but will direct you internally post-haste. If you can also get me the SR (service request) numbers, I can follow up with the people that gave you unsatisfactory service.
BTW guys - most CLARiiONs are now 3-5 years old, and are pretty aged. And, JT has obviously been around the block (not saying there is any issue with any specific component), but as with anything mass manufactured, when a part fails, there is a tendency to fail in many places/customers at once (if there was a manufacturing issue).
LOL - no spelling mistakes = edit?
Disclosure - EMCer here.
@VeeMan - trust me - that's just how I write and speak :-) Had it come through any approval by marketing - all the technical info would have been cut. And YES, I'm **THAT** verbose (much to my wife's chagrin). If you want evidence of the type of person I am - just read my blog (easy - google "virtual geek"). I've been out there for a while, so who I am is no secret.
I'd quit before being censored in the way you suggest. Coaching, guidance - man, I need that constantly. But changing what I say? Never. My comments are my own - my blog is my own - for better or for worse....
Also - FWIW - there's a lot of "old tapes" in your response. We're pretty active in the benchmarking game - and have been through 2010 (and will continue). We've learned and adapted. True - almost all benchmarks (at least in storage land) don't reflect the bizarro-world that is the real world - shared storage subsystems very rarely support a single type of workload at a given time. That said - the lesson was learnt. People like "record breakers", so - we're doing it constantly now.
Thank you Chris!
Disclosure - EMCer here.
Chris - thank you for posting the comment, was honourable to post it in my view.
FWIW - while I disagree with the original article, I do think Nexenta did well on their initial participation in the HoL. Like my first comment - these sorts of live mass events are full of danger, problems, and it's a real test of tech and people.
With that said, back to the marketplace battlefield - where there is enough room for broad competition, and broad choice.
(the author of the response) - Chad Sakac (aka virtual geek - http://virtualgeek.typepad.com)
PS, if it seems erudite, overly polite, low on swear count - that's purely because I'm Canadian. Trust me - where I come from, that was a full out furious flame :-)
Disclosure - EMCer here.
@frunkis - indeed that's your choice, and every customer does indeed - make a choice. I don't dispute the validity of a broad set of solutions on the market.
Every customer makes a choice. In the last month, here's a short set of customers who have shared their public choice to choose EMC (many of them publicly including what competitors they evaluated).
- English Premier League Club: http://www.emc.com/about/news/press/2011/20110914-01.htm
- Columbia Sportswear: http://www.emc.com/about/news/press/2011/20110830-03.htm
- KPIT: http://www.ciol.com/Storage/Cloud-and-Virtualization/News-Reports/VMware-Cisco-EMC-deploy-Vblock-at-KPIT/153897/0/
- Northrop Grumman, Lone Star College, Northern Hospital of Surrey County: http://www.emc.com/about/news/press/2011/20110831-01.htm
- Heritage Auctions: http://www.emc.com/about/news/press/2011/20110825-01.htm
- Washington Trust: http://www.emc.com/about/news/press/2011/20110823-02.htm
- Texas School District: http://www.emc.com/about/news/press/2011/20110817-02.htm
- Curtin University of Technology, SPAR Group, Elliot Health Systems: http://www.emc.com/about/news/press/2011/20110824-01.htm
- Columbia University: http://www.emc.com/about/news/press/2011/20110816-01.htm
Every customer is unique - so the reasons for every choice are almost as unique.
Look - the point here (at least my point :-) is not that Nexenta bad, NetApp bad (though they seem to have been that way in your view), EMC good. I'm clearly biased. That choice is for every customer to make, and I respect their choices (how can you not?). I'm purely disputing the facts in the article that are incorrect.
As you note - they have a nice UI for OpenSolaris ZFS. And, they have ported parts of it to Ubuntu to deal with the outstanding ZFS legal issues & Oracle basically killing OpenSolaris - which is sad, because ZFS (like many things formerly Sun) is, IMO, good technology.
Competition = good. Good for customers, good for everyone.
If there is ever an opportunity for your business, I hope that you'll consider EMC (at least give us a chance to win your former NetApp infrastructure, now Nexenta infrastructure). There's no harm in looking at options, right?
1PB in a rack - good but not great.
Disclosure - EMCer here.
Also - missed it on my earlier comment.
@LarryRAguilar - your point is a good one, and that highlights my point. 1PB in a 42 standard rack is good, and hey - congrats to Aberdeen.
EMC's current shipping dense config is 1.8PB in a 42 standard rack.
And, as per my earlier comment - I'd encourage customers to get multiple quotes on configs - we're all subject to the same market forces :-)
Oh, and BTW, we're not stopping there. While our stuff is based on the same commodity components as the other guys - customers demand infrastructure to have certain capabilities.
When that need stops, we won't need to engineer our own storage processors and enclosures (all from commodity components). Today, the integrated value (far more in the software than in the hardware, but some in the hardware still) that drives customer choice is something customers value, and the market votes.
Disclosure - EMCer here.
Chris - the VM Volume "advanced prototype" (shown in VSP3205 at VMworld) was a technology preview of this idea, and yeah, it's an important idea, and a disruptive idea.
Anyone who has managed a moderate to large deployment of virtualization knows that the "datastore" construct (on block or NAS storage) is not ideal - as then the properties of that datastore tend to be shared by ALL the things in it. It would be better if the level of granularity was a VM, but WITHOUT the management scale problem. That's what was shown.
Today, the storage industry (and of course, I personally think that EMC does this more than anyone, and can prove it) is doing all sorts of things to be more integrated (vCenter plugins, making the arrays "aware" of VM objects through bottom-up vCenter API integration, VASA, VAAI, etc) - but unless something changes, we're stuck with this core problem - VMs are the target object, but LUNs and filesystems kind of "get in the way".
I'm sure that VMware will run it like all the storage programs they have run. The APIs are open, and available to all - but of course, the early work tends to focus on the technology partners supporting the largest number of customers.
More customers use EMC storage with VMware than any other type, and EMC invests more resources and R&D (both by a long shot) - so it's no surprise that the demonstration in the session featured EMC storage so prominently. Pulling off something like that is NOT easy, and a lot of people put a lot of work into it.
For what it's worth - VMware is simply CHANGING what is important to customers and valuable from storage. Certain data services are moving up (policy-driven placement of VMs), certain ones are pushing down (offload of core data movement), and "intelligent pool" models (auto-tiering, dedupe) become more valuable as they map to simpler policy-driven storage use models.
While this was just a technology preview - if it comes to pass - vendors who are able to deliver strong VM Volume implementations, with VM-level policy and automation will become even more valuable.
Just my 2 cents.
EMC - supports SIOC
Disclosure - EMCer here.
Chris, FYI - EMC supports SIOC (in fact, any block storage target on VMware's HCL supports it, as it's a VMware feature, not an array feature - hence our focus on VAAI, which requires us to actually do something to support it).
SIOC is aligned with autotiering (from us and others) - with SIOC resolving instantaneous contention conditions through prioritized throttling, and auto-tiering resolving the ongoing issue of VMs with different IO loads on the same datastore.
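The shares-based throttling idea behind SIOC can be sketched roughly like this. This is a simplified model of proportional-share allocation, not VMware's implementation (for one thing, it doesn't redistribute grants a VM doesn't use):

```python
# Hedged sketch of shares-based IO throttling under contention (the
# concept behind SIOC, NOT VMware's actual algorithm): when aggregate
# demand exceeds what the datastore can deliver, each VM is granted
# IOPS in proportion to its configured shares.

def throttle(demands, shares, capacity):
    """demands/shares: dicts of vm -> requested IOPS / share count.
    Returns vm -> granted IOPS; proportional to shares when contended."""
    total_demand = sum(demands.values())
    if total_demand <= capacity:
        return dict(demands)          # no contention: nobody is throttled
    total_shares = sum(shares[vm] for vm in demands)
    return {vm: min(demands[vm], capacity * shares[vm] / total_shares)
            for vm in demands}

# Two VMs both demanding 8000 IOPS on a 10K-IOPS datastore; "a" holds
# 3x the shares of "b":
grants = throttle({"a": 8000, "b": 8000}, {"a": 3000, "b": 1000}, 10_000)
# → "a" is granted 7500 IOPS, "b" 2500
```

The point of the comment above in miniature: throttling like this resolves the *instantaneous* contention fairly, but if one VM is persistently hungrier than its datastore neighbors, moving its data to a different tier (auto-tiering) is the longer-term fix.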
A bit more commentary
Disclosure - EMCer here...
Chris - thanks for the article.
I did a followup post here: http://virtualgeek.typepad.com/virtual_geek/2010/07/understanding-what-were-doing-with-atmos.html
(and also an update to the original post you linked to).
Hope it clarifies what we're thinking. Everyone makes boo-boos, I think this is a move in the right direction.