It's kinda heartwarming to see an EMC veep publicly accuse a storage journo of misquoting him on XtremIO - but only because it feels like the spats of days gone by are back. It doesn't matter if it's about network-attached storage or flash; switch around the jargon and you’ll probably find the same blog entries work and the same …
You are correct
In fact, EMC can't stop the start-ups...
But going with a startup often costs more, and they are an "unknown" (is the product stable long term, and will they even be around in a year?), so the long-term risk is greater. Given that, I can guarantee 99% of IT managers in the SME market will run with the safe option of a Tier 1 vendor (EMC, Dell, IBM etc), as spanking £50k on a provider that could go tits up in 6 months is gambling their job!
You forgot to mention the article is from a French newspaper and the interview has been through translation.
If you read the comments in Chuck Hollis' blog, it gets more entertaining.
1. The CEO of Axxana says he had the same experience
2. The writer of the article can't figure out how to post comments to an online blog
3. The editor of the article can't help but douche up the whole thing even more by telling Chuck Hollis he should have e-mailed instead of blogged, when the paper didn't bother to confirm its translation was correct in the first place
For those who don't know, the noted blogger Chuck Hollis refers to is Robin Harris, who wrote this:
where you can see "General Product" in 1H2013 turn into "won't ship till 2H", and other logical jumps from a guy who openly hates EMC and is paid to write for several of the smaller flash outfits.
Frankly, nobody innovates anymore. IBM bought Texas Memory Systems (RamSan), HP bought 3PAR, Dell bought who cares, NetApp buys companies and takes 8 years if it's software and kills it if it's hardware. The worst thing is that all of these startups are basically the same people, who don't want to become part of the big corporation, so they take their buyout and start something new... that is basically the same as their previous startup. We had the folks in from companies bought by Dell with their new product, so new that they hadn't had a drive failure yet, but with their "new", "compelling" product where they stuck Fusion-io cards into a server and attached some JBOD. It takes the bigger companies taking ideas from a few startups and combining them into a disruptive product.
Perhaps some individuals at EMC are, but EMC itself is not/should not be. They've weathered the storm of startups for a long time now, and it seems their overall market share has only gone up as a result.
Cisco is in a similar boat, both cases are quite sad.
EMC has a real fear of being 'PC'd'. With $100 disk drives being sold for over $1000, they have a wonderful cash cow, and a sense of entitlement, but they also remember what happened to the mainframe.
The fear is that independent storage software vendors (ISSVs) will create enough useful code that can be run on COTS platforms to trigger a transition away from Big Iron.
It's a real worry, since it is already well along. Amazon and Google write their own code for storage systems, Red Hat is launching a converged NAS/object store that is very interesting, and OpenStack looks set to deliver an answer to S3 that's essentially free object-store code.
With SAN and FC being deprecated in favor of Ethernet-based solutions, EMC's growth is at risk in the short term, and in danger of shrinking over the decade. EMC's response is to add more and more complexity to their feature set. This may be counter-productive, since it takes that $100/TB into the stratosphere. Maybe all that was needed was fast, efficient, no-frills code with lots of cheap drives running on a COTS engine!
We recently purchased 8 x 1TB drives for our EQL SAN, which cost us about £8,000. They got drop-shipped from the manufacturer, who left the shipping receipt on the box. Get this: they cost Dell $1,200 (about £800).
I'd disagree on the Red Hat comment; configuring a Red Hat cluster is... well... a cluster f*ck! Same goes for the rest of the Red Hat product range (I am a Linux sysadmin, btw). Regarding SAN and FC, you are correct that 10Gb Ethernet will eventually rule the roost, but given the lack of price drops, this won't be for several years yet!
Yes, but they own the largest ISV that is commoditizing servers, VMware... so they're probably not too worried about it. With Nicira, EMC seems to be turning its focus to commoditizing Cisco's networking market next.
I am not sure anyone saves any money going with one independent ISV for an OS, another for a hypervisor, and another for systems management tools, then buying a low-cost commodity server (or other hardware), by the time they add up all those bills and hire a system integrator to put the pieces together. It does distribute the costs across multiple vendors and make the end user responsible for coordinating that mess. I think people are starting to understand that, which is why there's all the interest in appliances and integrated systems.
Is XtremIO all it was cracked up to be?
Based on Nimbus Data's blog post on the XtremIO architecture, XtremIO appears very similar to Isilon at a hardware level: nodes containing considerable compute plus drives, interconnected with InfiniBand.
My guess is EMC saw efficiencies of logistics and product development at a hardware level, with the prospect of integration at a software level. This likely encouraged EMC to go for the less expensive XtremIO over the more expensive Violin Memory or some of the other all-flash players.
Given XtremIO is little more than an Isilon-like hardware design with SSDs and block-only access, any delays are either due to changes in the hardware platform for the sake of logistics efficiencies, or, if EMC has kept XtremIO's original SuperMicro hardware, in the software rather than the hardware.
If the latter, perhaps XtremIO's software was not far enough along, and it is taking EMC more time to get it enterprise ready.
That said, the XtremIO solution seems to throw an awful lot of hardware at the problem--four Intel Xeon CPUs and two InfiniBand HCAs per 16 SSDs--resulting in a lot of space, power, and cooling per GB.