15 posts • joined Tuesday 17th May 2011 21:03 GMT
It's tough to convince me that these upstarts are feasting on the established players when they have <200 customers (last I heard, over 50,000 VNX systems had been sold). I will give Nimble credit, though, with 2,000 boxes in the field. The product is hot right now in the US Midwest from what I'm seeing as a consultant in the field. While I don't understand their secret sauce yet, they seem to have figured out how to really make an SSD cache work in front of all-SATA, while EMC/NetApp talk about doing that in marketing slides but real-world configs never leave the factory without a healthy amount of 10K/15K.
A very true article, which is why I'm not exactly excited about the new catchphrase: software-defined data center.
Re: Close but not quite right
Agree with Diskcrash. This analyst is comparing apples and oranges. The XtremIO acquisition has nothing to do with 8.1 or Fusion-io.
And did I really just see D from NetApp say "there isn't one system that does it all"?!?!
Not a bad article, but don't forget to compare to the new V7000 Unified offering. I believe that is also scaling out to 4 NAS heads now with a single namespace.
I do tend to agree with the previous commenter that HDS has been a tech-spec leader but is lagging a bit. They were basically first with a SAS-based mid-tier array in the AMS2000 series, but this HUS box is long overdue. I never would've expected that IBM, of all storage companies, would beat HDS to market with a native Unified offering.
Interesting it coincides with release of VNX
I'll be the first to say that NetApp makes a good product, but they have fallen behind a bit in innovation as well as ease of use. EMC's Unisphere is better than System Manager, no question about it in my mind after using both (particularly VNXe's Unisphere relative to managing a FAS2040 in the SMB market). The things NetApp used to hammer EMC on were finally fixed in VNX. There is truly one GUI to manage SAN and NAS, and NAS can use the same storage pool as SAN. EMC also bundled their software into solution suites as NetApp did, so it's no longer a la carte ordering.
Now, I see NetApp folks often resort to "Well, there are still two versions of the VNX OS running under the covers." Yes, that is true. But as long as you can manage it with one GUI and both Block and File code bases are mature and reliable, nobody really cares. If you have to resort to that argument to try to win competitively, well, you haven't innovated enough in your own product line to come up with better arguments.
*Disclaimer, I work for an EMC and NetApp VAR.
I thought FAST Cache was just cache
What's a little concerning about this to me, and some of the other rumors about failed SSD drives in FAST Cache causing big problems, is that FAST Cache is supposed to be a "cache". When hot blocks get promoted into FAST Cache, the EMC folk I spoke with in the past said the data was copied, not moved. That would cover the reads. As far as writes go, new updates to that block were supposed to get flushed down to spinning disk just as regular cache works. Your primary copy of the data wasn't supposed to be living in FAST Cache and susceptible to data loss. Additionally, if a FAST Cache drive failed and there was no hot spare (which appears to be the case here), FAST Cache was immediately supposed to go into read-only mode.
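To make the intended behavior concrete, here's a toy model of that design. This is entirely my own sketch (the `ToyFlashCache` class and everything in it is hypothetical, not EMC code): promotion copies a block rather than moving it, writes always land on the spinning disks, and a cache-drive failure with no hot spare flips the cache into read-only mode.

```python
# Toy sketch of the cache behaviour described above -- NOT real FAST Cache
# code, just an illustration of copy-on-promote / write-through semantics.

class ToyFlashCache:
    def __init__(self, backing):
        self.backing = backing   # dict modelling the spinning-disk tier
        self.cache = {}          # dict modelling the SSD cache
        self.read_only = False   # set when a cache drive fails with no spare

    def promote(self, block):
        # Promotion is a COPY: the authoritative copy stays on disk.
        self.cache[block] = self.backing[block]

    def read(self, block):
        # Serve from cache when hot, otherwise fall through to disk.
        return self.cache.get(block, self.backing[block])

    def write(self, block, data):
        # Writes always reach the backing disks, so the primary copy is
        # never living only in the cache.
        self.backing[block] = data
        if block in self.cache:
            if self.read_only:
                del self.cache[block]  # can't update a read-only cache; invalidate
            else:
                self.cache[block] = data

    def drive_failed_no_spare(self):
        # No hot spare: cache drops to read-only; data stays safe on disk.
        self.read_only = True
```

If the array actually behaves like this model, a failed cache drive should cost you performance, not data, which is why the rumored outages are concerning.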
Prices are not that different
I see a lot of different vendor pricing, and EMC is quite simply not the most expensive. To be fair, they earned their reputation a decade ago, but it boggles my mind that pricing behavior which ceased to exist long ago, thanks to new competition in the market, still carries such a stigma.
The bottom line is if you are looking at an apples-to-apples configuration amongst all the MAJOR vendors (EMC, HP, NetApp, IBM, HDS) then the prices should be all in the same range. If you see a config where there is a huge difference, then either a) someone is playing tricks in their pricing strategy; b) they are willing to buy your business at a loss; or c) somebody has cut corners in their config and undersized the solution to try and win the deal. That's pretty much what it boils down to.
You can also compare actual profit margins here:
Keep in mind it's difficult to compare EMC and NTAP with the big conglomerates because those companies don't break out their storage division profit margins. Their overall margins are dragged down by their commodity hardware businesses. Rest assured, their storage gear has margins that are right up there with where EMC and NTAP's are.
Overall a well-written article, similar to a first-glance post I just did (http://bit.ly/opBEyh). I had some exposure to SONAS in a previous life, and this will be a serious contender that EMC and NTAP have to deal with in the mid-range Unified space. To the previous poster's point, true, VPLEX does more than V-Series and SVC, but one of its primary features is to virtualize back-end storage, so it wasn't a completely egregious comparison.
EMC doesn't make the drives
Another point to consider, storage companies don't manufacture the drives. Seagate, Hitachi GST, and in some cases WD do. Some arrays are better than others at detecting failures proactively, but those drives could have ended up in any array and would be failing regardless.
ATA over Ethernet... a solution searching for a problem.
It's all about the product...
We saw what happened to the US auto industry when the bean counters took over. It no longer was about the product. It was about saving money by cutting out little details here and there, consolidating and streamlining so much that there no longer was any differentiation. The products became mediocre but the balance sheet looked good for a while. Then people stopped buying the cars and bought them from “the other guy” because they had more of the features they wanted, looked better, etc. The same thing can happen in technology. Nothing against Goulden personally, I've heard great things about him as a financial guru. But you can't have a bean counter running a technology company where it's all about the product. You've got to take risks and innovate.
If you have large volumes of data that don't de-dupe well, the best option I ever could come up with in my research was to start replicating with snapshots. Of course, that method isn't perfect either (see http://bit.ly/m5y1Cw). But at that scale there isn't much alternative. If you are using a more economical scale-out type of storage hardware to get to the PB range, then it may not be completely cost-prohibitive to buy two and replicate. All of the tape infrastructure (tapes, libraries, drives, backup s/w library licenses) will be a pretty penny to back up a volume of data in the PB range. Throw longer-term backup retention into the mix with low levels of de-dupe, though, and tape will be considerably more cost-effective from a CAPEX perspective.
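The CAPEX trade-off above boils down to simple arithmetic. Here's a toy calculator; every figure in it is an illustrative placeholder I made up ($15/TB tape media, $300/TB array capacity, the fixed infrastructure costs), so substitute real quotes before drawing conclusions.

```python
# Back-of-envelope CAPEX sketch for protecting data at PB scale.
# All prices are INVENTED placeholders for illustration only.

def tape_capex(data_pb, retention_copies, dedupe_ratio,
               cost_per_tb_tape=15.0,        # media cost per retained TB
               library_and_drives=250_000,   # fixed: library + tape drives
               backup_sw_licenses=100_000):  # fixed: backup s/w licensing
    """Tape media scales with retained (post-dedupe) data; infra is fixed."""
    retained_tb = data_pb * 1000 * retention_copies / dedupe_ratio
    return retained_tb * cost_per_tb_tape + library_and_drives + backup_sw_licenses

def replica_capex(data_pb, cost_per_tb_array=300.0):
    """Buy a second scale-out array and replicate with snapshots."""
    return data_pb * 1000 * cost_per_tb_array

# 1 PB, a year of monthly retention, data that barely de-dupes:
print(tape_capex(1, retention_copies=12, dedupe_ratio=1.5))  # tape option
print(replica_capex(1))                                      # replicate option
```

Note that `replica_capex` here ignores the extra array capacity that long snapshot retention consumes, which is exactly where tape claws its advantage back; the winner depends almost entirely on the retention multiplier and the dedupe ratio.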