Why on earth bundle XIV and Storwize? These platforms have nothing in common.
IBM wrestles controls, pulls midrange array sales out of death dive
IBM's move away from legacy DS3000/5000 storage arrays to the newer XIV and Storwize midrange arrays is paying off with growing sales. A graph of vendors' external disk array revenues, drawn up by Stifel Nicolaus analyst Aaron Rakers, shows an IBM upturn in the last quarter. IBM and EMC are the only two enjoying a rise, everyone …
-
-
Wednesday 10th October 2012 09:45 GMT oli_from_germany
Netapp's Dedup-Appliance...
is a NetApp. Chris, I'm disappointed that you're not aware of NetApp's most basic functionality. We have been doing dedup in primary storage for more than 7 years now, and we can replicate all data and snapshots deduped to a second and even a third array. So there is absolutely no need for a dedup appliance (and no moving of terabytes around for hours during backup), as everything is already deduped (and, by the way, compressed too). And if you have different primary storage and are looking for a dedup appliance for backup with heavy added value, go for Syncsort with NetApp, it rocks :-)
-
Wednesday 10th October 2012 10:16 GMT Anonymous Coward
Re: Netapp's Dedup-Appliance...
NetApp in denial again: if it's not tiering it's flash, and now real dedupe. Strange how they lost that bidding war to EMC; they seemed mighty interested in plugging the gap at the time :-) Oh, and Syncsort is just the latest in a series of backup software vendors they've partnered with and jilted over the years.
-
Thursday 11th October 2012 13:53 GMT Anonymous Coward
Re: Netapp's Dedup-Appliance...
Not sure what is meant by NetApp being in denial, what bidding war they lost to EMC, or what you mean by real dedupe. Is that different from the fake (sarcasm) dedupe that they have been using for the past 5 years, which is currently saving their customers 5.4 exabytes of capacity across both primary production and backup storage? Regarding Syncsort, they are still a strong partner of NetApp and partner with them quite often, so I'm not sure where the jilted message is coming from. If you are going to spread misinformation, at least make an attempt to make up stuff that sounds reasonably believable and cannot be easily verified as incorrect.
-
Monday 15th October 2012 13:28 GMT Anonymous Coward
Re: Netapp's Dedup-Appliance...
You know, that little detail of the bidding war for Data Domain that NetApp lost to EMC. Real dedupe: you know, the stuff that the real deduplication players use inline, the sliding-window stuff, not the fixed-block hack that NetApp uses. Regarding Syncsort, you'll remember similar partnerships with Backbone and Symantec, to name two I can remember off the top of my head.
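For readers wondering what the sliding-window vs fixed-block distinction amounts to in practice, here is a minimal vendor-neutral sketch (toy code, not any product's actual algorithm). Fixed-block dedup cuts at constant offsets, so a one-byte insert misaligns every later block; content-defined ("sliding window") chunking cuts where a hash over a window of recent bytes matches a pattern, so chunk boundaries re-synchronise shortly after an edit:

```python
import hashlib

def fixed_chunks(data: bytes, size: int = 8) -> list[bytes]:
    # Fixed-block: split at constant offsets; an insert shifts every later block.
    return [data[i:i + size] for i in range(0, len(data), size)]

def content_chunks(data: bytes, window: int = 4, mask: int = 0x07) -> list[bytes]:
    # Content-defined: cut wherever a hash of the last `window` bytes matches a
    # pattern. (A real implementation would use a cheap rolling hash such as a
    # Rabin fingerprint rather than recomputing MD5 at every position.)
    chunks, start = [], 0
    for i in range(window, len(data)):
        if hashlib.md5(data[i - window:i]).digest()[0] & mask == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

a = bytes(range(256)) * 2   # 512 bytes of sample "primary" data
b = bytes([88]) + a         # the same data with one byte inserted at the front

fixed_shared = set(fixed_chunks(a)) & set(fixed_chunks(b))
cdc_shared = set(content_chunks(a)) & set(content_chunks(b))
# CDC re-finds the shared chunks after the insert; fixed-block finds none here.
print(len(fixed_shared), len(cdc_shared))
```

The point of the demo: after the inserted byte, every fixed block downstream has shifted and no longer matches, while the content-defined boundaries fall at the same data positions and most chunks dedupe against the original.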
-
-
Thursday 11th October 2012 14:05 GMT Anonymous Coward
Re: Netapp's Dedup-Appliance...
Oh, one more tidbit regarding tiering flash: NetApp has been doing that for reads with Flash Cache (previously called PAM) since 2007, and this year added the ability to use SSDs as a read/write cache for virtual auto-tiering, which accelerates slower disk using cache. Note that virtual auto-tiering is where all storage vendors are moving, because it is more dynamic and more responsive than the physical auto-tiering you see with products like Compellent; it can react instantaneously to workload I/O changes. EMC has just announced such an offering, and startups such as Tintri are making headway with this type of performance enhancement that NetApp has been doing for 5 years. Again, if you are going to spread misinformation, do it with statements that are believable and not easily proven incorrect.
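The read-caching idea being argued about can be illustrated with a toy LRU cache in front of slow disk reads (a generic sketch, not Flash Cache's or anyone else's actual design): hot blocks are served from the fast tier, and only misses fall through to disk.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache in front of slow 'disk' reads (illustrative only)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()      # block number -> cached data
        self.hits = self.misses = 0

    def read(self, block: int, disk_read) -> bytes:
        if block in self.store:
            self.hits += 1
            self.store.move_to_end(block)   # mark as most recently used
            return self.store[block]
        self.misses += 1
        data = disk_read(block)             # slow path: go to disk
        self.store[block] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return data

# A skewed workload: most reads hit a small hot set, as in typical VM I/O,
# followed by a long cold scan that the cache cannot help with.
cache = ReadCache(capacity=8)
disk = lambda blk: b"data-%d" % blk
workload = [0, 1, 2, 3] * 20 + list(range(4, 44))
for blk in workload:
    cache.read(blk, disk)
print(cache.hits, cache.misses)  # 76 hits, 44 misses for this workload
```

This also illustrates the thread's disagreement: a cache like this accelerates whatever is hot right now without moving data between tiers, whereas physical tiering relocates blocks to cheaper media on a slower schedule.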
-
Monday 15th October 2012 13:42 GMT Anonymous Coward
Re: Netapp's Dedup-Appliance...
I think most vendors have been accelerating both reads and writes in cache for decades; what you're describing is one of the simpler implementations of just extending the cache. If you have an automated tiering system then only very small portions of the data should be in the wrong place at any given time, and these should easily be handled by existing cache, assuming you have an intelligent cache algorithm. Extending the cache is just a brute-force method which inevitably yields diminishing returns. It appears from your comments that NetApp still can't provide automated tiering, so you now have to trick WAFL into believing it's writing to traditional disk when actually it's a hybrid containing SSD and disk: another hack. Senior NetApp management's denial of the need for tiering hasn't really helped their cause either. With PAM, all NetApp did was quickly plug a gap in their portfolio with a "me too" feature to buy some more time to fix WAFL. If this new implementation does a better job than PAM, why would anyone invest in PAM going forward?
-
-
-
-
-
Thursday 11th October 2012 12:43 GMT J.T
Re: Nice graphs
Why include devices that don't use hardware RAID or proactive sparing, or that have to run with an increased amount of spare space due to an unusually high drive failure rate? Or systems that, above 80% utilization, switch to a different algorithm that severely affects performance... well, that would kill a couple of others from that graph...
But it's really not that interesting; for all their hype, sales pushes, and marketing, they're in the other category.
-
-
Thursday 11th October 2012 13:35 GMT Franko Davidson
More Graphs Please
While the revenue share graphs are valuable, they don't tell the whole story. It would also be valuable to see by-capacity and by-units-sold graphs as part of this and your IDC analysis as well. For example, of the top three players in the graph (SYM, NetApp, and Clariion), the SYM is by far the most expensive. So if you go by capacity sold or units sold, the NetApp and the Clariion would probably be way higher on the graph than the SYM, because they cost less per TB, meaning considerably more copies of these OSes and more of these units are in use than the SYM. Revenue numbers alone just don't give a full picture of share.
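To make that concrete with invented numbers (purely illustrative, not actual IDC, EMC, or NetApp figures): if one product line earns more revenue but costs far more per TB, its capacity share can be much smaller than its revenue share.

```python
# Hypothetical figures only: revenue ($M) and price per TB ($) are made up
# to show how revenue share and capacity share can rank vendors differently.
arrays = {
    "SYM":      (1000, 20000),
    "NetApp":   ( 800,  8000),
    "Clariion": ( 700,  7000),
}

total_rev = sum(rev for rev, _ in arrays.values())
tb = {name: rev * 1e6 / per_tb for name, (rev, per_tb) in arrays.items()}
total_tb = sum(tb.values())

for name, (rev, _) in arrays.items():
    print(f"{name:8s} revenue share {rev / total_rev:5.1%}"
          f"  capacity share {tb[name] / total_tb:5.1%}")
```

With these made-up inputs, SYM leads on revenue share (40%) but trails both cheaper products on capacity share (20% vs 40% each), which is exactly the gap the comment says a revenue-only graph hides.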
-
Thursday 11th October 2012 14:20 GMT Anonymous Coward
IBM's Orphaned parts of their storage line
The problem with IBM's DS3000/DS5000 share is that nobody at IBM is selling it. I know that sounds like stating the obvious, but what I mean is that IBM storage reps typically focus on the big XIV and DS8000 sales and pay little or no attention to selling the DS3000/DS5000 product. If you were a storage rep at IBM with only so much time in the day, would you focus your time on $500K+ deals with XIV and DS8000, or on $250K and lower deals on the DS3000/DS5000 product? Additionally, the IBM System x Intel server reps that used to get paid for selling DS3000/DS5000, and sold a lot of it, no longer get paid on that product, so the DS3000/DS5000 is basically a product that nobody is getting paid to sell and nobody has an interest in selling. It is a good product, but if nobody is representing it, no wonder sales are down. The same goes for the nSeries that they OEM from NetApp. IBM storage reps are instructed to sell it only when a customer asks for NAS, and even then it competes with IBM's own SONAS and V7000 products. In some cases, some storage reps at IBM find it easier to sell nSeries and focus almost exclusively on that, regardless of what they are instructed. The net is that IBM's problems are rooted in having too many storage products to sell and too many products that nobody is focused on selling.
-
Tuesday 16th October 2012 16:34 GMT flashguy
Funny numbers
FWIW, these numbers don't track IDC's numbers on system revenue with regard to IBM. If you have access, just go back to a 2010 quarter and do the math yourself; DS revenues were a lot higher back then. Also, I think the DS5xxx is EOL; that's probably why nobody is selling it compared to the vSeries. IBM slotted the Engenio replacement into their more HPC-oriented DCS line rather than their GPC vSeries. Either the analyst who produced these charts is lazy, or a paid hack.