* Posts by Night Owl

3 publicly visible posts • joined 27 Jun 2014

NetApp’s Raijin deal was driven by simple value for money

Night Owl

What single platform? And hey, those EF arrays DON'T run Data ONTAP

Funny, but it seems to me the people making these comments haven't actually READ the article. The primary product being used here is the All-Flash EF series, the brute-force performance arrays NetApp acquired when it bought Engenio. FAS arrays are being used for - wait for it - home directories, a great use for NAS but hardly revolutionary.

There's no "single" platform here: two completely different products running different OSes, with completely different management, being used for two different purposes.

All vendors give product away from time to time to "buy" market share. The customer usually ends up getting burned on maintenance, or upgrades, or something else, since the vendor has more experience writing contracts than the hapless customer overwhelmed by the "deal" they are getting. Let's have a follow-up article in a year and see how happy they are with it (which isn't to say they won't be, just that it would be a more interesting story).

I am trying to find an anti-NetApp slant to this article, but aside from El Reg's usual irreverent, tongue-in-cheek reporting style, I don't see it. What are you all complaining about? If NetApp has been crowing about replacing DDN because of better performance, then kudos to him for sticking a pin in that bag of air. They gave some kit away; that's often how you win reference accounts or deals.

Forrester says it's time to give up on physical storage arrays

Night Owl

What rubbish. I assume Forrester has a fat research contract from one or more "virtual" array companies to promote the idea of virtual arrays.

What current-architecture array is NOT an entirely software-defined storage product today? The only one I can think of is 3PAR, with its ball-and-chain custom ASIC architecture (where the next version of the product is guaranteed to make yours obsolete, because you won't have the new ASIC in it). They are almost all running on one of the same 2-3 ODM storage platforms. What are you paying for? The software. The development and testing of the features in that software costs money, and that's what you are really paying for.

If anything, software-defined storage and virtual arrays are going to cost more, because of the much greater testing and integration work with multiple hardware platforms that must occur for them to be "enterprise" grade.

And do you want to deal with all the finger pointing when something goes wrong? It's the same reason I don't run open source on anything mission-critical - when I am having problems, I want someone on the other end of the phone whose job is dependent on this product working. Not someone who is going to tell me that "it should be working" and I should "talk to my server vendor".

Dell mashes up EqualLogic and Compellent: Eat up kids, it's Dell Storage

Night Owl

Re: what does that mean

As a former Dell SE who sold and supported EQL for many years, I can only say it was a great story that turned out to have a very sad ending for many customers. While the "scale out" concept works great on a whiteboard, having to pay for another entire array (controllers, software, and disk) every time you just want to add some storage caused a lot of problems for customers. Many of them, initially attracted by the low intro price ('cause like anything Dell, the only sales motion the AEs know is "I can get that cheaper"), ended up stranded because they couldn't afford to expand the EQL footprint.

Or they bought the sales pitch and mixed models (PS4000 and PS6000) and ended up with performance problems because the 4000 couldn't keep up with the faster 6000.

And then there were the almost constant bad firmware releases... We never knew what they were doing with development (Dell was never very good about being honest with its employees about quality issues, let alone customers), but it certainly seemed that it was just a bunch of monkeys banging out code, with no test or QA effort at all. I don't know if they have managed to stabilize that, but I doubt it, because you would expect all development effort to now be focused on CML.

It is absurd how complex many vendors' products are, when you should be able to build much better intelligence into the software, but for many people complexity is career safety. More than a week of training just to learn how to operate an "Enterprise" SAN like NetApp (or EMC, IBM, whatever)? Seems beyond absurd to me.