The concept of enterprise storage - kit sold by large companies who charge very high margins for providing hardware and support services - is more or less done. Sound like a rash statement to you? I'll explain what I mean. Pretty much all the functionality that you might expect to be put into a storage array has been done and …
I can see some merit to it, but you've missed the most important part: people don't buy expensive tier 1 products mainly for their features. They buy them for the reliability and support.
There have been lots of free alternatives to the major storage platforms for a while: think of OpenFiler, Nexenta and certain VSAs. But I personally would not deploy them on our business-critical infrastructure. I could probably fix most problems given enough time, but with business-critical infrastructure you don't have that luxury. And if it's deployed on commodity hardware that we built ourselves, which cavalry do you call when the chips are down?
Because storage is only about feature sets and cost? Really?
As a man who gets to see and hear the childish foot-stomping of customers when their 'cheap' but 'feature complete' storage implementation has their business on its knees, you'll forgive me if I just move on by and ignore this blather.
I think you missed this line:
" If you want an example of how it should be done, look to Red Hat and how it competes with free; it competes on service and support."
Yes, we know, free is better... so how come Microsoft sell so many Windows licences? How come so many businesses put their mission-critical systems on paid-for OSes? It's not like Linux can't run their systems, is it? Yet still businesses keep paying out for OS licences.
I do wish people would get past this 1990s theory that the only thing which matters to business is cost, and that therefore 'free' will win the day. It wasn't true for desktops, it isn't true for servers, it won't be true for storage.
There is most certainly a place for 'free' just as there is most certainly a place for commodity, but it isn't everywhere for very specific reasons which aren't just about 'cost'.
The comparison of Windows vs Linux is not applicable here, as the main deciding factors are usually related to COTS applications and user identity management. Windows did not succeed on initial quality, but on being cheaper and usable on more commodity hardware than the established OSes of the day.
Even today the up-time for your server will be less about the OS choice and more about the hardware quality and your BOFH's ability to defend it.
The issue here is one of quality of software and support. It is likely that some customers will stick with EMC et al. as the safe bet, but in time the major vendors will have to learn the lesson of the past: commodity hardware decided OS success, which in turn drove the high-end market.
the main deciding factors are usually related to COTS applications and user identity management
Maybe in the world you work in, not in the one I work in. I only used Windows as an example, despite not working with it. I think you should revisit your thoughts about why Windows succeeded though.
Businesses would love to run free and OSS products, and a lot do, but ask most businesses why they choose Red Hat or Oracle EL over CentOS for critical platforms and the answer, in plain language, is support, or "I get to shout at someone when things go wrong."
Yes, there are shops who happily run 100% free platforms, but they're prepared to bump up the staff they have to compensate for any fear over reliability. You hire shit-hot engineers and designers who can build in the redundancy you may need when you're on your own and no one is coming when it fails, then you make time to test it to destruction.
Ultimately, when the shit hits the fan at 2am on a Sunday morning, your storage array is eating dirt, your app/db servers are dead in the water and you can almost see the money haemorrhaging out the door, you need your staff to have access to the best support desk options and knowledge bases available for a particular product so they can get it back up and running ASAP. You either give them the knowledge they need to do it on their own (by paying a lot for top-notch bods) or you support them by paying a vendor's high support costs for peace of mind; either option still costs lots of money.
A lot of the comments are rightly saying that it's the support which is an important differentiator.
We can compare tick-lists of features and we can compare the price or total cost of ownership, yet how can we compare support? AC says "paying a vendor's high support costs" gives "peace of mind", yet in my experience none of the vendors would ever give me peace of mind, whatever the price, as their quality of support when you actually need it is usually abysmal.
Symantec has been selling Storage Foundation for a decade, if not more. They have a very TIGHT compatibility list of what hardware to use, from storage to HBAs to servers. They say "buy commodity hardware, use our software." Well, it sucks and can be plagued with a lot of finger-pointing between vendors: disks that are too slow, the wrong HBA driver/firmware, multipathing issues, etc. The same mantra has been used in PureDisk, NetBackup, etc. There are others, like FreeNAS and the like, that offer this as well.
What is predicted in this article is years and years off. It has been attempted for years, but getting it right has never been possible. This is why the HDSes, EMCs and NetApps of the world continue to operate: their wares have proven themselves. Sure, a startup will spin a compelling story, but for whom? Some budget-conscious admin who likes to be on the bleeding edge? Yeah, good luck with that.
I'll put my trust in a proprietary piece of hardware from a major vendor that's been tested to destruction and backed by a decent four-hour on-site call-out support contract, over a collection of cobbled-together second-hand bits of server farm found rusting in the back of a store room with a generic storage app running on it. Thanks...
High-End Enterprise Array:
More processing power to support functionality
Multiple LUN access, load balancing
Data-center-level service and support
Non-disruptive upgrades and repairs, microcode change
Small impact of a component failure
z/OS, RISC Unix, Windows, Linux
Data integrity and data consistency techniques across multiple subsystems
Midrange Enterprise Array:
No z/OS support
Lower subsystem scalability
Point-in-time and RC availability, but with deployment limitations
Features impact performance
Questionable data consistency and fast recovery availability
Fewer connections for RC
A failure of a component may cause a 50% loss of throughput and connectivity.
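The 50% figure in the midrange list above follows from simple arithmetic when throughput is spread evenly across controllers: losing one of N costs roughly 1/N of the total. A minimal sketch in Python; the controller counts are illustrative assumptions, not any vendor's actual architecture.

```python
# Hypothetical sketch: fraction of throughput lost when controllers fail,
# assuming load is spread evenly across all controllers. Real arrays may
# rebalance or degrade differently; this is just the back-of-envelope maths.

def throughput_loss_pct(controllers: int, failed: int = 1) -> float:
    """Percentage of total throughput lost when `failed` controllers go down."""
    if failed > controllers:
        raise ValueError("cannot fail more controllers than exist")
    return 100.0 * failed / controllers

# Midrange dual-controller array: one failure halves throughput.
print(throughput_loss_pct(2))   # → 50.0

# Illustrative high-end array with eight directors: one failure costs 12.5%.
print(throughput_loss_pct(8))   # → 12.5
```

This is why the component-failure impact appears under the midrange column but not the high-end one: the more independently serving components an array has, the smaller the share of throughput any single failure takes with it.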