I love it when storage vendors invent new market segments: "Entry-Level Enterprise Storage Arrays" appears to be the latest one, from the brilliant marketing team at EMC. And it is always a "new" space that only the company occupies. But are these new spaces real segments or just marketing? Actually, the whole Enterprise Storage …
you want 100% availability
You don't need it.
Very few do. Go spec out what it would take, both in infrastructure and in software design, to provide 100% availability, and in most cases (I'd wager 99.9+%) it's not worth the investment. The same applies to the software these enterprise storage companies are putting on their arrays. Bugs happen, systems go down. Having both controllers fail at the same time is more likely than array vendors would like to admit (I have suffered through at least two such occasions, with different tier 1 storage companies).
Look through the changelog of software releases; some of the things fixed look pretty scary at times. Fortunately I haven't been bitten by too many catastrophic storage failures. And that's just the public stuff; talking to insiders at various companies reveals even more horrific stories. One such company that I was a customer of once had us go through a good 7 hours of hard downtime because they did not have a proper internal escalation procedure. The CEO later apologized to us, and they did fix their support structure.
What you may want even more, though, is 100% data integrity. If my services go down because storage is degraded, that's not the end of the world; but if the system dies and corrupts itself in the process, that is obviously more serious. The same goes for a bug that corrupts data on disk while you're using async replication to send that on-disk data to another array as a "backup", not knowing it is just sending corrupt data to the backup system too.
Everyone loves to talk about disaster recovery and business continuity, but more often than not, at the end of the day, the costs are too high and the company ends up calling it off. One company I was at got as far as tripling the budget for DR and got it all approved, only to then change their minds and direct that budget towards another one of management's pet projects.
I had another company sign a very expensive contract with a big-name DR provider when they knew from day one they would never be able to use it (the plan was fatally flawed and they knew this internally); they signed it anyway just so they could tick the check box for "DR" for their customers. Fortunately for them, that company was later acquired by a massive company and has since put a more realistic plan in place.
The term "enterprise" is overused, to be sure; it would be nice if there were some more formal method of determining how available a storage system is, as well as how well it protects data. People with some level of storage experience can see past it pretty easily, but less experienced management types who just look at the most basic metrics are playing with fire.
I've learned some good storage lessons over the past several years, and am a lot more cautious now as a result.
But even with all that (the problems and failures I have had with enterprise storage), I'm nowhere NEAR interested in trying to "roll my own", nor am I interested in deploying some half-baked open source storage grid to replace enterprise storage. Storage is complicated to get right; while enterprise storage certainly has its faults, and it is costly, it is still solar systems ahead of pretty much anything else out there for organizations that do not have significant developer resources to maintain their own thing. In fact, the more I use storage, the less interested I have been in using anything BUT enterprise storage (at least for block and file devices; object storage is different). The risk just isn't worth it for mission-critical things.
"I love it when storage vendors invent new market segments: "Entry-Level Enterprise Storage Arrays" appears to be the latest one, from the brilliant marketing team at EMC. And it is always a "new" space that only the company occupies."
FROM THAT BLOG:
"EMC has its very successful VMAX 10K. Hitachi has recently offered the HUS VM. And HP has invested heavily in enhancing 3PAR in this segment."
But really, what is the point of this? Is there something coming where you talk about how enough features have moved from Enterprise level storage to Mid tier storage that the highest level isn't necessary any more? Or perhaps go more in depth about how the way applications like Oracle and MS Exchange interact with I/O causes inflated backend storage throughput that wouldn't be necessary if they just did x or y?
Are you going to steadfastly refuse to ever use the term "Big Data" because marketing at EMC thought that one up (so they tell us)? How about an amusing story about the time EMC Marketing tried to rename Legato's clustering software Full Time Auto Start (ahhh, FAT ASS)?
But just writing a column saying the term "enterprise" is used too much, and then saying "too expensive", kind of conjures the mental image of a guy sitting on his porch shaking his cane at passersby.
Any storage where the word "Enterprise" is used simply means you need to get a minimum 60% discount to avoid being ripped off. Usually you should get 80% off the first quote for true "Enterprise" kit because in reality all "Enterprise" means is "we hiked the price up to make you think this system was better than the cheaper one and to make our overpriced SAN look competitive with <COMPETITOR>'s overpriced SAN".
Enterprise storage started out as a hangar at Washington Dulles, then was a wing of the Udvar-Hazy Center, then it was an inflatable bubble on the deck of a retired aircraft carrier, and now it's another temporary structure on the same deck after a storm destroyed the bubble.
All of which goes to show that Enterprise storage is non-trivial.
Oh, perhaps we weren't talking about the space shuttle of that name?
Failure: plan for it
Totally agree on the point made about understanding failure modes. It's a trait that I note is dangerously lacking in many 'architects' I have met of late.
100% uptime, good luck with that
There is a reason why storage vendors all say 5 x 9's: they know that sooner or later there will be a DU/DL (data unavailable/data loss) event. You are deluding yourself if you think that a home-rolled cluster or whatever can have 100% uptime over several years. Yeah, you can mitigate DU/DLs by having synchronous replication to a remote site, but how long did that take to kick in, and what happens if there is a server fart at 3am and something does not kick in properly?
Mostly, when I think of enterprise (which is ALWAYS heavily discounted; I don't know anyone who pays full whack), I think of 24*7*365.25 support for kit that can tolerate a few faulty parts, plus some dude onsite repairing that kit in a reasonable amount of time.
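For anyone who hasn't done the arithmetic, the gap between "five nines" and 100% is easy to put in minutes. A quick back-of-the-envelope sketch (numbers illustrative only, using a 365.25-day year to match the 24*7*365.25 support above):

```python
# Allowed downtime per year for a given availability target.
# Illustrative arithmetic only; real SLAs define their own
# measurement windows and exclusions.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_per_year(availability: float) -> float:
    """Return the downtime budget in minutes per year."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, a in [("three nines", 0.999),
                 ("four nines", 0.9999),
                 ("five nines", 0.99999)]:
    print(f"{label} ({a}): {downtime_per_year(a):.2f} min/year")
```

Five nines works out to roughly five and a quarter minutes of downtime a year; 100% allows none at all, ever, which is why nobody sane promises it.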
Re: 100% uptime, good luck with that
Storage vendors don't all say 5 nines, HDS say 100%.
If there's a "server fart" at 3am your storage will still be up so that's irrelevant. If one of the storage controllers dies the other can take over in a decent SAN with zero downtime (usually the latency increases briefly during failover). The failed storage controller can then reboot or be replaced before failing back.
You don't have to have an infallible system; you just need sufficient redundancy to cover the time until the engineer arrives on site with spares.
FWIW, a building-level failure does not affect the uptime stats of a SAN. If it's off due to external issues, it's not classed as downtime; it's classed as switched off. That may affect the uptime stats for your application, but that is not the storage vendor's problem.
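The "sufficient redundancy until the engineer arrives" argument can be sketched with a toy model: if each controller is independently up some fraction of the time, the pair is only down when both are down at once. This is a simplification I'm adding for illustration; as noted earlier in the thread, correlated dual-controller failures do happen, and this model ignores them.

```python
# Toy availability model for a redundant controller pair.
# Assumes the two controllers fail independently, which real
# arrays (shared firmware bugs, shared power) can violate.

def pair_availability(single: float) -> float:
    """The pair is down only when both controllers are down."""
    return 1 - (1 - single) ** 2

# Two controllers that are each up 99% of the time:
print(f"{pair_availability(0.99):.2%}")  # prints "99.99%"
```

The point of the model is that modest per-component availability buys a lot of system availability, as long as the failures really are independent and someone swaps the dead part before the survivor also dies.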