The public cloud doesn't eliminate disruption. It only eliminates your ability to deal with it in a safe, reliable and cost-effective manner. Instead you get "hope and prayer".
Balls of destruction CRUSH your fancy new storage systems ... better get used to it
Has EMC started an unwelcome trend? I had a discussion about roadmaps with a vendor this week, and its reps talked about substantial upcoming changes to their architecture. My questioning, "but surely that’s not just a disruptive upgrade but destructive?" was met with an affirmative. Of course, like EMC's XtremIO upgrades, the …
COMMENTS
-
Tuesday 30th September 2014 16:37 GMT Tom Maddox
No.
"And we tend to be a lot more tolerant of such disruptive and potentially destructive upgrades. Architecturally, as we move to more storage-as-software as opposed to software wrapped in hardware, this is going to be more common, and we are going to have to design infrastructure and applications to cope with this."
If a storage vendor tells me that their upgrade is data-destructive, I'll be planning my transition to a new vendor; that's how I'll deal with it architecturally.
Not to go off on a tangent, but the "software-defined" moniker is somewhat misleading. All advanced storage and networking (anything beyond a JBOD or network hub) relies on software to get its job done. The real shift is towards software that doesn't rely on a particular fixed set of hardware to get its job done, a shift which is mostly complete except in the storage world (and, arguably, in core Layer 2/3 networking, where dedicated hardware is still sensible). What the storage vendors are grappling with, more than anything else, is how to hold onto their ludicrous margins in an industry where so much other hardware is commodity.
-
Tuesday 30th September 2014 17:33 GMT Nate Amsden
design it right
And you may not have an issue. HP recently claimed there has never been a 3PAR software update that required data migration, ever. I do recall being told there have been, I believe, two 3PAR updates (a looooong time ago, certainly before I became a customer in 2006) which required some degree of hard downtime, but no data loss or migration. The end-to-end virtualization of the platform means data migrations, if needed, can be done on the fly. You're not dealing with disks, after all, but virtualized chunklets. Simply replicate from one format to another and flip the switch (this technology has been on the platform for many years - it's how they convert between RAID levels and move volumes between tiers, totally non-disruptively).
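The replicate-and-flip idea described above can be sketched in a few lines. This is a toy illustration only, not 3PAR's actual implementation; all names are hypothetical. While the old layout keeps serving I/O, a background task copies chunks to the new layout, incoming writes to already-copied regions are mirrored to both, and the final cutover is just a pointer swap:

```python
class Volume:
    """Toy volume: a list of fixed-size chunks."""
    def __init__(self, nchunks):
        self.chunks = [b"\x00" * 4 for _ in range(nchunks)]

    def write(self, idx, data):
        self.chunks[idx] = data

    def read(self, idx):
        return self.chunks[idx]


class LiveMigration:
    """Migrate src -> dst while src keeps serving host I/O."""
    def __init__(self, src, dst):
        self.src, self.dst = src, dst
        self.active = src      # all host I/O targets this volume
        self.copied = 0        # background-copy high-water mark

    def write(self, idx, data):
        self.src.write(idx, data)
        if idx < self.copied:  # already-copied region: mirror the write
            self.dst.write(idx, data)

    def copy_step(self):
        """Copy one chunk in the background; host I/O continues meanwhile."""
        if self.copied < len(self.src.chunks):
            self.dst.write(self.copied, self.src.read(self.copied))
            self.copied += 1
        if self.copied == len(self.src.chunks):
            self.active = self.dst  # "flip the switch": cutover, no downtime


# Simulate host writes arriving while the copy is in flight.
src, dst = Volume(4), Volume(4)
m = LiveMigration(src, dst)
m.write(0, b"aaaa")
m.copy_step()          # chunk 0 copied to dst
m.write(0, b"AAAA")    # write to a copied chunk -> mirrored to dst too
m.write(3, b"zzzz")    # not yet copied -> picked up by a later copy_step
while m.active is src:
    m.copy_step()
assert all(dst.read(i) == src.read(i) for i in range(4))
```

The key property is that the host never stops writing: the mirror-on-write rule keeps the copied region consistent, so the cutover needs no quiesce window.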
On the topic of upgrades, just about 3 weeks ago I did a minor software upgrade on my dual-controller 3PAR platform - the first node upgraded fine, the 2nd node failed halfway through (an internal disk on the controller failed). The system was in a degraded state for a long time, as support struggled to get the new controller to join the existing one with mismatched code revs. Eventually, once I got the case escalated to engineering, they figured out a way to do it. Performance was not good during this time, of course, made worse by our 90% write workload. I had been trying to get a 4-controller system in for many years, but the company didn't want to pay for it originally. Our new all-SSD platform, which I just submitted the order for yesterday, is 4-controller though (nothing to do with the recent upgrade - this was approved 10 months ago). So it will be nice to be on a 4-node 3PAR platform once again. It doesn't solve world hunger, but it really makes me not want to even consider an architecture that can't go beyond 2 nodes in a unified cluster (e.g. I don't view NetApp clustering as real clustering; it's more like a workgroup of systems that you can move volumes between, similar to how you can move VMs between VMware hosts, but you can't have a single VM span more than one host).
http://www.techopsguys.com/wp-content/uploads/2010/08/cache-degraded.png
(unlike many 3PAR software features, that one is included at no charge)
Public cloud migrations are of course generally far, far worse. They often go something like: "you have 30 days to move your data before we delete it - oh yeah, and we don't offer support or help; you should know how to do this".
-
Tuesday 30th September 2014 18:34 GMT elip
Re: design it right
For my money, I would go with NetApp vs. EMC every single time. I've literally performed a hundred non-disruptive upgrades without any disruption (shocker!). ;-) Compare that to the older Celerra data movers: failing back after a failover event always caused a 4-6 minute outage, assuming it came back up at all!
"(e.g. I don't view NetApp clustering as real clustering it's more like a workgroup of systems that you can move volumes inbetween similar to how you can move VMs between vmware hosts, but you can't have a single VM span more than one VM host)"
Not quite... you can have load-sharing mirrors across all systems in your cluster if you wish. This is what we've done for 3 years with 1PB of data. We move a lot of data on this cluster (averaging around 1500MB/sec on our main volume) and haven't had a single outage on any volume! With that said, it does take proper planning and architecture to pull it off. Also, "real clustering" as you've framed it (I assume you mean all filers in a cluster having read-write access to all disks/volumes) is coming "soon"-ish. :-)
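The load-sharing-mirror idea mentioned above can be sketched roughly like this. A toy model only, not NetApp's implementation, and every name here is hypothetical: each node holds a read-only copy of the volume, reads are spread across the nodes to share the load, and writes go to the single writable source and are then fanned out to the mirrors:

```python
import itertools


class LoadSharingVolume:
    """Toy model: one writable source, one read-only mirror per node."""
    def __init__(self, nodes):
        self.source = {}                            # writable master copy
        self.mirrors = [dict() for _ in nodes]      # read-only copy per node
        self._next = itertools.cycle(range(len(nodes)))

    def write(self, key, value):
        self.source[key] = value
        for mirror in self.mirrors:                 # fan the write out
            mirror[key] = value

    def read(self, key):
        node = next(self._next)                     # round-robin node choice
        return self.mirrors[node][key]


vol = LoadSharingVolume(["filer1", "filer2", "filer3"])
vol.write("vol0/file", b"data")
# Six reads cycle through the three mirrors twice; each returns the same data.
assert all(vol.read("vol0/file") == b"data" for _ in range(6))
```

The point of the design is that read throughput scales with node count, while there is still exactly one writable copy - which is why it falls short of "all filers have read-write access to all volumes".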
-
-
Tuesday 30th September 2014 21:36 GMT Anonymous Coward
What a total whitewash
I'm shocked that Storagebod doesn't see the fallacy of his own writing. One of the key tenets of "software-defined" is that the software is ABSTRACTED from the hardware AND THE DATA. In a true application of "software-defined", you should be able to update the software without data destruction. If you cannot, the value of software-defined is meaningless.
What it does mean is that the vendor's product is really "architecture-defined": users become slaves to the architecture design and, ultimately, anchored to architecture decisions (vendor lock-in) - which is EXACTLY what we were trying to avoid with a software-defined approach.