On behalf of everyone,
F*ck the cloud.
EMC says backup is broken, and that infrastructure should now be able to protect itself using intelligence it possesses. It is hoping users will move towards copy data management and away from leaving idle data in silos waiting for something to happen. The realisation that EMC's backup thinking had changed started with a …
Amen to that. This whole thing is just a PR stunt to make people think EMC is at the forefront.
The Cloud is just a bubble of fog high in the sky. Azure is tripping over its own feet every other month for one reason or another, and you want me to think I should back up my DATA to the cloud?
Manage my website at five nines for ten years already, THEN you'll be able to talk about data.
And you still won't get mine.
Can someone on the El-Reg storage desk translate this management-speak drivel into English?
If you have only one copy of a piece of data, there are various things that could happen to it to cause you to lose it. You keep multiple copies in different places, in different formats and with different levels of accessibility, because you want to reduce the probability that whatever hazard you experience will destroy all your copies of the data.
Another copy will guard against hardware failure. Snapshots will guard against user error. Offsite copies will guard against environmental disasters such as fire, severe weather and theft. Making them less accessible will guard against software problems such as viruses and security incidents.
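To put the commenter's point in numbers: if copies fail independently, the odds of losing everything shrink multiplicatively with each extra copy. A toy sketch (the 1% annual failure rate per copy is an illustrative assumption, not a real figure):

```python
# Toy model: independent copies mean the probability of losing ALL of
# them is the per-copy failure probability raised to the copy count.
p_fail_one_copy = 0.01  # assumed chance a single copy is destroyed in a year

for copies in (1, 2, 3):
    p_lose_all = p_fail_one_copy ** copies
    print(f"{copies} copies -> lose everything with probability {p_lose_all:.0e}")
```

Three copies turn a 1-in-100 risk into roughly 1-in-a-million, which is exactly why the different places/formats matter: the model only holds if the failures really are independent (a virus that reaches every online copy breaks the assumption, hence the offline/offsite variants above).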
Would someone ask EMC to slow down with all this new stuff? They have barely finished tidying up after the release party for XtremIO and now all this?
Seriously, being 'disruptive' does not mean standing in the middle of the room throwing different objects at the audience until one sticks... actually, I suppose it does, BUT THAT'S NOT THE POINT
Backup *is* decidedly broken and has been for a long time. Some of what they are saying makes sense, but their take on the solution serves their interests at the expense of the customers.
What we need are smart systems that use something akin to Git's distributed architecture. The real answer is to have common things shared such that their integrity is proportional to their value.
They are correct about copy management and versioning but their notion of the backing store partly just moves the problem to a different location.
I am not promoting Git. I am not a fan of the implementation (no offense Linus). However, I am promoting the architectural design point that has full copies of integral data sets stored all over the place in proportion to their value (expressed as interest in open source code).
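The integrity half of that design point is worth spelling out: Git names every object by a hash of its content, so any full clone anywhere can verify every byte it holds without trusting the source. A minimal sketch of how a blob ID is derived (this reproduces `git hash-object` for the Pro Git book's "test content" example):

```python
import hashlib

# Git hashes a small header ("blob <length>\0") plus the raw content,
# so identical data gets an identical ID in every clone worldwide.
content = b"test content\n"
header = b"blob %d\x00" % len(content)
blob_id = hashlib.sha1(header + content).hexdigest()
print(blob_id)  # d670460b4b4aece5915caf5c68d12f560a9fe3e4
```

Content addressing is what makes "full copies stored all over the place" trustworthy: a corrupted or tampered copy simply fails to hash to its own name.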
The problem inherent in the correct solution is that it gives control and money-making opportunities back to us, where they belong. Everyone in charge, from businesses and their vendors to government, does not want us to have control of our information, because handing it over is equivalent to relinquishing control of us.
Money is the mechanism, but control is what the game is all about.
"Chief Marketing Mouthpiece?" Ouch.
Setting that aside... you nailed it. All I'd add for your readers is that it's important to remember that the big storage vendors created the Copy Data growth problem, and benefit the most from it. It's naive to think they're the ones who are going to solve it. Mr. Manley is dead right about where the market is going, but until EMC tells financial analysts that it's no longer going to get the revenue tailwind of massive same-customer data growth, this copycat strategy is more about protecting EMC market share than it is about protecting customer data.
I'd also point out that they're calling for a data management model that tames storage growth, sets SLAs for production applications, leverages backups as assets instead of insurance, and manages data in its native format so it's instantly accessible. For all the breadth of their current product line... they have NONE of those capabilities today, and Actifio has ALL of them.
Whoa, EMC has suddenly seen the light! Static backup copies are baaaaad!
Christ, ask any Netapp customer about using backup or DR copies for test and dev. They've been doing this for ages with snapmirror and snapvault, combined with flexclones. Clone the mirror or the backup, work on the clone. When done testing, discard the clone. If you want to keep the clone, split it off if needed. No big deal, basic array functionality. Of all people, Manley should know this. And now he comes along and starts preaching this stuff as if it's some kind of revolutionary idea? Come on...
@anon: Man, you're absolutely right. They really need to get some consistency in their messaging. When are they going to ditch networker and data domain? They just said that the concept behind those products is fundamentally broken. The article's title sums it up nicely, really...
PS: yes, I created my account today, and explicitly to react to this article. I've been an avid Reg reader for years, but only now have I felt the need to respond. I have experience with both EMC and Netapp arrays, but I'm not affiliated with either one.
One of the issues here is that, by its very nature, backup is not dead; it lives on, and on, and on. What is considered outdated is still very much active seven or more years after it is taken as a backup. So whatever technology you bring in has to stay alive and supported for a prolonged period of time.
The key to this is solving the rather immature way that IT as a whole deals with data - we readily confuse Backup and Archive. Backup is exactly what it says - a fall-back position should it all go wrong. We really only need a month of this (at worst); if you haven't detected your issue after a month then it really isn't a live system.
Archiving is the removal of stale data to another tier. Now, this is difficult, because unlike the backup chaps, who don't care what your data is, the Archive chaps care - a lot.
To my mind there is little point in having a vendor of data buckets pontificating on the relative merits of different types of bucket, with the aim of selling shiny new buckets, when we still have them full of swill. I think the key is for the database vendors (and I'm looking at Oracle here) to enforce, imbue and enable their products to provide archiving as an inherent, unavoidable function. Most importantly it needs to be an open, vendor-agnostic standard, otherwise companies who take a long-term view will stay away in their droves. What and why are the data owner's responsibility; how long is Oracle's; where is EMC's.
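The backup-vs-archive split described above is easy enough to state as policy. A toy sketch (the 31-day window is this commenter's "month at worst"; the function name and everything else are illustrative assumptions):

```python
from datetime import date, timedelta

# "We really only need a month of this (at worst)" - assumed window.
BACKUP_WINDOW = timedelta(days=31)

def tier(last_modified: date, today: date) -> str:
    """Toy policy: recently touched data belongs in the backup rotation;
    anything staler is a candidate for the archive tier."""
    if today - last_modified <= BACKUP_WINDOW:
        return "backup"
    return "archive"

print(tier(date(2014, 5, 1), today=date(2014, 5, 15)))  # backup
print(tier(date(2007, 5, 1), today=date(2014, 5, 15)))  # archive
```

The hard part, as the comment says, is not the date arithmetic - it's that archiving needs to understand what the data *is*, which is why pushing it into the database layer as a standard makes sense.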
HDS has a product in this space, via a company they acquired last year named Cofio.
My bet: EMC purchases its neighbor Actifio and renames it VCopy. Leading up to the VCopy product release, VGeek, Chuck and Zilla all blog on how EMC is about to change the face of the backup market forever. On the day of the announcement, EMC buys 400 new copiers and makes copies of the lawsuit against Pure Storage for an hour, breaking the world record for most copies of a lawsuit made on stage in an hour.
Biting the hand that feeds IT © 1998–2019