I have a niggling doubt...
... about one of the words in this article.
When it comes to the disk-based backup appliance market, there is EMC and IBM and then there is everyone else. Market analyst IDC has just released a report, "Worldwide Purpose-Built Backup Appliance 2012–2016 Forecast and 2011 Vendor Shares", that shows EMC utterly dominating the disk-based backup appliance market with its Data …
It's pretty easy to have the biggest market share if you go round buying up the competition, but it makes for a very confusing product portfolio. There are EMC devices competing with each other in the same space, and unfortunately they are not that interoperable: if you buy the wrong product and need to upgrade, it can be very difficult to move to the next EMC product up when the one you have doesn't have enough grunt. Hopefully their products will be aligned soon and these problems will go away.
Why backup when you can replicate for protection and use snapshots to go back in time...
I really hope that's not what you do.
Where's your plan 'B'? Snapshots depend on the primary data being available - what happens when that's gone? Snapshots should only ever be plan 'A'.
Snapshots can last for a very long time, but you'd need an Andre the Giant sized fistful of capacity to keep several years' worth of back data. And even then, if the primary is gone, you've got to rely on your synced data and snapshots at the remote site. What if the loss was malicious and both primary and replica are destroyed? A bug in the array code, an internal or external hack, mis-configuration of snaps and replication, shipped corruption going back before your available snapshots - all of these are just some of the reasons that snapshots alone should NOT be considered an alternative to backups; they're just your first port of call in an event! Backups are your next buffer between recovery and a P45.
Backup provides an airgap for such an event.
But that's the point of replication, AusStorageGuy. How is using an external disk-based backup different from replicating your data to another array? Are you aware of the offerings out there for snapshots, or are you simply talking about this from a copy-on-write (like Dell EqualLogic) standpoint?
If you are worried about spending gobs of primary storage on backup... well, that's what redirect-on-write is supposed to do: store block incrementals which are compressed and deduped ON A CAPACITY TIER on the primary storage. I even heard that VNX is moving to that strategy at EMC World this year.
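To make the copy-on-write vs redirect-on-write distinction concrete, here is a minimal sketch of a redirect-on-write volume. This is purely illustrative (not any vendor's actual implementation): writes always go to fresh blocks, so a snapshot is just a frozen copy of the small block map, not a copy of the data.

```python
# Minimal redirect-on-write snapshot sketch (illustrative only; names and
# structure are assumptions, not any array vendor's real design).

class Volume:
    def __init__(self):
        self.block_map = {}   # logical block number -> physical block
        self.store = {}       # physical block -> data
        self.next_phys = 0
        self.snapshots = []   # frozen block maps

    def write(self, lbn, data):
        # Redirect-on-write: never overwrite in place; allocate a new
        # physical block and repoint the live map at it.
        phys = self.next_phys
        self.next_phys += 1
        self.store[phys] = data
        self.block_map[lbn] = phys

    def snapshot(self):
        # A snapshot freezes the (small) map; no data is copied.
        self.snapshots.append(dict(self.block_map))
        return len(self.snapshots) - 1

    def read(self, lbn, snap=None):
        bm = self.block_map if snap is None else self.snapshots[snap]
        return self.store[bm[lbn]]

vol = Volume()
vol.write(0, b"v1")
s0 = vol.snapshot()
vol.write(0, b"v2")       # old block untouched; the snapshot still sees v1
print(vol.read(0))        # b'v2'
print(vol.read(0, s0))    # b'v1'
```

Because old blocks are never overwritten, snapshot creation is effectively free; the cost shows up later as fragmentation and map bookkeeping, which is one reason capacity-tier placement matters.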
Snapshot == versioning data ... say on an hourly basis
Replicate == backup copy ... say on a daily basis (or more frequently for disaster recovery)
Tape == offsite full copy ... say on a weekly/monthly basis
NetApp had its engineering working on a VTL product long before it tried to purchase Data Domain. I even got to see a demonstration of the product. It's either a) too bad they decided to cancel it, or b) yet another example of why NetApp's software engineering group needs a revamp and a boot up the ass to get things done (hey, 8.1.1 is finally integrating technology from how many years ago?).
When I was meeting with Data Domain long before EMC purchased them, one thing they constantly came back to was that they were the only vendor that could do in-flight deduplication because they held the patents. I wonder how many small companies trying to get into the market got bullied out too.
I think Quantum holds the patents to dedupe, not DD. I wouldn't be surprised if it's a specialized case of dedupe that DD filed to cover themselves after they ended up licensing the core patent from Quantum.
BTW, any word out there on how DXi compares to Data Domain's platform? I've heard they've done a lot of work to improve their core algorithms and come up with some virtualized versions. (I am NOT a Quantum employee... I am genuinely interested in why DD is winning over QTM and others - is it tech or sales or both?)
You're right, the original patent for the actual deduplication belongs to Quantum; Data Domain paid them for the license with stock during Data Domain's original IPO. Data Domain claimed they held the patents for "in-flight" deduplication versus post-process.
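For readers unfamiliar with the in-flight vs post-process distinction being argued here, a hedged sketch: in-flight (inline) dedupe fingerprints chunks *before* they hit disk, so duplicates never consume space, whereas post-process lands everything first and deduplicates later. Fixed-size chunking and SHA-256 fingerprints are illustrative choices, not what any of these products actually use.

```python
# Sketch of in-flight (inline) dedupe: fingerprint each chunk before
# writing, and only store chunks not already seen. Chunk size and hash
# are illustrative assumptions.

import hashlib

CHUNK = 4096

def inline_dedupe_write(stream: bytes, store: dict) -> list:
    """Return the recipe (list of fingerprints) for this stream;
    only previously-unseen chunks are added to the store."""
    recipe = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:      # inline check: duplicates never written
            store[fp] = chunk
        recipe.append(fp)
    return recipe

store = {}
r1 = inline_dedupe_write(b"A" * 8192, store)                  # 2 identical chunks
r2 = inline_dedupe_write(b"A" * 4096 + b"B" * 4096, store)    # 1 dupe, 1 new
print(len(store))   # 2 unique chunks stored, despite 4 chunks ingested
```

The trade-off the vendors fought over: inline saves landing capacity but puts the hash lookup in the write path, while post-process keeps ingest fast at the cost of temporarily storing everything.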
Quantum was already in trouble when EMC bought DD and severed the agreement (EMC had given them a couple of cash infusions). Most likely it was a combination of trouble funding research and development and weaker sales capabilities.
Thanks JT. So it doesn't look like Quantum is going to be bought for its D2D backup any time soon, especially because NTAP itself is under quite a bit of pressure :)
The current definition of a purpose-built backup appliance is obsolete. Obsolete.
The Data Domain model is being disrupted faster than most realize. Sending massive volumes of data over the network to be deduped by smart storage is 1990s-era tech. Steve Foskett covered an angle on this not too long ago here: http://bit.ly/LETxEH
Some would argue that dedupe storage shouldn't be called an appliance at all, as it needs to connect to and work with backup technology like Avamar or Symantec NetBackup. For EMC to claim they have a real "Integrated Backup Appliance" they would need to tie together Avamar, Data Domain and probably some NetWorker. Even if they delivered that tomorrow, they would be well behind the state of the art and a couple hundred million behind on market share in that category.
The future of backup appliances is in integrated backup appliances, or Unified Data Protection Appliances. These integrate backup, media servers, and source, media and target dedupe capabilities - and in some cases security, HA and encryption - into an appliance or set of appliances.
Symantec and some smaller startups are making fast inroads here. (http://bit.ly/N9QtMJ)
- In the last 12 months Symantec has seen 250 percent growth in new appliance customers and 450 percent growth in the number of units shipped.
- Symantec appliance customers are coming back for more, with sales figures showing 40 percent of purchases coming from repeat customers.
- Even though the Symantec 5220 was introduced less than a year ago, it has won 3 awards in the last 8 months, including the recent Best of Microsoft TechEd.
- With just over 1 year in the appliance business, Symantec has grown to 3.5% of the IDC PBBA market - ahead of Quantum, Sepaton, ExaGrid and other vendors with years of products in market.
- IDC has split the PBBA market, differentiating between target dedupe/VTLs and unified backup appliances like NetBackup/BE - this is not yet in the PBBA report, but is in other documents IDC makes available.
I'm sure skeptics will say it must be the law of small numbers. But consider it this way: what enterprise tech company has ever taken a product line from $0 to $200m in 18 months? I can't think of one.
EMC will not continue to enjoy almost no competition in the space. Frankly, it is the other way around: for those who want an integrated backup appliance and not just dedupe storage, EMC is not an option. The onus is on the IT admin to put it all together into a single solution, and that's the old model. If you are tired of being the glue between Avamar, Data Domain, NetWorker or NetBackup, integrated backup appliances are the way to go.
"For what EMC wanted to charge, I realized that I could get a bigger, better, faster, newer solution for less money by going with Symantec NetBackup appliances," http://bit.ly/MEUREZ
As for the replication discussion: global management, policy and granular recovery of snapshots is the difference between data protection and simply having a lot of copies.
Note: this growth is happening in a segment not covered by backup software or purpose built backup appliance market coverage. http://bit.ly/N9QtMJ
@seanjregan (I work for the fastest growing vendor in the Unified Data Protection Appliance market)
Agree with seanjregan. PBBAs are ok to replace or augment tape, but they do not fundamentally change the broken backup process. You need to address the problem from end-to-end, not just at the target. And you really aren't helping recovery much if you stick with the file-based, streaming-data paradigm. There are better ways to do things.
I actually blogged about just this a few weeks ago: http://bit.ly/La7K8Y