EMC gets busy with dedupe, compression code-base

EMC has been developing its own deduplication and compression code-base, entrusting the effort to a team code-named Viper. The story, as told by someone familiar with the events, starts with EMC acquiring Avamar in November 2006. In June 2009 it acquired Data Domain. At that point it had Avamar source deduplication, Data Domain …

COMMENTS

This topic is closed for new posts.

Grr...

Or maybe they could work on their existing piss-poor code and UIs. I'm sick of the NMC (NetWorker Management Console); it's about ten years behind NetBackup and flaky as hell. Their code on other products is buggy and often doesn't make sense (symmir and symclone would be good examples).

Why they haven't integrated the control software for Clariion and DMX yet is beyond me. They've only had something like twelve years to do this since the purchase of Data General. This means there is no easy upgrade path for smaller companies, and large companies that have only ever used DMX can't easily install smaller arrays in, say, DMZs or other secure areas without redeveloping all of their scripts and re-training staff. It must also mean that development inside EMC is more costly.

EMC Responds to Solaris ZFS and OpenStorage

Chris Mellor writes, "The Viper team was set up in 2009 and is still in operation. We understand from a second source that the team has written code as a component which is being used in Celerra for deduplication and FLARE for compression."

http://milek.blogspot.com/2008/03/zfs-de-duplication.html

With Sun working on dedupe integration into Solaris ZFS since 2008, and the final release arriving in 2009, it seems clear that EMC started to feel pressure from Sun/Oracle OpenStorage around that time frame.

When the competition is open source and free, there are few options for proprietary storage vendors besides internal development on their own proprietary code base or migration to ZFS.
