Falconstor/Sun wins speediest dedupe race

The fastest deduplication on the planet is performed by an 8-node Sun cluster using Falconstor deduplication software, according to a vendor-neutral comparison. Backup expert W Curtis Preston has compared the deduplication performance of different vendors' products. He uses suppliers' own performance numbers and disregards multi …

COMMENTS


Wow...

Sometimes numbers buried in a paragraph are hard to read, so here they are laid out as tables.

Ingestion rates...

MB/sec   Vendor
11,000   FalconStor/Sun
 3,000   Sepaton/HP
 1,100   EMC
   800   Quantum/Dell
   600   NetApp
   500   Quantum/Dell

Deduplication rates

MB/sec   Vendor
 3,200   FalconStor/Sun
 1,500   Sepaton/HP
   900   IBM/Diligent
   750   Data Domain
   400   EMC
 ?,???   NetApp

That is a very interesting chart.

FalconStor owns about 70% of the market through OEM partners, including Copan Systems Inc., EMC Corp., Nexsan Technologies Inc., Sun Microsystems Inc., 3PARdata Inc. and IBM among others, all rebranding its product.

This is more of a systems performance chart than a deduplication performance chart.
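For scale, here is a quick back-of-the-envelope conversion of a few of those figures. This is a minimal sketch, and the rates are just the vendor-supplied numbers quoted in the tables above:

    # Back-of-the-envelope: hours to ingest a 10 TB backup at the quoted rates.
    # Rates are the vendor-supplied MB/sec figures from the tables above.
    rates_mb_s = {
        "FalconStor/Sun": 11000,
        "Sepaton/HP": 3000,
        "EMC": 1100,
    }

    dataset_mb = 10 * 1024 * 1024  # 10 TB expressed in MB (binary units)

    for vendor, rate in rates_mb_s.items():
        hours = dataset_mb / rate / 3600
        print(f"{vendor}: {hours:.2f} hours for 10 TB")

At the quoted figures that works out to roughly a quarter of an hour per 10 TB for the FalconStor/Sun cluster versus nearly three hours for the slowest entry, which is why ingestion rate dominates these comparisons.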


@wow...

I agree, it's more about system performance.

The dumb thing is it's all about optimising an antiquated process. Traditional backups duplicate data constantly, sending the same data every day to the backup tapes.

Enter stage left the VTL with de-duplication, which then de-duplicates all the stuff. Wouldn't it be better if you just didn't duplicate the data unnecessarily in the first place?
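For what it's worth, the core trick behind all of these products is simple: fingerprint each block and only store blocks you haven't seen before. A minimal sketch in Python, assuming fixed-size blocks and an in-memory store (illustrative only, not any vendor's implementation):

    import hashlib

    BLOCK_SIZE = 128 * 1024   # fixed 128 KB blocks; real engines vary, some chunk variably
    block_store = {}          # hash -> block: each unique block stored exactly once

    def dedupe_write(data: bytes) -> list:
        """Split data into blocks, store only blocks not seen before,
        and return the list of hashes (the 'recipe') for reassembly."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in block_store:  # unchanged data costs nothing extra
                block_store[digest] = block
            recipe.append(digest)
        return recipe

    def dedupe_read(recipe: list) -> bytes:
        """Rebuild the original byte stream from the stored blocks."""
        return b"".join(block_store[d] for d in recipe)

Send the same full backup twice and the second pass stores nothing new, only another recipe, which is exactly the duplication the commenter is complaining about.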

JL

Fundamental assessment rules ignored

The first rule of any technical assessment is 'never trust the vendor's metrics'.

The first rule of any performance assessment is 'performance means something different to everyone'.

The statements made in this write-up are pretty pointless, other than the valid (but well-known) stuff about the different types of de-dupe available. If you have a dataset that is pretty unique to a single server but contains daily repeated patterns (a DB dump, for example), then an inline engine like DD is likely to win out. If you have a globally repeating dataset (e.g. OS backups or distributed code trees), then a global post-process engine like FS is likely to win out… or maybe a client-side dedupe engine like Avamar or PureDisk. There is no single winner here - it is TOTALLY dependent on the specific scenario that you are using it for.
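To make the inline/post-process split concrete, here is a rough sketch using the same hash-and-store idea. The helpers are hypothetical stand-ins, not any vendor's API:

    import hashlib

    seen = set()        # fingerprints of blocks already on disk
    landing_zone = []   # raw blocks accepted at wire speed, deduped later

    def store_block(digest: bytes, block: bytes) -> None:
        pass  # placeholder for the actual backend write

    def inline_ingest(block: bytes) -> None:
        """Inline: hash before writing, so duplicates never hit disk,
        but hashing sits on the critical path of the backup window."""
        digest = hashlib.sha256(block).digest()
        if digest not in seen:
            seen.add(digest)
            store_block(digest, block)

    def post_process_ingest(block: bytes) -> None:
        """Post-process: land everything first; ingest is never throttled
        by hashing, at the cost of extra landing-zone capacity."""
        landing_zone.append(block)

    def post_process_sweep() -> None:
        """Later background pass that dedupes the landing zone."""
        while landing_zone:
            block = landing_zone.pop()
            digest = hashlib.sha256(block).digest()
            if digest not in seen:
                seen.add(digest)
                store_block(digest, block)

That trade-off is why the two tables above differ: a post-process box can quote a huge ingestion rate while its actual deduplication rate is a separate, slower number.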

Net: test in your environment and ignore pointless performance write-ups like this which ignore the first rules of technical and performance assessment.

Anonymous Coward

Re -- System Performance

The performance of the systems doing the de-duplication is rather important...

This topic is closed for new posts.