
Sepaton lures Feds with TOP SECRET backup BEAST

Sepaton has brought out a new deduping backup storage array that it claims inflicts 80TB an hour on a floating lump of backup data. It's the S2100-ES3 2925 and it has nearly doubled its data ingestion rate over the -ES2 model, says the firm. It has also added an encryption feature that it says makes it the "safe choice" for mega …

COMMENTS


It only makes sense to compare systems by how much performance you can get in a single dedupe domain and management pane. The equivalent four-domain SEPATON system will yield 320TB/hr of backup. If you are going to gang separate systems together, then where do the comparisons end -- after 4 systems, 10 systems, 1000 systems, or maybe by the number of processing nodes, FC interfaces...

Anonymous Coward

The problem for Sepaton is that it's flogging a dead horse in post-process deduplication. Of course its ingest rates are very fast when it simply has to write to disk and dedupe the data later. The EMC Data Domain and HP StoreOnce solutions are all inline, which means they do the work during ingest and need roughly half the disk, rather than landing the data first and then running a post-ingest dedupe pass.
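Roughly speaking, the two approaches differ only in where the chunk hashing sits relative to the write path. A toy Python sketch of the idea (fixed-size chunks and an in-memory dict as the chunk store are simplifications for illustration, not how Data Domain, StoreOnce or Sepaton actually implement anything):

    import hashlib

    CHUNK = 128 * 1024  # fixed-size chunks, purely for illustration

    def chunks(data):
        for i in range(0, len(data), CHUNK):
            yield data[i:i + CHUNK]

    def inline_ingest(backup, store):
        # Hash each chunk as it arrives; only unique chunks ever hit disk.
        for c in chunks(backup):
            store.setdefault(hashlib.sha256(c).hexdigest(), c)

    def post_process_ingest(backup, landing, store):
        # Land the whole backup first (fast, but needs staging disk)...
        landing.extend(chunks(backup))
        # ...then dedupe it later, off the ingest path.
        for c in landing:
            store.setdefault(hashlib.sha256(c).hexdigest(), c)
        landing.clear()  # staging space only comes back after this pass

The deferred work is why the post-process ingest numbers look so good, but the staging disk has to be there and the space only comes back once the later pass has finished.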


Inline solutions have their limitations

I think most people see through the FUD in the inline versus concurrent deduplication debate. Inline solutions are not appropriate for large enterprises and suffer their own drawbacks:

1. There is very little duplicated data within a single backup. Therefore, almost every appliance has to land virtually the whole first backup, regardless of the type of deduplication technology.

2. Inline deduplication has to use very large chunks of the incoming data to perform at the rates claimed. This results in poor deduplication unless post-process steps are run after all the data has landed (see the sketch at the end of this post).

3. To get competitive deduplication ratios, inline solutions need to run post-process optimisation to find more duplicate data and to return storage freed during deduplication to the available pool. This takes a lot of time, causes penalizing fragmentation, and must happen post-process in order to get the claimed results and to make the reclaimed space usable for the next backup.

4. Inline systems don't scale, and their performance degrades over time as more data is stored on them. These systems become useless long before they reach their claimed maximum capacity specifications.

For the reasons above, anyone looking to back up a lot of data should seriously consider solutions other than inline systems.
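As a rough illustration of the chunk-size point (2) above, here is a toy Python experiment (synthetic data, fixed-size chunking and made-up edit sizes, so the absolute numbers mean nothing, only the direction): the same pair of backups dedupes far better with small chunks than with the big chunks inline appliances need to sustain their ingest rates.

    import hashlib, os, random

    def dedupe_ratio(backups, chunk_size):
        # Dedupe a series of backup images with fixed-size chunks and
        # return logical bytes divided by bytes actually stored.
        seen = set()
        logical = stored = 0
        for image in backups:
            for i in range(0, len(image), chunk_size):
                chunk = image[i:i + chunk_size]
                logical += len(chunk)
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in seen:
                    seen.add(digest)
                    stored += len(chunk)
        return logical / stored

    # Synthetic example: a 16MB "full" backup, then a second full with a few
    # hundred small scattered edits.
    random.seed(0)
    first = bytearray(os.urandom(16 * 1024 * 1024))
    second = bytearray(first)
    for _ in range(300):
        off = random.randrange(len(second) - 16)
        second[off:off + 16] = os.urandom(16)

    for size in (8 * 1024, 1024 * 1024):
        ratio = dedupe_ratio([bytes(first), bytes(second)], size)
        print(f"{size // 1024:>5} KB chunks -> {ratio:.2f}:1")

With 8KB chunks most of the second backup dedupes away; with 1MB chunks nearly every chunk contains at least one edit, so almost the whole second backup is stored again, which is exactly why the big-chunk systems then need a post-process optimisation pass to claw the ratio back.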

This topic is closed for new posts.