Delete all you like, but it won't free up space

Networker blog author Preston de Guise has pointed out a simple and inescapable fact: deleting files on a deduplicated storage volume may not free up any space. In un-deduplicated storage, he notes: "There is a 1:1 mapping between amount of data deleted and amount of space reclaimed." Also, space reclamation is …

COMMENTS

This topic is closed for new posts.
  1. jubtastic1

    Thoughts

    Would be useful if, after a file is deduped, the actual space used was saved to the file's metadata, so you can run searches against it and pick better targets for deletion. Perhaps this is already possible?

    Alternatively, throw another disk on the pile.

    1. Anonymous Coward
      Anonymous Coward

      More complex than that

      If you put two identical copies of a DVD on a drive then deleting either one will free up almost 0 space, while deleting both will free up the full amount.

      Admittedly it's not much more complicated; someone could probably come up with a decent way of displaying it in an afternoon if they have the time.

      1. Anonymous Coward
        Anonymous Coward

        @Mycho

        Actually, better to consider 3 DVDs: 2 identical, 1 different. If you delete both of the two identical DVDs it's highly unlikely that you'll reclaim 100% of the volume of one of those DVDs, because the chances are that there are a whole load of blocks in common with the other DVD. If you had file-level de-dupe you'd get back 100% of the volume of the identical DVDs, but not if you had block-level dedupe.

    2. Trainee grumpy old ****
      Thumb Down

      Not quite that simple

      Trivial example:

      Load a file which is completely unlike any other file already on disk. Its metadata will indicate that 100% of its space is recoverable.

      Next, load a file which matches n% of the previous file (for simplicity, assume it doesn't match any other file). The second file's metadata will show (100 - n)% of its space being recoverable. Now the first file's metadata will need to be updated as well, because deleting it will actually only free up (100 - n)% of its space.

      Every time a file is added or deleted the metadata for every remaining file with which this file has an overlap would need to be updated.
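
      A toy model of that bookkeeping makes the problem concrete. This is a hypothetical block-level sketch in Python, not how any particular array implements it; the block size and fingerprints are invented:

      ```python
      # Per-file "recoverable space" goes stale whenever another file is added,
      # because it depends on how many other files reference the same blocks.
      from collections import Counter

      BLOCK_SIZE = 4096

      class DedupVolume:
          def __init__(self):
              self.files = {}            # file name -> list of block fingerprints
              self.refcount = Counter()  # fingerprint -> number of referencing files

          def add(self, name, blocks):
              self.files[name] = list(blocks)
              self.refcount.update(blocks)

          def recoverable(self, name):
              # Bytes actually freed if this file were deleted right now:
              # only blocks no other file references count.
              unique = [b for b in self.files[name] if self.refcount[b] == 1]
              return len(unique) * BLOCK_SIZE

      vol = DedupVolume()
      vol.add("first.bin", ["blk%d" % i for i in range(100)])
      print(vol.recoverable("first.bin"))   # 409600 bytes: 100% recoverable

      # Second file shares 40 of the first file's 100 blocks (n = 40).
      vol.add("second.bin", ["blk%d" % i for i in range(40)] + ["new%d" % i for i in range(60)])
      print(vol.recoverable("second.bin"))  # 245760 bytes: 60% of it is unique
      print(vol.recoverable("first.bin"))   # 245760 bytes: the first file's figure changed too
      ```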

  2. Pete 2 Silver badge

    security makes dedupe irrelevant

    If you do what we're always being told to and encrypt your files there is no possibility that a deduplication process (or a file compression regime, for that matter) can work.

    1. Anonymous Coward
      Anonymous Coward

      Security and de-dupe

      It works at a block level, not a file level, so security may reduce the effectiveness of dedupe, but you'll still see some benefit.

    2. Anonymous Coward
      WTF?

      Just another Title

      Dedup and Data At Rest Encryption can work together with no performance impact and no impact to dedup.

    3. David Halko
      Go

      RE: security makes dedupe irrelevant --- incorrect with ZFS: DO BOTH!

      Pete 2 posts, "If you do what we're always being told to and encrypt your files there is no possibility that a deduplication process (or a file compression regime, for that matter) can work."

      This is incorrect.

      http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg38301.html

      "Both work at the block level. Hence, they are complementary."

      "Two identical blocks will compress identically, and then dedup."

      With ZFS-based storage systems, DeDup and Compression work great together!

      1. NoSh*tSherlock!
        WTF?

        data at rest encryption?

        Encrypting data at rest only protects you from the physical media being removed and is fairly useless for anything else.

        Encrypting data as it enters and exits the application is the only real sensible approach - and we really have very few solutions here.

        So most encryption is only of any use on transportable media.

  3. Anonymous Coward
    Thumb Down

    Track your usage, don't sit back and wait for meaningless alarms

    Alarms going off at arbitrary usage points are meaningless without historical tracking of the usage. If your alarm goes off at 80% but it took six months to get from 70 to 80 then you're fine, you've got time to increase your capacity. If your alarm goes off at 80% but a week ago you were at 50% and a week before that only 25%, then you've got a problem.
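
    For what it's worth, the trend check is trivial to script once you log usage samples. A minimal sketch in Python; the sample figures are made up and the linear extrapolation is deliberately crude:

    ```python
    # Estimate days until full from logged capacity samples (crude linear trend).
    from datetime import datetime

    samples = [                       # (timestamp, fraction of capacity used)
        (datetime(2010, 8, 1), 0.25),
        (datetime(2010, 8, 8), 0.50),
        (datetime(2010, 8, 15), 0.80),
    ]

    (then_t, then_used), (now_t, now_used) = samples[0], samples[-1]
    growth_per_day = (now_used - then_used) / (now_t - then_t).days

    if growth_per_day <= 0:
        print("Usage flat or shrinking; the 80% alarm can wait")
    else:
        days_left = (1.0 - now_used) / growth_per_day
        print("At this rate the volume is full in roughly %.0f days" % days_left)
    ```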

  4. Ben Oldham
    Badgers

    Eh?

    But this isn't a bad thing, surely you're already reaping the benefits of a centralised, de-duped array? And you're using less space than you would otherwise be?

    His numbers do make sense, although the rate of change surely needs to be considered.

  5. simon casey 1

    Jeez...

    Talk about stating the blooming obvious.

    Does he also have any wise words of wisdom akin to this regarding thin prov volumes?

    Ok, for the layman, where de-dupe isn't really in use in the consumer field (much), it may be a realisation that isn't pointed out hard enough; but anyone running de-dupe in a commercial environment, I would hope, already knows this! Then again, I don't think at the consumer level you'd be explaining how deduplication, pointers etc. really work.

    That, and his advice only really accounts for de-dupe systems at an object level. If you're dealing with de-dupe at a block level (many tape backup systems?), then with 2TB+ blocks of tapes even knowing what will be released when any given volume is deleted will be tricky, and the amount reclaimed will shrink over time as your factoring ratio increases, natch.

    All IM(own)HO

  6. Annihilator
    Paris Hilton

    Were you bored?

    Is this meant to be a stunning insight? Were there people out there that didn't realise this already?

  7. BingBong

    Reference or link count?

    Surely this should be a non-issue if each deduped data file/block/bytes has an associated link or reference count (like the hard link count in UFS) and you free the associated file/block/bytes when the reference count drops to zero.

    I guess you may still have fragmentation issues and so need to defrag in the background at lower priority.
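
    That link-count scheme is easy to sketch. A toy Python model of a reference-counted block store, not any real filesystem's implementation:

    ```python
    # Data is only released when the last file referencing a block goes away,
    # much like the UFS hard-link count mentioned above.
    class BlockStore:
        def __init__(self):
            self.blocks = {}  # fingerprint -> [data, reference count]

        def put(self, fingerprint, data):
            if fingerprint in self.blocks:
                self.blocks[fingerprint][1] += 1      # dedupe hit: just bump the count
            else:
                self.blocks[fingerprint] = [data, 1]  # genuinely new data

        def release(self, fingerprint):
            entry = self.blocks[fingerprint]
            entry[1] -= 1
            if entry[1] == 0:
                del self.blocks[fingerprint]          # last reference: space reclaimed
            # otherwise the block stays, and deleting "a file" freed nothing
    ```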

  8. John Miles 1

    Just Garbage collection ?

    Not sure what the issue is.

    Isn't this just the same paradigm as garbage collection in a program with dynamic memory allocation? When the last pointer goes away the space should be reclaimed.

    On the other hand if there is still a reference to the data block, then someone still wants it and you are no worse off than if you had not de-duped, the data would still be there because the other guy had not deleted it.

    1. Anonymous Coward
      Stop

      RE: Just Garbage collection ?

      Precisely the same paradigm I was thinking of, yes, but on the topic of garbage collection on SSDs, where it's not just about freeing space so much as consolidating it and allowing reuse without a drop in performance, which is the great problem foreseen for SSDs. That paradigm in SSDs potentially affects the average Joe, and was tackled with TRIM (or anything in that fashion).

      The thing is, freeing up space (and chasing dead references?) based on occupied percentage seems wrong to me. It should be scheduled to low usage hours either dynamically or hard-coded to a given time (wee hours in the morning?), pretty much one more task to be added to a "defrag tool".

  9. frobnicate

    Beware

    he is going to invent reference counting next!

    1. Velv

      Invent?

      Don't you mean Patent?

  10. Nick Stallman
    WTF?

    Huh?

    That problem was solved decades ago with memory garbage collection.

    Store a counter with each file that is actually stored. For each reference to the file, add one. Each time a reference is deleted, subtract one.

    If the counter gets to 0 then delete the file.

    So simple.

    1. Anonymous Coward
      Anonymous Coward

      It's block level

      The dedupe happens at block level in dedupe arrays, so your file is represented by a bunch of pointers to the unique blocks. Lots of files will have lots of blocks in common - the template of a blank Word doc will be shared, at least in part, by pretty much all Word docs, for instance. So what you have to do is a more sophisticated garbage collection which looks at the individual blocks and follows their pointers to see if they belong to any files. You also have to make sure that there are no pointers being re-created to the blocks you are removing during the garbage collection process.

      So not that simple, actually.
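
      Roughly what that more sophisticated garbage collection looks like, as a sketch only (mark-and-sweep over block pointers; a real array also has to fence off writes that might re-create pointers during the sweep, which this toy version ignores):

      ```python
      # Mark-and-sweep reclamation over a block-level dedupe store.
      def reclaim(files, block_store):
          """files: dict of file name -> list of block fingerprints it points at.
             block_store: dict of fingerprint -> block data."""
          referenced = set()
          for pointers in files.values():        # mark: blocks some file still points at
              referenced.update(pointers)

          freed = 0
          for fingerprint in list(block_store):  # sweep: drop blocks nobody points at
              if fingerprint not in referenced:
                  del block_store[fingerprint]
                  freed += 1
          return freed
      ```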

  11. Keith 21
    FAIL

    Wow!

    Talk about stating the bleedin' obvious!

    Next he'll be telling us that water is wet...

  12. Anonymous Coward
    Anonymous Coward

    which part

    Which part of this is a surprise? Though the only thing I've ever noticed is it's really hard to get the claimed levels of space saving (you need to do things in very specific ways).

  13. Evil Auditor Silver badge
    WTF?

    News...

    ...as in 'de Guise just discovered America'?

  14. Dave Cradle

    Saints preserve us.

    Welcome to storage 101.

    I really hope the last sentence of the article was meant sarcastically, but I fear not.

  15. MarkA
    FAIL

    Really?

    You wasted time on this?

    A deduped volume is there to slow down, delay, defer, whatever, your disk purchases. If your deduped volume is getting full that's the cue to buy more disk. It's the cue that you need the disk now rather than six months or a year ago.

    Jeez. What a clown.

  16. The Fuzzy Wotnot
    Pint

    Did I miss something?

    So you have one actual copy on the device and lots of pointers to the actual file. You remove one of the pointers and the actual file still has to remain as it has all the other pointers to support, thus no space reclaimed.

    And his point is?!

  17. JimC

    The point is usefully made

    For those who are being fed deduplication Kool-Aid. There are a lot of naive decision makers about who will believe anything the vendors (but not their own technical staff) tell them, and who won't make the connection that introducing the technology means the old "disk space panic - let's delete something" bit won't always work; they will need to build in rather more effort in terms of proactive monitoring and management, keep a bit more contingency, and not put off buying the new storage until the end of the FY.

    And for those serial commentards (flamentards?) who are already hitting the comment button or the down-vote with "they should be doing that anyway": why don't you include in your posts the wonderful organisations you work for, where there are no short-sighted managers or penny-pinching bean counters, so the rest of us struggling with such know which companies to apply to for jobs... In any case, to be quite honest, we in the real world aren't much interested in how easy your lives are.

    1. perlcat

      and sometimes it isn't even the managers' fault

      I know of several vendors that are quite unclear on the concept, having had their professional services architects and technicians set up systems with no viable plan whatsoever for clearing the space other than the claims printed on the glossy advertisement. I stopped believing advertisements a long time ago.

      After all, the content is handed to a third-party application and, if 'deleted' in the third-party app, remains in the filestore index. There's no use in reference counting if you never decrement the reference count, or never delete the reference when it hits zero, where it counts. Bonus points to the vendor for sheer asshattery if they combine that with a unique file format that means you don't gain any of said de-duplication benefits. It's like a gas station burrito that just keeps on giving.

      The only option for deletion then is to migrate your known good data to a new system, and drop your old data store in the shredder. Hopefully, you have chosen a less stupid vendor for the new system. (I'd use the word 'smarter', but in the context of high dollar IT kit sold on golf courses, I'll settle for the much more practical 'less stupid' term.)

      No icon -- there is no appropriate icon for this level of moody and bitter.

  18. Storage_Person

    Snarkiness from the Uninformed

    This has nothing to do with garbage collection, with the reference count going from 1 to 0 for a given piece of data. It has to do with reduced referencing, with the reference count going from n to n-1 for a given piece of data where n > 1. As would be expected, if all you're doing is decrementing a reference then you're not going to save a lot of space by doing so, and so 'traditional' methods of keeping your storage utilisation down aren't going to work with deduped storage.

    Yes, it's a relatively simple point, but given that, judging from the comments, half the people here didn't even get the basic idea of the article, there is some worth in publishing this stuff.

    One other thing that has been missed is that because dedupe is (as a rule) carried out as a post-process you need to keep some 'spare' capacity anyway to place yet-to-be-deduped information prior to the dedupe process kicking off, so when you measure your high water mark make sure it's before your daily dedupe kicks in rather than when you wander in with your coffee and the process is long completed.

  19. Rob 103

    Not the bloggers fault

    Having read the linked article, it's clear the blogger doesn't think this is new information. He's just pointing out to anyone interested that it's a factor to consider, and suggests some best practices.

    The Reg author, on the other hand... the way he phrases it, he sounds like he thinks he's uncovered the new Antennagate.

  20. BobS

    the system must scan remaining data ... ?

    Why? If the reference count (of the file, block, or whatever you've deduped) is going from 1 to 0, you can delete the actual data; otherwise you need to keep it.

    I'm similarly stumped by "reclamation is rarely run on a continuous basis on deduplication systems – instead, you either have to wait for the next scheduled process, or manually force it to start." What prevents reclamation happening as soon as the reference count hits zero?

    Regarding the randomly selected early warning thresholds, wouldn't it be more useful to monitor usage vs reference count? E.g.:

    90% full with reference counts > 100 is different from 90% full with all files having a reference count of 1. In the latter case, targets for deletion will be easier to identify.
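
    One way to put a number on that; the refcounts below are invented sample data:

    ```python
    # Report how shared the stored blocks are, alongside the usual "percent full".
    def sharing_report(percent_full, refcounts):
        total = len(refcounts)
        single = sum(1 for r in refcounts if r == 1)
        mean = sum(refcounts) / total
        print("%d%% full, mean %.1f refs/block, %d%% of blocks freeable by a single delete"
              % (percent_full, mean, 100 * single // total))

    sharing_report(90, [1] * 900 + [2] * 100)  # mostly unique: deleting files will help
    sharing_report(90, [40] * 1000)            # heavily shared: deleting files won't
    ```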

  21. The Dark Lord

    Missed point

    The article reads as "deduplication is bad, or at least isn't as good as you thought it was", whereas the actual point is that dropping vast storage arrays onto your network doesn't excuse you from carrying out good capacity planning.

    I'm tired of asking organisations what their storage/app pool/network/etc utilisation is like, only to be told "oh, the kit is so fast/massive we don't have to worry about it". As the "track your usage" AC pointed out, capacity planning is all about the rate of consumption of space, but the sad fact is that most managers don't look at your capacity graphs until it's too late. Then they fall back on dedupe and emergency deletion until the problem goes away.

    Organisations have to face up to the fact that buying the storage array is only the start of paying for storage.

  22. Jim 59
    Thumb Up

    dedupe delete

    Eh? Isn't that reasonably obvious to any storage techy? Deduplication is not new technology. Good on Preston de Guise, but this article does not make him "a clever guy".

    de Guise envisages admins running round like decapitated chickens trying to reclaim space, only to haplessly delete a few block pointers. I doubt it. The OS (platform or storage) presumably will provide tools to see where the real reclaims can be made? Am I missing something here?

    Jim

  23. Anonymous Coward
    Anonymous Coward

    More procedures?

    Ah, maybe a percentage of how much is deduped is a better start. Only, how do you calculate that? What sort of metric is useful here?

    My current problem is that I have a lot of duplicate stuff to sort out; deduping will only reduce the actual disk space it uses, but won't improve its usability, whereas actually removing confirmed identical duplicates will do just that.
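
    For the "remove confirmed identical duplicates" half of that, a quick-and-dirty script is usually enough. A sketch that groups files by content hash (the directory to scan is taken from the command line; SHA-256 collisions are ignored):

    ```python
    # List groups of byte-identical files under a directory, keyed by content hash.
    import hashlib, os, sys
    from collections import defaultdict

    def digest(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk)
                if not block:
                    break
                h.update(block)
        return h.hexdigest()

    groups = defaultdict(list)
    for root, _, names in os.walk(sys.argv[1]):
        for name in names:
            path = os.path.join(root, name)
            groups[digest(path)].append(path)

    for fingerprint, paths in groups.items():
        if len(paths) > 1:                      # more than one file with this content
            print(fingerprint[:12], " ".join(paths))
    ```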

  24. Justin Stringfellow
    Troll

    I propose a fix... REDUPE

    Clearly what's needed here is a redup() function to fix these nasty dedup()'d blocks.

    I always thought dedup was a nasty communist plot to dilute our precious bodily fluids.

    I'm off to patent redup(), before netapp claims they thought of it first.

    1. Adam Foxton
      Joke

      Not only that, but ReDup()

      would vastly improve your data integrity- a failure of one block doesn't affect any other files! And disk space is so cheap nowadays- you can build a multi-terabyte raid array for a couple of hundred quid- that it makes sense. Even better, with our patented DirectStor™ technology which stores the data in a logical order on a disc your access times will drop slightly- potentially making big savings in a large datacentre!

      So, for a faster and more reliable system, use our new patented ReDup™ technology!

  25. Anonymous Coward
    Unhappy

    Not everyone knows this techy stuff

    Many of the comments here seem to be objecting to the article teaching the basics of storage management in a de-duplicated environment. I see things from another point of view. I work (at least at the moment) for a large software house which is currently in consultation with a view to making most of their experienced techies redundant in favour of cheaper graduates (a graduate, of course, being someone who is educated ... to a degree). The new graduates may be very bright, but sometimes this sort of concept eludes them ... perhaps it was deleted from their degree courses.

    Anonymous Coward for obvious reasons!

  26. Anonymous Coward
    Anonymous Coward

    Dear ZFS zealots

    You have been pwned.

  27. Anonymous Coward
    Anonymous Coward

    Deletion plan?

    Not claiming *any* knowledge here, but should the kind of place likely to be using deduplication not have some kind of plan (even if only in someone's head) of which *kinds* of data to delete if they run low on space and decide against buying extra storage?

    I'm assuming that someone doesn't run around panicking and deleting files at random, so why wouldn't the space likely to be freed up be taken into account when deleting files?

    Do deduplicated storage systems give some rough idea of how 'shared' particular files are, vaguely akin to a compression ratio when using compressed storage?

    (Even though I know such information isn't quite as simple, and might not always be current, it could still have some use.)

  28. Gideon 1
    FAIL

    Why bother?

    Disks are cheaper than paying fleshy carbon lifeforms to delete files...

    1. Anonymous Coward
      Anonymous Coward

      Err...

      You don't use enterprise disk arrays do you?

  29. James 100

    Worse on ZFS

    It can be even worse than this on some deduped filesystems (ZFS, WAFL, looking at you...) - if I have, say, two identical copies of a used source tree (i.e. also containing the resulting object files), the second copy may take no space other than a single reference to the first. Now, I spot that I'm down to my last few kilobytes ... so I go in and run a "make clean" on one of the copies. Not only does this not free up space (the first copy is still using all those blocks) - because I am modifying all the directories involved, they can no longer share all their blocks with the first set. Deleting a load of files will actually CONSUME space rather than freeing it.

    It's easy to understand how deleting one copy of a set of data in a deduped system may not free up space - but having a delete operation fail due to insufficient space can still come as a whole new class of WTF moment.

    1. Oninoshiko
      Pint

      TANSTAAFL!

      Which is, again, a well known issue. It is recommended that you not let a pool fall below 20% free (even with dedup disabled) in ZFS. Snapshots also prevent freeing space after a delete.

      ZFS makes some trade-offs. For the most part, considering the capabilities of modern systems, they are the right ones.

  30. John Smith 19 Gold badge
    Thumb Up

    So some kind of GUI which shows which are the *real* files and which are the alias copies

    I guess I always thought that anyone who bought this sort of tech would have that *already*.

    A bit like the difference between the "file size" that Windows *loves* to show you (and AFAIK remains f**k-all use, as Windows can't put two files closer together than a sector) and "Size on disk", which is rather more relevant.

    In a database context de-dupe is also called data normalisation.

    I don't think *anyone* expects that to actually speed up a database in terms of the number of disk accesses.

    Thumbs up because some people out there who should know this probably just found it out reading that blog. PSA obligation fulfilled.

  31. W. Keith Wingate
    WTF?

    And so? This is quite familiar....

    ... to anyone who's used the .snapshot feature of a NetApp filer or similar redirect-on-write filesystem.

    Still, I agree that this doesn't really seem like a reason not to dedup, unless I've missed the point entirely.

  32. Dan Keating

    Let's face it

    You can't get any bugger to delete their data anyway ..

    It's like saying - I had the benefit of a space efficient clone but when I deleted it, I didn't get the full size of the system back in my volume. The guy is a nutter.

  33. mike panero

    Time to build a real bin

    Upon deleting a file off the filesystem it should be copied to some other backing store, a real-life recycle bin - say, oh I don't know, "hybrid drives", or some kind of USB stick if it's a lone desktop.

    And how about a TTL for a file? Create date, modify date & TTD (time to die) date, and we all know files go to file heaven once they die; to undie one we would type lazarus -r webcamsnapshot002.jpg

    1. Anonymous Coward
      Anonymous Coward

      Congratulations

      You've just invented backup.

  34. Chris Mellor 1

    UNIX users know this to an extent already

    Sent to me by mail:-

    ----------------------------

    You're quite right about deduplication (of course), but this is an issue that Unix folk have faced for years - at least those who make use of hard links. Okay, it's not quite the same as (for most filesystems) the space is automatically reclaimed once the final hard link is deleted (specifically, when the reference count drops to zero) rather than after a scan*, but you take the point.

    There's no such thing as a free lunch - you "gain" space in deduplication only in as much as you are reusing bit-patterns; you are not owed that space, and you only get the benefit so long as the duplication remains true. Deletion of one copy (as opposed to two) breaks that.

    * Okay, refcounts can get confused, in which case nothing short of an fsck or equivalent is going to return you the space - but that's a bug not a feature.

    Mark

    P.S. The pedant in me must point out that if you *do* delete enough, of course it will free up space. ;-)

    -----------------------

    Thanks Mark,

    Chris.
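
    Mark's hard-link analogy is easy to see for yourself. A minimal sketch using Python's standard library on a POSIX filesystem; the filenames are made up:

    ```python
    # Blocks survive until the link count (st_nlink) drops to zero,
    # much like a dedupe reference count.
    import os

    with open("original.dat", "wb") as f:
        f.write(b"x" * 1024 * 1024)              # 1 MB of data

    os.link("original.dat", "copy.dat")          # second name, same blocks, no extra space
    print(os.stat("original.dat").st_nlink)      # 2

    os.remove("original.dat")                    # "delete" one name...
    print(os.stat("copy.dat").st_nlink)          # 1 -- the data is still on disk

    os.remove("copy.dat")                        # last link gone: now the space comes back
    ```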

  35. Henry Wertz 1 Gold badge

    Some points regarding dedup

    @David Halko, compression is not encryption. I would guess some crypto systems would not have the same output given the same input block (even for the UNIX passwd command, a salt is thrown in to make things more difficult for an attacker).

    Re: Nick Stallman etc. regarding Garbage Collection. Yes, it's a sort of garbage collection. That's the point -- if deleting some files doesn't reduce the reference count to zero, then there's no garbage to be collected and reclaimed as free space. The point isn't that these devices don't know how to free space, it's that sometimes people delete files and it turns out they're virtually all duplicates so very little space gets freed, or they forget about the reclamation step and wonder why space doesn't free immediately.

    "the system must scan remaining data ... ? Why? If the reference count (to the file, block, or whatever you've deduped) is going from 1 to 0, you can delete the actual data, otherwise you need to keep it."

      For newly written data, the system probably cannot dedupe it in real time; the write speeds would become far too low. So, when a reference count goes from 1 to 0, the data could STILL be a duplicate of some of that new data.

    "I'm similarly stumped by "reclamation is rarely run on a continuous basis on deduplication systems – instead, you either have to wait for the next scheduled process, or manually force it to start." What prevents reclamation happening as soon as the reference count hits zero?"

    I'd guess the reference count list is too large to reasonably hold in RAM and process in anything like a timely manner, so it's run as a sort of batch process. Also, again, references that hit 0 may have to be compared with new data.
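
      A rough picture of that post-process flow (entirely hypothetical layout, not any particular product's design): new writes sit in a staging area un-fingerprinted, so nothing can safely be freed until the next pass has folded them in.

      ```python
      # Post-process dedupe pass: absorb staged writes first, and only then
      # free blocks whose reference count has genuinely reached zero.
      def dedupe_pass(staging, block_store, refcount, fingerprint):
          for data in staging:                 # 1. fold new writes into the store
              fp = fingerprint(data)
              if fp in block_store:
                  refcount[fp] += 1            # a block that just hit zero may live on here
              else:
                  block_store[fp] = data
                  refcount[fp] = 1
          staging.clear()

          for fp in [f for f, n in refcount.items() if n == 0]:
              del block_store[fp]              # 2. reclaim truly unreferenced blocks
              del refcount[fp]
      ```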

    "The thing is, freeing up space (and chasing dead references?) based on occupied percentage seems wrong to me. It should be scheduled to low usage hours either dynamically or hard-coded to a given time (wee hours in the morning?), pretty much one more task to be added to a "defrag tool"."

      It's possible it is. I get the impression with some of these dedup products that they don't reclaim space on any sort of continuous basis; they will run a reclaim step, and I wouldn't be at all surprised if on some of them it was like a cron job.

    Anyway, in one sense this article states the obvious. But in another sense, it's easy to overlook the fact that in a dedupe system deleting a bunch of files may not free up a bunch of space. I think this was quite a good article.

  36. Anon NHS IT flunkey
    Black Helicopters

    RE: You've been (de)duped ...

    In the morning everyone! ;)

  37. BristolBachelor Gold badge
    Joke

    Statistically not a problem?

    If (statistically speaking), deleting a file does not free up space, because the blocks are also used elsewhere, then when you ADD a file, it takes up no space because the blocks will be shared elsewhere too.

    Therefore statistically speaking, Dedupe systems never run out of storage space as long as you have room to add a few more links and meta data for a file!

    OK, I'll get me coat. (Actually it doesn't have to be my coat, just one that looks the same!)

This topic is closed for new posts.