The bug has nothing to do with the disk actually being full. It is caused by the timing of transactions and the order in which file entries are created in a directory, which can lead to transient hash collisions. Such collisions are tolerated only up to a certain limit; once that limit is exceeded, the failure is poorly reported as "disk full". The real problem is that the limit should not exist at all.
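To make the failure mode concrete, here is a minimal sketch (all names, the hash function, and the collision limit are invented for illustration) of a directory that indexes entries by hash and caps the number of entries sharing one hash value. When the cap is hit, the error surfaces as "disk full" even though plenty of space remains:

```python
# Hypothetical illustration, not the actual filesystem code.
MAX_COLLISIONS = 3  # invented hard limit on entries sharing one hash value


def dir_hash(name: str) -> int:
    # Deliberately weak hash so collisions are easy to provoke.
    return len(name) % 8


class Directory:
    def __init__(self) -> None:
        self.buckets: dict[int, list[str]] = {}  # hash -> entry names

    def create_entry(self, name: str) -> None:
        bucket = self.buckets.setdefault(dir_hash(name), [])
        if len(bucket) >= MAX_COLLISIONS:
            # The real cause is the collision limit, but the error is
            # reported to the caller as if the disk were full.
            raise OSError("disk full")
        bucket.append(name)


d = Directory()
for name in ["aa", "bb", "cc"]:  # all hash to the same bucket
    d.create_entry(name)
try:
    d.create_entry("dd")  # fourth colliding entry trips the limit
except OSError as e:
    print(e)  # prints "disk full" despite the disk having free space
```

Whether the limit is hit depends on which names happen to collide before the cache is flushed, which is why the bug appears timing-dependent from the outside.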
Ideally, of course, this would be covered by the existing suite of regression tests. However, because the issue only occurs when the transaction cache is not flushed frequently enough, it is timing-dependent, and timing-dependent tests are extremely difficult to write. Writing them so as to exclude both false positives and false negatives is practically impossible without resorting to white-box testing, which then becomes a nightmare to maintain.