LogFS has some fscking capabilities in the right context
As someone who looked a bit into log-structured filesystems way back when as a grad student, they have some great capabilities in the right context.
Your standard file systems assume a use pattern where reads predominate over writes. They attempt to update files more-or-less in place, perhaps moving them through a journal first. This is done to keep files that are somehow "related" (typically, "in the same directory") in some sense "near" each other on disk, hoping to minimize seek time during later access. That involves a fair amount of fiddling with the on-disk file data, meta-data, and file system structures. It makes sense to do all this on rotating magnetic disks, where seek time is a major performance constraint, and one that is stubbornly resistant to Moore's Law.
Enter flash storage devices. No rotating media, no seek time constraint. But enter also a new constraint: lifetime limits on the number of erase cycles. The on-disk fiddling of standard file systems hits flash right here. Combine this with flash's other constraint: erases have to happen in huge blocks, which may contain pieces (or all) of other, unchanged files. Try to use a file system meant for spinning platters, and you wind up with flash that performs poorly over a shorter lifetime.
Log-structured file systems avoid this problem by *not* trying to update in place. Write new or updated data in the next empty, available portion of permanent storage. Write new metadata pointing to this updated location ... and write that new metadata in the next empty, available portion of permanent storage. Recurse up the meta*-data chain as necessary until, at the last step, the meta* data is written in a known, pre-ordained location (or one of a handful of such locations). Yes, such a location can become an erase hot-spot and limit the device lifetime, which is why you might want a reserved pool you cycle through. And yes, you need some kind of garbage collection to reclaim areas full of nothing but obsolete data. Ain't nothing perfect. Of course, you play nifty tricks with caching stuff in memory as well, to batch up writes to the device and satisfy reads on recently-active files ... but that's standard for any file system.
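To make the "recurse up the chain, then touch one fixed spot" idea concrete, here's a toy sketch in Python. Everything here (the record shapes, the class, the names) is invented for illustration; a real log-structured file system adds segments, checksums, directories, caching, and actual garbage collection.

```python
# Toy log-structured store. The "disk" is an append-only list of records;
# only the checkpoint pointer is ever updated "in place". All names and
# record formats are hypothetical, for illustration only.

class ToyLogFS:
    def __init__(self):
        self.log = []           # append-only storage: never rewritten
        self.checkpoint = None  # the one fixed, pre-ordained location

    def _append(self, record):
        """Write into the next empty portion; return its address."""
        self.log.append(record)
        return len(self.log) - 1

    def write_file(self, name, data):
        # 1. Data goes to the next free spot.
        data_addr = self._append(("data", data))
        # 2. A new inode points at the new data.
        inode_addr = self._append(("inode", name, data_addr))
        # 3. A new inode map points at the new inode;
        #    the old map and old data become garbage (but stay in the log).
        imap = dict(self._read_inode_map())
        imap[name] = inode_addr
        map_addr = self._append(("imap", imap))
        # 4. Only the checkpoint is overwritten.
        self.checkpoint = map_addr

    def _read_inode_map(self):
        if self.checkpoint is None:
            return {}
        return self.log[self.checkpoint][1]

    def read_file(self, name):
        inode_addr = self._read_inode_map()[name]
        _tag, _name, data_addr = self.log[inode_addr]
        return self.log[data_addr][1]

fs = ToyLogFS()
fs.write_file("readme", b"version 1")
fs.write_file("readme", b"version 2")
# Reads see version 2, but version 1's data and inode are still
# sitting in the log until a garbage collector reclaims them.
```

Note how an update never touches the old records: it just appends new data, a new inode, and a new map, then swings the checkpoint. That's the whole trick.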
OK, so this has the nice effect of leveling out erase cycles across a flash device. But it has a fsck-tastic effect on _spinning_ media when doing heavy writes of new data, such as keeping OS and application logs or writing large backup jobs. The file system just writes the data into the location under the disk head. No seeking, except when you jump to the next track. In this use case, log-structured file systems (on a dedicated disk!!) are *wicked* fast.
There are also some possibilities that log-structured-ness opens up. Since the file system is a log, its whole history persists on the media, at least until garbage collection knocks holes in the history (and of course you could set a preserve bit to prevent even that on critical files). Can you say "undo file delete"? I knew you could! After all, it's just a special case of file-system-level versioning / checkpointing / branching.
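The "undo delete" point can be sketched too: since old records stay in the log until garbage-collected, recovery is just a scan. The record format below is made up for the example, not how any real LFS lays things out.

```python
# Hypothetical append-only log: a list of (operation, filename, payload)
# records. A delete appends a record; it doesn't destroy the old data.

def undelete(log, name):
    """Recover the last data written for `name`, even if it was
    later deleted -- the old write records are still in the log."""
    last = None
    for op, rec_name, payload in log:
        if rec_name == name and op == "write":
            last = payload
        # "delete" records are ignored on purpose: that's the undo.
    return last

history = [
    ("write", "a.txt", b"hello"),
    ("write", "a.txt", b"hello world"),
    ("delete", "a.txt", None),     # oops
]
recovered = undelete(history, "a.txt")
```

Same scan, stopped at an earlier point in the log, gives you old versions of a file; that's the versioning / checkpointing connection.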
Caveat (PHBs take note): this is a high-level description. The devil is in the details. You are hereby not, repeat not, an expert on log-structured file systems (neither, for that matter, am I).
The paper at http://lazybastard.org/~joern/logfs1.pdf has more detail and is clearer than this post. And it's short.
No, I'm not affiliated with the LogFS project. I just think they're onto a good thing.
So, AC, now do you understand why YAFS isn't such a bad thing? Or do you wish to maintain that FAT and NTFS, or for that matter EXT4, are all the world really needs?
Mine's the one with the bits of bark and wood chips in the pocket, thanks.