The maintainers of the native Linux port of the ZFS high-reliability filesystem have announced that the most recent release, version 0.6.1, is officially ready for production use. "Over two years of use by real users has convinced us ZoL [ZFS on Linux] is ready for wide scale deployment on everything from desktops to super …
I've been using it in production on Solaris and Solaris derivatives for years now. Hella fine FS. I'm happy for all you Linux guys who finally have a reasonable FS.
Dude, just because the Solaris world lagged behind for so long, don't exaggerate the (undoubted) virtues of ZFS. We've had XFS, which is unquestionably a solid FS to use if you only have a few petabytes to worry about!
You mean the XFS that comes with everything except an online FS repair? I have had to boot far too many XFS systems with a Linux rescue CD just to run the FS repair to have any love for that FS.
Apples and Oranges, it really helps to know what the heck you're talking about.
Comparing XFS (which I've been using myself on Linux for ages) and ZFS is so absurd it's not even remotely funny. For the record: I personally prefer XFS over Ext3 and Ext4 (see below).
Let's see here... ZFS allows you to set up one huge storage pool and then create virtual filesystems which all share the main storage pool. Meaning: I hope we can all agree that using one huge filesystem in Linux / Unix is a bad idea, so at the very least you'd want separate /, /var and /home to make sure one doesn't interfere with the other. Now what would happen if you notice that /var is gobbling up more space than is good for it?
With XFS you'd have no alternative but to change your setup (quicker log rotations, quicker removals, etc.), take the system down to resize (an outage, which is a big no-no for production), or perhaps set up a whole new box and move the data over (a huge deal if uptime really matters).
ZFS? Well, you simply change this on the fly. You can resize filesystems all you want, you can set quotas, hard or soft, you can basically do whatever you want while the system remains running.
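For anyone who hasn't seen it, the on-the-fly administration described above looks roughly like this (the pool name "tank" and dataset names are made up for illustration):

```shell
# all of this runs on a live system; no unmount, no downtime
zfs create tank/var                 # new dataset carved from the shared pool
zfs set quota=20G tank/var          # hard ceiling on how much /var may use
zfs set reservation=5G tank/home    # guarantee /home at least 5G of the pool
zfs set quota=none tank/var         # changed your mind? undo it on the fly
```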
Then there's the issue of backups. One of the reasons I favour XFS is the xfsdump/xfsrestore pair. It doesn't only make a filesystem snapshot like dump/restore does; it also lets you restore your stuff interactively, on a per-file basis if you need to. Last time I checked, dump/restore simply didn't work at all anymore on ext4, and it has given many issues in between (up to last year, it seems). XFS just kept working ;-) (this is one of the main reasons I prefer XFS; ext4 is a filesystem where the restore tools stopped working? For real?!).
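The interactive restore being praised here is xfsrestore's -i mode; a minimal session (the paths are illustrative) looks like:

```shell
# level-0 dump of /home to a file
xfsdump -l 0 -f /backup/home.dump /home
# browse the dump interactively (ls/cd/add/extract) and cherry-pick files
xfsrestore -i -f /backup/home.dump /mnt/restore
```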
ZFS otoh... Snapshots as well as dump/restore setups (though those were a bit flaky too; you couldn't easily restore parts, nor restore to a smaller filesystem).
But snapshots FTW. It basically means you make a backup in one second or so, and then continue working. This is esp. true if your storage is completely redundant (raid5 / 6 or so). You can make as many snapshots as you have disk space for, and of course also remove older snapshots and such.
And needless to say; restore can either be a complete rollback or you simply get individual files back.
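In ZFS terms, that snapshot-then-restore workflow is just (dataset and file names assumed for the example):

```shell
zfs snapshot tank/home@before-upgrade    # near-instant, no data copied
zfs rollback tank/home@before-upgrade    # complete rollback...
# ...or pull back a single file from the snapshot's hidden directory
cp /tank/home/.zfs/snapshot/before-upgrade/file.txt /tank/home/
zfs destroy tank/home@before-upgrade     # reclaim the space when done
```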
These are only 2 points where ZFS differs from XFS, but I hope you do realize comparing the two like that is simply absurd.
PS: I know about the online resizing capabilities of XFS btw. But the same story goes; it's simply not comparable AT ALL.
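For the record, the online XFS growth acknowledged above is a one-liner, though unlike a ZFS dataset it can only grow, never shrink (mount point assumed):

```shell
# after enlarging the underlying partition or LV, grow the mounted FS live
xfs_growfs /var
```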
Amen. That's why I stopped using XFS
BTRFS may be lots of things, but it's not particularly robust. That's why I stopped using it.
ZFS (so far) has been bulletproof. As for Linux versions, it's available for Debian/Ubuntu and Redhat/clones
If you want a commercially supported version, there's Nexenta.
They all work - and ZFS is the only FS for linux which can detect and repair disk ECC failures (others can detect, but not repair)
@ShelLuser your response, while undoubtedly excellent, makes the chronic error of failing to address what I ACTUALLY wrote about, which was the PRODUCTION READY remark (emphasis added, since you missed it the first time). I would have thought my "few petabytes" remark may have been a clue that I am well aware that ZFS is a _different_ sort of beast (including a volume-manager substitute), even if the article hadn't pointed that out anyway.
As to the remarks about online FS recovery tools, I can honestly say that I've been involved with industrial-strength (well, OK, military-industrial strength) XFS filesystems for the past 15 years and have never felt the lack. Possibly this is related to the quality of the hardware being used?
Anyway, as the watchful may have noticed, I absolutely acknowledge the virtues of ZFS. Unfortunately, until/unless it becomes part of a standard distro, it's not a lot of use.
"......Unfortunately, until/unless it becomes part of a standard distro, it's not a lot of use." Agreed, ZFS has its uses and advantages. What I object to is the manner in which some people seem determined to force it on the Linux community with no regard for its technical limitations, and then berate those that dare to point out that there are other options already in use and already in the kernel. After suffering their browbeatings for a while you start to wonder why is it they are so determined to shout down any opposition.....
Recent research shows that XFS is not really safe. It does not protect your data against corruption, and it does not detect all types of errors either. Here is a PhD thesis on the data protection capabilities of XFS, JFS, ext3, etc:
The conclusion is that all those filesystems are not designed for data protection.
When you have a small filesystem, say a few TB, there is not much risk of silently corrupted data. But when you venture into Big Data of many, many TB or even PB, there is always silently corrupted data somewhere; just read about the experience of Amazon.com:
"...Another frequent question is “non-ECC mother boards are much cheaper -- do we really need ECC on memory?” The answer is always yes. At scale(!!!!!!!!), error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted....Over the years, each time I have had an opportunity to see the impact of adding a new layer of error detection, the result has been the same. It fires fast and it fires frequently. In each of these cases, I predicted we would find issues at scale. But, even starting from that perspective, each time I was amazed at the frequency the error correction code fired..."
Most of the time, you won't even notice you have corrupted data, because the system will not know it, nor detect it. For instance, just look at the spec sheet of any high-end Fibre Channel or SAS disk, and it will always say something like "one irrecoverable error for every 10^16 bits read". There are always cases that error-repairing algorithms cannot handle: some errors are uncorrectable, and some are not even detectable. Here is more information with lots of research papers on error detection:
"They all work - and ZFS is the only FS for linux which can detect and repair disk ECC failures (others can detect, but not repair)"
This is not true. Read the research above. Other filesystems cannot even detect errors, let alone ECC failures or other types of failures such as ghost writes.
OTOH, researchers have tried to provoke ZFS by injecting artificial errors, and ZFS detected and recovered from all of them. No other filesystem, nor hardware raid, can do that. THAT is the reason ZFS is hyped. Not because it is faster, or because of functions such as snapshots; who cares about performance if your data is silently altered without the system even noticing?
ZFS production ready on Linux? I doubt that. Linux has a long history of cutting corners just to win benchmarks. Safety suffers on Linux, just to win benchmarks. See what Ted Ts'o writes, the creator of the ext4 filesystem:
"In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his user's data. (It's a sad fact that sometimes the desire to win benchmark competition will cause developers to cheat, sometimes at the expense of their users.)...We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often (!!!!!) --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug "
The conclusion is that Linux can not be trusted, because of all the cheating. Linux users are prematurely declaring Linux tech as safe when it is not. It is almost as if Microsoft declared ReFS and Storage Spaces to be production ready; that would be funny. Just google people's experiences of them.
I really doubt BTRFS will be production ready soon. ZFS is over ten years old, and still we find bugs in it. There are sysadmins who do not trust ZFS because it is not tried enough; it is too new and fancy. It takes decades before a filesystem gets proven. Even when/if BTRFS gets production ready, it will take years.
ZFS on linux is production ready? Hmmm....
If you're going to mention Vijayan Prabhakaran's dissertation, it'd be nice to actually cite it, and not some handwaving six-year-old editorial on ZDNet.
For those who are interested, it's at http://research.cs.wisc.edu/wind/Publications/vijayan-thesis06.pdf. It's now seven years old, but it's still of interest, though I wouldn't want to take what it says about the various filesystems as gospel without looking into what may have changed in them since 2006.
"ZFS on linux is production ready? Hmmm...."
It's always a question of: how much production readiness do you want?
Is it production-ready enough? I guess so. You left out this little thing in the middle of the text, written in 2009:
"In the case of ext3, it's actually an interesting story. Both Red Hat and SuSE turn on barriers by default in their Enterprise kernels. SuSE, to its credit, did this earlier than Red Hat. We tried to get the default changed in ext3, but it was overruled by Andrew Morton...."
Is the sky falling? Evidently not. Is it getting better? Yes! And seriously, is there anyone (except the ones who like riding bikes where their unprotected balls are 5 cm from the tarmac) who uses ReiserFS?
@Kebabbert It is undoubtedly true that almost all commercial file systems depend on the storage accurately, well, storing data. It is undoubtedly lovely that ZFS provides a mechanism (at some cost) to improve the quality of the storage subsystem by adding error checking features.
It is NOT true to assert that XFS (or anything else) is "unsafe" simply because it does not have those error checks. Error checking can be implemented in many different places and in many different ways, and the fact that the ZFS folks have decided there is One True Way is irrelevant to the reality that, if the underlying storage fails in various ways, you may get hurt -- which is true even with ZFS, because as a non-clustered FS, if the host croaks, you are down. CXFS (as an example) is immune from those sorts of errors.
So is CXFS "safe" and ZFS "not safe"? Of course not: XFS (underlying CXFS) is vulnerable to certain types of failure, and ZFS is vulnerable to other types. Yer pays yer money and yer takes your choices.
Meanwhile CXFS + dmapi is my friend.
"....It is NOT true to assert that XFS (or anything else) is "unsafe" simply because they do not have those error checks. Error checking can be implemented in many different places and in many different ways, and the fact that the ZFS folks have decided there is One True Way is irrelevant ..."
Yes, my assertion is TRUE. Let me explain. There are lots of error checksums in every domain. There are checksums on disk, on ECC RAM, on interfaces, etc. As my Amazon link above shows: there are checksums everywhere. Every piece of hardware has checksums, implemented in many different places and in many different ways. Does this massive checksumming help? No. Let me explain why.
The reason all these checksums do not help is this:
Have you ever played that game as a kid? A lot of children sit in a ring, and one kid whispers a word to the next kid, who whispers it on, etc. At the end of the ring, the words are compared and they always differ. The word got distorted in the chain.
Lesson learned: it does not help to have checksums within a domain. You need checksums that pass across the boundaries, so you can compare the checksum from the beginning of the chain with the checksum at the end. Are they identical? When data passes a boundary it might get corrupted, and within the next domain that corrupted data will have a perfectly good checksum. That does not help. End-to-end checksums are needed: you must always compare the first checksum with the last. This is what ZFS does.
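The end-to-end idea can be illustrated with nothing more than sha256sum: record the checksum when the data enters the chain, verify it when the data comes back out, and corruption anywhere in between is caught (the file names here are made up):

```shell
# writer's side: checksum the data before it enters the storage chain
printf 'precious payload' > data.bin
sha256sum data.bin > data.bin.sha256

# reader's side (possibly another host, years later): verify end to end
sha256sum -c data.bin.sha256 && echo "intact"

# simulate silent corruption somewhere in the chain: flip one byte
printf 'Precious payload' > data.bin
sha256sum -c data.bin.sha256 || echo "corruption detected"
```

No layer in the middle needs to cooperate; only the two endpoints matter, which is exactly the argument being made about ZFS.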
ZFS is monolithic: it is a raid manager, filesystem, etc. - all in one. Other solutions have a separate raid layer, a filesystem, a separate raid card, etc. There are many different layers, and the checksum cannot be passed between them. ZFS has control of everything from RAM down to disk because it is monolithic, and therefore it can compare end to end. Layered solutions cannot do this.
For instance, ZFS can detect faulty power supplies whereas other solutions can not. If the power supply is flaky, ZFS will notice data corruption within minutes and warn immediately. Earlier filesystems on the same computer did not notice:
And ZFS also immediately detects faulty RAM dimms. ZFS also detects faulty switches!!! Here is a fibre channel switch that is corrupt. ZFS was the first one to detect it, it had gone unnoticed earlier:
Please don't tell me that other filesystems or hardware raid can detect faulty switches, because they can not. If ZFS stores the data on a storage server via a switch, then ZFS can detect all problems in the path, because ZFS compares what is on disk with what is in RAM. End to end. No one else does that; they can not detect faulty switches, or faulty power supplies, or....
Sun learned that checksums alone do not help. CERN confirms this in a study: "checksumming is not enough, you need end-to-end checksums (zfs has a point)". I can google this CERN study for you, if you wish to read it. The point is that ZFS does end-to-end checksums, whereas other solutions do not. It does not suffice to add checksums everywhere; you will not get a safer solution. You need end-to-end. Which is what ZFS does.
Do you understand now why ZFS is safe, and other solutions are not?
Kebbie, every one of your crusading posts just makes the whole idea sound even more silly. You can't just say to the people that have between them happily and successfully run millions of systems over the years on solutions other than ZFS that they were "all wrong and ZFS is the only right answer". They will just laugh at you.
"....There are lot of children sitting in a ring...." Hmmm, good thing I use arrays to store my data and not groups of children then!
".... ZFS can detect faulty power supplies ...." <Yawn> Most servers and arrays I know of can do this for themselves already via separate PSU monitoring software. In fact, since many of them link into remote support solutions, they do it BETTER than ZFS in that they will get a replacement PSU out to site whilst the ZFS admin is still working through the logs looking for the ZFS warning on the PSU. Fail!
"...<Yawn> Most servers and arrays I know of can do this for themselves already via separate PSU monitoring software..."
Well, good for them. But the point is, ZFS can detect a faulty PSU without additional software. The data corruption detection of ZFS is so strong it can even detect a faulty PSU without additional software. People report that ZFS detected faulty SATA cables, faulty fibre channel switches, faulty ECC RAM dimms, etc. All this, without any additional software.
This is a true testament to the extremely strong data integrity of ZFS, which surpasses every other filesystem on the market. Or do you know of any other filesystem or storage system that can do this?
As CERN says about hardware raid:
Measurements at CERN
- Wrote a simple application to write/verify 1GB file
- Write 1MB, sleep 1 second, etc. until 1GB has been written
- Read 1MB, verify, sleep 1 second, etc.
- Ran on 3000 rack servers with HW RAID card
- After 3 weeks, found 152 instances of silent data corruption
- Previously thought “everything was fine”
- HW RAID only detected “noisy” data errors
- Need end-to-end verification to catch silent data corruption
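The method is simple enough to sketch in a few lines of shell. Scaled down to a handful of 1 MiB chunks (CERN ran it on 3000 servers for three weeks), it is just write, remember a checksum, read back, verify:

```shell
# write N chunks, recording a checksum for each, then read back and verify
N=4
: > manifest
for i in $(seq 1 $N); do
  dd if=/dev/urandom of=chunk.$i bs=1M count=1 2>/dev/null
  sha256sum chunk.$i >> manifest
done
sleep 1                      # CERN paused between I/Os; a token pause here
if sha256sum -c manifest > /dev/null; then
  echo "no corruption found in $N chunks"
else
  echo "silent corruption detected"
fi
```

On healthy storage this reports nothing; run across thousands of machines and petabytes, it is exactly this kind of read-back verification that surfaced the 152 silent corruptions.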
This shows that hardware raid does not offer data integrity at all, and should not be trusted. I know that you trust hardware raid, but you shouldn't. I also know that you don't think ECC RAM is necessary in servers, but it is. I have said umpteen times that you should read the research on data corruption, but you refuse. I don't really understand why you reject all research on this matter...
<Yawn, yawn, YAWN>. Simply repeating the same Sunshiner propaganda over and over is simply going to switch the undecided off even faster. Fail!
256 quintillion zettabytes ought to be enough for anyone...
Ahh but you're forgetting 'Ron Jeremy' law. The amount of porn on the Internet doubles every 18 months. Teenagers and people with a large rain hack collection will probably last a week.
Hard to do when there aren't enough atoms or energy to keep that much data on the planet Earth, just as you can't cram a baker's dozen eggs in an egg carton without breaking something. ZFS limits were intentionally set to be beyond PHYSICAL limitations.
It is fine for us puny humans, but his/her Noodliness couldn't possibly keep track of mechanical properties of the protons in his/her creation with just one ZFS. He/she would need 6. (3N where N approx = 1x10^80)
>possibly keep track of mechanical properties of the protons in his/her creation.
Protons = quantum mechanics applies = impossible to accurately keep track of all properties all the time. Couldn't resist.
Yes, it was a joke, humour.
The classical mechanical degrees of freedom of a proton (disregarding the quarks and gluons that compose such a particle) in 3d space are, surprisingly enough, 3.
As you rightly state, and just as the Heisenberg uncertainty principle states, one cannot precisely measure both the momentum and position of a quantum object such as a proton at the same time. However, one could precisely measure the position of such a particle. Of course, once the measurement has been made the particle would have moved, if not by the action of the measuring device, then by zero-point fluctuation alone.
Still this is his/her Noodliness that is doing the accounting here. Are you suggesting his/her Noodliness is anything less than Omni-everything? That is heresy and you are likely going straight to the restaurant at the end of the Universe without any Bolognese sauce ;-)
> The classical mechanical degrees of freedom of a proton (disregarding the quarks and gluons that compose such a particle) in 3d space are, surprisingly enough, 3.
They clearly would be 9: 3 for the position, 3 for the momentum, and 3 for the axis of rotation.
> However one could precisely measure the position of such a particle.
No. Because you need too much energy to do that.
> by zero point fluctuation alone.
The impossibility of determining position and momentum at the same time is better explained by the olde Fourier transform: peaks in the frequency domain mean flat functions in the time domain, and conversely; and it doesn't involve Marvel Comics Science Terminology.
Just because WE can't know both momentum and position, has nothing to do with her Noodliness.
She can do whatever she wants, and know whatever she wants.
> Just because WE can't know both momentum and position, has nothing to do with her Noodliness.
That's rubbish. Read about the Bell inequalities, whose experimental violation pretty much conclusively ruled out local hidden variables - and with them the idea that a logically consistent underlying state is possible which would define both momentum and position simultaneously (or the equivalent for polarisation bases for photons, anyway). So, there's nothing for her Noodliness to actually know.
> Hard to do when there aren't enough atoms or energy to keep that much data on the planet Earth.
About the energy, I can agree. The energy needed to sufficiently reduce the entropy of whatever medium to make it meaningful storage for this amount of data would be hard to get.
But atoms are plenty. If you figured out the atomic level storage somehow and kept one bit per atom, you'd only need a Silicon cube with side of 18km. That may seem huge but it's tiny compared to Earth. And if you had lots of energy you could easily process entire Earth to common Silicon chips to obtain this kind of data capacity and beyond.
Practical? Perhaps not. But sure we have the atoms.
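Out of curiosity, the arithmetic is easy to check. Assuming roughly 5.0e22 atoms per cm³ of crystalline silicon (an approximation) and the "256 quintillion zettabytes" figure from above, one bit per atom gives a cube of about 34 km per side; the quoted 18 km actually corresponds to one *byte* per atom. Either way, the "tiny compared to Earth" conclusion holds:

```shell
# back-of-envelope: silicon cube needed to store 256 quintillion ZB
awk 'BEGIN {
  bytes = 2.56e41                 # 256 quintillion zettabytes
  atoms_per_cm3 = 5.0e22          # crystalline silicon, approx.
  side_bit  = ((bytes * 8 / atoms_per_cm3)^(1/3)) / 1e5   # cm -> km
  side_byte = ((bytes     / atoms_per_cm3)^(1/3)) / 1e5
  printf "one bit per atom:  %.0f km cube\n", side_bit
  printf "one byte per atom: %.0f km cube\n", side_byte
}'
```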
Tell it to the pasta, cos the sauce ain't listening.
>ZFS limits were intentionally set to be beyond PHYSICAL limitations.<
Good old Sun. In other company Marketing would have been all over that, questioning the USP.
Your NEEDS will GROW. ZFS can help!
“We're in the process of receiving two visitors from Earth.” Gisela was astonished. “Earth? Which polis?” “Athena. The first one has just arrived; the second will be in transit for another ninety minutes.” Gisela had never heard of Athena, but ninety minutes per person sounded ominous. Everything meaningful about an individual citizen could be packed into less than an exabyte, and sent as a gamma-ray burst a few milliseconds long. If you wanted to simulate an entire flesher body — cell by cell, redundant viscera and all — that was a harmless enough eccentricity, but lugging the microscopic details of your “very own” small intestine ninety-seven light years was just being precious.
This is excellent news for everyone, and well done to the guys/girls who've done this. Thank you :)
However I do think it's a real shame that licensing concerns prevent the inclusion of this in the linux kernel. Whatever those concerns are they're surely relatively petty in comparison to the benefits we'd all get. Surely for something as significant as ZFS some rules could be changed specifically to accommodate it. Couldn't Linus or whoever just scrawl "except for ZFS which is ok so far as we're concerned" somewhere in the middle of GPL2?
It's still open source code. It's not as if anyone's going to be chasing anyone else for money if they use it. It seems unnecessarily obstinate, a bit like refusing a fantastic Christmas present simply because one's favourite Aunt has used gift paper that you didn't like... It didn't seem to worry anyone in the FreeBSD camp.
Hmmm, can I hear the drone of a cloud of hornets rushing towards me from a recently upset nest?
You probably wouldn't want that - except for a limited set of exceptions the GPL is not modifiable. So if code is labelled as being GPL then you don't have to inspect it closely to see if they snuck in a clause "I get to sleep in Bazza's bed wearing big muddy boots" because if they did then it's not GPL at all.
Absolutely agreed that this incompatibility is a PITA - according to Wiki it comes from the CDDL side, with Sun electing to deliberately make it incompatible with GPL, for reasons apparently obscure (maybe idealist, maybe corporate screwing around). But such pains seem something of a virtue to the eyes of GNU stalwarts - better to lose some convenience than dilute essential not-as-in-beer-freedom.
"with Sun electing to deliberately make it incompatible with GPL, for reasons apparently obscure (maybe idealist, maybe corporate screwing around)"
Couldn't it occur to you that maybe - just maybe - Sun, like others, do not like the GPL?
FreeBSD is also actively removing all GPL stuff from the base ( I think replacing gcc with clang in version 10.X is the last thing on the list)
Afaik there is no bsd licensed assembler or linker. clang isn't enough if it is still using binutils.
> Add the usual suspects delivering "unsupported packages because lawyers" (like DeCSS, MP3 decoders etc.) to your list of recognized package sources.
> yum install
"However I do think it's a real shame that licensing concerns prevent the inclusion of this in the linux kernel."
Why on earth would you want the file system driver in the kernel?
As for wanting to use ZFS: if you really need the features then it's probably worth looking at some of scale features that Solaris/Illumos offer. Best tool for the job, etc.
As for licence madness: the GPL was set up to provoke precisely this kind of conflict and to try and force the GPL onto other projects. Reap as you sow, be done by as you did, etc.
"I get to sleep in Bazza's bed wearing big muddy boots"
The wife got there first with the muddy boots...
"clang isn't enough if it is still using binutils."
Is llvm-as and llvm-ld/link usable instead, in a clang-based work-flow, or have they got rid of them completely now ?
"Afaik there is no bsd licensed assembler or linker. clang isn't enough if it is still using binutils."
True, and thanks for the link. I don't know when "ld" and "as" will be ready, but as your link points out, it's on the cards, and the intention is to have them done in time for 10.X
"Is llvm-as and llvm-ld/link usable instead, in a clang-based work-flow, or have they got rid of them completely now ?"
I'm not sure. There's some chatter here:
I was sweeping the "GPL - bah!" side under "maybe idealist", though calling out separately "maybe pragmatic" would have been clearer. The shenanigans angle was what I read into http://en.wikipedia.org/wiki/Common_Development_and_Distribution_License#GPL_incompatibility - though of course on a contentious issue WP may be unusually unreliable. In fact I was rather hoping some Sun greybeard would pop up here with a juicy recollection ("ah, the meeting where Scott did his squeaky-outraged-RMS voice and Schwartzy laughed so much the rubber band fell off his ponytail")
If only I had an edit button I'd look so much less dumberer.
If only I proofread before hitting "submit" I'd look so much better able to adapt to circumstances in a mature fashion.
If only I was that out of character then friends would be waiting for the pod-person scream...
> Couldn't Linus or whoever just scrawl "except for ZFS which is ok so far as we're concerned" somewhere in the middle of GPL2?
I think Linus has finally learned his lesson about making exceptions with the tool set with that whole Bitkeeper fiasco.
You got the GPL wrong.
It wasn't to force GPL onto other projects. It didn't in this case, now did it.
All it does is state that if you use GPL code IN your project, then you must also use the GPL.
If you don't use any GPL code, you can use whatever license you want.
> All it does is state that if you use GPL code IN your project, then you must also use the GPL.
i.e. it forces GPL into your project if you use GPL-ed code anywhere in it. Sun didn't want that constraint, hence the CDDL.
"Why on earth would you want the file system driver in the kernel?"
Because if it isn't in the kernel, it can't be used for the root filesystem. You can't boot from userspace filesystems because userspace doesn't appear until later in the boot process.
> Sun didn't want that constraint
Well then that's on Sun. The idea of releasing the source and being specifically hostile to the GPL is a bit of a contradiction. You either are for end user freedom or you aren't. GNU was already here. Linux was already here. Sun chose to be antagonistic to it.
It's not up to the oldest libre projects to pander to the pro-corporate inclinations of the latest shiny thing.
"I was sweeping the "GPL - bah!" side under "maybe idealist", though calling out separately "maybe pragmatic" would have been clearer."
Oh, I see now. Sorry for being (slightly) sarcastic in my reply!
From the wiki link:
"....that the engineers who had written the Solaris kernel requested that the license of OpenSolaris be GPL-incompatible. "Mozilla was selected partially because it is GPL incompatible. That was part of the design when they released OpenSolaris...."
Assuming that is correct, that may again have been because they didn't like the inherent rules of the GPL rather than "It's GPL,ARRRGH"
But I see your point. There are fanbois on both sides :)
"Afaik there is no bsd licensed assembler or linker. clang isn't enough if it is still using binutils."
Give us a chance, we're getting there! Lots of the standard tools have recently been ported from their GPL equivalent, iconv, sort, grep are all on their way to being fully replaced, clang introduction has been very good, the toolchain will land by FreeBSD 12 I'd guess.
IIRC, the 'problem' with CDDL and GPL is not that the CDDL prohibits the GPL; it is that the GPL prohibits itself from using CDDL code, since it cannot re-license it as GPL. The CDDL isn't a problem for a BSD-licensed OS, since we just want to use the code, not re-license it.
The zfsonlinux guys are very active in the ZFS community, and have fixed lots of bugs in the upstream (which is Illumos, open source ZFS has little to do with Oracle/Sun anymore). The only feature missing from ZFS, Block Pointer Rewrite¹, will probably come from zfsonlinux if anywhere.
¹ Block Pointer Rewrite is the ability to dynamically resize a pool by adding or removing vdevs, e.g. by adding a single-disk vdev to a 4-disk raidz pool to make a 5-disk raidz pool.
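To make the footnote concrete: adding whole vdevs to a pool already works today; what Block Pointer Rewrite would add is the ability to remove or reshape them (pool and device names here are illustrative):

```shell
# works now: grow the pool by striping in another whole vdev
zpool add tank raidz /dev/sde /dev/sdf /dev/sdg /dev/sdh
# needs Block Pointer Rewrite (which doesn't exist yet): widening an
# existing raidz from 4 disks to 5, or removing a vdev to shrink the pool
```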
Silent data corruption handling
Silent? As in one of your disks has failed but you won't notice a thing until a second one dies and the whole FS gets irretrievably corrupted? How about we go for automatic with lots of warnings instead?