"Now obviously the requirements of RAID-6 mean that you would probably do 5/2/3 but still I think the 2:1 principle holds."
It's pretty much guaranteed that anything larger than about 40TB will suffer data corruption somewhere in the array, even without a rebuild. The statistics of drive-level CRC/ECC checking mean that roughly one bad sector will slip through at about that scale, and RAID may not catch it.
In addition, the extra thrash on a disk set means you run a not-insignificant chance (2-6% on 100TB) of losing the entire RAID6 array during a rebuild after a drive is lost. To bring that into perspective, I've lost 2 RAID6 sets in the last decade, both of which were only 20TB or less (HP MSA arrays).
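For what it's worth, the arithmetic behind the first claim is easy to sanity-check yourself. This is just a back-of-the-envelope sketch; the 1-in-10^14-bits URE rate is my assumption (a typical consumer-drive spec, enterprise drives are usually rated at 1 in 10^15), and the function is mine, not from any vendor datasheet:

    import math

    # Back-of-the-envelope: chance of hitting at least one unrecoverable
    # read error (URE) when reading a whole array end to end.
    # The 1-in-10^14-bits rate is an assumption, check your drive spec.
    def p_at_least_one_ure(terabytes, ure_per_bit=1e-14):
        bits = terabytes * 1e12 * 8                      # decimal TB -> bits
        # P(no URE) = (1 - r)^bits; log1p/expm1 keep the arithmetic stable
        return -math.expm1(bits * math.log1p(-ure_per_bit))

    for tb in (20, 40, 100):
        print(f"{tb} TB: {p_at_least_one_ure(tb):.0%} chance of at least one URE")

At that rate a full read of 40TB comes out around 96%, which is roughly where "pretty much guaranteed" comes from. The rebuild-loss figure follows the same sort of arithmetic, compounded by the extra stress the rebuild puts on the surviving disks.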
Adding a third parity disk drops the latter risk into vanishingly-small territory, but does nothing for the former problem.
In addition, a RAID set only protects the disk layout; filesystem checking still has to be performed from time to time - which in almost every FS means taking it offline.
ZFS kills several birds with one stone. It assumes drives are crap, so it builds its robustness around recovery and checksum verification of every block of data, and it writes corrected data back to the disks when errors occur. It acts as both a volume manager and a filesystem, so you don't have to run 2 layers of complexity on top of your RAID sets _AND_ you can run periodic FS checks (ZFS calls them "scrubs") without taking the data offline. In addition it offers SSD caching of "hot" data and an SSD-backed intent log for writes, so writes can be spooled out to the spinning disks sequentially, even if there's a power failure. The result is performance far in excess of what is regarded as "normal" for RAID systems, plus a third parity disk (raidz3), with the potential to add more in future.
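For anyone who hasn't tried it, all of that is driven from a handful of zpool commands. A minimal sketch - the device names here are placeholders, substitute whatever your disks and SSDs actually enumerate as:

    # Triple-parity pool across eight disks (device names are examples only)
    zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh

    # SSD read cache and SSD-backed intent log for writes
    zpool add tank cache nvme0n1
    zpool add tank log nvme1n1

    # The online "fsck" equivalent - runs while the data stays mounted and in use
    zpool scrub tank
    zpool status tank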
This isn't a substitute for distributed filesystems, but there's a compelling case for using it instead of RAID in most circumstances, and it makes a great building block if you do need distributed FSes.