Many people would like the option of adding another disk to the pool, telling ZFS to move the data off the failing disk, then remove the now empty failing disk from the pool. This seems a reasonable request, especially as it looks like it should be do-able with no downtime. Unfortunately, as far as I know, this isn't possible, and there are no plans in mainstream ZFS development for it to be possible.
You want to start with a 5 disk zpool, notice one of the disks is bad, and have ZFS shrink the pool to a 4 disk zpool? This is not possible with ZFS, because ZFS was written for enterprise use, where a) you specified the capacity you need and don't want less, and b) you would simply replace the disk. (The feature is called Block Pointer Rewrite, and there was an excellent Sun blog post on how and why to do it, and why they never bothered, but it looks like Oracle has taken it down.)
The process for replacing a disk is trivial: plug the new disk in, run zpool replace <pool> <old dev> <new dev>, and unplug the old disk once resilvering has finished.
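A minimal sketch of that replacement flow (the pool name "tank" and the device paths are placeholders, not from the post):

```shell
# Tell ZFS to copy everything from the old device onto the new one.
# The old disk stays active in the pool while data is resilvered.
zpool replace tank /dev/old-disk /dev/new-disk

# Watch resilver progress; when it completes, the old device is
# detached from the pool and can be physically removed.
zpool status tank
```

Needs root, and the new device must be at least as large as the one it replaces.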
What I'd like, using your example, is to start with a 5 disk pool, add a disk with the same or larger capacity as a failing disk, have ZFS move the data on the fly from the failing disk to the new disk, then remove the now 'empty' failing disk, leaving me with a 5 disk pool. If the pool has enough physical disks to provide redundancy, you can, as you say, use 'zpool replace' - but if you have a one disk pool, you cannot do this without downtime. So while ZFS is great for large enterprise use, it is of less use on (say) a single disk laptop or desktop.
I have not had the time to experiment with setting up file-backed ZFS pools on a single disk, as documented here:
While definitely not recommended for high-end enterprise use, it could well make ZFS more useful for me on small low-end systems. ZFS is great for big-data use cases, but sometimes it just isn't a good fit for smaller machines.
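For what it's worth, a file-backed pool can be sketched roughly like this (the file path, size, and pool name are my own placeholders, and this is a toy setup, not a recommendation):

```shell
# Create a sparse file to act as a virtual device (2 GB is arbitrary)
truncate -s 2G /var/tmp/zfs-file-vdev

# Build a single-vdev pool backed by that file (needs root)
zpool create filepool /var/tmp/zfs-file-vdev

# A second file could later be attached to turn it into a mirror:
# truncate -s 2G /var/tmp/zfs-file-vdev2
# zpool attach filepool /var/tmp/zfs-file-vdev /var/tmp/zfs-file-vdev2
```

The appeal on a single-disk machine is that file vdevs let you play with ZFS redundancy and replacement mechanics without extra hardware, at an obvious cost in performance and robustness.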