Backups are the easiest thing to get right
Put a script on every server that should be getting backed up. The script contacts a master/monitoring server to report which filesystems it can see, picks a random file in each (one that hasn't been modified for at least a day and isn't crazy large), and takes a SHA-1 hash of it. The master server selects a few of those files at random (say, one for every 100 servers) and does a test restore every day, comparing the restored file's hash against the one the agent reported.
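A minimal sketch of the agent side in Python, assuming Linux; the master's URL, the size cap, and the age threshold are all placeholders you'd tune, and the report endpoint is hypothetical:

    import hashlib
    import json
    import os
    import random
    import socket
    import time
    import urllib.request

    MASTER_URL = "http://backup-master/report"  # hypothetical endpoint on the master
    MAX_SIZE = 100 * 1024 * 1024  # skip anything over 100 MB ("not crazy large")
    MIN_AGE = 24 * 60 * 60        # only files untouched for at least a day

    def mounted_filesystems():
        """Local filesystems worth sampling, read from /proc/mounts (Linux)."""
        keep = {"ext4", "xfs", "btrfs"}
        with open("/proc/mounts") as f:
            return [fields[1] for fields in (line.split() for line in f)
                    if fields[2] in keep]

    def eligible_files(mount):
        """Yield files on this filesystem that are old enough and small enough."""
        now = time.time()
        for root, _dirs, files in os.walk(mount):
            for name in files:
                path = os.path.join(root, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue  # vanished or unreadable; skip it
                if st.st_size <= MAX_SIZE and now - st.st_mtime >= MIN_AGE:
                    yield path

    def sample_one(mount):
        """Reservoir-sample one eligible file so we never hold the whole list."""
        choice, seen = None, 0
        for path in eligible_files(mount):
            seen += 1
            if random.randrange(seen) == 0:
                choice = path
        return choice

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def main():
        samples = []
        for mount in mounted_filesystems():
            path = sample_one(mount)
            if path:
                samples.append({"filesystem": mount, "path": path,
                                "sha1": sha1_of(path)})
        payload = json.dumps({"host": socket.gethostname(),
                              "samples": samples}).encode()
        req = urllib.request.Request(MASTER_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        main()

Run it out of cron once a day. Walking a whole filesystem is real I/O, so schedule it off-peak.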
Every day the master posts to the intranet the list of all servers being backed up and the results of the test restores, so those operating the backups see it, and anyone who worries "is my stuff getting backed up?" can see that it is, and can see from the test restores that the backups can actually be restored.
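On the master side, the daily select-and-verify step might look like the sketch below; "bkrestore" stands in for whatever restore command your backup product actually provides (NetBackup, Bacula, restic, whatever), so treat it as a placeholder:

    import hashlib
    import os
    import random
    import subprocess

    def sha1_of(path):  # same helper as on the agents
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def test_restore(host, path, expected_sha1, dest="/var/tmp/restore-test"):
        """Restore one sampled file from backup and compare hashes.

        'bkrestore' is a placeholder; substitute your backup product's
        actual restore CLI and flags here.
        """
        restored = os.path.join(dest, host, path.lstrip("/"))
        subprocess.run(["bkrestore", "--host", host, "--file", path,
                        "--dest", restored], check=True)
        return sha1_of(restored) == expected_sha1

    def daily_check(reports):
        """reports: {host: [sample dicts]} collected from the agents today.

        Picks roughly one file per 100 servers, test-restores each, and
        returns (host, path, passed) tuples to publish on the intranet page.
        """
        hosts = list(reports)
        n = max(1, len(hosts) // 100)
        results = []
        for host in random.sample(hosts, n):
            s = random.choice(reports[host])
            results.append((host, s["path"],
                            test_restore(host, s["path"], s["sha1"])))
        return results

The nice property is that this exercises the whole path end to end: if a filesystem never made it into the backup policy, the restore fails, and the failure lands on the intranet page where the owner can see it.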
I'm not saying this is foolproof, and depending on how you handle offsite vaulting it may not be so easy to verify, so you may need something different there. But considering the number of times I've seen data loss, this simple step would have caught most of it: a new filesystem was added but never added to the backups; a server had been around for three years and was never backed up, and those who cared about it had no way to know; or backups were running but couldn't be restored for one reason or another.
Yes, you want monitoring that can report on completions so failed backups can be restarted and so forth, but the above is step 1, because too often when monitoring says "all backups completed successfully" the backup team thinks their job is done. They don't understand that their job isn't to back up data, it's to restore data. I consulted at a place with really messed-up backups, and even though I was there for storage I spent a few weeks helping their backup team. One thing the data center manager refused to do, and that I really wish he'd considered, was to rename that team the data restoration team, to drive home what their job really is.