Sounds like you got off lightly.
This is just for my present position, but I was hired by word-of-mouth as I happened to be in the job market at exactly the same time that a disaster befell this particular workplace. I was snapped up after they'd done most of the initial firefighting (please bear that in mind) and have thus far witnessed the following:
1) One server. Literally. One. Running 500 users. When I started, that was in the process of being replaced by the following setup: one server running all the user stuff, and another running the SQL server (including payroll), the print system, the phone system, some shared areas, the backup software, all kinds of junk. Ironically, they had some of the most powerful servers I'd ever seen serving Windows thin clients - powerful enough to run 50+ user sessions. The thin clients never got used and everyone hated them, but those servers outclassed everything else in the server room (sadly they were quite old - floppy-disk era - and we've just replaced them; back in their day they must have been TOP of the line). They sat idle while the one server did all the work, until it fell over.
2) A set of data-recovered failed RAID disks, in a box. Previously resident in the single server. £10k to recover, and they never got all their data back: user profiles and documents had to be recovered from CLIENT ROAMING PROFILE COPIES! I had the recovered drives framed and hung on the wall with a plaque reading "Cogito Ergo Facsimile" (excuse the Latin - hopefully "I think, therefore I make copies"?)
3) No backups. None. The guy was still getting emails about a freeware backup utility but hadn't even bothered to deploy that. There was nothing - no tape, no NAS, nothing except what was on the server's hard drives. And he had been there to ignore BOTH RAID failures. By the time I inherited it, there were some NAS boxes, but also an unlicensed (i.e. illegal) copy of Backup Exec on every server.
4) No WSUS at all.
5) No client images (not even WDS - they just bit-for-bit copied existing machines!).
6) Exchange installed on the DC, making an unfixable, unsupported combination (officially there's no supported way to separate them again: Exchange was never supposed to go on a DC in the first place, and demoting a DC that's running Exchange is dangerous and likely to break both!).
7) Every cable measured TO THE INCH to the patch panels and crimped by hand - and often routed through the centre of the racks, so you couldn't insert anything more into a rack without de-patching EVERY CABLE and re-patching it. For one cabinet we had to pull an all-nighter just to rewire 24U, and we rewired EVERY cable in there.
8) I found a switch hidden in a radiator cabinet, powered by a socket inside the floor (near a cellar hatch). That switch served the whole main office and wasn't documented anywhere. Its uplink was a 150m Cat5 run - well past the 100m spec - of internal-grade cable routed outdoors, and it was thoroughly destroyed by the time I got there. Apparently that had been in place for several years and nobody knew about it. Until it went off.
Needless to say, I got triple the normal IT budget to fix the problems. We bought a proper set of redundant blade servers and spread them over the site, put in multiple backup strategies, proper backup software, full virtualisation with service separation, and a complete re-cable (including redundant links around the site and to the Internet), and it's now... well, quite impressive.
My boss has also indicated that next month we will have a full, live, in-service failover test - I think because I've made all these assertions about what should happen on a modern system and he wants to see whether they're true. As in, he will "pull power" (not literally, but simulated by shutting the machines down gracefully) to one entire server location in the middle of the working day to see what happens. We are merely expected to provide "business continuity" (i.e. we don't lose data and thus bankrupt the company - shouldn't be hard, which shows you what kind of IT they had previously!), but I'm actually aiming for "service continuity" (i.e. nobody but us notices that anything has happened).
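For what it's worth, the measure I care about in that test is dead simple: when did each service stop answering, and when did it come back. Something like the throwaway sketch below is enough to put numbers on "nobody noticed" - it's Python, stdlib only, and every hostname, port and service name in it is invented for the example, not our real kit:

```python
#!/usr/bin/env python3
"""Crude downtime logger for a planned failover test.

Polls one TCP port per service every few seconds and prints when each
one stops answering and when it comes back. Stdlib only, so it can run
from any spare desktop. All hostnames and ports are made up - swap in
your own.
"""
import socket
import time
from datetime import datetime

# Hypothetical services to watch - replace with your real ones.
SERVICES = {
    "file-server": ("fs01.example.local", 445),    # SMB
    "sql-server":  ("sql01.example.local", 1433),  # MSSQL
    "mail":        ("mail01.example.local", 25),   # SMTP
}
POLL_INTERVAL = 5  # seconds between polling rounds

def is_up(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def main():
    down_since = {name: None for name in SERVICES}  # None = currently up
    while True:
        for name, (host, port) in SERVICES.items():
            now = datetime.now()
            if is_up(host, port):
                if down_since[name] is not None:
                    print(f"{now:%H:%M:%S} {name} BACK after "
                          f"{now - down_since[name]}")
                    down_since[name] = None
            elif down_since[name] is None:
                down_since[name] = now
                print(f"{now:%H:%M:%S} {name} DOWN")
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    main()
```

If the failover works as advertised, the "DOWN" lines should either never appear or be followed by a "BACK" line a few seconds later.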
But that's not even the worst I've inherited. Hell, I refused to touch one charity's network that I was invited to work on. I literally had to say to them "I can't touch that", and they knew I was doing them a favour by saying so. It wasn't fully backed up, the backups were at a remote site they didn't have access to, nothing on the desktops was in a state where I thought I could safely play with it - and they dealt with the medication records of dying children. Sorry, I have no qualms about fixing it for you, but it's really in your interest to get a proper firm in - because given the state it was in, you couldn't have afforded the price I'd have to put on taking that responsibility. Start again, get a proper firm in, and get some ongoing support while you're at it. It will cost the earth, but that's nothing compared to staying on the precipice of losing that data. I did make sure they had at least one sufficient backup before I left, but that was all I could do in the time.
I'm sure people have worse stories too, but by comparison, some "neglected" server settings and a single non-booting server (sorry, but a note saying not to reboot it is NOT a solution, even temporarily) are nothing.