I specialise in working in schools that have experienced IT disasters, cleaning them up, restoring confidence in the system, proving it can run for a while (so I'm not just a fly-by-night merchant) and then moving to the next.
I've done it for about 17 years, just not on the scale of KCL.
There are good points all round here. Sure, you shouldn't be saving data that may come under the Data Protection Act on personal drives. That's a given.
But you've destroyed user confidence here. That counts for an awful lot. What you SHOULD be doing is running around with a bulk purchase of, say, small NAS devices (which will be perpetually useful to you when you recall them) if that's what people are doing. You desperately need storage? Here, have a 12TB array - one we can secure, encrypt, restrict, recall, replicate and then copy off when we're sure the problem has gone away.
You've destroyed user confidence, and with it their obedience. Those are normally the points where someone like myself enters, as an unknown, and tries to enforce good policy while fixing the problem.
My mantra is "I don't lose data". I will happily demonstrate the layers of checks, replicas, backups, etc. that I maintain to prove that to people. I don't lose data. You deleted stuff last week? Here it is. Last month? Here it is. Last year? Here it is. You might not be able to see it instantly, but we don't lose data. You need to drum that home.
But you HAVE lost data. And with the same IT people and the same equipment and the same suppliers you're trying to convince users that something has changed and will never happen again. That's an impossible task. Throw them a bone. You need to get back in their good books. I've already predicted that there should be a few pink slips winging their way around the KCL internal mail, because this is just that serious. But you also need to throw them a bone, technically, to get them - and their confidence in your system - back.
Literally, say, "We will provide you with multiple independent places to store your data while we make sure everything is back - they are under our control, we can still control the data on them for legal purposes, but here you go. There's a working area. You can safely put your years of research and teaching materials on there because you yourself can see that it's several different places, each independent and under our (yours and ours) control."
It's expensive. It's huge. It's a big job. But if you want to restore confidence, it's a necessary step. Even "This network share is in The Strand, this one is in our other data centre, they are independent, please feel free to copy to both". It's showing them that you care about their data (which is worth more than your job, I assure you), that you are letting them keep control of their data, but at the same time not encouraging hundreds of devices tucked under desks out of IT's - and therefore the Data Controller's - sight.
I've been at my current place 2.5 years. Not a bit of data lost. Despite lightning strikes (literally blowing up a network switch), server failures, power trips that took out even the UPS (crossed phases), etc. Their data is still there. All the data that existed when I started, plus everything they've made since.
My previous place, 5 years. Same. Took over a network that wasn't a network and then never lost a single byte of data. Was even asked to prove it at one point when a teacher claimed they'd "definitely" saved their old lesson plans - shadow copies twice-a-day going back months, backups going back years, replicas of those backups, and backup logs listing every file present.
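The mechanism behind that kind of claim doesn't have to be exotic; the discipline does. Here's a minimal sketch of the dated-snapshot-and-restore idea in Python - paths and function names are my own illustration, not anyone's actual tooling, and a real deployment would use VSS shadow copies, rsync --link-dest, ZFS snapshots or similar rather than naive full copies:

```python
"""Illustrative only: dated snapshots of a share, with newest-first restore.

The point is the discipline -- independent, dated copies you can actually
restore from -- not this particular implementation.
"""
import shutil
from datetime import datetime
from pathlib import Path


def take_snapshot(live: Path, snaps: Path) -> Path:
    """Copy the live share into a new timestamped snapshot directory."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S%f")
    dest = snaps / stamp
    shutil.copytree(live, dest)  # naive full copy; real tools deduplicate
    return dest


def restore(snaps: Path, relpath: str, live: Path) -> bool:
    """Walk snapshots newest-first; bring back the first copy found."""
    for snap in sorted(snaps.iterdir(), reverse=True):
        candidate = snap / relpath
        if candidate.exists():
            target = live / relpath
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(candidate, target)
            return True
    return False  # genuinely never existed in any snapshot
```

Run that twice a day, keep the snapshots on independent storage, log what each one contains, and "you deleted it last month? Here it is" stops being a boast and becomes a demonstrable fact.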
It's a core, basic principle of IT. You are the curators of the data. It's up to you to preserve it, because nobody else will, it's up to you to prove that, and to ensure it applies to everything, and to survive a disaster - flood, fire, lightning strike, even (for KCL) a potential bombing - and not lose things.
But you lost it on a "routine" upgrade because you did not have backups in place sufficient to restore working order in good time. Literally a USB stick would have been better for most people. That's NOT running IT properly, which is why heads should roll.
But to then expect users to throw all their research into ONLY your same systems again straight after that - after a huge, catastrophic failure of that selfsame system that wiped them out for weeks without any hope of restoration or working replicas - is dumb.
Technically, ethically, personally, it's a dumb suggestion.
Provide them with some confidence and make them trust you again.
"Oh, you remember when we just accidentally lost all your children and couldn't find them for weeks? Well, we've changed nothing but you HAVE to give us your children again."
We do NOT lose data. If you lost data - or lost access to it for long enough that it makes no difference that it wasn't a total loss - you are not part of us. Not part of IT.
IT do NOT lose data.