rm -fr /
I used to run HPC clusters where doing this on the compute nodes would not have been quite as catastrophic as on a normal system. They would probably have rebooted OK.
The reason is that / was always copied into a RAMfs at boot from a read-only master copy, /usr was a read-only mount, and most of what would normally be separate filesystems were just directories under / and /usr. It's true that /var would have been trashed, and any data filesystems that happened to be mounted would have gone too, but the system would have rebooted!
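To illustrate the idea (a sketch only, with made-up paths, not the clusters' actual boot scripts): the node's root lives as a read-only master image, and at boot an initramfs-style step copies it into a RAM filesystem, so every write, including a stray rm -fr, lands in RAM and vanishes on reboot. Real mounts and tmpfs would need root, so this simulates the mechanism with ordinary directories:

```shell
set -eu

master=$(mktemp -d)    # stands in for the read-only master root image
ramroot=$(mktemp -d)   # stands in for the tmpfs that becomes / at boot

# populate the "master image", then make it read-only as on the real system
mkdir -p "$master/etc"
echo "node001" > "$master/etc/hostname"
chmod -R a-w "$master"

# boot step: copy the master image into the RAM root, then make it writable
cp -a "$master/." "$ramroot/"
chmod -R u+w "$ramroot"

# destructive writes hit only the RAM copy; the master is untouched
rm -fr "$ramroot/etc"
test -f "$master/etc/hostname" && echo "master image intact"
```

On the real nodes the same pattern meant a reboot simply re-copied the pristine image, which is why the damage there would have been so much less permanent.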
On a related note, when the clusters were decommissioned, I was the primary person responsible at all stages of their systematic, documented and verified destruction. It ranged from the filesystems, through the deconstruction of the RAID devices and the scrubbing of all the disks (about 4,000 of them), to the destruction of the network configuration and routing information, the deletion of all the read-only copies of the diskless root and usr filesystems, and even the scrubbing of the HMCs' disks. (Interestingly, HMCs run Linux, and it was possible to run scrub against the OS disk of the last HMC [it was jailbroken] while the HMC was still running!)
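The sanitisation pass can be sketched with shred(1) from coreutils, here standing in for the scrub(1) utility actually used, and overwriting an ordinary throwaway file rather than a block device so it runs unprivileged; against a real disk the argument would be something like /dev/sdX:

```shell
set -eu

# throwaway file standing in for a disk to be sanitised
f=$(mktemp)
printf 'sensitive cluster config\n' > "$f"

# three random overwrite passes, then a final zero pass
shred -n 3 -z -v "$f"

# verify: dump the bytes and check nothing non-zero survives
od -An -tx1 "$f" | grep -vq '[1-9a-f]' && echo "file now all zeroes"
```

The verification step matters: "documented and verified destruction" means reading the media back after the overwrite, not just trusting that the tool ran.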
The complete deconstruction, from working HPC systems to the hardware being driven away from the loading bay, took six (very long) working days, and finished with a day's contingency still remaining in the timetable.
So I am one of a relatively small number of people who can claim that they've deliberately, and with complete authorization, destroyed two of the top 200 HPC systems of their time!
I had real mixed feelings. It was empowering to be able to do such a thing, and upsetting too, because keeping those systems running had been almost my entire working life for four years or so.