Not usually, since this is controlled by the drive firmware, and the OS you are using to wipe the drive no longer has access to those blocks/tracks. Not sure about SSDs, but probably a similar approach, since marking areas off limits is done at the hardware/firmware/controller level.
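One way around the OS-visibility problem on ATA drives is to ask the firmware itself to do the erase via the ATA Security feature set, which the drive applies to every sector it knows about, including remapped ones. A rough sketch using Linux `hdparm` follows; the device path is a placeholder, the commands are prefixed with `echo` so nothing destructive runs as written, and you should verify Security-feature support and the "not frozen" state on your own hardware before trying this for real.

```shell
#!/bin/sh
# Illustrative sketch of an ATA Secure Erase with hdparm (Linux).
# DEV is a placeholder; remove the echo prefixes only on a drive you
# genuinely intend to destroy the data on.
DEV=/dev/sdX

# 1. Confirm the drive supports the Security feature set and is not frozen.
echo hdparm -I "$DEV"

# 2. Set a temporary user password (required before an erase can be issued).
echo hdparm --user-master u --security-set-pass tmp "$DEV"

# 3. Issue the erase; the firmware overwrites every sector it tracks,
#    including blocks the OS can no longer address.
echo hdparm --user-master u --security-erase tmp "$DEV"
```

On many SATA SSDs the same command triggers the controller's internal erase, which is about the only way to reach over-provisioned and retired flash blocks.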
Theoretically the risk is very low, similar to what happens when a drive fails in a way that leaves it offline, so you cannot wipe it before it is removed from a system. There is a gap there for sure, since the platters still contain data, but hardware vendors tend to charge a lot for a secure-destruction contract that lets you keep failed drives rather than return them in exchange for the replacement their tech brought out. The companies I have worked for securely erased those devices during remanufacturing if they were going back into the spares pool, while preserving the defect list so the faulty blocks were not allocated again, and they always had policies requiring secure destruction of drives that were not being re-used.
For wiping, I have used DBAN on both Windows and Linux machines, or variations on format: on Solaris, for example, the format/analyze/purge method is compliant with a DoD spec for data destruction, since it writes over every block multiple times with different patterns and verifies that every block has been written (it defaults to something like 5 passes, I think). Still, that covers only the blocks the OS can access, which might not be 100% of them, although it will be close.
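The overwrite-and-verify idea behind these tools can be sketched in a few lines. This is an illustration only: the five patterns below are my own placeholders, not the actual sequence any DoD spec or DBAN mandates, and it runs against a scratch file rather than a block device.

```python
import os
import tempfile

# Illustrative pass patterns; real standards define their own sequences.
PATTERNS = [b"\x00", b"\xff", b"\xaa", b"\x55", b"\x00"]

def wipe(path, block_size=4096):
    """Overwrite every block of `path` once per pattern, verifying each pass."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pat in PATTERNS:
            # Write pass: cover every block with the current pattern.
            f.seek(0)
            remaining = size
            while remaining:
                n = min(block_size, remaining)
                f.write(pat * n)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())
            # Verify pass: read back and confirm every block was written.
            f.seek(0)
            remaining = size
            while remaining:
                n = min(block_size, remaining)
                assert f.read(n) == pat * n, "verify failed"
                remaining -= n

# Demo on a temporary file standing in for a device.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"secret data" * 100)
    name = tf.name
wipe(name)
with open(name, "rb") as f:
    data = f.read()
os.remove(name)
```

The verify step is what distinguishes this from a plain `dd` overwrite: a block that silently failed to write would be caught rather than left holding old data. Against a real device you would open the block device node instead of a file, but the same caveat from above applies: the OS can only reach the blocks the firmware still exposes.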
Degaussing works, as does shredding, but if you have to return a drive in a condition where it can be reconditioned and used for future service calls, those methods tend to be a problem.