I agree it looks like a 100Hz ticker reaching 2^31 and going negative. But I think (without going off into comp.risks) there might be something else going on.
Problematic code can fail at overflow in one of two ways: looping forever (very bad) or exiting prematurely (softer). It's not hard to get it right, of course, but if, say, the API changes (!) after you wrote your timer code, or something like that, it can end up wrong. So, for ultra-cautious safety-critical stuff, how do you make sure? You add asserts. Lots of asserts. Sounds like that's what we have here, since it goes into a fail-"safe" shutdown.
Problem is, if your code would do the soft failure - premature exit from the loop - then the assert makes it less reliable. What would otherwise be a single spurious short timeout - perhaps no worse than noise in whatever you're measuring or updating periodically, perhaps completely harmless - the assert failure turns into a complete shutdown.
In other words, assert - and thus shut down - if you see insanity in your inputs, or detect an actual failure signal, for example. Asserting on anything less might help find bugs if you test it enough, but when deployed, you want something weaker: log the unexpected condition, but don't panic the system. IMHO.