Re: Just tested it
Yeah, it comes with the caveat of "don't compile with optimization".
The whole point of the "algorithm" is just to measure how long the CPU takes to do some work; if you let the compiler optimize all that work away, then of course there's nothing left for us to measure.
If you chose the clock() timer (usually the lowest-resolution timer on modern systems), then those are your worst-case results (again, not counting the optimized builds). Using the nanosecond-level timers will improve your score. If you're on Windows, the jump will be extreme, because for some reason Windows' default timer is super low-res.
But even with just 75% MFV (most frequent value), you're already golden. Collect 1,000 samples and you've got 400 bits of entropy, more than enough for seeding. The versions of the POC after the cited code here switched around the SCALE and SAMPLES settings - I found it was more efficient to lower the scale (how many times to loop before measuring) and increase the samples (how many measurements to take).
Even an Arduino Uno (a measly 16 MHz CPU with a low-res, 4-microsecond-precision timer) manages to collect 3,000 bits of entropy per second. That's already the extreme low end of the results.
Anyway, all these and more are in the updated supplementary site: http://research.jvroig.com/siderand/