Just tested it
Version 1: The time drops rapidly for the first nine samples, then remains fairly constant, with the last digits showing five or six bits of entropy. The drop over the first nine samples is quite consistent across runs. Around 70 unique samples per run.
Version 2: 3 unique samples per run, with the most common turning up 75% of the time and the least common usually appearing first.
Version 3: 2 unique samples with the most common turning up 96% of the time.
Version 4: Same as version 3.
Version 5: Only about 30 unique values per run.
Version 1 was not optimised. Version 2 used -O2. Version 3 redirected output to a file instead of pasting output from a terminal window into a file. Version 4 moved the printf into a separate loop from the sample generator. Version 5 was the same as version 4 but without -O2.
Conclusion: Use with great caution. Make absolutely certain your test code and production code use the same compiler options. Much of the apparent randomness comes from printf and whatever it writes to.