I was convinced Linux already did this? With the kernel entropy pool, no matter how much non-random data you mix in, the pool is always at least as entropic as it was before*. So there is nothing to lose, and everything to gain, from hashing in keystrokes, network packet arrival times, interrupt service times, and probably some timing loops from kernel housekeeping tasks that could be (or already are) mixed in too. Or 10 gigabytes of /dev/null. It's always as random as it can get. Not sure what the point of his paper is, honestly.
* assuming you don't do anything stupid, like give an entropy generator access to its own output :P
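To illustrate the claim above — mixing attacker-known data can't reduce the entropy already in the pool — here's a toy hash-based pool. This is a hypothetical sketch, not the kernel's actual algorithm (SHA-256 just stands in for some one-way mixing function, and `ToyPool` and its methods are made-up names):

```python
import hashlib

class ToyPool:
    """Toy entropy pool: mixing is state = H(state || input).
    A simplified model, not the Linux kernel's real construction."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def mix(self, data: bytes) -> None:
        # Because H is one-way, an attacker who doesn't know the current
        # state learns nothing about the next state, even if they chose
        # `data` themselves (e.g. 10 GB of zeros from /dev/null).
        self.state = hashlib.sha256(self.state + data).digest()

    def read(self, n: int = 32) -> bytes:
        # Output is derived from the state rather than being the state
        # itself, so raw output is never fed straight back in (see the
        # footnote about feeding a generator its own output).
        out = hashlib.sha256(b"output" + self.state).digest()[:n]
        self.mix(b"post-read ratchet")
        return out

# Two pools with different secret seeds, both fed identical known junk:
a, b = ToyPool(b"seed-A"), ToyPool(b"seed-B")
for pool in (a, b):
    pool.mix(b"\x00" * 10_000)  # attacker-known, zero-entropy input
# Their outputs still differ: the known input didn't erase the seed entropy.
print(a.read() != b.read())
```

The point the sketch makes: mixing is a one-way function of (old state, input), so the only way known input hurts you is if the attacker already knew the state — in which case you had no entropy to lose anyway.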