Woo-fu****g-hoo. This is a perfect example of how laziness breeds complexity and inefficiency. Fractal generation is typically one for loop inside another, covering each pixel on the screen, with no dependency between iterations. Yes, you can spread those iterations over different cores for speed using Web Workers, but why bother? Do a bit of reading and discover decent C/C++ compilers like the ones from Intel and Sun (maybe even GCC these days). They will spot such loops for you and parallelise them with threads at run time, without you ever having to think about it in your coding. The result will probably run significantly quicker as well. And that's before you start contemplating bending it to fit on a GPU.
Okay, so this is just an eye-candy demo of no major consequence. But there's a broader point here:
The only benefit of Ruby, PHP, and other such languages is rapid development: they let a service be brought to market quicker, and the labour is cheap too. That is an important commercial consideration. But if your service expands and your electricity and inventory bills start heading into the millions (or billions, if you've been very successful indeed), the cost of that initial laziness starts looking very high indeed.
Of course, many web traditionalists point to their great saviour, the JIT compiler. Sure, a JIT compiler does produce executable object code that runs modestly quickly. But CPUs are complicated beasts these days, with pipelines, caches, etc. I'd put good money on most JIT compilers' object code not being as fast as that produced by, for example, Intel's native compiler. Intel build the chips; the writers of TraceMonkey, V8, etc. didn't. If your JIT-compiled code is 5% less efficient than a natively compiled app, then for a really big data centre that could amount to millions in electricity and inventory bills a year, every year. You have to wonder how much properly written native application code you could get written for that much money.