SGI uses a term in its CloudRack documentation: stranded power. I've borrowed it and applied it to resource utilization as stranded capacity (whether it's CPU, memory, I/O, etc.). Reducing or eliminating stranded capacity is what it's all about these days: driving up capacity utilization by shrinking those islands of capacity you can't use because of some other constraint (you may have plenty of CPU but not enough memory, for example).
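To make the idea concrete, here's a rough back-of-the-envelope sketch of how one exhausted resource strands another. This is my own illustration with made-up numbers, not anything from SGI's documentation:

```python
# Illustrative only: how a memory constraint strands CPU capacity.
# Host sizes and app requirements below are hypothetical.

def stranded_capacity(total_cpu, total_mem_gb, app_cpu, app_mem_gb):
    """Return (instances, stranded_cpu, stranded_mem_gb) for one host."""
    # The binding constraint is whichever resource runs out first.
    fit = min(total_cpu // app_cpu, total_mem_gb // app_mem_gb)
    return fit, total_cpu - fit * app_cpu, total_mem_gb - fit * app_mem_gb

# A 16-core, 32 GB host running apps that each want 2 cores and 8 GB:
# memory caps you at 4 instances, leaving 8 cores you can't use.
instances, idle_cpu, idle_mem = stranded_capacity(16, 32, 2, 8)
print(instances, idle_cpu, idle_mem)  # 4 8 0
```

Those 8 idle cores are the stranded capacity; consolidation (via virtualization or otherwise) is about mixing workloads so the leftovers on each axis get used.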
Recent advances in CPU tech drive this home even further: most apps simply cannot scale well enough to tax a typical 8-24 core server by themselves. From what I've seen it will be some time before they do; that level of optimization is pretty complex, and the priorities just aren't there when "workarounds" like virtualization can fill the gap in the meantime.
I wrote recently about "testing the limits of virtualization" (google it), where I went off on a tangent exploring how far you can push the hypervisors. I've spent a lot of time this year thinking about how best to drive capacity utilization upward, and the savings you can get are pretty amazing, even compared against "legacy" strategies such as deploying "cheap white box" servers for your apps.