This is something I'm looking at as well -
Too many app teams insist that their code "needs more RAM" every time there's an issue - the result being that we now have ridiculous expectations on our virtualization environment. Next iteration, same issues, and we'll have to cope with the resulting misguided stack requirements. Surprisingly, I've found that the 4 VMs on my laptop can produce unexpectedly accurate guidance on the performance of server-deployed apps - usually indicating that the JVMs are spending more time recovering after bad code executes than doing actual work.
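As a toy illustration of the kind of "bad code" I mean (this sketch and its numbers are mine, assuming the time sink is exception churn - the JVM unwinding and filling in stack traces on every throw - not any particular app's actual bug):

    // Hypothetical sketch: a hot path that throws on every call spends its
    // cycles in the JVM runtime (stack-trace fill-in, unwinding), not in the
    // application logic - and no amount of extra RAM changes that.
    public class ExceptionChurn {
        static int parseOrZero(String s) {
            try {
                return Integer.parseInt(s);   // throws on every bad input
            } catch (NumberFormatException e) {
                return 0;                     // swallowed, but the throw already cost us
            }
        }

        public static void main(String[] args) {
            String[] inputs = { "42", "not-a-number" };
            for (String input : inputs) {
                long start = System.nanoTime();
                int sum = 0;
                for (int i = 0; i < 1_000_000; i++) {
                    sum += parseOrZero(input);
                }
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("input=%-14s sum=%d  %d ms%n", input, sum, elapsedMs);
            }
        }
    }

Run it and the throwing input is dramatically slower than the clean one - exactly the sort of thing 4 VMs on a laptop will show you before anyone asks for more RAM.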
I've got a new firewall/router/VPN "block of plastic" - it's not much larger than 4" square and 2" deep. I've yet to put all the bits on the SD card that will be its boot disk, but it has 1 GB of RAM and should have more than enough CPU to do what we need in the house (and I have teenagers). Comparing this to the resources I'm using in the work world, I'm really starting to think that corporate IT is going to go through another three or four iterations before they start getting the idea that "spinning up a new box" is *really* freaking overkill for 99.9% of what they're doing.
The problem a lot of corporate environments are going to have with this is siloing: one set of walls separates the networking team from the systems admin team, another walls off the DC operations folks, and the security teams are behind yet another, to boot. These walls are the single biggest barrier to getting some *really* decent automation and virtualization into play these days. On top of that, you may have to completely pave over old processes, standards, and implemented process controls that will no longer apply, or simply aren't usable, in an automation/virtualization world.
(We need an icon for "in dire need of a professional massage therapist" -- a group of juvenile relatives assisted me in destroying my back this weekend: gotta love the kids)