Hmm... it used to be a good idea, TPM.
Your idea of building a cloud of POWER 775s is actually one that we examined where I work, back in 2006, although back then the POWER 575 was our intended platform for a shared environment.
Today we use POWER 770s. The problem with using the HPC nodes for commercial workloads is the RAM-per-core ratio. The POWER 775 has 256 cores to 1 TB of memory, i.e. 4 GB of memory per core. That is way, way too little for today's commercial applications, which is what UNIX systems run these days.
Now, for something like an SAP system and its databases (especially if you use DB compression), you will normally have somewhere between 8-16 GB of RAM per core, plus RAM for the OS and virtualization. Furthermore, because you want to drive up utilization on your platform, you will normally overcommit your processor resources, usually by a factor of 2-4.
If you look at individual virtual machines that have been sized for peak workloads, their average utilization is usually somewhere in the 10-25% range.
So the actual physical amount of RAM you need per core is roughly the overcommit factor times the non-overcommitted amount of RAM per core; hence usually somewhere between 2-4 times 8-16 GB, i.e. 16-64 GB of RAM per core. So on a machine like the POWER 770-MMC with 64 cores you would need 64 times 16-64 GB of RAM, plus overhead, or realistically something like 2-4 TB of RAM. For a machine like the 775 this would be in the 8-16 TB range.
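The back-of-the-envelope sizing above can be sketched as a few lines of code. The numbers are the rough ranges from this comment (8-16 GB per core, 2-4x overcommit), not vendor figures, and the helper names are mine:

```python
# Rough capacity-sizing sketch: physical RAM needed when CPU is overcommitted.
# required RAM per physical core = overcommit factor * RAM per (virtual) core.

def ram_per_core_gb(overcommit, ram_per_vcore_gb):
    """Physical RAM (GB) per physical core for a given overcommit factor."""
    return overcommit * ram_per_vcore_gb

def machine_ram_tb(cores, overcommit, ram_per_vcore_gb):
    """Total physical RAM (TB) for a machine, before OS/hypervisor overhead."""
    return cores * ram_per_core_gb(overcommit, ram_per_vcore_gb) / 1024

# 64-core machine (e.g. a POWER 770-MMC), 2-4x overcommit, 8-16 GB per core:
print(machine_ram_tb(64, 2, 8), machine_ram_tb(64, 4, 16))    # 1.0 4.0 TB

# 256-core machine (e.g. a POWER 775 drawer), same assumptions:
print(machine_ram_tb(256, 2, 8), machine_ram_tb(256, 4, 16))  # 4.0 16.0 TB
```

The full computed range for the 64-core box is 1-4 TB; the 2-4 TB figure in the text is the narrower "realistic" middle of that range.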
So the HPC nodes aren't really suitable for commercial workloads these days, IMHO. Sure, there is memory compression, but that won't give you enough, IMHO.
Now, as for the DCM modules, I would guess these will be 2x6-core modules, not full 2x8-core modules, but let's see what is announced... next week, is it?
Thanks for a good article, btw.