Last week, thousands of you did the right thing by tuning into our first Meat Cast on blade servers. You're now smarter than the average data center hack and have the blade scene down. Go ahead - lord your knowledge over friends, family and co-workers. This week, Chris Hipp and I return with an assault on high performance …
This is an interesting discussion which ranged from the practical to the esoteric (quantum computing). Electricity use is certainly a big issue, even for smaller facilities. In the UK at least, the cost of powering and cooling a machine over a 4-year life can now exceed its capital cost. Electricity costs nearly £1 per watt per year; in a space cooled by conventional air conditioning, about £1.50 per watt of equipment per year. Users should be factoring this cost into all purchases.
GPUs, accelerators like those from Clearspeed, and novel architectures like Sun CoolThreads all have a contribution to make in particular problem areas.
For general-purpose computing there are simple savings which can be made. For example, AMD Opteron "HE" chips run at 68 watts rather than 95 watts. A second power supply in a 1U server adds around 20 watts to the electricity consumption. On subsystems, 2.5" disks use less power than 3.5" disks, and in general, portable technology uses a lot less power than desktop or server technology.
On the desktop, the most efficient "PC" is actually the Mac Mini, which makes use of portable technology to draw only 37 watts when working hard, 22 watts when idle and 2 watts when hibernating. A typical PC will use more than double this amount of power, so the obvious first step is to make sure PCs hibernate when idle.
Focusing on total life cost, including power, rather than on capital cost could bring a major change to the market.
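To make the total-life-cost point concrete, here is a minimal sketch using the per-watt figures quoted above (~£1.50 per watt per year for power plus conventional air-conditioned cooling, over a 4-year life). The server price and wattage are hypothetical assumptions for illustration, not figures from the original comment.

```python
# Rough total-cost-of-ownership sketch using the figures quoted above:
# ~£1.50 per watt per year for electricity plus air-conditioned cooling,
# over a 4-year service life. Capital cost and wattage are assumptions.

def lifetime_cost(capital_gbp, watts, years=4, gbp_per_watt_year=1.50):
    """Capital cost plus power-and-cooling cost over the machine's life."""
    energy = watts * gbp_per_watt_year * years
    return capital_gbp, energy, capital_gbp + energy

# Hypothetical 1U server: £2,000 capital, drawing 400 W.
capital, energy, total = lifetime_cost(2000, 400)
print(f"capital £{capital}, power+cooling £{energy:.0f}, total £{total:.0f}")
# 400 W x £1.50 x 4 years = £2,400 — already more than the £2,000 capital.
```

On these assumed numbers, power and cooling alone exceed the purchase price, which is exactly why pricing on capital cost alone is misleading.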
I think people in the HPC market are underestimating the ability of GPGPU. By the looks of things, Nvidia are really keen on taking a big chunk of the HPC market share.
It will be very interesting to see what the next generation of GPUs can do (after Tesla 1.0).