Spindle:CPU ratio bad for Hadoop
Putting on my Hadoop committer hat, here are some notes on running Hadoop on this box, independent of any other HPC uses:
1. Ignoring point (3) below, you don't need to "port" Apache Hadoop to the system, provided you can bring up RHEL and Java on it, ideally a 64-bit Sun JVM, that being the only one the Hadoop team opts to care about.
2. There's not enough storage. 24 HDDs for that many CPUs? The current generation of Hadoop servers puts 12x 3.5" HDDs in a 1U chassis alongside 6-12 x86-64 cores, giving a ratio of 1 CPU to 1 or 2 HDDs. That's massive storage capacity and good IO bandwidth, with good CPU. Why? Storage capacity with some local datamining is the driving need. It's why HDD and not SSD is the storage, and why 3.5" disks are chosen over 2.5": it brings your cost/petabyte down.
3. The use of independent servers gives you better failure modes. If you built a rack out of these systems, you would need to somehow change Hadoop's topology logic so it knows that a set of server instances are inter-dependent, ensuring that replicas of a file's blocks (usually 128+ MB each) are not stored on instances sharing the same physical server. There's been discussion of making the placement policy pluggable, so Quanta could write a new Java class to implement placement differently, but as the plugin interface isn't there yet, they can't have done so.
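To illustrate what such a pluggable policy might look like, here is a minimal sketch of chassis-aware replica placement. This is not the real Hadoop API (which, as noted, doesn't expose this plugin point yet); the `DataNode` class, `chassisId` field, and `chooseTargets` method are all hypothetical names invented for this example.

```java
import java.util.*;

// Hypothetical sketch, NOT the real Hadoop placement API: choose replica
// targets so that no two replicas of a block land on nodes sharing one
// physical chassis. All names here are illustrative assumptions.
public class ChassisAwarePlacement {

    static final class DataNode {
        final String name;
        final String chassisId;  // which physical enclosure hosts this node
        DataNode(String name, String chassisId) {
            this.name = name;
            this.chassisId = chassisId;
        }
    }

    /** Pick up to replicaCount nodes, at most one per chassis. */
    static List<DataNode> chooseTargets(List<DataNode> candidates, int replicaCount) {
        List<DataNode> targets = new ArrayList<>();
        Set<String> usedChassis = new HashSet<>();
        for (DataNode dn : candidates) {
            if (targets.size() == replicaCount) {
                break;
            }
            // add() returns false if this chassis already holds a replica
            if (usedChassis.add(dn.chassisId)) {
                targets.add(dn);
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        List<DataNode> cluster = Arrays.asList(
            new DataNode("node-a1", "chassis-a"),
            new DataNode("node-a2", "chassis-a"),  // same enclosure as node-a1
            new DataNode("node-b1", "chassis-b"),
            new DataNode("node-c1", "chassis-c"));
        for (DataNode dn : chooseTargets(cluster, 3)) {
            System.out.println(dn.name + " on " + dn.chassisId);
        }
    }
}
```

With the sample cluster above, `node-a2` is skipped because `node-a1` already occupies `chassis-a`; the three replicas end up on three distinct enclosures, which is exactly the property the topology logic would need to guarantee for multi-node-per-chassis hardware.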