2 posts • joined 26 Jun 2012
It's all about picking the most appropriate criteria
Of course, the fun thing about performance claims of any type is that they all depend on what your criteria are. The comparisons in this article are primarily focused on performance per rack, which actually favors the highest-density solutions more than it does the highest-performance solutions. I have yet to hear a prospect say "what I really need is xx GB/s per rack" – instead they say "I need a specific amount of usable storage capacity that does at least xx GB/s", and then there's a multi-vendor race to compete for the business.
At Panasas, for our "world's fastest" claim we have used the metric that we believe has the most relevance to customers: file system performance per disk – how much measurable, delivered parallel file system performance across the network to the client is possible per 7.2K SATA drive in the system. For large-file throughput workloads accessed by a typical HPC Linux cluster, our customers can reproducibly measure 1600 MB/s of write throughput from a single 20-drive ActiveStor 12 shelf using the open source IOR benchmark across an optimal network setup, and this result scales near-linearly with additional shelves. That's a full 80 MB/s per SATA drive in delivered file system performance. To the best of our knowledge, that is still the world's fastest, and it allows Panasas to offer more performance for any specific amount of capacity.
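For anyone who wants to sanity-check the arithmetic, here is a minimal sketch. The shelf throughput and drive count are the figures quoted above; the scaling loop simply illustrates the near-linear scaling claim rather than modelling any real deployment.

```python
# Figures quoted in the comment above (assumed inputs, not new measurements).
SHELF_WRITE_MBPS = 1600   # IOR write throughput per ActiveStor 12 shelf
DRIVES_PER_SHELF = 20     # 7.2K SATA drives per shelf

# Delivered file system throughput per drive.
per_drive = SHELF_WRITE_MBPS / DRIVES_PER_SHELF
print(f"Per-drive throughput: {per_drive:.0f} MB/s")  # 80 MB/s

# Near-linear scaling: N shelves deliver roughly N times one shelf.
for shelves in (1, 2, 4):
    aggregate = shelves * SHELF_WRITE_MBPS
    print(f"{shelves} shelf(s): ~{aggregate} MB/s aggregate write")
```

The point of the per-drive metric is that it normalizes away rack density, so two systems with very different shelf counts can still be compared on how hard they drive each spindle.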
I think it's also worth noting that, to the best of our knowledge, neither Xyratex nor DDN has ever published independent benchmarks proving their file system throughput numbers with the Lustre or GPFS file systems on top of their hardware. Panasas has provided independent third-party validation to substantiate its claims (the ESG report that Chris' article referenced).
Ultimately, though, what matters much more than performance for most real-world deployments is ease of use and manageability, along with superior reliability, availability and serviceability – all of which are major strengths of Panasas ActiveStor.
Sr. Director of Product Marketing
Enterprise IT needs are just different from HPC
Hi! Some good comments here. The bottom line is that the requirements for Enterprise IT and HPC are normally very different from each other. Consider an Enterprise IT-focused, flash-based storage system with features like compression and de-duplication: it's pretty unlikely that such a product would be equally applicable to HPC storage. Flash is simply too costly to be used for the predominantly large-file throughput requirements in HPC, where storage is usually measured in a combination of GB/s and $/TB. However, there is a very interesting role for flash memory in accelerating small-file, IOP-focused workloads in HPC. The same goes for speeding up file system metadata performance, which benefits both large-file throughput and small-file IOP HPC workloads.