
I'm the world's fastest! No, I am! And I'm staggering, too!

ISC 2012 Xyratex has formally launched its ClusterStor high-performance computing drive arrays, saying it's the industry's fastest storage array for HPC. At the other end of the scale, HP has revved its X5000 NAS filer, upping capacity and adding iSCSI SAN access. Xyratex' ClusterStor 6000 is a pre …

COMMENTS


Oh, Goody!!

There I was getting worried about storage bottlenecks in terapixel image processing applications.

What wonderful times we live in.


It's all about picking the most appropriate criteria

Of course, the fun thing about performance claims of any type is that everything depends on your criteria. The comparisons in this article focus primarily on performance per rack, which actually favors the highest-density solutions more than the highest-performance ones. I have yet to hear a prospect say “what I really need is xx GB/s per rack” – instead they say “I need a specific amount of usable storage capacity that does at least xx GB/s”, and then there’s a multi-vendor race to compete for the business.

At Panasas, for our "world's fastest" claim we have used the metric that we believe has the most relevance to customers: file system performance per disk – how much measurable delivered parallel file system performance across the network to the client is possible per 7.2K SATA drive in the system. For large file throughput workloads accessed by a typical HPC Linux cluster, our customers can reproducibly measure 1600MB/s write throughput from a single 20-drive ActiveStor 12 shelf using the open source IOR benchmark across an optimal network setup, with this result scaling near-linearly with additional shelves. That’s a full 80MB/s per SATA drive in delivered file system performance. To the best of our knowledge, that is still the world’s fastest and allows Panasas to offer more performance for any specific amount of capacity.
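For anyone who wants to sanity-check that arithmetic, here is a minimal sketch – plain Python, not Panasas tooling – using only the figures quoted above; the scaling projection is idealised rather than measured:

    # Illustrative only: per-drive throughput and an idealised scaling projection,
    # based on the figures quoted above (1600MB/s per 20-drive ActiveStor 12 shelf).

    SHELF_WRITE_MBPS = 1600   # claimed IOR write throughput per shelf
    DRIVES_PER_SHELF = 20     # 7.2K SATA drives per shelf

    per_drive = SHELF_WRITE_MBPS / DRIVES_PER_SHELF
    print(f"Delivered write throughput per drive: {per_drive:.0f} MB/s")  # -> 80 MB/s

    # Aggregate if scaling were perfectly linear; real deployments will land
    # somewhat below this as shelf counts grow.
    for shelves in (1, 4, 16, 64):
        print(f"{shelves:3d} shelves -> ~{SHELF_WRITE_MBPS * shelves / 1000:.1f} GB/s (idealised)")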

It's also worth noting that, to the best of our knowledge, neither Xyratex nor DDN has ever published independent benchmarks substantiating their file system throughput numbers with Lustre or GPFS running on top of their hardware. Panasas has provided independent third-party validation of its claims (the ESG report that Chris' article referenced).

Ultimately, though, what matters much more than raw performance for most real-world deployments is ease of use and manageability, along with superior reliability, availability and serviceability – all major strengths of Panasas ActiveStor.

Geoffrey Noer

Sr. Director of Product Marketing

Panasas, Inc.

Anonymous Coward

In my experience...

In the GPFS world, the most relevant thing is not what each individual storage subsystem can achieve. Knowing what each individual LUN (often a RAID-5 set sharing a subsystem with several others) can do in a given configuration does help with the design, though: it drives how many subsystems you need to hit a particular performance target, and it helps you understand the costs, the heat output, things like that. But it's not the primary factor in how you get high performance out of your storage system.
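As a rough illustration of that sizing step – all numbers below are hypothetical, and this is plain Python rather than any GPFS tool – the per-LUN figure mainly tells you how many building blocks a bandwidth target implies:

    # Hypothetical back-of-envelope sizing: how many LUNs / subsystems does a
    # given aggregate bandwidth target imply, given a measured per-LUN rate?
    import math

    TARGET_GBPS = 50.0        # hypothetical aggregate bandwidth target
    PER_LUN_MBPS = 350.0      # hypothetical measured streaming rate per RAID-5 LUN
    LUNS_PER_SUBSYSTEM = 8    # hypothetical number of LUNs per storage subsystem
    HEADROOM = 0.8            # plan to run each LUN at ~80% of its peak

    usable_per_lun_gbps = PER_LUN_MBPS * HEADROOM / 1000
    luns_needed = math.ceil(TARGET_GBPS / usable_per_lun_gbps)
    subsystems_needed = math.ceil(luns_needed / LUNS_PER_SUBSYSTEM)

    print(f"LUNs needed:       {luns_needed}")        # 179 with these numbers
    print(f"Subsystems needed: {subsystems_needed}")  # 23 with these numbers

The per-LUN number feeds the design, in other words, but the interesting engineering is in everything layered above it.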

Overall system performance to a single file is a better measure of an HPC storage system. Other things, like how many metadata operations a system can handle (that is, how quickly you can create or delete thousands or millions of files), are also important.
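A crude way to get a feel for the metadata side – nothing like a proper benchmark such as mdtest, which drives many clients in parallel; this is a single-client, single-directory Python probe with a hypothetical file count:

    # Rough single-client metadata probe: time N file creates and N deletes.
    import os, time, tempfile

    N = 10000                                     # hypothetical file count
    workdir = tempfile.mkdtemp(prefix="mdprobe_")

    start = time.time()
    for i in range(N):
        open(os.path.join(workdir, f"f{i:07d}"), "w").close()
    create_rate = N / (time.time() - start)

    start = time.time()
    for i in range(N):
        os.unlink(os.path.join(workdir, f"f{i:07d}"))
    delete_rate = N / (time.time() - start)

    os.rmdir(workdir)
    print(f"creates/s: {create_rate:.0f}   deletes/s: {delete_rate:.0f}")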

Back to the bandwidth "record" claims - for context, there are several production installations running at over 100GB/s to a single file, and some have been doing so for quite some time.

Here's a paper from Livermore about their 2006 exploits with GPFS, showing 129GB/s (write) and 153GB/s (read) against a 122GB/s bandwidth requirement:

https://e-reports-ext.llnl.gov/pdf/341493.pdf

I imagine there are similar publications about Lustre, although I've not been involved in those personally.

Getting excited about how many MB/s a single storage subsystem can handle is a bit like getting excited about how much coal you can shove into a single carriage of a train. It's technically interesting, but most people with a big requirement would simply put another carriage on the train.

This topic is closed for new posts.