
Data Direct offers native file system

DataDirect, a shipper of very high-speed block-access storage to the high performance computing (HPC) and media worlds, is now offering native file access. The company has seen the way two winds are blowing. The first is that multi-core commodity CPUs are outpacing what it can do with its in-house FPGA hardware. The second wind …

COMMENTS


Lower IOPS inherent with S2A, not necessarily a problem

The S2A approach trades IOPS for strong guarantees on achievable streaming bandwidth and data integrity. All the disks are organised as 10-disk, ECC-striped virtual disks (think 8+2 RAID 6, but with ECC instead of simple parity, and with 512 *byte* stripe segments); every access is a full-stripe read or write, with the ECC always written and read. It follows that the achievable small random read IOPS with that approach is about 1/8 of what 8+2 RAID 6 can manage, and the small random write IOPS about 3/8.
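
To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The per-access operation counts are assumptions chosen to reproduce the 1/8 and 3/8 figures above (counting data/parity disk writes and ignoring the read half of a RAID 6 read-modify-write), not vendor numbers:

    # Relative small-random IOPS: S2A full-stripe access vs conventional 8+2 RAID 6.
    # Assumption: compare the disk operations consumed per logical I/O.
    DATA_DISKS = 8                 # 8 data + 2 ECC/parity drives per virtual disk

    raid6_read_ops  = 1            # small read touches one data disk
    raid6_write_ops = 3            # small write updates data + P + Q
    s2a_read_ops    = DATA_DISKS   # the full stripe is always read
    s2a_write_ops   = DATA_DISKS   # the full stripe is always written

    print("read ratio :", raid6_read_ops / s2a_read_ops)    # 0.125 -> 1/8
    print("write ratio:", raid6_write_ops / s2a_write_ops)  # 0.375 -> 3/8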

Why would they set it up this way? Well, if you are after streaming bandwidth rather than IOPS you are going to be issuing full-stripe reads and writes in any case, and this way you pay no penalty when up to two disks in any virtual disk pack in or stage a go-slow. You wouldn't put a transaction-processing database on it unless you were desperate or stupid, because that's not what it was designed to do.

Read/modify/write cycles don't tend to happen with S2A because any modern filesystem writes data in 4k-or-larger chunks, and a full stripe just happens to hold 8 data sectors, which is 4k of data. FPGAs are great at slicing and dicing data in fiddly ways - they are fine with doing ECC on sector-sized chunks, as opposed to the larger chunks that work well for software RAID 5/6.
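
A quick sanity check of that 8 x 512-byte arithmetic (the 4k filesystem block size is the assumption from the paragraph above):

    SECTOR_BYTES = 512
    DATA_SECTORS = 8                       # 8 data + 2 ECC sectors per stripe
    stripe_bytes = SECTOR_BYTES * DATA_SECTORS
    fs_block     = 4096                    # typical smallest filesystem write

    assert stripe_bytes == fs_block        # an aligned 4k write fills a whole
                                           # stripe, so nothing needs reading
                                           # back before the ECC is recomputed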

Claiming that the S2A approach is falling behind on IOPS compared to RAID 5 or RAID 6 arrays is like dissing an efficient and comfortable people mover because it can't post a blinding quarter-mile time. If you want high IOPS and "works until it doesn't" QoS, go with RAID 5 or 6. If you want end-to-end data integrity and streaming bandwidth, look at S2A or something based on ZFS with mirrors or RAIDZ[2].


Sent to me and posted on ...

Chris,

... at times you get certain things wrong.

The SPEC SFS2008 benchmark is an NFS/CIFS benchmark. GPFS and Lustre are clustered filesystems with a native access protocol that has nothing whatsoever to do with NFS and, in both cases, works very differently (more like pNFS). These file systems also come with a highly optimized network layer. For example, the core of Lustre is really the LNET layer, which provides modular support for multiple network types, functions as an abstraction layer over RDMA, and is similar to Sandia's Portals message-passing API. All this is heavily optimized for large data transfers and lets you reach performance levels of several hundred GB/s (not several GB/s). Not quite the SPEC SFS2008 world :) There really is no SPEC benchmark that would make sense here...
