
Flash card latency: Time to get some marks on benches

Fusion-io flash cards outperformed a slew of competitors in a Marklogic NoSQL benchmark reported by StorageReview - for which much thanks. The benchmark rig is detailed in the article, which notes that tested storage configs must have more than 650GB of usable capacity, meaning individual flash devices can be bunched up. …

COMMENTS

This topic is closed for new posts.
Anonymous Coward

61.398ms?

How slow is that really, can anyone explain?


It's pretty slow. However, a lot depends on the application. If it's single-threaded, i.e. the application waits for the I/O to complete before doing anything else, then 60 ms of latency per I/O is going to significantly slow down the application, especially when compared with latencies of a handful of ms.

However, if the application and the OS can issue multiple I/Os without waiting for the previous one to complete, then the difference will be far less noticeable. If you reach the point where the interconnect between the host and the storage device is saturated, there will be no difference at all, and this should be the case with large sequential I/O. Note that multi-threading buys little when the storage is a single disk drive, since competing requests cause a lot of drive head movement. Not an issue with SSDs.

Databases tend to rely on small I/Os, and while the database engine may be able to do multiple things at once, application threads using it are often single-threaded (get something from here, do something with it depending on what it is, put it there, etc. etc.), so in an ideal world disk latency would be zero or as close to it as possible.

It's quite a big subject.
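A back-of-envelope sketch in Python of the point above: with serial I/O the application waits out every latency, while overlapped I/O hides most of it behind in-flight requests. The I/O count and queue depth are illustrative assumptions, not benchmark figures.

```python
# Rough model: time spent on n_ios I/Os of a given latency,
# issued serially vs. with queue_depth requests in flight.
# Numbers are illustrative assumptions, not measurements.

def total_seconds(n_ios, latency_ms, queue_depth=1):
    """Serial I/O (queue_depth=1) pays full latency per I/O;
    overlapped I/O divides the wait across in-flight requests."""
    return n_ios * (latency_ms / 1000.0) / queue_depth

# 10,000 I/Os at the article's 61.398 ms, one at a time: ~10 minutes.
serial = total_seconds(10_000, 61.398)
# Same work with 32 requests outstanding: under 20 seconds.
overlapped = total_seconds(10_000, 61.398, queue_depth=32)

print(f"serial: {serial:.0f}s, overlapped: {overlapped:.0f}s")
```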

Bronze badge

Software designed for HDDs does a poor job with SSDs

The problem is ubiquitous: code designed for slow spinning disks does not serve fast SSDs well. Examples abound:

1. The file I/O stack on Linux and Windows has many layers, all of which add latency.

2. Most apps don't use direct I/O. The memcpy that moves data from kernel buffers to application space can take as long as an SSD read.

3. Interrupt rates with SSDs are very high, which eats compute power through context switching.

4. Block I/O uses 4KB transfers, while many databases need much less. This doesn't matter for HDDs, but it leaves the SAS or PCIe buses near saturation with SSDs.

Databases are a good place to consider the ioMemory model for transfers: updates are small, and avoiding the file I/O stack is a good idea.
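A sketch in Python of why the layering in point 1 matters: per-layer software costs that were noise against a spinning disk dominate against flash. All the microsecond figures here are illustrative assumptions, not measurements of any particular kernel.

```python
# Assumed (illustrative) per-layer software costs for one 4KB read.
STACK_US = {
    "syscall + VFS": 5,
    "filesystem": 10,
    "page-cache memcpy": 3,
    "block layer + driver": 10,
    "interrupt + context switch": 10,
}

def effective_latency_us(device_us):
    """Total observed latency: device time plus the software stack."""
    return device_us + sum(STACK_US.values())

# Against a ~5 ms spinning disk the stack is under 1% of the total;
# against a ~50 us flash read it is over 40%.
hdd_share = sum(STACK_US.values()) / effective_latency_us(5_000)
ssd_share = sum(STACK_US.values()) / effective_latency_us(50)
print(f"stack share of latency: HDD {hdd_share:.1%}, SSD {ssd_share:.1%}")
```

The exact numbers are made up, but the shape of the result holds for any fixed software overhead as device latency shrinks.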


You are dead-on that the interconnect has become the bottleneck - but not due to lots of small I/Os, rather fewer, larger ones, at least for the workloads we have. In fact high IOPS figures are rather unimportant, as are low-latency single (small) block reads. Large allocation units (think MB not KB) see to that. Keeping track of little 8K AUs is so 20th century <grin>.

Rarely do we exceed 3K IOPS during busy times. Our top waits are log reads (due to resync during large materialized view refreshes) and direct path reads - and both do large sequential I/O's. Put ASM in the mix with multiple FC cards with say 6 channels and we can really punish the SAN.

Multi-threaded parallel use is now the database norm, not the exception (at least if you want to get anything done quickly). Add to that lots of RAM, say 256-512 GB (so the former "hot" I/O queens are now pinned in memory), plus lots of flash cache for when SGA RAM overflows (so you can get it back quickly as needed), and yesterday's storage problems are nothing like today's.

So forget about needing crazy high (but small) IOPS, think raw throughput.
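The arithmetic behind that, sketched in Python using the 3K IOPS figure from the comment (the allocation-unit sizes are the comment's examples, not measured values):

```python
# Throughput = IOPS x transfer size. At a fixed 3,000 IOPS, moving
# from 8 KB allocation units to 1 MB ones multiplies bandwidth 128x,
# which is why the interconnect, not the drive, becomes the limit.

def throughput_mb_s(iops, io_size_kb):
    """Sustained bandwidth in MB/s for a given IOPS and I/O size."""
    return iops * io_size_kb / 1024.0

small = throughput_mb_s(3_000, 8)      # ~23 MB/s: trivial for any link
large = throughput_mb_s(3_000, 1024)   # 3,000 MB/s: enough to saturate
                                       # several FC channels at once
print(f"8KB AUs: {small:.1f} MB/s, 1MB AUs: {large:.0f} MB/s")
```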

Bronze badge

Apples & Oranges ?

The thing that concerns me about the benchmarks, is that the drives weren't all configured the same.

Some were RAID 0, some JBODs, and others were one drive partitioned up.

Surely a better comparison would have been to have all the drives configured the same?


Re: Apples & Oranges ?

Indeed. Four drives, each with their own dedicated connection are always going to outperform a single drive, no matter how it is partitioned.
