Fusion-io ups SSD ante

Solid state storage maker Fusion-io has upped the ante in the SSD game, launching the ioDrive Duo, a PCI-Express peripheral card with 640GB of capacity. The card is a kicker to the company's first generation of SSDs and doubles the capacity by cramming two modules onto the same board. The ioDrive Duo is rated at 186, …

COMMENTS

  1. Dave
    Thumb Up

    So, $19,200?

    I'll take two!

  2. Craig Vaughton
    Stop

    Get real

    Never mind bigger SSD drives, how about producing more of the smaller drives, so that at least one dealer in the UK actually has some in stock? As an aside, those would be the ones that people can actually afford!

  3. Steven Jones

    Great but

    I can see plenty of use for these things in transactionally intensive systems. It's not primarily the I/O rate - at the expense of enough spindles you can get the IOPS or data rate. What really, really matters is the latency. 50 microseconds is a factor of 100 better than physical disks can manage on uncached operations, and perhaps 20 times better than a cached operation over a typical FC SAN array.
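
    To put that spindle trade-off in numbers, a quick sketch (both IOPS figures are rough assumptions, not vendor specs):

    ```python
    # Spindles needed to match one flash card on IOPS (figures assumed).
    DISK_IOPS = 180        # roughly one 15k RPM FC disk on random reads
    CARD_IOPS = 100_000    # order of magnitude for a PCIe flash card

    print(f"Spindles to match on IOPS: {CARD_IOPS // DISK_IOPS}")  # ~555

    # No number of spindles buys back latency though: every uncached read
    # still pays a ~5ms seek, versus ~50us on flash - the 100x factor.
    ```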

    In many cases we improve performance by throwing RAM at the database - even SSDs can't match a logical read. However, cache hit rates never reach 100% unless you can fit the whole DB in memory - you run into the law of diminishing returns. There is a further and much more difficult issue: startup time. You might have a nice 200GB DB cache sitting there, but populating it from a standing start, one random 8KB read at a time, can take tens of minutes, during which time your application servers are choking on the backlog of users trying to get back on. With a few of these things sitting on PCIe buses that cache will fill much, much faster, and during that startup period users won't see response times extended by a factor of 10 whilst the cache warms up.
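
    A back-of-envelope sketch of that warm-up arithmetic (cache and block sizes as above; the IOPS rates are my own rough assumptions):

    ```python
    # Cache warm-up time, one random 8KB read at a time (figures assumed).
    CACHE_BYTES = 200 * 1024**3          # the 200GB DB cache above
    BLOCK_BYTES = 8 * 1024               # 8KB random reads
    blocks = CACHE_BYTES // BLOCK_BYTES  # ~26 million reads to fill it

    SAN_IOPS = 50_000    # assumed: a decent FC array of spinning disks
    SSD_IOPS = 500_000   # assumed: a few PCIe flash cards in the server

    print(f"SAN warm-up: {blocks / SAN_IOPS / 60:.1f} min")   # ~8.7 minutes
    print(f"SSD warm-up: {blocks / SSD_IOPS / 60:.1f} min")   # ~0.9 minutes
    ```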

    This sort of problem happens if you have an uncontrolled failover in an HA cluster; it also happens if you have to start a DR instance, and even Oracle RAC can suffer severe periods of "brown-outs". Also, if putting a few TB of this stuff into a big-iron server enables you to halve the amount of (by PC standards) incredibly expensive RAM you are using, it might even cost in.

    However, there is an enormous problem - as these sit inside a server, they are fundamentally unsuited to shared-storage clusters. That's a big, big problem for big enterprise systems. Putting SSDs into fibre SANs introduces a major bottleneck: current arrays don't come near to coping with this number of IOPS for a given amount of storage. Also, push this through a normal I/O stack in the server, FC cards, SAN switches and arrays, and you are looking at latencies approaching 1ms. So current shared-storage architectures introduce perhaps 20x the latency this can do, and I rather suspect a similar proportion of the potential IOPS. Put this in an array and you might get a 10x improvement in (uncached) latency whilst the technology could do 100x, and probably something similar on IOPS.
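
    Roughly where that 20x comes from, as an illustrative latency budget (every per-hop figure below is an assumption, for the shape of the argument only):

    ```python
    # Illustrative latency budget for one uncached random read via a SAN.
    latency_us = {
        "flash device itself":        50,   # ioDrive-class access time
        "host FC HBA + driver stack": 150,
        "SAN switch hops":            100,
        "array controller + cache":   400,
        "queueing under load":        250,
    }

    total = sum(latency_us.values())
    print(f"End-to-end: ~{total}us vs ~50us in-server "
          f"({total // 50}x worse)")  # ~950us, ~19x
    ```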

    In the absence of a very low latency shared-storage version of this architecture, maybe the answer is synchronous replication of databases across two machines. Do it across InfiniBand and you might see a 0.5ms addition for synchronous replication to a second instance of the DB. It could work, except that very write-intensive DBs would wear that cost on every commit, and it wouldn't be cheap.

    So how much am I quoted for 20TB of this stuff? I have the ideal app...

  4. JB
    Thumb Up

    Mighty oaks...

    Come on, it's early days. I remember back in the late 80s when the PCs at our college had hard discs installed, and hefty 20MB drives at that! Perhaps it won't be all that long before the hard drives of today look as dated as punched paper tape and magnetic drums.

This topic is closed for new posts.