Cry havoc and let slip the SSD dogs of war

Fusion-io has a storage technology that, if it succeeds, could wreak havoc amongst the business models of mid-range drive array storage vendors. The thinking goes like this: Fusion-io makes a solid state drive (SSD) that connects to a server's PCIe bus. It doesn't use traditional storage interfaces such as Fibre Channel (FC), …

COMMENTS

This topic is closed for new posts.
  1. Bronek Kozicki
    Thumb Up

    Makes perfect sense and I wish them luck!

    But one has to wonder - how are the SSDs attached to these PCIe cards? And are they user-replaceable at all?

  2. Anonymous Coward
    Paris Hilton

    One last hurdle

    One of the use cases for external storage is clustering in its various guises. Just about any HA solution relies on external storage as a shared resource. Then there's VMware's VMotion and the features that depend on it, like DRS. Fast internal storage will work for many use cases, but what you allude to for EMC in the future, in the form of an appliance, is what most mid-range storage arrays are today: a pair of "servers" running magic sauce, managing access to and availability of data on a bunch of disk drives. IT breaks; it's a good idea not to have it all in one box. Slightly less radical: faster, and maybe cheaper, interconnects could be what's needed. Did Paris say FCoE?

  3. xjy
    Paris Hilton

    Vista ha-ha

    Maybe this will mean Vista will run at what most people consider acceptable speeds?

    Though mind you, if optimal acceleration is what folks are after...

    If I had any M$ shares - or, assuming I had, any left now - I'd cut and run.

    Billy G probably has a safe enough buffer if his system survives this nose-dive, but I wonder how much gold Ballmer has at this moment in time. Not much I hope. Are there any tall enough buildings he frequents for him to jump and splat?

    (Paris for her preparty-party-afterparty pitch and toss with fake president Sheen while shilling for her run for fake president - like cutting out the middle-ware)

  4. Alan Parsons
    Stop

    Great article, bad idea.

    How would you cluster such servers? What about replication and high availability? True, we're talking mid-range and not enterprise, but still... And what about the stability of the data and the machine itself? We don't just RAID disks for performance; often it's for integrity. I understand that SSDs don't have all the moving parts, but I'm sure the MTBF is not infinity! And when it does fail, rather than an HBA and driver layer neatly reporting the error to the OS, I assume it'll just hit your PCIe bus with a non-maskable interrupt and take your machine down with it.

    The whole point of SAN and NAS tech is that it lives outside the box. That's a feature, not a limitation. This smacks of putting all your eggs back in one basket. I understand the application for a single server that currently has a single array hung off the back of it - but unless you're gonna put two of these cards in and RAID1 them at the very least, you need to think about data integrity. If the application is single server, all about speed, and can live with last night's backup as the recovery point, then I guess it's interesting.
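
    A minimal sketch of the RAID1-style mirroring Alan suggests, assuming the cards show up as ordinary block devices - the /dev/fioa and /dev/fiob names are hypothetical, and a real deployment would use md/LVM or a RAID driver rather than hand-rolled mirroring:

        # Toy mirror across two PCIe SSDs exposed as block devices.
        # Device names are hypothetical; needs root; illustration only.
        import os

        DEVICES = ("/dev/fioa", "/dev/fiob")

        def mirrored_write(offset: int, data: bytes) -> None:
            """Write the same block to both devices so either copy can serve reads."""
            for path in DEVICES:
                fd = os.open(path, os.O_WRONLY)
                try:
                    os.pwrite(fd, data, offset)
                finally:
                    os.close(fd)

        def resilient_read(offset: int, length: int) -> bytes:
            """Read from the first device that answers; fall back to its mirror."""
            for path in DEVICES:
                try:
                    fd = os.open(path, os.O_RDONLY)
                    try:
                        return os.pread(fd, length, offset)
                    finally:
                        os.close(fd)
                except OSError:
                    continue  # this copy failed - try the mirror
            raise IOError("both mirrors failed")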

  5. Steven Jones

    A backward step...

    No doubt from a minimising-I/O-latency standpoint this works well. Put it in a PC and it should fly. However, storage is put on the end of networks for reasons - these include data sharing, clustering, storage virtualisation and faster provisioning. So by all means put your server boot and local storage on the PCIe bus, but for larger-scale and more sophisticated users, some form of storage interconnect is still required.

  6. Ian Michael Gumby
    Thumb Up

    Interesting.

    If I read the story correctly, you would have the disks (SSDs) mounted like regular disks, on a PCIe controller.

    It sounds like RAID is in the future, or possibly some combination of SSD with a RAID SATA/SAS configuration.

    I wonder if you could create a "hot swappable" configuration. Based on the size of the SSDs, your mission-critical database would be composed of a mix of SSD and SAS/SATA drives.

    There is still a lot of hype factor at work, but I'd love to see it in a system.

  7. Alain Moran
    Alert

    Added value

    Perhaps the added value will come from software which optimises the usage of the SSDs?

    IIRC there are a limited number of read/write cycles that can be performed on SSDs, and I would imagine that this is related to the individual chips within the 'disk' ... if a software solution can be found that ensures writes are distributed across the 'surface' of the 'disk' so as to optimise/extend the life of the SSD, then I think the vendor of this 'app' would have an advantage over the others (see the sketch after this comment).

    Remember folks ... you heard it here first, and if you make a million I want a cut ;D
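
    For what it's worth, a toy sketch of the wear-levelling Alain describes - real SSD controllers are far more sophisticated, but the remapping idea looks roughly like this:

        # Toy wear-leveller: redirect each logical write to the least-worn
        # free physical block so erase cycles spread evenly across the chips.
        class WearLeveller:
            def __init__(self, num_blocks: int):
                self.erase_counts = [0] * num_blocks        # wear per physical block
                self.mapping = {}                           # logical -> physical
                self.free_blocks = set(range(num_blocks))   # unmapped physical blocks

            def write(self, logical_block: int) -> int:
                # Return the previously mapped block to the free pool
                # (a real device would erase it lazily).
                old = self.mapping.get(logical_block)
                if old is not None:
                    self.free_blocks.add(old)
                # Pick the free block with the fewest erases.
                target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
                self.free_blocks.remove(target)
                self.erase_counts[target] += 1
                self.mapping[logical_block] = target
                return target

        # Hammering one logical block now spreads wear across all of them:
        wl = WearLeveller(num_blocks=8)
        for _ in range(80):
            wl.write(0)
        print(wl.erase_counts)   # roughly even, about 10 erases per block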

  8. Charles

    @Alan Parsons

    I think part of the problem is that these older solutions do offer data protection, but *just as important* is access time. That becomes critical if the server is, say, a transaction server that has to get through millions of transactions per minute - if not *per second*. In that case, you *must* have the eggs in one basket or you can't get to them quickly enough.
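
    The back-of-envelope numbers behind Charles's point, with illustrative (assumed) latencies rather than measured ones:

        # Rough latency budget for a serialised transaction stream.
        tx_per_minute = 1_000_000
        tx_per_second = tx_per_minute / 60          # ~16,667 tx/s
        budget_us = 1_000_000 / tx_per_second       # ~60 microseconds per transaction

        disk_latency_us = 6000    # assumed: ~6 ms seek + rotation on a 15K spindle
        flash_latency_us = 50     # assumed: tens of microseconds for a flash read

        print(f"budget per tx:  {budget_us:.0f} us")
        print(f"15K disk uses:  {disk_latency_us / budget_us:.0f}x the budget")
        print(f"flash uses:     {flash_latency_us / budget_us:.2f}x the budget")
        # Real servers overlap I/Os, but the two-orders-of-magnitude latency
        # gap is why "eggs in one fast basket" can be the only way to keep up.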

  9. Anonymous Coward
    Linux

    Disruptive

    Why does it seem like folks are talking about these PCIe SSDs as though they're limited to DAS applications only? Did I miss something in this article?

    I would suggest checking out project Quicksilver from IBM. This prototype system used Fusion's SSDs with IBM's xSeries servers to demonstrate 4 terabytes of network-attached storage (FCoE) with a response time of less than 1 millisecond. If this is true, it's very disruptive technology.

    And assuming I understood this article, these cards from Fusion make it possible for any of our project teams to design high-performance storage into their projects. For a lot of us, life will change if our architecture teams can use standard servers for I/O performance rather than proprietary systems from vendors whom I won't mention.

  10. Roland

    Would be used for PAM by NetApp

    Isn't the PAM from NetApp just a PCIe card?

    Moving from RAM to flash should not be a big deal, since the cache is block-oriented anyway (see the sketch after this comment).

    This would be a good way to improve the latency and IOPS of an array without spending $$$ on 15K spindles.

    Or maybe NetApp would rather have us spend $$$ on 15K spindles?
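
    A sketch of why the swap Roland describes is plausible: a block-oriented read cache only needs get-by-block-number semantics, so the backing store could be DRAM or flash without the caller noticing - only the hit latency changes. This is a generic LRU cache, not NetApp's actual PAM design:

        from collections import OrderedDict

        class BlockCache:
            """Least-recently-used cache of fixed-size blocks."""
            def __init__(self, capacity_blocks, read_from_disk):
                self.capacity = capacity_blocks
                self.read_from_disk = read_from_disk    # fallback to the spindles
                self.cache = OrderedDict()              # block_num -> data, LRU order

            def read(self, block_num: int) -> bytes:
                if block_num in self.cache:
                    self.cache.move_to_end(block_num)   # hit: served at cache speed
                    return self.cache[block_num]
                data = self.read_from_disk(block_num)   # miss: pay spindle latency
                self.cache[block_num] = data
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)      # evict least recently used
                return data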

