Balancing act for servers that flash the cache

Multiple flash locations in the server-storage stack are upsetting balanced I/O conventions and making overall system design much more difficult. Designing and implementing server-to-storage systems is going to get much harder, because existing assumptions about server and storage I/Os per second (IOPS) handling are …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    oh yeah!

    Grmpf. Seems like it makes sense to put some intelligent decision-making in there. Oh no, wait. Sun's kit already does that. Damn. What was the point of this article again?

  2. Anonymous Coward

    Seeing problems where none exist...

    The fact that caching can be implemented anywhere -- from L1 processor cache to a DRAM write buffer on an HDD -- is NOT a problem.

    Caching (Flash or otherwise) needs to go where the cost/benefit is greatest, and this is a function of the application workload profile and overall system architecture.

    In general, the idea that Flash should go behind an ANSI T10 block-device interface (as in the "solid-state disk" contradiction-in-terms) is just goofy. There is simply not enough application-related metadata at the block-device level to make any kind of intelligent cache replacement decision. This is why (for example) approaches like Fusion-io's and Sun's FMODs will win in the end (because they are closer to the file-system and application metadata), while STEC and all the other "HDD form-factor" applications of Flash will die (and not soon enough).
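
    To make the metadata point concrete, here is a minimal sketch (the names and structures are illustrative only, not any real driver or file-system API): a block-level cache sees nothing but an address and a length, so recency is about the only eviction signal it has, while a cache sitting above the file system can act on hints that never cross the block-device boundary.

    ```python
    from collections import OrderedDict

    class BlockCache:
        """Illustrative block-level cache: all it ever sees per
        request is an address (LBA) and the data, so plain LRU is
        about the best it can do."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.lru = OrderedDict()  # LBA -> data, in recency order

        def access(self, lba, data):
            # Only the address is available; recency is the signal.
            self.lru.pop(lba, None)
            self.lru[lba] = data
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)  # evict least recent

    class FileAwareCache(BlockCache):
        """Illustrative file-system-level cache: it can use hints
        (file type, expected access pattern) that never reach the
        block device."""
        def access(self, lba, data, hint=None):
            if hint == "sequential-scan":
                return  # don't pollute the cache with one-pass reads
            super().access(lba, data)
    ```

    The point is the information gap, not the eviction policy: any replacement algorithm confined to address-and-length tuples is guessing at access patterns that a file-aware layer simply knows.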

    But in any case, the idea that there should be a "standard" or "agreed upon" place to put Flash is ludicrous -- it all depends on the application.
