All hard drive arrays will mutate into flashy faster hybrids

All hard drive storage arrays will become flash hybrids, with non-volatile memory fitted internally or bolted on externally. These mutants will fight all-flash arrays with better data management and server flash cache integration. The aim of the game is to get more revs out of the engines: applications should run faster, more …

COMMENTS

This topic is closed for new posts.
  1. AbortRetryFail

    Why is it taking so long

    A hybrid solution has been the obvious choice for years now, ever since SSDs started hitting the market, so I'm confused as to why it has taken so long. I would have predicted that pretty much all HDDs would be hybrid by now, but clearly they aren't.

    (Don't flame me - I really am genuinely confused as to why they aren't more prevalent. Is it price?)

    1. Anonymous Coward
      Anonymous Coward

      Re: Why is it taking so long

      I think price and durability/lifetime have been the two main sticking points.

    2. Yet Another Anonymous coward Silver badge

      Re: Why is it taking so long

      Enterprise customers had their own caching solution.

      Desktop users that wanted more speed bought SSDs for the C drive

      The only customers that needed a hybrid drive were laptop users with only one drive slot who needed speed AND lots of storage = a fairly niche market.

  2. Anonymous Coward
    Anonymous Coward

    Hybrid Array =/= Hybrid Drive. The comment about Hybrid Drives has nothing to do with this.

    We are talking about storage arrays, not consumer drives.

    Take a look at what IBM did with XIV Gen3: SSD cache (up to 6TB) and dynamic caching between RAM cache and SSD cache.

    As flash becomes more cost competitive, I see this trend expanding.

  3. Gordon Fecyk
    Boffin

    If money were no object...

    The tactic of adding spindles to boost array response times is over. It's now a question of where do you put the flash: in servers, in all-flash arrays or in disk drive arrays, or all three places?

    What kind of disk performance would come out of multiple 'spindles' of flash drives? At first, a RAID 0 or RAID 5 SSD array seems the fastest option for those with more dollars than patience. But sober second thought suggests we'd just move the bottleneck from the physical 'disks' to the interface.
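    A rough back-of-the-envelope check makes the point (a minimal sketch with illustrative figures, not benchmarks): eight striped SSDs at an assumed ~500 MB/s each would aggregate to ~4 GB/s, while a single 6Gbit/s SAS link delivers roughly 600 MB/s once 8b/10b encoding is accounted for.

    ```python
    # Back-of-the-envelope: does the array interface cap a striped SSD set?
    # All figures below are illustrative assumptions, not measurements.

    NUM_SSDS = 8
    SSD_MBPS = 500                         # assumed per-drive throughput, MB/s

    # 6 Gbit/s SAS with 8b/10b encoding gives roughly 600 MB/s usable
    LINK_MBPS = 6 * 1000 * (8 / 10) / 8

    aggregate = NUM_SSDS * SSD_MBPS        # raw RAID 0 aggregate
    effective = min(aggregate, LINK_MBPS)  # the link clamps what the host sees

    print(f"Drives can supply : {aggregate} MB/s")
    print(f"Link can carry    : {LINK_MBPS:.0f} MB/s")
    print(f"Host actually sees: {effective:.0f} MB/s "
          f"({effective / aggregate:.0%} of the flash's potential)")
    ```

    With those assumed numbers the host sees about 15 per cent of what the flash could deliver - exactly the bottleneck shift described above (multiple links or a faster fabric change the arithmetic, of course).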

  4. pPPPP

    Hybrid technology is still in its infancy, and there's the argument that as flash prices come down the gap between flash and spinning disk will narrow. However, spinning disk capacities continue to grow, as does demand for capacity.

    It's all about the way it's implemented, in particular the granularity. Some systems only offer automated tiering on a per-volume basis. Others offer sub-volume tiering, but the entire volume tends to have to be in the automated tiering pool. And the level of granularity tends to be large extents, rather than the small chunks of data you actually want on SSD. So you're always going to get a degree of inefficiency (although this will no doubt improve), as the sketch below illustrates.
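    To put a hypothetical number on the granularity point: suppose the tiering engine promotes 1GiB extents, but the genuinely hot data is a few hundred scattered 8KiB blocks, one per extent. A minimal sketch (all sizes assumed for illustration):

    ```python
    # Illustrative cost of coarse tiering extents (all sizes are assumptions).

    EXTENT = 1 * 1024**3       # 1 GiB promotion granularity
    HOT_BLOCK = 8 * 1024       # 8 KiB of genuinely hot data per extent
    N_HOT = 256                # hot blocks, scattered one per extent

    hot_bytes = N_HOT * HOT_BLOCK   # data you actually want on SSD
    promoted = N_HOT * EXTENT       # SSD capacity the tiering consumes

    print(f"Hot data     : {hot_bytes / 1024**2:.0f} MiB")
    print(f"SSD promoted : {promoted / 1024**3:.0f} GiB")
    print(f"Efficiency   : {hot_bytes / promoted:.4%}")
    ```

    Under those assumptions, 2MiB of genuinely hot data drags 256GiB onto the SSD tier. Smaller extents improve the ratio but inflate the tiering metadata, which is precisely the trade-off vendors differentiate on.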

    Also, currently the admin has no control over what data goes where. The small chunks of data you want to run quickly aren't necessarily the same as those which are accessed most frequently.

    It's this kind of stuff that's being focused on and will ultimately differentiate manufacturers' offerings. In fact, it already does to an extent (pun not intended, because it would be a really bad one).

  5. Anonymous Coward
    Anonymous Coward

    No they won't...

    I'm not allowing my HDDs to even get close to an SSD let alone mutate.

  6. This post has been deleted by its author

  7. Last Bandit
    Facepalm

    May I offer my own cutting-edge piece of information?

    That water is wet. It's about as cutting-edge an insight as the article.

  8. Steven Jones

    It's all about latency...

    Enterprise arrays have had large amounts of NV cache for a long time, usually in the form of battery-backed dynamic memory. In some arrays this can amount to hundreds of GB. The main purpose of this is to reduce the impact of physical disk I/O latency and to optimise I/O activity at the physical device (through, for instance, roll-up writes, full-width stripe writes on parity RAID, read-ahead on detected sequential I/O and so on). A secondary, but important, function is to cache the logical configuration of the array (virtual device definitions, sparse snapshot mappings and so on). That way the I/O latency for writes (which can generally all be cached, unless the cache is saturated by back-end IOP congestion) and cached reads on a SAN can be reduced to about 0.5ms, from perhaps 6-10ms on a heavily used array.

    However, all caching mechanisms, whether NV DRAM or flash based, get into areas of diminishing returns, as each extra GB buys less and less improvement. If you happen to be running the sort of random transactional OLTP app where the read hit rate on cache is relatively low, it is very likely your transaction time will be bound by read latency. As modern databases tend to keep good read-cache candidates in server memory, the array tends only to see the random elements. Thus it's not unusual to see DBs with logical read I/O rates many tens of times higher than the physical reads, and the array tends to get all the difficult, uncachable reads. This is very commonly the limiting factor on throughput in transactional systems. Ultimately, hybrid approaches will always limit throughput unless the amount of cache approaches the total DB size.
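    The diminishing-returns point can be made concrete with the standard effective-latency formula, L_eff = h*L_hit + (1-h)*L_miss. A minimal sketch, assuming the rough 0.5ms cached and 8ms physical figures above:

    ```python
    # Effective read latency vs cache hit rate: L_eff = h*L_hit + (1-h)*L_miss.
    # Latency figures are the rough numbers quoted in the comment above.

    L_HIT = 0.5    # ms, cached read on the SAN
    L_MISS = 8.0   # ms, physical read on a heavily used array

    def effective_latency(hit_rate: float) -> float:
        return hit_rate * L_HIT + (1 - hit_rate) * L_MISS

    for h in (0.0, 0.5, 0.9, 0.95, 0.99):
        print(f"hit rate {h:4.0%}: {effective_latency(h):.2f} ms")
    ```

    Going from 0 to 50 per cent hit rate saves 3.75ms per read, while going from 90 to 95 per cent saves under 0.4ms - each extra GB of cache buys less. And with the low hit rates of a random OLTP workload, the miss term dominates no matter how the cache is built.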

    There is also another issue - start-up time with a "cold" cache in the database. Typically when such an app starts, the database floods the array with vastly more read requests than when the DB is running in a stable condition with its internal cache populated. This tends to be far more random IOPS than the storage array can possibly cope with, and thus I/O latency becomes very high. This, in turn, causes backlogs, poor caching behaviour and instability. Indeed, many large applications have to be started slowly to avoid this - it's not uncommon to require a ramp-up over an hour or two before reaching full throughput.
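    One way to see why the cold-cache flood hurts so badly is the textbook M/M/1 response-time curve, R = S / (1 - rho), where S is the per-read service time and rho the utilisation: as the startup read storm pushes rho towards 1, latency grows without bound. A minimal sketch with assumed figures:

    ```python
    # M/M/1 sketch of a cold-cache read storm (illustrative figures only).
    # R = S / (1 - rho): response time explodes as utilisation approaches 1.

    SERVICE_MS = 8.0                     # assumed per-read service time on disk
    CAPACITY_IOPS = 1000 / SERVICE_MS    # ~125 IOPS per spindle-equivalent

    def response_ms(offered_iops: float) -> float:
        rho = offered_iops / CAPACITY_IOPS
        if rho >= 1.0:
            return float("inf")          # unstable: the backlog grows forever
        return SERVICE_MS / (1.0 - rho)

    for load in (50, 100, 115, 124, 130):
        print(f"{load:4d} IOPS offered -> {response_ms(load):7.1f} ms per read")
    ```

    The knee in that curve is why the slow ramp-up works: keeping the offered load below saturation while the database's own cache warms keeps latency sane.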

    Cached (including hybrid array) approaches are of only limited benefit in these situations. For very large transactional systems with lots of random access, only a fully solid-state storage mechanism will meet requirements.

    1. Anonymous Coward
      Anonymous Coward

      Re: It's all about latency...

      Not so fast. Please explain your comments.
