How much disruptive innovation does your flash storage rig really need?

Our technology world is fascinated by disruptive innovation. Every tech startup says its new technology is disruptive and therefore bound to succeed. So it is with all-flash arrays, which can answer data requests in microseconds instead of the milliseconds needed by disk drive arrays. Startups such as Pure Storage, …

COMMENTS

This topic is closed for new posts.
  1. Nate Amsden

    flash aware

    "Our software," they will say, "has been designed from the get-go to use flash and be aware that it wears out with repeated writing, unlike disk. It minimises the number of writes by coalescing them and deduplicating to get rid of redundant data."

    I think HP's new "unconditional 5 year warranty" on their 480GB, 960GB, and 1.9TB SSDs says they too have software that is built for flash, with adaptive sparing and adaptive write cache specifically (among others). That 5 year warranty exceeds that of the manufacturer of the SSDs themselves (SanDisk). Not to mention a 99.9999% availability guarantee as well.
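    The write-reduction idea quoted above (coalescing repeated writes and deduplicating identical content before it reaches flash) can be sketched in a few lines. The class below is purely illustrative, not any vendor's actual implementation; the fingerprinting and buffering scheme is an assumption for demonstration:

```python
import hashlib

class FlashWriteBuffer:
    """Toy sketch of flash-aware write reduction: coalesce repeated writes
    to the same logical block and deduplicate identical content, so fewer
    physical writes ever reach the wear-limited flash layer."""

    def __init__(self):
        self.pending = {}         # logical block address -> latest data (coalescing)
        self.seen_hashes = set()  # content fingerprints already stored (dedup)
        self.flash_writes = 0     # physical writes actually issued

    def write(self, lba, data):
        # A later write to the same LBA simply replaces the earlier one,
        # so only the final version is ever written out.
        self.pending[lba] = data

    def flush(self):
        for lba, data in self.pending.items():
            digest = hashlib.sha256(data).digest()
            if digest in self.seen_hashes:
                continue          # duplicate content: reference it, don't rewrite it
            self.seen_hashes.add(digest)
            self.flash_writes += 1
        self.pending.clear()

buf = FlashWriteBuffer()
buf.write(1, b"A")
buf.write(1, b"B")   # coalesced: overwrites the pending write to block 1
buf.write(2, b"B")   # duplicate of block 1's final content: deduplicated
buf.flush()
print(buf.flash_writes)  # 1 physical write for 3 logical writes
```

    Three logical writes collapse to one physical write here, which is the whole point: fewer program/erase cycles means slower wear-out.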

    Pure is dead; they've got nothing. Two years ago they looked interesting, now they are dead. I know a few folks (ex-HP) over at Pure too.

    Some of the other players like Nimble (hybrid disk/SSD) will be able to nibble around at the low end for a while yet. Tintri is dead when VVols hit the scene.

    1. Anonymous Coward

      Re: flash aware

      Pure can no longer rely solely on clever messaging as the big vendors not only catch up but start to pull ahead, and yes, HP's latest 3PAR announcements basically remove Pure's and other AFA vendors' core value propositions. Pure needs to come back with something extremely disruptive or find a buyer, but unfortunately for them I can't see either being very likely.

  2. Dave Hilling

    Reasons why you don't see some of these vendors on SPEC SFS

    Vendors whose systems use compression that cannot be turned off violate SPEC SFS rules, if I remember right, which is why you won't see many of them on there.

    1. Joel K

      Re: Reasons why you don't see some of these vendors on SPEC SFS

      The reason is much simpler: SPEC SFS is an NFS benchmark, and many of the all-flash arrays are block/LUN devices only. Even the vendors that have NFS capabilities have to commit significant resources to drive a validated SPEC SFS result, and I'm sure that many of the startup all-flash or hybrid vendors don't have those resources, regardless of whether it's SPEC SFS or TPC or any other independent benchmark.

  3. zootle

    Why is there so much fuss about this now?

    Hybrid storage has been the norm on Solaris and other platforms that support ZFS for nearly a decade now. All of the ZFS storage I've built over the past six or seven years has the flash acceleration, snapshot and replication, write reduction and cache features people appear to be getting excited about now. I guess Sun/Oracle didn't shout loud enough or long enough.

    https://blogs.oracle.com/ahl/entry/flash_hybrid_pools_and_future

  4. Don Jefe

    Disruptive Commodities

    Poor MBAs, they've gone and confused competitive with disruptive. Faster, cheaper, denser, tastier, flashier (Ha!), etc... are competitive differentiators: the attributes that set you apart from your competitors. Potential customers compare the attributes of your product with the attributes of other products and buy whichever product delivers the attributes they value most. That's not special. That's, that's just business. I can't even find a good synonym. The concept is so basic that children work it out for themselves. It certainly isn't disruptive unless you're airlifting your product in to isolated tribes or North Korea.

    Disruptive is not an attribute; you don't add, or subtract, disruption unless you're a Romulan arms manufacturer. Disruptive is an effect descriptor. It's a generic term that indicates you, or your product, force one or more participants in a market to respond in a way that's beneficial to you and/or detrimental to them.

    Disruptive is your new technology that uses a raw material you cornered the market on before you went to market. Your production costs are now a tenth of what anybody else can manage. Competitors can eviscerate their margins by cutting prices, buy the amazing new material from you, or just stop making their product altogether and try to beat you another way. It could be a logistics advantage or a strategic partnership or a completely irrational moment of temporary public adoration.

    The specifics are irrelevant, what matters is that you are forcing others in your space to assume a defensive posture that slows, or stops, their forward momentum while you simultaneously kick the shit out of them before setting their house on fire and barring all but one exit. You can shoot them when they run out if that's your thing, but that's not nearly as much fun as you'd think.

    A pretty good rule of thumb when assessing the disruptive potential of something is to investigate the industry scuttlebutt and see how often you see stuff like 'unfair', '(x)opoly', 'protectionism', etc... There may, or may not, be any of that stuff going on, it could just be luck or savvy. It's hard to say, but 50:1 odds say that the legality of the disruptive elements will be assessed and judged in court. Probably a bunch of times. Whatever the legal verdict ends up being, you can always be certain that any truly disruptive thing will absolutely infuriate anyone not directly benefitting from it.

  5. storman

    Can't judge flash or hybrid arrays on SPEC numbers

    Relying on (or even showing) SPEC SFS numbers from a limited set of vendors is nearly useless. As can be seen in the chart, it's primarily the legacy vendors who run SPEC benchmarks, as they have been doing for decades. None of the flash or hybrid "newcomers" care about SPEC because they realise application workloads vary dramatically and the solid state technology implementations (compression, dedupe, flash chips, controllers) can also vary dramatically. The only way to assess performance is to run real production workloads, or highly accurate representations of them. Without the ability to do this performance validation, it is very easy to over-provision or under-provision storage by a factor of 2X or more. Given the typical cost of flash storage arrays, purchases need to be aligned to performance requirements. Tools like Load Dynamix or SwiftTest are designed to make this analysis easy.

  6. razorfishsl

    It's a crock anyway for a number of reasons.

    1. NAND flash is PAGE and BLOCK based, so any comments they make about block-based software not working as well for flash 'because flash is not block based' are a crock.

    2. NAND flash writes pages, but ONLY into a block that has been fully erased beforehand, which means you have to move EVERY other live page in the block to another block before you can reuse a 'page'.

    3. To move a 'page' you have to read and write TWO pages: one that contains the data and one that contains the pointers. You cannot update the pointers in situ; it requires a complete rewrite of the pointer page. (NAND flash works great when it's new, but once blocks need to be erased it starts taking massive hits on throughput.)

    4. In many cases you cannot erase 'blocks' next to data in another block, since you get something called write disturbance, which corrupts the valid data surrounding the block you want to erase. (And you won't find out about it… until you need THAT data.)

    Interesting things happen if you get a 'page/block' error, because then you have to move the complete block and try a block erase to fix the defective page. It gets really interesting if you cannot find enough space to put the block, or if the next block also has an error (block errors tend to cascade, due to the recovery amps drifting, especially on the MLC NAND flash crap).

    Fact: the recovery amplifiers on NAND flash age and drift, which means every so often you need to do a chip erase to get them back in spec, so you end up trying to move a full chip's contents to another chip while you do the chip erase. (But then you find out many manufacturers design their systems with the data split over multiple chips, precisely to get their sub-ms timing, and those designs don't allow a single chip to be erased.)

    The result is that NAND flash runs great until something goes wrong, and then it loses everything with no chance of validated recovery at all…

    Even reading the data from NAND flash causes the other data around that page to degrade…
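    The erase-before-write constraint in points 2 and 3 above is what produces write amplification, and it can be modelled in a few lines. The geometry below is hypothetical (real blocks hold hundreds of pages) and the accounting is deliberately simplified; it is a sketch of the constraint, not of any controller's actual garbage collection:

```python
# Toy model of NAND erase-before-write: a block must be erased whole before
# any of its pages can be rewritten, so reclaiming one stale page forces
# every live page in that block to be copied elsewhere first.

def cost_to_rewrite_one_page(block):
    """Physical page writes needed to rewrite one logical page in `block`.
    `None` marks a stale (invalid) page; anything else is live data."""
    live = [p for p in block if p is not None]
    relocations = len(live)   # live pages copied out before the block erase
    return relocations + 1    # plus the one page we actually wanted to write

# Hypothetical 4-page block with 3 live pages and 1 stale page:
block = ["data0", "data1", None, "data3"]
physical_writes = cost_to_rewrite_one_page(block)
print(physical_writes)  # 4 physical writes for 1 logical write
```

    Four physical writes for one logical write is a write amplification of 4x, which is why a fresh drive flies and a full, fragmented one "starts taking massive hits on throughput" as the comment puts it.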

