Nimble whips out fat boxen: We're here for the all-flash array market

Nimble has fixed a great hole in its product line with a quartet of all-flash array products, reaching up to 2PB of effective capacity (assuming 5:1 data reduction) and 350,000 read IOPS, beating competition from Pure, EMC and NetApp/SolidFire. These arrays offer more performance, with 0.2ms latency, than Nimble’s CS hybrid ( …

  1. Anonymous Coward

    Correction

    That should be 99.9997, Chris.

    1. Anonymous Coward

      Re: Correction

      "the CS and AF systems have recorded 99.9997 per cent availability"

      How many systems, and over what period of time, was this measured? Without that detail the number is meaningless.

  2. Anonymous Coward

    I wish vendors would stop 'assuming data compaction' in their capacity figures.

    1. Anonymous Coward

      Or there should be a guarantee. At the very least they should only be able to use compression and de-dupe in their calculations. 5:1 will no doubt include thin provisioning, clones and snapshots, and whilst clones have merit, the others definitely should not be counted in this day and age. HP are arses for this, counting thin provisioning as half the savings in their 4:1.

      Interesting to me, though: all vendors have a flash platform now, so we're back to features mattering rather than being blinded by marketing bullshit.

      1. Anonymous Coward

        The 5:1 assumption does not include anything from thin provisioning, snapshots or clones. Nimble has all of these; they just don't factor them into the expected data reduction. Thin provisioning in particular is not worth counting, since you might put your 100GB of data in a 1TB or a 2TB thin volume and get different (meaningless) results.

  3. Marc 25

    Every vendor does it. Welcome to the 2016 storage market.

    However, in the big table in the middle of the article, they show you raw, maximum and effective capacity. It could not be easier to understand.

    For reference, the effective capacity is only with compression. It does not include dedupe, thin provisioning or zero-copy snapshot savings. So you're always going to be on the winning side.

    It's also worth noting (because the article skips it) that encryption can be applied per LUN or to the whole array, because it's done via the Haswell offload chipset rather than via hardware drive encryption.

    With drive encryption, you can only encrypt the whole array (all LUNs) which just adds unnecessary load to the storage controllers.

    Well done to Nimble, great to see them finally fill this gap in their product range.

    1. Anonymous Coward

      Compression only?

      Wait... I'm sorry but no, I call BS on only compression for those effective numbers.

      Unless they assume some unrealistic data which is super compressible, but then so could everyone else.

      If I suddenly loaded pictures or video on it the compression savings would be close to 0. Like for everyone.

      Don't get me wrong, this is a very nice update from Nimble overall, but pretty much everyone uses 5:1 in marketing, and all include compression and dedupe at least - in some cases thin provisioning and snaps/clones too (which is not cool).

      In the real world this will vary wildly, anyone and no one can reach those same numbers if you get what I mean.

      Raw and usable before efficiency is where you should start looking; then it's educated guesses from there, and you may or may not see the effective capacity you expect.
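      To put numbers on that: the vendor arithmetic is a single multiplication, so here is a quick sketch in Python (the usable figure and the ratios are illustrative assumptions, not anyone's datasheet):

```python
# Back-of-envelope "effective capacity" arithmetic (illustrative only).
# Effective = usable x assumed data-reduction ratio, so the same box
# looks wildly different depending on the ratio you choose to believe.

def effective_capacity_tb(usable_tb: float, reduction_ratio: float) -> float:
    """Effective capacity in TB for an assumed reduction ratio."""
    return usable_tb * reduction_ratio

usable_tb = 400.0  # hypothetical usable capacity after RAID and spares

for ratio in (1.0, 2.0, 3.0, 5.0):
    print(f"{ratio:.0f}:1 assumed -> {effective_capacity_tb(usable_tb, ratio):,.0f} TB 'effective'")
```

      Same tin, four different brochures.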

      1. Marc 25

        Re: Compression only?

        You're absolutely right. If you use pre-compressed formats you won't see any compression benefit at all.

        That said though, not everybody DOES have file systems full of jpegs and movies. Most use storage arrays for Hypervisor LUNs which compress very well.

        Either way, the RAW and Usable values are right there for all to see!

        How many of us in IT read marketing blurb anyway? We all read the specs... right?!

    2. Anonymous Coward

      "With drive encryption, you can only encrypt the whole array (all LUNs) which just adds unnecessary load to the storage controllers."

      Not so. With encrypted drives the work is handled at wire speed by the drive's on-board encryption chip; there is no additional overhead on the array or drive other than initial key generation. Similarly, because encryption remains transparent to the array, it doesn't break other features such as dedupe, since the efficiencies are realized before the data hits the drive and gets scrambled by encryption.
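      To spell out why that ordering matters, here's a toy illustration (the hash is just a stand-in for sector-position-dependent drive encryption, not any real firmware):

```python
# Order-of-operations sketch (illustrative): if the controller reduces
# data BEFORE the drive encrypts it, dedupe still works; encrypting
# first turns identical blocks into distinct ciphertexts.

import hashlib

def sector_encrypt(block: bytes, lba: int) -> bytes:
    """Stand-in for AES-XTS-style encryption, which mixes in the sector position."""
    return hashlib.sha256(lba.to_bytes(8, "little") + block).digest()

blocks = [b"same-data", b"same-data", b"other-data"]

reduce_then_encrypt = {b for b in blocks}  # dedupe sees plaintext: duplicates collapse
encrypt_then_reduce = {sector_encrypt(b, i) for i, b in enumerate(blocks)}

print(len(reduce_then_encrypt), "unique blocks stored if reduced before encryption")
print(len(encrypt_then_reduce), "unique blocks stored if encrypted first")
```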

      Controller based encryption without (off CPU) offload engines is the lowest cost method, but will always incur the highest performance overhead.

  4. Anonymous Coward

    Nimble Fat Boxen Kicken Arsen?

    Hm! Looks like they may have got more things right than most. Only time will tell, but it certainly looks intriguing. Any pricing details?

    1. Anonymous Coward

      Re: Nimble Fat Boxen Kicken Arsen?

      Nimble are saying they will be lower cost on a like-for-like comparison, but what I found really interesting was that they have two lower-performance AFAs. I am guessing these will be a lot cheaper than what EMC and Pure are pushing. Their non-disruptive upgrades work on their hybrids, so there's no reason they won't on their AFAs. I can see them shifting a lot of these entry units.

  5. Anonymous Coward

    Well they're good at marketing at least...

    Full disclosure - EMCer here

    I couldn't help but note that the IOPS specs quoted for EMC's XtremIO are from the published 50/50 read/write ratio, while the Nimble's are at 100% reads. I haven't found a published 50/50 IOPS statistic for Nimble - the lowest they are brave enough to go is 70% r / 30% w.

    Their own website says "However, InfoSight data from 7,500+ customers verifies that widespread adoption of server-side caching has resulted in real-life workloads today that are dominated 70%/30% in favor of writes."

    If your real-world experience says writes are so critical, why would you publish stats in read only?

    "Hey everyone! We found out that cars have to go both up and down hills, but look how fast ours go DOWN the hill compared to those guys going up one!!!"

    Also, the Nimble uses a 4K block size as opposed to the 8K that XtremIO uses, making the "apples to apples" comparison they are claiming just that much more useless marketing fluff.

    1. Nick Dyer

      Re: Well they're good at marketing at least...

      Hi. Nick from Nimble here.

      The great news is that Nimble's systems draw their IOPS from (and are throttled by) Intel CPUs, so IOPS on our arrays (whether AFA or HFA) is not constrained by the disk media behind them - be it all flash or a mixture of flash and NL-SAS - or by the read/write ratio of the IO pattern. Therefore, IO performance of any Nimble system at 100% read is pretty much the same as at 70/30 r/w, 50/50 r/w, 30/70 r/w or 100% write. Being constrained by the CPU in a controller rather than by the amount of flash or disk in RAID groups makes things a lot easier, especially when scaling.
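      To put that in arithmetic (a toy model with made-up costs, not our published numbers): delivered IOPS is just the CPU budget divided by the blended per-IO cost, so the read/write mix only matters to the extent that a write burns more controller CPU than a read.

```python
# Toy model of a CPU-bound array (illustrative; the costs are made up).
# Delivered IOPS = CPU budget / blended per-IO CPU cost, so the r/w mix
# only matters if a write costs the controller more CPU than a read.

def iops(cpu_budget: float, read_frac: float, write_cost: float, read_cost: float = 1.0) -> float:
    """IOPS sustainable from a fixed CPU budget at a given read fraction."""
    blended_cost = read_frac * read_cost + (1.0 - read_frac) * write_cost
    return cpu_budget / blended_cost

BUDGET = 1e9  # arbitrary units of 'CPU work per second'

for write_cost in (1.0, 1.5):  # write cost relative to a read costing 1.0
    print(f"write costs {write_cost}x a read:")
    for read_frac in (1.0, 0.7, 0.5, 0.0):
        print(f"  {read_frac:.0%} read -> {iops(BUDGET, read_frac, write_cost):,.0f} IOPS")
```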

      1. Nick Dyer

        Re: Well they're good at marketing at least...

        BTW: IOPS statements and p!ssing contests are fun and all, but they don't answer the real questions, which are how the array performs with the production workloads required, and how data analytics can give insight and visibility into what needs to change over time.

        That's the important part.

        Which is why our AFAs and HFAs are custom tuned and engineered with "application profiles", designed to perform exactly how we believe each workload wants to be treated. These allow noisy-neighbour avoidance and block-size alignment for the app, but also per-application customisation of compression/dedupe, so we don't burn CPU cycles for the fun of it (our inline dedupe is not global, by design).

        We've tuned our systems based on a series of data mining exercises we performed last year (see here for the blog post: http://www.nimblestorage.com/blog/storage-performance-benchmarks-are-useful-if-you-read-them-carefully/)

        Cheers

        @nick_dyer_

        1. Anonymous Coward

          Re: Well they're good at marketing at least...

          Some vendors use QoS to avoid noisy neighbours; how do application profiles do this? Is this QoS? And are the numbers you publish inclusive of de-dupe and compression, given the statement above that it's tuneable?

          1. Nick Dyer

            Re: Well they're good at marketing at least...

            All figures published are with compression switched on (every figure we've ever published has included compression, as it's inline and has minimal performance impact). I'm pretty sure the figures include dedupe, too.

            App Profiles are back-end QoS, tuned and isolated within the file system today to ensure that sequential IO does not compromise random IO (or that a heavy write workload does not compromise a read workload, for example) - but there are no user-tunable settings just yet.

      2. greatwhite1x

        Re: Well they're good at marketing at least...

        "Therefore, IO performance of any Nimble system at 100% read are pretty much the same performance at 70/30 r/w, 50/50 r/w, 30/70 r/w or 100% write."

        That doesn't jibe with the published specs on the website. There's a 40,000 IOPS (15%) performance decrease simply from introducing writes into 30% of the workload. I find it extremely difficult to imagine that the decrease won't continue to scale as you introduce additional writes into the workload. Basic math suggests you can expect roughly a 25-30% decrease in performance by increasing writes to 50%, but one data point doesn't make a trend. Unless Nimble wants to publish the rest of the workload specs...
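        For transparency, here's the back-of-envelope extrapolation (a sketch assuming the penalty scales linearly with write fraction - which, as I said, one data point can't prove):

```python
# Linear extrapolation from the single published data point (a sketch:
# the 15%-at-30%-writes drop is taken from the spec comparison above,
# and linear scaling is an assumption, not a measured curve).
drop_at_30pct_writes = 0.15
penalty_per_write_fraction = drop_at_30pct_writes / 0.30  # 0.5% per 1% of writes

for write_frac in (0.30, 0.50, 0.70):
    print(f"{write_frac:.0%} writes -> ~{penalty_per_write_fraction * write_frac:.0%} IOPS decrease")
```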

        1. Nick Dyer

          Re: Well they're good at marketing at least...

          If you speak to your local Nimble SE, they can provide you with the true figures. More importantly, we're happy to put our money where our mouth is with true, production-workload POCs. Like I said, we can have a p1ssing contest over 4K-block IOPS charts, but in the real world those figures don't really matter.

      3. Anonymous Coward

        Re: Well they're good at marketing at least...

        "pretty much the same performance"

        There's definitely a delta of over 20% from testing at 4KB on a CS2** (granted, it's a baby, but I'd expect these numbers to scale linearly given CPU is your limit?)

        So are you going to publish a workload based on real customer workloads?

        Seems like the whole "we're bound by CPU not disk" falls apart when you finally admit flash is faster

        1. Nick Dyer

          Re: Well they're good at marketing at least...

          "So are you going to publish a workload based on real customer workloads?"

          That's what SmartStacks are for. Sadly, the market is driven by 4K marketing benchmarks.

          "Seems like the whole "we're bound by CPU not disk" falls apart when you finally admit flash is faster"

          Can't say I agree; Nimble are still CPU and memory bound, by design. Even the AFA platforms are CPU bound. For example, look at the performance difference between an AF3000 and an AF9000. Same SSDs, same chassis...

          Important to note: the AFA platform uses a newer-generation CPU than the HFAs... now I wonder what the performance delta would be if the HFAs were fitted with the newer CPUs... I'll just leave that here...

      4. Anonymous Coward

        Re: Well they're good at marketing at least...

        Nick, you're being completely disingenuous here.

        "Therefore, IO performance of any Nimble system at 100% read are pretty much the same performance at 70/30 r/w, 50/50 r/w, 30/70 r/w or 100% write."

        Yes, the back-end IOPS experienced by the array may well be the same and limited by the CPU, but the front-end IOPS (those experienced by the application) will differ significantly based on read/write ratio, block size etc. This is storage 101.
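        For anyone following along, the storage-101 arithmetic looks like this (a sketch with textbook RAID write penalties and a made-up back-end budget - nothing Nimble-specific):

```python
# Textbook front-end vs back-end IOPS arithmetic (a sketch; the budget
# is a made-up number and the penalties are the classic RAID values).
# Each host write becomes several back-end IOs, so a fixed back-end
# budget yields fewer front-end IOPS as the write fraction grows.

def frontend_iops(backend_budget: float, read_frac: float, write_penalty: int) -> float:
    """Host-visible IOPS from a back-end IOPS budget and a RAID write penalty."""
    write_frac = 1.0 - read_frac
    return backend_budget / (read_frac + write_frac * write_penalty)

BACKEND = 300_000  # hypothetical back-end IOPS ceiling

for penalty, label in ((2, "RAID 10"), (4, "RAID 5"), (6, "RAID 6")):
    print(f"{label} (write penalty {penalty}):")
    for read_frac in (1.0, 0.7, 0.5):
        print(f"  {read_frac:.0%} read -> {frontend_iops(BACKEND, read_frac, penalty):,.0f} front-end IOPS")
```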

    2. Anonymous Coward

      Re: Well they're good at marketing at least...

      EMC complaining about marketing, now that is rich. I think the massive sales engine might be more than a little afraid. They had to sell themselves to Dell because of the pressure from Pure and now Nimble has an AFA that makes XtremIO look very inflexible and expensive. I think DSSD will service a small niche but it's hard to see anything ahead for EMC but overall decline. I hear from insiders that they have been getting rid of some of their best support staff because they aren't revenue generating. Not good when their NPS is already massively below the likes of Nutanix, Pure and Nimble.

      1. Anonymous Coward

        Re: Well they're good at marketing at least...

        A company which shall remain nameless wanted Nimble, as we had it and it worked very well. EMC discounted a bit more than Nimble and procurement purchased EMC; one year later, after horrendous performance issues and absolutely awful support, they ditched EMC and went Nimble, and everything now works well. Google EMC support and Nimble support and see what customers think! Dell support knocked out our old array, and nearly did it twice! The hybrid arrays are so good I don't see the point of all-flash at the moment - that's with multiple Oracle, SQL and VM workloads across a few hundred TB. They can tell you typical compression rates for each application, as InfoSight gives them data from real deployments, and generally we see what they say. Obviously media, video and pictures don't compress too well, but we do see some savings, and that's without the variable compression yet, which can compress more. We encrypt and compress all volumes, and we see no difference in performance compared to when we didn't.

    3. portyman

      Re: Well they're good at marketing at least...

      You can create block sizes of any size you like - 4K, 8K, 32K, 64K etc. - when you create the volume. I can confirm that writes on the hybrid CS700 are very good, sub-1ms, and that's with multiple Oracle and SQL virtual servers. I don't work for Nimble; we have about eight arrays now of various types. InfoSight is not perfect, but it's pretty good and is evolving all the time.

  6. Anonymous Coward

    Hmm

    So writes involve more than reads, which are just fetches.

    It means the write being committed to more than one place, some kind of RAID or parity calculation, and the actual writing effort. So I can't see how an all-write workload can perform the same as an all-read workload, given the difference in effort involved.

    Perhaps you can explain?

    1. Nick Dyer

      Re: Hmm

      Absolutely correct - but thanks to our CASL log-structured file system, this IO and writing effort is minimised drastically. It was one of the original design principles of the OS back in 2008, when cost-effective all-flash storage was mostly a marketing and dedupe-riding pipedream.

      It's worthwhile digging out a CASL deep-dive video on YouTube to check out how it works. The beauty is in the detail.
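      The gist, for those who won't sit through the video: a log-structured layout buffers incoming writes and flushes them as full stripes, so parity is computed once per stripe rather than via a read-modify-write per block. A toy sketch (my illustration, emphatically not CASL itself):

```python
# Toy log-structured writer (illustrative only - NOT CASL itself).
# Incoming writes are buffered and flushed as one full stripe, so parity
# is computed once per stripe instead of a read-modify-write per block.

from functools import reduce

STRIPE_BLOCKS = 4  # data blocks per stripe (tiny, for demonstration)

class LogStructuredWriter:
    def __init__(self) -> None:
        self.buffer: list[bytes] = []
        self.stripes: list[tuple[list[bytes], bytes]] = []  # (data, parity)

    def write(self, block: bytes) -> None:
        """Buffer a block; flush a full stripe once enough have arrived."""
        self.buffer.append(block)
        if len(self.buffer) == STRIPE_BLOCKS:
            self._flush()

    def _flush(self) -> None:
        # XOR parity across the full stripe: computed once, no re-reads.
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), self.buffer)
        self.stripes.append((self.buffer, parity))
        self.buffer = []

writer = LogStructuredWriter()
for i in range(8):
    writer.write(bytes([i]) * 16)  # eight 16-byte "blocks"
print(f"{len(writer.stripes)} full stripes written, one parity calculation each")
```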

      1. Anonymous Coward

        Re: Hmm

        Minimized but not removed... let's say you cap out at 100K reads. Are you saying you will hit 100K writes?

        Above you said the R/W ratio had no impact on the IO, but I can't see how that's at all possible given you can't avoid the work involved.

        I'll wait for some real world tests perhaps.......

        1. caw35slr

          Re: Hmm

          "I'll wait for some real world tests perhaps......."

          If you are an end user, your local Nimble SE would be very happy to arrange one at your facility (best done with prod data, or something akin to it).

          @cawnimble works for Nimble Storage

  7. Anonymous Coward

    Encryption

    Encryption is typically carried out at the array level to protect against drives being taken off site - it addresses the slow-bleed threat of gaining access to data by removing drives one at a time.

    Attacks at the data layer will go via the apps and servers, not the array.

    That it's on a per-LUN basis is a bit confusing, and begs one simple question as a result: what's the impact of turning encryption on? Why not have it on for everything? What's missing here?

    1. Nick Dyer

      Re: Encryption

      Encryption at rest on a Nimble platform can be enabled either on the whole array or on a per-volume basis. The reason is that there may be only a subset of data that actually needs encryption; why penalise the whole system with encryption overhead when it's not necessarily needed?

      There is a ~5% top-end performance impact on encrypted volumes... a feasible tradeoff for an important feature. Also, there's no need for expensive Self-Encrypting Drives or SSDs... or any software license fees.

      1. Anonymous Coward

        Re: Encryption

        It's only a noticeable overhead if you use the CPU to carry this out, which it appears you do... and as you say, the CPU is the bottleneck of the typical AFA. Bit of a shot in the foot.

        It's standard on the vast majority of decent SSDs (both enterprise and commercial types). So why not offload the process to those SSDs... which are mostly 'chilling out' and not under pressure at all in an AFA? It's also a methodology that's already accepted by FIPS 180, 197, etc.

        What level of encryption is it? Has it been validated anywhere?

  8. Anonymous Coward

    Nobody needs Flash - Says Nimble

    http://www.techworld.com/news/storage/flash-storage-cost-claims-need-careful-examination-3435759/

    1. Anonymous Coward

      Re: Nobody needs Flash - Says Nimble

      Erm, and your point is? The article still rather holds true. Nimble has options for high performance at a sensible cost with its hybrids, and uber-fast all-flash for specific workloads and customers with deeper pockets. Which bit are you confused about?

    2. Anonymous Coward

      Re: Nobody needs Flash - Says Nimble

      Well done for posting an article from 3 years ago. A lot happens in 3 years:

      EMC sells to a PC manufacturer

      Netapp declines year on year, quarter on quarter. Seen as a dinosaur in the industry. Buys the worst AFA in the market in response

      HP split the company into two in order to try to survive

      VC market has dried up, no more funding for private, pre-IPO startups

      IPO market has dried up, no more IPOs for startups burning cash that can't get funding and desperately need to IPO to survive.

      Need I go on?

      Point is, this market is rapidly changing. What was said a few years ago does not ring true today.

      1. Anonymous Coward

        Re: Nobody needs Flash - Says Nimble

        You forgot: Nimble loses 80% of its market cap in a matter of a few months due to its failure to succeed in enterprises.

        1. Anonymous Coward

          Re: Nobody needs Flash - Says Nimble

          You forgot that the entire market is on a major downturn. Nimble lost up to 50% of its share price because it ONLY grew 36% QoQ (41% if you exclude the crap exchange rates). That's terrible, right? This is in the same market where the likes of NTAP's revenues declined 15% in the last quarter.

          The rest of Nimble's market-cap loss was down to outside market conditions. LinkedIn lost a similar chunk of market cap only two weeks ago. VMW have dropped from $90 to $49. The market is in a bad state and companies are being hammered across the board, not just in tech.

          Next.

          1. JohnMartin

            Re: Nobody needs Flash - Says Nimble

            -Disclosure NetApp Employee opinions are my own -

            Nimble lost 50% of its share price because it has never made an actual profit (-34% margin is the best it's ever achieved; last Q it was -35%) AND because it stopped growing fast enough to justify the hope that those losses would ever turn around. It's amazing the growth you can get when you sell your products at a loss, and there's always the question of "what happens when they run out of money?"

            Leaving that alone (because it's kind of irrelevant in a technical discussion), Nick T (who I respect) was a little silly suggesting that a mixed workload would perform the same as a pure 4K read test without having any data to back it up. All the AFA vendors are limited by CPU, and writes are simply more expensive in terms of CPU than reads - that goes all the way down to the SSDs themselves. Intuitively it sounds like BS, but I'd be happy to see a benchmark that proves him right.

            Speaking of benchmarks, I can't wait until SPC or someone comes up with one that allows dedupe and compression against a set of well-accepted workloads and datasets; that way there'd be a realistic way of comparing performance and storage-efficiency claims. Or until customers are allowed to publish the VDbench figures from POCs that have been verified by the vendor as conforming to their best practices (almost everyone uses VDbench these days).

            Also interesting that Nimble didn't include an AFF or a 3PAR in their rack-unit diagram... there's a reason for that.

            1. Anonymous Coward

              Re: Nobody needs Flash - Says Nimble

              "Also interesting that nimble didn't include an AFF, or 3PAR in their rack unit diagram .. there's a reason for that."

              Because they are retrograde spindle-based arrays fitted with SSDs, not purpose-built AFAs. Plus I've seen our hybrid arrays destroy both in real-world performance PoCs.

              Discl: Nimbler

              1. Anonymous Coward

                Re: Nobody needs Flash - Says Nimble

                3PAR and NetApp "retrograde"? And so what?

                It's storage; it goes really fast at low latency, and they are proven platforms. If I hear "you shouldn't include Vendor A because they're not built for flash" from a startup pitching for my business once more, I'll flip.

                My business doesn't give a crap - I don't give a crap.

                If a storage array :

                Does lots of IOPs

                Has low latency

                Allows native backups

                Has a no quibble SSD warranty

                Solves my business challenge

                Then in my world it's an all-flash array. I worked for a reseller and now I'm a customer; stop feeding this BS about "retrograde", it doesn't work. Check the SPC-1 benchmarks - they don't lie. Both the vendors above have impressive numbers.

                And as regards your hybrid arrays - not in our tests they didn't; you came back with a bigger box after a week because your performance was poor. You weren't even the best hybrid: Tintri trounced you.

                1. Anonymous Coward

                  Re: Nobody needs Flash - Says Nimble

                  Disclosure - Nimble Employee (& ex-EMC & HDS)

                  I would urge caution about headline SPC-1 numbers. Lie is a big word that I wouldn't use, but that doesn't mean 'they don't lie'. Down the years I've got the impression that perhaps the majority of configs submitted are not very real-world: short-stroking, non-default RAID levels, turning features like dedupe off. The devil is often in the detail.

              2. Anonymous Coward

                Re: Nobody needs Flash - Says Nimble

                Can you tell us what specifically about 3PAR and AFF being "retrograde" means they are not worthy of your definition of all flash?

              3. This post has been deleted by its author

          2. greatwhite1x

            Re: Nobody needs Flash - Says Nimble

            You seemingly forgot to mention that Nimble missed guidance by almost 10%, projected almost no growth quarter over quarter (i.e. only 4% at the high end), has no cash flow to add to the pile of money they generated from their IPO (that $200M has been in a perpetual flatline-to-slightly-negative state for 2015 to date), is still not generating a profit, and as a result the CEO went on record stating that they will not become profitable as soon as they had hoped.

            Don't act like things are all sunshine and roses at Nimble. The market punished them because they're a startup that is not showing enough growth to substantiate their stock price, and as such they are seen as a risky investment. It's as simple as that. Sure, external market factors come into play, but not nearly enough to cover up their less-than-stellar performance as a company of late.

            Maybe an AFA changes things for them, maybe not. Only time will tell.

  9. JohnMartin

    IOPS / TB

    -Disclosure NetApp Employee, opinions are my own ... etc -

    I know that pointing out that a vendor's competitive marketing slide is misleading and unfair comes under the "No Shit, Sherlock" category, but I can't help saying that while the capacity density is pretty good (not outstanding, though... 48 drives in 4RU isn't exactly something to claim as revolutionary), the IOPS/TB or even IOPS/RU probably doesn't match that of any of the other solutions, particularly SolidFire.

    e.g. that SolidFire rig would work out to about 50,000 IOPS per RU with 3,750 IOPS/TB

    The Nimble config is about 25,000 IOPS per RU and about 150 IOPS/TB.

    Apples, meet oranges... or in this case, blood oranges :-)
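    The arithmetic is just two divisions, so anyone can rerun it with datasheet numbers of their choosing (the inputs below are placeholders, not claims):

```python
# The density arithmetic behind the IOPS/RU and IOPS/TB comparison.
# Inputs are placeholders - the real datasheet values are exactly what
# gets disputed below, so plug in your own.

def density(iops: float, rack_units: float, effective_tb: float) -> tuple[float, float]:
    """Return (IOPS per rack unit, IOPS per effective TB)."""
    return iops / rack_units, iops / effective_tb

for name, iops, ru, tb in (
    ("array A", 200_000, 4, 50),     # hypothetical figures
    ("array B", 350_000, 14, 2_000), # hypothetical figures
):
    per_ru, per_tb = density(iops, ru, tb)
    print(f"{name}: {per_ru:,.0f} IOPS/RU, {per_tb:,.0f} IOPS/TB")
```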

    1. Flammi

      Re: IOPS / TB

      Funny you mention IO/sec for SolidFire, where a single LUN can never get more than 20,000 IO/sec.

      And your calculations are just plain wrong! Check the datasheet again.

  10. Anonymous Coward

    Hi Nimble. Welcome to 2014.

  11. JohnMartin

    Where did that 0.2ms latency number come from?

    -Me Again - still a netapp employee - still have opinions (and questions) of my own -

    The Samsung device (which is awesome) has a response time of about 0.1ms at a queue depth of 1, so it's reasonable to assume 0.2ms under light loads - I know because I've seen 0.18ms response times from AFF too under light-to-moderate loads - but I wouldn't suggest our marketing team start publicising an unqualified "sub-0.2ms response time" either.

    Given how stingy they say they are with RAM (10x less than competitors, according to Suresh V), they're going to be limited by the device speeds. As awesome as the Samsung drives are, once you put them under load - say a heavy load with a queue depth of 32 - the response time climbs to about 0.5ms. I just can't see that 0.2ms number being sustainable under the loads they're talking about being able to achieve.
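    That's just Little's Law: sustainable IOPS = outstanding IOs / latency, so big headline IOPS imply deep queues, and deep queues drag latency well past 0.2ms. A sketch using the device figures I quoted (nothing here is a Nimble measurement):

```python
# Little's Law sketch: sustainable IOPS = outstanding IOs / latency.
# The QD1/QD32 latencies are the drive figures quoted above; nothing
# here is a measured array number.

def iops_at(queue_depth: int, latency_ms: float) -> float:
    """Throughput a single device sustains at a given queue depth and latency."""
    return queue_depth / (latency_ms / 1000.0)

for qd, latency_ms in ((1, 0.1), (32, 0.5)):
    print(f"QD{qd}: {latency_ms}ms -> {iops_at(qd, latency_ms):,.0f} IOPS per device")
```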

    Also, even with variable-block deduplication, the metadata requirements for full inline dedupe should push the RAM requirement up, unless Nimble is going to do some form of post-process fixup for the blocks which aren't in memory. If I remember how CASL works, that's going to be difficult to achieve without reading and re-writing entire stripes, which ought to start making drive endurance something of an issue - unless, of course, this is CASL, Jim, but not as we know it.

    1. Anonymous Coward

      Re: Where did that 0.2ms latency number come from ?

      "...still a netapp employee..." ?

  12. Anonymous Coward

    How does it scale out?

    -- Disclaimer, EMC'er here, personal views below --

    Genuinely interested to know here.

    When growing to the Nimble AF9000 x 4, are the space-saving techniques used (compression/dedup) global across all 4 arrays?

    Or is each AF9000 its own little island of storage?

    So wanting to learn more, I go to Nimble's website here: http://www.nimblestorage.com/technology-products/unified-flash-fabric/scale-to-fit/

    ... it says:

    "Scale performance and capacity beyond the physical limitations of a single array to a four-node cluster, using any combination of Nimble Adaptive Flash and All Flash arrays." Learn More.

    So I click "Learn More", and I get a "Scale out demo with Vmware ESXi" page, with no content.

    http://www.nimblestorage.com/resource/scale-demo-vmware-esxi/

    So there are no more details on their website about how it "scales out".

    Can someone from Nimble or a partner please clarify?

    I'd prefer not to talk to "My local Nimble SE" for obvious reasons :)

    1. Anonymous Coward

      Re: How does it scale out?

      From Nick's comments above I'm guessing de-dupe is not global.

    2. Anonymous Coward

      Re: How does it scale out?

      Disclaimer: local Nimble SE (& ex-EMC)

      There is an overhead associated with running global dedupe, and an expense/risk associated with running an InfiniBand backend. These are the XtremIO design choices. Currently there is a limit per Brick of 40TB raw, and 320TB raw on a cluster.

      A single AF9000 will scale beyond 500TB raw and could be a single dedupe domain, so yes, a scale-out cluster of four AF9000s would be at least four dedupe domains. Each domain could be bigger than a current XtremIO global dedupe domain, but the point is flexibility (see also mixing different-sized SSDs in the same array/cluster, or mixing all-flash with hybrid in the same cluster).

      You don't always need to run a single dedupe domain, as (a) dissimilar datasets provide few cross-dedupe savings and (b) some datasets just don't dedupe well. With Nimble the default is dedupe on, but you have the choice at a per-volume level. The CPU saving from turning dedupe off is probably less than 20% - not massive, but if you're not going to save anything space-wise, better to free up the CPU cycles.
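      To illustrate why separate domains give up so little on dissimilar data, here's a toy model (hypothetical block streams, nothing to do with the real on-disk internals):

```python
# Toy dedupe-domain model (hypothetical block streams, nothing to do
# with real array internals): blocks dedupe only against blocks in the
# same domain, so dissimilar datasets lose ~nothing vs a global domain.

def stored_blocks(domains: dict[str, list[bytes]]) -> int:
    """Unique blocks actually stored when dedupe is per-domain."""
    return sum(len(set(blocks)) for blocks in domains.values())

vm_blocks = [b"os-image"] * 50 + [b"vm-data-%d" % i for i in range(10)]
db_blocks = [b"db-page-%d" % i for i in range(60)]  # shares nothing with the VMs

print("two domains:", stored_blocks({"vms": vm_blocks, "dbs": db_blocks}), "blocks")
print("one global: ", stored_blocks({"all": vm_blocks + db_blocks}), "blocks")
```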

      My own view is that I expect XtremIO to release some bigger Bricks at some point, possibly soon. That would mean the size of their global dedupe domain might leapfrog the AF9000's for a time, which is one reason I don't think who has the biggest dedupe domain (currently Nimble over XtremIO) is really very important. Look for the features and flexibility, the quality of support and the analytics capabilities of a platform, as well as price - these things matter more than whose ceiling is highest today. I'm very proud of Nimble's NPS of 85, due entirely to the efforts of people way smarter than me.

      1. Anonymous Coward

        Re: How does it scale out?

        Thanks for the clear explanations Nimble SE dude!

        Couple of other questions while we are at it:

        1 - does scale-out work for both FC and iSCSI?

        2 - does scale-out still require a special host multi-path driver that implements the virtualization map?

        Thanks!

  13. Anonymous Coward

    Amusing

    Reading through the comments, it looks as if there's concern about what Nimble has announced. EMC people, NetApp people, I'm sure others as well.

    My advice is that instead of worrying about what this company has announced, you try to make your own products better.

    To the wandering eye it feels as if you don't believe in your own offering and are more dependent on uncovering someone else's real or perceived shortcomings in order to succeed. That much is obvious.

    Current Nimble Customer

  14. Anonymous Coward

    Just based on number of comments...

    ...Nimble might be on to something.

    Isn't there an old saying, something about not being able to market to indifference? With 45+ comments already, at least people are talking about this new kit. I'd have to think increased chatter is taking place in the channel and customer base also.

  15. Nick Dyer

    Customer POC - AF7000 vs XtremIO

    Hello everyone. Me again. I work for Nimble, by the way.

    This popped up on Reddit yesterday: an actual customer (yes, a real customer, not legacy vendor trolls like this thread) POCs a shiny new Nimble AF7000 against EMC XtremIO. Interesting, wouldn't you say?

    https://www.reddit.com/r/storage/comments/47epoh/observations_on_extremio_and_beta_nimble_afa/

    ACs - You can carry on with the FUD throwing... but remember - it's all about the customer and applications, not about speeds, feeds and BS.

    1. Anonymous Coward

      Re: Customer POC - AF7000 vs XtremIO

      Bored of this now. Shouldn't there be a limit to the number of times a vendor's excitable puppies can try to justify their products in these comments sections? Off to the official company blog with you, Nick Dyer.

    2. Anonymous Coward

      Re: Customer POC - AF7000 vs XtremIO

      "You can carry on with the FUD throwing... but remember - it's all about the customer and applications, not about speeds, feeds and BS."

      LOL Nick, that is the funniest thing I've read all week.

      The first line of the reddit post you linked to:

      "Disclaimer: All the performance tests we've performed are all synthetic in nature..."

  16. NickT

    A few thoughts

    Hi everybody,

    Disclaimer: my name is Nick Triantos and I am a Nimble Storage employee.

    Lots of interesting comments in the thread, some with good intent, so I will dignify those with a response.

    For those not familiar with the Nimble Architecture...

    CASL is a log-structured filesystem. Two of the basic principles of log-structured file systems are that you only write into free space, and that space which has already been written to cannot be overwritten until it has been garbage collected. If this sounds familiar, it's because the same principle applies to flash.

    https://lwn.net/Articles/353411/

    In fact, all SSDs use a form of log-structured file system under the hood (link above). Why? Because, unlike disk, reads and writes in flash are very asymmetric: it takes much more time to erase and write a flash cell than it takes to read one. Flash also has a finite lifetime, so how storage writes to it becomes very important.

    CASL writes to SSDs in chunks. A chunk is the amount of data that will be written to one SSD before writing to the next. Our chunk size is an even multiple of a flash erase block, which leads to lower write amplification and wear.

    Additionally, our RAID layout has changed. We write across 20 data drives vs 9, which means our segment size has increased by 2.5x vs our hybrid. We also use some of the SSDs' overprovisioned space as spare chunks, which allows us to reserve less space for rebuilds as well as have one of the highest raw:usable percentages in the industry.

    Furthermore, not only do we protect against ANY three simultaneous SSD failures, vs the standard industry approach of two, but we also provide intra-drive parity, which affords us the ability to recover from a sector failure on the remaining SSDs.
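    As rough arithmetic (my sketch - the 20-vs-9 data drives and triple parity are from the paragraphs above; the narrow stripe's dedicated spare is an assumption), widening the stripe is what moves the raw:usable needle:

```python
# Rough raw:usable arithmetic for a wide parity stripe (a sketch, not
# the published layout): widening the stripe amortises the parity
# overhead, and using SSD overprovisioning as spare chunks means no
# whole drives need reserving for rebuilds.

def usable_fraction(data_drives: int, parity_drives: int, spare_drives: int = 0) -> float:
    total = data_drives + parity_drives + spare_drives
    return data_drives / total

# 9 data + triple parity + a dedicated spare (assumed), vs 20 data + triple parity
print(f"narrow stripe: {usable_fraction(9, 3, spare_drives=1):.0%} usable")
print(f"wide stripe:   {usable_fraction(20, 3):.0%} usable")
```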

    Anyone who has read Google's recently published USENIX paper on a six-year SSD reliability study will understand why we provide these extensive levels of data protection, along with data-integrity techniques for lost and misdirected writes, segment checksums, snapshot checksums, quick RAID rebuild and many more features only found in tier-1 enterprise systems.

    http://0b4af6cdc2f0c5998459-c0245c5c937c5dedcca3f1764ecc9b2f.r43.cf2.rackcdn.com/23105-fast16-papers-schroeder.pdf

    CASL also does system-level GC and uses the TRIM command to notify the SSD's Flash Translation Layer (FTL) of copy-forwarded blocks. We do this to optimize write efficiency.

    Throughout our development and almost six-month beta process we've used IDC's vdbench guidelines to test our AFA, precisely to make sure our performance isn't impacted - as some of our competitors' is - when capacity increases to dangerous levels and inline data reduction is on. We don't take our foot off the pedal; we continue to process *everything* inline.

    Lastly, as a common-sense point: any architecture that can garbage collect as effectively and transparently on SATA and NL-SAS drives as CASL has done for the last six years can certainly garbage collect on SSDs.

    Thank You

    Nick Triantos
