Traditional enterprise workloads on an all-flash array? WHY WOULD I BOTHER?

Are all-flash arrays ready for legacy enterprise workloads? The latest little spat between EMC and HP bloggers asked that question. But it’s not really an interesting question. A more interesting question would be: "Why would I put traditional enterprise workloads on an AFA?" More and more I’m coming across people who are …

  1. A Non e-mouse Silver badge

    I’d like to see more vendors be honest about the use cases for their arrays.

    There, fixed that for you.

    (Of course, we all know what chances there are of a vendor being honest....)

  2. htq

    Reminds me of...

    the time our Drawing Office Manager wanted to upgrade all their desktops from a Pentium 90 to a P120 - instead of taking my advice to increase the RAM. The manager's decision was based on more MHz meaning the print job would spool much faster. Which was true; then I pointed out that the plotter took over 30 minutes to finish the job, so the fact that the desktop could finish spooling the print job in 15s instead of 30s was moot!

  3. Anonymous Coward
    Anonymous Coward

    "Who cares if the AFA can handle replication, consistency groups and other such capabilities when that is taken care of by the application?"

    So, we are putting data consistency in the hands of the C# coders instead of the hands of the DBA and the data infrastructure, replacing declarative with procedural tech? That would seem to be a suboptimal way of doing things.

    Or did I miss something?

    1. Ian Ringrose

      No, the application may be SQL Server doing the replication, without the need for a shared storage system connected to both servers.

      Likewise, a message queue that stores all incoming requests as well as sending them to the application is a lot safer than any block-level replication. If there is a bug in the application, the messages can be replayed, whereas block-level replication will have just replicated the bad data.

  4. Anonymous Coward
    Anonymous Coward

    What's a snowflake application?

    See title... :)

    1. Joseph Eoff

      Re: What's a snowflake application?

      One that's fragile and delicate, and melts down at the slightest provocation.

      1. Michael Wojcik Silver badge

        Re: What's a snowflake application?

        One that's fragile and delicate, and melts down at the slightest provocation.

        So... a better question would be, what isn't a snowflake application?

        On a more serious note, my assumption was the writer meant applications with unique designs and thus unusual requirements, in the sense of "no two snowflakes" and all that. Not that it matters, though, since the overall sense seems to be clear: most applications don't need extremely fast storage access.

        Or, more generally, most applications don't need extremely X Y, for any noun Y and adjective X.

        Even more generally, most W don't need X Y Z, for nouns W and Z, adjective Y, and adverb X that indicates rarity or exception. That's true by the definition of X. It indicates the case is rare, and so doesn't apply in most instances. So formally the whole claim is a tautology.

        Informally, of course, the argument is that AFA is the solution to an extraordinary requirement. Everything else is implied by that. And that seems like a plausible claim to me, based on my experience.

  5. Matt Bryant Silver badge

    Technical thrills vs business needs.

    One of the problems with us techies is we like tech, and flash is pretty awesome tech. The outcome is we often look for a use for a new tech when what we should be doing is looking at the business case for it. Often, having a storage unit that will respond 100ms faster to a Web-based query is neither here nor there; a human using the Web interface won't even notice the difference. We used to use flash in the form of TMS RamSAN systems to speed up Oracle databases when SCSI disks and networks were slow and we did lots of batch updates, but now we have fast SAS connected to 16Gb fibre channel and 10Gb LANs and our apps update their databases throughout the day. If you can design your solution to update throughout the day and remove traditional batch updates, then flash loses a lot of its appeal and "ordinary" spinning rust can still work out as the more economic option.

    Amusingly, where flash can be really good is not in the enterprise apps but in the more mundane, back-end tasks, such as the batch processing mentioned. But in the minds of CEOs, core systems are the exciting stuff that "generates money" for the business, whereas backup "only safeguards data" and is seen as an operational burden. Try convincing your Luddite CEO why you should have more expensive per-GB array costs for backup than for your CRM system - it's not easy!

    Most people I talk to are actually using more flash disk in their desktop environment (especially laptops now) than in their core systems.

  6. DeepStorage

    Budgets are binary

    Martin,

    I've noticed customers buying AFAs simply because an AFA fits within their budget. They're pretty sure a hybrid could meet their needs but since they can afford the shiny new AFA they buy it so they don't have to worry if the hybrid would have been fast enough.

    After all a storage guy could get fired for poor performance, but not for staying within budget, even if they could have saved money.

    1. chris 17 Silver badge

      Re: Budgets are binary

      That's the same problem with budgets across the public sector: there is no onus on, or reward for, saving money, but you won't get in trouble for spending more than you needed to as long as it's within your budget. Also, if you spend less now you'll get less next time and have to work harder to justify why you need more than last time to do x. Stupid blind accountancy.

  7. Man Mountain

    Look, we're at the point where it's not a case of thinking 'why would I buy an AFA' but 'why wouldn't I'? If an AFA is cheaper per effective TB (based on a sensibly low, realistic de-dupe rate, not some higher aspirational number) then why wouldn't you? Even if your apps don't need typical flash performance, if flash is cheaper then you'd buy it simply because it means you can put more on the array and not worry about it. And even apps that don't need performance as such aren't going to turn their nose up at it if it's available for less than the price of spinning disk. Forget flash being a luxury that few can afford, flash is cheaper than spinning disk for an awful lot of environments! It certainly is for the vendor I work for, who has the best AFA on the market: http://searchsolidstatestorage.techtarget.com/feature/HP-3PAR-StoreServ-7450

  8. Ian Ringrose

    Firstly, if staff ever have to wait for the computer, they will not feel as good, so may become slower themselves. Making the computer respond in 1 second instead of 20, so the member of staff does not start to think about something else, has a much bigger effect than any logic will suggest.

    I have personally spent far too long at the Screwfix trade counter due to the computer being so slow processing returns. I bet any cost/benefit review of speeding the system up did not take into account the risk of the person behind me in the queue going to Toolstation next time…

    Also, flash is a lot more predictable: one application moving the disk heads about does not slow down other applications in an unpredictable way.

  9. Nate Amsden

    for me

    The cost of the AFA (3PAR 7450) was so good that it made sense to get it with the 2TB SSDs, because we are more I/O bound than space bound (though not I/O bound enough to *really* need an AFA). The data set is small enough that it fits easily in the system (about 12TB of logical data). Moving from a 3PAR F200, which is 80x15k disks, to the 7450, which is obviously much faster, consumes much less rack space (I can get about 180TB of flash in less space than the F200 needs for about 64TB of disk), less power etc. The F200 is end of life anyway, and end of support in November of next year.

    Before the 2TB SSDs I was planning on perhaps a 7400 hybrid.. but the big SSDs made it an easier decision to just go AFA. Though I would prefer to have a 7440 which allows both disk and SSD (purely a marketing limitation not a technical one).

    Note that most of the AFA offerings out there seem to be stuck using small SSDs (well south of 1TB from what I've seen) for whatever reason. I'm expecting to see at least 3 or 4TB SSDs on my 7450 easily within 2-3 years which means way north of 200TB of raw flash in my initial 8U footprint. I don't need millions of IOPS(average now well south of 10k), but to know everything is on flash and will get consistent performance is a nice feeling -- and the cost is not bad either.. and the data services are there in the event I need em (I do leverage snapshots heavily, not replication etc though). Also I get a true 4 controller system which is important to me for my 90% write workload. Add HP's unconditional 5 year warranty on all my 3PAR SSDs, and I don't have to care about wearing them out (obviously they have proactive failing etc and I have 6 hour call to repair support).

    YMMV.

  10. jcrb
    Boffin

    Asking the wrong question

    [Disclaimer, I work for Violin Memory]

    Why would you put traditional enterprise workloads on AFAs?

    Because they don’t run batch jobs in ½ the time but in 1/10th or 1/50th the time, and suddenly that batch job can be run as a real-time dashboard, or the batch order processing that produced the morning report can give you the results before the end of today.

    Because the reason the legacy batch job fit in the morning window is you had eliminated all the things you wanted to do and left in only the ones you absolutely needed to.

    Because when you are migrating apps to new storage as you continuously refresh your data center you no longer need to wonder if a given set of apps will interfere with each other when they share a given storage array.

    Because the latency of the user in front of the screen comes from the storage holding the data they are trying to call up. And user efficiency has long been shown to exponentially improve with shortened computer response time (http://www.vm.ibm.com/devpages/jelliott/evrrt.html ).

    Do you remember what it was like using your laptop before you put an SSD in it (or got a MacBook)? Would you ever go back to an HDD? Then why do you think that worse-than-HDD IOPS/latency is OK for your VDI users? But that’s not legacy, you say. Sure it is; from the user’s perspective it’s the same app, just being run differently on the backend.

    How much improvement can a user in front of a keyboard see? Enough that we have had customers with highly instrumented call center VDI setups, who can tell you every possible stat imaginable, switch from MLC array to SLC arrays as being fully cost justified due to the improved call handling rate.

    Yeah I bet you never thought of call center operators as a high performance workload? Turns out when the latency of the user in front of the screen matters the differences in response times from your storage become anything but insignificant.

    How many times have you been dealing with some form of customer representative of a company who had to tell you, "I'm sorry, my computer is slow"? If you speed every one of those delays up even a little, much less slash them, how many more customers will be handled by the same (or fewer) reps, and how much more work gets done when whatever you want from the system returns immediately rather than after you go get a cup of coffee?

    How about because it will take far fewer servers to run the same set of batch jobs on an AFA: firstly, because if the job finishes in ½, 1/10th or 1/50th the time, then the same server can handle 2X, 10X or 50X the number of jobs in the same amount of time; secondly, when the available storage IOPS are huge and the latency low, you don’t need giant amounts of data cached in server DRAM, so the same server can run more jobs at the same time.
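
    As a rough sketch of that consolidation arithmetic (the job count, window length and per-job run time below are made-up illustrative numbers, not figures from any real deployment):

        import math

        # Back-of-envelope: how a batch-job speedup from faster storage translates
        # into server consolidation. All inputs are hypothetical.
        jobs_per_night = 200        # assumed number of batch jobs in the overnight run
        window_hours = 8.0          # assumed length of the batch window
        job_hours_on_disk = 0.5     # assumed run time of one job on spinning disk

        for speedup in (1, 2, 10, 50):   # baseline, then the 1/2, 1/10th, 1/50th cases
            job_hours = job_hours_on_disk / speedup
            jobs_per_server = window_hours / job_hours
            servers_needed = math.ceil(jobs_per_night / jobs_per_server)
            print(f"{speedup:>2}x faster storage: {jobs_per_server:5.0f} jobs per server "
                  f"per window, {servers_needed} servers needed")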

    We have done the math with some of the largest companies in the world, and once you include the floor space, power, cooling, and opex considerations such as backup and restore speed, plus not having to keep multiple full copies of giant data sets for testing/QA because the array with the primary dataset can handle the live and test loads at the same time using thin snaps, etc., it really is the case that you can have flash performance for the price of disk storage. And we are hardly the only ones saying this.

    Really the question should be why would you put your enterprise work load anywhere but an AFA?

    1. Dunstan Vavasour

      Re: Asking the wrong question

      One of the more helpful comments I've read on the subject. To paraphrase, if you don't have to consider storage performance, you can do more and new stuff.

      1. Terafirma-NZ

        Re: Asking the wrong question

        exactly.

        We are going through this right now and will go the AFA way for these reasons:

        It's cheaper to purchase

        It's cheaper to run

        It's cheaper to maintain and much simpler

        It gives agility

        It uses less space

        It has a longer life

        It brings back purchasing for capacity not performance

        How awesome to be able to say yes to any new request when, thanks to dedupe and flash speeds, no additional storage purchase is needed, allowing the application to move forward.

        Our testing and research shows there is no reason not to go AFA if you are above the $60K mark; absolutely nothing about traditional arrays is better unless you have sequential encrypted data on a huge scale.

        Traditional arrays are what's driving many application owners to look at cloud for delivery.

        Did the author of this article write it on a computer using flash storage? A disk would have been fine for the purpose, but something tells me that, like most others, they have made the move to flash.

        This article sounds more like a rant after having to sit through some long presentation, or a fear that there will be no job once the 6 racks of disk get tossed and replaced with half a rack of flash that uses a couple of tick boxes to enable features.

    2. Munchausen's proxy

      Re: Asking the wrong question

      "Because the reason the legacy batch job fit in the morning window is you had eliminated all the things you wanted to do and left in only the ones you absolutely needed to."

      You say that as if it's a bad thing.

      It's a given that new capacity will be filled -- space capacity, IOPS capacity, MIPS capacity, pixel-count capacity, whatever. In my opinion, it's not a given that the new filler is worth having.

  11. returnofthemus

    Dynamic Applications and Changing Business Models

    Big Data Tsunami, less batch more real-time or at the very least near real-time. Simples!

    1. JamesTQuirk

      Re: Dynamic Applications and Changing Business Models

      I thought it was about the consumer world having a big uptake of these devices (I have some Samsung 840 Pros), versus the slower, more considered business of implementing them in existing arrays. The "flash RAM" people want you to get off your butts and update, even if you don't need to, because they need sales ...

      I agree with others here: it's the nut loose behind the keyboard that needs tightening up...

      I still have working 20MB ST506 HDDs here in old laptops etc. Prove to me flash will last that long, even if it's been sitting in a box for years ....

  12. Anonymous Coward
    Anonymous Coward

    Exactly the right question

    Yes, this is exactly the right question to ask, for two reasons:

    1.) No, Flash is not cheaper than (SATA) disk, at least when you compare $/GB. No matter how hard the AFA vendors try to tell you otherwise, a simple price comparison on Amazon will tell you that this is not true. Yes, Flash prices are going down, but so are disk prices. Look at any chart that shows disk and flash price over time and you'll see that the price gap between the two hasn't fundamentally changed - it likely won't any time soon.

    So if your AFA appears to be cheaper than a traditional array on a $/GB basis, it's either due to some creative accounting on the AFA vendor's side or because the traditional array is seriously overpriced.

    2.) For typical enterprise applications, the storage part of the total latency is not very relevant. Look at a typical ERP system and you'll probably find a latency on the client side of around 80ms. Storage latency on a disk/hybrid system will contribute around 6ms of that; the rest is network, server processing etc. Getting the storage latency from 6ms to 1ms does not really help you a lot if your total client-side latency is 80ms. Better to invest that money optimizing other areas.
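
    As a minimal sketch of that reasoning, assuming a single storage access per user interaction (the 80ms and 6ms figures are the round numbers from this comment, not measurements):

        # Model assumed here: storage is one fixed slice of the end-to-end response time,
        # so even a big cut in storage latency barely moves the client-side number.
        total_ms = 80.0                     # client-observed latency for a typical ERP screen
        storage_ms = 6.0                    # storage contribution on a disk/hybrid array
        other_ms = total_ms - storage_ms    # network, application and server processing, etc.

        for new_storage_ms in (6.0, 1.0, 0.5):
            new_total = other_ms + new_storage_ms
            saved_pct = 100 * (total_ms - new_total) / total_ms
            print(f"storage {new_storage_ms:>4} ms -> end-to-end {new_total:5.1f} ms "
                  f"({saved_pct:.0f}% improvement)")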

    Disk based arrays (hybrid with Flash as a cache) will not go away anytime soon - just like tape never went away. The ability to move workloads between different storage tiers will be key as most customers will need different tiers. If your array only supports a single tier (--> AFA startups) it won't be a full solution to your storage needs. It seems to be relatively easy to just add flash as a new tier to a slightly tuned traditional array (and achieve very similar performance to startup AFAs) but it will likely be very hard to add disk to a pure AFA.

  13. jcrb
    Flame

    Exactly the wrong question

    No really it is exactly the wrong question (and next time please post while logged in so we know which vendor you work for).

    1) Nice qualification, about being cheaper than SATA disks from Amazon. Of course, even if you are talking about tier-2 workloads, the price of an array is hardly the same as the cost per GB of the HDDs; the controllers in disk arrays usually cost more than the storage in the array. And in reality, for an enterprise workload we would be talking about SAS drives. Then there is the difference between the cost per GB raw and the cost per GB usable - you know, when you short-stroke the system because you can't get the needed performance from a full one. Add in the cost of power, cooling, and buying new buildings when the power company will not give you another megawatt feed to your datacenter, and soon you are talking real money. It isn't just the cost of the drive, it is the cost of *everything*, and flash makes the cost of *everything* go down. Again, we are hardly the only ones saying this; for example, Fig 2 of this IBM document gives a perfect example of what I am talking about (http://www.redbooks.ibm.com/redpapers/pdfs/redp5020.pdf).
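
    To illustrate the raw-versus-usable point about short stroking (the drive cost, capacity and IOPS per spindle below are invented for the example, not vendor figures):

        import math

        # If spindles are bought for IOPS rather than capacity ("short stroking"),
        # the effective $/GB of a disk array ends up far above the raw $/GB of the drives.
        # All figures are hypothetical.
        iops_needed = 50_000
        capacity_needed_gb = 20_000

        hdd_iops, hdd_gb, hdd_cost = 180, 1200, 400   # assumed 15k SAS drive

        drives_for_iops = math.ceil(iops_needed / hdd_iops)
        drives_for_capacity = math.ceil(capacity_needed_gb / hdd_gb)
        drives = max(drives_for_iops, drives_for_capacity)

        raw_per_gb = hdd_cost / hdd_gb
        effective_per_gb = drives * hdd_cost / capacity_needed_gb
        print(f"drives bought: {drives} (IOPS-bound; only {drives_for_capacity} needed for capacity)")
        print(f"raw ${raw_per_gb:.2f}/GB vs effective ${effective_per_gb:.2f}/GB of usable capacity")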

    As far as the AFA being cheaper because the traditional array is overpriced goes, well, that one point might be true.

    2) I’m going to assume the claim that storage latency is not very relevant to typical enterprise applications is made from ignorance. The reason that ERP system has an 80ms client-side latency is because the query took 13 separate 6ms storage requests and 2ms of processing to answer. Turn those 6ms requests into 1ms requests and your latency drops to 15ms; turn them into 500us requests and the total latency drops to 8.5ms. This is without making any modifications to the application; there is basically no better place to invest money to optimize than storage performance.
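
    A sketch of the contrasting model in this reply, where one screen refresh waits on several dependent storage round trips (the request count and processing time are the figures quoted above):

        # Model here: 13 serialized storage requests plus 2ms of processing behind one
        # query, so storage latency dominates the client-side number.
        requests = 13
        processing_ms = 2.0

        for storage_ms in (6.0, 1.0, 0.5):
            total_ms = processing_ms + requests * storage_ms
            print(f"storage {storage_ms:>4} ms/request -> client latency {total_ms:5.1f} ms")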

    The rest of your post is just assertions that the products you sell, and the hoops you make your customers jump through, make sense - and they don't. Why would you want to migrate data between your hybrid tiers? Even the most basic cache hit analysis will demonstrate that the difference between mechanical disk and silicon storage means you are wasting your flash if it is in a system with disk; sure, you are getting some improvement, but only a fraction of what you could be getting.

    What people who have not actually used a modern AFA don’t understand is that, when every disk array was basically the same, no improvement from model to model made enough of a difference to the rest of the data center and the rest of the business for it to be meaningful to look at cost in any way other than the $/GB of the array.

    But as I said in my previous post, we’ve had a customer not just switch from HDD to our MLC AFA, but then to our SLC AFA WITHOUT DEDUPE for handling the VDI of a call center, based on it being fully cost-justified to the business to do so.

    We can debate whether or not the cost of a compressed and deduped MLC AFA is or is not cheaper than a disk based array. But when a customer can conclude that it makes business sense to use a non-deduped SLC AFA to support call center handling does that not truly suggest that if all you are asking about is $/GB you really are asking the wrong question?

  14. boriss111

    Why not AFA? Or better to say, why would you want to buy a hybrid array (I don't believe anybody buys disk-only arrays for traditional enterprise workloads anymore) instead of an AFA?

    No matter how I look at it, it all comes down to price. If for a particular workload, no matter what that may be, the AFA can improve the company's bottom line, then AFA it should be. The tricky thing, of course, is how to evaluate the impact of the AFA on the company's bottom line. For some people it is a matter of TCO; for others it is the improvement of revenue/efficiency.

    This brings us back to the beginning: traditional enterprise workloads on an AFA, why should I bother? You shouldn't. Just add one more column for the AFA in your comparison sheet when buying your next storage.

    (disclosure: working for EMC partner company)

  15. chris 17 Silver badge

    "The reason that ERP system has an 80ms client-side latency is because the query took 13 separate 6ms storage requests and 2ms of processing to answer. Turn those 6ms requests into 1ms requests and your latency drops to 15ms; turn them into 500us requests and the total latency drops to 8.5ms. This is without making any modifications to the application; there is basically no better place to invest money to optimize than storage performance."

    totally agree!!

    You only have to look at a typical webpage to see the huge number of requests going here, there and everywhere to deliver the page content. Internal apps aren't far off, especially the security-conscious ones that have to retrieve secure keys from hardware security modules. This all adds to the latency experienced at the client; it's not all a single, simple SAN data retrieval.

    1. JamesTQuirk

      Well, as somebody who helps IT to "scrooge", I think SSDs as a network cache in existing systems would help old mechanical drives and make web and local data a little more available. However, I agree flash is the way to go in the long term, purely on the no-moving-parts issue, but I will have to thrash an SSD or two before I believe it totally. I have an early 256MB key (and smaller ones, somewhere here) which I still use; it has had DSL Linux on it for 6-7 years, so personally I think things are looking better for long-term storage with flash. But the Samsung 840 Pro's lifetime write-cycle limit bothers me for a main drive; I do notice the 850 series is sporting a 10-year warranty, so things are improving ...

  16. Last Bandit

    Buy an AFA

    Migrate your old spinning rust array into it. Virtualise the old array behind it, enable tiering. Get the performance boost you want whilst re-purposing the old kit. Keeps everyone from the accountant to the tech happy.
