When the SSD came to storage land: How flashy upstarts got their break

Of all the changes in the storage landscape over the past five years, the most dramatic is the arrival of flash-based storage devices. Half a decade ago, we were talking about general-purpose, multi-tier arrays, automated tiering and provisioning – all coming together in a single monolithic device. The multi-protocol …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Ah yes, we can always hope. I'm somewhat pessimistic. There have been initiatives like SMI-S, but obviously no vendor wants to make it easier to switch to the competition, so I expect the silos will remain.

  2. Zmodem

    put some gill vents on the SSD casing so you can put a separate HD/CD bay cooler behind, and they might not fry themselves being 400x hotter than a USB stick after a 2 GB transfer

    1. Zmodem

      heat, which isn't much good if you have a big array and 100 people storing data

  3. Anonymous Coward

    I've already done this

    But even then it struck me that it should have been simple to code something which sat on top of the various arrays (from all vendors), queried them and pulled back useful information. Most of them already had fully featured command-line interfaces; it should not have been beyond them to code a layer that sat above the CLIs that took simple operations such as "allocate 10x 10 GB LUNs to host 'x'" and turned them into the appropriate array commands – no matter which array.
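    The layer described above can be sketched in a few lines: one generic request, translated per vendor. To be clear, the vendor names and CLI syntax below are entirely made up for illustration; a real tool would shell out to each array's actual CLI rather than build command strings like this.

    ```python
    # Minimal sketch of a translation layer: one generic "allocate N LUNs
    # of size S to host X" request, mapped onto (hypothetical) vendor CLIs.

    def allocate_luns(vendor, host, count, size_gb):
        """Return the vendor-specific commands for a generic LUN allocation."""
        if vendor == "vendor_a":
            # Invented syntax, standing in for array A's real CLI.
            return [f"cli-a lun create -size {size_gb}g -host {host}"
                    for _ in range(count)]
        if vendor == "vendor_b":
            # Invented syntax, standing in for array B's real CLI.
            return [f"cli-b mklun -cap {size_gb} -map {host}"
                    for _ in range(count)]
        raise ValueError(f"no translation for vendor {vendor!r}")

    # "Allocate 10x 10 GB LUNs to host dbhost01", regardless of array:
    cmds = allocate_luns("vendor_a", "dbhost01", 10, 10)
    print(len(cmds))   # one command per LUN
    print(cmds[0])
    ```

    The point of the sketch is only that the translation itself is mechanical – which, as the reply below argues, is exactly why it is the easy part.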


    I'm sure I'm not the only one reading this that has. The thing is, all organizations (well, at least all those I've consulted for) have different 'best practices' with how they allocate storage to simplify things. Those that haven't, I introduced them to. If you let people just do whatever to get a given amount of storage to the servers that need it, you'll end up with an unholy mess. So they don't get to allocate 10x 10 GB LUNs, they have to follow the best practices, which differ for different arrays. Not only that, those practices may be adjusted depending on the type of use the storage will be put to, and/or the business unit that it is going to.

    So not only do you code in the "translate to appropriate array commands", you code in the business logic and best practices for the site. That's great, but then it is no longer generally applicable, and isn't something you could sell as a product without doing a tremendous amount of work to provide flexibility for all the possible permutations. So while simply doing what you suggest would be easy, it isn't going to help things, and would in fact make things worse in storage environments of any size if people started actually using it. Actually allocating x LUNs of y size is the easiest part of being a storage admin. Every array has a GUI to make the job easy, many of them are so easy that if you provide a step by step guide that tells them, "click here", "fill in this", "do that" you could train the janitor to allocate storage in half a day.

    The hard part is knowing that your default unit of allocation is not 10 GB on any of those arrays, and trying to give them what they want in a way that keeps your carefully architected policies from getting thrown out the window. And without annoying the Unix, ESX and Windows guys, who wish you'd just give them the 10x 10 GB LUNs they asked for and not a 108 GB metalun, because it'll take them an extra couple of minutes to carve it up. And without annoying the old-school DBAs who think they need to know what spindles each bit of storage they get allocated comes from, so they can set things up for the least amount of contention possible, and hate it when you tell them it is striped across 16 spindles; no amount of explaining will convince some of these guys that it is better for them than putting it all on one spindle + mirror!
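    The allocation-unit mismatch described above comes down to a rounding rule: each array hands out capacity in its own granularity, so a request gets rounded up to the next multiple of that unit. The array names and granularities below are invented; the 27 GB unit is chosen purely so the arithmetic reproduces the 108 GB figure from the comment.

    ```python
    import math

    # Hypothetical per-array allocation granularities (not real products).
    ALLOCATION_UNIT_GB = {"array_x": 4, "array_y": 27}

    def rounded_allocation(array, requested_gb):
        """Round a capacity request up to the array's allocation unit."""
        unit = ALLOCATION_UNIT_GB[array]
        return math.ceil(requested_gb / unit) * unit

    # The same 100 GB request (10x 10 GB) lands differently per array:
    print(rounded_allocation("array_x", 100))  # 100 – already a multiple of 4
    print(rounded_allocation("array_y", 100))  # 108 – next multiple of 27
    ```

    Which is exactly why a generic "allocate 10x 10 GB" front end can't be vendor-agnostic without also encoding each array's geometry and the site's policies on top of it.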
