Kaminario playing 3D flash chippery doo-dah with its arrays

Twitter can be great. There we were, we storage twits, talking about 3D flash, when Kaminario CTO Shachar Fienblit got in touch to say Kaminario was using 3D flash already. That makes it, as far as we know, the first enterprise storage supplier to go 3D NAND, and it has an 800GB 3D NAND SSD in its set of flash media. We got in …

  1. Anonymous Coward

    Same architecture as others... what's different?

    I am trying to understand what is different in real-world deployment between a Kaminario system and, say, a 3PAR 7450, a NetApp FlashRay, or even a Pure Storage FA450. They are all SANs with some functionality around deduplication and compression (some claim better dedupe and compression than others) plus snapshot and replication functionality.

    You still need to deal with (1) LUNs and (2) Fibre Channel.

    At the end of the day everyone can use the same NAND SSDs (3D, MLC, TLC), so vendors end up squabbling about sub-1ms accesses and a few percent more data reduction while ignoring the management overhead of SANs and storage. This is no different from the LVM management I had to do in the early 2000s.

    Answer my practical questions: how many VM clones can I make using your array? How much time do I need to spend managing LUNs and FC networks? How can I tell which VMs are screwing with the storage?

    I am done with SAN in my environment -- everything is virtualized and my apps guys don't bother with physical deployments any more. That's why I like buying something like Tintri.

    1. Nate Amsden

      Re: Same architecture as others... what's different?

      For me at least, managing my 3PAR systems is a breeze. I was reminded how easy it is when I had to set up an HP P2000 for a small 3-host VMware cluster a couple of years ago (replaced it last year with a 3PAR 7200). Exporting a LUN to the cluster was at least 6 operations (1 operation per host path per host).
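      To spell out that operation count, here's a quick sketch (the 3 hosts and 2 paths per host are just the illustrative numbers behind my "6 operations" figure, not pulled from any vendor's docs):

```python
# Rough sketch of why a simple LUN export balloons on a P2000-class box:
# one mapping operation per host path per host (numbers are illustrative).
def export_operations(hosts, paths_per_host):
    """Mapping operations needed to export one LUN to a whole cluster."""
    return hosts * paths_per_host

# A 3-host cluster with 2 paths per host -> 6 operations for a single LUN.
print(export_operations(3, 2))  # 6
```

      On an array that exports to all paths of a host (or host set) in one go, the same task is a single command.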

      Anyway, my time spent managing my FC network is minimal. Granted, my network is small, but it doesn't need to be big to drive our $220M+ business. To date I have stuck to QLogic switches since they are easy to use, but I will have to go to Brocade I guess since QLogic is out of the switching business.

      My systems look to be just over 98% virtualized (the rest are in containers on physical hosts).

      I won't go with iSCSI or NFS myself, I prefer the maturity and reliability of FC (along with boot from SAN). I'm sure iSCSI and NFS work fine for most people; I'm happy to pay a bit more to get even more reliability out of the system. Maybe I wouldn't feel that way if the overhead of my FC stuff weren't so trivial. They are literally the least complex components in my network (I manage all storage, all networking, and all servers for my organization's production operations. I don't touch internal IT stuff though).

      As for identifying VMs that are big consumers of I/O, I use LogicMonitor. I have graphs that show me globally (across vCenter instances and across storage arrays) which are the top VMs that drive IOPS, throughput, latency, etc. Same goes for CPU usage, memory usage, whatever statistic I want - whatever metric is available to vCenter is available to LogicMonitor (I especially love seeing top VMs for CPU ready %). I also use LogicMonitor to watch our 3PARs (more than 12,000 data points a minute collected through custom scripts I have integrated into LogicMonitor for our 3 arrays), along with our FC switches, load balancers, ethernet switches, and bunches of other stuff. It's pretty neat.
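      For a flavor of what those custom collector scripts look like, here's a hypothetical sketch (the function names and stat values are made up; the real thing shells out to the array CLI, but monitoring tools of this kind typically just want one metric per line on stdout):

```python
# Hypothetical collector script of the sort a monitoring tool can ingest:
# poll the array, then emit one metric per line as key=value on stdout.
def get_array_stats():
    # Stand-in for the real CLI/API call against the array; values are fake.
    return {"read_iops": 12500, "write_iops": 8300, "read_latency_ms": 0.6}

def emit_metrics(stats):
    """Format stats as sorted key=value lines."""
    return ["{}={}".format(k, v) for k, v in sorted(stats.items())]

for line in emit_metrics(get_array_stats()):
    print(line)
```

      The monitoring server runs the script on a schedule, parses the key=value pairs, and graphs each key as its own datapoint.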

      Tintri sounds cool, though for me it's still waaaaaaaaaaaayy too new to risk any of my stuff with. If there's one thing I have learned since I started getting deeper into storage 9 years ago, it's to be more conservative. If that means paying a bit more here or there, or maybe having to work a bit more for a more solid solution, then I'm willing to do it. Of course 3PAR is not a flawless platform - I have had my issues with it over the years - and if anything that has just reinforced my feeling of being conservative when it comes to storage. I can be less conservative on network gear, or even servers perhaps (I am not, for either), but storage is the most stateful of anything. And yes, I have heard (from reliable sources, not witnessed/experienced myself) of multiple arrays going down simultaneously from the same bug (or of data corruption being replicated to a backup array), so replication to a 2nd system isn't a complete cure.

      (The same goes for many other things: I won't touch vSphere 6 for at least a year, and I *just* completed upgrading from 4.1 to 5.5; my load balancer software is about to go end of support; I only upgraded my switching software last year because it was past end of support; my Splunk installations haven't had official support in probably 18 months now, but they work - the last Splunk bug took a year to track down, and I have no outstanding issues with Splunk, so I'm in no hurry to upgrade. The list goes on and on.)

      Hell, I used to be pretty excited about VVols (WRT Tintri), but now that they are out I just don't care. I'm sure I'll use 'em at some point, but there's no compelling need to even give them a second thought at the moment, for me anyway.

      1. Anonymous Coward

        Re: Same architecture as others... what's different?

        Nate - OP here. Thanks for the reply. As an ole-time UNIX guy who now manages all infrastructure supporting a bunch of engineering test/dev groups in a midsized SW company, questions like cloning etc. do come up. I find it funny when vendor reps come by and tell me they can do 5% less latency than their competitor, or that they can compress data to occupy 2% less space. Those numbers are unfortunately under ideal conditions, and they require me to spend more of my day doing stuff I'd rather not do. I realize my corporate IT counterparts, who don't have to deal with the amount of change I do, will be more attuned to FC. But it seems wasteful for a company to still maintain separate infrastructure just for storage.

        BTW, how is your experience managing Docker? We have some engineers trying out Jails as infrastructure for testing.

        1. Nate Amsden

          Re: Same architecture as others... what's different?

          The containers don't require much management. We don't use Docker, just LXC, and it is for a very specific use case. Basically, the way we deploy code on this application is that we have two "farms" of servers and flip between the two. Using LXC allows a pair of servers (server 1 would host web 1A and web 1B, for example) to utilize the entire underlying hardware (we're mainly concerned about CPU; memory is not a constraint in this case), because the software costs $15,000/installation/year - so if you have two VMs on one host running the software and both taking production traffic, that is $30k/year, regardless of CPU cores/sockets. We used to run these servers in VMware but decided to move to containers as more cost-effective - the containers were deployed probably 8 months ago and haven't been touched since, though I am about to touch them with some upgrades soon.
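          The licensing arithmetic is the whole story here - a trivial sketch using the figures from my example (the per-installation price is the one quoted above; everything else is illustration):

```python
# Per-installation licensing: charged per running instance taking
# production traffic, regardless of CPU cores or sockets underneath.
LICENSE_PER_INSTALLATION = 15_000  # $/installation/year, as quoted above

def annual_license_cost(installations):
    return installations * LICENSE_PER_INSTALLATION

# Two licensed instances on one host: $30k/year, whether they run as
# two VMs or two LXC containers - so you might as well let each pair
# of containers use all of the underlying hardware.
print(annual_license_cost(2))  # 30000
```

          Since the license cost is fixed per instance, the only lever left is squeezing more of the physical box's CPU out of each instance, which is what dropping the hypervisor bought us.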

          I think containers make good sense for certain use cases, but limitations in the technology prevent them from taking over roles that a fully virtualized stack otherwise provides (I am annoyed by the fact that autofs with NFS does not work in our containers - last I checked it was a kernel issue). I don't subscribe to the notion that you need to be constantly destroying and re-creating containers (or VMs), though. I'm sure that works for some folks; for us, we have had VMs running continuously since the infrastructure was installed more than 3 years ago (short of reboots for patches etc). We have never, ever had to rebuild a VM due to a failure (which was a common issue when we were hosted in a public cloud at the time).

    2. chrismevans

      Re: Same architecture as others... what's different?

      It's pretty simple: flash has issues like finite writes and garbage collection, which affect product lifetime and performance respectively. The better storage solutions (not SANs, because a SAN is a network) optimise the process of writing to and reading from flash, for both performance and longevity. That's the main differentiating factor between Pure, 3PAR, Kaminario etc. Features (data efficiency, snapshots, thin provisioning, replication etc) are becoming (to use an Americanism) table stakes: everyone needs to have them to compete in the first place.

      Efficient flash management means more predictable I/O, better product lifetime and so a more competitive price point.
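      To make the lifetime point concrete, here's a toy model (all numbers are illustrative, not any vendor's specs): garbage collection rewrites data internally, and that write amplification eats directly into how much host data the NAND can absorb before its rated program/erase cycles run out.

```python
# Toy model of why garbage collection matters for flash lifetime:
# write amplification = total physical NAND writes / host writes.
def write_amplification(host_writes_gb, gc_rewrites_gb):
    return (host_writes_gb + gc_rewrites_gb) / host_writes_gb

def effective_endurance_tb(rated_pe_cycles, capacity_tb, waf):
    """Host data writable before the rated P/E cycles are exhausted."""
    return rated_pe_cycles * capacity_tb / waf

# 3,000-cycle NAND on a 1 TB drive, illustrative figures:
print(effective_endurance_tb(3000, 1, 1.0))  # 3000.0 TB with no GC overhead
print(effective_endurance_tb(3000, 1, 3.0))  # 1000.0 TB if GC triples writes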

      Now, you may think performance is a big deal, but going from 5-20ms response times to < 1ms means applications will (initially) see such a performance benefit that most solutions will do the job. In time performance will be an issue again, but it's not a big one now.

    3. shaimaskit

      Re: Same architecture as others... what's different?

      (Disclosure - Kaminario employee here)

      Good questions are being asked here, which confirms that there's a lot of (marketing) noise in the all-flash industry.

      I'll try to address the question - "same architecture - what's different?".

      There's the scalability aspect:

      Scale-up allows adding more capacity under the management and namespace of the array, but without the ability to add compute (== performance) to the array.

      Scale-out allows adding more capacity and compute to the array, which will bump up the $/GB when all you need is more capacity.

      So what's different? Kaminario's architecture can scale up and scale out, allowing customers to grow their storage according to their needs in the most cost-efficient way. The storage vendors you've mentioned are limited to only one way of scaling (and one vendor doesn't really have a product...).
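      A toy $/GB comparison makes the trade-off clear (every price and size below is a made-up illustration, not a real quote):

```python
# Illustrative expansion options: scale-up adds a capacity-only shelf,
# scale-out adds a whole node (controllers + capacity).
SHELF_PRICE, SHELF_GB = 40_000, 20_000    # capacity-only expansion
NODE_PRICE, NODE_GB = 100_000, 20_000     # compute + the same capacity

def cost_per_gb(price, gb):
    return price / gb

# If all you need is capacity, scale-up adds it far cheaper; scale-out
# only pays off when you also need the extra compute.
print(cost_per_gb(SHELF_PRICE, SHELF_GB))  # 2.0 $/GB
print(cost_per_gb(NODE_PRICE, NODE_GB))    # 5.0 $/GB
```

      An architecture that can do both lets you pick whichever expansion matches the bottleneck you actually have.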

      There's the data reduction aspect:

      Dedup and compression are now commodity features - though notice that the storage vendors you've mentioned do not all necessarily support them. Why is it important? You gain more GB for your $, as simple as that.

      So what's different? True, Kaminario is not the only storage vendor to have inline deduplication and inline compression; however, it is the only vendor that can enable or disable deduplication at the LUN level, so you don't waste cycles (== performance) on DB applications whose data doesn't dedupe.
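      The effect of a per-LUN toggle, as a toy model (the LUN sizes and ratios are illustrative assumptions, not measured numbers):

```python
# Toy model of per-LUN dedup: only dedupe-friendly LUNs pay the CPU cost
# and reap the capacity savings; a LUN with uncompressible or unique data
# (a busy DB, say) can simply opt out.
def stored_gb(logical_gb, dedup_ratio, dedup_enabled):
    return logical_gb / dedup_ratio if dedup_enabled else logical_gb

# A VDI LUN dedupes well, so it's on; the DB LUN would barely dedupe,
# so dedup is off there and no cycles are spent hashing its writes.
vdi = stored_gb(1000, 5.0, dedup_enabled=True)     # 200.0 GB stored
db = stored_gb(1000, 1.05, dedup_enabled=False)    # 1000 GB stored
print(vdi, db)
```

      With a global (all-or-nothing) setting you'd either hash the DB writes for almost no saving, or give up the VDI savings entirely.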

      And lastly, how does the MLC-TLC-3D-NAND come into play?

      Flash prices are declining, SSD capacities are getting bigger, and new technologies are being introduced to the market. Integrating these technologies into storage arrays requires them to manage more capacity and metadata using their existing architectures.

      So what's different? Kaminario's architecture is not limited to having all the metadata in DRAM - in other words, it is not limited in the amount of capacity it can manage or the data reduction it can achieve. Combine this architectural differentiation with the ability to scale out, and you have yourself the most cost-efficient AFA without compromises.
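      A back-of-the-envelope sketch of why the metadata-in-DRAM limit bites as drives grow (the block size and per-entry byte count are assumed illustrative figures; real implementations vary):

```python
# Dedup metadata needs roughly one fingerprint entry per stored block.
# Assumed figures: 4 KB blocks, 32 bytes per fingerprint entry.
def metadata_gb(capacity_tb, block_kb=4, entry_bytes=32):
    blocks = capacity_tb * 1024**3 / block_kb  # capacity in KB / block size
    return blocks * entry_bytes / 1024**3      # entries * bytes, as GB

# 100 TB at these assumptions is ~800 GB of metadata - awkward to pin
# entirely in one controller's DRAM, hence paging it to flash or
# spreading it across scale-out nodes.
print(metadata_gb(100))  # 800.0
```

      An architecture that must hold all of that in DRAM hits a capacity ceiling; one that can page or distribute metadata keeps scaling as bigger SSDs arrive.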

      To your more practical questions -

      How many VMs can be cloned? As many as your ESX cluster allows.

      Time to manage LUNs and FC? Reduced to almost none (I guess you do have to create and delete LUNs once in a while...).

      Hope this answers the questions.

      1. ashminder

        Re: Same architecture as others... what's different?

        Kaminario is not the only vendor that can turn data reduction on or off per LUN, so that's not a unique feature.
