Our storage reporter has breaking news about Data Fabrics. Chris?

Recent NetApp Data Fabric literature presages a return of FlashRay and an apparently semi-detached integration of its StorageGRID product. Data Fabric is NetApp's over-arching concept of a virtual fabric with which data in different clouds (stores) can be seamlessly connected across different data management (product) …

  1. Anonymous Coward

    Link to the paper?

    1. dikrek

      Document link

      Folks, Dimitris from NetApp here (recoverymonkey.org).

      Here's the link to the paper:

      http://www.netapponcloud.com/hubfs/Data-Fabric/datafabric-wp.pdf

      As you can see, we are thinking big.

      Data management and solving business problems is where the action is.

      Doesn't hurt that the widgets themselves are awesome, either :)

      Thx

      D

      1. klaxhu

        Re: Document link

        3 things:

        1. if you have to tell people twice who you are, name + title and blog, ...chances are people don't know you or you are not important

        2. if you think big and feel the need to call it out ...it's not a thing anymore! just let us decide whether what you think about is big or not

        3. anyone who tries really hard to sell a new story ends up saying the same thing in the end: Mr Customer, please buy these arrays :-)

  2. Anonymous Coward

    Hiding behind a thin veil of cloud

    and constant repackaging will not save NetApp. They need to change direction.

    http://www.benzinga.com/analyst-ratings/analyst-color/15/10/5942751/ubs-concerned-about-netapp-downgrades-stock-to-sell

    1. dikrek

      Not repackaging

      This is not repackaging. There's plenty of new software doing all this behind the scenes, and we are implementing the same replication protocol across the entire product line...

      See this

      https://youtu.be/HgArpF3W73Y?t=3038

      and this

      https://youtu.be/UluLv_YXx-o

      Thx

      D

      1. Anonymous Coward

        Re: Not repackaging

        Aside from the Cloud Manager GUI, what else is new? Isn't this a single unclustered ONTAP VM with SnapMirror?

        You mean you've announced you'll implement SnapMirror across the product line, which is really E-Series support. The response is... so what? There are SW vendors out there today that can move workloads across and between public clouds without storage vendor lock-in. Look into it.

        1. dikrek

          Re: Not repackaging

          Adding the SnapMirror engine to AltaVault (and more) as well, plus the VMs won't be unclustered any more.

          And there is ALWAYS lock-in. You just have to choose what you want to be locked into, and whether that serves your business needs best.

          Seems to me you missed some of the tidbits in the videos.

          For instance, being able to drag, in a GUI, an AltaVault AWS instance (a backup and recovery appliance) onto an Azure ONTAP instance (a storage OS appliance) and do a seamless recovery - the amount of automation is staggering.

          No other vendor offers that complete flash-to-disk-to-multi-cloud-and-back + automation + backup story.

          Thx

          D

    2. Anonymous Coward

      Re: Hiding behind a thin veil of cloud

      From the UBS link above:

      "All FAS [storage] products scored below zero on a strong-weak scale. NetApp's FlashRay all-flash offering is still not generally available. Most concerning, the Dec quarter outlook registered the lowest score ever."

      That's a pretty alarming statement. Say what you will about features and integration (which, btw, are very nice); the fact is there's a huge secular shift in IT towards simplification and lower cost, among other things, and away from established vendors and the status quo.

      I suppose VC funding over the last 6-7 years has accelerated development to the point where, even though a lot of startups don't have all the features of their larger competitors, they have enough, and they are easy and simple to deploy and use.

      For the larger vendors, adding a feature is easy even if they have to do unnatural things to their architectures. However, simplifying a system and making it easy to use, although it sounds trivial, is a very complex engineering task; it can take years and needs to be part of the architectural DNA, otherwise it becomes a monumental undertaking.

      This shift in IT is real, and creating even more features and integration doesn't necessarily guarantee a path to success for NetApp. If EMC, 3-4x the size of NetApp and with almost unlimited resources, is selling itself because it can't find other satisfactory alternatives for its shareholders, why would NetApp succeed?

      Genuinely concerned about NetApp's future

      M

      1. JohnMartin

        Re: Hiding behind a thin veil of cloud

        - NetApp Employee Opinions are my own-

        Why would NetApp succeed? Because of the exact thing you pointed out: architectural simplicity is something that's astoundingly difficult to achieve, and even more so to maintain over a long period of time. NetApp and EMC are very different organisations.

        EMC had (has?) dozens of products which could never be tied into a single compelling offering. Its success in acquisitions (primarily on the back of one astoundingly good one; there were plenty of bad ones that got little publicity as they failed) simply can't be used as an architectural foundation for something as transformational as Data Fabric.

        NetApp, in contrast, has invested heavily in the underlying architecture and development of ONTAP as the core of Data Fabric, something which addresses a problem of such importance and complexity that the reasons for many of ONTAP's features and benefits are lost on people whose approach to solving their existing infrastructure problems mostly consists of trying to do exactly what they're doing now, only faster and cheaper.

        Many of those "do it faster and cheaper" infrastructures will be moved to hyper-scale cloud providers within a few years, especially in the MSE space, which is where a lot of that VAR measurement is done today (which explains in part the UBS commentary).

        Both EMC and Dell are doing everything they can to fight that inevitable tide, which explains why they are doomed. NetApp, on the other hand, is embracing it, and is focussed on helping customers solve the new problems that the move will cause, helping them avoid lock-in by giving them the choice and agility they need to safely put the data where it can best be turned into a better business decision, or a better customer outcome.

        The first customers for that technology were the large enterprises and cloud service partners building multi-petabyte-scale storage and data management infrastructures, who required a stack of new capabilities that many in the MSE (Medium to Small Enterprise) market don't value yet. But if you look at how OnCommand Cloud Manager works, you'll see an example of the kind of work NetApp has already done to make ONTAP simple and easy to consume: https://www.youtube.com/watch?v=RTBIEfxqC7g - and that is only the beginning.

        To be fair, the IT shops that haven't moved to cloud yet seem happy enough to consume simple products which are subsidised by VC funds, which are in turn funded by more or less free debt. When the free-debt party stops, so will many (most??) of those subsidised products, which partly explains some IPOs and recent mergers.

        Unfortunately, because those simple silos are short-term fixes for what is a long-term problem, they actually introduce or maintain a stack of architectural complexity for those IT organisations (creating complexity externalities via a kind of organisational and process pollution): e.g. one array for Tier-0&1, another for Tier-2, another for backups; separate backup, archiving and replication products; a cloud gateway; some hyper-converged gear; and a bunch of data sitting in AWS and Azure that someone in marketing has bought, where nobody has any idea what's actually in there or whether it's being backed up or encrypted. Each one of those might be simple in and of itself, but the level of integration and automation is minimal, or needs to be crafted in-house. That model isn't scalable, and makes effective governance of an exploding dataset almost impossible.

        Data Fabric solves that complexity. Whether a customer buys NetApp kit to build _their_ Data Fabric is beside the point. The important thing is that businesses begin to treat their data like the strategic resource it can potentially be, and that IT practitioners start thinking in terms of having a small number of unified, scalable and automatable data management practices that cover the majority of their data, regardless of whose infrastructure it's currently sitting on.

        I've got a pretty good view of what NetApp has planned, and I'm not worried at all about its future; actually, I'm kind of excited.

        Regards

        John

        @JohnMartinIT

  3. Anonymous Coward

    Wrong Angle

    The point on lock-in D makes is spot on. In the NetApp Data Fabric scenario the lock-in is gargantuan. In this world, the only software that can move your data between on-premises sites and between cloud providers is available only via NetApp storage hardware! The reality is that nearly every backup vendor has the ability to pick up data (agnostically) and place it where it needs to be. The scenario of moving huge chunks of data between providers is ludicrous. The network data transfer fees you would pay to move data from AWS to Azure (or vice versa) wouldn't be worth the pennies required to store it in the first place. If I place my data in Amazon, Google, or Microsoft, why would I ever move it? This seems to be the point NetApp is missing.

    1. dikrek

      You don't NEED to move your data

      The beauty of the NetApp solution is that it doesn't force you to move your data.

      http://www.netapp.com/us/solutions/cloud/private-storage-cloud/

      Using this scheme, you can burst into multiple cloud providers for compute while your data resides in a colo facility that has fast links to the various cloud providers. There's no need to move around vast amounts of data. Many people like this approach, and use the cloud for what it's really good at - rapidly spinning up VMs.

      Conversely, if you DO want to move some data into the hyperscale cloud providers, NetApp lets you keep it in its native, usable format without needing to do a recovery first. You could then do things like SnapMirror between ONTAP VMs in, say, Azure and AWS, and keep data in its native format WITHOUT needing backup software and WITHOUT needing to do a whole restore...

      It's all about providing choices. NetApp currently provides far more choices when it comes to cloud deployments than any other storage vendor. You can go in as deep or as shallow as you like, and if you decide you don't feel comfortable in the end, repatriating the data is an easy process.

      In addition, there is ALWAYS lock-in no matter how you engineer something. It's either the storage vendor, or the cloud vendor, or the backup tool vendor, or the application vendor, or the operating system...

      Even with totally free tools, the lock-in becomes the free tools. It's not just a cost challenge.

      The trick is in figuring out what level of lock-in is acceptable for your business and whether said lock-in actually helps your business long-term more than it creates challenges.

      Thx

      D

      1. klaxhu

        Re: You don't NEED to move your data

        The market or customers must be stupid then, right? The last year of no growth for NetApp and many of the big storage vendors, the customers moving to any of the storage/converged startups and away from EMC or NetApp & co, the market clearly saying sell NetApp shares. Are they all stupid?

        IBM stopped the N series, and so did other partners OEM-ing NetApp; Dell will stop it too with the EMC acquisition, for sure...

        If you want to survive, create a FAS software appliance and stick it in all possible clouds, or face the inevitable.

  4. Anonymous Coward

    Oh Dimitris.... Yawn...

Not a member of The Register? Create a new account here.