NetApp plays out ASIS dedupe lead

While EMC, Dell, HDS and HP stand impotently by, NetApp is making a killing in virtual desktop infrastructure deals and extending its lead in primary data deduplication, making ASIS run faster and deal with more data. How long can this advantage last? Last week NetApp announced terrific quarterly results, citing virtualisation …

COMMENTS

This topic is closed for new posts.
  1. Dwayne
    Grenade

    Remember Data Domain anyone?

    NetApp doesn't do inline dedup because it could not afford it.

  2. Steven Jones

    WAFL, fragmentation etc.

    "NetApp has no intention of adding write caching, as its testing has demonstrated a lot of overhead and no real-world benefit."

    I thought that WAFL did, in effect, perform write caching anyway as it assembles random writes into what is essentially a sequential one (ideally a complete stripe write). However, surely it can't keep all the clients waiting until it has assembled such a write.

    That said, the observation that WAFL turns random writes into sequential ones is, of course, true. The one problem is that if you have a randomly updated file (such as a database supporting an OLTP application), then that very same WAFL operation will fragment the data. That doesn't matter much if you only ever access the data randomly, but if you want to read it sequentially, those large sequential reads eventually get turned into lots of random ones, and if the back end can't cope with the number of IOPS it can spell trouble. With housekeeping, backup and so on you can overcome much of this with NetApp tools, but if it comes down to application access patterns there is less you can do. It won't happen as much if your data is updated sequentially, but it is the downside of WAFL.
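
    To make that concrete, here's a rough toy model (my own sketch of a generic write-anywhere allocator, not WAFL's actual code, in Python purely for illustration): a file written sequentially and then randomly overwritten ends up with its logical blocks scattered across the physical layout, so a later sequential read crosses lots of discontinuities.

        import random

        FILE_BLOCKS = 1000          # logical blocks in one file
        next_free = FILE_BLOCKS     # next free physical block in a log-style allocator

        # Initial sequential write: logical block i lands at physical block i.
        block_map = {i: i for i in range(FILE_BLOCKS)}

        # OLTP-style random overwrites: each rewrite goes to a fresh physical
        # block at the tail of the "log" rather than updating in place.
        for _ in range(5000):
            logical = random.randrange(FILE_BLOCKS)
            block_map[logical] = next_free
            next_free += 1

        # A sequential read of the file now follows the map; count how often
        # logically adjacent blocks are no longer physically adjacent.
        seeks = sum(1 for i in range(1, FILE_BLOCKS)
                    if block_map[i] != block_map[i - 1] + 1)
        print(f"sequential read of {FILE_BLOCKS} logical blocks hits ~{seeks} discontinuities")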

    However, on the main subject, for de-duping system images it's undoubtedly the best solution at the moment and the "boot storm" issue is a particularly important one where caching with de-dup helps, although it won't get you out of holes caused by lack of controller throughput or network capacity.

  3. Chris Mellor 1

    An EMC-centric view

    Sent to me:-

    "DD BOOST is there to reduce or eliminate the network link between the backup server and the DD Restorer as a bottle neck. DD systems can see a throughput increase because of that but they're never going to exceed the performance of the CPU in the controller as that does everything. We're I a cynical man I'd say NetApp can't do anything about thin network links reducing throughput into their Filers without buying a Riverbed or SilverPeak box while BOOST on the other hand can.

    "Also, dedupe loses out to boring old compression in nearly every use case that isn't backup or VMware.

    http://communities.netapp.com/message/24939

    "My testing confirms what the NetApp guys say in this thread about getting less than 10 per cent reduction, a customer and I got about a 5 per cent reduction after running ASIS on Exchange volumes for what took days, but managed to get a 40 per cent reduction on the same dataset on different volumes by using compression. So people will have to offer more than one data reduction technology as dedupe isn't a magic bullet and the more full copies you have the more efficient it is."

    ... Chris.

    1. 111
      WTF?

      @ An EMC-centric view

      Chris,

      Are you quoting someone else, or expressing your own opinion? Either way, someone is missing the point here.

      NetApp ASIS de-duplicates *primary* data, whilst Data Domain (& neighbouring tools like DD Boost) are de-duplicating *secondary* backup images only. So this is a completely different kettle of fish.

      Also, the post/thread from the NetApp forums you are referring to ain't particularly relevant - it is merely a discussion of whether de-duplication of *primary* Exchange data can make sense. That, however, doesn't change the unquestionable benefits of ASIS de-dupe in a VMware / VDI environment.

      Regards,

      Radek

      1. Chris Mellor 1

        Missing the point

        Re Radek,

        Good point, I think. The pro-EMC source was possibly more concerned with dissing NetApp than with talking about primary dedupe. I merely passed on his note, as he was keen to remain anonymous. It's amazing to me that the other mainstream vendors aren't responding to NetApp's undoubted success.

        Chris.

  4. Anonymous Coward
    Coffee/keyboard

    Odds and Ends...

    For Steven Jones "I thought that WAFL did, in effect, perform write caching anyway as it assembles random writes into what is essentially a sequential one (ideally a complete stripe write). However, surely it can't keep all the clients waiting until it has assembled such a write."

    Your assessment is essentially correct, in that NVRAM is used to provide the same outcome as a write cache. From a timing point of view, striping of writes reduces the time taken to write the blocks out by orders of magnitude, and the assembly is done by alternating NVRAM pages, so random-write clients are not kept waiting. There was a time when the NVRAM was undersized and sequential clients experienced bottlenecks through it, but this has mostly been addressed.

    The alternative, traditional write-cache design requires a much larger cache to absorb client load until, hopefully, quiet times allow the time-consuming random write destaging - what I learnt was called 'write avoidance design' ;-)

    Compression is definitely interesting and a completely different paradigm to deduplication - particularly as, once you compress data, it is hardly likely ever to be a candidate for deduplication. There is also the huge issue of how the data stretches and compacts if you make changes - Storwize has put considerable development into breaking the data into smaller chunks so that random access can be managed. Use of external products such as Storwize is common in NetApp deployments for large-file applications, and there is good anecdotal evidence that it is effective on lower-activity databases. I am not aware of a thorough investigation covering the absolute space-efficiency impact of using compression together with the typical NetApp approach of lots of Snapshots, as there would be a trade-off.
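
    As a quick aside on that first point, here is a toy check (my own sketch, not how Storwize or ASIS actually work): two files that share identical 4KB blocks dedupe nicely before compression, but once each file is compressed as a whole the shared blocks vanish from the block fingerprints.

        import hashlib
        import os
        import zlib

        BLOCK = 4096
        shared = os.urandom(4 * BLOCK)           # 16KB common to both files
        file_a = shared + os.urandom(BLOCK)
        file_b = os.urandom(BLOCK) + shared      # same shared data, still block-aligned

        def block_hashes(data):
            """Fingerprint every 4KB block, as a simple block-level deduper would."""
            return {hashlib.sha256(data[i:i + BLOCK]).digest()
                    for i in range(0, len(data), BLOCK)}

        before = block_hashes(file_a) & block_hashes(file_b)
        after = block_hashes(zlib.compress(file_a)) & block_hashes(zlib.compress(file_b))
        print(f"shared 4KB blocks before compression: {len(before)}")   # 4
        print(f"shared 4KB blocks after compression:  {len(after)}")    # almost certainly 0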

    Just as ASIS/dedupe was introduced in stages, compression has definite advantages in certain applications, and Storwize now, or the compression abilities expected in ONTAP in the future, will become part of the picture.

    @Dwayne - having worked with DD since they had a small office in Palo Alto, and being a big fan of their technology and their efforts to bring it to market, I will offer the following perspective from what I experienced. DD worked well and improved into an excellent product; however, when engaged in five-year plans for clients I always found that my reliance on their product reduced, because I would design towards reducing the behaviours and practices which created the need for DD in the first place. In the end I avoid copying bulk data like the plague, and hate reading my entire array to make a backup.

    Secondly, the $1.5B and later $1.8B which NetApp could afford was a good price for DD, and I believe history will show that the $2.1B which EMC could afford will never be recouped - it has certainly already been largely discounted in the share price valuation when comparing NetApp and EMC. How often have we seen the affluent waste money simply because they can afford to? Still, I am delighted for the nice DD guys who benefited.

    Finally, to the DD Restorer fan - it's a great idea and essential for those who have the problem, but he has no idea how it compares to a SnapMirror-replicated NetApp on bandwidth usage, and it will never come close on restore speed in a DR/BC environment.

    Chris, I have noticed your recent efforts to encourage and share knowledge in your articles and comments; it is hugely appreciated, hence my desire to chip in.

    1. The Storage Alchemist

      Are you kidding?

      $2.1B never recouped? DD will do $1B this year alone.

      1. Anonymous Coward
        Unhappy

        Storage Alchemy? money out of .......

        Dear me - do you really equate turnover with profit? And remember, you need to allow for the profit you would have made by just investing the money, too - so yes, it will take a seriously long time; in my opinion it is not achievable to recoup the $2.1B CASH that was spent on DD.

    2. Dwayne

      The Difference In Dedup

      Anonymous - not sure I would agree with your assertion that somehow $1.8B was a good price but $2.1B was not. Data Domain's global, multisite de-duplication for both an archive tier of storage and backup just makes sense, and it is certainly not to be compared with the local (individual file system), single-array de-duplication offered by others. There is a big difference between de-duping 2-60 individual file systems across multiple arrays and de-duping globally across the enterprise - which is why, if the job is to save on primary storage space, compression on local file systems can work just as well if not better.
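
      A back-of-envelope way to see that difference (the numbers below are invented purely for illustration): the size of the dedupe domain decides how many copies of the same data you end up keeping.

          unique_tb = 10          # truly unique data in the estate
          copies_per_volume = 2   # duplicate copies held inside each volume
          volumes = 6             # volumes / file systems, each holding its own copies

          raw = unique_tb * copies_per_volume * volumes   # 120 TB written
          per_volume = unique_tb * volumes                # local dedupe: one copy per volume = 60 TB
          global_dedupe = unique_tb                       # global dedupe: one copy, full stop = 10 TB

          print(f"raw {raw} TB, per-volume dedupe {per_volume} TB, global dedupe {global_dedupe} TB")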

      Also, not sure what you are getting at by saying you avoid reading the entire array to make a backup - arrays can fail whether there is SSD, FC, SAS, SATA, compression or dedupe. The DD approach is high-speed inline global de-duplication with replication. If you combine an archive tier with your DD environment, you basically get global archive for free, since it's likely the archive has already been backed up.

  5. Storagezilla

    EMC Celerra deduplicates active files on primary storage

    Disclosure: I'm an EMC employee.

    NetApp isn't the only one in the market doing capacity reduction of primary storage.

    EMC Celerra Deduplication will perform single instancing and compression of multi-terabyte active files (VMDK, VDI or otherwise), and it does so at the file level.

    Doing it at the file level means you can force specific files to always be space-reduced regardless of the reduction schedule for the volume, or decide to exclude various file types if you so choose.

  6. 111
    Happy

    @ EMC Celerra deduplicates active files on primary storage

    This, however, probably doesn't deliver much in practical terms in real-life VMware deployments ;-)

    My thought is based on the fact that NetApp is still brave enough to guarantee they need 50% less disk space than their competitors (including EMC) in a virtualised environment:

    http://www.netapp.com/us/solutions/infrastructure/virtualization/guarantee.html

    Regards,

    Radek

  7. The Storage Alchemist

    IT Depends!

    Let's keep in mind the age-old answer in IT - IT Depends!

    First, to say that NTAP is "Killing It" is WAY overplayed. This is someone not getting the facts and just trying to pump technology - in my book that is called FUD. How do I know this? Because I work for a company where 75+% of our data compression solutions sit in front of NTAP. OUR customers are telling us that ASIS gives anywhere between 9% and 15% optimization on primary because there isn't a lot of repetitive data. They are getting 75% in VMware environments with Storwize.

    Now I will admit that if you have hundreds of VM instances (withOUT the users' data stored in the image) then dedupe is best - why compress 100 systems down to 25 when I can dedupe them down to 1? Makes sense - BUT if I put the data in the .vmdk, that blows the theory out of the water. It also raises the question: if I put the data on a file share, why can't I do both - dedupe the .vmdk images, then compress the data? Two great tastes that taste great together, right?
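
    To put rough numbers on that trade-off (the ratios below are assumptions for illustration, not measurements): 100 near-identical system images collapse under dedupe, while a flat 4:1 compression ratio caps out at 75% - and shrinking the shared fraction (i.e. putting user data inside the .vmdk) swings the balance back towards compression.

        vms = 100
        image_gb = 20           # size of each VM's system image
        os_common = 0.90        # fraction of each image identical across VMs

        raw = vms * image_gb                                    # 2,000 GB on disk
        deduped = image_gb + vms * image_gb * (1 - os_common)   # one shared copy plus per-VM deltas
        compressed = raw / 4                                    # assume a flat 4:1 compression ratio

        print(f"raw {raw} GB, deduped ~{deduped:.0f} GB, 4:1 compressed {compressed:.0f} GB")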

    Also, let's keep in mind - I don't care how many resources you throw at it, with deduplication or 'traditional' compression (as opposed to real-time random access compression) you will never get the performance you need (sorry, Zilla) to do compression on primary storage. Now, if you can with Storwize, why not compress - get your 75% data optimization - and then dedupe after, getting even more benefit?

    Remember there is never one right answer in IT. Let's stop the FUD and help users find the most robust solution(s) available.

    1. mriley
      Pint

      I Agree - Somewhat

      I think the implication of NTAP "killing it" had to do with its financial performance which, no matter how you slice it, has been impressive. (see: http://www.theregister.co.uk/2010/05/27/netapp_fy2010/)

      In no small measure, virtualization has helped propel NTAP sales. Virtualization created a new wave of data demands. NTAP doesn't claim to have created the wave - nor does anyone on this list - but we are all trying to ride it. It just so happened that primary storage efficiency technologies - dedupe and non-dupe - were ready at the right time. Good technology; great timing. The customer results in this environment have exceeded the 50% guarantee (you didn't think we were going to make a bet we couldn't win, did you?). We've seen upwards of 98% savings in virtualized environments.

      As an aside, most people focus on commonality of file types and overlook the easiest thing to dedupe: blank space. Right-click on your C: drive right now and tell me how much space you have available - we can represent all of that in a single block. Roll this up into a "dedupe-aware" or, more accurately, "shared-block-aware" Flash Cache, and you end up accelerating a disproportionate amount of physical data (reads as well as writes).
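
      A minimal sketch of the blank-space point (my own toy block deduper, not NetApp's implementation): every zero-filled 4KB block hashes to the same fingerprint, so all of a drive's free space collapses into a single stored block.

          import hashlib
          import os

          BLOCK = 4096
          zero_block = bytes(BLOCK)

          # 10,000 blocks of real data plus 90,000 blocks of zeroed free space.
          disk = [os.urandom(BLOCK) for _ in range(10_000)] + [zero_block] * 90_000

          raw = len(disk)
          deduped = len({hashlib.sha256(b).digest() for b in disk})   # unique fingerprints
          print(f"{raw} raw blocks -> {deduped} after dedupe ({100 * (1 - deduped / raw):.0f}% saved)")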

      This file system-aware (HDW + SFW) solution could only be done on-box. To your point, though, there are always ways to complement any solution and Storwize could certainly be one of them.

      Where I disagree somewhat is with the statement that you will "never" be able to provide enough resources to get the performance you need doing on-array compression/dedupe. Two proof points: a) given Moore's Law and the resulting economics, it's relatively easy to envision having enough cores, cache and processing domains to absorb compression functions on-array, particularly since each vendor will have the opportunity to tune the compression code to run optimally on their kit. b) As a good customer put it to me early in my NetApp career, after I had belabored a point on why Oracle over NFS would meet or beat their current performance over FC: "Son" (BTW, a red flag that your meeting is going sideways is when the customer refers to you as 'Son'), "I don't care if you're 25% faster, I can't tell the difference between 3 ms and 4 ms."

      His point stuck with me all these years - at some point all the technology in the world doesn't matter unless it impacts the business in some positive way. So, you're right: as long as compression negatively impacts the business value of an app, you'll see people looking at ways to improve it, including moving it off-box. As soon as the impact becomes statistically insignificant, manageability will take precedence and, in my mind, customers would prefer to manage fewer devices or, in virtualization terms, fewer VMs. We're so busy now virtualizing anything that isn't nailed down that we may be overlooking the idea that the next wave of consolidation might just be these functions currently running as separate VMs or devices.

      The question remains: when will this next wave of VM/functional consolidation begin? I think it probably already has, but it will certainly accelerate with advances in HDW.

  8. Anonymous Coward
    Boffin

    whoa nellie

    I love how much storage nerds write when they get going. Each comment has to be a complete article - nay, a fully referenced thesis!

    When in fact the main problem is that NetApp aren't being forced into competing, at a time when their competitors should be ripping them apart over the ONTAP 8 upgrade path debacle.

    i r an enterprise user
