Dell's new Compellent will make you break down in tiers... of flash

Dell thinks its Compellent arrays will be more compelling with automated tiering extended to different types of flash, policy-driven deduplication and namespace expansion added to the filesystem, and more drives in a smaller space. The storage boxes' new Storage Center 6.4 software can handle flash tiering and distinguish …

COMMENTS

This topic is closed for new posts.
Anonymous Coward

post process dedupe

is a major fail. If you can't do it inline and across all tiers, then it's just a checkbox on the feature list. If you can't do compression and dedupe inline, you might as well not do it at all.

Anonymous Coward

Re: post process dedupe

Not necessarily. There are valid reasons for all types of dedupe. Inline can slow your apps down. Source dedupe requires massive server farms (ever seen an EMC Avamar server implementation?). Different strokes for different folks.


This post has been deleted by its author

FAIL

Typical Dell announcement. Missing the point and the timing of the technology, and where the market is moving at a very rapid pace. Flash (or SSDs, if you're working with legacy storage vendors) works best as a cache rather than a tier. And their "data progression" software is not a selling feature; I know lots of customers moving off CML purely because of how poorly that code works.

Also agree with the above poster about the FAIL of not having inline compression or dedupe. However, the latter isn't so important for primary storage (unless you work in marketing for NetApp...).

Anonymous Coward

What's with all the pre-release announcements (should be available 3rd & 4th quarter)? Why not tell us when you have something to ship instead of peddling vapourware? SLC to MLC tiering? What exactly is the point? If you're using eMLC, the difference in performance is minimal, so much so that 5 x eMLC drives will likely trump 4 x SLCs, so the larger MLC tier will go faster than the smaller SLC tier anyway. Maybe they're going to do it in reverse? Policy-based variable-block deduplication is available now in Windows 2012 and its storage server variant, along with SMB 3.0 & NFS v4.1.

Anonymous Coward

Agree with Nick and the OP... all this noise for an announcement that's STILL BEHIND the curve, lol.

I do like Compellent, but come on guys, you need to catch up.


dedupe block or file based?

It appears to be file based according to the article but further clarification would be helpful.

I assume this tech comes from Ocarina?


Re: dedupe block or file based?

The technology is variable-block, sliding-window dedupe plus compression, implemented in the filesystem. So it happens at the file layer, but it is definitely not SIS (single-instance storage). And yes, it was developed by the Ocarina team in Santa Clara.
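For readers unfamiliar with the technique: variable-block, sliding-window dedupe generally cuts data into chunks wherever a rolling hash over the last few dozen bytes hits a fixed bit pattern, so duplicate runs line up even when an insertion shifts every offset after it. A minimal illustrative sketch of generic content-defined chunking (not Dell's or Ocarina's actual implementation; parameters are arbitrary):

```python
import hashlib

def chunk_data(data: bytes, window=48, mask=0x0FFF,
               min_chunk=2048, max_chunk=65536):
    """Content-defined chunking: cut where the rolling hash of the
    last `window` bytes matches `mask`, bounded by min/max sizes."""
    BASE, MOD = 257, (1 << 61) - 1
    pow_w = pow(BASE, window - 1, MOD)
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        if i - start >= window:                    # slide the window:
            h = (h - data[i - window] * pow_w) % MOD   # drop oldest byte
        h = (h * BASE + b) % MOD                   # add newest byte
        size = i - start + 1
        if (size >= min_chunk and (h & mask) == mask) or size >= max_chunk:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0                    # reset at boundary
    if start < len(data):
        chunks.append(data[start:])                # trailing remainder
    return chunks

def dedupe(chunks):
    """Store one copy per unique chunk, keyed by its fingerprint;
    the recipe of fingerprints reconstructs the original stream."""
    store, recipe = {}, []
    for c in chunks:
        fp = hashlib.sha256(c).hexdigest()
        store.setdefault(fp, c)
        recipe.append(fp)
    return store, recipe
```

Because boundaries derive from content rather than fixed offsets, two files sharing long runs of data produce many identical chunks, which the store keeps only once.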


Ai... Sick of their promises

We have been struggling with their product for the past three years, and twice upgraded controllers just to resolve issues with the system. Currently running the 6.2 release. We bought into the SSD tiering Kool-Aid, only to find out it does not really work. Data progression only runs once a day, so when the tier fills up, all writes go to disk. <SPLAT> goes performance. When data progression runs, it does not move all of today's writes down to the next tier, so a bigger SSD tier is needed to cover that. But then if you don't have enough in tier 3, tier 2 does not work right.

Then there is another issue we continually battle. When a disk fails, most SANs just mark it as bad, restripe to a new disk and go. Not Compellent: they try to keep a flaky disk running as long as possible. So when it starts burping, the performance of the entire array stalls until the disk settles down. In the logs you can sometimes see which disk is having a problem... Then when it comes time to replace a disk, it takes about a day for the SAN to release it. It moves data off the bad disk onto the hot spare... yes, I typed that correctly. So when the bad disk burps as it is being hit intensively to move data off of it, the SAN can freeze up again.

Of course, we are running over 500 VMs against this SAN, so when it stalls, bad things happen. One of the side effects is that the iSCSI stack on ESXi gets locked up, bringing the hypervisor to a standstill. 80% of the time we can get out safely - that is, with performance similar to a newbie learning to drive a car with a manual transmission. It takes about two hours until the jerking stops. 20% of the time, ESXi locks up so badly that we have to start rebooting hosts... and writing letters to our customers. Joy.


Re: Ai... Sick of their promises

I am very intrigued by Elmars' comments, because we heard almost exactly the same situation from a Dell/vendor customer reference. They drank the SSD tiering Kool-Aid; data progression fills their tier 1 disk and performance comes to a halt. The reference account stated "Dell's fluid data architecture... isn't fluid at all". I appreciated the feedback.

I work for a small to medium sized financial institution and we are currently evaluating three leading SAN vendors (Dell Compellent, EMC VNX, NetApp FAS) to replace an aging Dell EqualLogic environment consisting of six mixed arrays.


This post has been deleted by its author

Anonymous Coward

Re: Ai... Sick of their promises

I've worked with a variety of vendors' storage products over the years. All have their advantages and their Achilles heels.

Here are the questions I would discuss with your Compellent (Dell) systems consultant:

1. The idea is that you should size your fastest tier of storage for at least one day's data change; otherwise you're going to run into these sorts of problems. Why did you size your tier 1 so small? Have you used the reporting inside the Compellent to ascertain what your tier 1 should be sized at, based on historical usage? The reports are very good.

2. If it has to be that small, have you considered running data progression more often? From what I know of Compellent, you can set data progression to run more than once a day if you need to, although you'd need appropriate windows to do so.

3. Is your environment really suitable for data progression if it is this dynamic in terms of rate of change? Not all environments are suitable.

4. Drive evacuation is very fast on Compellent - one of the fastest in the industry (although I know of one other SAN system which is faster). Have you considered using smaller drive sizes, or a better RAID configuration so that this doesn't become an issue for you - RAID-6 instead of RAID-5, for example?
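The sizing rule in point 1 is straightforward arithmetic; as a back-of-the-envelope sketch (the RAID-10 write-overhead factor and the example numbers are assumptions for illustration, not figures from this thread):

```python
def tier1_min_size_gb(daily_change_gb: float,
                      days_of_headroom: float = 1.0,
                      raid_overhead: float = 2.0) -> float:
    """Rough minimum usable tier-1 capacity so a once-a-day data
    progression run never overflows the fast tier.

    raid_overhead models RAID-10 consuming twice the logical write
    size (an assumption for illustration only).
    """
    return daily_change_gb * days_of_headroom * raid_overhead

# e.g. 400 GB of daily change, one day's headroom, RAID-10 doubling:
print(tier1_min_size_gb(400))  # 800.0 GB of usable tier-1 flash
```

The historical-usage reports mentioned above would supply the `daily_change_gb` input; the point is simply that the fast tier must absorb a full day of new writes plus the RAID overhead of those writes.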

AC, because I'm an industry consultant.


Re: Ai... Sick of their promises

Elmars,

Like anything in IT, a Compellent incorrectly sized by a partner or Dell rep can be a big problem, but that is not the fault of the Compellent. You are correct: currently data progression runs once a day, but that is the big change in 6.3.10 or 6.4. Data progression will now be able to run a mini subset of itself multiple times throughout the day; basically, every time you take a replay it will check against your tier 1 and decide if any pages should be moved down to tier 2 in order to free up RAID 10 tier 1 space. Compellent, or any other vendor, cannot help you if you have more data than space on the SAN. Properly sizing a SAN is extremely important and should not be taken lightly.

You should contact Copilot about the performance issues when failing the drive. A good business partner will also be able to help you out. I have had this with some of our customers, but have been able to resolve it with some tweaks.

The reason for the SLC and MLC tiering is that SLC is more expensive and write-happy, while MLC is cheaper but read-happy. SLC comes in smaller drive sizes, 200GB and 400GB, and MLC comes in much larger sizes, 1.6TB I believe. All flash storage is a consumable, so it is important to manage it correctly, and data progression will allow that to happen without you having to manage it yourself.

Enjoy, and contact Bob Fine. Pretty cool that he offered; I have not seen the same from other vendors on here...


Re: Ai... Sick of their promises

John-

Thanks for the response. Interesting to see that you are reading this thread. It is quite amusing to see Sr. Dell executives researching my background in other channels. Did take a day or so to figure out... And you thought I wouldn't notice. :)

We are working with your representatives and (maligned) business partner to resolve these issues. Please take a read through our recent cases to understand the full history of the relationship, and the scope of our frustration.

Please also be aware that our patience is wearing out. Four years of promises and problems. But oddly, the only fixes offered are those with at least five-figure price tags.

Elmars

p.s. So is the fix for the abandoned VMware blocks also two years out? This is burning 2TB/month...


Re: Ai... Sick of their promises

What performance impact should a customer expect to see when running these data progression subsets? Was this introduced due to vendor sizing issues and/or inefficiencies of the SAN array?


Sorry for the issues you are running into. If there is anything that I can assist you with, please feel free to let me know and I would be happy to help.

For support on Twitter: @DellCaresPro


Response from Dell

Elmars and Nick, thank you for your postings. We’d like to have an offline conversation with you and our Copilot support team to better understand this situation and offer suggestions for improvements. We have many customers using Compellent’s autotiering successfully in environments very similar to yours. While we have modified Data Progression in this release to run more than once per day, our bigger concern is addressing your current situation with the current release. Please post your email address or contact me directly at bob_fine@dell.com so we can make the connection to Copilot and work towards resolution.


Post-process vs in-band

We disagree with your assessment, Anonymous. As you know, Dell already ships products with in-band dedupe and compression, so this is not a technical barrier. In-band is an appropriate implementation for secondary storage. In primary storage, the demand histogram invariably follows a 10/90 (or even 1/99) rule. So, like tiering and caching strategies, FluidFS implements a more elegant data reduction design that aligns with the information lifecycle, applying more aggressive savings to the cold 99% while maximizing performance for the hot 1%. And of course your policy settings are adjustable. [Dell storage dev]
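As a rough sketch of what lifecycle-aligned, policy-driven reduction could look like (the thresholds, action names, and fields here are illustrative assumptions, not FluidFS's actual policy engine):

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    days_since_access: int
    size_bytes: int

def reduction_action(f: FileStats, hot_days: int = 7,
                     warm_days: int = 90) -> str:
    """Choose a per-file data-reduction action: leave the hot minority
    untouched for performance, apply progressively more aggressive
    savings as data cools. Thresholds are adjustable policy settings
    (values here are made up for illustration)."""
    if f.days_since_access <= hot_days:
        return "none"                 # hot 1%: keep full performance
    if f.days_since_access <= warm_days:
        return "dedupe"               # warm data: cheap savings
    return "dedupe+compress"          # cold 99%: maximize savings
```

The design choice this illustrates: post-process reduction on primary storage can skip the small, latency-sensitive working set entirely, which an unconditional in-band approach cannot.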
