Pillar rains on EMC's parade

Pillar Data is raining insults on EMC's CX4 usable capacity parade. Chuck Hollis, VP for tech alliances at EMC, reckons EMC's CX4 delivers 70 per cent of its raw capacity to users as usable capacity and pooh-poohs HP EVA and NetApp FAS arrays for only delivering 48 per cent and 34 per cent respectively in the same Exchange …


This topic is closed for new posts.
  1. Anonymous Coward

    I guess he doesn't understand performance....

    Firstly, there is absolutely no guarantee that RAID10 will outperform RAID5. Take the scenario where a full-stripe write is possible: RAID5 will not need to read in the existing data to calculate the new parity, and will just require the data to be written once.

    RAID10 WILL require the data to be written twice, with all the back-end contention that can involve.

    I just love it when these kinds of people make blanket statements to prove their earlier statements.

    Secondly, the duty cycle of SATA is WAY below that of SAS, FC or pSCSI drives. You try to run an IO intensive application on SATA and your disks will be quivering wrecks within days or weeks.

    Thirdly, if they're using RAID10 then the RAID overhead is 50 per cent, not including hotspares, which kind of tramples all over his previous boast of 70 per cent.

    SATA has its place - just not for tier 1 storage.
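    To put rough numbers on the RAID points above, here is a hedged back-of-envelope sketch. The figures and function names are illustrative only, not vendor specs:

    ```python
    def raid5_full_stripe_writes(data_disks: int) -> int:
        """A full-stripe RAID5 write touches each data disk once plus one
        parity write; no read-modify-write is needed because every parity
        input is new data."""
        return data_disks + 1

    def raid10_writes(data_blocks: int) -> int:
        """RAID10 mirrors every block, so each logical write costs two
        physical writes."""
        return 2 * data_blocks

    def raid10_usable_fraction(disks: int, hotspares: int = 0) -> float:
        """Usable capacity as a fraction of raw: mirroring halves whatever
        is left after hotspares are set aside."""
        return ((disks - hotspares) / 2) / disks

    # Writing a full four-block stripe: RAID5 costs 5 physical writes,
    # RAID10 costs 8 - the full-stripe case where RAID5 can win.
    print(raid5_full_stripe_writes(4))      # 5
    print(raid10_writes(4))                 # 8

    # RAID10 overhead is 50% before hotspares, as the comment says.
    print(raid10_usable_fraction(16))       # 0.5
    print(raid10_usable_fraction(16, 2))    # 0.4375 with two hotspares
    ```

    In the partial-stripe case the picture reverses, since RAID5 then has to read old data and parity before writing - which is why blanket statements either way are risky.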

  2. Anonymous Coward

    SATA on exchange, what a joke

    So if you are a small company with 100 users, MAYBE SATA MIGHT be OK. I have seen dozens if not hundreds of organisations running Exchange (usually 1,000-5,000 users) in the enterprise, and NONE of them use SATA due to the performance implications. Any Exchange or storage architect knows that spindle count and response time are critical with Exchange (whether it's EMC, IBM or HP storage, I don't care).

    Pillar please try again.

  3. Hate2Register

    Wow, a real story at last..


    "Chuck Hollis, VP for tech alliances at EMC, reckons EMC's CX4 delivers 70 per cent of its raw capacity to users as usable capacity and pooh-poohs HP EVA and NetApp FAS arrays for only delivering 48 per cent and 34 per cent respectively in the same Exchange scenario."

    Only joking! This is a boring story about two companies arguing about per cent usability. You really should throw away some of those press releases that land on your desk.

  4. Destroy All Monsters Silver badge


    > there is absolutely no guarantee that RAID10 will outperform RAID5

    These are sad times when RAID5 is still on the table.

    > NONE of them use SATA due to performance implications

    I hear the performance implications are not big. Would it not rather be the case that they do not care about any price difference and just go for the tried and trusted SCSI storage? After having been wined and dined by the marketing man/woman, of course.

    Apart from that, this story is about as exciting as bored philosophers arguing about Boltzmann Brains.

  5. Anonymous Coward

    Big SATAs have their place.

    Big SATAs are great for Exchange 2007 local continuous replication; since they're only being used as a backup to the main store, the low I/O performance isn't a problem.

  6. Anonymous Coward

    Re: Hmmm....

    Performance implications are huge.

    SATA drives spin around 7.2Krpm and FC + SAS at 15Krpm nowadays. This rotational velocity makes a big difference.

    SATA doesn't suck as much when the workload is large sequential IO as the inferior mechanisms for head positioning don't have to work so hard. Try random small IO and your performance falls through the floor. SATA head positioning is not the best in the world and it can take a couple of rotations at 7.2Krpm for the heads to settle and start the I or O.
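    The rotational-latency gap described above is easy to quantify: the average rotational delay is half a revolution, and each extra revolution spent settling adds a full turn. A minimal sketch, using assumed spin speeds rather than any particular drive's spec:

    ```python
    def avg_rotational_latency_ms(rpm: float, extra_revolutions: float = 0) -> float:
        """Average rotational latency is half a revolution; if the heads
        need extra revolutions to settle, each one adds a full turn."""
        ms_per_rev = 60_000 / rpm
        return ms_per_rev / 2 + extra_revolutions * ms_per_rev

    print(avg_rotational_latency_ms(15_000))    # 2.0 ms for a 15K FC/SAS drive
    print(avg_rotational_latency_ms(7_200))     # ~4.17 ms for a 7.2K SATA drive
    print(avg_rotational_latency_ms(7_200, 2))  # ~20.8 ms if settling costs two turns
    ```

    That last line is the "couple of rotations to settle" case: under random small IO it dwarfs the headline latency difference.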

    With SATA, you get Native Command Queuing (cf. the lift-optimisation algorithm) with SATA-II, which is a great step forward from the old zero command queuing of SATA-I. However, a bullet-proof implementation requires NCQ plus prioritised command acceptance, meaning "I want this IO to occur NOW and not when it comes round on the rotation".

    SATA-II just doesn't give you this.

    SCSI is pretty much dead if you're talking about parallel SCSI. The price difference between SAS and SATA is shrinking (at a pretty alarming rate), and SAS has all the advantages of pSCSI plus better potential bandwidth (point-to-point vs a multidrop bus).

    Gartner have pSCSI sales down to zero in the next calendar year, with SAS replacing it and biting into SATA's playspace.

    One of the other great things about SAS is it's connector-compatible with SATA-II but not vice versa. So, firmware et al notwithstanding, you can get tiering within a single chassis.

  7. Ian

    When does spindle count, er, count?

    Honesty in Commenting: I'm a happy Pillar customer, although I'm also very fond of my EMC storage, and Mike's one of the most engaging men I've had dinner with in recent years. However, what follows is my long-standing take on storage, formed more during my days as an Auspex customer.

    Spindle count matters (ie you need more of them) when you either have a random read rate greater than the aggregate operation rate of your drives, or you have a sustained write rate ditto over a period of time longer than you can solve with array cache.

    Back in the day, central storage took a pounding on the read side, because clients were mostly very memory poor. Read was hard to optimise away: if it's not in cache (and it usually wasn't) you have no choice but to go to spindle, and then you need lots of them. So my Auspexes used to be something like ~70% read, and most of those reads were serviced from disk. 84 x 36GB RAID5 seemed like a good way to provide ~2TB (or, for those with very long memories, 60 x 1.3GB RAID 0+1 seemed like a good way to provide ~40GB).

    But these days, a not-dissimilar workload is write heavy. The clients have plenty of RAM which means that they rapidly stabilise at a point where the read load they impose is relatively small, and most of the central disk time is spent coping with writes --- the clients, properly, issue writes as soon as they have dirty pages. Write can always be reduced to a cache operation, so long as you have enough cache, and RAM --- even mirrored ECC RAM with battery back-up --- is dirt cheap.

    So yes, if you have a burst of write which exceeds the capacity of your cache to the point that not even the portion that will fit into your cache helps, and not even the write re-ordering that the cache allows helps, you'll need more spindles. But my experience of sizing storage is that most people grossly over-estimate the scale of their problem in terms of duration: you may need oodles of random write performance over a few gigs, or perhaps a few tens of gigs, rarely more.

    If you need to do random writes over many hundreds of gigs such that you can't re-order them in any useful way, then you're going to need spindles, and lots of them. Even then it's an open question if in that environment you're better off with N 300GB FC disks or 2N 1TB SATA disks short-stroked to 300GB. Mike would argue the latter, and he'd argue that you can probably get some mileage, if only for snapshots and long-term archive, out of the 700GB that are not part of your short-stroking.

    So when I look at my arrays (fairly intensive Oracle workloads on some, Clearcase on others) I don't see the limiting factor being the spindles: I see it being the ability of the controllers to keep up with the shorter bursts. And the easiest way to improve that is to throw more RAM at the problem. Spindles may allow more operations to complete within 7ms, but they still won't improve the speed of any individual operation: if it's going to disk, it's going to take 7ms. RAM allows operations to complete (essentially) in zero time. 100 lorries can carry more goods up the M6 than 1 lorry, but it'll take just the same length of time for a given package to get from Rugby to Carlisle.

    Now this analysis doesn't help for read. But if you can do your read in a reasonably sensibly ordered way (as, say, during backup or large table reads), then 100 spindles of 1800rpm ESDI is more than enough bandwidth to be going along with, never mind 100 spindles of anything remotely modern. If you're doing random reads, not so much, and then I agree you need the fastest disks you can get.

    But those random reads are again going to be taking 7ms each, and I would seriously question the overall design of a production environment in which you need to sustain 10K random reads per second. Yes, 100 FC disks will do it, and 100 SATA disks will struggle, and yes the latency will be lower with FC, but still...7ms, 10ms, both are lifetimes in terms of CPU speed, and isn't it God's way of telling you to either put some indexes on your tables or just buy the terabyte of RAM you know you want for your application?
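    Ian's 7 ms arithmetic above can be sketched directly: a disk that services one random operation in t milliseconds sustains roughly 1000/t IOPS, so a target rate divides out into a spindle count. The service times here are the illustrative figures from the comment, not measurements:

    ```python
    import math

    def spindles_needed(target_iops: float, service_time_ms: float) -> int:
        """Minimum spindle count if each random operation ties up one
        disk for service_time_ms milliseconds."""
        per_disk_iops = 1000 / service_time_ms
        return math.ceil(target_iops / per_disk_iops)

    # 10K random reads/s: ~70 disks at 7 ms (FC), ~100 at 10 ms (SATA).
    # More spindles raise throughput but never shorten any single 7 ms read -
    # the lorries-on-the-M6 point.
    print(spindles_needed(10_000, 7))    # 70
    print(spindles_needed(10_000, 10))   # 100
    ```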


  8. Anonymous Coward

    Is SATA bad for Tier 1, or is that just good marketing?

    > Secondly, the duty cycle of SATA is WAY below that of SAS, FC or pSCSI

    > drives. You try to run an IO intensive application on SATA and your disks

    > will be quivering wrecks within days or weeks.

    Google would disagree. They swear by SATA in their data centres and save a huge amount in the process. You can argue about duty cycles all you like, but a lot of people would point to studies like the one released by Google which show otherwise. Whilst a higher percentage of SATA drives might fail early, those that don't will last as long as their far more expensive rivals in the market. The big picture is that replacing the small percentage of SATA drives which fail is cheaper than buying FC etc in the first place - or at least this is what firms such as Pillar and Google believe.

  9. Paul Coen

    @AC - doesn't Microsoft?

    I thought I remembered reading that Microsoft has been using SATA internally with Exchange 2007.

  10. Anonymous Coward

    Exchange on SATA

    Exchange on SATA isn't a big deal. We run about 1,700 heavy-IO mailboxes (7,000 IOPS peak) on an Axiom with no problems (and we're not even using pooled RAID10 yet).

    As long as you've got the spindles and did your math right, it's fine, even besting a CX. Axiom SATA enclosures are fundamentally different from EMC enclosures (individual RAID port connections in an AX vs a loop architecture in EMC).
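    As a hedged sanity check on "got the spindles and did your math right": with a mirrored layout, each front-end write costs two back-end IOs, so 7,000 peak IOPS at an assumed Exchange-ish 50/50 read/write mix works out to roughly 10,500 back-end IOPS. The read/write mix and per-drive figure below are assumptions for illustration, not Axiom specs:

    ```python
    import math

    def backend_iops(front_iops: float, read_fraction: float,
                     write_penalty: int = 2) -> float:
        """Back-end disk IOPS for a mirrored (RAID10-style) layout:
        reads hit one copy, each write costs write_penalty physical IOs."""
        reads = front_iops * read_fraction
        writes = front_iops * (1 - read_fraction)
        return reads + writes * write_penalty

    def sata_spindles(back_iops: float, iops_per_drive: float = 80) -> int:
        """Assumed ~80 random IOPS per 7.2K SATA drive (illustrative)."""
        return math.ceil(back_iops / iops_per_drive)

    peak = backend_iops(7_000, read_fraction=0.5)
    print(peak)                 # 10500.0 back-end IOPS
    print(sata_spindles(peak))  # 132 SATA drives, before hotspares
    ```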

    There is nothing new or exciting on a CX - the Clariion code hasn't changed since it was purchased from Data General in 1999. Half the commands still start with DG.

    EMC needs to move over. There are a lot of new ideas in this field.

  11. Anonymous Coward

    Exchange on SATA = no

    I have seen a largish site in London Docklands moving from SATA to FC due to SATA just being too damn slow, especially after there was a disk failure and the array had to rebuild the data onto the new disk.

  12. Anonymous Coward


    With most intensive systems, you want to be able to dedicate spindles to the job and only that job. With Pillar and other "virtual" storage systems, you lose the ability to do that: you will find each disk is part of tens or hundreds of different LUNs and can be accessed by tens of different hosts, with no way for you to control that. This means the spindles can suffer high contention, and the Pillar answer to this - QoS banding for LUNs (high, medium, low and archive) - doesn't really do enough to help. Usually, the reason for a SAN is performance and availability, so why are people worrying about usable storage? It's like saying "Buy my new book, because the cover is red." It's true, but useless information.

  13. Anonymous Coward

    SATA may not be as bad as you all say

    If Pillar is pushing it, and XIV (developed by Moshe Yanai - anyone recognise his name from the early EMC days???) is as well, maybe it's time to look at it again. Does it go everywhere? No. But it can fit in more places than you all seem open to acknowledging.

  14. Gary A

    Umm How about the controller?

    What most of you are missing in your drive-type pissing match is that the controller and RAID type will have a much more dramatic impact on throughput than the drive type. It's no wonder you see some people have success with SATA and others don't.


Biting the hand that feeds IT © 1998–2019