Word has reached me that EMC's marketing department may not have reacted quite so well to my previous post; if truth be known, I actually toned down what I really wanted to write, to make sure that people I like and have time for didn't catch too much flak. Although I speak only for myself, I know that I am not the …
Still Spindle Bound!
I really don't get the VNX2 hype either!
Using CPU cores more efficiently in a “spindle bound” architecture still leaves it a spindle-bound architecture. In other words, if your performance was directly proportional to the number of high-RPM disks (or expensive SSDs), improved CPU utilisation and more CPU cores may raise the limits of the system, but you will still need exactly the same number of high-RPM disks or SSDs to achieve the same performance.
Re: Still Spindle Bound!
Our biggest problem with the CLARiiON and VNX line was that we kept hitting CPU ceilings, more so on the VNX: the more SSDs in it, the higher the CPU utilisation, because the array is doing that much more work. Many of our arrays were nowhere near full capacity-wise but very busy CPU-wise, so we were always cautious about placing more workload on them.
Remember, with a dual-head architecture you want to keep CPU utilisation below 50%, so that if a head does fail the other can take over the full workload without too much impact. We often sat at 90% CPU, so we knew we would take a hit when we lost a head (and we did).
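The 50% rule above is simple arithmetic: on failover, the surviving controller has to absorb its peer's load on top of its own. A minimal sketch, with purely illustrative percentages (not measurements from any real array):

```python
# Rough failover-headroom check for a dual-controller ("dual-head") array.
# Assumption: the surviving head absorbs the failed head's entire workload,
# and anything beyond 100% CPU simply queues up as latency.

def surviving_head_load(head_a_pct: float, head_b_pct: float) -> float:
    """CPU load on the surviving head after its peer fails, capped at 100%."""
    return min(head_a_pct + head_b_pct, 100.0)

# Two heads each at 45%: the survivor runs at 90% -- tight, but it copes.
print(surviving_head_load(45, 45))   # 90.0

# Two heads each at 90%: the survivor would need 180%, so it saturates
# at 100% and the excess workload backs up -- the "hit" described above.
print(surviving_head_load(90, 90))   # 100.0
```

This is why running both heads at 90% guarantees degradation on failover, whereas 45% per head leaves just enough headroom.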
So really, any improvement in CPU utilisation is very welcome; with the rise of the SSD, the CPU is the new bottleneck.
VNX can NEVER be "spindle bound"
“Spindle bound”, huh?
Not so. FLASH changes that.
Now the controllers are the ones setting the limits on performance. The controllers in a VNX can deliver 1M 8K IOPS to the hosts; that takes MCx™ and 32 cores. What the poster above does not seem to understand is that delivering 1M IOPS with “spindles” would take roughly 5,000 15K drives, at a latency of 9 ms to 12 ms and a power budget of 87.5 kW. That is just plain NUTS! – And besides, the VNX platform only scales to 1,500 drive slots. Thus it is IMPOSSIBLE for a VNX to be “spindle bound”. This is old-school talk!
The world has changed. We now live in a FLASH world. To deliver 1M 8K IOPS to the hosts, allowing for back-end RAID overhead, we would need roughly 160 SSDs. At that point we have reached performance parity between the storage processors and the SSDs. We are NOT “spindle bound” at all (we can still add another 1,340 SSDs). The response time is now under 1 ms and the power budget has dropped to 0.5 kW.
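The drive counts quoted above follow from simple division. A back-of-envelope sketch; the per-device IOPS figures are my own assumptions chosen to be consistent with the post's numbers, not EMC specifications:

```python
# Back-of-envelope check of the drive counts in the comment above.
# Assumed per-device throughput (illustrative, not vendor figures):
#   ~200 random 8K IOPS per 15K-RPM spindle
#   ~6,250 host IOPS per SSD, net of back-end RAID overhead
TARGET_IOPS = 1_000_000

HDD_IOPS = 200
SSD_IOPS = 6_250

hdds_needed = TARGET_IOPS / HDD_IOPS   # spindles required for 1M IOPS
ssds_needed = TARGET_IOPS / SSD_IOPS   # SSDs required for the same load

print(f"15K HDDs: {hdds_needed:.0f}")  # 5000
print(f"SSDs:     {ssds_needed:.0f}")  # 160
```

With only 1,500 drive slots in the platform, the 5,000-spindle requirement can never be met with HDDs, which is the whole point of the argument.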
That is what we want: low latency, high IOPS and low power consumption. With FLASH + MCx™ VNX delivers that.
(and yes, I work for EMC)
Re: VNX can NEVER be "spindle bound"
Everyone in the market can do 1 million IOPS on slideware, but as usual customers are expected to take EMC's word for it. Just as they did with the previous VNX generation, which we now know was severely CPU-bound because EMC didn't have a decent scheduler and in reality could never do more than 170,000 read IOPS flat out.
So if VNX is so fast, why did you need XtremIO?
"a micro-RAID distributed across all disks with improved re-build times" sounds awfully similar to Pillar's Distributed RAID.
You mean like pretty much everyone other than EMC and NetApp?