Another lab queen from IBM!
40 controllers from IBM vs 8 on 3PAR, and 17 management points on IBM vs 1 on 3PAR. IBM should really get its very own SPC category for silly configurations.
IBM's SAN Volume Controller has done the benchmark business, again, and passed the half million SPC-1 IOPS mark using Storwize V7000 storage. The Storage Performance Council-1 benchmark measures the ability of a storage configuration to respond to I/O requests, counting SPC-1 I/Os per second (IOPS). The scenario is said to be …
I'm thinking the SPC-1 benchmark could use some SSD. And maybe I need a raise, because I could double this performance for at most half the cost, with COTS gear. $6.92/IOPS is freaking insanely high. If I bid three dollars per IOPS, can I keep the difference? If so, it's retirement time and then some.
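A hedged sanity check on the $/IOPS claim above. The half-million IOPS figure and the $6.92/IOPS ratio are quoted in the thread; the total system price below is merely implied by those two numbers, not taken from the full disclosure report.

```python
# Back-of-envelope: SPC-1 price-performance is total priced configuration
# cost divided by SPC-1 IOPS. The inputs here are the figures quoted in
# the comment, not audited numbers from the disclosure report.
def price_per_iops(total_price_usd: float, spc1_iops: float) -> float:
    """SPC-1 price-performance: priced configuration cost / SPC-1 IOPS."""
    return total_price_usd / spc1_iops

implied_price = 6.92 * 500_000               # ~$3.46M implied total price
print(f"implied system price: ${implied_price:,.0f}")
print(f"price-performance:    ${price_per_iops(implied_price, 500_000):.2f}/IOPS")
```

On those numbers, undercutting at $3/IOPS for the same half-million IOPS would still be a ~$1.5M system, which is the commenter's point about the margin available.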
Yes, I do this for a living. The opinion here, though, is my own and not anyone else's.
Non sequitur: the only reason enterprise SSD is so expensive is... iPhones and Android phones. If the mobile industry would just stop consuming 85-90% of the global supply of flash storage silicon, no matter how fast the factories are built, the price would drop like the proverbial lead balloon. Eventually this market MUST saturate, but I'm thinking 2015 at the earliest. When it happens, though, it could beat tulip mania as the greatest commodity market collapse ever.
Eight CG8 SVC Storage Engine models used, a 24-port Brocade Fibre Channel switch, and 16 x 2-node V7000 arrays: 1,920 2.5-inch, 146GB, 15,000rpm disk drives in total??
8 x CG8 SVC Storage Engines and 16 x 2-node (32-node) V7000s, that is, 4 nodes... and I wonder if it works as a single cluster, with single management?
Versus an 8-node 3PAR with single management and a single-cluster design.
IBM wins the benchmark and I applaud, but the setup does look far more complex than 3PAR's. Simplicity, anyone?
We have a 6-node SVC cluster and it's pretty easy to configure and set up. The most difficult part is having multiple vendors' arrays and different models under our SVC cluster. We use IBM, HDS, and HP arrays, with about 1,600 different disks under the SVC cluster (yeah, I know, too many vendors).
So just having V7000s underneath the SVC cluster would be easy (and heck, the SVC and V7000 share the same GUI and CLI).
Regarding the SVC clustering: they had a 4-node cluster (8 SVC engines), so each vdisk (LUN) lives in one I/O group, with two SVC engines serving it. If one SVC engine has a hardware issue, all I/O in that I/O group is served by the remaining engine.
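The I/O-group behaviour described above can be sketched as a toy model (illustrative only; the names and structure here are hypothetical, and real SVC node and cluster logic is far more involved):

```python
# Toy model of an SVC I/O group: a vdisk is owned by exactly one I/O
# group, whose two engines both serve its I/O; if one engine fails,
# the survivor carries the whole load for that I/O group.
from dataclasses import dataclass, field

@dataclass
class IOGroup:
    nodes: list                         # normally exactly two SVC engines
    failed: set = field(default_factory=set)

    def serving_nodes(self) -> list:
        """Engines currently serving I/O for vdisks in this group."""
        return [n for n in self.nodes if n not in self.failed]

grp = IOGroup(nodes=["engine-1", "engine-2"])
print(grp.serving_nodes())              # both engines share the load
grp.failed.add("engine-1")
print(grp.serving_nodes())              # survivor serves all I/O alone
```

This is why a single engine failure halves the serving hardware for that I/O group's vdisks but leaves the other I/O groups untouched.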
The reality is that many SVC customers end up with too many vendors, for the very reason that SVC lets you manage, and supports, so many vendors. This makes integrating storage brought in by merger and acquisition significantly easier, and lets you sweat old assets. It also leaves you with a single zero-cost multipath device driver (regardless of how many hosts or vendors) and a single management interface (regardless of how many SVC nodes).
Since it was a 4-node SVC cluster, it would be interesting to see the results with 3PAR as the back-end storage for the same 4-node SVC cluster. That way you could compare the back-end storage of 3PAR vs the V7000 (Storwize boxes run their own flavour of the SVC code). It would also be fun just to mirror a vdisk between each vendor's array and compare the latency between the two.
I wonder what percentage is usable for the price?
From the IBM SPC-1 Full Disclosure Report:
Application Utilization: 34.62%
Protected Application Utilization: 69.23% (?)
Unused Storage Ratio: 28.74% (?)
Look at the Priced Storage Configuration Diagram... it does look extremely complex to me. Tons of connections to deliver that performance.
3PAR SPC-1 Full Disclosure Report:
Application Utilization: 40.23%
Protected Application Utilization: 80.46%
Unused Storage Ratio: 14.53%
I think the efficiency, simplicity and lower complexity make 3PAR the real winner.
Comparing to another SPC-1 result, from HDS:
Application Utilization: 29.25%
Protected Application Utilization: 58.5%
Unused Storage Ratio: 39.42%
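A minimal sketch of how the three SPC-1 capacity metrics quoted in this thread relate (definitions paraphrased from the SPC-1 specification; the GB inputs below are hypothetical, not taken from any of the disclosure reports):

```python
# SPC-1 capacity metrics, paraphrased: all three are ratios against the
# physical storage capacity of the priced configuration.
def application_utilization(asu_gb: float, physical_gb: float) -> float:
    """ASU (application) capacity / physical storage capacity."""
    return asu_gb / physical_gb

def protected_application_utilization(asu_gb: float, protection_gb: float,
                                      physical_gb: float) -> float:
    """(ASU capacity + data-protection capacity) / physical capacity."""
    return (asu_gb + protection_gb) / physical_gb

def unused_storage_ratio(unused_gb: float, physical_gb: float) -> float:
    """Unused capacity / physical capacity."""
    return unused_gb / physical_gb

# With mirrored (RAID-1-style) protection the protection capacity equals
# the ASU capacity, which would explain why every Protected figure quoted
# above is exactly double the Application figure (34.62 -> 69.23,
# 40.23 -> 80.46, 29.25 -> 58.5). Hypothetical GB values:
au = application_utilization(100_000, 288_900)
pau = protected_application_utilization(100_000, 100_000, 288_900)
print(f"{au:.2%}  {pau:.2%}")
```

The remaining physical capacity, beyond ASU, protection, and required spares and overhead, is what shows up as the Unused Storage Ratio, which is where the IBM and HDS configurations compare poorly with 3PAR's 14.53%.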
It delivers 269K IOPS... using 1,152 x 146GB 15K 2.5-inch drives. It can support more, so why only 1,152 drives?
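A rough back-of-envelope answer to "why only 1,152 drives": the roughly 200-250 random IOPS per 15K RPM drive used below is a rule-of-thumb assumption, not a figure from the report.

```python
# Per-drive load implied by the HDS result quoted above.
iops_total = 269_000        # SPC-1 IOPS from the comment
drive_count = 1_152
per_drive = iops_total / drive_count
print(f"{per_drive:.1f} SPC-1 IOPS per drive")
# At a rule-of-thumb 200-250 random IOPS per 15K RPM spindle, ~233
# IOPS/drive means the spindles are already near saturation; adding
# drives without more controller headroom would not have helped much.
```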