There is no way the front end would scale that well
You might see a million IOPS with 8 x V7000 though, two behind each SVC IO Group
Flash strikes again. IBM's newest storage array, the Storwize V7000, has trounced an Oracle/Sun ZFS array, delivering almost the same performance for less than half the price and 4 per cent of the capacity. The SPC-1 benchmark tests the performance of a storage array doing mainly random I/O in a business environment. IBM's …
Basically the IBM SSD is roughly 2x better in IOPS per dollar (probably also in power, not mentioned here), but the ZFS HDD array is roughly 10x better in TB per dollar.
So what usage pattern do you expect to have, lots of users and/or random access, or large sequential file access? Makes a big difference to what you should choose...
And most OLTP databases are sub-1TB in size, so your point is?
Until SSD came along, a lot of LARGE databases were sitting on disks so short-stroked that 146GB 15K FC disks (the smallest you have been able to buy for a while) were running at less than 10% capacity to get the required IOPS.
It is not all large, unstructured data my friend, and most "Enterprise" arrays and applications need dedicated spindles for Tier-1 applications.
Weird of IBM to release such an expensive server with only 3.6TB of storage. That is not usable in practice, is it?
I wonder who would win if Oracle released an SSD-only ZFS server. There are RAM-based disks with really high performance; if Oracle inserted a few of those and got 1TB of space, I believe Oracle would win. But that would be quite a weird server. A 1TB disk? Not usable, and no one would buy it.
Yes, let's hear the "ZFS is the best in the world" story, blah blah blah.
Try telling that to the clients of one of the larger cloud suppliers here in Denmark, who built their cloud infrastructure around redundant Oracle COMSTAR servers with ZFS.
All the clients who didn't have backup.. lost their data.
Feel free to look up the story.
COMSTAR servers? You mean they custom-hacked their own COMSTAR-based Storage? And did they build RAID-0 pools?
In my shop we have hundreds of TB of data sitting on ZFS (albeit zfs pools carved on top of RAID-5 luns presented from our EMC arrays) and have recovered safely from catastrophic EMC array failures without any issues.
So installing the storage-server software package (COMSTAR) on a blade to present ZFS storage through iSCSI to client nodes is "custom-hacked"?
I thought you installed software, configured it and then used it on Solaris. Not custom-hacked it in.
And it was, according to what people have been able to dig up, a hardware failure of the COMSTAR/ZFS node that trashed the lot, not the underlying HDS storage.
Now that is kind of disturbing. Feel free to look at the news articles; they are in Danish, so you'll need to translate.
No system is immune to crashes. Heard of Murphy's law? ZFS is not immune; no system is. But running without backup in production is not an optimal decision. I googled for the links you talk of, but could not find any. Can you please post them?
In Sweden there was a very large outage at the outsourcer TietoEnator which affected several major services. They ran EMC storage, which must be considered Enterprise and reliable. The moral? If someone tells you that a system is immune to errors, they are lying.
The point of ZFS is that it is cheap and very reliable; it is the next-generation filesystem, the future. And ZFS protects your data against corruption, which no ordinary solution does: no hardware RAID, nothing. There are also big ZFS deployments; for instance, Lustre is merging in ZFS now to scale better, and IBM's future 20 Tflop supercomputer will use it, with 55PB and 1TB/sec of bandwidth.
Nexenta (OpenSolaris + ZFS) servers are interesting too. NetApp and EMC are around $1.5 million; Nexenta is $0.33 million, with higher capacity and better reliability.
"...the emc solution failed during the show which is why that nexenta stack ran 8 of 12 labs for the duration of the show. the netapp couldnt have handled the load."
Enterprise storage servers are also built around Solaris tech. I really like the diversity of Solaris.
The story is here:
And yes, there are stories out there of more or less all types of systems and technology failing, and you are always sure to point this out, as long as it does not involve any Oracle stuff. So I thought I'd help you a bit, so that you could come full circle.
"...And yes, there are stories out there of more or less all types of systems and technology failing, and you are always sure to point this out, as long as it does not involve any Oracle stuff..."
And have you EVER pointed out IBM stuff failing? No? Then why do you accuse me of bias?
For the typical application it is unlikely that size/capacity would be the issue; it's maximum IOPS. The article is just noting that the SSDs produced almost the same performance for less than half the cost, making the price/performance ratio much better.
As someone else commented, most of these systems sit there with huge amounts of capacity free, as they are just using lots of (spinning) drives to try to get the IOPS up.
You guys are getting too hung up on acquisition price in your efforts to disqualify a beastly effort.
The point isn't that you only get 4% of the capacity; it's that 4% of the capacity in a V7000 outperformed a frame they had to short-stroke the crap out of. If you honestly think another 96% of capacity is going to double the price of a storage frame these days, then I have a large British landmark to sell you.
This is why SSD is such a disruptive product in storage, but it highlights the real thing you need to look at: it's still very expensive. So you need a system that can leverage your fast SSD and your slow SAS/SATA without great impact on your application. This is where you need to look long term. All-flash is great, but you aren't going to buy an all-flash V7000 if you can buy a hybrid V7000 that automatically moves data between tiers depending on performance/access needs.
In a few years, sure, all SSD. But those few years are still ahead of us.
(Also, since you all brought up price: acquisition price is one thing, operational expense is another. One of those systems runs a lot hotter than the other. These things are more important than ever.)
Yes, we all know SSD is fast; we don't need an SPC benchmark to tell us that. What we need is for IBM to run a hybrid SSD/SAS/SATA benchmark to provide a real-world baseline. Why does IBM continually cook up non-production or niche configurations for SPC testing (see the author's flight of fancy toward the end)? Show us something usable and useful, and stop being so continuously sycophantic about all things flash.
J.T. --- 4% of the capacity in a v7000 outperformed a frame they had to shortstroke the crap out of.
Huh? That is an ignorant statement, if I am reading it correctly.
The ZFS storage does not short-stroke for speed, it uses SSD caching on reads and writes for speed and RAM for additional speed.
The Oracle storage would perform consistently; as you increase usage on the frame, it would not degrade the way short-stroked solutions would.
J.T. --- If you honestly think another 96% of capacity is going to double the price of a storage frame these days then I have a large british landmark to sell you.
Someone who understood the two solutions side by side would be interested in the cost of the IBM solution with a similar quantity of SSD capacity.
A pure SSD solution should be slightly faster than the hybrid approach, but the hybrid approach (for the same quantity of storage) should be less expensive (and more reliable, since rotating rust can sustain more read-write cycles than SSD today).
The unique feature of the ZFS storage appliance is its auto-tiering capability (ZFS functionality): how it can move hot data into the flash cache and DRAM and keep the LRU data on the slower storage. So it's great for certain workloads (I wouldn't run Tier-1 apps on either of these arrays).
Add to that the built-in replication, dedup and compression functionality, and the ZFS array makes a good value proposition.
Like someone pointed out, cost/TB is what will potentially drive the decision-making process. The benchmark lists just the controllers with the 18 x 200GB SSDs, "Base software" and 8 enabled 8GB FC ports at $181K. The Oracle ZFS appliance is listed at $409K with 84TB of storage, providing 137K IOPS (i.e. more than the 120K achieved by IBM).
How exactly is the value proposition for the IBM more enticing than that of the ZFS array?
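To make that value-proposition argument concrete, here is a quick sanity check of the arithmetic, using only the rough figures quoted in this comment ($181K / 120K IOPS / 3.6TB vs $409K / 137K IOPS / 84TB); these are thread-quoted numbers, not official SPC figures:

```python
# Back-of-envelope $/IOPS and $/TB using the figures quoted in the thread.
ibm = {"price_usd": 181_000, "iops": 120_492, "capacity_tb": 3.6}
zfs = {"price_usd": 409_000, "iops": 137_000, "capacity_tb": 84.0}

for name, box in (("IBM V7000 (all-SSD)", ibm), ("Oracle ZFS appliance", zfs)):
    usd_per_iops = box["price_usd"] / box["iops"]
    usd_per_tb = box["price_usd"] / box["capacity_tb"]
    print(f"{name}: ${usd_per_iops:.2f}/IOPS, ${usd_per_tb:,.0f}/TB")
# IBM V7000 (all-SSD): $1.50/IOPS, $50,278/TB
# Oracle ZFS appliance: $2.99/IOPS, $4,869/TB
```

On these numbers the IBM config is about 2x cheaper per IOPS while the ZFS box is about 10x cheaper per TB, which matches the IOPS/cost vs TB/cost trade-off noted near the top of the thread.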
An all-SSD IBM lab queen, at a ridiculously high price per usable capacity (and not enough capacity for anything purposeful), beats another system (with a more realistic performance/capacity configuration) in a synthetic test.
Makes me wonder how idiotic it is possible to be when creating benchmarketing.
So yet another V7000 SPC-1 submission, but to my knowledge IBM have still not done an XIV one ... despite making some ludicrous performance claims when speaking to customers. Makes me think they're scared of what the result would be ... or they already know, and it isn't worth submitting!
It is probably harder to create an SPC-1 benchmark for XIV with the ludicrous numbers you need to put up not to be FUDed because XIV spreads volumes across all of the modules and uses SSD across all of the modules as well. You can't load up an XIV with pure SSD and tune the volume to get the 28 million IOPS to play the SPC-1 games. XIV is built for the real world, not for building absurd SPC-1 configs that will never be shipped to a real user.
True, all of these SPC-1 benchmarks are just games. They are never apples to apples. The configs never look anything like an array that anyone would buy. It is a bit like going to a NASCAR race and deciding to buy a Ford from your local dealer because you saw the same model going 250 miles per hour at the race.
A quick comparison of the IBM V7000 vs Oracle F5100 (both flash arrays)
IBM peaked at 120,492.34 IOPS with 3.5TB of SSDs (a cost of $181,029 – meaning $1.50/IOPS)
Oracle claims to be "well over 1 million IOPS with 2TB of SSDs" (a cost of $187,513 – meaning $0.18/IOPS)
While the Oracle figure is not an official SPC benchmark, it's close to 10x the IBM one.
Who is smashing who now???
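Redoing the $/IOPS arithmetic from this comment (vendor-quoted prices and the "well over 1 million IOPS" marketing claim, not SPC-audited numbers):

```python
# $/IOPS for the two configs as quoted above.
ibm_usd_per_iops = 181_029 / 120_492.34
print(round(ibm_usd_per_iops, 2))      # 1.5 -> the quoted $1.50/IOPS

# Taking "well over 1 million IOPS" at exactly 1M as a floor:
oracle_usd_per_iops = 187_513 / 1_000_000
print(round(oracle_usd_per_iops, 2))   # 0.19 at exactly 1M IOPS
```

The quoted $0.18/IOPS implies slightly over 1M IOPS, which is consistent with "well over 1 million"; either way the gap is roughly an order of magnitude, as the comment says.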
From the article "What would happen if we replaced those 16 disk-based V7000s with all-flash V7000s? Each of the disk-based ones delivered 32,502.7 IOPS. Let's substitute them with 16 all-flash V7000s, like the one above, and, extrapolate linearly; we would get 1,927,877.4 SPC-1 IOPS - nearly 2 million IOPS. Come on IBM: go for it."
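The article's extrapolation quoted above does check out arithmetically, though it assumes perfectly linear scaling, which (as the first commenter notes) the front end would be unlikely to deliver:

```python
# Reproducing the article's linear extrapolation: 16 all-flash V7000s,
# each at the single-box SPC-1 result. Best case only; ignores any
# front-end/SVC scaling limits.
per_box_iops = 120_492.34
n_boxes = 16
print(f"{n_boxes * per_box_iops:,.2f}")  # 1,927,877.44 -> "nearly 2 million"
```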
Checking the docs for the Oracle F5100, which would be an apples-to-apples comparison:
"scalable up to 80 TB and more than 50 million IOPS in a single rack"
No offence but next time you should compare similar technologies rather than comparing a cheetah to a house cat.
So I think everyone is missing the point of what this publication means.
1. The V7000, as a 2U system, can hold its own against flash-only boxes.
2. The V7000, as a 2U midrange box with Easy Tier, has a whole bunch more performance to give over and above any disks it contains...
See my post here, and feel free to comment:
Oracle's ZFS kit used:
8x 512GB SSD
8x 73GB SSD
and 280x 15K RPM disks
Total SSD capacity = 4680GB
Total HDD IOPS = 280 x 300 = 84,000 IOPS just from the HDDs
So the SSDs only contributed: 137,000 - 84,000 = 53,000 IOPS
Which, based on 4680GB, is ~11.3 IOPS/GB
The V7000 used 18 x 200GB = 3600GB of SSD
So that is ~33.5 IOPS/GB
The maths speaks for itself
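The same back-of-envelope in code, assuming ~300 IOPS per 15K spindle and that HDD and SSD contributions simply add, as the post above does (both are simplifications, not measured splits):

```python
# SSD capacity in the ZFS config: 8x 512GB + 8x 73GB.
ssd_gb_zfs = 8 * 512 + 8 * 73       # 4680 GB
hdd_iops = 280 * 300                # 84,000 IOPS from the spindles
ssd_iops = 137_000 - hdd_iops       # 53,000 IOPS attributed to SSD
print(round(ssd_iops / ssd_gb_zfs, 1))   # ~11.3 IOPS/GB for the ZFS SSDs

ssd_gb_v7000 = 18 * 200             # 3600 GB of SSD in the V7000
print(round(120_492.34 / ssd_gb_v7000, 1))  # ~33.5 IOPS/GB for the V7000
```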
This is a public forum; anyone can read what you post here, and it might be saved by someone for the future. I would be very careful talking about stuff I don't have a clue about. Anyone who thinks this does not apply to them can ignore this message. Everyone else is free to contact me if they want to know more.
ZFS Storage Appliance expert @ Oracle EMEA
It's a simple recommendation not to speculate, not a threat.
It is always safer to avoid drawing conclusions unless you are deeply involved and know all aspects and details of the subject, whatever it might be.
I can assure you that the effort that goes into any product from any engineering group is substantial, and anyone making _assumptions_ about products and their behaviour is most likely incorrect.
Furthermore, running a benchmark like SPC-1, SPC-2 or SPECsfs2008 is not trivial. Most likely it is a team of engineers working for months, preparing, tuning and running the tests. Once their goal is met, they will submit their result for review.
We should honour these efforts, whoever does them, whatever the product is and whatever the results are.
Jens, I am pretty sure I've even seen you in RL. And doing what you do (or did), you should know that benchmarking is all about five things:
Benchmarketing, Benchmarketing, Benchmarketing, sizing info and testing.
So surely people should speculate; that is what 80% of the people here do 99% of the time, most of them without a clue.
Now, you are absolutely right that doing benchmarks is a huge task that takes a lot of resources. But when you put your neck out, whether it's you guys, EMC, IBM, NetApp, HP, HDS or whoever, it's at your own risk, and people like Kebabbart will twist and turn any results you have put out there to fit their agenda.
Others will try to make sense of it trying to figure out what can be learned from it.
So if you don't want your benchmarks analysed, misinterpreted, admired, scoffed at whatever, then don't put it out there.
And remember that when you sign your posts with your company's name and your title, you are perceived as speaking for your company.
JF wrote: Others will try to make sense of it trying to figure out what can be learned from it.
Yes. Thank you. Exactly what I want from a public forum.
JF wrote: So if you don't want your benchmarks analysed, misinterpreted, admired, scoffed at whatever, then don't put it out there.
Yes, sure I want. That's not the point. Please dissect, admire or scoff all you want. If you want to FUD, make sure you have a clue.
JF wrote: And remember that when you sign your posts with your company's name and your title. You are perceived as speaking for your company.
"...and people like Kebabbart will twist and turn any results ..."
When have I twisted and turned any results? Can you post any links?
You, on the other hand, have certainly twisted and turned results. I remember our debate on the Niagara T2+ CPU: I showed a benchmark where Niagara had greater throughput than IBM's POWER6, upon which you replied something like "throughput does not matter, POWER6 has lower latency". Later I showed another benchmark where the Niagara T2 had lower latency, upon which you replied something like "latency does not matter, POWER6 has greater throughput". No matter which benchmarks I show, you always find something to nag about, sometimes even contradicting yourself.
And you say that the benchmarks and white papers I show are "twisting and turning facts"? Great. People have even complained that I always post links to benchmarks, white papers, etc.
All those who think you can compare these benchmarks are utterly, utterly wrong. Chris Mellor, equally, you should be ashamed of yourself.
Read the summary:
Total ASU (Application Storage Unit) Capacity represents the total storage capacity read and written in the course of executing the SPC-1 benchmark.
For the ZFS appliance:
Addressable Storage Capacity
For the V7000:
Addressable Storage Capacity
So the working set used by the ZFS server is FIFTEEN TIMES greater.
Perhaps we should be comparing http://www.storageperformance.org/benchmark_results_files/SPC-1/IBM/A00103_IBM_Storwize-V7000/a00103_IBM_Storwize-V7000_2-node_SPC1_executive-summary.pdf
Which is a 2-node Storwize V7000, configured with 240 15K-RPM drives, addressing:
Addressable Storage Capacity
Aha. Now we can compare that to the ZFS box... So what are the figures for this one?
SPC-1 IOPS 53,014.29
SPC-1 Price-Performance $7.52/SPC-1 IOPS™
Total ASU Capacity 24,433.592 GB
Data Protection Level Protected (Mirroring)
Total TSC Price (including three-year maintenance) $389,425.11
Maybe one of the Oracle employees on here should have pointed that one out? Just sayin'