
IBM's XIV roadmap includes multiple frames and InfiniBand

IBM is planning multiple frames followed by InfiniBand links for its XIV cloud storage products, while asserting that petabytes of multiple XIV box storage are very much easier to manage than petabytes in a single storage array. The XIV storage product is a cluster or grid of up to 15 storage and interface nodes linked by …

COMMENTS

This topic is closed for new posts.
Anonymous Coward

The interface sucks.

Sorry, but I don't like the interface with its stupid "I'm a Mac" looks.

Also, still no one has been able to tell me what I'll lose when a second drive fails during a rebuild.

AC, since I've got IBM in here later today... most probably trying once again to sell me a couple of said XIVs.


Interface is awesome!!!

We have one already (love it) and are going to be purchasing another one. Unfortunately I need to get one up and running by Feb 2010, so these new features most likely won't be in the one I purchase next.

The interface is awesome, and this is coming from a CLI guy. Heck, on my HDS arrays I don't even use their interface but work directly with the SNM CLI. But with the XIV, they made things so easy and fast that I seldom use the XIV CLI. I only use my Perl scripts to capture daily activity to post to a website, and that's all.

One thing this article forgot to mention is that they will also be offering more cache in the data and IO modules.

Anonymous Coward

Resilience is far from awesome

So it seems this is the only thing IBM sells these days, so it was no surprise when they came knocking on my door. I was even interested enough to go for a demo, but imagine my amusement when the guy said he could pull out 16 drives and it would still work - and it fell over when he pulled the second one out! Hmmm. It'll be a long time before I'm convinced that this is a Tier 1 array.

Besides, 79TB in a rack is rubbish density - you can do that with FC. Even with 2TB drives it's not amazing, and with twice the capacity you'd presumably want to attach twice as many servers or use the additional capacity with the existing servers, yet there is no additional performance to be had over the 79TB array. Again, doesn't sound very Tier 1 to me. And have you seen how much power these things draw?? Wow!!

I don't see where this is supposed to fit - it's not content optimised, it's not performance optimised. And these proposed changes suggest it's years away from being what I hoped it would be.


IBM XIV looks like a mini-version of Sun's Open Storage...

The IBM XIV looks pretty promising!

This looks a lot like Sun's release of Open Storage years ago (leveraging large form factor SATA drives with the ability to tack on additional frames).

http://www.sun.com/storage/openstorage/

With Sun's Open Storage offering: no RAID write hole, full GUI (showing real-time graphs of drive, link, frame performance), Flash Acceleration (on reads and writes), multiple failed drive support in a striped LUN, block level de-duplication... how close does IBM XIV come to the maturity of Sun Open Storage?


....And HP's EVA

with the way it stripes the content across all the drives.......


Scale out cluster, with limited scale out

It's a cobbled-together scale-out cluster with a severely limited backend, the environmentals are a joke, it hasn't got the performance of Tier 1 and I'd be surprised if it could compete with Tier 2. It has limited enterprise features and poor capacity utilisation due to the clustering's reliance on mirroring. Love the 2TB drive forklift upgrade option - nice and disruptive.

Adding more cache simply means adding more memory to each whitebox server in the cluster; a good percentage of that cache will be used for the Linux OS plus policy and control memory, and then there's cache mirroring - I wonder what you actually end up with. If you're not getting a high cache hit rate then you'll be going to SATA and incurring loooong seek times, which will only get worse the more data you write and the more hosts you attach.

But yeah, it does have a pretty Vista-ish GUI.
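The capacity-utilisation point can be roughed out with a back-of-envelope comparison of mirroring against a parity scheme. This is a hypothetical sketch, not vendor maths: the spare-reserve percentages and parity group size are my own illustrative assumptions, chosen so that 180 x 1TB drives lands near the 79TB usable figure quoted in this thread.

```python
# Illustrative capacity comparison: distributed mirroring (every
# extent stored twice) vs. a RAID 5 parity layout. All overhead
# figures here are assumptions for the sake of the sketch.

def usable_tb_mirrored(drives, drive_tb, spare_reserve=0.12):
    """Mirroring halves raw capacity before the assumed
    spare/metadata reserve is subtracted."""
    raw = drives * drive_tb
    return raw / 2 * (1 - spare_reserve)

def usable_tb_raid5(drives, drive_tb, group_size=8, spare_reserve=0.05):
    """RAID 5 gives up one drive's worth of capacity per parity
    group, plus an assumed spare reserve."""
    raw = drives * drive_tb
    return raw * (group_size - 1) / group_size * (1 - spare_reserve)

if __name__ == "__main__":
    # 180 x 1 TB SATA drives, roughly the full-rack XIV config
    print(f"mirrored: {usable_tb_mirrored(180, 1.0):.0f} TB usable")
    print(f"RAID 5:   {usable_tb_raid5(180, 1.0):.0f} TB usable")
```

Under these assumptions mirroring yields roughly half the usable capacity of a parity layout on the same spindles, which is the utilisation complaint in a nutshell.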


performance is quite good

Performance is quite good on the XIV. Comparing a Tier 1 array with the XIV really depends on the type of workload. For our transaction-processing applications, our Tier 1 storage array (DS8300 ... 480 FC disks with 128 GB cache) is faster than the XIV, but the XIV does fine. On unstructured data, the XIV is faster than our DS8300.

For our Tier 2, we have Hitachi AMS1000s and HP EVA 6000s, and on average the XIV is about 3 to 4 times faster. Now you may be curious how I know it is about 3 times faster or more: most of the data on our AMS1000s and HP EVAs was migrated to the XIV.


The EVA6000 is 3 generations old now!

You'd expect even something optimistically called Tier 1 storage when it isn't to be quicker than a competitive system three generations old, surely? I think the issue with the XIV is that it MIGHT perform OK, for SOME customers, in CERTAIN situations, with PARTICULAR workloads. But do you want those uncertainties from a Tier 1 array? Which then makes you think: would this be a good Tier 2 array? But it's too clunky.

The smaller-capacity XIVs definitely do not perform. It needs the 180 drives or it really is a lame duck. So then you're stuck with a rack of Tier 2 storage that only gives out 79TB usable but makes the lights go dim when you turn it on. There are much more modular, more cost-effective Tier 2 arrays out there. And even though it's SATA, it can't compete with Tier 3 / content stuff as it's not dense enough and too costly to run. So where is it meant to sit?

The multiple-frames thing sounds like a cobbled-together fix - it sounds like you get to manage the boxes together, but they're still two boxes. How can that compete with truly modular scale-out plays? Some people have gone with XIV because it looks a bit different and it's a bit of a funky idea, more have had them basically given to them by IBM, but the vast majority of people have run for the hills when IBM have turned up with their "XIV is the answer, now what's the question" approach.


(untitled)

Pull two drives from different shelves and the XIV will show you it is the fastest array in the world... at losing data.

Kudos to you anyway - I'd never have admitted my 180-drive SATA array is as fast as my 480-drive FC array.
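The two-drives-lost worry above can be roughly quantified. This is my own back-of-envelope sketch, not IBM's published failure analysis: assuming every 1 MB extent is mirrored across a pseudo-randomly chosen pair of disks, it estimates how many extents have both of their copies on any two specific disks - if that number is above zero, losing those two disks before the rebuild finishes loses data.

```python
# Rough estimate (my own assumption-laden model): each extent picks
# one of C(n, 2) disk pairs uniformly at random for its two copies,
# so the expected number landing on any one specific pair follows
# directly from the total extent count.
from math import comb

def expected_shared_extents(num_extents, num_disks):
    """Expected number of extents whose primary and mirror both
    sit on one given pair of disks."""
    return num_extents / comb(num_disks, 2)

if __name__ == "__main__":
    # ~79 TB usable = ~79 million 1 MB extents, spread over 180 disks
    shared = expected_shared_extents(79_000_000, 180)
    print(f"~{shared:.0f} extents mirrored on any given disk pair")
```

Under these assumptions a full box puts thousands of extents on every disk pair, so any concurrent second failure almost certainly hits data - the counter-argument being that the rebuild window is short because every disk participates in the rebuild.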


Get blasted when informing how it works

First off, I was informing those who have never seen or touched one. I was trying to explain how it works compared to the other storage arrays I administer, and it seems that there is a biased opinion out there against it. If you don't like it, fine, but wouldn't those who haven't touched one like to know how it performs?

Second, the DS8300 will be faster on many applications but might not be on others compared against the XIV. The hospital I work at has lots of unstructured data, and quite honestly the XIV performs quite well with this type of data. Also, the DS8300 is like other traditional storage arrays, and the one at my shop has 480 disks in 60 RAID groups (either RAID 5 or 10). I myself like to wide-stripe LUNs, though not across all 480 disks of course, so there is still the potential to have hot spots in this array. We monitor for this, and when issues arise we start moving LUNs around to different disks. The XIV, by contrast, doesn't have any hot spots, since 1 MB extents are written across all of the disks, and so far the application teams and database admins like the performance they are getting.
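The hot-spot claim above can be illustrated with a small simulation. A sketch only - the uniform pseudo-random placement below is a stand-in of my own, not IBM's actual distribution algorithm - but it shows why spreading fixed-size 1 MB extents across every disk evens out load, unlike pinning a LUN to one RAID group.

```python
# Simulate spreading a LUN's 1 MB extents (primary + mirror copy)
# pseudo-randomly across all disks, then measure how even the
# per-disk load ends up. Placement policy here is illustrative.
import random
from collections import Counter

def place_extents(num_extents, num_disks, seed=42):
    """Assign each extent's two copies to two different disks
    chosen pseudo-randomly; return per-disk extent counts."""
    rng = random.Random(seed)
    load = Counter()
    for _ in range(num_extents):
        primary, mirror = rng.sample(range(num_disks), 2)
        load[primary] += 1
        load[mirror] += 1
    return load

if __name__ == "__main__":
    # ~100 GB LUN = ~100,000 1 MB extents, over a 180-disk box
    load = place_extents(100_000, 180)
    busiest, quietest = max(load.values()), min(load.values())
    print(f"busiest disk holds {busiest / quietest:.2f}x the quietest")
```

With enough extents the busiest disk carries only a few percent more than the quietest, which matches the "no hot spots" behaviour described above.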

Now, I myself compare the XIV more to a higher Tier 2 array than a Tier 1, and it is true the EVA6000 is a few years old; to be fair, I'm sure the latest version would perform comparably to the XIV. However, the HDS AMS1000 we have is only 6 months older than the XIV, so I suppose that is a closer comparison, even though the AMS2500 is now out (we had to purchase the AMS1000 last December and couldn't wait for the AMS2500 release). We will be purchasing an AMS2500 soon as well, so maybe I can post my findings later.

Also, I have noticed where the XIV doesn't perform well: for those who do large sequential reads, better to stick with a traditional array. But for lots of random reads and writes (databases, VMs, medical records) it does quite well for a high-end Tier 2.
