Isilon filer guarantees 80 per cent utilisation

Clustered filer supplier Isilon is offering a guarantee that its filers will give you more than 80 per cent storage utilisation, up to 3.45PB of file space, and cost less than a dollar a gig. In comparison, Pillar Data offers an 80 per cent storage utilisation guarantee on its Axiom block-level storage arrays. NetApp, …

COMMENTS

This topic is closed for new posts.
  1. Eddie Edwards
    Dead Vulture

    Eh?

    So Pillar guarantee 80%, meaning that with engineering tolerances they know they can get better than 80%.

    Isilon guarantee > 80%, meaning at least 80.000001%.

    In practice the two guarantees are identical. Isilon's > 80% guarantee doesn't sound unique at all.

  2. Anonymous Coward
    Stop

    80% utilisation?

    Don't most arrays offer 100% utilisation? Yes, you're consuming some of the space for RAID, performance, snapshots or whatever, but you're still choosing to consume it to get the enterprise features you need.

    If you just want maximum capacity, then it's better to purchase drives from your local store and format them yourself. People shouldn't care (too much) about usable-to-raw capacity ratios or whether a 1TB drive gives 1TB or 1000GB or some other random number. It's all about the business requirements and cost, not about how many bits fit on a piece of spinning rust.
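
    For what it's worth, the "1TB drive gives 1TB or 1000GB" confusion is just decimal versus binary units, before any RAID or filesystem overhead. A quick sketch (Python, purely illustrative):

        # Drive vendors sell decimal terabytes; most operating systems report binary units.
        marketed_bytes = 1 * 10**12            # the "1TB" printed on the box
        gib = marketed_bytes / 2**30           # what the OS typically labels "GB"
        tib = marketed_bytes / 2**40
        print(f"1TB (decimal) = {gib:.0f} GiB = {tib:.2f} TiB")   # ~931 GiB, ~0.91 TiB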

    It's better to ask vendors to tell you what will meet your requirements:

    By application, indicate how many GB it's going to need. What is its performance profile? Is it random, sequential, or a mixture, and how much? Which of the enterprise storage features does it need? How important is this application? Are there any specific requirements it has, such as must be fibre-channel, can't use this specific HBA, must be NFS? (A rough sketch of such a per-application profile follows below.)

    Most vendors will help you do that, for a cost, as will most resellers. Just be careful that the results aren't focused on one particular vendor's technology.

    Check that there aren't any existing spare systems in the enterprise that you could use or re-deploy.

    Then ask the vendors to take that list of applications and performance metrics and tell you how much it's going to cost, what physical size (rack units, floor tiles), what power, what heat. Then ask them to tell you how that configuration manages to meet all your requirements.
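
    As a minimal sketch of the kind of per-application profile described above (the field names are illustrative only, not any vendor's actual questionnaire):

        # One record per application; hand the whole list to vendors and ask them to size against it.
        application_profile = {
            "name": "example-database",                      # hypothetical application
            "capacity_gb": 500,                              # how many GB it is going to need
            "io_pattern": "mostly random, some sequential",  # performance profile
            "peak_iops": 4000,
            "features_needed": ["snapshots", "replication"],
            "criticality": "high",
            "constraints": ["must be fibre-channel", "can't use this specific HBA"],
        }
        requirements = [application_profile]                 # ...plus one entry for every other application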

    Is this rocket science?

  3. Nate Amsden

    Rocket science

    I think it's more rocket science than you describe.

    My company, for example, has just retired a 150TB BlueArc storage system. It still works, but it's no longer cost-effective to operate as it requires $100k/year in floor space and power costs. Add the support costs for the disks ($60k/year) on top of that, and the front-end NAS units are end-of-life. So look closely at those other storage systems before deciding to re-deploy them.

    I can't find anyone who will pay anything for this thing.

    And as for capacity utilization, it is incredibly important. I'm a 3PAR fan myself, of course, but just take a look at these published SPC-1 numbers:

    3PAR F400 (mid-range) - 384 146GB 15k RPM Fibre Channel disks - tested 4/27/09
    System cost: $548k; 93,000 SPC-1 IOPS ($5.89 per IOP); 27TB usable capacity ($20k per usable TB)
    http://www.storageperformance.org/benchmark_results_files/SPC-1/3PAR/A00070_3PAR_F400/a00079_3PAR_InServ-F400_SPC1_executive-summary.pdf

    Pillar Axiom 600 (mid-range) - 288 146GB 15k RPM Fibre Channel disks - tested 1/13/09
    System cost: $571k; 65,000 SPC-1 IOPS ($8.79 per IOP); 10TB usable capacity ($57k per usable TB)
    http://www.storageperformance.org/benchmark_results_files/SPC-1/PillarDataSystems/A00073_Pillar_Axiom600/a00073_Pillar_Axiom-600_SPC1_executive-summary.pdf

    Hitachi AMS 2500 (mid-range) - 352 146GB 15k RPM SAS disks - tested 3/24/09
    System cost: $600k; 89,500 SPC-1 IOPS ($6.71 per IOP); 15.9TB usable capacity ($38k per usable TB)
    http://www.storageperformance.org/benchmark_results_files/SPC-1/HDS/A00078_Hitachi-AMS2500a00078_HDS_AMS2500_SPC1_executive-summary.pdf
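
    To make the price/performance arithmetic explicit, here's a quick back-of-the-envelope check of those ratios (a sketch only; the cost, IOPS and capacity figures are the rounded ones quoted above, so the last digit can differ slightly from the executive summaries):

        # (system cost in $, SPC-1 IOPS, usable TB) as quoted above
        systems = {
            "3PAR F400":        (548_000, 93_000, 27.0),
            "Pillar Axiom 600": (571_000, 65_000, 10.0),
            "Hitachi AMS 2500": (600_000, 89_500, 15.9),
        }
        for name, (cost, iops, usable_tb) in systems.items():
            print(f"{name}: ${cost / iops:.2f} per IOP, ${cost / usable_tb / 1000:.0f}k per usable TB")
        # The F400's 27TB usable also lines up with mirroring: 384 disks x 146GB / 2 is roughly 28TB raw mirrored.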

    Since Isilon is NAS there aren't any SPC numbers for them, and they haven't published any recent SPEC SFS numbers either.

    There is 'overhead' associated with RAID 1 (all of the systems above are RAID 1), but that overhead is not wasted: both mirror members can be read from in parallel, improving performance.

    HDS was out here last year with BlueArc pitching their AMS2500 against a 3PAR T400 which I had configured. They matched the usable space, but the amount of performance you get from their system for the same amount of usable space is a small fraction of what you get on a 3PAR system (I think Compellent is somewhat similar). The HDS AMS2500 system above required about 14 pages' worth of configuration for host+array; the 3PAR was about 3 pages. You can't build an AMS2500 that can compete with the F400 on performance plus usable capacity - the AMS2500 doesn't support enough disks. The T400 is more scalable and slightly higher performing (about 25%) than the F400.

    As for your applications: how big will they get? What type of I/O? How many IOPS? BlueArc gave us these complicated forms last year, when all of our stuff was on their NAS, to try to help them build a solution. Guess what: those numbers aren't easy to get. In fact we didn't get them; there was no way to extract performance data from the system on a per-application basis. We could get space usage, but that's it. Remember this is NAS - you can't run iostat and get the physical IOPS of an NFS volume. I couldn't even get them to tell me how many physical disk IOPS the system was using as a whole! About all we had was the BlueArc front-end IOPS to go by, and those of course do not map directly to back-end IOPS.

    And you know what? I want a system that can scale to my as-yet-undetermined requirements without breaking the bank. There aren't many systems out there that can do it cost-effectively.

    The solution I ended up with was a 3PAR T400 with 200x750GB SATA disks, utilizing wide striping, block-based virtualization that leaves no spindle untouched, lots of fiber ports (up to 96 host+disk), high density (240TB in the first rack, 320TB in additional racks), and the ability to scale linearly by re-striping data online across all spindles. SATA disk performance on a 3PAR is roughly on par with 10k RPM disk performance on other systems; the architecture of the system makes *that* much of a difference. Add the fact that 3PAR RAID 5 is roughly only 9% slower than RAID 1 (as tested by Oracle), and the performance advantages are even bigger.
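
    To put rough numbers on the RAID 5 versus RAID 1 capacity trade-off for a spindle count like that (the 7+1 set size below is an assumption for illustration, not the actual 3PAR layout, and spares are ignored):

        disks, size_gb = 200, 750
        raw_tb = disks * size_gb / 1000                 # ~150TB raw
        raid1_usable_tb = raw_tb / 2                    # mirroring keeps half
        raid5_usable_tb = raw_tb * 7 / 8                # assumed 7+1 parity sets
        print(f"raw {raw_tb:.0f}TB, RAID 1 ~{raid1_usable_tb:.0f}TB, RAID 5 (7+1) ~{raid5_usable_tb:.0f}TB usable")
        # ~75TB mirrored versus ~131TB with assumed 7+1 RAID 5, for the roughly 9% throughput cost cited above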

    Front-end NAS is provided by a 2-node Exanet cluster (expandable to 8 nodes at the moment).

    I'm confident that the T400's specifications will be extended to support up to 1.2PB of space and 1,280 disks in a single system in the not-too-distant future. And Exanet has 16-node clusters in testing; the number of disks required to drive that level of performance from such a cluster is very high. We can double our disk count and still be within a 2-node cluster.

    What this means for us is that we will be able to expand linearly, up to 8x our current usable storage/performance on the 3PAR side and 8-fold on the Exanet side, without migrating to a new system and without any downtime. And it's not like we're exactly starting small, with roughly 100TB of usable space to begin with.

    We looked at Isilon as well, and we just didn't want a system so inflexible that it could only do NAS. We wanted the ability to run both fiber/iSCSI and NFS/CIFS. We use fiber for our VMware and database servers, and NFS for the file storage and data processing.

    The BlueArc guys knew their NAS inside and out, but their solution fell apart when they put the HDS up against the 3PAR. They were obviously used to dealing with people who were fairly simple-minded, as the HDS engineers had a real hard time answering questions, even simple ones like: how many IOPS can the system give me AFTER RAID overhead? How much of a performance hit is RAID 6 on your system?

    I had a fairly lengthy discussion with NetApp as well, who were going to provide me with pricing on their GX product line, the closest competing solution they could offer, but in the end I never heard back from them. I've since been told they have a hard time marketing the GX since it's so different, or something.

    So what did we end up with? After months of planning and negotiations, we ended up with the solution I proposed. I had to fight tooth and nail for it, but we got it. And what is the result? We've matched the previous system's raw capacity and gotten dramatically higher net usable capacity than the previous system, because the system is more balanced and we don't have 6 pools of storage that we can't change. The system is more reliable (those tier 3 storage systems are shoddy), more flexible (it runs both SAN and NAS over the same spindles), and easier to manage and monitor (we collect more than 8 million data points a day from the 3PAR alone). We have also gotten much better performance, and the ability to grow up to 8x more, linearly, by re-striping data across all disks online.

    In the process we have cut the number of disks by more than half (500+ down to 200), cut the floor space roughly in half, from 4 racks to two and a half (soon one and a half), and cut the power usage in half.

    Given the lack of performance data from the previous system, I have to admit that my configuration proposal was very much a shot in the dark as far as what we would need, but fortunately it turned out to work very well for us. We push the disks very hard, and the large caches on the system are able to absorb that and still give good response times (avg 21ms read, 1ms write on the host fiber ports). The spindles average 9,600 disk IOPS and 378 megabytes/second. The controllers in the system are rated for 160,000 disk IOPS and 3.2 gigabytes a second of throughput to the hosts, so there's a lot of room to grow into.
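
    Putting that headroom side by side (just arithmetic on the figures above; the 378MB/sec is disk-side, so it is only a rough comparison against the host-side throughput rating):

        current_iops, rated_iops = 9_600, 160_000
        current_mb_s, rated_mb_s = 378, 3_200
        print(f"IOPS: {current_iops / rated_iops:.0%} of the controller rating in use")
        print(f"Throughput: {current_mb_s / rated_mb_s:.0%} of the rating in use")
        # roughly 6% and 12% respectively - plenty of room to grow into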

    And the cost? It really wasn't much more than maintaining the existing system over a 3-year period (while getting new front-end NAS units so we could stay in support), and that's not taking into account the severe limitations in scalability and flexibility we had with the existing system. It was a no-brainer of a purchase decision.

  4. Anonymous Coward
    Thumb Up

    Isilon proposal

    We had some discussions with Isilon for the tender for our new 50-100TB system, and to be fair they gave by far the best technical answers we got from anyone who replied. We needed only NAS, so the FC/iSCSI aspect was never a concern. Unfortunately, being a cheapskate academic facility, we found they were out of our budget range (as were most), but they are well worth speaking to.
