HP: Our 3PAR kit can cram twice as many VMs into your server

HP is so confident its 3PAR storage arrays will double a server's virtual machine count that it's guaranteeing it - and will pay for any extra 3PAR storage needed, beyond what it replaces, to reach the 2x VM count. The HP Get Virtual Guarantee Program states uncompromisingly: Clients deploying HP 3PAR systems with …

COMMENTS

This topic is closed for new posts.
  1. thegreatsatan
    Thumb Down

so buy $500K+ in storage/software and we will throw in a few free ProLiants. #yawn

  2. Matt Bryant Silver badge
    Pirate

    Spidey-sense tingling.....

    /looks for the small print.

  3. Anonymous Coward
    Anonymous Coward

    Clustered Pairs < Grid Cluster

    3PAR has the same issue as EMC VMAX and IBM V7000. You are still dealing with a two controller array; now they have paired or federated two controller arrays, but the problems of a two controller array still exist (the more disk you put on it, the more fighting there is for cache and CPU, and the slower everything gets). XIV, now IBM XIV, is the only architecture which solves the dual controller issue with true modularity across CPU, IO, cache/memory and disk. XIV groups storage into 12-disk modules, each with its own dual controller, cache, etc. It then uses a snappy algorithm to spread data across every disk in the array. It then takes all incoming IO requests and serves them in parallel, with all of the nodes/controllers working on IO in a large virtual pool of not only disk, but cache and CPU. That is why XIV will always smoke VMAX and 3PAR in performance even with SAS disk.
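
    To make this concrete, here is a toy sketch in Python of that kind of pseudo-random chunk placement (my own illustration, not IBM's actual algorithm; the module count is an assumption):

    import hashlib
    from collections import Counter

    N_MODULES = 15  # data modules in a full rack (assumption)

    def place_chunk(volume_id, chunk_no):
        # Hash each 1MB chunk to a primary module; mirror it on a
        # different module so a module failure never loses both copies.
        digest = hashlib.md5(("%s:%d" % (volume_id, chunk_no)).encode()).digest()
        primary = digest[0] % N_MODULES
        mirror = (primary + 1 + digest[1] % (N_MODULES - 1)) % N_MODULES
        return primary, mirror

    # Even a single volume spreads across every module almost immediately,
    # so all disks, CPUs and cache serve every workload in parallel.
    load = Counter()
    for chunk in range(10000):
        p, m = place_chunk("vol1", chunk)
        load[p] += 1
        load[m] += 1
    print(sorted(load.values()))  # roughly uniform across all 15 modules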

    There must be some fine print in the HP 3PAR offering, because there is no way it is going to deliver 2x denser VM servers than XIV. If you are using previous generation storage - EVA, DS5, CLARiiON - then it might work, as you probably have a serious IO bottleneck. The challenge assumes that IO from disk is the bottleneck; usually memory is the bottleneck in large VM servers. Although, maybe HP doesn't really care if it has to throw some free memory and additional disk arms at people if it gets them the 3PAR sale. The cost of some extra disk and memory is relatively minor compared to the cost of a new SAN.

    1. Man Mountain

      Re: Clustered Pairs < Grid Cluster

      So you're aware that 3PAR isn't a dual controller array but can scale to 8 active-active nodes, right? And that 3PAR has the world record SPC-1 benchmark, whereas XIV is the only array IBM haven't submitted? Strange that!

      1. Anonymous Coward
        Anonymous Coward

        Re: Clustered Pairs < Grid Cluster

        Yes, 3PAR can scale to 8 active controllers, but they are clustered pair controllers connected through 3PAR's custom ASIC IO mesh (similar to the VMAX or V7000 architectures). I believe they are still connected using 8Gb FC. You could have 300 TB distributed behind each pair. XIV, on the other hand, takes a grid distributed cache approach where each group of 12 SAS disks has its own CPU and 24 GB of memory. All of the 12-disk modules are Infiniband connected and handle IO in tandem. While 3PAR also handles IO in tandem, there is no guarantee that each 12-disk node has its own CPU and cache, so you can choke a controller by putting too many volumes behind it. You still have to plan and add controllers, or a new node to the cluster, when you start to pile on the disk. In XIV, you don't need to do that, as the system will not allow you to overburden a controller/module.

        Benchmarks are a waste of time. Everyone just short strokes the disks or piles on unrealistic amounts of SSD. It is like a NASCAR race where a Ford 500 is going 250 miles an hour, with Ford then claiming that Ford 500s go 250 miles per hour. True, but it is a Ford 500 you will never see on the street.

      2. Anonymous Coward
        Anonymous Coward

        Re: Clustered Pairs < Grid Cluster

        The primary difference between XIV and 3PAR, VMAX, V7000, etc is that XIV puts CPU and cache down in the pool and tightly couples it with disk/SSD. 3PAR and the other clustered pair architectures are certainly better than old school dual controller arrays as you can scale beyond two controllers through the ASIC mesh, but they are not XIV. XIV is simple and powerful.... XIV is actually very similar to a Hadoop style cluster where you have a bunch of commodity x86 servers with local disk all interconnected together and the power of parallel processing does the heavy lifting.

        1. Anonymous Coward
          Anonymous Coward

          Man, that sounds just like LeftHand but without the flexibility! Someone should tell HP to package xx P4000 nodes in a rack, include switching and UPS, remove the option to scale individual nodes, only offer one disk type, then rebrand the whole lot as enterprise, and you've got yourself an XIV. So far you've compared V7000, which is midrange, VMAX, which is enterprise, and 3PAR, which can address either market with a single architecture, against the one trick pony of XIV. Your comparison isn't looking so good, and neither is your credibility.

          1. Anonymous Coward
            Anonymous Coward

            I am comparing the architectures, not every feature/function. VMAX undoubtedly has more feature/functions and scale than V7000, XIV or 3PAR. The way 3PAR, V7000, and VMAX federate controllers in a matrix is similar. That is all.

            I have never looked into Lefthand in detail. I know it used to be an iSCSI array, but that has probably changed at this point. If it works like XIV, you should tell them to do it. Btw, you can scale individual nodes in XIV after the base config required for redundancy.

            XIV is a one trick pony. That is what's so nice about it. There is no administrative monkey work or unnecessary planning. The people whose jobs involve a lot of the administrative work that XIV takes care of are obviously going to look for reasons that it won't work and that they need to continue planning volume sizes, intermixing disk/SSD for LUN performance, assigning priority to workloads, etc. XIV provisioning can be done by a child in about 2 minutes: assign a LUN name and map it. Done. All of the performance stats, mirroring, redundancy and thin provisioning (if you want to use it) are integrated into the system. You can take some redirect-on-write (no performance degradation) snaps if you need them. XIV basically manages itself and pulls enterprise grade performance - not insanely high performance, but enterprise grade. It will do 50,000 IOPS at 15ms in a standard OLTP environment without any SSD, tuning, or anything other than turning the system on. For most companies, that is more than enough for every workload in place.

            If you are running an application which requires a million IOPS with under 15ms response time, XIV is not for you. If you want to scale across petabytes under the same management, XIV is not for you (although having multiple 150 TB usable arrays is not a big deal for all but the largest companies... most don't even have 150 TB). If neither applies, then it will probably work very well.

            1. Anonymous Coward
              Anonymous Coward

              But hold on, 50,000 IOps? You said XIV would smoke both VMAX and 3PAR, which can do 450,000, non short stroked and without SSD. By your own reckoning, even that pesky dual controller V7000 smokes the XIV.

              http://www.theregister.co.uk/2011/04/26/pillars_spc_1_result/

              BTW, the mid range 3PAR F400 smokes both at ~90,000 IOps, and yes, you're correct, P4000 LeftHand is iSCSI only, but as you've already stated, we're not comparing features, just architectures.

              Also, your concept of how federation operates is incorrect: on 3PAR, federation is between clusters, not within a cluster.

              1. Anonymous Coward
                Anonymous Coward

                Again, you are quoting wonderland NASCAR benchmark numbers, whereas I am honestly telling people what they can expect if they turn on an XIV with no tuning and a bunch of random IO in a real world situation. Yes, if you pick the ideal OLTP workload for your storage system and configure it perfectly for that workload (assuming no new workload will be added), you can get insane IOPS levels. I am sure you could do the same thing with XIV. I just don't see any value in saying that a Ford 500 does 250 miles per hour when you are never going to be able to do, or need to do, 250 miles per hour in an actual Ford 500.

                Find me an in-production, run of the mill VMAX or 3PAR that is running at 450,000 IOPS with a reasonably short response time (sub 15 ms). I know an IT manager who replaced 2 PB of random IO data running on DMX-4s - with 20ms response times and under 15,000 IOPS - with XIV. The fact that in some lab condition with perfectly uniform IO those DMX-4s run at 500,000 IOPS didn't really help them. They were not running a testing lab.

                1. Anonymous Coward
                  Anonymous Coward

                  So where is XIV's SPC-1 number? IBM are more than happy to test the rest of the portfolio; their lack of a number means we have to accept your word on performance (the word of someone who mistakes 7.2K drives for 15K). As I said, it's not about all out performance, it's about being able to scale performance to suit changing requirements. With XIV, once you hit the performance ceiling you need another one, regardless of the free capacity.

                  15,000 IOps over 2PB sounds more like an archive, and just how many XIVs were required? Thirteen would look to be the minimum, and what about the power consumption for all of those 195 nodes? It sounds like he was sold a pup, with each XIV providing slightly more than 1,000 IOps per 42U rack.

                  1. Anonymous Coward
                    Anonymous Coward

                    No, you shouldn't accept anyone's benchmarks... IBM, EMC, NetApp, HDS, etc. It is a silly game that only people working for storage vendors reference. Every CIO knows, as referenced in my "what to do if a vendor references a benchmark" comments above, that the IOPS performance in a benchmark will bear no resemblance to what they are likely to see in their environment. The only way to get a meaningful performance number is to do a PoC.

                    I would assume IBM, and every other storage vendor, only agrees to benchmarks because they would get FUDed to death with "not having an SPC-1" if they did not. Since EMC, HDS, HP, etc are unlikely to buy an XIV, and since no CIO knows what SPC-1 is (and those who do know it is of no help in determining the performance they are likely to see), IBM would probably rather have people working on PoCs than fighting bragging rights battles against EMC and the other storage vendors - battles that are useless in real world situations.

                    "15,000 IOps over 2PB's, sounds more like an archive, and just how many XIVs were required ?"

                    Yes, it was a large global file system for pulling random docs at a large retailer. A standard situation where IOPS don't mean much, as getting a pdf on some product isn't a race against the clock, but simple management and planning of all of those new file shares were paramount. The DMXs were running at snail speed because people just kept throwing additional disk at them, slowing the whole SAN down, because no one wanted to take on the expense or create a capital budget for new frames. I don't know the exact number of XIVs, but 20 something frames. Yes, they were not all part of the same logical system (they had XIV1, XIV2, XIV3, etc managed from the same console), but they used SONAS, so they were all part of the same global parallel file system.

              2. Anonymous Coward
                Anonymous Coward

                Benchmarks

                I have always thought CIOs should handle anyone that brings up a benchmark by telling the vendor, "Sounds great. How about this: if your OLTP benchmark for this system mirrors our actual performance, we will buy the system at full list price, but if you do not meet the benchmark within a 10% margin of error, the system is free?" Stunned silence.

                1. Matt Bryant Silver badge
                  Pirate

                  Re: Benchmarks

                  "I have always thought CIOs should handle anyone that brings up a benchmark...." Wunderburp is actually on the right path here (for once) - the only benchmark that counts is how the kit performs in your environment, with your data and your apps stack. Anything else is just a vague indication of performance.

                  But Wunderburp should also know he's actually just repeating a very old IBM sales play, which I'm not really surprised at, seeing as he often just seems to repeat IBM sales brochures. Indeed, IBM used this old play with the Sharks - "have it free for a year, if it doesn't work we'll take it back." They also tried it with us with XIV a few years back. The trick is to get the customer/sucker to load all their mission-critical data on the array and get it into production, because the hope is it will then be too painful for them to remove from use and they will swallow the poor performance and other limitations. We saw this coming when they tried it with a pair of Sharks, and we simply used them for backups and archives - easy to back off at the end of the free year! I still smile at the memory of the IBM salesgrunt's face when we told him we'd rather not keep the Sharks.

                2. Anonymous Coward
                  Anonymous Coward

                  Hang on, you're banging on about benchmarks being invalid, but it's you who started the performance discussion by stating XIV had the much superior architecture and would smoke 3PAR amongst others. It's apparent from your posts you're not equipped to make those claims as you don't grasp the basics of storage performance or understand the alternative architectures you are attempting to slate.

                  1. Anonymous Coward
                    Anonymous Coward

                    "Hang on, you're banging on about benchmarks being invalid, but it's you who started the performance discussion by stating XIV had the much superior architecture and would smoke 3PAR amongst others. It's apparent from your posts you're not equipped to make those claims as you don't grasp the basics of storage performance or understand the alternative architectures you are attempting to slate."

                    You are talking about benchmarks, whereas I am talking about what happens in the real world. In benchmark land, VMAX and probably 3PAR will outperform XIV. In a real world environment, it is likely XIV will outperform VMAX and 3PAR. No contradiction. That is because you can easily be running a VMAX, for instance, at far less than optimal levels, whereas it is virtually impossible to make any decisions or errors that will impact XIV's performance. For instance, in a real world environment, people may be at a point where they should really add another node to the VMAX or 3PAR matrix in order to maintain performance. As the cost of a node is high and the cost of continuing to add disk is low, they may just add disk and degrade performance because of budgetary issues. There are far more decisions to be made in allocating resources to hot or high priority workloads and objects in VMAX and 3PAR. If a company doesn't have a dedicated expert or someone with the time to constantly monitor the system, performance can get really whacked out really quickly (especially as all but the largest companies do not dedicate staff specifically to the SAN). XIV requires no SAN expertise or staffing; you can't screw it up.

                    Benchmarks are not "invalid" per se. They are just idealized versions of the systems on idealized versions of the workload. The Ford 500 at NASCAR really does go 250 miles per hour and you could really buy that car. If you have the budget to buy the ultimate configuration of the system and have a team of engineers spending hours tuning the system to every workload, it is possible. It is not that they are "invalid" it is that they are unrealistic and not at all cost effective.

                  2. Anonymous Coward
                    Anonymous Coward

                    The key point on performance is that XIV is always running at optimal performance. There are no decisions to make that would impact it either positively or negatively. You are claiming that people have options/flexibility with VMAX, 3PAR and DS8 (for that matter) which will enable them to surpass XIV's performance under optimal, or even best practice, settings. Absolutely true, assuming there is a knowledgeable SAN team in place. How often do storage environments get larger budgets after they go live, though? What is more likely: that someone will be given a budget for SSD, or that someone managing a SAN will be told to just add extra disk and volumes to the existing controller setup, because disk is relatively inexpensive whereas buying a new frame is costly, even though best practice recommends a new frame? The point is that, even though there is the possibility to do good things with the flexibility, most people use the flexibility to degrade storage performance for cost reasons. And that is absolutely valid; it might not matter if users get pdfs off of some file share 0.5 seconds faster (assuming disk IO was even the bottleneck). XIV takes a nice approach of making it impossible to degrade performance while adding volume to the pool. The nodes are relatively inexpensive, as they are all commodity parts, so adding CPU, memory and disk doesn't break anyone's budget. It is a nice approach for front line IT people to ensure that their environment will always be running at a reasonable level regardless of budget cuts and workaround plans.

                    If SAN admins run the company, VMAX, 3PAR and DS8 will always outperform XIV. If the CFO runs the company and doesn't care about SAN best practices, then XIV will frequently outperform other overburdened tier-one SANs.

  4. Anonymous Coward
    Anonymous Coward

    sorry your limitation is memory

    We would give you more disk drives if the bottleneck was storage, but your problem is memory.

    Sucks that you bought and migrated everything to 3PAR thinking you would double your VMs per footprint, but hey, this is HP sales, not the Magic Kingdom.

    If you want, we can buy you some $70 Disneyland tickets for your family.

    1. This post has been deleted by its author

    2. Matt Bryant Silver badge
      Happy

      Re: sorry your limitation is memory

      "....but you problem is memory...." More than likely, but those cheeky fornicators at VMware are licensing on memory footprint now, so we're interested in anything that makes the whole caboodle go faster without adding more RAM per server.

  5. Anonymous Coward
    Anonymous Coward

    Clustered Pairs > What were you smoking ?

    3PAR is a grid system: you can start with two controllers and go to 4, 6 or 8; it will scale performance linearly and continue to be managed as a single system. There is no concept of managing individual controllers or controller pairs in a 3PAR system; from a management perspective it's a single image. Thanks to the ASICs, 3PAR offloads most of the heavy lifting from the Intel CPUs that many other solutions are so reliant on to perform all data movement and processing.

    3PAR can wide stripe data over many more disks than XIV, which can only do 180 SATA disks (180 x ~80 IOps = 14,400 IOps). The 3PAR V800 can do 1,920 FC or SATA drives, with plenty of SSD thrown in as well if required (1,920 x ~200 IOps = 384,000 IOps), but actually, thanks to that grid architecture and the copious amounts of cache included, a single system did 450,000 on the SPC-1. Now where's the XIV SPC-1 result?
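
    As a back-of-envelope check of the spindle arithmetic above (in Python; the per-disk IOps figures are rules of thumb, not measured numbers):

    def raw_iops(n_disks, iops_per_disk):
        # Aggregate random IOps if every spindle stays busy, ignoring
        # cache effects and any RAID write penalty.
        return n_disks * iops_per_disk

    print(raw_iops(180, 80))    # XIV full rack of 7.2K drives -> 14400
    print(raw_iops(1920, 200))  # 3PAR V800 with 15K FC drives -> 384000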

    Even if XIV controllers, cache and RAID-X could work miracles, it's not in the same league. Yes, XIV has many nice modern features that make it a good solution for virtual infrastructure, as does 3PAR, but really that's where the comparison should end. As with all storage guarantees, it's a marketing campaign, so yes, there will be T&Cs attached, but as a vendor you'd have to be pretty confident in your ability to deliver before putting your head on the block.

    1. Anonymous Coward
      Anonymous Coward

      Re: Clustered Pairs > What were you smoking ?

      3PAR is a "grid" system, but it is a clustered pair grid, like VMAX or V7000. You can still over burden a controller pair despite the ability of the ASIC mesh to handle IO in tandem and connect controller pairs as disk is decoupled from CPU and cache. In XIV, each 12 disk module has its own CPU and 24 GB of memory with the similar feature to 3PAR of being able to handle IO across multiple controllers/modules. In 3PAR, you need to ensure that disk matches with controller performance to get whatever IOPS level you are looking to achieve. XIV is simpler as every time you add disk, you are also adding CPU and cache to the pool. That is why XIV gets faster every time you add more workload.

      XIV uses SAS 15k in 1TB or 2TB, not SATA.

      1. Anonymous Coward
        Anonymous Coward

        Re: Clustered Pairs > What were you smoking ?

        Wunderbar1, you haven't got a clue. 1 & 2TB 15K drives?

        1. Anonymous Coward
          Anonymous Coward

          Re: Clustered Pairs > What were you smoking ?

          "Wnderbar1, you haven't got a clue 1 & 2TB 15K drives ?"

          7.2k then... whatever, they are SAS, not SATA, drives.

          1. Anonymous Coward
            Anonymous Coward

            Re: Clustered Pairs > What were you smoking ?

            Whatever?? Yes, because the interface (SAS, FC, SATA), not the rotational speed, makes all the difference, doesn't it?

            We're talking IOps here, not bandwidth, so a 7.2K drive will go at most half as fast as a 15K drive for a given latency, regardless of the interface.
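
            For anyone who wants the arithmetic behind that, a rough single-disk model in Python (the average seek times are ballpark assumptions, not measured figures):

            def disk_iops(rpm, avg_seek_ms):
                # Ballpark random IOps: one IO per average seek plus half
                # a rotation; cache and queuing effects are ignored.
                half_rotation_ms = 0.5 * 60000.0 / rpm
                return 1000.0 / (avg_seek_ms + half_rotation_ms)

            print(round(disk_iops(7200, 8.5)))   # ~79 IOps for a 7.2K drive
            print(round(disk_iops(15000, 3.8)))  # ~172 IOps for a 15K drive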

            Which rather undermines your entire argument, unless you're still drinking the Kool-Aid and peddling more XIV magic.

            1. Anonymous Coward
              Anonymous Coward

              Re: Clustered Pairs > What were you smoking ?

              I just meant that I had the RPMs wrong, but the drives are still SAS not SATA. I wasn't intending to make any comments about the relative performance.

              Regardless, you are ignoring the concurrent parallel processing of IO across 15 controller units all at the same time. One 7.2k SAS disk is obviously going to have lower IOPS performance than one 15k drive, all latencies being equal, but we are not talking about 1 vs. 1. We are talking about all of the controllers and disk in XIV working against two, or at most eight, controllers.... Also, all latencies are not equal. XIV has a bunch of closely coupled Infiniband nodes. With the matrix systems there is a longer latency, possibly much longer, as the nodes are not in the same frame with Infiniband connections.

              It is the same reason that Google has such insane performance with a bunch of commodity disk on little x86 servers. They are all working in tandem.

              1. Anonymous Coward
                Anonymous Coward

                Re: Clustered Pairs > What were you smoking ?

                I'm sorry, you need to walk away now before you embarrass IBM further. You've already stated the box does ~50,000 IOps at 15ms, and to do that it needs 15 controllers running in parallel with a close coupled Infiniband architecture.

                Yet the dual controller IBM V7000 does better in the SPC-1 benchmark, and I'll bet it's much cheaper. The 3PAR F400, not the newest box on the block, blows away both with 4 controllers at ~90,000, and the 3PAR V800 smokes all comers with 8 controllers doing 450,000 IOps. All of which combined totally destroys your architectural argument.

                Why doesn't XIV have an SPC-1 benchmark? It has an SPC-2, which is based on sequential performance - pretty good, but not the sweet spot for most workloads, including VMware.

                All can also provide much greater choice in terms of disk type and capacity efficiency, supporting differing RAID levels etc, and all are very simple to use and provision. 3PAR has the same OS and management across all boxes, so if you can manage the smaller F-Class then you can also manage the V-Class. BTW, it's not really about achieving maximum performance; it's knowing you can if you need to, and being able to scale the platform with confidence.

                Nothing against XIV; it just needs to be pitched where it fits and not pretend to be something it isn't. Suggesting it smokes everything out there with absolutely no evidence is a silly place to start.

                1. Anonymous Coward
                  Anonymous Coward

                  Re: Clustered Pairs > What were you smoking ?

                  As I wrote above, you are comparing idealized benchmark tests, whereas I am writing about what I have seen happen in real world situations, with a bunch of random workloads and random IO and random scale requests being thrown at the system, without a team of design engineers creating the perfect test scenario so their system does well on a benchmark no one looks at except people working for storage companies. I am not saying that in some artificial lab environment XIV cannot pump out considerably more IOPS; but unless your business is running a storage test lab, that doesn't really matter to you.

                  I am not speaking for IBM. I am speaking for myself. IBM may be the last honest systems provider, one that isn't going to say its system will run at 450,000 IOPS with under 15ms latency until it actually understands your environment. There is absolutely no way that VMAX or 3PAR are going to perform at 450,000 IOPS with low ms response times in the standard user's environment. The only way to judge this is to do a PoC. IBM is pleased to do XIV PoCs in real world circumstances. If 3PAR and VMAX actually smoke XIV, then do a PoC and find out. I think what you will find is that all of these storage systems do more than enough IOPS to handle your workloads and provide a response time which more than meets your software provider's recommended specs. XIV requires no planning and virtually no admin, and is less costly than either of the other two.

                  1. Matt Bryant Silver badge
                    FAIL

                    Re: Re: Clustered Pairs > What were you smoking ?

                    ".......IBM maybe the last honest systems provider....." Comments like that just make you sound a bit naive, TBH. All the vendors FUD each other, and they all employ reams of marketting people to use clever language to bend the truth about their own products.

                    1. Anonymous Coward
                      Anonymous Coward

                      Re: Clustered Pairs > What were you smoking ?

                      True, IBM is not completely immune to the benchmark game; everyone does it. It is a question of magnitude. For instance, you don't see IBM releasing full page ads in the Wall Street Journal saying that Sun SuperCluster runs 8x Power 795 or 20x Superdome at half the cost, and getting in trouble with the Better Business Bureau, a la Sun-Oracle. Likewise with ZFS arrays supposedly delivering multiples of the performance of every other filer at a fraction of the cost. IBM doesn't release ads, to my knowledge, comparing their systems to some other system using benchmarks that are not at all indicative of what customers should expect to see in their environments. They play the benchmark game, as does everyone, because you have to if you don't want to get nonsense comments like "XIV has yet to release the SPC-1 benchmark!!" or some other benchmark of interest to no CIO and of relevance to no real world environment. Some use benchmarks as a defensive tool to shut other vendors up, and some use them as offensive tools, Oracle being the worst of all.

              2. Anonymous Coward
                Anonymous Coward

                Re: Clustered Pairs > What were you smoking ?

                Actually, with 3PAR the nodes are in the same frame; in fact, they're on the same copper traced backplane. You can't really get closer coupled or lower latency than that, not even with Infiniband. I think you're probably confusing array federation with local clustering.

  6. G Olson

    Pull your head out of your AIX

    Wunderbar1, it sounds like the only storage you have ever managed is IBM or some other legacy storage which has not been engineered recently. As for 3PAR ASICs - some may consider ASICs a limiting factor in implementation, but the improvement in performance is worth it.

    1. Anonymous Coward
      Anonymous Coward

      Re: Pull your head out of your AIX

      XIV is not legacy storage by any stretch. It was built five years ago by Moshe Yanai, the inventor of the EMC Symmetrix and of data deduplication tech. IBM bought the prototype from him a few years ago. See comments above and read up on XIV. As with most HP acquisitions, 3PAR was an attempt to catch up with IBM (XIV, in this case).

      1. Anonymous Coward
        Anonymous Coward

        Yup, we all understand XIV is not a legacy architecture, and yes, we all know the history with Moshe. HP didn't need to catch up with IBM XIV; they already had a very similar solution - commodity servers, grid based etc - they just aim that solution at a different area of the market. If you've only got a hammer, every problem needs a nail. If anything, I'd say V7000 is the more promising long term platform for IBM, as it has plenty of flexibility. Your gainsaying of other architectures and repeating the same old FUD won't make XIV go any faster or scale any bigger; unfortunately, you're comparing apples with oranges.

        1. Anonymous Coward
          Anonymous Coward

          See post above. XIV isn't going to work for every workload. If you need a million IOPS with very low response time, look at some SSD-laden mothership. If you need a petabyte of storage under one hood for some reason, XIV is not the answer. The usual complaints about XIV are basically "XIV isn't a Lamborghini." It isn't, but for 99% of the companies out there, XIV's scale and performance aren't an issue. It basically manages itself, it is less than half the cost of the other options, and it still gives you enterprise grade performance and enough scale to meet the vast majority of workload requirements.

          There is no FUD about 3PAR or any of the other systems. If you need tons of scale under one hood for whatever reason, 3PAR is better suited than XIV. If you need Formula 1 performance, it is not XIV... but not that many people have Formula 1 workloads.

          1. Anonymous Coward
            Anonymous Coward

            So it doesn't actually smoke anyone; it's a one trick pony which isn't too fast and doesn't really scale, but so long as your requirements never change and everything remains predictable, you should be OK?

            No real problem with that, so long as you accept the limitations upfront. The biggest issue is the lack of flexibility on performance; capacity is easy. Since it's a one size fits all architecture (at the moment), you're in a bad place if things change.

            Half the price of what? Not from the pricing I've seen, or the environmentals. XIV consumes massive amounts of power and subsequent cooling for the IOps being delivered, so you need to consider whole-life costs, and if things change, your only option is another XIV - and that's no longer half the price.

            1. Anonymous Coward
              Anonymous Coward

              Just the opposite. If you are running some banking application (or whatever) that has a very predictable IO profile and *needs* to run at half a million IOPS with 15ms response times, XIV is not for you, and you should look for a storage system that is ideally suited to that workload and tune it specifically for that workload with a specific disk/SSD intermix, path tuning, specifically allocated disk arms, etc.

              If you are in the 99% of storage environments with a bunch of random workloads and no idea what they might look like tomorrow (people just throwing new file data, OLTP data, web server data, etc at your SAN), XIV will perform well on any type of workload, but especially random IO workloads. It will, in all likelihood, outperform the other systems if you have all kinds of stuff on the system and no time to plan it all out in advance... or no ability to plan it, as you don't know what is coming into the SAN in a few months. XIV thrives in environments that are not predictable, because there are no planning or layout changes required. You just add more capacity to the pool and the system figures it out. Oftentimes XIV ends up being faster in real world environments because disk gets out of whack with CPU and cache, or the layout is planned poorly. In a lab environment, I don't doubt that VMAX would smoke XIV, but in a real world environment where stuff is happening and you don't have time to plan and admin a SAN, XIV can end up being faster because there is nothing to plan, tweak or screw up. Applications that require insanely high IOPS are rare. Companies that have hassles with SAN admin and planning requirements, and wish their SAN would just run at their ISV's specs without having to be messed around with, are everywhere.

              I have used XIV and VMAX (well, DMX), and the answer to the power/cooling questions is "meh." XIV can consume more power than traditional storage if you are running your traditional storage very well. Unless you are running tons of volume, it will probably be such a minor difference that no one is going to get bent out of shape one way or the other. It isn't going to move the needle that much.

              If you have seen XIV quotes that are more costly than VMAX or 3PAR with all of the software added in, I would talk to your rep. Generally, XIV should cost considerably less... unless you are getting a massive discount on VMAX/3PAR and little discount from IBM.

              1. Anonymous Coward
                Anonymous Coward

                #and tune it specifically for that workload with a specific disk/SSD intermix, path tuning, specifically allocated disk arms, etc.#

                Years ago we had to bother with those things. Nowadays we have page level tiering, and performance (IOPS) is a matter of setting policies. Now you can increase IOPS and capacity independently with SSD and SATA (NL-SAS).
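
                Roughly what "setting policies" amounts to under the hood - a toy Python sketch of page-level tiering, where the SSD tier size and the promotion rule are invented knobs, not any vendor's actual policy engine:

                from collections import defaultdict

                SSD_PAGES = 1000                 # assumed SSD tier size, in pages
                access_count = defaultdict(int)  # page_id -> IOs this interval
                tier = {}                        # page_id -> "ssd" or "nlsas"

                def record_io(page_id):
                    access_count[page_id] += 1

                def rebalance():
                    # Run periodically: the hottest pages are promoted to
                    # SSD, everything else is demoted to NL-SAS.
                    hottest = sorted(access_count, key=access_count.get, reverse=True)
                    for i, page in enumerate(hottest):
                        tier[page] = "ssd" if i < SSD_PAGES else "nlsas"
                    access_count.clear()  # start a fresh measurement interval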

                XIV could have been interesting for those customers looking for "just some storage", presenting itself as an alternative to, for example, HP EVA. The problem with that is the poor storage utilization: you get about 70 TB of useful storage out of an XIV with 180 TB of SATA.

                XIV seems like something that has just randomly popped up in IBM's storage portfolio. Where is its place between DS8000 and V7000?

                1. Anonymous Coward
                  Anonymous Coward

                  "Years ago we had to bother with those things. Nowadays we have page level tiering and performance(IOPS) is a matter of setting policies. Now you can increase IOPS and capacity independently with SSD and SATA(NL-SAS)."

                  You're still bothering with setting policies and creating business priority schedules, as well as creating different disk/SSD profiles for different workloads. As I mentioned, if you need insanely high performance, that's what you have to do. XIV is going to hit the vendor recommended IOPS and response times from disk, out of the box, for all major commercial ISVs. If you need to go far beyond that, then you need to mess with all of that SAN admin. Most people are just looking for the SAN IO performance not to be a major bottleneck. XIV will do that easily in all but the most extreme environments.

                  "an alternative to for example HP EVA"

                  EVA is kind of similar in that it breaks IO and volumes up across all of the disk, but it is still the old dual controller setup: you basically keep adding stuff to the array; every time you add stuff it gets slower; and when you can't take any more performance degradation, you buy a new frame. XIV is different in that you are adding IO, CPU and memory every time you add disk. Every time you add disk the system gets faster, and you don't have to mess with capacity planning.

                  "XIV seems like something that has just randomly popped up in IBMs storage portfolio.Where is its place between DS8000 and V7000?"

                  I don't know, ask the IBM product manager. I would say that XIV is storage for people that don't want to deal with storage. It is never going to embarrass you. It was originally positioned as mid-market, but has been elevated, as far as I can tell, to enterprise grade as IBM has found that what they thought would be mid-market performance was an improvement upon the high end systems due to people generally running their high end stuff at fractions of the optimal levels.

                  "Problem with that is the poor storage utilization. You get about 70 TB of useful storage out of an XIV with 180 TB SATA."

                  That's because the system takes care of redundancy with its internal mirroring architecture. Could you do it more efficiently with a roll-your-own setup? Possibly. Is it worth the time, cost and possible human error? Not for most people.
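
                  For what it's worth, a rough Python sketch of where the capacity goes under that mirroring scheme (the spare and metadata reservations are my assumptions, not published figures):

                  raw_tb = 180 * 1.0   # 180 x 1TB drives
                  spare_tb = 12 * 1.0  # capacity held back for rebuilds (assumed)
                  metadata = 0.07      # system metadata/overhead fraction (assumed)

                  # Mirroring halves whatever is left after spares and overhead.
                  usable_tb = (raw_tb - spare_tb) * (1 - metadata) / 2
                  print(round(usable_tb, 1))  # ~78 TB, near the ~70 TB quoted above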

                  1. Matt Bryant Silver badge
                    FAIL

                    Re: Wunderbar1 - You know nothing about EVAs, obviously.

                    ".....EVA is kind of similar in that it breaks IO and volumes up across all of the disk...." EVA smokes XIV, period, and I know that because I have tried both with the same workloads (Oracle, fileserving, Exchange and MS SQL). And XIV does not RAID stripe all the data across all the disks in the array like the EVA does, it breaks it into 1MB chunks and spreads those chunks as mirrors across two of the controller nodes. That also makes the XIV absolutely terrible on utilisation, and means it is still going to one controller at a time to get a chunk of data which then has to be built up into the overall data by collating all the chunks.

                    ".....every time you add stuff it gets slower...." Wrong again! With the EVA, more shelves and more disks mean more thinner stripes, so data is read faster from the simple "more spidles = more bandwidth". And let's not talk about the hours and hours you save administering the EVA compared to the XIV.

                    The XIV design is interesting, and it has its place in the large, cheap, semi-smart JBOD category, but it's being squeezed out of the SMB space by offerings like HP's P4000 kit (lots of cheap rack server nodes with internal disk, all pooled as one instance), which does the same thing better and cheaper. I'm not sure why IBM think they can push it up into the enterprise space; they really need to go back and make the XIV design work more like the P4000.

              2. Anonymous Coward
                Anonymous Coward

                Meh? See your post above talking about 2PB over 195 XIV nodes vs 2PB over 8 controllers on 3PAR. The power, cooling and datacentre floorspace savings will not be insignificant.

                For all the above, we still come back to 15 nodes, each with 12 SAS 7.2K disks = 180 disks, each of which is capable of ~80 IOps at a reasonable latency. Assuming any write overhead for RAID-X is masked by cache, that's 14,400 IOps total.

                Now let's be really generous and pretend XIV can double the IOps per disk through its grid architecture; that's still only 28,800 IOps flat out. So unless your workload is very cache friendly (and most OLTP and virtual environments aren't, as they're random), you're out of luck.

                BTW, 3PAR uses wide striping, so it wouldn't be considered a traditional architecture: the more you add, the faster the system gets as a whole. There's pretty much zero up front planning, as the system self manages; you just request capacity of a specific type. If it proves too fast or too slow, you can tune the whole thing to a different RAID level, stripe size or tier with no downtime and no disruption.

                The difference being I have options, whereas your only alternative is to buy another XIV and migrate data.

              3. This post has been deleted by its author

  7. Anonymous Coward
    Anonymous Coward

    A quick clarification on 3PAR's SPC-1 benchmark configuration:

    - RAID 10 to avoid RAID 5/6 write penalty

    - Instead of striping LUNs across nodes, each LUN was contained within a single node

    - Only 300GB/15K FC drives, probably short stroked, no auto-tiering, no thin provisioning

    I could drop a Prius from a B52 for speed, but I wouldn't tell people to buy one because its top speed was terminal velocity.

    1. Anonymous Coward
      Anonymous Coward

      I haven't seen 3PAR's SPC-1 config, but I would assume you are right. I have never seen a performance benchmark that wasn't short stroked. It is not unique to 3PAR; that is just how the benchmark game is played. If you actually run a realistic workload with a configuration that might be within the realm of possibility of someone ever buying, you will get killed. Benchmarks were supposed to be an apples to apples comparison of real world environments, but they are just the opposite.

      The truth is that if 3PAR, or any other vendor, ever heard one of their customers say that their SAN was performing at the benchmark result, they would assume the customer had messed up their performance reporting... because the results are so outlandish.

    2. Anonymous Coward
      Anonymous Coward

      I think what you'll find is that you've just described IBM's SPC-1 submission for the SVC and V7000 combo - you know, the one where they strapped together 17 different systems to try and upstage the 3PAR result achieved on a single system. I know IBM didn't use 300GB drives; that would be too generous - 146GB, I believe. See, they're more than happy to play the benchmark game with anything other than XIV. Funny, that.

      Yes, RAID 10 was used; it's all in the submission, and it is after all a speed test. And the choices for XIV RAID are what? Oh, that's right, RAID-X, which is a derivative of ........ RAID 10.

    3. Matt Bryant Silver badge
      Happy

      Everyone should drop a Prius from a B52, TBH.

  8. Man Mountain

    Wunderbar, you're embarrassing yourself and IBM

    Man, there has been some drivel posted on here. We all know benchmarks are exactly that - a flat out speed test - and vendors use a variety of tactics to maximise their scores. IBM in fact is normally a huge supporter of the SPC-1 and has benchmarked pretty much every other array in its portfolio - apart from the XIV, which is what speaks volumes. As for the 3PAR benchmark, it was pretty much out of the box (500 commands vs 12,000 commands for Hitachi's VSP, for example) and wasn't short stroked, if you look at the usable capacity figures in the submission. The impressive thing about the 3PAR benchmark is that it was achieved using a single array fully configured with 15k drives. It didn't rely on cobbling separate arrays together, or stacking them full of SSDs, or performance from cache. The best performance it could achieve was when it was full of drives - so that is predictable, real, disk-based performance. Look at the VSP submission, for example: their best score was with an array that was half full. So the assumption would be that any extra drives added beyond that didn't add any performance.

    The XIV bubble has burst. I am consistently meeting customers who say it's OK but it's not what it was claimed to be. It's not 'Tier 1 at Tier 3 pricing' as IBM claimed (I worked for IBM selling storage for a long time); it's fairly simple general purpose storage that is looking for a problem to solve. It's not performant or resilient enough for Tier 1, and it's not dense / cheap enough for lower tier storage. It's Tier 1.5, which could be claimed to be the best of both worlds but in reality is the best of neither.
