REVEALED: IBM's new DS3000-killing Storwise storage beast

IBM has an entry-level Storwize V3700 array coming that, we are told, effectively replaces the existing DS3500 array. IBM's Storwize V7000 is a new array with SVC SAN virtualisation capability, an XIV-style GUI and enterprise-class features (background here.) The DS3500 is a low-end array that is part of the DS8000-DS6000- …

COMMENTS

  1. Nate Amsden

    the same as this array you wrote about for China?

    same as this?

    http://www.theregister.co.uk/2012/08/23/ibm_v3500/

    the number is a bit different

  2. Nate Amsden

    404

    that PDF you link to has gone 404 too

    The requested URL /common/ssi/ecm/en/tsd03157gben/TSD03157GBEN.PDF was not found on this server.

  3. Michael Duke

    The info I have (and it is consistent with the V7000) is that the V3700 will be limited to 5 enclosures.

    So 120 2.5" disks OR 60 3.5" or a mix making 5 shelves

  4. Anonymous Coward
    Anonymous Coward

    This should be a hit, but I doubt it replaces the DS3500

    The DS3500 is low-end, economy, LSI-based SAS disk. This will be on a completely different level of functionality and, likely, performance from the DS3500. This is more the sort of array that should be able to take a bite out of NetApp, Compellent, and HP's menagerie of mid/low-end stuff.

    1. Adam White

      Re: This should be a hit, but I doubt it replaces the DS3500

      Depends on how they price it. If it's literally the same $/GB as a current DS35xx then it's an ideal replacement.
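
      The sum itself is trivial; with made-up list prices and capacities, purely to illustrate the comparison:

      # Illustrative $/GB check: prices and configs below are invented numbers.
      def dollars_per_gb(list_price, raw_capacity_gb):
          return list_price / raw_capacity_gb

      ds3500 = dollars_per_gb(15000, 12 * 600)   # hypothetical 12 x 600GB DS3500 config
      v3700 = dollars_per_gb(16000, 24 * 300)    # hypothetical 24 x 300GB V3700 config
      print("DS3500: $%.2f/GB, V3700: $%.2f/GB" % (ds3500, v3700))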

      IBM has too many storage product lines and needs to consolidate, and as Storwize is a paid-off development effort (and partially derived from SVC and XIV) it's a good candidate for the low-to-medium range versus Engenio. By artificially constraining the number of disks you can put in a V3700, IBM can preserve the market for more expensive offerings like the V7000.

    2. flashguy

      Re: This should be a hit, but I doubt it replaces the DS3500

      "will (likely) be on a completely different level of ... performance than DS3500." You may well be right that they keep the DS3500 for the low end, but I doubt the performance claim. In terms of performance, a DS3500 with half the disks and a throttled cache can outperform this proposed model's "big brother" the V7000 on throughput (see http://www.esg-global.com/lab-reports/ibm-storwize-v7000-real-world-mixed-workload-performance-in-vmware-environments/?keywords=v7000) and (http://www.esg-global.com/lab-reports/ibm-system-storage-ds3500-express-mixed-workload-performance-with-application-aware-data-management/?keywords=ds3500). Load it up with SSDs and a larger cache and the same amount of disks, and I wonder what the IOPs figures would look like.

  5. Dare to Think
    IT Angle

    Why do we actually still need a SAN?

    Another low end storage array from this or that vendor does not matter. Why do we actually still need a SAN?

    Let's face it: it's first and foremost a cumbersome, expensive method of providing storage to a server, as you need an expensive, dedicated network infrastructure (the SAN fabric) in addition to the IP network you already have.

    SAN is never fast - it cannot be, as the signal still needs to travel through the fabric from the server to the SAN array. Every direct-attached SAS/SATA disk beats the SAN.

    A SAN is a single point of failure and it introduces more points of failure, despite what the sales rep says. In HA implementations you have redundant servers, NICs, HBAs etc., plus Oracle RAC, Data Guard, GoldenGate, Solaris Cluster, PowerHA and whatnot. What do you have with the SAN? A bunch of arrays, fragmented over time into RAID 1, 10, 5, 50, 60, 0+1, 0+6 configs, getting mains from one UPS. Now, what if that UPS fails, or someone pulls all the in-use ports on the switch whilst the Oracle DB is in the middle of its quarter-end batch processing?

    For me a SAN is just another example of 'cos-we've-always-done-it-this-way. You're much better off with a few dozen HDDs, or better, PCIe SSDs in your server and OS virtualization inside.

    1. Anonymous Coward
      Anonymous Coward

      Re: Why do we actually still need a SAN?

      Internal disk/SSD can work well. Mainly people use SAN to share resources/increase utilization and for centralized management of data.... It does seem that people, and IT companies, are moving back to internal or DAS storage. As usual, people are pretending that this is some new concept that came out with HANA, but it is really the original model. Most people have always had an AS/400 (now System i) or two in their data center with internal disk that they don't even know about because it just works and is rock solid. These new integrated systems are basically trying to work their way back to the System i and the "minicomputer" generation ("mini" as compared to the mainframe).

    2. Matt Bryant Silver badge
      FAIL

      Re: Why do we actually still need a SAN?

      "..... Why do we actually still need a SAN?...." Because most storage gurus still hate sharing networks with the IP traffic. Partially it's a turf thing, as in "storage is my turf, network monkey, so I want my own switches/cables".

      "....as you not only need a expensive, dedicated network infrastructure (the SAN fabric) in addition to the IP network you already have....." Well, what actually happens is your chief networking monkey assures the board that the network can do FC over IP just fine, then they actually implement it and rapidly run out of bandwidth. So you end up putting in extra networking just for the FC-IP traffic, which turns out to be just as expensive and more hassle to manage than the old SAN.

      ".....SAN is never fast...." Sorry, but yes it is. Seeing as it has its own dedicated network it can be very fast, and faster fibre is on its way. We already have 16Gb FC and 32Gb is coming in 2014, so the product is far from dead.

      "....Every direct attached SAS/SATA disk beats the SAN....." Every direct-attach disk introduce a single point of failure for your data, and also massive inefficeincies in storage utilisation. If you don't have a SAN, when you want to share that ata with another server you have to copy it over what is probably already a swamped IP network.

      ".......A SAN is a single point of failure...." You don't know how to design a SAN. Please stop embarrassing yourself.

      1. This post has been deleted by its author

      2. Dare to Think
        IT Angle

        Re: Why do we actually still need a SAN?

        "most storage gurus still hate sharing networks with the IP traffic" - Well, they will have to sooner or later, with the increased deployment of converged HBA/Ethernet cards and 10GEth networks.

        "putting in extra networking just for the FC-IP traffic" - you don't need that, as the virtualized servers are using DAS storage. Thus, storage traffic is left within the server. Unless you want to replicate block devices or use Gluster etc., for which with you can use EtherChannel (cheap) or a 10GEth network (expensive).

        "rapidly run out of bandwidth" - Matt, the highest network spikes are usually on the proxy tier, the largest data movements are usually on the database tier. You put them on different networks anyway to satisfy IT security, and you don't do a weekly offline database backup at 10am on Tuesday morning, rather than the incremental online backup after 11pm every day. If you don't want to do that, it is better to leave the storage data stream within DAS. Strangely enough, you haven't mentioned Dataguard or SRDF over IP, which put load on the network already.

        "".....SAN is never fast...." Sorry, but yes it is. " - Matt, compare the IOPS and throughput of a direct attached SSD or SSD PCIs with what you get with a SAN. A PCI bus is simply is faster than the SAN fabric.

        "Every direct-attach disk introduce a single point of failure for your data, and also massive inefficiencies in storage utilisation." - Firstly, storage inefficiencies never went away because of the SAN, they were transferred to the storage arrays, as I pointed out in my original post. Thin provisioning is done on OS Virtualized servers, too, such as RHEV. Spindles still pop may the disk array be in the server or in the SAN. A storage strategy removes inefficiencies. And RAID 0,1,5,6,50,60, etc can also be done on the server itself, if you have a sufficient number of drives. Have a look at the Dell c6220 or the many 4U servers out there. In addition, DRBD can help in HA implementations.

        "You don't know how to design a SAN" - Matt, one storage controller serving, let's say, 100 servers, is a SPOF, even if you have two Cisco Nexus switches. The way around this is....another storage controller and array, preferably in another data centre. Let's compare the expense of two 4U x86 servers with two IBM XIVs, 4 Brocade switches, etc.

        Someone said "You never managed a few hundred servers, did you?" - Yes, I did. And I've seen a whole datacentre going down and weeks of restores/roll forwards because the SAN went pop. Strange, when it comes to security everyone is saying compartmentalization. Not so when it comes to resilience. Have a look at vSphere, RHEV, Canonical Landscape or Oracle Enterprise Manager for managing distributed systems.

        1. Matt Bryant Silver badge
          FAIL

          Re: Re: Why do we actually still need a SAN?

          ".....Well, they will have to sooner or later, with the increased deployment of converged HBA/Ethernet cards and 10GEth networks....." Why? As I mentioned, 16Gb FC is already here, and that's 16Gb with proper FC implementation, not 10GbE slowed down by having to do FCIP. 32Gb FC is coming soon. You might have suggested trunking lots of 10GbE pipes together, but then that chews into the number of 10GbE ports you can have per switch, and that isn't many unless you want to put a CISCO 6500 at the top of each rack.

          ".....you don't need that, as the virtualized servers are using DAS storage...." I see where you're going now. Ever heard of a product called Lefthand? It's now hp's P4000 range - virtualised storage based on standard x64 nodes acting in a clustered filesystem, serving clients by iSCSI and all working over 10GbE links, and usually using internal DAS (we'll ignore the blades and array VSA version for now). Only problem is that range is aimed very much below the SAN array level of beasts like the 3PARs, because it doesn't match the 3PARs in either scale or performance. Don't get me wrong, for SMBs and a lot of mid-level departmental applications such grid systems are great, especially as they integrate with products like vSphere and SRM, but for those enterprise applications they don't cut the mustard like a big SAN array.

          "....compare the IOPS and throughput of a direct attached SSD or SSD PCIs with what you get with a SAN...." You mean the IOPS from what, a 100GB card? How many of those do I need to scale up to get to my SAN array's capabilities? Local SSDs are great for application acceleration, not general storage.

          ".....one storage controller serving, let's say, 100 servers, is a SPOF...." Hooooboy, it really is a long time since you looked at arrays! Most designs hitting the market now have multiple controller node pairs, e.g., the top-of-the-range 3PARs have four pairs, and each node can hold multiple redundant cards for hosts and drives, so the whole idea of "one controller is a SPOF" is just so Nineties. Go away, update your knowledge, then come back.

          ".....And I've seen a whole datacentre going down and weeks of restores/roll forwards because the SAN went pop....." So we know you have TRIED to design a SAN before then, just you didn't do it very well. I suggest you employ someone with the right knowledge you so obviously not only lack but are too blinkered to see.

        2. Anonymous Coward
          Anonymous Coward

          Re: Why do we actually still need a SAN?

          What about Disaster Recovery? We use ONE method, synchronous/asynchronous replication, which covers all applications and operating systems. Monitoring is one simple script that checks the status of all the replication groups. And this method has been tested and has worked successfully for over a decade.
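
          The check itself is nothing clever; something along these lines, where the group names and the status lookup are placeholders for whatever the real array CLI reports:

          # Skeleton of the replication-group health check.
          # Group names and the status lookup are placeholders; the real script
          # parses the output of the array vendor's own CLI.
          import sys

          REPLICATION_GROUPS = ["erp_prod", "exchange", "oracle_qtr_end"]

          def group_status(group):
              """Stand-in for querying the array; returns the group's state."""
              return "consistent"

          failed = [g for g in REPLICATION_GROUPS if group_status(g) != "consistent"]
          if failed:
              print("replication broken:", ", ".join(failed))
              sys.exit(1)   # non-zero exit so the monitoring system raises an alert
          print("all replication groups consistent")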

          With DAS we would have to implement numerous replication regimes covering half a dozen operating systems and 100+ different applications. Even if it would work, which I doubt, the administration would be a nightmare.

          Your view on SAN vs DAS is hobbyist at best.

    3. Anonymous Coward
      Anonymous Coward

      Re: Why do we actually still need a SAN?

      "Every direct attached SAS/SATA disk beats the SAN" - Really? Any decent disk system with a cache, configured properly will drive more I/O at lower latency than your server's internal disk storage.

      "A SAN is a single point of failure" - No it isn't, its often fully redundant, the disk system often is too. Your internal storage often uses a single SAS adapter.

      "For me a SAN is just another example of 'cos-we've-always-done-it-this-way." - Actually we used to do it the way you want to, but then this millennium arrived.

      I'm convinced you are in fact trolling.

    4. Dave Hilling

      Re: Why do we actually still need a SAN?

      And if you need several hundred servers in 6-8 racks... tell me how exactly you do that with DAS. You can't; SAN is the only good option. Not everyone has those requirements, but we do, and SAN is our only option.

    5. Morten
      FAIL

      Re: Why do we actually still need a SAN?

      You never managed a few hundred servers, did you? SAN is cost-effective at larger scale, and it is faster than direct-attached in most cases because the storage you put on your SAN generally has hundreds of spindles and tiered storage with SSD. A SAN can easily be made redundant. Storage inside every server? Are you mad? How do you propose to manage that with thousands of servers?

      1. Anonymous Coward
        Anonymous Coward

        Re: Why do we actually still need a SAN?

        I agree with the pros for SAN. This question is bound to come up more and more because of Oracle telling everyone that an x86 server with flash as storage (Exadata) is the way to go, and because of Google's clustering architecture. Exadata is smoke and mirrors. Google's is not, but it is insanely complex.

  6. Diskcrash

    The writing was on the wall

    One of the reasons, if not the single biggest reason, for LSI to dump its Engenio division to NetApp was its inability to keep IBM happy with the DS product range. The DS units from Engenio are solid, reliable and well-performing systems with virtually no software integration. IBM constantly asked for, and was promised, numerous features and updates, but LSI senior management thought they could deliver something for nothing and as a result frustrated a major OEM customer.

    This move to an SVC-fronted system is just the natural evolution of IBM moving further away. Sure, they will continue to sell the Engenio systems as long as there is some profit there, but they would rather bring it all in-house.

    It appears that NetApp is taking some steps to provide resources to refresh the product line and to better market it to OEMs but they seem ambivalent about the product at times.

  7. M. B.

    Wondering...

    ...what else about this array will be cut down to make it fit into the DS3500 price range, if that's what they are trying to do? Are they cutting back on a couple of host ports? Are they cutting the controller cache? Are they cutting out some of the options, forcing people into the V7000 for certain functionality? I mean, they can probably lose 2 of the 4 Fibre Channel ports per controller, and they can maybe get away with cutting the RAM in half (can they get away with 4GB/node?), but that won't deliver large cost savings; RAM and 4-port HBAs are pretty cheap these days.

    Not that I doubt them; I just want to know what I'm giving up to get a V7000 with less capacity, aside from just "less capacity", which in and of itself won't lower the initial cost (unless there's a big markup on the V7000). As someone considering SAN options for a medium business/small enterprise, I'd be an idiot to ignore this product, but my local partners don't even know it exists yet.

  8. Anonymous Coward
    Anonymous Coward

    SVC equivalent

    That is a smart move if they can keep their own margins while matching the DS3500 on $/GB and price/performance, as Adam talked about. IBM is the master of complete-solution selling, so they have a ready-made market for any product they launch.

    Why isn't Engenio/NetApp pursuing its own OEM-able version of SVC? Didn't LSI buy StoreAge? I remember reading that it has wound down, but did NetApp get that IP? The V-Series is all well and good, but it is ONTAP-based and may be inherently inefficient on third-party SANs with some basic data management functionality. Isn't there an opportunity to do something new and unique? Maybe NetApp should OEM DataCore's offering to bundle with the Engenio portfolio (http://www.theregister.co.uk/2012/10/24/datacore_picture/) or use their VSA without the WAFL tax.

    1. M. B.

      Re: SVC equivalent

      I used the IBM equivalents, two DS4300 arrays with an N6040 front end, and it was... subpar. Manageability was fine, since I'm quite comfortable with Data ONTAP, but there was a weird timing issue between the arrays which would cause the DS4300s to disconnect from the IBM-branded NetApp at random (well, not random, but at an unknown interval). We were never able to solve it (read: IBM and NetApp couldn't figure it out, and we were forced to buy native N-series shelves to cover the capacity hit), so if I were front-ending any of the Engenio stuff at the moment, it wouldn't be with a NetApp unless they've specifically addressed this issue.

