194 posts • joined 26 Aug 2009
Re: Was hoping...
Nice. There are 6 of us going through NIOP training tomorrow and Nimble is selling exceedingly well for us - almost every storage discussion becomes a Nimble discussion at some point, if only for a few minutes. We have a number of customers who either want to stick with FC after committing to fabric upgrades and a few others who are growing in terms of cold data (IOPS don't vary much but the capacity requirements would put them at or over the limits of the current arrays).
...to see the inclusion of Fibre Channel HBAs. I've heard "it's coming soon", but that's about it. At least the midrange stuff now has additional front-end ports. I'd still like to see them scale higher in terms of disk shelves; it seems dumb to arbitrarily limit their capacity to 3 expansion enclosures when there's nothing really limiting them architecturally.
We have several customers taking a cloud-first approach to core infrastructure services; it depends on the level of confidence they have in their service providers (including us). Some of them are so cloud-focused they are even buying cloud-managed networking/wireless/security and getting someone else to manage that for them as well. We've got a lot of businesses on our hosted Lync service and have had great results and growth in that area. Putting VoIP in the cloud is a big commitment, but it seems to be working out well.
A lot of it boils down to cost. I can put 10 medium instances of Windows Server or RHEL in our cloud for 5 years for less than the cost of a single server with local disks (never mind licensing etc). Since most of our smaller customers are running less than 10 distinct workloads (and simply using hosted services for common workloads like email and the aforementioned collaboration), many of them don't even bother with a server at all.
We've been following the trend...
...as well, since we're a premier partner. We seem to be losing a lot of disk-based business when we lead with HP. It's not that the deals aren't there, we just aren't winning them with HP. I'm not sure if it's a mindset thing or what. Might be a couple good quarters around the corner, but the past two have been pretty bad. Even traditional all-HP shops are taking the time to shop around and it hasn't been good for our high-end storage practice (and devastating on the low margin SME stuff).
There are at least a few VMware SEs mentioning that to customers scoping storage as well. Since we don't sell NetApp or HDS and EQL is our number 2 or 3 option for iSCSI, we're biting it occasionally when that question comes up. I swear there's another partner out there sabotaging us (good for them, I'd do it too). Other vendors' promises are just that, and most customers with any sense stopped believing those a long time ago (even though we usually lead with them).
The funny thing is...
...most partners aren't even allowed to sell NSX services or support yet. We're a large regional Premier partner and we're only just starting to get our partner briefings, with some talk of training plans in the next 6 months. We're considering coupling NSX with Brocade VCS for a large data center build, yet we still have to wait our turn. Only select nationals with PSAs are actually permitted to sell the NSX product and services around it. So most people who say they've worked with it are full of shit, at least around these parts.
VMware is playing this really close to their chest right now, way too close to tell what impact it's going to have on the industry. Everything is pure speculation at this point. One large client is looking for a significant amount of scripting and monitoring work on their freshly implemented NSX environment, and the engagement is currently unfilled simply because no one knows the product yet.
@Tokoloshe: We're expecting that, based on what our PSE discussed. Some things are best left to ASICs, and when we pushed for more details around that statement and the impact of extensive ACLs and routing and load balancing configurations we didn't get very far.
Re: Smells like copy-protections
I think there are some advantages to the per-core billing model.
I worked for a company that used the Oracle Database Appliance to drive a RAC cluster. It was pretty simple for me to take a look in my storage management tool and server monitoring tool to turn around and tell Oracle exactly what size and amount of disk I/O and CPU/RAM utilization we were driving.
Their proposal had us running Oracle VM on the ODA hosts and running the RAC nodes as VMs on the hosts (which is fully supported and gets around their restrictive virtualization licensing). It ended up being a significant savings for the company and upgrades were dead simple (just add resources to the VMs as needed). Since Oracle provides Oracle VM appliances for many of their applications, provisioning new applications was a snap.
YMMV of course. Worked well for that company though.
I wonder if...
...there will be any encyclopedias or scientific periodicals in the huge collection (and if they will be searchable). I hate it when my son has a project and my wife's first instinct is to Google something and trust the contents in the first hit are factually accurate and not subject to bias. It would be nice to get him started down the path of proper research and use quality references that don't start with a "W".
Technical books, especially certification books, likely will not make the list since they are a good cash cow for their respective companies, but if they do make the list I will go straight to Amazon as soon as it's announced and buy a Kindle Paperwhite and a subscription. Fingers crossed on that.
"The Compellent array maxed out at around 6,000 IOPS but the Tegile Zebi hit 35,000, meaning more servers and users could be supported."
...sounds like someone either undersized or incorrectly configured the Compellent array, since a pair of SC8000 controllers with 64GB of RAM can do 6,000 8k (avg) random IOPS with 70%-ish reads on just 24 drives - I know because I did it myself. A Compellent will not "max out" at 6,000 IOPS, not even close.
For reference, the 6 SLC + 6 MLC flash/hybrid shelf is designed to sustain 77,000 IOPS with sub-millisecond latency.
So either someone screwed the pooch designing or installing, or alternately, someone is lying.
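For what it's worth, the arithmetic behind that claim is easy to sanity-check. A minimal sketch using the 6,000 IOPS / 24-drive figure from the post; the larger drive counts in the loop are purely illustrative, not any vendor's quoted maximums:

```python
# Effective front-end IOPS per spindle implied by the post's own numbers
measured_iops, drives = 6000, 24
per_drive = measured_iops / drives  # 250 effective IOPS per drive

# Scaling linearly (ignoring controller limits, cache, and tiering) to
# hypothetical larger drive counts shows why 6,000 is nowhere near a ceiling:
for n in (96, 240, 480):  # illustrative counts only
    print(n, "drives ->", int(per_drive * n), "IOPS (spindle-bound estimate)")
```

Real arrays won't scale perfectly linearly, but even this crude model makes a hard 6,000 IOPS ceiling implausible.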
So long as it drives their margins down!
We have a couple 3PAR customers running Tier 1 workloads on their mesh-active 10k systems and know many large local orgs using cluster-mode NetApp to run their Tier 1 workloads. And by Tier 1, we're talking about utilities and hospitals and governments running hundreds to thousands of critical applications. Even with hardware failures or under extreme load they've all been fine (as long as they've been implemented properly, everything sucks with bad design).
I've been considering them both Tier 1 for a while now. Drives our EMC partner SE mad.
I've not heard...
...much about this company except one partner who went with an ISE box over a P4000 solution we priced out. They went through 2 of them and hours of support calls trying to get the thing working properly. Still not sure if they got things sorted out or not. To be fair, that was a few years ago (3-ish).
I doubt it...
...most of the deals I've been in leading with Cisco UCS have still gone to EMC or Nimble when it comes to storage, depending on customer requirements. Cisco account managers have only put Invicta in for special use cases.
They have been very aggressive with discounts for the VSAN nodes though and I too have heard about the Simplivity OEM talks/rumors. That combo plus ScaleIO through their BFFs at EMC would probably scratch any hyper-converged itch they have for the time being.
I still don't think they REALLY want to get too far down the path of being a distinct general purpose storage vendor. What I've seen locally is that a company buying Dell servers will buy Dell or EMC storage (or NetApp through someone else). HP buyers will buy HP or EMC storage (again, sometimes NetApp). Cisco UCS customers feel free to buy whatever they want, and we just try to encourage them to make it something we sell (all of the above, plus Nimble but not NetApp obviously). Cisco is happy because people jump into the Nexus product line as a result, they don't seem to care who wins the storage business.
I suppose we'll see. UCS is eating up the large enterprise/utility/government/cloud services market here right now, but these places are usually the Cisco-or-nothing types anyways (at least on networking/wireless/collab), and even then I can't see them jumping storage platforms.
I would nickname it the c*ntblock though.
This kind of nonsense...
...is why I selected RHEL for a DMZ project involving a few dozen servers at my old employer. Sitting outside the managed environment meant sitting outside the visibility of most of our tools. No BS from Red Hat (here's how you license it, yes that's production support, yes that includes all this extra software, sure we'll give you a discount on JBoss), and the support was at the least slightly better than MS, and quite often much better.
The next year saw us replace aging Solaris boxes with RHEL on x86 for Oracle RAC, inexpensive, fully supported, and fast enough for our accounting folks to bring us cookies the first time they ran the monthly reporting.
My director was especially tickled by the amount of money NOT given to Oracle and Microsoft and the users were much better off for it. The yearly cost savings in licensing alone helped them justify two additional FTEs plus training for all my admins.
"Dell could have integrated its own server, storage and networking kit but has decided that it is better to use Nutanix as the software glue for its converged system offering."
Except Dell did integrate their own server/storage/networking kit with custom management software and they called it the Active System, orchestrated by the Active System Manager software. They still sell them, and they are owning the education market pretty hard in my area (as an HP and VCE/VSPEX partner, this makes us sadface).
Re: Wont be a hit until
But also, I'm interested in this one myself. I'm torn between this and a Lenovo Yoga as an "alternate device". I like having the tablet functionality with touch for drawing network diagrams or reading manuals (or watching Netflix), and being able to convert to something with a keyboard when I need to create documentation or enter commands into a CLI.
As it is, I'm using an HP Elitebook and iPad Mini to get things done and it's a bit cumbersome with two devices, plus smartphone.
It may offer protection from litigation from IBM...
...but I don't think they really expect it to protect them from EMC, given that EMC has presented evidence that data was removed from its internal systems by staffers who then left for Pure. That is not a matter of IP infringement, but rather a matter of IP theft (not yet proven in court, AFAIK).
I don't think this article is telling the full story, as indicated by Neo Darwin above. IBM is saying something slightly different than Pure about the patents (licensed to Pure, rather than owned by Pure). Pure may have been forced to do so by IBM after violating a patent, but is now protected by IBM's patent muscle in case anyone decides to go after Pure.
There is a special layer of hell reserved for patent lawyers.
The performance of the RAID-DP implementation is literally one of my favorite things about NetApp.
Re: Cisco leading in the VoIP market?
As a reseller, we've had great success with the Business Edition 6000 systems, even smaller companies are buying into them (~100 seats). Not so much below that, we tend to stick with Adtran in tiny environments (as opposed to CME).
...or not having Fibre Channel is irrelevant.
It doesn't matter how you connect to the storage platform, so long as the "how" meets the latency and bandwidth requirements of the customer and is supported across the stack. FlexPods, for example, use NFS over 10GbE for storage connectivity. They are plenty fast. The baby vBlock systems use iSCSI for connectivity. They easily meet the needs for the workloads they are intended to support.
The Nutanix strength is also its weakness, it scales in fixed increments. If those increments are generally in line with the business needs, then it makes a lot of sense. If you are supporting multiple applications which scale in different ways, it does not. When you deploy a data warehousing application on your converged infrastructure and you quickly need to scale up storage capacity, it's easy to do on a traditional converged system. Not so easy on hyper-converged, especially if you do not require the additional cluster capacity from a CPU and RAM perspective.
I'm in love with the idea of Nutanix for other types of applications which scale more linearly like VDI. The easier something can be, the better. For more flexibility (supporting multiple widely varying applications), I would tend to prefer more traditional converged platforms which allow for more granular scalability.
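A toy model of that fixed-increment problem, using a completely made-up node spec (20TB / 32 cores is an assumption for illustration, not any vendor's actual SKU):

```python
import math

# Hypothetical hyper-converged node spec, for illustration only
NODE_TB, NODE_CORES = 20, 32

def nodes_needed(extra_tb, extra_cores):
    """In a fixed-increment model you must add whole nodes, sized by
    whichever resource you need the most of."""
    return max(math.ceil(extra_tb / NODE_TB),
               math.ceil(extra_cores / NODE_CORES))

# Data-warehouse growth: lots of capacity, almost no extra compute needed
n = nodes_needed(extra_tb=100, extra_cores=4)
stranded_cores = n * NODE_CORES - 4
print(n, "nodes bought,", stranded_cores, "cores you didn't actually need")
```

On a traditional converged stack the same growth is just extra disk shelves; here the capacity requirement drags five nodes' worth of compute along with it.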
...is the reason they are tearing it up around here. If they are selling at a loss, then they need to be very explicit with customers as to what their plans are for profit. Growth is good, but if you aren't profitable there will be no money to reinvest in R&D and the product will stagnate.
That said, we resell HP and EMC as well as Nimble, and Nimble is outselling them both, combined, as of late.
A Cisco buy would have been a huge boost to their business, but the growing relationship between Cisco and Red Hat (after their Inktank acquisition) means they could deploy scale-out Ceph nodes on their C240 servers connected to a Nexus fabric, and leverage Invicta for flash acceleration. I don't know if Cisco really needs them any more. I guess we'll see if they decide to buy either of them (or both, that would piss off a lot of people and be hilarious for the industry in general).
...this comes to fruition, even as a check-box item it can only help them. More so if they can leverage their ASIC technology to accelerate the dedupe process, if it's as transparent (and effective) as the thin suite it'll be solid.
I wonder how a 2-node 7450 would then compare to a Pure FlashArray, considering the robust options and maturity of the InServ software and the availability of the 960GB SSD from HP (compared with Pure's 512GB modules).
Re: What is there to free ?
What I saw specifically was IBM shops looking for a solution that IBM couldn't provide themselves - the N-series stuff is way ahead of the DS-series stuff on features and functionality (and was often put in front of existing DS storage to extend ONTAP features to the legacy arrays). IBM would pitch N-series to those customers to keep them in the IBM camp.
The downside for IBM is that while IBM would often bring the first FAS into an environment, once the customer saw how much they were missing as an IBM customer rather than a NetApp customer, NetApp would sell the subsequent arrays. IBM didn't certify many versions of ONTAP and was often 6 months behind on software updates; products which became free for NetApp users were a pain to get hold of for IBM customers; the My AutoSupport site was not available to IBM customers (a big let-down); and on top of that, the off-hours support was terrible.
I could see the deal being lucrative for IBM in the beginning, being able to offer a scalable, high-performance multi-protocol storage array without losing customers to competitors, but once those customers started turning to NetApp for future purchases the writing was on the wall for the OEM deal (NetApp specifically supported SnapMirror from N-series to FAS for data migration and provided the licensing free of charge for that purpose).
I don't know how the Unified offering works out for Storwize customers but I've heard mostly good things on the block side (especially for FCP, with a few odd comments about poor iSCSI performance under VMware but that was from a P4000 pusher).
Re: Title Valuation?
When I saw the $450 price tag, my first thought was that I would have offered at least $500.
EDIT: They added "m" after the $450 literally two seconds ago. Dang.
...didn't do a very good job with the N-series. I had one and the support was miserable. 140TB in Production with top support and they wouldn't help with a failed ONTAP update after hours, hardware support only. They could never get the V-series license to work with their own DS4300 arrays (we had two of those as well). After a trio of massive failures (complete outages requiring the reboot of the entire storage stack) we were stuck with a bill for native disk shelves to replace the capacity of the DS4300's we couldn't use as advertised. Our account manager was kind enough to offer us a business card with "Sorry!" written on it.
The moment things started going to sh!t, IBM was in pushing the V7000, with no interest in trying to remediate the issues with the N-series. It was one of those things that you would have to remind salespeople of "You know you sell N-series, right?". The DS stuff would only get pushed to the smallest businesses, now with the V3700 pulling in the bottom end there is little point to continuing the relationship I suspect.
The brand recognition may have been nice (I was told IBM will sell the first one, then NetApp will sell all the rest) but I don't think they need it any more, to be honest. Not that it will help them, but I don't think they should be expecting much from IBM any more at this point.
I find it a bit...
...silly not to include the 3PAR 7450 and Compellent SLC/MLC AFA options simply because they are also sold with spinning disks in other guises. HP and Dell have dumped cash into optimizing those platforms for all-flash and they are marketed as all-flash options. So is the VNX-F 7600, come to think of it. They also offer a bit of a value proposition for existing HP/Dell/EMC customers around training, and many of the advanced array features some of the flash vendors are missing are already available in these models, which are built on a known base product. I've used them all in their spinning-disk forms and they're not lacking for features or performance capabilities.
I think they offer a ton of value in the AFA market, even if they are just flash-optimized versions of rust-spinners.
Out of curiosity I asked my contact who went down the Pure route how the whole thing turned out since I heard the "cheap disks" thing second hand.
Turns out Pure threw an agreement together whereby they would renew their support early and co-term the support for an additional shelf, and they'd get the shelf free because Pure screwed up. So they didn't replace anything but they did get a big spiff out of the deal.
Might have had something to do with their "fill in" array being a Nimble though (they are also very happy with it, lucky bastards).
I have seen...
...a situation where the system came nowhere near meeting the marketing claims as far as the storage efficiency numbers were concerned. They were told 30x, which got a good laugh out of everyone at first. After "data modelling" Pure said no less than 5.5x (not that they put it in writing though).
Actual result was 2.3x and not enough capacity in the array as it was originally sized, so the customer had to buy something more cost-effective to round out the capacity deficit (they needed about double the storage efficiency number to ingest everything with a bit of breathing room).
Unfortunate and very disappointing. My God is it fast, though! The CIO was pissed off and seeing red all the way up until all the comments on how amazing the ERP and CRM systems were running started to flow in.
I do believe that Pure ended up upgrading them to newer shelves at some point for dirt cheap dollars though. Never heard how that turned out.
Wow, imagine that!
Seriously, that's been a missing part of the formula for a while. I believe the technology existed in the DS8k series for a while but was sorely needed in the midrange. As soon as EMC, HP, or Dell heard storage tiering was a consideration in an RFP they would shred the Storwize on the lack of 3 tiers.
The thing of it was all 3 were doing it and claiming success, meanwhile only IBM was doing 2 tiers and claiming similar capacity efficiency through real time compression on fast disks. That made IBM the odd-man-out in competitive bids where tiering was important and I know it cost them business locally.
...34% of the total array cost from one vendor is in just software licenses/features (which are advertised in a manner that implies they're included with the base); that's the average of three customer quotes for the same model of device.
Vendor 2 works out to roughly 32% of the total array cost in just software bundles.
That's terrible, they should feel bad.
...software licensing. It's usually responsible for the sky-high quotes you see on the first round of pricing. I remember fondly asking our Dell rep what features were licensed with our first EqualLogic array and he said "well, all of it". Nimble is the same. I believe HP is the same with the StoreVirtual (née LeftHand) stuff.
I really don't have the stomach or the time for sorting through quotes to find out what licensed features or software need to be massaged down or out of the quote to hit price points. I especially hate per-spindle or per GB/TB costs thrown in on top of the array and software costs. It's what I hate most about dealing with EMC, NetApp, HP 3PAR, and many others. Then I show it to a customer and they walk away from the deal, or buy the Nimble option.
Here, I'm going to go into my quoting tools and work up a few things to show you how much of the cost is wasted on this stuff. No specifics, just some general percentages.
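As a sketch of the kind of breakdown described above, here's how the software share of a quote works out; every line item below is a made-up number for illustration, not real vendor pricing:

```python
# Entirely hypothetical quote breakdown (illustrative figures only)
quote = {
    "controllers":      48_000,
    "disk_shelves":     35_000,
    "support_3yr":      22_000,
    "software_bundles": 54_000,  # snapshots, replication, tiering licences, etc.
}

total = sum(quote.values())
software_pct = 100 * quote["software_bundles"] / total
print(f"software is {software_pct:.0f}% of the quote")
```

With numbers in this neighbourhood, a third of the cheque goes to licence keys before a single byte is stored.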
...is literally a VNX virtual appliance as far as I understand it. Designed so you can move workloads from virtualized arrays to bare metal and back as needed. That's what I'm pulling out of this. Hopefully they get the costs right (as in: free for the first TB or something).
Re: "To replace a server motherboard takes the best part of half a day..."
Yup, it's never as urgent to them as it is to the business affected. As long as they can barely meet their SLAs they are happy.
I wonder what...
...the rebuild time will be on those drives in the real world. 2 days? We probably won't see any real midrange or enterprise arrays running these until they've validated the crap out of them in their labs, but I wonder what Nimble could do with these and some of the latest-gen MLC modules coming out. XIV is another that could get a lot out of them with its rebuild performance and massive caches.
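A rough way to estimate that, assuming the rebuild is bounded by sustained write rate; the 100 MB/s figure is an optimistic assumption, not a measured number:

```python
# Back-of-envelope rebuild time: capacity / sustained rebuild rate.
# Real-world rebuilds are slower under host I/O, so treat this as a floor.
def rebuild_hours(capacity_tb, rebuild_mb_s):
    mb = capacity_tb * 1_000_000  # decimal TB, as drive vendors count it
    return mb / rebuild_mb_s / 3600

# A 10TB drive rebuilt at an (optimistic) sustained 100 MB/s:
print(round(rebuild_hours(10, 100)), "hours, before any host I/O contention")
```

Which is why big-drive arrays lean on distributed rebuilds and large caches rather than classic one-spare RAID rebuilds.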
...I guess not any more than most, but I'd have to pry DD Boost for RMAN from my DB team's cold, dead hands.
I just wish the data protection ecosystem wasn't so complex, so many products with a good deal of overlap.
Anyone hawking kit...
...should be able to arrange a demo of the product meeting the requirements, if not actually bringing you face-to-face with other clients running on the same platform. If they aren't willing to go the extra mile to actually show you the system doing what you want to see it do, I would question both the product and the partner.
Re: no EQL?
@Nate: No, Dell really doesn't want to sell EQL any more except to the installed base reluctant to migrate (they want new customers on Compellent). They are pricing the new SC4020 array to blow the EQL stuff out of the water. There is a roadmap for existing customers though which is nice, but it ends with replicating your EQLs to a shiny new Compellent. There is a fairly new gen of EQL so I wouldn't expect to see it die off for another several years but I don't think Dell would put it up in sales scenarios where they are better positioned to win with Compellent.
I also agree with your comment around how effectively useless this report is. And it really is useless, since the weighting of the categories will vary from one company to another. I resell some of those systems and for a large number of my customers, the winning product ends up being something other than what Gartner thinks is best based on their criteria.
This happens quarterly...
...so it's no surprise I'm seeing backwards graphs again this quarter.
...isn't in the list because the others all appear to be able to do true multi-controller configurations (with many caveats in some cases, Live Volume between Compellent heads seems to be nearly as functional as Cluster Mode).
But the gist of it is the EMC, Hitachi, HP, Huawei, Fujitsu, DDN, and Oracle arrays can all cluster across more than 2 controllers. I wouldn't really consider Oracle or the baby Hitachi given the others in this competition though. Probably not NetApp either, for the same reason Compellent doesn't fit.
...it's not unified, it still requires FS8600 NAS controllers to provide the file services. You can get the spec sheet from the Aussie Dell site.
Last I heard...
...Dell was letting go of a number of EqualLogic developers as part of their workforce reduction or realignment or whatever they're calling it. I do believe we are looking at the future of Dell storage here; maybe not now, but in a few years this may be the go-to product.
It looks like it may be the same Xyratex enclosure powering the 3PAR 7200 and IBM v7000 with more robust CPUs. 2 SAS ports and 4 FCP ports per controller, as well as 2 1GbE ports for replication and management.
If the FCP ports are on a PCIe card then there's no reason they wouldn't offer it with the quad-port 1GbE or dual-port 10GbE cards. Compellent works at a much more granular and efficient level than EqualLogic with its emphasis on the stripe over the RAID group and the tiny 512KB dynamic page over the 15MB size used by EqualLogic. I suspect EQL will remain in place for legacy customers but it wouldn't surprise me to see Dell try and move people over to the SC4020 as soon as possible.
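A quick illustration of why that page granularity matters, using the page sizes from the post in a simplified copy-on-write model; the 8 KiB random write size is an assumption for illustration:

```python
# Simplified model: for a small overwrite on a snapshotted volume, a
# page-based array must preserve/allocate a whole page, so the smaller
# the page, the less data gets dragged along with each little write.
KIB = 1024

def cow_amplification(write_bytes, page_bytes):
    return page_bytes / write_bytes

eql_page = 15 * 1024 * KIB  # ~15MB page (EqualLogic, per the post)
cml_page = 512 * KIB        # 512KB dynamic page (Compellent, per the post)
w = 8 * KIB                 # assumed small random write

print("EQL:", int(cow_amplification(w, eql_page)), "x data touched")
print("CML:", int(cow_amplification(w, cml_page)), "x data touched")
```

A crude model, obviously, but it shows why small-block workloads with snapshots favour the finer-grained page.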
I suspect we'll be hearing about EqualLogic to Compellent replication sometime in the very near future, and that will be a fairly sure sign that CML is Dell's vision for the future. (I had a few and I really did like them a lot).
...I would consciously do everything in my power to remove spanning tree from the data center network. It's really funny when STP has a hissy fit for some reason or another and your data center shits the bed for a minute. And by "funny" I mean "absolutely catastrophic".
From what I understand...
...IBM's N-series shipments aren't very significant as compared to the overall FAS shipments. We had one (an N6040) and our account manager admitted it wasn't something they push aggressively unless they know they can win with it (our ownership experience bears this out - what a nightmare).
...their age is starting to show. A few years back it was great stuff; the performance wasn't always the best but it was a solid Swiss Army knife product, you really could do just about anything you wanted on FAS storage arrays and DOT. I had a few of them myself and aside from some bad sizing which I inherited from before I joined the company, they were quite good.
The problem I saw was with their hybrid story at the time. Everyone else was doing tiering of some sort, EMC, Compellent and 3PAR (pre-acquisitions), Hitachi, IBM all had a tiering strategy to accelerate reads and writes. This meant that midrange and up storage buyers could use their existing arrays and start adding in the performance advantage of flash without paying huge costs, the array intelligence would move data to where it needed to be. NetApp offered Flash Cache cards (previously PAM cards) but they only accelerated reads. The only way to shoehorn flash into a FAS for write improvements was to put in all-flash aggregates and manually move workloads between them.
That meant bigger costs and manual effort compared to the competition. I hate both of those things. I had a friend in another organization with a VNX5300 add 2 drives for FAST caching when his array started slowing noticeably, and the performance improvement over his 50 spindles of disk was like 100% with latencies getting cut way down. 2 SSDs that popped right into free slots in their existing enclosures. That's not a big cost in the grand scheme of things. I remember the discussion I had with my then-director about whether we could do something similar (pre-Flash Pool) and the answer had a 6-digit price tag.
They needed to innovate more (and more often), their "no one wants tiering" strategy was a huge failure and now their products are trying to keep up with what everyone else has had for years.
I just started...
...as a consultant. I came from a job with a decent desktop, a netbook, an iPad, an iPhone, and a Mac Mini for use from home. Now I have a single HP laptop. I don't love it, if I'm honest: the screen is crap (1600x900 on a 14" panel, could be better) and there is no third mouse button for controlling scrolling or anything, but it does have a 256GB SSD and 8GB of RAM and the wireless has been rock solid, so it's not terrible.
They did give me an external 20" monitor which is only 1680x1050, but I still have no external keyboard and mouse. And the case they gave me is garbage. Oh well.
I do miss the flexibility in the old place, and the extra toys as well.
...someone mentioned the baby Compellent array due out. I guess that's that as far as confidentiality goes.
The universal answer in IT...
They bought the technology...
...to integrate it into the UCS B-series chassis and leverage the unified networking technology. They are flash storage nodes.
Flash storage nodes.
It's networked storage. It competes with other networked storage. Do these execs even know what business they're in? If you buy a storage product vendor and you connect your new toy to other devices you sell to pool the resources, you are now selling shared storage products.
I for one would like to meet these execs. I have a very large clock tower in London I would like to sell them.
I don't think...
...NetApp has a choice in this, the FlexPod solutions have been quite popular locally but NetApp sales on their own have sort of tanked, at least in this area. They won't abandon a strong option like FlexPod simply because Cisco is pushing their own flash agenda.
EMC could get a little pissy about it, but only as it relates to VSPEX - as far as the vBlocks go all the storage products are EMC and that appears to be set in stone.
I'm not seeing a really convincing case for Whiptail from my perspective, not yet anyways. Sharing the flash modules amongst many workloads and letting the storage array sort out placement over time is more appropriate and cost-effective than dedicated flash for individual workloads; I suspect this would be the case for most medium businesses and small enterprises, to whom flash is still fairly expensive.
It definitely feels like a niche product that most vendors won't be overly concerned about; it's not like a general purpose storage system, so I can't see it making that big of a splash.
Well, it's the BRIC countries, plus the next 13.