Wait for the announcements next week at EMC World (if there are any). I asked about VSPEX Blue appliances with Hyper-V and ScaleIO and got some rather stunned "how-did-you-know" looks from the SEs.
Misleading, but not incorrect
The EF system is one of the most proven *platforms*...
The EF system "platform" is the same as the old Engenio product, so technically he may be correct but highly misleading.
There are more than 750,000 LSI Engenio arrays, IBM DS-series, Dell PowerVault systems, NetApp E and EF series, I think even Cray used the "platform" for some of their storage products. I don't see THAT as a stretch as a lot of folks bought into that platform in one form or another.
But as far as actual EF-series specific arrays shipped? I doubt they've hit 10,000. Didn't they only just recently offer dual-controller systems?
Re: In before the flame war.
So do I; I have not had a particularly good experience with Android to date (HTC One M7). CyanogenMod makes it much less bad.
...the difference between an "application delivery controller" and an "application delivery switch" like Brocade's own ADX (which I've always referred to as a load balancer because that's what it is, and virtual editions are available for use)?
Yes, it is pretty great. Not for every use case, but it does what it says on the tin and our customers are pretty pleased with it in 100% virtual environments.
Re: No mention of The Register web site going down yesterday
Name and shame the load balancer!
I can only hope this means tossing some unneeded management and fat-cat sales types out the window and not hacking away your top developers, big thinkers, or good hunters. That's not often the case, sadly. I like Citrix products myself, I've had good luck with VDI-in-a-box, XenDesktop, XenApp, and NetScaler. XenServer is what I use in my lab and it's been rock solid.
Re: leaving to PureStorage
Also, Pure is full of VC cash, so I imagine they can pony up some big signing bonuses for top talent. People like money!
Re: I don't get it.
Wait, that might not be a part of the plan...
Re: I don't get it.
I would like to see it with at least a 1080p screen and Broadwell U processors as well, probably the i5-5250U. That would make for a snappy lightweight with battery enough to last me a full day of poking around in client data centers.
You could have...
...taken the last two paragraphs, made them into a single paragraph, and skipped the rest of the article entirely. It's what we do - capture statistics with vendor sizing tools, determine the feature requirements, and build out a couple of BoMs for different vendors' products. Then we discuss performance, features, and cost comparisons with the client. At the end of the exercise they end up with a properly sized array that performs well and does what they need, regardless of whether it's all-flash, hybrid, or even all-disk (yes, I'm not going to recommend flash to a cost-sensitive SMB who needs 10TB but only 200 IOPS). We sleep well knowing they are going to be satisfied with the product, and we make money off the implementation and training services. Happy customers.
...for this is because it would butcher sales of XtremIO, which does its data reduction in-line like most modern AFAs.
The all-flash VNX is intended to provide the same feature set as the disk-based and hybrid VNX systems, so existing customers can have an all-flash option and manage it with the same toolset/skillset and the same features (RecoverPoint, VPLEX, ViPR, etc.).
I've seen even smaller VNX systems with MCx code and a bit of flash deliver some big IOPS with consistently low latency considering the requirements of who is buying them (and the latency of the old VNX with large flash pools), but they are intended to be a multi-purpose, multi-protocol Swiss Army knife of storage with disk foundations, not a hyper performance all-flash thoroughbred.
I started with a Brocade partner about 10 months ago and we've been fairly successful with the ICX line of switching - it seems very reliable and hits a good feature/price point - but I'm irked about the older code versions Support is recommending to us and customers. Get on with 8.x already!
The VDX switches are totally sweet, lots of use cases there. Performance looks to be as spec'd and the VCS fabric is very easy to configure and manage. Expensive, yes they are, but I approve.
The ADX is decent as well, not really NetScaler decent (cheaper though) but virtual ADX with Vyatta vRouters in highly virtualized multi-tenant environments actually works pretty well.
They seem to be a good company with good products on offer outside of just FC, just not big on fanfare and flash sadly. That might be all they are really lacking.
Missed as part of the announcement...
...but worth mentioning is they also now support:
- 4TB nearline drives;
- up to 6 expansion shelves;
- 25.6TB of SSD (16x 1.6TB) in the flash shelf.
(old: 3TB NL-SAS, 3 expansion shelves, 12 TB of SSD in the flash shelf)
Re: Who Takes DCIG Seriously?
That DCIG report is a load of garbage: any vendor who wanted to help them with the scoring was allowed to do so, and any vendor who didn't was docked points. It is based entirely on marketing literature and websites - they did not lab up any of the products they "evaluated".
I'm really liking the Yoga and X1 Carbon laptops my co-workers have. They love them, too. Seems to be a case of "make good products and watch them sell well". And the price points for decent desktops are getting us into all kinds of price-sensitive places where we weren't competitive before.
I got stuck with an HP Elitebook just as we started the Lenovo partnership :( Bad keyboard, even worse screen, thicker, heavier, way too much flex in the chassis, and flaky USB connectivity. No X1, this.
...was the first industry certification I ever achieved (NetWare 4.11). Unfortunately I was not able to secure work as I had just graduated from college at the time and no one wanted a 0 experience admin. Shortly thereafter, Windows NT 4.0 took over the marketplace in my area and it was easier to find work as an MCP supporting Windows Server after a few months of desktop experience than it was to find anyone looking to hire untried CNAs, but I did certainly prefer NetWare.
Re: Getting my hopes up!
Yeah, BE worked for us for many years through a few upgrades until things just randomly stopped working, then they never, ever wanted to work again. Having NDMP break when all 15TB of your files are stored on a NetApp is hugely frustrating - went from 4GB/s down to 400MB/s by having the backup server mount the shares as network drives :-/ Symantec could never get it working properly again.
I went through 3 versions of BE (12.0, 12.5, 2010) where Exchange GRT kept breaking. It would work, and then not. They would tell me it's my Exchange server. I would upgrade to the newest version of BE, and it would work again on the same Exchange server. Then suddenly it would break again, and it was the Exchange server's fault.
I had to buy NetWorker since Veeam couldn't do a thing for my physical servers (Win/Solaris/RHEL) or my filers or my Oracle DBs and I couldn't afford the price tag CommVault was quoting me. Same filer, same Exchange server, still running fine long after I left. Gross interface though, good Lord.
While I am glad...
...they are able to buy the kit they think they need, it's not exactly newsworthy, is it? That kind of hardware would amount to a PoC lab or side project for large enterprises. We have a customer who bought 16 loaded UCS chassis and a couple shmill in switches, can I submit an article about them?
As a past customer, I've seen Dell take a big dump on their competitors pricing on many occasions and win business as a result. They won a lot of mine for offering the features I want at lower prices (with good local partners which helps a lot).
My only concern is that with reduced margins comes reduced R&D spending. Although as a private company, they can dictate their margins with no regard for Wall Street. We'll see how the strategy plays out over the course of the year.
Re: VMware a positive or a negative?
Cisco is actually quite big in the OpenStack field and uses Inktank on RHEL for storage as it is, so they don't necessarily need VMware at all. They are also ensuring the latest generation of products has full Hyper-V interop as well.
I am more expecting a Red Hat buy from Cisco though (now that Red Hat owns Inktank); that would have a significant effect on the quality of their Linux-based work.
...it's as easy as redirecting emc.com to cisco.com/storage.
Everyone else will have an overlap nightmare.
Re: Really Good Stuff
Agreed, the Sourcefire stuff is great. Adding it to the ASA line is a Good Thing. After that point, it's about training and education to keep it running properly. Hope to get my hands on this soon.
Re: Find me a 6TB solid state drive
The Compellent flash stuff works really well, having owned a hybrid system with flash and disk. I found I had to let the data settle when migrating to the array so as to not fill the SLC tier too quickly. Once the colder data has de-staged to lower tiers you just let all the new writes hit SLC and let the array sort out the placement as it cools. Typical flash array performance, sub-millisecond latency and ridiculous IOPS, plus some added efficiency through thin writes and de-staging to more cost-effective media.
I'm actually not a customer anymore (I resell HP, EMC, and Nimble primarily and there's plenty of bug notices to go around), but I was a customer so felt the pain firsthand when a NetApp originally configured as a gateway decided to ignore all of its natively connected shelves after an ONTAP upgrade. A DS4300 Turbo that would reboot randomly due to a bug in a watchdog timer. A remote InForm OS update which hung an F200 system, requiring a long drive and manual power down. An EqualLogic that wouldn't boot at all after a firmware upgrade, and then ran at half performance for two weeks before it got fixed. A nice, big V7000 system that could barely manage 20MB/s for iSCSI after an upgrade (I can assure you, that was plenty "disruptive" for my users).
Now when I look at the list above, that's a midrange NetApp, an Engenio-based IBM, a 3PAR, an EqualLogic, and a Storwize V7000. None of those are bad systems. I literally bet business on them being "not shit", but still there were bugs and odd behaviors. That is simply expected. It's why I always backed everything up to my Data Domain, then ran a clone job to send it all to my Spectra library before "non-disruptive" upgrades.
Now I DID miss the part about this being a disruptive upgrade (my bad), and that is definitely unfortunate. There's degrees of disruptiveness, from simply offlining both controllers at the same time to requiring complete evacuation. I would be pretty pissed about that myself especially if I had nowhere else to put my data and I absolutely had to, risking data integrity otherwise. I'd even be pissed if I just had to shut everything down to reboot a controller pair.
...someone blogged something about a rumored problem with an upgrade that's still in beta? I'd be a whole hell of a lot more concerned if it was production code, but according to EMC this is not the case. Sounds like a bunch of FUD to me (at least until XtremIOs start wrecking client data in the field post-upgrade).
For the record, I've seen just about everything shit the bed at least once during a supposed "non-disruptive" upgrade. Good backups are everything. Snapshots don't count.
Re: Not my GMail password
I was concerned as well when I saw this. I had a couple "failed login attempts" from somewhere in northern Nevada a few months back, so I used KeePass to generate a strong random password and enabled two-factor. Also did the same for Windows Live and Apple ID, and proceeded to generate random KeePass passwords for any sites using my Gmail account as a login. Now I just want my financial institutions to all offer the same.
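For anyone without KeePass to hand, the same kind of strong random password can be knocked out in a few lines of Python (a minimal sketch; the 24-character length and the character set are my assumptions, not anything KeePass-specific):

```python
import secrets
import string

def random_password(length=24):
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure source (the secrets module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

The important part is using `secrets` rather than `random`, since the latter is not suitable for anything security-related.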
I'm such a sheep...
...since I have a personal HTC One and a work iPhone 5, and when I compare the two I think to myself I do prefer the iPhone 5 in daily use even with its lower PPI screen, but I like the size of the HTC for reading docs a bit easier. It's basically exactly what I want, even though I'm not excited by it at all. Sigh.
My Anaconda don't want none
unless you got buns hun.
...an agreement with SimpliVity allows them the flexibility to lead with that option when it is the strongest one available, without stepping on the toes of partners like the EMC federation, NetApp, and Nimble. Outright buying them only creates a quagmire of corporate bullshit between existing partners, which I am sure they would rather avoid until they have a more accurate indication of how this agreement will pay off.
It's also worth noting that Cisco is an Inktank customer, running Ceph on UCS nodes for OpenStack storage. If they really wanted to make a move to shake things up and piss a lot of people off, they would buy Red Hat (who now owns Inktank), landing them an OS and hypervisor company with a scale-out disk storage offering to pair with their flash accelerators and a ton of open source know-how and resources to do pretty much whatever they want, extending the expertise into all of their existing products as well.
...is worthless. The last report was crap, this one is too.
Re: Most large companies are running at least two virtualisation platforms
I ran into that too at a previous job; Oracle Linux on Oracle VM results in cost savings that are hard to ignore. Couple that with Windows Datacenter licensing and ECI including System Center and the advancements in Hyper-V, and I'm seeing a number of companies start splitting their virtualization infrastructure into separate silos to save money.
IBM's old servers, EMC's old storage. Wonder when they'll start reselling Cisco 3560s and Brocade 200Es?
That said, the 5100 was pretty popular with telcos in distributed offices. I saw a bunch of them in that role and as far as I know, they are still there. Most people buying them for general usage were pushed towards the 5300 from what I've been told so they could have the option for more FAST cache and file services down the road.
...a "channel partner" myself, and when I was a customer, I hated guys like me. I would usually deal directly with vendor SEs to avoid having partner sales teams in my boardroom pretending to know more about my IT challenges than me.
...it depends on where you are located. As a customer of Dell, IBM, and HP, my preference was to deal with Dell ProSupport above the other two. I never got someone in a "faraway land" and issues were always resolved quickly and with less hassle. HP was easily second so long as you got a decent level one tech with a... less noticeable accent. IBM was always a gong show (how many times do I need to speak to fulfillment on a single issue?), no matter who you were speaking with.
As a partner who leads with HP and provides the break-fix in our geographic location, I can tell you our customers first level experience results in regular complaints to our account managers but second level and on-site guys are easily meeting the SLA requirements and doing a good job overall.
F@ck you, cancer
That is all. Go Watson!
I'm not surprised...
...at how well they are doing after taking a close-up, under-the-covers look at their tech. Nimble came in and put one through the wringer for us and the experienced storage guys walked away quite impressed. That CS210 screams, even under the very unfavorable conditions we subjected it to. Not bad for entry-level. The economics hit a sweet spot as well, but they will run into a problem if they don't turn a profit eventually.
Still curious if/when someone is going to make the first move towards buying one of these start-ups.
Re: Was hoping...
Nice. There are 6 of us going through NIOP training tomorrow and Nimble is selling exceedingly well for us - almost every storage discussion becomes a Nimble discussion at some point, if only for a few minutes. We have a number of customers who want to stick with FC after committing to fabric upgrades, and a few others who are growing in terms of cold data (IOPS don't vary much but the capacity requirements would put them at or over the limits of their current arrays).
...to see the inclusion of Fibre Channel HBAs. I've heard "it's coming soon", but that's about it. At least the midrange stuff now has additional front-end ports. I'd still like to see them scale up higher in terms of disk shelves; it seems dumb to arbitrarily limit capacity to 3 expansion enclosures when there's nothing really limiting them architecturally.
We have several customers taking a cloud-first approach to core infrastructure services, it depends on the level of confidence they have in their service providers (including us). Some of them are so cloud-focused they are even buying cloud-managed networking/wireless/security and getting someone else to manage that for them as well. We've got a lot of businesses on our hosted Lync service and have had great results and growth in that area. Putting VoIP in the cloud is a big commitment, but it seems to be working out well.
A lot of it boils down to cost. I can put 10 medium instances of Windows Server or RHEL in our cloud for 5 years for less than the cost of a single server with local disks (never mind licensing etc). Since most of our smaller customers are running less than 10 distinct workloads (and simply using hosted services for common workloads like email and the aforementioned collaboration), many of them don't even bother with a server at all.
We've been following the trend...
...as well, since we're a premier partner. We seem to be losing a lot of disk-based business when we lead with HP. It's not that the deals aren't there, we just aren't winning them with HP. I'm not sure if it's a mindset thing or what. Might be a couple good quarters around the corner, but the past two have been pretty bad. Even traditional all-HP shops are taking the time to shop around and it hasn't been good for our high-end storage practice (and devastating on the low margin SME stuff).
There are at least a few VMware SEs mentioning that to customers scoping storage as well. Since we don't sell NetApp or HDS and EQL is our number 2 or 3 option for iSCSI, we're biting it occasionally when that question comes up. I swear there's another partner out there sabotaging us (good for them, I'd do it too). Other vendors' promises are just that, and most customers with any sense stopped believing those a long time ago (even though we lead with them usually).
The funny thing is...
...most partners aren't even allowed to sell NSX services or support yet. We're a large regional Premier partner and we're only just starting to get our partner briefings, and there is some talk of training plans in the next 6 months. We're considering coupling NSX with Brocade VCS for a large data center build, yet we still have to wait our turn. Only select nationals with PSAs are actually permitted to sell the NSX product and services around it. So most people who say they've worked with it are full of shit, at least around these parts.
VMware is playing this really close to their chest right now, way too close to tell what impact this is going to have on the industry. Everything is pure speculation at this point. There is one large client that was looking for a significant amount of work to be done on their freshly implemented NSX environment around scripting and monitoring that is currently unfilled simply because no one knows the product yet.
@Tokoloshe: We're expecting that, based on what our PSE discussed. Some things are best left to ASICs, and when we pushed for more details around that statement and the impact of extensive ACLs and routing and load balancing configurations we didn't get very far.
Re: Smells like copy-protections
I think there are some advantages to the per-core billing model.
I worked for a company that used the Oracle Database Appliance to drive a RAC cluster. It was pretty simple for me to take a look in my storage management tool and server monitoring tool to turn around and tell Oracle exactly what size and amount of disk I/O and CPU/RAM utilization we were driving.
Their proposal had us running Oracle VM on the ODA hosts and running the RAC nodes as VMs on the hosts (which is fully supported and gets around their restrictive virtualization licensing). It ended up being a significant savings for the company and upgrades were dead simple (just add resources to the VMs as needed). Since Oracle provides Oracle VM appliances for many of their applications, provisioning new applications was a snap.
YMMV of course. Worked well for that company though.
I wonder if...
...there will be any encyclopedias or scientific periodicals in the huge collection (and if they will be searchable). I hate it when my son has a project and my wife's first instinct is to Google something and trust the contents in the first hit are factually accurate and not subject to bias. It would be nice to get him started down the path of proper research and use quality references that don't start with a "W".
Technical books, especially certification books, likely will not make the list since they are a good cash cow for their respective companies, but if they do make the list I will go straight to Amazon as soon as it's announced and buy a Kindle Paperwhite and a subscription. Fingers crossed on that.
"The Compellent array maxed out at around 6,000 IOPS but the Tegile Zebi hit 35,000, meaning more servers and users could be supported."
...sounds like someone either undersized or incorrectly configured the Compellent array, since a pair of SC8000 controllers with 64GB of RAM can do 6,000 8k (avg) random IOPS with 70%-ish reads on 24 drives - I know because I did it myself, with good response times. A Compellent will not "max out" at 6,000 IOPS, not even close.
For reference, the 6 SLC + 6 MLC flash/hybrid shelf is designed to sustain 77,000 IOPS with sub-millisecond latency.
So either someone screwed the pooch designing or installing, or alternately, someone is lying.
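The spindle arithmetic behind that claim is easy to check with a standard back-of-envelope sizing formula (a sketch only - it ignores controller cache, which in practice soaks up a large share of the I/O, and the per-drive IOPS and RAID write penalty figures are generic assumptions, not Compellent specifics):

```python
import math

def required_drives(target_iops, read_fraction, raid_write_penalty, per_drive_iops):
    """Estimate drive count for a random-I/O target, ignoring cache.

    Backend IOPS = reads + (writes * RAID write penalty), since each
    host write turns into multiple backend operations.
    """
    backend_iops = (target_iops * read_fraction
                    + target_iops * (1 - read_fraction) * raid_write_penalty)
    return math.ceil(backend_iops / per_drive_iops)

# 6,000 host IOPS, 70% reads, RAID 10 (write penalty 2), ~180 IOPS per 15K drive
print(required_drives(6000, 0.7, 2, 180))  # -> 44 drives before any cache hits
```

Cache hits and write coalescing are exactly why 24 drives can still hit that number in practice - which is the point: 6,000 IOPS is nowhere near a ceiling for that class of array.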
So long as it drives their margins down!
We have a couple 3PAR customers running Tier 1 workloads on their mesh-active 10k systems and know many large local orgs using cluster-mode NetApp to run their Tier 1 workloads. And by Tier 1, we're talking about utilities and hospitals and governments running hundreds to thousands of critical applications. Even with hardware failures or under extreme load they've all been fine (as long as they've been implemented properly; everything sucks with bad design).
I've been considering them both Tier 1 for a while now. Drives our EMC partner SE mad.
I've not heard...
...much about this company except one partner who went with an ISE box over a P4000 solution we priced out. They went through 2 of them and hours of support calls trying to get the thing working properly. Still not sure if they got things sorted out or not. To be fair, that was a few years ago (3-ish).
I doubt it...
...most of the deals I've been in leading with Cisco UCS have still gone to EMC or Nimble when it comes to storage, depending on customer requirements. Cisco account managers have only put Invicta in for special use cases.
They have been very aggressive with discounts for the VSAN nodes though and I too have heard about the Simplivity OEM talks/rumors. That combo plus ScaleIO through their BFFs at EMC would probably scratch any hyper-converged itch they have for the time being.
I still don't think they REALLY want to get too far down the path of being a distinct general purpose storage vendor. What I've seen locally is that a company buying Dell servers will buy Dell or EMC storage (or NetApp through someone else). HP buyers will buy HP or EMC storage (again, sometimes NetApp). Cisco UCS customers feel free to buy whatever they want, and we just try to encourage them to make it something we sell (all of the above, plus Nimble but not NetApp obviously). Cisco is happy because people jump into the Nexus product line as a result, they don't seem to care who wins the storage business.
I suppose we'll see. UCS is eating up the large enterprise/utility/government/cloud services market here right now, but these places are usually the Cisco-or-nothing types anyways (at least on networking/wireless/collab), and even then I can't see them jumping storage platforms.
I would nickname it the c*ntblock though.