161 posts • joined 26 Aug 2009
I wonder what...
...the rebuild time will be on those drives in the real world. 2 days? We probably won't see any real midrange or enterprise arrays running these until they've validated the crap out of them in their labs, but I wonder what Nimble could do with these and some of the latest-gen MLC modules coming out. XIV is another that could get a lot out of them with its rebuild performance and massive caches.
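For fun, a back-of-envelope sketch of why I'm guessing days rather than hours. The 6TB capacity and 50MB/s sustained rebuild rate are my assumptions, not anyone's spec:

```python
# Back-of-envelope rebuild estimate; the 6 TB capacity and 50 MB/s
# sustained rebuild rate are assumptions, not vendor figures.
capacity_mb = 6 * 1_000_000      # 6 TB expressed in MB
rebuild_mb_per_s = 50

hours = capacity_mb / rebuild_mb_per_s / 3600
print(f"~{hours:.0f} hours, about {hours / 24:.1f} days")
```

Push the sustained rate down to 25MB/s under production load and you're closing in on 3 days per drive.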
...I guess not any more than most, but I'd have to pry DD Boost for RMAN from my DB team's cold, dead hands.
I just wish the data protection ecosystem wasn't so complex: so many products with a good deal of overlap.
Anyone hawking kit...
...should be able to arrange a demo of the product meeting the requirements, if not actually bringing you face-to-face with other clients running on the same platform. If they aren't willing to go the extra mile to actually show you the system doing what you want to see it do, I would question both the product and the partner.
Re: no EQL?
@Nate: No, Dell really doesn't want to sell EQL any more except to the installed base reluctant to migrate (they want new customers on Compellent). They are pricing the new SC4020 array to blow the EQL stuff out of the water. There is a roadmap for existing customers though which is nice, but it ends with replicating your EQLs to a shiny new Compellent. There is a fairly new gen of EQL so I wouldn't expect to see it die off for another several years but I don't think Dell would put it up in sales scenarios where they are better positioned to win with Compellent.
I also agree with your comment about how effectively useless this report is. And it really is useless, since the weighting of the categories will vary from one company to another. I resell some of those systems and for a large number of my customers, the winning product ends up being something other than what Gartner thinks is best based on their criteria.
This happens quarterly...
...so it's no surprise I'm seeing backwards graphs again this quarter.
...isn't in the list because the others all appear to be able to do true multi-controller configurations (with many caveats in some cases, Live Volume between Compellent heads seems to be nearly as functional as Cluster Mode).
But the gist of it is that the EMC, Hitachi, HP, Huawei, Fujitsu, DDN, and Oracle arrays can all cluster across more than 2 controllers. I wouldn't really consider Oracle or the baby Hitachi given the others in this competition, though. Probably not NetApp either, for the same reason Compellent doesn't fit.
...it's not unified, it still requires FS8600 NAS controllers to provide the file services. You can get the spec sheet from the Aussie Dell site.
Last I heard...
...Dell was letting go of a number of EqualLogic developers as part of their workforce reduction or realignment or whatever they're calling it. I do believe we are looking at the future of Dell storage here; maybe not now, but in a few years this may be the go-to product.
It looks like it may be the same Xyratex enclosure powering the 3PAR 7200 and IBM v7000 with more robust CPUs. 2 SAS ports and 4 FCP ports per controller, as well as 2 1GbE ports for replication and management.
If the FCP ports are on a PCIe card then there's no reason they wouldn't offer it with the quad-port 1GbE or dual-port 10GbE cards. Compellent works at a much more granular and efficient level than EqualLogic with its emphasis on the stripe over the RAID group and the tiny 512KB dynamic page over the 15MB size used by EqualLogic. I suspect EQL will remain in place for legacy customers but it wouldn't surprise me to see Dell try and move people over to the SC4020 as soon as possible.
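To illustrate why that page size gap matters for tiering: promoting one small hot block drags its whole containing page up to flash. A rough sketch with an assumed 4KB hot block (the page sizes are the ones quoted above, everything else is illustrative):

```python
# Rough illustration of tiering granularity; the 4 KB hot block is an
# assumed figure, the page sizes are the ones quoted in the comment.
hot_block_kb = 4

for name, page_kb in [("Compellent (512 KB page)", 512),
                      ("EqualLogic (15 MB page)", 15 * 1024)]:
    wasted = page_kb - hot_block_kb
    print(f"{name}: moves {page_kb} KB to promote {hot_block_kb} KB "
          f"({wasted} KB of cold data along for the ride)")
```

Same hot block, roughly 30x more cold data hauled into flash on the coarser page. That's flash capacity you paid for doing nothing.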
I suspect we'll be hearing about EqualLogic to Compellent replication sometime in the very near future, and that will be a fairly sure sign that CML is Dell's vision for the future. (I had a few and I really did like them a lot).
...I would consciously do everything in my power to remove spanning tree from the data center network. It's really funny when STP has a hissy fit for some reason or another and your data center shits the bed for a minute. And by "funny" I mean "absolutely catastrophic".
From what I understand...
...IBM's N-series shipments aren't very significant as compared to the overall FAS shipments. We had one (an N6040) and our account manager admitted it wasn't something they push aggressively unless they know they can win with it (our ownership experience bears this out - what a nightmare).
...their age is starting to show. A few years back it was great stuff; the performance wasn't always the best, but it was a solid Swiss Army knife product. You really could do just about anything you wanted on FAS storage arrays and DOT. I had a few of them myself and, aside from some bad sizing which I inherited from prior to my joining the company, they were quite good.
The problem I saw was with their hybrid story at the time. Everyone else was doing tiering of some sort, EMC, Compellent and 3PAR (pre-acquisitions), Hitachi, IBM all had a tiering strategy to accelerate reads and writes. This meant that midrange and up storage buyers could use their existing arrays and start adding in the performance advantage of flash without paying huge costs, the array intelligence would move data to where it needed to be. NetApp offered Flash Cache cards (previously PAM cards) but they only accelerated reads. The only way to shoehorn flash into a FAS for write improvements was to put in all-flash aggregates and manually move workloads between them.
That meant bigger costs and manual effort compared to the competition. I hate both of those things. I had a friend in another organization with a VNX5300 add 2 drives for FAST caching when his array started slowing noticeably, and the performance improvement over his 50 spindles of disk was something like 100%, with latencies getting cut way down. 2 SSDs that popped right into free slots in their existing enclosures. That's not a big cost in the grand scheme of things. I remember the discussion I had with my then-director about whether we could do something similar (pre-Flash Pool) and the answer had a 6-digit price tag.
They needed to innovate more (and more often), their "no one wants tiering" strategy was a huge failure and now their products are trying to keep up with what everyone else has had for years.
I just started...
...as a consultant. I came from a job with a decent desktop, a netbook, an iPad, an iPhone, and a Mac Mini for use from home. Now I have a single HP laptop. Don't love it if I'm honest, primarily the screen is crap (1600x900 on a 14" screen, could be better) and there is no third mouse button for controlling scrolling or anything, but it does have a 256GB SSD and 8GB of RAM and the wireless has been rock solid so it's not terrible.
They did give me an external 20" monitor which is only 1680x1050, but I still have no external keyboard and mouse. And the case they gave me is garbage. Oh well.
I do miss the flexibility in the old place, and the extra toys as well.
...someone mentioned the baby Compellent array due out. I guess that's that as far as confidentiality goes.
The universal answer in IT...
They bought the technology...
...to integrate it into the UCS B-series chassis and leverage the unified networking technology. They are flash storage nodes.
Flash storage nodes.
It's networked storage. It competes with other networked storage. Do these execs even know what business they're in? If you buy a storage product vendor and you connect your new toy to other devices you sell to pool the resources, you are now selling shared storage products.
I for one would like to meet these execs. I have a very large clock tower in London I would like to sell them.
I don't think...
...NetApp has a choice in this, the FlexPod solutions have been quite popular locally but NetApp sales on their own have sort of tanked, at least in this area. They won't abandon a strong option like FlexPod simply because Cisco is pushing their own flash agenda.
EMC could get a little pissy about it, but only as it relates to VSPEX - as far as the vBlocks go all the storage products are EMC and that appears to be set in stone.
I'm not seeing a really convincing case for Whiptail from my perspective, not yet anyways. Sharing the flash modules amongst many workloads and letting the storage array sort out placement over time is more appropriate and cost effective than dedicated flash for individual workloads, I suspect this would be the case for most medium businesses and small enterprises to whom flash is still fairly expensive.
It definitely feels like a niche product that most vendors won't be overly concerned about. It's not a general purpose storage system, so I can't see it making that big of a splash.
Well, it's the BRIC countries, plus the next 13.
It's an S4810...
...not an S4180. It's a 48-port 10GbE switch, after all.
Whether or not the move from purpose-built ASICs to software on merchant silicon is a wise one remains to be seen. Dell is being very software-centric in networking and storage, however. Will it pay off? HP is doing ASICs in their networking line still, and 3PAR is highly dependent on its ASIC technology to deliver consistently high performance when run flat-out.
Re: All credit to CISCO....
Interesting times. Possibly ViPR will act as the management piece in that scenario, possibly orchestrating data movement between flash-enabled VNX and all-flash Invicta arrays? Maybe even managing the movement onto XtremeSF cards?
You'd have the cards installed in specific hosts as per application requirements, have a large pool of Invicta flash available for all hosts to use as-needed, and back-end that with FAST-enabled VNX or VMAX storage. (I don't see why it has to be EMC but it does make sense with ViPR in the mix, FusionIO/Invicta/NetApp could work just as well as long as there was an orchestration engine working in there to tell arrays where to move which blocks).
A bit late to the party...
...but we have a lot of Dell equipment here. Off hand, we have something like 1600 desktops/laptops, 30 servers, 3 Compellent arrays, and 2 EqualLogic systems. The support on the data center gear from our perspective is solid. Copilot for Compellent is fantastic, currently my #1 favorite. EqualLogic and PowerEdge support have all been very good as well, always within the SLA of the support contract.
We have had problems with hardware bought in one country getting warrantied in another, I do wish they would address that issue.
I'm pretty happy with the kit I'm getting. Low failure rate and solid support, freebies here and there, and the Help Desk guys are liking the new ultrabooks, very popular with our travelers.
I've had the same Account Manager for 5 years, he was in the role before I got here. Good guy, picks up a lunch here and there and is always up to talk. If we want to go direct through Dell, no problem. If we want a VAR involved, no problem. References? Done. Site visits? Tell me when you're free.
Our SE is another good guy, sent me to Enterprise Forum last year and is always shooting me emails about new products and sending personal invites to Dell events just to save me the hassle of filling out an online form.
Yeah they've accidentally ordered a wrong IO card for me and the hardware DOES take too long to ship from the order date (a month, minimum) but a little planning and verification goes a long way.
IBM doesn't care to come around at all anymore after we had some serious issues with their kit. HP wants us to buy something first (specifically a 3PAR, it seems), THEN they'll start coming around. Dell and Cisco however (among a few others, Meru, FortiNet) seem to be genuinely interested in seeing us succeed. We're far from their biggest customers in this region but they're always excited to get in on the next "big project".
Sure glad my guys missed the axe; too bad there aren't more like them or I'm sure some of you guys would be singing praises. Good SEs and good account/sales guys teaming up to deliver products/services has had very positive results for us.
...look to be a way for Red Hat to further marginalize Oracle Linux?
I sure hope so.
...is awesome! Hoping for some exciting discoveries in 2014. My son and I will be on the back deck with his new telescope trying to make a few of our own (they're all exciting when you're 6).
The pricing wasn't nearly good enough...
...to beat Data Domain for us. EMC just mopped the floor in our backup RFP and took all the business for tape library, disk target, software, and services. The POC was flawless and the package did what they said it would. No one else enjoyed such results.
Now we're on to the storage front and EMC is off to an early lead with realistic sizing and reasonable performance claims, then adding data reduction and FAST goodies. Everyone else is basing their entire solutions around very optimistic performance expectations and data reduction technologies just to try and compete.
Considering that I need to live with the decision, good or bad, for the next 5 years or so at least, I'll take the workhorse general purpose storage array with lots of disk and flash and tons of expansion capability/capacity over the thoroughbred super arrays. I need reliable, predictable performance and capacity expansion increments based on real world data, not something based on best-case scenarios.
I suspect we'll be an EMC shop for backup and primary storage at some point in the future.
And by PS6xx0s, I mean the 61x0s and 60x0s models.
Also, it looks like the result was obtained from using 8 of the PS6210s models together, 100% read workload (on their web site) so "tuned for success".
Still, some good steps forward for this product line. I'm also interested in the performance improvement on the spinning disk models where cache is hit frequently; 16GB is a big jump.
Dell also has an all-flash Compellent option that was GA as of Storage Center 6.4. And there was always an all-flash EqualLogic in the PS6xx0s, but it certainly wasn't this capable. It would be interesting to see how they obtained that 1.2m IOPS result. Some people buy flash expecting sub-millisecond access times from their apps, but some of us could live quite happily at 2ms especially if it's reasonably consistent and means expanding in cheaper increments.
I wonder how it would compete as far as features and $/IOPS vs. the rest of the flash market? I suspect some folks will need to either spread FUD (15MB page size, how does Dell sleep at night!) or cut costs significantly.
Add in the Force 10 switching providing low latency 10GbE with DCB for lossless Ethernet and you've got a pretty stout storage backend/fabric that shouldn't break the bank (although I'm looking at 300k for a flashy CML, so it's still steep even with the Dell price points).
Not a bad story for Dell tbh, especially existing EQL customers looking at a flash requirement. Now to pull a multi-controller Compellent out of the hat for Enterprise Tier 1 customers.
...the dual-active controllers?
As someone who HAS lost a controller, I don't consider it a business option in the slightest without a fail-over capability. Performance is a good thing, but only after resiliency and availability are factored in.
The reality is that VNXe's and FAS2220's are pretty cheap these days, highly expandable, have great support, and perform well while delivering solid engineering, integration, and software features right out of the box. Sure, they don't offer the fluff of an overgrown home NAS box but they will do block and file storage reliably and you will have a big company there to catch you if you fall.
Re: That covers "non-endurance related failure"
That's the piece that is missing, isn't it? You may only have one non-endurance-related failure every 105 years, but you may have a very sudden 25 endurance-related failures every 5 years depending on the workload.
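Putting those two figures side by side (both straight from the numbers above, nothing else assumed):

```python
# Comparing the two failure modes using the figures in the comment:
# one non-endurance failure per 105 years vs. 25 endurance failures
# per 5 years under a heavy-write workload.
non_endurance_per_year = 1 / 105
endurance_per_year = 25 / 5

ratio = endurance_per_year / non_endurance_per_year
print(f"{ratio:.0f}x more endurance failures per year")
```

Three orders of magnitude apart, which is exactly why quoting only the non-endurance number is misleading.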
...has already responded to the all-flash arrays with the 3PAR 7450.
IBM's problem is glitz and glamour. Look at how dull and boring the product line is! I've never heard an IBM rep get excited about their products. HP practically froths at the mouth about how the 3PAR stuff is the juggernaut of the storage industry, leaving everything else nothing more than a pile of smoking hardware and red flashing LEDs.
Plus HP positively jumps on any opportunity to do a proof-of-concept and ram the point home (I tend to be very critical of HP evangelists having come from HP before, but the products are sound and that is proven out in the field).
I asked my IBM rep about doing a bake off with a Compellent for me since that's what the very trusted VAR would sell us and he didn't want to participate but I should consider them anyways because of <marketing spew and boring Redbooks>. If the VAR sold HP I guarantee they'd be in like a dirty shirt with their goodies and free lunches and such.
Plus, 3PAR is yellow. Point for HP. If the rest of their equipment was painted to match my server room would look like a bumble bee because yellow means it's awesome.
As for EMC, they own so much of the world's data (as in, it resides on their products) that it is very tough to get people out of the ecosystem. Not that you'd want to as a customer; if you are an EMC shop with existing storage to replace, buying the newest version based on what the SE sizes up for you will certainly not get you fired.
And NetApp is NetApp. Apparently disk is dead and NetApp is so last decade, but it looks like no one got the memo.
Sweet F#ck All, great name for a product! Don't know if I'd trust "sweet f#ck all RAID" though.
Dell also uses Brocade parts in their blade chassis and older versions of the Active System were sold with Brocade FC switches as well.
Mellanox won't come cheap and Oracle and IBM might be a bit pissy about that purchase as they both use Mellanox products as well.
Brocade only really works if it stands on its own as far as its core FC business goes. Its Ethernet business doesn't have the strength to sustain it. Maybe an HBA company and getting into the flash caching game?
...the author seems to have missed out on the fact that the new Cisco 6807-XL replaces the 6500E-series as Cisco's "core switch" and has massively improved thermals/power consumption and back plane performance.
That doesn't mean the 6807 is the king of all switches, but it wouldn't hurt to factor it in to the article since it's intended to replace the 6500 chassis (it even supports the currently available line cards intended for the 6500).
...is the number of X-Bricks that were rolled out in Limited Availability. Along with the 200 internal systems for training that would mean a total of 350 systems deployed in some form or another.
The 150 doesn't refer to the number of failures, but instead that of the 350 deployed systems, the failure rate seems high (without giving a specific number).
That could be 4, 10, or like 100. Unfortunately they didn't give us an actual number, just the number of units used to generate the sample size.
I was just at a solutions expo...
...and one of the Dell guys I know handed me one, said "I think you'll like this one", opened the screen and it woke to an Ubuntu screen. He gave me the password and I started pecking around. Extremely usable experience. Then he tapped the screen and all of a sudden Unity moved up a few rungs on my personal UI scale.
It's pretty sweet, I'll be honest. Glad Dell sees a market for a decent spec Linux ultrabook, even if it isn't everyone's favorite distro. At least the derivatives should all run well on it.
The Pure web site specifically mentions Active/Active on the newest controllers. I believe Active/Passive is referring to the older series.
I don't know about...
...all-flash datacenter in 5 years, but there is a good possibility of mostly-or-all-flash primary storage and greater use of in-host caching. I think you will see that become more and more common as flash costs come down and capacities go up. XtremIO primary plus Data Domain backed by Spectra Logic tape, for example. One for hot data, one for archive and hot backup, and one for long-term cold backup. It seems as though most vendors are developing similar strategies.
We are doing an RFP for a greenfield site and every single vendor is bringing some flash capabilities to the party (Dell, in fact, did go all-flash on primary storage). Workload-wise we don't need it but everyone is building it in to their platforms so when our workload patterns change in the future, which can be difficult to predict over 5 years, we will be positioned to deal with those changes and expand the use of flash as needed. Yeah, I'd love an all-flash storage array, but that's not always the most affordable thing for SME. Bring on the hybrid arrays with disk-to-tape backup, I say.
More than 5 years is hard to predict, a lot can change between now and then. There is already a lot of change in the storage industry right now and even worse, a lot of FUD in the marketplace. It's good times and sometimes the drama plays out like a soap opera, this is a great industry!
It's pretty cool...
...for the SMB space, especially considering you can buy a couple high horsepower Proliant servers with a bunch of NIC ports and some VMware Essentials licensing or Windows Server Datacenter, and share the built-in storage with all of the systems in the cluster. Good case to get some Procurve switches on heavy discount, too.
Create a small local datastore for the VSA, give it the rest of the disks in the system to share, cluster it with VSAs on the other nodes using the network RAID functionality, and enjoy the recent improvements to the tech.
I've tried it before in a lab environment and came away pretty impressed. It's really easy to use and works well. The real LeftHand stuff, especially 10GbE with fast drives, is very nifty (per-node expansion costs are a bit high though compared to adding a RAID pack or something like that but you do save a bit on software licensing).
Braver than I...
...as a Manitoban I prefer to keep my natural fur during the winter months, especially on the top of my head!
"Internet of Things", why is this a phrase
> Cisco came up with it to describe an everything-connected world, blame their marketing team
and why is the idea of it actually needed
> Internet-connected "things" means more ports sold, which is pretty important to a networking company
I've never been at work and thought "shit, if only I could check my fridge or microwave", maybe if I was a project manager I would but I am not.
> Nor have I, and I particularly hate it when someone buys a home surveillance camera which takes pictures based on movement and then emails them to peoples work addresses. Not everything needs to be Internet-connected or accessible 24/7 (but my boss loves changing the channel on his wife from his desk).
...about right (2.5x 2560Mb/s = 6400 Mb/s). Wiki backs that up. 12Gb is 3.75x the performance of Ultra320.
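Quick sanity check on that 3.75x figure, assuming 8b/10b encoding on the SAS side (so 80% of the line rate is usable payload):

```python
# Sanity-checking the 12Gb SAS vs. Ultra320 ratio. Assumes 8b/10b
# encoding on SAS, so usable payload is 80% of the 12 Gb/s line rate.
ultra320_mbit = 320 * 8          # Ultra320: 320 MB/s = 2560 Mb/s
sas12_usable_mbit = 12000 * 0.8  # 12 Gb/s line rate -> 9600 Mb/s usable

ratio_12g = sas12_usable_mbit / ultra320_mbit
print(f"12Gb SAS is roughly {ratio_12g:.2f}x Ultra320")
```

Which lands right on 3.75x.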
Load up a SAS bus with lots of SSD and load the controller up with front end 16Gb fiber channel and the bus itself may become oversubscribed under heavy load, further driving the requirements for better visibility, right from the application, on a VM, in a VM cluster, through an HBA, across the SAN fabric, into the front end of the SAN, through the write cache, then out the SAS bus to the disks or flash modules sitting on the back end.
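To make the oversubscription point concrete with made-up numbers: say 4x 16Gb FC host-facing ports feeding a single 4-lane 12Gb wide SAS port on the back end (encoding overhead ignored for simplicity; real usable rates are lower on both sides):

```python
# Hedged sketch of front-end vs. back-end bandwidth on a hypothetical
# config; port counts are assumptions, encoding overhead is ignored.
front_end_gbps = 4 * 16   # 4x 16 Gb FC ports = 64 Gb/s host-facing
back_end_gbps = 4 * 12    # one 4-lane 12 Gb wide SAS port = 48 Gb/s

oversub = front_end_gbps / back_end_gbps
print(f"Oversubscription: {oversub:.2f}:1")
```

Even this generous hypothetical is oversubscribed before you account for cache mirroring traffic sharing the same back end.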
Another one! How cute.
The orange is a nice touch.
...are agreements, even if you don't like them. I'd say at this point EMC is standing on some pretty strong footing here, especially with the evidence they've provided of people taking documentation with them on their way out the door.
Lots of excitement...
...there are a lot of cool things coming out of the startups, but when I think long and hard about it I really can't see myself wanting to move away from the big players in all-flash or hybrid-flash systems. As we're currently discussing a storage refresh and working with multiple VARs and manufacturers, we always seem to come back to EMC (VNX with FAST), HP (3PAR with AO/DO features), Dell (Compellent with Data Progression across SLC/MLC), IBM (Storwize with Easy Tier), and NetApp (FAS with Flash Cache and Flash Pools).
We always get ramped up by the startup storage vendors only to be let down when we really look under the covers, mostly because the products are designed to do one thing well. It sure makes for a boring experience but at least if you have the big guys in your corner and a local SE you can choke, they'll try not to f#ck it up too badly (and they don't mind picking up a lunch tab every other month or so). If a startup gets it wrong, they might not even be around long enough to take your anger out on them.
Maybe the next time we look at a storage refresh, one or two of these vendors will still be around and worth taking a look at but until then I'm going to run safely into the arms of the big guys even if they run train on my budget for maintenance and support services.
...it's actually the opposite: everyone is connected to Cisco 2960x switches, gigabit to every desktop. From there, uplinked to new stacks of 3850s carrying multiple 10GbE links back to the core, where users run train on the front end 1GbE ports of our NetApp filers. A funny thing happens between 3:30 and 5:00 every day as our engineering department tries to save all their drawings and assemblies and simulations. Some of those data sets are ~25GB :(
Should almost ask the networking guys to throttle everyone back to 100Mb.
...significant R&D investments in the short term and better communication about roadmaps and how far along things are (cough *ocarina* cough).
...more accurately the arrays can't "delete all data" but that someone could gain access to do so without authenticating.
It's not like there's a huge flaw in the array that results in data disappearing all of a sudden, this is no different than HP's slip up with the MSA storage arrays having a default account.
Proper segregation of management and production networks to isolate these management shells/GUIs should be best practice just about everywhere (although it's probably not, so maybe this actually will be a problem for a few customers).
I can't see how...
...people quitting to join an up-and-comer is "suspicious" in any way. Pure has tapped into the EMC resource pool and is likely making very attractive offers in their quest to push their platform to the next level. No big deal. The only reason it's suspicious is likely because these people are intimately familiar with EMC operations and possibly engineering as well, but you can't really "unlearn" anything. Some of those staff might see an opportunity to evolve the Flash Array product into what they envisioned for, say, VMAX, but can't/won't materialize for one reason or another. This is their clean slate. That's exciting stuff for established engineers.
There is a HUGE problem with EMCers taking EMC-confidential information with them on their way out the door, however. Not only is that almost definitely sue-into-the-ground worthy but some of that could be, oh I dunno, MY personal data. Yeah, I have a problem with that.
...bad data in this report. In a cursory examination I found things missing or improperly tabulated for a number of vendors (support scores being different when the metrics are the same? missing features that ARE supported?). DCIG did a terrible job with this report. As someone with no stake in any of the vendors (I'm a customer) I don't think I'll be using any of their analysis for my decision-making anytime soon. I've already harshly criticized it publicly on LinkedIn so I won't get into detail here, but the report is worthless.
Anyone hoping to use this to assist in decision-making should buck up and do a bake-off instead and get something of value out of their time instead of a sponsored report that's so poorly done I question the integrity of the testers/analysts.
...ridiculous vRAM licensing, I'm inclined to buy more licenses. With support.
...are clouding their judgement on this. They need to get away from the DS series stuff and focus on the Storwize and XIV platforms.
They've made smaller versions of the Storwize platform (v3700, v5000 alongside v7000), but in the other direction they are limited by form factor. IBM should put the software on a pair of x3650's with a pile of PCIe slots and dual sockets with RAM out the wazoo. They would be able to scale up much higher than the v7000. Easy Tier across multiple types of spinning disk. And then scale it all out across multiple pairs of these monster controllers with workload mobility and maybe automated performance load balancing.
Then they need to define the use cases for XIV much better. Right now, whenever IBM sales comes around they pitch it as a second- or third-place option for some reason but aren't espousing its advantages, specifically around resiliency and consistent performance. That kind of stuff is pretty important to some customers, often more important than outright maximum IOPS.
It's Dell software...
...featuring CDP - that means an AppAssure sales pitch.