I just started...
...as a consultant. I came from a job with a decent desktop, a netbook, an iPad, an iPhone, and a Mac Mini for use from home. Now I have a single HP laptop. Don't love it if I'm honest, primarily the screen is crap (1600x900 on a 14" screen, could be better) and there is no third mouse button for controlling scrolling or anything, but it does have a 256GB SSD and 8GB of RAM and the wireless has been rock solid so it's not terrible.
They did give me an external 20" monitor which is only 1680x1050, but I still have no external keyboard and mouse. And the case they gave me is garbage. Oh well.
I do miss the flexibility in the old place, and the extra toys as well.
...someone mentioned the baby Compellent array due out. I guess that's that as far as confidentiality goes.
The universal answer in IT...
They bought the technology...
...to integrate it into the UCS B-series chassis and leverage the unified networking technology. They are flash storage nodes.
Flash storage nodes.
It's networked storage. It competes with other networked storage. Do these execs even know what business they're in? If you buy a storage product vendor and connect your new toy to other devices you sell to pool the resources, you are now selling shared storage products.
I for one would like to meet these execs. I have a very large clock tower in London I would like to sell them.
I don't think...
...NetApp has a choice in this, the FlexPod solutions have been quite popular locally but NetApp sales on their own have sort of tanked, at least in this area. They won't abandon a strong option like FlexPod simply because Cisco is pushing their own flash agenda.
EMC could get a little pissy about it, but only as it relates to VSPEX - as far as the vBlocks go all the storage products are EMC and that appears to be set in stone.
I'm not seeing a really convincing case for Whiptail from my perspective, not yet anyways. Sharing the flash modules amongst many workloads and letting the storage array sort out placement over time is more appropriate and cost effective than dedicated flash for individual workloads. I suspect this would be the case for most medium businesses and small enterprises, to whom flash is still fairly expensive.
It definitely feels like a niche product that most vendors won't be overly concerned about; it's not a general purpose storage system, so I can't see it making that big of a splash.
Well, it's the BRIC countries, plus the next 13.
It's an S4810...
...not an S4180. It's a 48-port 10GbE switch, after all.
Whether or not the move from purpose-built ASICs to software on merchant silicon is a wise one remains to be seen. Dell is being very software-centric in networking and storage, however. Will it pay off? HP is doing ASICs in their networking line still, and 3PAR is highly dependent on its ASIC technology to deliver consistently high performance when run flat-out.
Re: All credit to CISCO....
Interesting times. Possibly ViPR will act as the management piece in that scenario, orchestrating data movement between flash-enabled VNX and all-flash Invicta arrays? Maybe even managing the movement onto XtremSF cards?
You'd have the cards installed in specific hosts as per application requirements, have a large pool of Invicta flash available for all hosts to use as-needed, and back-end that with FAST-enabled VNX or VMAX storage. (I don't see why it has to be EMC but it does make sense with ViPR in the mix, FusionIO/Invicta/NetApp could work just as well as long as there was an orchestration engine working in there to tell arrays where to move which blocks).
A bit late to the party...
...but we have a lot of Dell equipment here. Off hand, we have something like 1600 desktops/laptops, 30 servers, 3 Compellent arrays, and 2 EqualLogic systems. The support on the data center gear from our perspective is solid. Copilot for Compellent is fantastic, currently my #1 favorite. EqualLogic and PowerEdge support have all been very good as well, always within the SLA of the support contract.
We have had problems with hardware bought in one country getting warrantied in another, I do wish they would address that issue.
I'm pretty happy with the kit I'm getting. Low failure rate and solid support, freebies here and there, and the Help Desk guys are liking the new ultrabooks, very popular with our travelers.
I've had the same Account Manager for 5 years, he was in the role before I got here. Good guy, picks up a lunch here and there and is always up to talk. If we want to go direct through Dell, no problem. If we want a VAR involved, no problem. References? Done. Site visits? Tell me when you're free.
Our SE is another good guy, sent me to Enterprise Forum last year and is always shooting me emails about new products and sending personal invites to Dell events just to save me the hassle of filling out an online form.
Yeah they've accidentally ordered a wrong IO card for me and the hardware DOES take too long to ship from the order date (a month, minimum) but a little planning and verification goes a long way.
IBM doesn't care to come around at all anymore after we had some serious issues with their kit. HP wants us to buy something first (specifically a 3PAR, it seems), THEN they'll start coming around. Dell and Cisco however (among a few others, Meru, FortiNet) seem to be genuinely interested in seeing us succeed. We're far from their biggest customers in this region but they're always excited to get in on the next "big project".
Sure glad my guys missed the axe, too bad there aren't more like them or I'm sure some of you guys would be singing praises. Good SE's and good account/sales guys teaming up to deliver products/services has had very positive results for us.
...look to be a way for Red Hat to further marginalize Oracle Linux?
I sure hope so.
...is awesome! Hoping for some exciting discoveries in 2014. My son and I will be on the back deck with his new telescope trying to make a few of our own (they're all exciting when you're 6).
The pricing wasn't nearly good enough...
...to beat Data Domain for us. EMC just mopped the floor in our backup RFP and took all the business for tape library, disk target, software, and services. The POC was flawless and the package did what they said it would. No one else enjoyed such results.
Now we're on to the storage front and EMC is off to an early lead with realistic sizing and reasonable performance claims, then adding data reduction and FAST goodies. Everyone else is basing their entire solutions around very optimistic performance expectations and data reduction technologies just to try and compete.
Considering that I need to live with the decision, good or bad, for the next 5 years or so at least, I'll take the workhorse general purpose storage array with lots of disk and flash and tons of expansion capability/capacity over the thoroughbred super arrays. I need reliable, predictable performance and capacity expansion increments based on real world data, not something based on best-case scenarios.
I suspect we'll be an EMC shop for backup and primary storage at some point in the future.
And by PS6xx0s, I mean the 61x0s and 60x0s models.
Also, it looks like the result was obtained by using 8 of the PS6210 models together with a 100% read workload (per their web site), so "tuned for success".
Still, some good steps forward for this product line. I'm also interested in the performance improvement on the spinning disk models where cache is hit frequently; 16GB is a big jump.
Dell also has an all-flash Compellent option that was GA as of Storage Center 6.4. And there was always an all-flash EqualLogic in the PS6xx0s, but it certainly wasn't this capable. It would be interesting to see how they obtained that 1.2m IOPS result. Some people buy flash expecting sub-millisecond access times from their apps, but some of us could live quite happily at 2ms especially if it's reasonably consistent and means expanding in cheaper increments.
I wonder how it would compete as far as features and $/IOPS vs. the rest of the flash market? I suspect some folks will need to either spread FUD (15MB page size, how does Dell sleep at night!) or cut costs significantly.
Add in the Force 10 switching providing low latency 10GbE with DCB for lossless Ethernet and you've got a pretty stout storage backend/fabric that shouldn't break the bank (although I'm looking at 300k for a flashy CML, so it's still steep even with the Dell price points).
Not a bad story for Dell tbh, especially existing EQL customers looking at a flash requirement. Now to pull a multi-controller Compellent out of the hat for Enterprise Tier 1 customers.
...the dual-active controllers?
As someone who HAS lost a controller, I don't consider it a business option in the slightest without a fail-over capability. Performance is a good thing, but only after resiliency and availability are factored in.
The reality is that VNXe's and FAS2220's are pretty cheap these days, highly expandable, have great support, and perform well while delivering solid engineering, integration, and software features right out of the box. Sure, they don't offer the fluff of an overgrown home NAS box but they will do block and file storage reliably and you will have a big company there to catch you if you fall.
Re: That covers "non-endurance related failure"
That's the piece that is missing, isn't it? You may only have one non-endurance related failure every 105 years, but you may have a very sudden 25 endurance-related failures every 5 years depending on the workload.
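To put those two failure modes side by side, here's a quick back-of-the-envelope script. The numbers are the ones quoted above; the comparison itself is purely illustrative.

```python
# Rough comparison of the two failure modes described above.
# Figures come from the comment; this is back-of-the-envelope only.

YEARS = 5

non_endurance_rate = 1 / 105          # failures per year (1 every 105 years)
endurance_failures = 25               # wear-out failures over the same 5 years

expected_non_endurance = non_endurance_rate * YEARS
ratio = endurance_failures / expected_non_endurance

print(f"Expected non-endurance failures over {YEARS} years: {expected_non_endurance:.3f}")
print(f"Endurance-related failures over {YEARS} years: {endurance_failures}")
print(f"Endurance failures outnumber the rest roughly {ratio:.0f}x")
```

In other words, on those quoted numbers the wear-out failures dominate by a factor of a few hundred, which is exactly why quoting only the non-endurance MTBF is misleading.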
...has already responded to the all-flash arrays with the 3PAR 7450.
IBM's problem is a lack of glitz and glamour. Look at how dull and boring the product line is! I've never heard an IBM rep get excited about their products. HP practically froths at the mouth about how the 3PAR stuff is the juggernaut of the storage industry, leaving everything else as nothing more than a pile of smoking hardware and red flashing LEDs.
Plus HP positively jumps on any opportunity to do a proof-of-concept and ram the point home (I tend to be very critical of HP evangelists having come from HP before, but the products are sound and that is proven out in the field).
I asked my IBM rep about doing a bake off with a Compellent for me since that's what the very trusted VAR would sell us and he didn't want to participate but I should consider them anyways because of <marketing spew and boring Redbooks>. If the VAR sold HP I guarantee they'd be in like a dirty shirt with their goodies and free lunches and such.
Plus, 3PAR is yellow. Point for HP. If the rest of their equipment was painted to match my server room would look like a bumble bee because yellow means it's awesome.
As for EMC, they own so much of the world's data (as in, it resides on their products) that it is very tough to get people out of the ecosystem. Not that you'd want to as a customer; if you are an EMC shop with existing storage to replace, buying the newest version based on what the SE sizes up for you will certainly not get you fired.
And NetApp is NetApp. Apparently disk is dead and NetApp is so last decade, but it looks like no one got the memo.
Sweet F#ck All, great name for a product! Don't know if I'd trust "sweet f#ck all RAID" though.
Dell also uses Brocade parts in their blade chassis and older versions of the Active System were sold with Brocade FC switches as well.
Mellanox won't come cheap and Oracle and IBM might be a bit pissy about that purchase as they both use Mellanox products as well.
Brocade only really works if it stands on its own as far as its core FC business goes. Its Ethernet business doesn't have the strength to sustain it. Maybe buy an HBA company and get into the flash caching game?
...the author seems to have missed the fact that the new Cisco 6807-XL replaces the 6500E-series as Cisco's "core switch" and has massively improved thermals/power consumption and backplane performance.
That doesn't mean the 6807 is the king of all switches, but it wouldn't hurt to factor it in to the article since it's intended to replace the 6500 chassis (it even supports the currently available line cards intended for the 6500).
...is the number of X-Bricks that were rolled out in Limited Availability. Along with the 200 internal systems for training that would mean a total of 350 systems deployed in some form or another.
The 150 doesn't refer to the number of failures; it means that, of the 350 deployed systems, the failure rate seems high (without giving a specific number).
That could be 4, 10, or like 100. Unfortunately they didn't give us an actual number, just the number of units used to generate the sample size.
I was just at a solutions expo...
...and one of the Dell guys I know handed me one, said "I think you'll like this one", opened the screen and it woke to an Ubuntu screen. He gave me the password and I started pecking around. Extremely usable experience. Then he tapped the screen and all of a sudden Unity moved up a few rungs on my personal UI scale.
It's pretty sweet, I'll be honest. Glad Dell sees a market for a decent spec Linux ultrabook, even if it isn't everyone's favorite distro. At least the derivatives should all run well on it.
The Pure web site specifically mentions Active/Active on the newest controllers. I believe Active/Passive is referring to the older series.
I don't know about...
...all-flash datacenter in 5 years, but there is a good possibility of mostly-or-all-flash primary storage and greater use of in-host caching. I think you will see that become more and more common as flash costs come down and capacities go up. XtremIO primary plus Data Domain backed by Spectra Logic tape, for example. One for hot data, one for archive and hot backup, and one for long-term cold backup. It seems as though most vendors are developing similar strategies.
We are doing an RFP for a greenfield site and every single vendor is bringing some flash capabilities to the party (Dell, in fact, did go all-flash on primary storage). Workload-wise we don't need it but everyone is building it in to their platforms so when our workload patterns change in the future, which can be difficult to predict over 5 years, we will be positioned to deal with those changes and expand the use of flash as needed. Yeah, I'd love an all-flash storage array, but that's not always the most affordable thing for SME. Bring on the hybrid arrays with disk-to-tape backup, I say.
More than 5 years is hard to predict, a lot can change between now and then. There is already a lot of change in the storage industry right now and even worse, a lot of FUD in the marketplace. It's good times and sometimes the drama plays out like a soap opera, this is a great industry!
It's pretty cool...
...for the SMB space, especially considering you can buy a couple high horsepower Proliant servers with a bunch of NIC ports and some VMware Essentials licensing or Windows Server Datacenter, and share the built-in storage with all of the systems in the cluster. Good case to get some Procurve switches on heavy discount, too.
Create a small local datastore for the VSA, give it the rest of the disks in the system to share, cluster it with VSAs on the other nodes using the network RAID functionality, and enjoy the recent improvements to the tech.
I've tried it before in a lab environment and came away pretty impressed. It's really easy to use and works well. The real LeftHand stuff, especially 10GbE with fast drives, is very nifty (per-node expansion costs are a bit high though compared to adding a RAID pack or something like that but you do save a bit on software licensing).
Braver than I...
...as a Manitoban I prefer to keep my natural fur during the winter months, especially on the top of my head!
"Internet of Things", why is this a phrase
> Cisco came up with it to describe an everything-connected world, blame their marketing team
and why is the idea of it actually needed
> Internet-connected "things" means more ports sold, which is pretty important to a networking company
I've never been at work and thought "shit, if only I could check my fridge or microwave", maybe if I was a project manager I would but I am not.
> Nor have I, and I particularly hate it when someone buys a home surveillance camera which takes pictures based on movement and then emails them to peoples work addresses. Not everything needs to be Internet-connected or accessible 24/7 (but my boss loves changing the channel on his wife from his desk).
...about right (2.5x 2560Mb/s = 6400 Mb/s). Wiki backs that up. 12Gb is 3.75x the performance of Ultra320.
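The arithmetic above can be sanity-checked with a quick script. The 8b/10b encoding overhead assumed for 12Gb SAS is my assumption for how the 1200 MB/s usable figure falls out; the Ultra320 baseline is from the comment.

```python
# Sanity check of the bandwidth figures quoted above.
# Assumption: 12Gb SAS uses 8b/10b line coding, so usable = raw * 0.8.

ULTRA320_MBPS = 320                   # Ultra320 SCSI: 320 MB/s usable
ultra320_mbps = ULTRA320_MBPS * 8     # 2560 Mb/s

# The 2.5x figure quoted above
print(2.5 * ultra320_mbps)            # 6400.0 Mb/s

# 12Gb SAS: 12 Gb/s raw per lane, 8b/10b coding -> usable MB/s
sas12_mbytes = 12_000 * 0.8 / 8       # 1200.0 MB/s per lane
print(sas12_mbytes / ULTRA320_MBPS)   # 3.75x Ultra320
```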
Load up a SAS bus with lots of SSD and load the controller up with front-end 16Gb fiber channel, and the bus itself may become oversubscribed under heavy load, further driving the requirements for better visibility: right from the application, on a VM, in a VM cluster, through an HBA, across the SAN fabric, into the front end of the SAN, through the write cache, then out the SAS bus to the disks or flash modules sitting on the back end.
Another one! How cute.
The orange is a nice touch.
...are agreements, even if you don't like them. I'd say at this point EMC is standing on some pretty strong footing here, especially with the evidence they've provided of people taking documentation with them on their way out the door.
Lots of excitement...
...there are a lot of cool things coming out of the startups, but when I think long and hard about it I really can't see myself wanting to move away from the big players in all-flash or hybrid-flash systems. As we're currently discussing a storage refresh and working with multiple VARs and manufacturers, we always seem to come back to EMC (VNX with FAST), HP (3PAR with AO/DO features), Dell (Compellent with Data Progression across SLC/MLC), IBM (Storwize with Easy Tier), and NetApp (FAS with Flash Cache and Flash Pools).
We always get ramped up by the startup storage vendors only to be let down when we really look under the covers, mostly because the products are designed to do one thing well. It sure makes for a boring experience but at least if you have the big guys in your corner and a local SE you can choke, they'll try not to f#ck it up too badly (and they don't mind picking up a lunch tab every other month or so). If a startup gets it wrong, they might not even be around long enough to take your anger out on them.
Maybe the next time we look at a storage refresh, one or two of these vendors will still be around and worth taking a look at but until then I'm going to run safely into the arms of the big guys even if they run train on my budget for maintenance and support services.
...it's actually the opposite: everyone is connected to Cisco 2960X switches, gigabit to every desktop. From there, uplinked to new stacks of 3850s carrying multiple 10GbE links back to the core, where users run train on the front-end 1GbE ports of our NetApp filers. A funny thing happens between 3:30 and 5:00 every day as our engineering department tries to save all their drawings and assemblies and simulations. Some of those data sets are ~25GB :(
Should almost ask the networking guys to throttle everyone back to 100Mb.
...significant R&D investments in the short term and better communication about roadmaps and how far along things are (cough *ocarina* cough).
...more accurately, it's not that the arrays can "delete all data" on their own, but that someone could gain access to do so without authenticating.
It's not like there's a huge flaw in the array that results in data disappearing all of a sudden, this is no different than HP's slip up with the MSA storage arrays having a default account.
Proper segregation of management and production networks to isolate these management shells/GUIs should be best practice just about everywhere (although it's probably not, so maybe this actually will be a problem for a few customers).
I can't see how...
...people quitting to join an up-and-comer is "suspicious" in any way. Pure has tapped into the EMC resource pool and is likely making very attractive offers in their quest to push their platform to the next level. No big deal. The only reason it's suspicious is likely because these people are intimately familiar with EMC operations and possibly engineering as well, but you can't really "unlearn" anything. Some of those staff might see an opportunity to evolve the Flash Array product into what they envisioned for, say, VMAX, but which can't/won't materialize there for one reason or another. This is their clean slate. That's exciting stuff for established engineers.
There is a HUGE problem with EMCers taking EMC-confidential information with them on their way out the door, however. Not only is that almost definitely sue-into-the-ground worthy but some of that could be, oh I dunno, MY personal data. Yeah, I have a problem with that.
...bad data in this report. In a cursory examination I found a number of things missing or improperly tabulated from a number of vendors (support scores being different but the metrics are the same? missing features that ARE supported?). DCIG did a terrible job with this report. As someone with no stake in any of the vendors (customer) I don't think I will be using any of their analysis for my decision-making anytime soon. I've already harshly criticized it publicly on LinkedIn so not going to get into detail here but the report is worthless.
Anyone hoping to use this to assist in decision-making should buck up and do a bake-off instead and get something of value out of their time instead of a sponsored report that's so poorly done I question the integrity of the testers/analysts.
...ridiculous vRAM licensing, I'm inclined to buy more licenses. With support.
...are clouding their judgement on this. They need to get away from the DS series stuff and focus on the Storwize and XIV platforms.
They've made smaller versions of the Storwize platform (v3700, v5000 alongside v7000), but in the other direction they are limited by form factor. IBM should put the software on a pair of x3650's with a pile of PCIe slots and dual sockets with RAM out the wazoo. They would be able to scale up much higher than the v7000. Easy Tier across multiple types of spinning disk. And then scale it all out across multiple pairs of these monster controllers with workload mobility and maybe automated performance load balancing.
Then they need to define the use cases for XIV much better. Right now, whenever IBM sales comes around they pitch it as a second- or third-place option for some reason but aren't espousing the advantages, specifically around resiliency and consistent performance. That kind of stuff is pretty important to some customers, often more important than outright maximum IOPS.
It's Dell software...
...featuring CDP - that means an AppAssure sales pitch.
...when you still need to take a qualified training course before you're even allowed to challenge the exam?
Either make the courses cheaper (or at least the What's New option for previously certified VCPs) or drop the requirement entirely.
Nothing says entry-level...
...like 16Gb fiber channel, 12Gb SAS, and 90k IOPS.
I'm surprised they still ship the MSA instead of just dropping their pants on the 3PAR 7200. Must be some money to make there somewhere. Guess it's the same for IBM still pushing the DS series (NetApp/Engenio) alongside the v3700 in the entry-level.
Can do that on Compellent as well...
...using Storage Profiles to determine where the data and snapshot data is stored on a per-volume basis (which tiers the data and its snapshots should reside on). It's not as polished as the 3PAR method of actually assigning thresholds but will work well for most companies.
Alternately you can use the defaults and have all your writes come in to tier 1 RAID10 and let Data Progression re-stripe/move the data to where it needs to go over time.
It's a bunch of disks in a pool, sorted by tier. The system writes data at the RAID level you specify to the tiers you specify on a per-volume basis, then takes a snapshot of the data and converts that on the fly as well to whatever RAID level you specify and moves the data up or down through the tiers depending on how frequently it's accessed.
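The promote/demote cycle described above can be sketched as a toy loop. The tier names, thresholds, and page structure here are mine for illustration, not Dell's actual Data Progression implementation.

```python
# Toy sketch of tier-based data progression as described above:
# hot pages move up a tier, cold pages move down, each cycle.
# Tier names and access thresholds are illustrative assumptions.

TIERS = ["tier1_ssd", "tier2_15k", "tier3_7k"]  # fastest to slowest

def progress(pages, hot_threshold=100, cold_threshold=5):
    """Move each page at most one tier per cycle based on access count."""
    for page in pages:
        tier = TIERS.index(page["tier"])
        if page["accesses"] >= hot_threshold and tier > 0:
            page["tier"] = TIERS[tier - 1]       # promote hot data
        elif page["accesses"] <= cold_threshold and tier < len(TIERS) - 1:
            page["tier"] = TIERS[tier + 1]       # demote cold data
        page["accesses"] = 0                      # reset for the next cycle
    return pages

pages = [
    {"id": 1, "tier": "tier2_15k", "accesses": 500},  # hot, gets promoted
    {"id": 2, "tier": "tier2_15k", "accesses": 2},    # cold, gets demoted
]
progress(pages)
print([(p["id"], p["tier"]) for p in pages])
# -> [(1, 'tier1_ssd'), (2, 'tier3_7k')]
```

The real system also converts RAID levels on snapshot data as it moves, which this sketch leaves out.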
And they don't call the storage units "LUNs" either, if that makes you feel any better.
We are currently...
...doing diligence around converged infrastructure, lots of interesting things going on in this space. Based on our requirements, the Dell Active System is slightly edging out the vBlock 340, all the way up until we talk data protection, where the vBlock builds on EMC's pile o' products. HP is sitting near the top as well. IBM is a fair ways behind, unless you are talking about price, but we haven't really sized up a FlexPod yet.
Lots of free lunches though, it's f@cking great.
...flash array only or does that include hybrid arrays and DAS or locally-attached flash? There are some entries that don't make much sense given their position. Dell, for example, has an all-flash EqualLogic but I doubt enough people are buying it to make a dent in the market compared to the hybrid version. The all-flash Compellent isn't GA yet as far as I know either. And I'm pretty sure the NetApp flash pool technology for FAS filers is probably selling much hotter than the EF540.
We're starting to dip a toe in the flash pool...
...we just started this month with an order for some flash for one of our Compellent arrays. Plan is to pin our Oracle RAC redo logs to flash (using a RAID10 r/w, RAID5 replay storage profile) and monitor the performance.
We are also looking to do a greenfield datacenter next year and this will be a good proving ground for flash. I was very specific when I submitted the business case for SSD by stating it is a consumable device with a finite lifespan, but the SLC drives we bought into have a crazy lifespan as far as our daily change rate is concerned. I also made very certain to not promise the sky to my Oracle DBAs but rather said it "should help some".
Yeah, I'd love an all-flash Compellent or 3PAR or VNX2 or something but there is no way I could ever afford something like that with my budget. Smart money in my case is on reducing latency in the fabrics and providing a fast SSD tier for incoming writes to land on, lots of tier 2 spindles to support read I/O, and lots of tier 3 capacity for my cold data to sit on and age gracefully. That way I can float all our apps and scale out performance and capacity in both compute and storage independently.
Hybrid mid-range storage for me. Above 3 are strong choices. NetApp FAS with Flash Pool and Nimble Storage CS stuff is also attractive for our size.
The first thing they need to do is revamp their entire support model for the Enterprise line to match their Copilot support model. I've never had a BAD Dell experience, but my Copilot experience has ALWAYS been above expectations.
...on the enterprise side are pretty pumped, already spoke with my account manager and the consensus is good things for their piece of the puzzle, especially in terms of investment into their current portfolio.
I don't know if things are quite as rosy on the consumer side though.
Re: Disks in arrays
That was always my problem with the NetApp stuff, especially with the fiber channel shelves that held 14 disks. Optimum RAID group sizing was 16 drives according to the ONTAP cookbook, so I needed two shelves with two disks in the second shelf. Explaining that to a director was a pain in the ass, we'd end up buying two full shelves and leaving 12 disks unused until we bought yet another full shelf. It's really not just a NetApp problem, but the RAID group sizes being so large was always a bit of a pain come expansion time.
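The stranded-disk math above is easy to show with a few lines. The shelf and RAID group sizes come from the comment; the helper function is just my illustration.

```python
# Illustration of the shelf vs. RAID-group mismatch described above:
# 14-disk FC shelves against an optimum 16-drive RAID group size.
# Sizes are from the comment; the helper is illustrative.

SHELF_SIZE = 14
RAID_GROUP = 16

def stranded_disks(shelves):
    """Disks left over after carving full 16-drive RAID groups."""
    total = shelves * SHELF_SIZE
    return total % RAID_GROUP

for shelves in (1, 2, 3):
    print(shelves, "shelves:", stranded_disks(shelves), "disks unused")
# 2 shelves leaves 12 disks stranded, matching the situation above
```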
Moving to virtualized RAID storage systems made a big difference for us. Our usable capacity went up a fair amount, as well the performance per spindle is consistently higher. And we can add them as needed in any amount required either to deliver more IOPS, more capacity, or both.
Actually the entry-level Cisco exam for the CCNA cert has built-in simulators requiring a knowledge of the commands and understanding of the curriculum. We sent two of our guys out for 7-day boot camps and they crammed like mad and spent their evenings reviewing and practicing in the lab, and they both used most of the allotted time for their exam. They both passed, but both conceded it wasn't as easy as they thought. It's a good method, regurgitate some of the specs AND be able to do the real work.
I've heard the RHCE is similar, where there is a good chunk of practical work involved. Our head Linux guy did it and was really impressed with both the training and the certification program. Those sorts of certification programs are quite valid in my eyes and even though the CCNA is entry-level, if you've gone out and earned it then it's worth bragging about.
I've taken a bunch of MS exams but only remember one of them having a simulation exercise (which crashed). To be fair, my NetApp, EMC, HP, and VMware certs were all the same too. Might be a different story at higher levels though.
Don't let the door hit you in the ass on your way out.
Now to see what Mikey D has in store for his company.