I'd really like an electric Fusion (Mondeo on the other side of the pond from me) driving all four wheels. After working as a vendor SE for a few years for a large territory, I can honestly say the most comfortable of all of the rentals I've ever taken was the Fusion. It was a great fit with good overall proportions for the family and doing "work stuff". If I'm going to spend Sport money to get the 2.7l Ecoboost AWD, I might as well just get all electric for probably the same money.
Re: IBM, 10 write per day?
10 FULL drive writes per day. A 775GB drive would be able to sustain 7,750GB written per day for 5 years. Not 10 individual writes, that would be brutal!
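A quick sketch of that endurance math, assuming the 775GB capacity and a 5-year warranty window mentioned above (the exact warranty length is my assumption, not something IBM published here):

```python
# Back-of-the-envelope DWPD (drive writes per day) endurance math.
# Capacity and DWPD come from the comment; the 5-year term is assumed.
capacity_gb = 775
dwpd = 10
years = 5

daily_writes_gb = capacity_gb * dwpd               # full-drive writes per day, in GB
total_writes_tb = daily_writes_gb * 365 * years / 1000

print(daily_writes_gb)            # 7750 GB/day
print(total_writes_tb)            # ~14,144 TB (roughly 14PB) over 5 years
```

So "10 writes per day" at this rating is on the order of petabytes of total endurance, not ten I/O operations.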
Dell has an OEM division which specifically works with partners like Elastifile to provide standardized PowerEdge configurations for their appliances. These are not available to the general public and are sold directly to the OEM partner to integrate and test their software/product.
Dell would not resell the Elastifile appliance themselves, thereby avoiding sales team conflict. But if a customer decides to buy Elastifile appliances instead of Isilon or ECS, someone somewhere in DellEMC is retiring quota on the PowerEdge sales to Elastifile.
So was it MS-AI that told them to leave the Hulk out an extra lap so the grid could bunch up behind the safety car instead of boxing when Ricciardo did to maintain track position?
I would be more concerned with RAID rebuilds on a set of drives that size, because people are totally going to stick them in QNAPs and whatnot - very popular in the SMB space.
Sales speak for "he missed his number and would rather leave on his own than be sent packing". At least this way when the company misses plan for the year, he isn't there to take the fall. When they backfill, the FNG will be viewed as their saviour for a few quarters as well, unless they also fail to perform.
Welcome to sales, here's your quota and your pink slip. Miss the first, sign and date the second.
4.5b/year is more than their competitors and one of the more exciting pieces of the acquisition.
Your picture of the "M640 blade server" is actually just the back of the M1000e blade chassis. Blades go in the front :)
Re: I can't wait to see...
Really depends on the workload on the array at the same time. I had an older NetApp FAS3140 poop out a 2TB 7K drive and it took 26 hours to rebuild.
I can't wait to see...
...the look on my customers' faces when I have to tell them what the expected rebuild time on an array of these will be. Probably measured in days or even weeks instead of hours. So many people tell me "performance doesn't matter, we just need capacity" - that's all well and good, but in the event of a disk failure, how much risk are you willing to accept that another disk or two won't fail during the rebuild window, especially when you're waiting 3 days for the operation to complete? Especially since it's still just a 7k spindle.
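A rough sketch of why the rebuild window gets scary, assuming a traditional full-drive rebuild that rewrites every sector at a sustained rate. The drive size and throughput below are illustrative guesses on my part, not vendor figures:

```python
# Best-case rebuild time for a large 7.2K nearline drive.
# Both numbers are assumptions for illustration only.
drive_tb = 10            # hypothetical drive capacity
sustained_mb_s = 100     # conservative effective rebuild rate under load

seconds = drive_tb * 1_000_000 / sustained_mb_s   # TB -> MB, then MB / (MB/s)
hours = seconds / 3600
print(round(hours, 1))   # ~27.8 hours, before any competing array I/O
```

And that is the floor - on a busy array sharing spindles with production workloads, the real number stretches toward that multi-day window.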
OTOH, these'll be great in a Data Domain or similar system. "What is your retention policy?" "EVERYTHING. FOR ALL ETERNITY."
"SimpliVity’s industry-leading software-defined data management platform"
You can't call yourself software-defined when your solution is dependent on a piece of proprietary hardware.
Re: 30k iops
That is weird.
Last year EMC had a low-cost all-flash VNXe bundle available through channel partners capable of 75,000 IOPS. I sold a few of them, one to a school who was blown away by how fast their VDI sessions would recompose - it was really impressive. I mean, Unisphere is... Unisphere, but the product as bundled was really good overall. Pretty sure the 75k was 100% read, but I've seen it do 40k on an actual workload.
The VNX8000 has a published SPC-1 benchmark at 435,000 IOPS. Their current claims make no sense.
I have one customer in particular who cares about IOPS and latency beyond the "it's working, so it must be fine" metric. With a VNX5400 they've benchmarked over 25k IOPS with no more than a few SSDs and the rest spinning disk, under 40 spindles in total.
I sense marketing fail somewhere along the line. The VNX is a lot faster than this article lets on.
Note: I work for Dell now so I'm sure there's something I need to be up front about or something. Somewhere. At some point. Whatever.
Slow and steady...
...seems to be Brocade's strategy. I've been working with their non-FC gear for a year and a half, some of the stuff I inherited was a bit flaky on earlier code revisions but the ICX and VDX lines are both quite stable now. I typically lead with Cisco since our bench is stronger there (not a ton of difference though), and I do find more and more Brocade PoCs going on inside customer environments. I've seen a lot of BNA licenses go out at close to 0 dollars as well.
Also seeing a bit of uptake with their NFV stuff, some vADX here and some vRouter there. Nothing crazy, but I think it's easily digestible for customers looking to add to their existing environments.
They need an acquisition or something to get a bit more buzz going.
Re: Good for them!
Upvoted, any nation willing to push the frontiers of science is doing a service to us all.
...is the reason we like using Nimble with our managed services customers. The support teams don't worry much about the arrays themselves, they simply consume and occasionally troubleshoot.
But they are not "storage people", typically they are generalists who only care that 1) it's fast enough; 2) there's enough capacity; and 3) it has no single point of failure. The nice thing about InfoSight is that when they start running into issues, it's dead simple for our storage specialists to analyse the data and make informed decisions as to how to proceed.
That keeps our managed services support teams busy supporting customers and their apps, and our storage specialists busy doing storagey things.
I had an 8-bay at a previous job for OwnCloud storage (TS8xx rack mount), loaded with approved drives. It was okay for what we needed at the time but had horrible write performance under load, then suffered a catastrophic failure resulting in data loss (which is impressive on a RAID-6 array). Sure enough, the drives we had used - on the approved list when we bought them - had been removed from that list sometime before the failure, and I know they weren't Reds because that's what we were going to replace them with.
So my experience with QNAP is poor at best, but I'd still like a 2-drive unit for home, some nifty features there from a home perspective that I could get on with and I'd love to have something on-prem for backup.
So they've "announced" something which is going to kill DSSD outright... Cool story bro, but if you don't have anything to SHIP to customers, you are just spewing hot air. I saw DSSD in the metal and customers are running it in beta deployments.
Yes, this technology will be very cool in the hyperconverged space. I suspect Nutanix and Simplivity will get more traction with this sort of thing since they are actual leaders in Hyperconverged systems.
I wouldn't mind seeing a Surface 4 (not Pro 4) with the Core-Y processor. The 3 with the Atom is almost enough for my day-to-day stuff but it really chugs when doing complex Visio diagrams with a couple other apps open, while the Pro 3 i5 trucks along just fine.
Agree. Our CEO had one and loved it, but yelled at us when it broke after a vacation because it took our guys a full day to even figure out what was and was not covered by the warranty. Turns out he had bought some extra protection which went to someone else, and they sent us a replacement via expedited post - still two extra days to get the big cheese back up and running.
Had it been a Dell or HP device with top support, that would have been fixed by the end of the day (at least the Dell and HP guys around here are alright with hitting their SLAs).
Re: Review of Windows
We did a PoC for a large Hyper-V deployment using Storage Spaces a while back and it was not spectacular - for about 70% of the workloads it worked well, but the outlying 30% experienced weird latency and performance issues - and these spikes would impact other resources. Bear in mind the solution was spec'd and configured by Microsoft professional services - not by us. They could never get it to work quite right, and the extra time spent pushed our own implementation schedule back a ways, so we finally had to pull the plug on it.
The reasoning was sound and the same as Trevor's - these are Windows admins managing the environment, not storage admins. Windows storage seemed a good fit. When you support 500 desktops and 200 servers with 5 staff you don't have time to become an expert in everything so sometimes the simplest thing that does the job is the best.
It was an F-16, actually, in a dogfighting contest.
I think they want to be able to print a report which states the F-35 "competes with purpose-built fighters in all roles" and be able to point to tests which prove it.
Unfortunately all they will have is evidence the F-35 provides close air support as well as an F-16 and dogfights like an A-10.
...would be nice to know, I see 8GB listed but will it do 16? Also, is the SD card bootable? The SSD would make a fine flash read cache device to accelerate the VMs running on disks while booting vSphere from the SD card.
Such goodness, many IPs.
But there is still too much money to be made from engineering your way around the problem. It's easier to tack on features to extend the life of IPv4 than it is to migrate fully to IPv6 for too many companies.
I had a pleasant experience with a smaller provider though, walked in with a customers shiny new PA-500 and checked the config sheet they left for the customers WAN connection - IPv6! I think they have something like 4 million IPs to themselves now. Not bad for an office of 30.
It's not groundbreaking but the feature list looks good. Family subscription option for 6 people is good. Radio stations with unlimited skipping is good. Unlimited listening from the Apple Music library (which I hope means ad-hoc selection from the iTunes catalog), with the ability to save for offline listening. Adding Android devices is icing on the cake for my One M7 and my sons' Tab 4.
Been testing ScaleIO...
...on a trio of decommissioned servers. Nifty so far. I love the management GUI. Performance is good enough for a bunch of old 146GB disks.
Haven't simulated any failures yet and only used a single host for testing but it's pretty cool and will be a thing to watch as it matures.
As an infrastructure guy...
...I loved the Oracle Database Appliances. I sat down with our lead apps guy, went over the configuration worksheet with an Oracle engineer on the phone, then spent a couple hours doing the rack and stack and networking configuration. After that it was a complete hand off. Old servers powered down and unracked, several TB recovered from production NetApp storage, a lot of finger pointing removed, and more importantly from my perspective, the apps team was contained to a small footprint of hardware which was still more than enough to meet their needs and came bundled with its own support AND hypervisor to screw up. Really all I needed to do from that point on was give them access to a Data Domain for RMAN backups and make sure no one unplugged anything. Easy breezy, lemon squeezy.
Wait for the announcements next week at EMC World (if there are any). I asked about VSPEX Blue appliances with Hyper-V and ScaleIO and got some rather stunned "how-did-you-know" looks from the SEs.
Misleading, but not incorrect
The EF system is one of the most proven *platforms*...
The EF system "platform" is the same as the old Engenio product, so technically he may be correct but highly misleading.
There are more than 750,000 LSI Engenio arrays, IBM DS-series, Dell PowerVault systems, NetApp E and EF series, I think even Cray used the "platform" for some of their storage products. I don't see THAT as a stretch as a lot of folks bought into that platform in one form or another.
But as far as actual EF-series specific arrays shipped? I doubt they've hit 10,000. Didn't they only just recently offer dual-controller systems?
Re: In before the flame war.
So do I, have not had a particularly good experience with Android to date (HTC One M7). Cyanogenmod makes it much less bad.
...the difference between an "application delivery controller" and an "application delivery switch" like Brocade's own ADX (which I've always referred to as a load balancer because that's what it is, and virtual editions are available for use)?
Yes, it is pretty great. Not for every use case, but it does what it says on the tin and our customers are pretty pleased with it in 100% virtual environments.
Re: No mention of The Register web site going down yesterday
Name and shame the load balancer!
I can only hope this means tossing some unneeded management and fat-cat sales types out the window and not hacking away your top developers, big thinkers, or good hunters. That's not often the case, sadly. I like Citrix products myself, I've had good luck with VDI-in-a-box, XenDesktop, XenApp, and NetScaler. XenServer is what I use in my lab and it's been rock solid.
Re: leaving to PureStorage
Also, Pure is full of VC cash, so I imagine they can pony up some big signing bonuses for top talent. People like money!
Re: I don't get it.
Wait, that might not be a part of the plan...
Re: I don't get it.
I would like to see it with at least a 1080p screen and Broadwell U processors as well, probably the i5-5250U. That would make for a snappy lightweight with battery enough to last me a full day of poking around in client data centers.
You could have...
...taken the last two paragraphs, made them into a single paragraph, and skipped the rest of the article entirely. It's what we do - capture statistics with vendor sizing tools, determine requirements as far as features go, and build out a couple BoMs for different vendors' products. Then we discuss with the client regarding performance, features, and cost comparisons. At the end of the exercise they end up with a properly sized array that performs well and does what they need it to, regardless of whether it's all-flash, hybrid, or even all-disk (yes, I'm not going to recommend flash to a cost-sensitive SMB who needs 10TB but only 200 IOPS). We sleep well knowing they are going to be satisfied with the product, and we make money off the implementation and training services. Happy customers.
...for this is because it would butcher XtremIO sales, which does its data reduction in-line like most modern AFAs.
The all-flash VNX is intended to provide the same feature set as the disk-based and hybrid VNX systems so existing customers can have an all-flash option and manage with the same toolset/skillset and same features (RecoverPoint, VPLEX, ViPR, etc)
I've seen even smaller VNX systems with MCx code and a bit of flash deliver some big IOPS with consistently low latency considering the requirements of who is buying them (and the latency of the old VNX with large flash pools), but they are intended to be a multi-purpose, multi-protocol Swiss Army knife of storage with disk foundations, not a hyper performance all-flash thoroughbred.
I started with a Brocade partner about 10 months ago and we've been fairly successful with the ICX line of switching, seems to be very reliable and good features/price point but I'm irked about the older code versions Support is recommending to us and customers - get on with 8.x already!
The VDX switches are totally sweet, lots of use cases there. Performance looks to be as spec'd and the VCS fabric is very easy to configure and manage. Expensive, yes they are, but I approve.
The ADX is decent as well, not really NetScaler decent (cheaper though) but virtual ADX with Vyatta vRouters in highly virtualized multi-tenant environments actually works pretty well.
They seem to be a good company with good products on offer outside of just FC, just not big on fanfare and flash sadly. That might be all they are really lacking.
Missed as part of the announcement...
...but worth mentioning is they also now support:
- 4TB nearline drives;
- up to 6 expansion shelves;
- 25.6TB of SSD (16x 1.6TB) in the flash shelf.
(old: 3TB NL-SAS, 3 expansion shelves, 12 TB of SSD in the flash shelf)
Re: Who Takes DCIG Seriously?
That DCIG report is a load of garbage: any vendor who wanted to help them with the scoring was allowed to do so, and any vendor who didn't was docked points. It is based entirely on marketing literature and websites; they did not lab up any of the products they "evaluated".
I'm really liking the Yoga and X1 Carbon laptops my co-workers have. They love them, too. Seems to be a case of "make good products and watch them sell well". And the price points for decent desktops is getting us into all kinds of price-sensitive places where we weren't competitive before.
I got stuck with an HP Elitebook just as we started the Lenovo partnership :( Bad keyboard, even worse screen, thicker, heavier, way too much flex in the chassis, and flaky USB connectivity. No X1, this.
...was the first industry certification I ever achieved (NetWare 4.11). Unfortunately I was not able to secure work as I had just graduated from college at the time and no one wanted a 0 experience admin. Shortly thereafter, Windows NT 4.0 took over the marketplace in my area and it was easier to find work as an MCP supporting Windows Server after a few months of desktop experience than it was to find anyone looking to hire untried CNAs, but I did certainly prefer NetWare.
Re: Getting my hopes up!
Yeah, BE worked for us for many years through a few upgrades until things just randomly stopped working, then they never, ever wanted to work again. Having NDMP break when all 15TB of your files are stored on a NetApp is hugely frustrating - went from 4GB/s down to 400MB/s by having the backup server mount the shares as network drives :-/ Symantec could never get it working properly again.
I went through 3 versions of BE (12.0, 12.5, 2010) where Exchange GRT kept breaking. It would work, and then not. They would tell me it was my Exchange server. I would upgrade to the newest version of BE, and it would work again on the same Exchange server. Then suddenly it would break again, and it was the Exchange server's fault.
I had to buy NetWorker since Veeam couldn't do a thing for my physical servers (Win/Solaris/RHEL) or my filers or my Oracle DBs and I couldn't afford the price tag CommVault was quoting me. Same filer, same Exchange server, still running fine long after I left. Gross interface though, good Lord.
While I am glad...
...they are able to buy the kit they think they need, it's not exactly newsworthy, is it? That kind of hardware would amount to a PoC lab or side project for large enterprises. We have a customer who bought 16 loaded UCS chassis and a couple mill in switches - can I submit an article about them?
As a past customer, I've seen Dell take a big dump on their competitors pricing on many occasions and win business as a result. They won a lot of mine for offering the features I want at lower prices (with good local partners which helps a lot).
My only concern is that with reduced margins comes reduced R&D spending. Although as a private company, they can dictate their margins with no regard for Wall Street. We'll see how the strategy plays out over the course of the year.
Re: VMware a positive or a negative?
Cisco is actually quite big in the Openstack field and uses Inktank on RHEL for storage as it is, so they don't necessarily need VMware at all. They are also ensuring the latest generation of products has full Hyper-V interop as well.
I am more expecting a Red Hat buy from Cisco though (now that Red Hat owns Inktank), that would have significant results on the quality of their Linux-based work.
...it's as easy as redirecting emc.com to cisco.com/storage.
Everyone else will have an overlap nightmare.
Re: Really Good Stuff
Agreed, the Sourcefire stuff is great. Adding it to the ASA line is a Good Thing. After that point, it's about training and education to keep it running properly. Hope to get my hands on this soon.
Re: Find me a 6TB solid state drive
The Compellent flash stuff works really well, having owned a hybrid system with flash and disk. I found I had to let the data settle when migrating to the array so as to not fill the SLC tier too quickly. Once the colder data has de-staged to lower tiers you just let all the new writes hit SLC and let the array sort out the placement as it cools. Typical flash array performance, sub-millisecond latency and ridiculous IOPS, plus some added efficiency through thin writes and de-staging to more cost-effective media.