I love it.
While I do have concerns about chassis reliability and hopes for the availability of a spares kit, the raw idea I love. I hope they do well.
It is refreshing to see that someone pays attention in the server racket. It is also annoying that it has taken a top-tier server maker so long to get a true system out the door that is suitable for small and midrange businesses. But Dell may have finally done it with its PowerEdge VRTX.
[Pictured: a typical branch or SMB data closet …]
Reliability is at least as good as a single server's, and arguably better, thanks to the shared storage and the use of a virtual infrastructure (Hyper-V or VMware).
Posting AC as I work for Dell.
Actually, I am interested now. Even for a large company, it's much better than getting a massive blade centre and only populating three or four servers for remote sites.
Looks perfect for a mini VM server cluster for a test lab.
Now to try and get someone to allocate some budget to one. (I'll replace the Dell sticker with an IBM one to please the overlords.)
I wonder how realistic this pricing is, though. I wouldn't bat an eyelash at paying 5-10K for the small businesses out there that need a multi-server setup. However, I have a feeling there is a bunch of other charges we are not seeing at that point, ballooning it out of the range of some of the customers that would otherwise look at it. Let's face it: blades generally have a premium attached to them.
Would be nice to see a version with mixed 3.5" and 2.5" storage. Though, with it being Dell, chances are you're not able to put in anything that isn't approved by Dell to begin with, so no loading up the front hot-swap bays with standard off-the-shelf commodity storage.
Dell won't provide technical support for third-party drives you add, but they no longer prohibit non-Dell drives.
I would be shocked if even a basic two-node config with six cheap spindles comes in at less than 15K.
I would anticipate a mid-20s price for basic normal configs, and maybe 30K with a full set of compute nodes. Even then I am very interested.
The addition of a high speed shared storage subsystem could be a clincher here. I'm guessing this will be almost DAS performance in a shared subsystem, something you would need to spend big money on to get today in a true external shared storage system. The fact it will be limited to four compute nodes is acceptable, considering the workload I can accomplish with 8 sockets these days.
I work at Dell. Get ready to be shocked. Pricing will knock your socks off. The simplicity of this product enables affordability.
IBM has had its BladeCenter S small blade chassis available for some time, with similar blade/storage/power, etc. configuration capability to the Dell unit marketed in this article, and at reasonable prices.
Was the article text prepared by Dell's marketing department?
I think this is a great idea that more vendors should have created and marketed. The first pic in the article is so true in so many SMBs.
Did the author miss the HP C3000 chassis whilst he was burying his face in the Dell brochure?
You can usually get a free chassis when buying two blades as well.
Now available: a "shorty" chassis, two blades and a 1/10Gb VC switch for £5.5K list, with a free chassis choice that includes the premium platinum unit, which itself lists at just under £6K.
C3000 - the advantages of the C7000 but cut in half? No, the author didn't forget this: the C3000 is complex and difficult to manage in comparison, has no shared storage, and cannot be fully populated on 110V power. Oh, and it's loud. The VRTX is none of those things. Quiet as a mouse.
".....the advantages of the C7000 but cut in half?...." <Sigh> If you're going to try FUDing a vendor's kit to a customer that has used it then please at least try and learn a bit about the kit first! Please start here to save yourself further embarrassment (http://h18006.www1.hp.com/products/quickspecs/productbulletin.html#spectype=worldwide&type=html&docid=12790). I suggest you then stop off to have a look at the Fujitsu BX400 chassis as that also seems to be a superior offering (http://www.fujitsu.com/fts/products/computing/servers/primergy/blades/bx400/index.html), and even the old IBM Bladecenter would seem a better option (especially as IBM VARs seem to be selling them off cheap before the whole lot become Lenovo lines).
".....the C3000 is complex, difficult to manage in comparison...." Really? How? It uses the same redundant onboard administrator modules and web-based management as the C7000, uses the same blades so has iLO as well, and feeds into tools like SIM and Insight (or other vendors' management tools), so how exactly is it "complex" or "difficult to manage"? Oh, sorry - did you just expect me to accept that throw-away FUD line without some justification? Silly me!
".....no shared storage...." WTF? It takes the same interconnect modules, storage blades and tape blades as the C7000 and can connect to all the same external storage devices via FCoE, FC or SAS, or with the VSA software so we can use iSCSI. Try harder.
".....and cannot be fully populated on 110v power....." Wouldn't know about that seeing as we run them in branch offices - off 3-pin mains power - not telco racks. Please do clarify that interesting statement, just so we can gauge the male bovine manure content.
".....and it's loud....." Yes, and when was the last time you had a rack under your desk? We put them in network closets. If I wanted a desktop server I'd be buying a tower like an ML and using VMware or Hyper-V to carve it up, not a blades chassis.
Try again, only this time make a real effort.
The C3000 is complex like a C7000. Yep, same management as a C7000, unlike the inherently easy and straightforward CMC management in the VRTX. I've seen both. I've played with both.
The C7000/C3000 in comparison is complex and designed for data centre use. You can use the VRTX in a data centre if you want, but the complexity is designed out from the outset.
No shared storage:
The interconnect modules that go into the C7000/C3000 are SAS blades, which are (a) just SAS shared storage to be allocated to a single server, (b) really expensive in terms of real estate and $/GB, and (c) inflexible - they cannot be extended as a single storage footprint outside the chassis.
Your comment on external storage is irrelevant, as this applies to any server, including the VRTX, which can also connect to external storage should there be a requirement. Does the C3000 have eight universal PCIe slots on the back that can be dedicated to a server, or shared across several? Of course it doesn't.
If HP really wanted to be innovative, it should have integrated enterprise-class storage like Dell has with its EqualLogic blades. It hasn't done so.
Cannot be fully populated on 110V power:
Correct: you cannot fully populate a C3000 chassis with highest-bin speed/max memory servers and run it on 110V power. Or perhaps you can: after all, HP is unable to scale a half-height blade server to a 768GB memory footprint, due to only having 16 DIMM slots. Dell has innovated to 24 DIMM slots in its latest blade offering.
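The memory-footprint arithmetic behind that 768GB claim is simple. A quick sketch, assuming 32GB DIMMs (the largest size in common use at the time; actual supported sizes vary by server and generation):

```python
# Illustrative only: assumes 32GB DIMMs per slot.
dimm_gb = 32

half_height_16_slots = 16 * dimm_gb  # a 16-slot half-height blade
half_height_24_slots = 24 * dimm_gb  # a 24-slot blade such as the M620

print(half_height_16_slots)  # 512
print(half_height_24_slots)  # 768
```

So under this assumption the extra eight slots are exactly what takes a half-height blade from 512GB to the 768GB figure quoted above.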
And it's loud:
No one puts a rack under their desk, which is why customers aren't deploying C3000s under their desks. The VRTX is designed for this option, which is why it's whisper quiet and has a lockable front cover. You obviously know nothing about the ML, as it's designed for data centre usage and rack-mounting, and its electrical spec means it's not designed to conform with the 'elf-n-safetee regulations that would allow it to sit next to a person at a desk. And it's data-centre loud. The VRTX is whisper quiet.
Please do your research before defending the indefensible.
".....The C7000/C3000 in comparison is complex and designed for data centre use....." Just saying it is designed for data centre use is not saying in any way that it is unsuited to branch office use. Quite the opposite, seeing as it can be remotely managed. Once again, explain the remark or admit it is just FUD.
".....The interconnect modules that go into the C7000/C3000 are SAS blades....". Ignoring all the other interconnect options, including the Virtual Connect / Flex10 offerings that Dell has SFA chance of matching, you seem to have deliberately ignored all the storage blades hp also offers, including tape blades, which still seem to be a black hole in the Dell offering. Where is the tape backup in the supposed branch-office-in-a-box Dell offering? Oh, it's not in the box. And can you connect the VRTX to an external tape device? Nope, you don't even have an external USB port on the chassis. Instead, you have to stick a USB tape drive off the front of one of the blades in the VRTX.
"......If HP really wanted to be innovative, it should have integrated enterprise-class storage like Dell with their Equallogic blades....." Again, not only are you ignoring all the hp storage blades, you are being deliberately misleading, as the EqualLogic storage blades won't go into the VRTX. All the VRTX does is strap a 12-disk JBOD to the chassis for the four blades to share. The C3000 can use four server blades and still have space for four storage blades like the D2220sb, each with twelve disks, meaning FOUR TIMES the storage the Dell VRTX can offer. And then, whilst you can't put EqualLogic blades into the VRTX, hp can put their VSA software onto their blades and offer a superior option to the EqualLogic anyway. But I can understand why you're so determined to belittle external storage, as the only option the VRTX can manage is iSCSI, meaning it has virtually zero external capabilities. Try again, little troll.
".....Correct: you cannot fully populate a C3000 chassis with highest bin speeds/max memory servers and run this on 110v power....." I call bullshit on that one! I've checked, we have C3000s in branch offices in the States that are fully-populated. I suggest you get some new FUD.
".......Dell has innovated to 24DIMM slots.....". So, not only have the holes in your non-arguments been exposed, we have only started to show how limited the VRTX really is. For a start, you only have the option of two two-socket Xeon blades - the M520 and the M620. The C3000 can handle any of the nine current Xeon, Opteron and Itanium blades, plus the workstation blades and prior generations. That's including the four-socket BL680c G7, which has 64 memory slots, and the eight-socket BL890c i4, which scales to 1.5TB in 96 memory slots. Oh, sorry, were you trying to pretend 24 slots was "innovative"? HAHAHAHAHAHA!!!!!!! Back to troll school for you!
You're making this so easy for me.
Denying complexity in the C3000 and then in the next sentence mentioning Virtual Connect/Flex10 is a joke. That is too much for remote office/edge and overkill for this space. Take a look again at the back of a C3000, see how busy it is, and then come back and tell me how HP's reduced complexity.
Can you connect to an external tape? You obviously don't know what you're talking about - there are eight PCIe slots on the back to connect to whatever you want.
Storage is up to 25 drives, not 12. And you're forgetting that the C3000 is significantly larger than the VRTX.
Additionally, this is high-function shared storage, not captive islands of JBOD storage like HP's D2220sb.
The EqualLogic comparison was made regarding Dell's M1000e versus the C7000/C3000, because HP hasn't innovated anything close and the VSA is inefficient in terms of its raw versus usable disk numbers. Dell customers thus have a choice - HP customers don't.
"I call bullshit on that one! I've checked, we have C3000s in branch offices in the States that are fully-populated. " Not on 110v power you don't.
Dell offers fewer server options because the ones it offers are more flexible and scalable. HP's two-socket servers don't have the same level of flexibility, due to the age of the chassis design and the limitations of the architecture. Come back when HP's finally designed a half-height server that scales to 768GB. I need two HP half-height servers to provide that amount of memory footprint to vCenter.
Why would I want an 8-socket server that goes to 1.5TB when I can have two 2-socket servers that get to the same memory footprint in the same space, at a fraction of the cost - and give me more options for redundancy in a virtual environment? Actually, why would I want an 8-socket server for campus, edge and departmental use in the first place? Sounds like HP's trying to solve problems that don't exist again.
I suggest that you get out of my sandpit, go back, lick your wounds, and do some more research.
"You're making this so easy for me...." Ignoring reality does seem tragically easy for you.
".....Denying complexity in the C3000 then in the next sentence mentioning Virtual Connext/Flex10 is a joke....." Once again, no meat to your argument. All we get is "it's complex" and "it's a joke", but absolutely ZERO details to back it up. That's like you're standing there and saying "the sky is green", then when someone points out the sky is actually blue due to light scattering you don't even try to discuss the science, just shriek "no, it's green, and light scattering is a joke!" Pathetic.
".....This is too much for remote office/edge and overkill for this space...." LOL, and now you're trying to redefine the market to suit your product, rather than designing a product to suit the market. The VRTX is just an attempt to take Dell's rather unappealing HPC cookie tray server gear (C5000, etc) and trying to flog it to a new market segment. Instead of trying to insist it is better than a much more capable blade chassis you should be comparing it to similar tech such as IBM's Flex gear or the hp SL range. Major fail! And only two Xeon offerings? Not even Opteron, let alone ARM - hp's Moonshot kit will eat it for breakfast! A bog-standard four-socket server has more capability, flexibility and expandability than VRTX.
".....Storage is up to 25 drives....." Dell maths not that good? Last time I checked, 48 was still a lot more than 25. LOL! Oh, and what about those missing Dell tape blades, I see you're positively "whisper quiet" on those!
".....this is high function shared storage..." And what exactly is "high function shared storage", besides Dell marketing buzzwords? Can it match the capabilities of hp's VSA software? Thought not. LOL again!
".....The Equallogic comparison was made regarding Dell's M1000e versus the C7000/C3000 ..... Dell customers thus have a choice - HP customers don't....." So you admit you were talking complete male bovine manure when discussing EqualLogic with regard to the VRTX, but then insist Dell customers have choice when they actually have NO choice with the VRTX! Well, I suppose they could buy the hp VSA software and run it in the VRTX; that might give them a choice. ROFLMAO!
".....there are 8 PCI slots on the back....." Now, this will be fun! How does VRTX share the PCIe slots? Are they even identical slots? Can more than one blade share for example an FC SAN adapter with another? No. The chassis can assign up to two PCIe slots per blade, no sharing allowed. They are not even all full-length slots, five of them being half-length and low-profile only, further limiting their usefulness. Please do pretend it is as flexible and redundant as the proper switch modules you can use with a real blade chassis like the C3000. Oh, and what about the extra Us you lose in the rack for all the switches you need seeing as VRTX doesn't have slots for SAN switch blades? Did you notice the C3000 doesn't need extra Us for SAN switch modules, they're already in the back of the chassis? Sorry, were you trying to pretend the VRTX was going to save you space?
And whilst we're talking about the back of the VRTX, shall we discuss your other hilarious statement that the VRTX is "whisper quiet"? Looking at the back of the VRTX chassis there are four tiny little PSU fans, probably 40mm at best. As anyone that has ever built a home PC knows, the volume of air a fan can move depends on its speed of rotation and its area - many little fans have to spin faster to move the same air as one larger fan, and in doing so those little fans make a LOT of noise! Yes, I'm sure the VRTX is "whisper quiet" when it's powered off, and maybe it just hums when idling, but when those blades are being hammered and those ickle fans are having to spin like Flynn, well, then I bet it sounds like a box of angry banshees!
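As a rough sketch of that area argument (using the poster's own simplified area-times-speed model; real fan affinity laws scale flow with diameter cubed, so if anything this understates the small fan's disadvantage, and the 40mm and 120mm sizes are just illustrative guesses):

```python
import math

def airflow_proxy(diameter_mm, rpm):
    """Crude proxy for airflow: swept area x rotational speed.
    (Real fan laws scale flow with diameter cubed; this mirrors
    the simpler area-based argument in the comment above.)"""
    area = math.pi * (diameter_mm / 2) ** 2
    return area * rpm

# RPM a 40mm fan needs to match a 120mm fan at 1000 RPM under this model:
rpm_needed = 1000 * (120 / 40) ** 2  # swept area scales with diameter squared
print(rpm_needed)  # 9000.0
```

Nine times the RPM to move the same air is exactly why banks of small PSU fans get loud under load, whatever chassis they sit in.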
".....Not on 110v power you don't....." Once again, we have more of your "the sky is green because I say so" baloney, and no proof or detail. I suppose no one at Dell told you hp can offer even the bigger C7000 chassis NEBS-compliant (http://h18000.www1.hp.com/products/quickspecs/13181_div/13181_div.HTML). You're getting quite yawntastic, TBH. Maybe you should go get the help of someone actually technical, rather than just relying on Dell brochures and marketing gumph to cover the very obvious holes in your knowledge.
I suppose it is unkind to expect you to be able to back up your claims that this is somehow superior to a proper blade chassis, but then that is just one small side of the problem you face. The real competitor to the VRTX is going to be 2-socket towers such as the IBM x3500 M4 or hp Proliant ML350p Gen8, or even Dell's own PowerEdge T620. The 2-socket space is the most popular segment in x86 servers. If a customer actually has a large enough branch office requirement for four 2-socket servers then they will probably be a big enough office to have space for a half-rack somewhere, which means you're competing against proper rack servers like the Proliant DL560 Gen8 (4-socket, up to 1.5TB RAM, and only 2U). And once you're up against the 4-socket rack boxes you lose out on flexibility too - the granularity of the server you have on the VRTX is 2-sockets, so 16 Xeon cores, but what if you do the sizing and you need eighteen or twenty or more cores? With the rack servers, using KVM, VMware or Hyper-V, it's not an issue, as you can make an image up to 4-sockets in size (eight if you go DL780). And I can use the virtualisation tools I know (VMware, KVM, Hyper-V) without having to fudge around with Dell's cruddy management software (and is the Dell management module in the VRTX redundant?). So the VRTX is not only not as good as a proper blade server, it's not even as good or as efficient as the bog-standard servers Dell is trying to aim it at.
And as another poster pointed out, you're also competing against inherently more flexible cloud services like AWS or Azure. With those you don't need a techie at the remote site to pull blades or replace PSUs because it's all done for you. You don't have to mess around with Dell's management software trying to fit odd-sized server image pegs into 2-socket holes, you simply buy the processing you require, and can flex it up and down without having to worry about redundancy or losing the whole VRTX in a power failure.
VRTX is an interesting idea technically, but it does not seem commercially a better idea than vanilla servers and nowhere near as good as a proper blade chassis. And with such a limited choice of blades - only two - it offers poor choice and no power-saving options like ARM or even Opteron, energy-saving being one of the prime drivers in distributed environments. Maybe the Dell trolls infesting this thread would like to do something useful and tell us when Dell plans on having an ARM or Opteron blade for the VRTX? Or will you be "whisper-quiet" on that as well?
Dell used to resell a baby supercomputer called the Cray CX1-iWS that packed four blade servers with GigE, GPU and disk together with Windows HPC Server 2008 or SUSE Linux with Beowulf, designed for heavy-duty engineering workstations. This box looks like an updated replacement, without the Cray fee for the sticker.
The large number of drive bays hints that there'll be an SAP HANA announcement down the road, but it's a great box for startups.
This product was conceptualized and architected from the ground up. No other product was copied or rebadged. If anything, we wanted to stay far away from a C3000-type complex blade solution. You have to get one of these in your hands to try. You will know what I am talking about. Even though blade servers were leveraged for compute, the product is far from being just a mini blade chassis. Its form factor and capabilities were planned for simplicity in millions of remote offices, branch offices and SMBs, with feedback from thousands of real customers around the world, and NOT planned simply as a blade chassis. It has full high availability on the PCIe switch, storage disks, fan system, management, power system, etc. The best part is the integrated true shared storage system, which is highly affordable for such customers. It will not solve world hunger for people looking for extreme capabilities, but it will be perfect for the masses who were looking for an integrated, simple, small, quiet, manageable, high-performance and extremely affordable solution with a lot of headroom for growth that runs on 110V (and worldwide power voltages).
Not sure I like the idea of a single PCIe switch linking it all together. Once I look at using multiple servers I want no SPOF, at least not an active electronic one as opposed to a passive midplane and chassis.
Actually, there are dual PCIe switch links, connected in a fashion that addresses your concern.
While I love the idea, two things put me off. The first being cost. We'd typically spend 1-1.5K on a server, or two at most, so this is just fanciful stuff to us.
If I did, though, my next challenge would be the number of points of failure that could bring the whole thing to its knees. If any of the major chassis components dies, the whole thing is a doorstop. At least with two separate servers, neither relies on a shared chassis, and if one goes pop, the other doesn't.
Sure a warranty is nice, but not when the boss is breathing down my neck as he can't send an email.
Of course. But I think this could be very good as a "backup" solution: you run it somewhere else in the building, remote from the main server room, and use it as another place to fail your VMs over to. A lot of power and storage in a quiet tower makes it fairly ideal for that.
If you're spending 1-1.5K on a server, you're likely not buying servers with a whole lot of redundancy (power supplies, etc.), so this would have at least as much redundancy as your existing environment in terms of connectivity, power, storage scalability, etc.
While I'm sure this excites the people who make their money running servers for people, I really can't see small businesses being excited.
Let's say you spend $5K on a server; you still need software. Then you need at least a part-time BOFH to keep it greased and ticking over. Then you still need offsite storage for backups etc., and someone to do them... TCO is going to be many tens of K per year.
Or, you could set something up on Google Apps or similar and everything is done for less than 1K per year.
Successful small businesses rarely do non-core-business stuff that they can outsource.
Pretty cool. Looks like Dell's put some real innovation into this product.
Biting the hand that feeds IT © 1998–2017