In part one of The Register's Build A Bonkers Test Lab feature, I showed you how to build a test lab on the cheap; great for a home or SMB setup, but what if we need to test 10GbE? Part two is a wander through my current test lab to see how I've managed to pull together enough testing capability to give enterprise-class …
Yes, it is a lot of fantasy. It is a Bonkers test lab, after all. But the stuff I detailed in Part 1 is realistic and achievable. The Kingston Hyper-X array should also be within reach for most, if the "high speed storage" part of the equation appeals to you.
The 10Gbit network with added WTF was built as much to see "can it be done" as anything. My hope is that having such a test lab lying around will allow me to do better reviews on more relevant equipment for The Register than I would otherwise be able to do.
Do we want to limit ourselves to reviews of the latest iPhone or consumer home NAS? Or do we want to occasionally tear apart some bit of midsize gear or even enterprise kit? If we do want to be able to throw that more powerful equipment on the bench and give it a run for its money, someone is going to have to build a bonkers test lab. So I did.
While out of reach of "most of our pockets", 60K is nothing to a business AFAIK. I've seen people sink more pocket money than that into a house buy/sell/rent project. If someone wants to build a server/system/service and it's not property but internet-based, well, hopefully this article lets them know it's possible.
Now... where to get funding for my Bonkers Server business for downloading cats eating burgers website... ;)
Technet not enough to cover this lot (instead of spending $$$$ on Windows licences)? Where does Technet run out of steam? To set up a test bed for AD/Exchange/System Centre 2012 you'd be spinning up a fair few VMs - so where's the problem? Especially if you use the free Hyper-V for the physical boxes.
If you need to license it properly, it's not really a test bed, is it? Better to build a smaller local test bed and run the rest on Azure (some free time for that is included in an MSDN licence!)
More money than sense this man!
It's a testbed that needs licensing the instant I have to maintain "test" (or as we often refer to them "sandbox") copies of running instances. For example, my largest client has 250 VMs in production, among them there are 23 different "classes" of VMs. Each of these classes needs to exist in my testlab environment so that I can do things like test patches, the latest version upgrades to software and more.
In fact, this testlab just received its last components in the mail last night and they have already been pressed into service. That said, I use a single datacenter license to achieve this, and the rest of my lab runs Linux, as this is now the bulk of what I have deployed, and thus the bulk of what I have to test.
As for running the rest on Azure: no. For one thing, the cost of storage is too high, and my test labs often require the ability to access a significant subset of the live data for testing. For another, the laws of my nation do not allow me to store personally identifiable information in countries without robust civil liberties and privacy protections. That means the US is out, and trans-Atlantic data flinging in order to store in the EU is expensive.
I'll build my own private "cloud" thanks, and run my testlab requirements – and those I need to test the builds my clients have – on it. It's far, far cheaper over the expected 6 year life of this equipment.
I was with you Trevor until you said "cloud"... oh wait, you used quotation marks too. All is forgiven. :)
SAN is dead
"up to 32 gigs of RAM; the maximum currently supported by VMware's free hypervisor." - Use KVM on RHEL or Ubuntu - it scales better. If you want to pay for support, use RHEV and buy commonly available, enterprise grade x86 hardware.
A RAID card is not necessary, software RAID is all you need.
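For anyone wanting to try the software-RAID route, a minimal mdadm sketch looks something like the below. The device names are placeholders, assuming four spare disks; it needs root and mdadm installed.

```shell
# Build a four-disk RAID 10 array out of spare disks
# (/dev/sd{b..e} are placeholders -- substitute your own)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage

# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial sync progress
cat /proc/mdstat
```

Whether that keeps up at the throughput Trevor is chasing is another matter, as his reply below notes.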
"Unfortunately moving virtual machines from node to node in this configuration is slow and frustrating." - Trevor, you showed the example of using 10GbE; you can set up Etherchannel and/or you can use Gluster to have additional resilience - that's the beauty of OSV. See the advantage of a Linux/DAS storage grid: the data is closest to where it is needed and you get resilience built in - not possible in a traditional SAN setup unless you spend a pile of cash.
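As a rough illustration of the Gluster suggestion, a two-node replicated volume takes only a handful of commands. The hostnames (node1/node2) and brick paths are made up, and this assumes glusterd is already running on both boxes.

```shell
# Join the second node to the trusted pool (run from node1)
gluster peer probe node2

# Create a volume replicated across both nodes -- hostnames and
# brick paths here are hypothetical
gluster volume create vmstore replica 2 \
        node1:/bricks/vmstore node2:/bricks/vmstore
gluster volume start vmstore

# Mount it where the hypervisor expects its VM images
mount -t glusterfs node1:/vmstore /var/lib/libvirt/images
```

With the images on a replicated volume, either node can lose a disk (or a whole brick) without the VMs losing their storage.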
"Being a recycled server" - you hit it on the head there; using off-the-shelf hardware you can easily take one or more storage nodes out of the grid, replace motherboard/PCI cards/disks and put it back in again. This reduces TCO and allows you to plan your storage requirements at a more granular level.
Re: SAN is dead
Gluster is on my list for later in the year. And RAID cards have some distinct advantages over software RAID. Specifically when you start pushing 1000 megaBYTES per second or higher through them. Software RAID is fine if you RAIN. It isn't so fine if you only have the equipment to build a single, reliable and eye-bleedingly fast storage node.
Trevor, did you look at using Infiniband instead of 10GbE? Obviously it wouldn't be suited to a production environment where you already have the switching infrastructure in place, but I'd be interested to see if you could shave £10k off the price for ostensibly the same set of capabilities...
I did; but I couldn't get anyone to send me Infiniband gear, nor are there local suppliers that offer it cheap. So if I did invest, it would a) be stupid expensive and b) I would be in deep ca-ca-poo-poo if anything went splor and I needed a spare ASAP.
I call this my "test lab," but I should point out that my "live" corporate VMs occupy 1/5th of this particular setup at any given time. (Actually, they fit just fine in a single Eris 1 node, but that's a whole other story...)
I would love to test, review and otherwise learn about infiniband. With luck, some will show up on my doorstep one day.
Would be nice if El Reg did a similar article for the cheap home made infiniband networking that I have seen, there are some good on-line howtos already, but would be nice to see a reviewer do one.
There have been a lot of questions about this. I have put it on my list, I promise you. :)
Bypassing the usual flamewars...
I'll just ask - what on earth is that apparently several thousand dollars' worth of kit stacked on top of? Looks like one little bump could end your test lab in one go.
Re: Bypassing the usual flamewars...
Are you asking about the UPS from the dark ages, or the terrible IBM rack? Or the built-like-a-tank-will-never-ever-ever-ever-ever-die orange chair of doom?
Don't question the chair. The chair is indestructible. (And it has an equipment seatbelt.)
For the record though, we spent all night racking stuff last night...
There is no point in ever getting i5s for this sort of stuff when you can get Xeon E3s so cheap and use unregistered ECC with them.
Which OS to use?
Surely if the end game is to build a test lab, then you will need to install an OS appropriate to what you are testing. Depending on what you are testing, this could be all Linux (various flavours), all Windows, or in most cases a mixture. In my experience (20+ years), a corporate network is Windows-based, webfarms are Linux, and databases are MSSQL if you are using relatively small amounts of data and Oracle if you are using more.
Linux is also the weapon of choice when it comes to firewalls and proxies.
Like somebody has already mentioned - Windows is used in corporate environments because it is easy to install, use and maintain. There are a lot of people out there who know it, and therefore it is relatively cheap for companies to hire people to maintain it.
64 cores 512GB of Main Memory and KVM
Imagine 64 Opteron cores and 512GB of memory creating almost any VM guest imaginable for the cost of the hardware only. This is the modern-day reality of a Dell R815 and CentOS 6.3. I will not argue the religion of which operating system is better for this or that. I will share the most cost-effective way we have found to implement virtual guests, and then we can talk about guest management of hardened externally facing systems (ie. administration from the virtual machine console and no remote access to the guest over the network). Now we are starting to talk enterprise class!
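On a CentOS 6.3 box like that R815, spinning up a KVM guest is only a few commands with virt-install. The guest name, RAM/vCPU counts, disk size and ISO path below are all placeholder examples, not a recommendation.

```shell
# Install the KVM/libvirt stack on CentOS 6 (on 6.x the virt-install
# tool ships in the python-virtinst package)
yum install -y qemu-kvm libvirt python-virtinst
service libvirtd start

# Carve out a guest -- name, RAM/vCPU counts, disk size and ISO
# path are placeholders; scale to taste on a 64-core/512GB host
virt-install \
    --name testguest01 \
    --ram 8192 --vcpus 4 \
    --disk path=/var/lib/libvirt/images/testguest01.img,size=40 \
    --cdrom /isos/CentOS-6.3-x86_64-bin-DVD1.iso \
    --graphics vnc --os-variant rhel6
```

For the console-only administration model described above, you would then manage the guest over VNC via libvirt rather than giving it any remote network access of its own.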
Another option for cheap 10GbE would be the Brocade 6450-24, which is a 24-port 10/100/1000 switch with an optional 4x 10G SFP (I think there is a software licence requirement here). The base switch seems to run under $2k at the low end and has basic layer 3 abilities; the licence add-on seems to be in the ~$700 range (2 ports). I'm not sure if the switch includes 2 ports of 10Gb licensing or none.
I haven't personally used them though a co-worker has several deployed for our internal corp IT network, he has no complaints.
For my needs Juniper is far too complicated to work with, same for Cisco (Brocade is similar to Cisco). But I suppose if you're just configuring it once and not touching it after that, then it's not too terrible.
Eadon I understood what you were saying
Eadon I understood what you were saying and agree
Yes it is a very nice test lab if you happen to have surplus kit lying around / you have some arrangement with the suppliers or you have some company profits that need to disappear.
If I was getting it all for free I wouldn't kick it out of bed, but then again I wouldn't spend money on this setup when there are better solutions that are cheaper.
The headline is a bit misleading as the hardware alone is much cheaper.
I'd love to see how several different on-premise cloud solutions perform on this kind of setup, notably:
- SmartOS with the "cloud" GUI from this guy: http://blog.smartcore.net.au
Certainly not a #FAIL article. True, I'm not the least interested in how well HyperV runs on that - but that does not diminish the value of the article or the information contained within it.
Re: Interesting article
It's amazing how often commenters get bent out of shape by a title, instead of the comment. (Or by two paragraphs of an article, ignoring the entire rest of it.) *shrug*
That said... I now have a sexy testbed. I have requests from folks to test OpenStack and CloudStack. I already have plans to test Hyper-V and VMware. I will add your recommendations of Proxmox and SmartOS to my list. What's the point of putting such a lab together if I can't test the things on it that matter to our readers?
The Fat Twin arrived. I was expecting it to come with a variety of configurations; apparently that didn't quite happen. Instead, I have 4 identical nodes: 2x Xeon E5 2680 w/ 128GB RAM and 2x 480GB SSD. Should be good enough to give any of the virty stacks a run for their money, no?
When the petty cash refills, I'll fill the other 4 nodes.
About the clicky admin articles
It's true. Every build/lab/howto I've read on the Reg is useless to me. I don't want to belittle those writing, but honestly, it does sound like a bit of filler. I mean, I could write you an article about how to set up the jack under my car and it might contain more useful tips than this. The best part about it was pointing out the switch. I'm looking for good 10GBASE-T hardware. Perhaps do a review of that. I'd read that.
Re: About the clicky admin articles
As soon as I get good 10GBASE-T hardware, I'll review it. I should point out that a review of the Supermicro and Dell switches is coming up here soon (I am just putting it into the CMS now) and that the Dell switch in question does have a 10GBASE-T variant. (Albeit slightly more expensive.)
That said, if and when you have requests for things to review/do a how-to on etc...ask! I am (naturally) limited by what I can get my hands on...but I've been working hard to build a lab that will allow me the flexibility to do reviews on damned near anything. Maybe I can meet the request, maybe I can't...but I promise you, if readers ask for it, I'll do my level best to get hold of it and put it to the test.
You can also help by providing suggestions as to what tests you would like to see run. Contrary to popular opinion – especially those of the berate, denigrate and wail like spoilt chillum crowd – I do this "reviewing products" thing mostly to try to help. Not every article will be thought provoking or insightful to the totality of the readership, but I do hope that each one provides some benefit to at least some of them.
In the meantime, I'll poke some 10GBASE-T vendors and see if any are willing to have their switchen wrung.
...and now for something completely different.
I dunno. Blowing a lot of money on something outlandish just doesn't seem that interesting. On the other hand, it is pretty trivial to max out the more common tech. All it takes is a single spindle, really. What would be more interesting is seeing how easy it would be to "take it up a notch". How accessible is another level of performance above and beyond what's cheap and readily available?
There seems to be a big gap here and the interesting story I think is how you could make that gap smaller. My own SSD-free setup could probably benefit greatly from such an incremental improvement.
"cardboard and duct tape"
I have two 2.5-inch laptop drives mounted with Lego in my rig. One is a spare I had, so it's used for backup; one is "about to die", so I'm testing it to see when it dies and using it as a kind of scratch disk/test disk for now. I don't mind if I trash the broken disk, so it's a good one to experiment on. :D
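If anyone else fancies the watch-it-die approach, smartmontools is handy for tracking exactly how dead a drive is getting. The device name below is a placeholder for whatever the dying disk enumerates as.

```shell
# Overall SMART health verdict (/dev/sdb is a placeholder)
smartctl -H /dev/sdb

# The attributes that usually predict imminent death:
# reallocated, pending and uncorrectable sector counts
smartctl -A /dev/sdb | grep -Ei 'reallocat|pending|uncorrect'

# Kick off a long self-test, then check the log later
smartctl -t long /dev/sdb
smartctl -l selftest /dev/sdb
```

Watching the reallocated-sector count climb run over run gives a fair warning of when "about to die" becomes "dead".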
Re: First-degree burns treatment
I think it might be time for Eadon to put his money where his mouth is and write "that" article that he thinks is so missing from El Reg.
How many commentards didn't read the words 'Test Lab'?
I'm not a hardware techie, but it would seem obvious to me that, unless all of the IT shops you deal with are 100% non-Windows, your TEST LAB would need Windows in it at some point.
After all, TESTING would seem to be the point of a TEST LAB, whether or not you agree with the decision to use Windows.
Keep up the articles like this - while my professional sphere will never get this techy, it's an interesting read all the same.
Ah, the test lab...
For the large part, our test lab is a strange mix of recycled kit and the odd new bit here and there. We've an old PowerEdge R805 or two that are out of warranty doing the grunt work of running the virtual servers in the lab. For storage, they back onto a NetApp FAS2050 that used to be in our DR bunker. For one of the line-of-business apps, we have a pair of wheezing old IBM POWER5-based AIX boxen, and some associated support hardware. One of our plans is to eventually bring the various business apps up in there so that we can have a tiny, sandboxed version of the company's network that vendors can play around with and break instead of the production environment. :)
Anon to protect my paycheck.