Build a BONKERS test lab: Everything you need before you deploy

Every systems administrator needs a test lab, and over the course of the next month I am going to share with you the details of my latest. In part one of The Register's Build a Bonkers Test Lab, we look at getting the best bang for your buck and doing it all on the cheap. Here is a look at my “Eris 3” test lab nodes; I have …

COMMENTS

This topic is closed for new posts.
  1. P. Lee

    No love for AMD?

    I would have thought an FX 8350 8-core would have given you i7 performance for i5 prices and that more real cores would be better for VMs than faux ones.

    I'm happy to be corrected - I still run a core2duo, so I'm just guessing :D

    1. Benjamin 4

      Re: No love for AMD?

      I'm just guessing here, but he stated that he specifically wanted vPro and a set of remote management capabilities. I don't know about remote management, but vPro is certainly restricted to Intel CPUs (unless my brain is playing tricks on me because I haven't done any major virtualisation work for a couple of years).

    2. Trevor_Pott Gold badge
      Unhappy

      Re: No love for AMD?

      @P.Lee I haven't run across an AMD chip that can "take" any modern Intel in the same price bracket since the Shanghai Opterons. (Note: I still have rather a lot of Shanghai Opterons!)

      I would be happy to be wrong here - I was quite an AMD fan back in the day - but AMD seem to be underperforming and overpriced, with power/thermal requirements that are absolutely embarrassing. Not something I want in my lab; especially if I'm paying for the 'leccy.

      That said, I'll test anything that crosses my lab. I just haven't seen a good AMD widget cross it in a good long while. :(

  2. msage
    Thumb Up

    NAS

    I read your post with interest. For an admittedly smaller home lab, I have found the HP MicroServers to be ace and very, very cheap; although they only support up to 8GB of RAM, they run ESXi / Hyper-V very well. As for storage, did you know that the Netgear ReadyNAS series are on the HCL and support iSCSI, NFS and CIFS? I currently have an ISO share shared out on NFS and CIFS so both Windows and ESX can write to it. The ReadyNAS series are also a *lot* cheaper than the units you have discussed - just food for thought! :)
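
    If anyone wants to copy that setup: mounting the NFS side of the share as an ESXi datastore is a one-liner from the ESXi shell. A minimal sketch, assuming ESXi 5.x and a ReadyNAS at 192.168.1.50 exporting /c/isos (both names are just placeholders):

      # add the NAS export as an NFS datastore called iso_store
      esxcli storage nfs add --host=192.168.1.50 --share=/c/isos --volume-name=iso_store
      # check it mounted
      esxcli storage nfs list

    Windows boxes just map the CIFS side of the same share with net use.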

    As for network cards, these tend to push the cost of a home lab up; I recently managed to get hold of some quad port HP cards from eBay for about £50. I think when building a home/test lab, there is nothing wrong with spending a bit of money on quality secondhand kit.

    1. phil mcracken
      Thumb Up

      Re: NAS

      I've found from personal experience that the later version of the microserver (N40L) does support 16GB of RAM. I used 2 sticks of Kingston Blu DDR3 1600 that I borrowed from my main machine and it seemed to boot up fine. Not so sure about whether the earlier version (N36L) of this server will support it though.

    2. Cosby

      Re: NAS

      You can run the N40L with 16GB RAM. See here: http://techhead.co/hp-proliant-microserverrunning-16gb-memory-with-vmware-vsphere/

    3. Trevor_Pott Gold badge

      Re: NAS

      @msage ReadyNAS are SLOW. $ for $, they are among the slowest and lowest-IOPS NASes I have ever worked with. They are rock solid and reliable - I have several, and trust them implicitly - but they are dead, dirt slow.

      Great for hosting ISOs. Worthless for hosting VMs proper on.

      1. msage

        Re: NAS

        Guess it depends on how many VMs you are running! :) I have run 10-20 webservers on one in a DMZ and for the 4 or 5 machines I need in a test lab they work fine.... Guess the spectrum is as broad as it is wide! :)

        1. Trevor_Pott Gold badge

          Re: NAS

          10-20 webservers would run fine...even off my crappy old Synology. I could probably run 2 mayyyyyybe 4 VDI instances without users screaming too hard. But I can't imagine running a financials DB off of that, or doing Windows updates on 10-20 servers simultaneously.

          ReadyNASes - in my experience so far - don't provide more than 40-50MB/sec in RAID 5. I guess most of my "small business testlab" needs really do generally require a fully saturated gig-E connection's worth of storage.

          That said, they are good NASes, and reasonably cheap. If you don't need 100MB/sec, they are grand. :)

          1. Jerren
            Boffin

            Re: NAS

            The other consideration is redundancy: the DROBO NAS solutions have the ability to lose 2 drives without losing the entire array in RAID 5 configurations, with less overhead. Having recently had 2 drives fail within hours of each other on an old TeraStation, I can tell you honestly yes it CAN and sadly does happen!

            The last thing I would want in my test lab would be to lose ALL of my VMs at once; even with backups you're down for days restoring multi-terabyte drive arrays. Out of all of them, the DROBO units seem to offer the most resiliency for the price. Unless of course the entire DROBO box itself decides to take a dirt nap, then you're in trouble!

    4. xj25vm

      Re: NAS

      I like the HP MicroServers in general - but one thing puzzles me: the power supply. Why install a 150W power supply in a server which can take up to 4 x 3.5" HDDs? Sure, to begin with, that is just about enough, but a few years down the line, with capacitor ageing, I'm not sure I would feel reassured if I were the owner of that server.

  3. Kurgan
    Unhappy

    Asus mainboards?

    I have had a lot of bad experiences with Asus mainboards (and with just about every consumer mainboard I have happened to use under heavy load). These mainboards are usually slow. Their buses are full of bottlenecks, so you don't get to use all of the speed of the CPU or of the disks or of the RAM you are installing. I know that this is not a proper technical description of the issues I had, but I am no longer "up to date" with modern hardware design. What I know is that I have seen more than one Asus-based "very fast workstation" perform very poorly at various I/O intensive tasks. I have seen the latest and greatest hardware (Asus mainboard) run terribly slowly compared to hardware that was 5 years old (Intel mainboard) at the same task (mechanical 3D CAD that needed to load hundreds of little files to create the entire project in RAM). It was not a video card issue, but definitely an I/O issue.

    How does your setup feel? Does it feel fast enough, considering the CPU and RAM you are using? Have you tried using different mainboards?

    1. Trevor_Pott Gold badge

      Re: Asus mainboards?

      As stated in the article, I prefer Supermicro motherboards, but have used ASUS extensively over the years. I can't stand Intel-branded motherboards, but have had few issues with Gigabyte.

      I have seen no issues (so far) with these Asus boards...at least not at the levels of usage detailed in this article. I have not been able to push the things far enough to get a full 10Gbit symmetrical (using iSCSI) out of them, and am as yet unsure if this is a limitation of the hardware, or a weird limitation of VMware's ESXi. (I can saturate 2 10Gbit links using SMB 3.0 just fine.)

      I have a lot more research to do into that particular issue, but so far I am leaning towards "these motherboards can do anything it says on the tin with no issues whatsoever." I think we're actually so far along in component design that you have to get up into extended ATX territory - and trying to max out 4+ PCI-E slots - before the board manufacturer actually matters any more.

      TL;DR: these boards rock, and I haven't been able to break them, despite trying really, really hard.

      I storage vMotioned 50 running VMs from one unit to the other while running in-OS backups on half of them and defrags on the other half, where all the storage was iSCSI.

      I think they're good.

      1. Lusty

        Re: Asus mainboards?

        You'll never saturate 10Gb networking with iSCSI and the disks you mention without specially crafting a test to try it. You're lucky to saturate it using CIFS, but that will all be caching rather than disk performance. With the disks from the article you'd be lucky to saturate 100Mbps with real traffic, since they will only manage 100-125 IOPS each.

        1. Trevor_Pott Gold badge

          Re: Asus mainboards?

          @Lusty I have other disks. And other arrays. See here: http://www.theregister.co.uk/2012/12/24/hyperx_3k_240g_review/

          As has been said before: there is a part 2 to this. And oh, yes...you can saturate 10GbE with iSCSI. Oh, yes you can. *muahahahahaha*

          1. Lusty

            Re: Asus mainboards?

            Of course you CAN saturate 10GbE with iSCSI, but to do that you need disks which are capable of enough IOPS to saturate the link for a given IO size, and spinning disks will need several racks' worth to do so unless you craft the test purely to do so. If you want help fixing IO results speak to Violin, they have some smashing tests to prove their IO figures :)
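
            Back-of-the-envelope sums (assuming 8KB random IOs and ~125 IOPS per 7,200rpm spindle - both round numbers, not measurements):

              # IOPS needed to fill 10GbE (~1,250MB/s) at an 8KB IO size
              echo "1250*1024/8" | bc        # = 160000 IOPS
              # spindles needed at ~125 IOPS each
              echo "1250*1024/8/125" | bc    # = 1280 disks

            Large sequential IO needs a tiny fraction of that, which is why a crafted sequential test (or flash) is the only way you'll see the link full.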

            1. Trevor_Pott Gold badge

              Re: Asus mainboards?

              Reiterating: there is a part 2 to this. That part talks about enterprise things. Like 10GbE and the disks to drive it.

          2. Frank Rysanek
            Gimp

            Re: Asus mainboards?

            I recall trying to max out 10Gb Eth several years ago. I had two Myricom NICs in two machines (MYRI-10G gen."A" at the time), back to back on the 10Gb link. For a storage back-end, I was using an Areca RAID, current at that time (IOP341 CPU) and a handful of SATA drives. I didn't bother to try random load straight from the drives - they certainly didn't have the IOps.... They had enough oomph for sequential load. I used the STGT iSCSI target under Linux. Note that STGT lives in the kernel (no copy-to-user). The testbed machines had some Merom-core CPUs in an i945 chipset. STGT can be configured to use buffering (IO cache) in the Linux it lives within, or to avoid it (direct mode). I had jumbo frames configured (9k). On the initiator side, I just did a 'cp /dev/sda /dev/null' which unfortunately runs in user space...

            For sequential load, I was able to squeeze about 500 MBps from the setup, but only in direct mode. Sequential load in buffered mode yielded about 300 MBps. That is simplex. The Areca alone gave about 800 MBps on the target machine.

            Random IO faces several bottlenecks: disk IOps, IOps throughput of the target machine's VM/buffering, IOps throughput of the 10Gb cards chosen, vs. transaction size and buffer depth...
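
            For anyone repeating that kind of test today: dd with direct IO on the initiator side avoids both the user-space copy and the page cache skewing the numbers. A rough sketch - the device name is just an example:

              # ~10GB sequential read of the iSCSI LUN, bypassing the initiator's page cache
              dd if=/dev/sdb of=/dev/null bs=1M count=10000 iflag=direct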

  4. BlueGreen

    You may be doing things a bit too much on the cheap

    I guess your experience is the bottom line but the boards you mention don't take ECC mem and the rest of your components look cheap - in the sense of being home-use kit.

    From my experience you get what you pay for, which is why my home machine is a small server (with a UPS; perhaps a good idea for you if you're doing lengthy work?).

    Though you say it's a testbed, not production, I've always found that a little extra cash for quality pays in spades.

    1. Trevor_Pott Gold badge

      Re: You may be doing things a bit too much on the cheap

      @BlueGreen

      As mentioned in the article, part 2 will be more enterprise focused. That, and yes, it is testbed, not production. Completely different worlds, sir.

      1. Lusty

        Re: You may be doing things a bit too much on the cheap

        Just remember when you do that: a test in a real enterprise must use hardware identical to production, otherwise it's not actually testing the solution!

        1. Anonymous Coward
          Anonymous Coward

          Re: You may be doing things a bit too much on the cheap

          "identical hardware" Absolutely.

  5. Anonymous Coward
    Anonymous Coward

    Linux Storage Server

    I built a small VMware test lab from some recycled HP DL380 G4s for work a while ago. It's not very powerful, but for small scale testing and "playing" it does the job.

    To build a budget iSCSI storage server I used another DL380 G4, complete with 6 x 146GB SCSI drives, and stuck Ubuntu Server 10.04 LTS on it. Using iscsitarget from the repository it's quite simple to set up a single target pointing to the disk presented by the RAID controller. It's just a test system; I don't need auth, or any advanced iSCSI configuration. Performance is good enough for several VMs.
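
    For anyone wanting to copy it, the whole thing really is only a few lines. A rough sketch with placeholder names, assuming the RAID controller presents the array as /dev/sdb:

      # Ubuntu: install the IET target and enable it in /etc/default/iscsitarget
      sudo apt-get install iscsitarget
      sudo sed -i 's/ISCSITARGET_ENABLE=false/ISCSITARGET_ENABLE=true/' /etc/default/iscsitarget

      # add to ietd.conf (under /etc or /etc/iet depending on the package version):
      #   Target iqn.2013-01.lab.local:storage.lun0
      #       Lun 0 Path=/dev/sdb,Type=blockio

      sudo service iscsitarget restart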

    Not so long ago I came across ZFS on Linux and decided to have a go. It's still very experimental, but I was only going to use it as a block device and not use any of the more advanced features ZFS has. I configured an MSA500 to present each drive as an individual drive and built a RAID-Z pool. This can then be presented as a ZVOL which you can point iscsitarget to. Performance is very decent and often hits the gigabit NIC limit. I've not had any problems or data loss yet...
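
    Roughly what that looks like, with placeholder pool and device names (and the usual caveat that ZFS on Linux is still young):

      # build a RAID-Z pool from the individually-presented drives
      zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
      # carve out a block device (ZVOL) to hand to the iSCSI target
      zfs create -V 500G tank/vmstore
      # then point iscsitarget at it in ietd.conf, e.g.:
      #   Target iqn.2013-01.lab.local:tank.vmstore
      #       Lun 0 Path=/dev/zvol/tank/vmstore,Type=blockio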

  6. Just a geek
    Happy

    Talking of the HP Microservers this is my set up:

    5 N40Ls - two with 8GB running FreeNAS for replicated storage, three with 16GB (yes, it's possible: http://n40l.wikia.com/wiki/Memory) running VMware ESX. All have three NICs and they plug into a managed gig switch.

    So costs?

    5 N40Ls with the cashback = £500. Let's say £600 including delivery.

    RAM - 4 x 4GB = £105.58

    RAM - 6 x 8GB = £312

    FreeNAS hard drives = 10 x 2TB = £1600 (7.1TB usable per NAS box if you use RAID 5).

    10 Intel CT NICs = £423.60

    Netgear Gigabit managed switch GS748T = £297.58

    Total = £3338.76, not including money back from selling the old RAM on eBay.

    The article says the total was $7965, which works out to £4901.77, and I'd say the N40L route is better because of the options for different configurations, scenarios, etc.

    1. Trevor_Pott Gold badge

      Big difference seems to be the ability to go to 32GB per node using my Eris class systems versus only 16GB per node with the microservers. Will be an issue for some, not for others! Certainly the microservers are appealing from a cost perspective...

      1. Anonymous Coward
        Anonymous Coward

        I happily own a MicroServer, but you really can't compare the CPU performance either. The MicroServer CPU is an almost 3 year old dual core 1.5GHz "bit faster than an Atom" vs Intel's latest true quad core with 6MB cache running at up to 3.6GHz.

        Yes they are a bit cheaper, but combined with the 32GB of RAM the machines in the article can take, I know which I'd rather have running in my test lab.

      2. Alan Edwards
        Happy

        > Certainly the microservers are appealing from a cost perspective...

        They are, but remember the N40L is the quickest, and that is only a dual core 1.5GHz Turion.

        I've got two MicroServers, one running FreeNAS, one running ESXi. I'm finding the ESXi machine a bit slow sometimes; Plex Server runs like a three-legged donkey and can't keep up with transcoding video, so I'm looking at building an i5-based machine to replace it.

        The MicroServer is brilliant if you don't need a lot of CPU power.

  7. Joe Montana
    Go

    Storage

    Something like FreeNAS or OpenFiler is a good choice for storage, combined with a regular motherboard and a cheap hardware RAID controller (so you have write cache, which makes a HUGE difference for VM images; an HP P400 controller with 512MB BBWC can be had for 60 quid on eBay these days)...

    It performs well and is cheap, while providing the convenience of an appliance. You could even repurpose your old compute nodes, or use the same motherboards and just go for less RAM or CPU, since neither is terribly important for such purposes.

    Also the HP 1810-24 switches are a good choice, reliable, gigabit, managed and fanless.

  8. stu 4
    Gimp

    mac minis

    I'd have thought Mac Minis and a decent switch would be pretty efficient for this sort of thing? Massive savings in power and space too?

  9. This post has been deleted by its author

  10. Andus McCoatover
    Joke

    LEGO?

    Furtively scanning the photos - not a cube of LEGO in sight!! How the hell do you expect to build a decent 'bonkers' machine without LEGO??? Bloke from Soton Uni. did it, claims it was child's play (His 6-year-old son built most of it, his dad - Prof. Simon Cox just had to rattle a few keys on a keyboard to make it 'ackle.)

  11. richardstevenhack

    He complains that Linux iSCSI servers are hard to set up and maintain. I set up openFiler on an old Dell box for a client serving up storage to four Apple iMacs doing video editing from two 4TB external hard drives. I set it up and it ran for nearly two years. The only time it had to be restarted was when the power crashed. Then the old Dell box power supply died and I set up the same system over a few hours on an identical box. That one ran until the company went out of business. No updates, no maintenance, nothing. Just cleanly served storage with zero downtime.

    The only difficulty is that openFiler documentation sucks and figuring out the order in the GUI to set up the iSCSI storage was a pain. But you only have to do it once, then write it down. A third party iSCSI client on the Macs was easier to set up.

    I had tried to use Windows iSCSI to do this. But it never worked properly. The connections kept dropping - Microsoft has YET to get ANY kind of networking to work properly.

  12. Anonymous Coward
    Anonymous Coward

    Seagate??

    "The Seagate 3TB 7200.14 is a truly exceptional consumer disk"

    If you value your data, you won't touch Seagate consumer drives, regardless of what the R/W performance is like. Seagate have intentionally nobbled their consumer drives by disabling Error Recovery Control, making them essentially useless for a RAID array. As soon as you get one sector read error, the drive will sit there forever attempting to re-read the sector, until your RAID controller or software decides the entire drive has gone bad, kicks it out of the array and goes straight into degraded mode.

    But with ERC, the drive would return a failure code, the RAID controller would fetch the data from other disk(s), write it back to the bad drive, and everything continues just fine.
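
    For what it's worth, you can check (and, on drives that allow it, set) the ERC timeouts with smartctl. A quick sketch - the drive name is just an example, and values are in tenths of a second:

      # report the current SCT error recovery control timeouts
      smartctl -l scterc /dev/sda
      # try to set read/write recovery timeouts to 7 seconds; nobbled consumer
      # drives will simply refuse the command
      smartctl -l scterc,70,70 /dev/sda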

    The last Hitachi consumer drives I had still had ERC; otherwise, look at the Western Digital "Red" drives, which are aimed at SOHO NAS and *do* support ERC, whilst being hardly any more expensive than consumer drives.

    http://en.wikipedia.org/wiki/Error_recovery_control

    1. Russ Oliver

      Re: Seagate??

      Not sure if it's massively relevant, but the Reds only have a 1 per 10^14 error rate.

      Also interesting that the "better" drives are pretty much the same price as normal ones!

      New Samsungs are probably best avoided as well, due to being rebranded Seagates.

  13. Nigel 9

    Just a change from all the naysayers

    Thanks for this Trevor - always interesting to see other people's take on a test lab (especially when it's not just pulled from recycled kit, as in 99.99% of cases ;-) )

    One point on FreeNAS: I gave it a look and *really* wanted it to work as a VM replacement for 2 NetApp StorVault units they decided weren't good enough to support a Win2008 domain. However, despite my best efforts (and a few others') and a trawl of the forums, the AD integration just refused to work on 8.3. :(

    Eagerly looking forward to part 2.

  14. Rim_Block
    Happy

    iSCSI not so hard on Linux.

    iSCSI on Linux: install CentOS 6.2 (minimal), install the iSCSI target package via yum, edit the config file (which has many examples listed within), and start the daemon. Job's done, at least for a simple implementation.
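
    To spell that out - a minimal sketch, with a placeholder IQN and backing device, assuming the stock scsi-target-utils (tgtd) package:

      yum install scsi-target-utils

      # /etc/tgt/targets.conf - the shipped file is full of commented-out examples
      #   <target iqn.2013-01.lab.local:vmstore>
      #       backing-store /dev/sdb
      #   </target>

      service tgtd start
      chkconfig tgtd on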

    I usually get around 90GB/s transfer depending on file sizes. The drives are 2TB Seagate Barracudas in two separate RAID 5 arrays on an HP P812 SAS controller. Networking is handled via an Intel quad port ET via LACP to an HP 1810-24G, using a dedicated VLAN for storage connections. Processor is a C2D.

    For the lab servers I would have gone with one or more of the following;

    Supermicro X9SCM-iiF + E3-1220L v2 (4 cores) / E3-1230 v2 (4 core + HT), 32GB ECC - Server MATX system

    Intel S1200KP + E3-1225 v2 (4 cores) + 16GB ECC / non-ECC ram - Workstation mITX system

    Intel DQ67EP + i5-2400 + 16GB non-ECC ram but with VT-d - Desktop mITX system

    Supermicro X9SCi-LN4F, G620 / E3-1220L v2 + 4GB ECC ram + SAS controller (IBM M1015) - Entry Storage server.

    With all the talk about the HP MicroServer, it is also probably worth noting that the HP ML110 G7 is also a fairly good entry level unit and has also had a number of cashback offers in the UK. It can run 32GB of 'generic' RAM (unofficially) and has compatibility with lots of second-user parts available via various sites at low(ish) cost.

    TLDR: iSCSI on Linux is easy; other motherboards and CPUs may be better for not much more money :-).

    1. Trevor_Pott Gold badge

      Re: iSCSI not so hard on Linux.

      @Rim_Block Interesting thoughts. I note you suggest Intel boards. I've had a miserable history with Intel boards; do you have any direct experience with those models? I suspect there are plenty of good boards that would make a reasonable underpinning for a testlab; gods know I can't have tried them all!

      Re: iSCSI on Linux..."easy" is relative. I don't have a lot of trouble with it...but I work with Linux every day. That…and I wrote all the commands I needed down in a text file. :)

      I could point you at several Windows sysadmins that do run into trouble with it. *shrug* There is no good GUI; it holds a lot of folks back. Enough that I would worry about junior admins raised on nothing but Microsoft being able to reliably use the thing.

      If, however, you know your Linux…go hard! The iSCSI targets for Linux are mature and stable. Maybe at some point I should do a "how to" for iSCSI on CentOS 6.2. Hmm...
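
      In the meantime, the initiator side from another Linux box boils down to a couple of iscsiadm calls. A rough sketch - the target IP is a placeholder:

        yum install iscsi-initiator-utils
        # discover what the target advertises, then log in to it
        iscsiadm -m discovery -t sendtargets -p 10.0.0.10
        iscsiadm -m node -p 10.0.0.10 --login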

      1. Anonymous Coward
        Anonymous Coward

        Re: iSCSI not so hard on Linux.

        "Enough that I would worry about junior admins raised on nothing but Microsoft being able to reliably use the thing."

        Tell them to RTFM/GetAnEducation or fire them. (snicker)

        iSCSI FTW!

      2. SImon Hobson Bronze badge
        FAIL

        Re: iSCSI not so hard on Linux.

        >> I could point you at several Windows sysadmins that do run into trouble with it. *shrug* There is no good GUI; it holds a lot of folks back. Enough that I would worry about junior admins raised on nothing but Microsoft being able to reliably use the thing.

        Hmm, if they need a GUI to be able to use it, then they're not what I'd call a sysadmin. I guess in this context a "Windows sysadmin" is something different to a "real" sysadmin who does Windows.

        I don't do Windows stuff myself, but I observe enough to know that you don't need to go too far before you need a decent CLI. I've also observed enough "admins" whose approach is like the "infinite number of monkeys" - try all the tick boxes in the GUI and see what happens!

        1. Trevor_Pott Gold badge

          Re: iSCSI not so hard on Linux.

          @Simon Hobson as is well known, I disagree with the CLI uber alles crowd. I believe a GUI is great for administering. A CLI is great for automating. I administer a test lab, where I change things "to see what happens." I automate production, where it should do Only Pre-Tested Things.

    2. Anonymous Coward
      Anonymous Coward

      90GB/sec?!

      Do you mean 90MB/sec?
