VMware builds product executables on 50 Mac Minis

VMware runs a cluster of 50 Mac Minis and puts it to work preparing the executables and binaries that customers receive when they acquire the company's products. We're standing on the shoulders of William Lam's Virtually Ghetto for this story, after he delved into production uses of Mac hardware in a VMware environment. He …

  1. Trevor_Pott Gold badge

    “In what other situation can you have an entire spare server on hand for $1200?”

    I can think of one.

    1. Anonymous Coward
      Anonymous Coward

      And more amusingly, Apple's Mac production line is controlled by PCs running Windows!

  2. Nate Amsden

    sounds absolutely stupid

    what is the point? Because they think it's cute? I know vmware has hundreds of racks of servers in their labs, it boggles my mind they would use macs for anything other than vmware for mac products.

    1. Trevor_Pott Gold badge

      Re: sounds absolutely stupid

      I suspect that's exactly the point. They have to dev and test for Macs. You can buy VMware's ESXi and install it on any x86 compatible system. If you do so on a Mac, you can buy additional licenses of OSX and run them in virtualised environments. (Note: Apple does not license you to do this on non-Mac systems.)

This is a tested - and IIRC, supported - environment. It stands to reason that somewhere in VMware there exists a production cluster running various versions of ESXi and Fusion. They probably also have slowly enlarged the cluster to run other Dev and Test workloads, for the "simple to get an identical node" reasons that William listed.

      I have a customer about to light up an ESXi Mac Pro cluster. I expect to see more and more as customers turn away from Microsoft. For various reasons, getting money for Mac Pros to run a dev and test cluster (outside of one or two specifically for compatibility testing) is highly unlikely. But a Mac Mini probably just squeaks under the radar of "petty cash."

      One small Mac Mini cluster becomes two. Two become a larger cluster. Soon you have 50 of the things running legitimate Dev and Test workloads for reasons as much political as practical. It's very human. And hey...they probably get the job done just fine.

    2. John Robson Silver badge

      Re: sounds absolutely stupid

At what point do 50 commodity boxes become more effective than one monolithic, RAIDed, multiply redundant hunk of a machine?

So they use redundant boxes, not redundant components. I can't really see that as a bad thing. A few shelves of Mac Minis needs only a network switch, some cabling, a power distribution system, and a tray of USB keys. Pop the Mini on your desk, configure it into the cluster, power it down and pop it on a shelf.

I can see plenty of use for this kind of resilience in a system. Wasn't Google reporting that consumer drives were actually basically as good as enterprise drives? They were using them, since with a globally, multiply resilient architecture you design the thing expecting regular failures.

      1. Trevor_Pott Gold badge

        Re: sounds absolutely stupid

"At what point do 50 commodity boxes become more effective than one monolithic, RAIDed, multiply redundant hunk of a machine?"

3 nodes. 3 commodity nodes are the equivalent of one high-end engineered server: two in HA, and one on the shelf. (Because you don't get four-hour response with commodity.) Now, that doesn't mean "3 commodity to every enterprise server". That's just the entry point at which the equation begins to make sense. The actual formula is

        3 + 1.25N commodity nodes are equivalent to N high end enterprise nodes.

        This factors in failure rates, the fact that commodity nodes don't keep the same motherboards in production for as long as enterprise nodes, etc.

        Now, Supermicro changes the calculations some. They offer 7-year support on some of their boards. This means that the algorithm becomes

        3 + 1.1N Supermicro 7 year support nodes are equivalent to N high end enterprise nodes.
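Those two rules of thumb can be sketched in a few lines. This is a hedged illustration only: the 3-node floor and the 1.25/1.1 multipliers are the commenter's own estimates, not measured constants, and the function name is invented for the example.

```python
import math

# Rule-of-thumb node equivalence from the comment above. The floor of 3
# and the overhead multipliers (1.25 for generic commodity, 1.1 for
# 7-year-support Supermicro boards) are the commenter's estimates.
def commodity_nodes_needed(n_enterprise: int, overhead: float = 1.25) -> int:
    """Commodity nodes roughly equivalent to n_enterprise high-end servers."""
    return 3 + math.ceil(overhead * n_enterprise)

print(commodity_nodes_needed(10))                # 3 + ceil(12.5) = 16
print(commodity_nodes_needed(10, overhead=1.1))  # 3 + ceil(11.0) = 14
```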

        My entire career has been about determining the maths on this. Factoring in hundreds of variables. Testing, retesting and doing it all over again. I have cooperated with hundreds of systems administrators around the world to figure out failure rates, which vendors to avoid, which models to avoid, and more. The hard work of making commodity as reliable as enterprise with a fraction the cost.

        Now, thanks to Facebook, Google and others that long effort is coming to a close. The Opencompute initiative is functionally industrializing my life's work, after having proven it at a scale I never could have dreamed of. (My biggest was 15,000 nodes in a single datacenter.)

But yes, there is logic underpinning this "commodity madness", even if many of those whose paycheques depend on "enterprise vendors uber alles" will never understand it.

        1. dan1980

          Re: sounds absolutely stupid

          It's all about scale.

          It's similar to when you ask yourself how much time it's worth spending optimising/automating a process. If you'll only save 5 mins a day but it will take you a day to script and test and debug and document then it's not worth it.

          Likewise using commodity hardware. As Trevor says, the equation is usually different with commodity kit and you often have to buy more stuff than you would with big brand X. You'll also likely have to spend more time testing and tweaking your setup and build processes than you would just following a white paper or co-opting a vendor expert to assist in the build.

          That represents an overhead that can make tried-and-trusted, vendor-backed kit preferable in certain situations. But, the larger the scale, the more you stand to save by deploying commodity kit so the more time and resources you can spend getting it right.

          For a small deployment, you just can't justify buying test gear and experimenting. If you're filling a whole aisle then you can spend quite a bit because the savings from getting it right will be worth it.
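The optimise-or-not trade-off in the first example above is a quick back-of-the-envelope calculation. A minimal sketch, assuming an 8-hour (480-minute) working day; the function name is illustrative, not from the original:

```python
# Back-of-the-envelope payback for "is automating this worth it?" -
# the figures (a day of scripting, 5 minutes saved per day) come from
# the example in the comment above.
def payback_days(build_minutes: float, saved_minutes_per_day: float) -> float:
    """Working days until the time spent scripting is recouped."""
    return build_minutes / saved_minutes_per_day

print(payback_days(480, 5))  # 96.0 working days - several months to break even
```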

          1. Trevor_Pott Gold badge

            Re: sounds absolutely stupid

            Exactly. This is why MSPs can deploy commodity hardware, but SMBs alone shouldn't. An MSP can afford the R&D, qualification, prototyping and validation because they're doing it over a whole stable of SMBs. The cost of all that is shared amongst the group.

            "It depends".

            Welcome to IT, eh?

            1. dan1980

              Re: sounds absolutely stupid

If every problem had just one solution, nicely documented and laid out, then I suppose we'd be out of a job.

              Ours is to assess the criteria "it" depends on and spec accordingly. Sometimes the answer really is a near-blind adherence to a reference architecture. More than once I've been told the solution must use 'tier one' vendors only.

              Custom-tuned setups can be far more efficient than white-paper solutions but known quantities bring their own efficiencies and shouldn't be discarded.

              1. Trevor_Pott Gold badge

                Re: sounds absolutely stupid

                "Custom-tuned setups can be far more efficient than white-paper solutions but known quantities bring their own efficiencies and shouldn't be discarded."

                The answer being a function of whether or not you're spending your own money.

    3. Anonymous Coward
      Anonymous Coward

      Re: sounds absolutely stupid

      "what is the point?"

Presumably VMWare need to get real work done these days and no one wanted them as desktops anymore...

  3. llaryllama

    Not such a bad idea

    Anyone who has worked in a real world corporate IT environment can probably see some benefits here. It's all very well saying that you could buy components for X and get a more powerful setup, but then some chump has to be paid to build and support those machines.

    There's no shame in using consumer gear for the job if it's good enough. Pre-built, easily sourced, easily swappable and free support. Not too shabby.

  4. Anonymous Coward
    Anonymous Coward

    It looks a little odd that a $6bn concern like VMware is using consumer-grade kit for important chores. After all, the company has partnerships with top server-makers, deep pockets and a customer-facing hybrid cloud that would be laughed out of business if it used Mac Minis.

    Not when you see how they are using them. It's a nice small, resilient cluster. To me, that's actually a sign these guys DO technology rather than go for big boxes. This is gear set up by tech heads rather than a spreadsheet monkey so I'd trust this a lot more.

  5. chivo243 Silver badge

    I'm watching and listening

Our shop just converted to VMware from Parallels for Mac - lots of Xserves and older Mac Pros. We are primarily a Mac house but use Windows for the Finance and HR groups. We use individual Mac Minis for many different services, and I can say that swapping out a Mini is easier than a 2U Dell or HP.

As we are now committed to VMware, I'm watching and listening. We are looking at deploying a pair of new Mac Pros in the future.

    1. Anonymous Coward
      Anonymous Coward

      Re: I'm watching and listening

It looks like you've never swapped a Dell or HP blade server...

      1. Volker Hett

        Re: I'm watching and listening

        I did, but never one for $1200 :)

    2. NogginTheNog
      FAIL

      Re: I'm watching and listening

"We use individual Mac Minis for many different services, and I can say, swapping out a Mini is easier than a 2u Dell or HP"

      The point with Enterprise rack systems is that you shouldn't need to swap out the whole unit: the failed component will generally be removable and replaceable, sometimes without even powering down!

      1. Anonymous Coward
        Anonymous Coward

        Re: I'm watching and listening

        Hot swap CPUs and RAM? That's not for the Apple fanboys... they don't understand the difference between hardware designed for the datacenter and hardware which is not.

Anyway, VMware's build system is not a critical one - and the whole project looks more like sysadmin fun than something you would run your critical applications on.

        Also, not everybody has a Genius bar nearby....

        1. hypernovasoftware

          Re: I'm watching and listening

          "That's not for the Apple fanboys... they don't understand the difference between hardware designed for the datacenter and hardware which is not."

          Thanks for the generalization.

          I've been a Mac user since '98 and I've also been in IT for over 40 years.

          I know the difference and I'm sure many others do also.

          This isn't something a consumer would ever do or have any interest in.

  6. RonWheeler

    Makes sense

    ...only if you don't have to pay for VMWare licenses. Which these guys don't. For everyone else, fewer faster boxes is the way to go.

    1. GitMeMyShootinIrons

      Re: Makes sense

      Your argument only really holds water with Workstation/Fusion licensing. For ESXi (we'll ignore the free license option), going for a pair of Mac Minis would be the same as licensing a dual socket workstation, but be a bit more resilient (and a little more portable).

  7. Anonymous Coward
    Anonymous Coward

    Probably save a ton of noise and cooling with those things.

    They're using boxes with mobile CPUs and they have no use for the graphics. Mobile chips make less heat.

    Not sure what the "genius" bar would say. "You're using these as build nodes in a cluster?".

    1. Anonymous Coward
      Anonymous Coward

Noise? Servers are in the server room, and since you can administer them remotely these days - even the hardware, out of band, with tools like Dell's iDRAC - there is very little reason to go there except for physical operations like replacing or upgrading hardware. Cooling, environmental sensors and a UPS are things you need anyway to ensure safety and prolong hardware life. Actual servers are pretty cool when idle, and unless you're using CUDA GPUs, servers don't come with hot graphics chips.

  8. Anonymous Coward
    Anonymous Coward

Let's look at this logically: VMWare are a large specialised IT company with a billion-dollar capitalisation, lots of very clever, savvy techies, some pretty stupid marketing people and some brain-dead naming conventions that confuse the hell out of me.

They could probably have any computer system in the world to do their builds on. Even the most expensive x86 stuff isn't that expensive in the grand scheme of things.

    So VMWare choose Mac mini servers. Is this because:

a) they are all Apple fanbois who worship at the altar of Steve Jobs (RIP),

b) their marketing dept wants Macs so they can be real cool, grow neckbeards and wear thick-rimmed glasses and turtleneck jumpers, or

c) their techies think this is a viable solution that they can live with and that addresses their needs. They've looked into this carefully, as it's their job to think things through, and they've come to the conclusion that it's an acceptable solution - even though a number of El Reg readers question VMware's technical background and think it's a dumb idea based on 30 seconds of thinking.

Answers on a postcard, please.

    Thanks.

P.S. I don't pay for VMware ESXi licenses either. Please don't tell VMware that they're free.

P.P.S. Just decommissioned my Mac Pro 8-core cluster running VMware ESXi 5.5. Nothing wrong with it, but a little over the top for what I needed. Downgraded to a cheap PC using a quarter of the power.

  9. David Harper 1

    Seymour Cray and Apple

    I'm reminded of the anecdote about Seymour Cray told by Microsoft's Jim Gray. When Cray was told that Apple had just bought one of his supercomputers to design the next generation of Macs, Cray commented that he'd just bought a Mac to design his next supercomputer.

    1. Anonymous Coward
      Anonymous Coward

      Re: Seymour Cray and Apple

Sure, for designing the computer chassis, Apples are good...

  10. Anonymous Coward
    Anonymous Coward

    On the purchase of Apple kit in business

    It is happening more - and not always for the better.

What used to be regarded as the luxury preserve of a niche few has now entered the consumer lexicon, and with every middle manager's personal iPhone and iPad upgrade consuming every bonus and water-cooler show-and-tell, the next natural move is to attempt to manoeuvre it into business.

Whereas I once found myself with the full support of the coffer-wielders in saying 'No' to Apple requesters, I have found the tide turning in a very different direction. Attempts to get Apple kit in have turned aggressive, with me suddenly having to justify 'why NOT' rather than it being the other way around.

Presenting valid arguments - like, the stuff we already have does what you're saying this panacea will do for us - garners only shouting and more aggression. How *dare* one challenge the shiny.

Hence, the inclusion of Apple kit in business can be a deeply political thing, the reasons for which are not always evident to the outside observer. I should think that someone in authority got an iSomething for his or her birthday and demanded or enabled the inclusion of said fruity cluster. It happens.

I can't believe it's easier to take a piece of your datacentre to a high-street shop than to build it from freely available and cost-effective hardware for on-site management. No, this smells of a 'thou shalt use' and a 'yes SIR / MA'AM' response.

FWIW, I believe in the right tool for the job - but very often a business already has those tools, or others can be had for far less cost.

    1. bpfh
      WTF?

      Re: On the purchase of Apple kit in business

      Why so many downvotes?

Having worked with some huge multinational corporations and with some cash-strapped SMBs, I've seen both sides of the coin.

When you do not have the cash or hardware, you have to find inventive solutions to get things running, and you realise how much optimisation you can get out of code running on previous-generation hardware. It may take more time and research, but you also get a lot of experience getting the most bang per cycle out of your systems... The company "invests" in your salary and know-how, not its systems.

On the other side, if you have the money, then on the basis of "time is money" you can throw hardware at any problem and hope it solves the one you are working on without spending too much time on research and analysis - for a slow website, say, just add more front ends and memory rather than spending two weeks finding and fixing bottlenecks. But it's nice being in a company that allows technical latitude and prides itself on innovative solutions, yet also has the budget to purchase and invest in hardware when it's needed.

Back to the topic above: $1200 for a Mac Mini, no lead time, small form factor, low power consumption, no farkeling about to install it, and no time spent integrating bespoke hardware à la Google - installing drivers and doing your own tech support with the OEMs. They don't have to be installed in a server room cooled to Siberian temperatures, any tech problem can be solved during your lunch break at a nearby Apple store, and hardware is available immediately for replacement or extension, as opposed to a two-week delivery time from Dell. That actually sounds like a good idea to me. It was very probably a gutsy move selling the idea to management, but it looks like it works.

      1. hypernovasoftware

        Re: On the purchase of Apple kit in business

        None of that anti-virus nonsense to have to constantly maintain either. How much money and lost productivity does that cost businesses in the long run?

  11. Anonymous Coward
    Anonymous Coward

    On-site support is too pricey for VMWare?

"When they break, one of the guys in our datacenter will pull the machine and take it to the Genius Bar, where they will tell us what we already know and replace the drive."

When something breaks here (and if we don't have a spare part already available, such as a replacement disk), within four hours a support technician arrives with the spare part (or the part is delivered via a courier if we can easily replace it ourselves, such as a disk drive). Nothing exits our datacenter, and the replaced part, if it can contain sensitive data as a disk drive can, is properly "disposed of" (aka destroyed).

    1. Anonymous Coward
      Anonymous Coward

      Re: On-site support is too pricey for VMWare?

I guess, because you are comparing their system to yours, that you must be doing the same type of thing.

The difference being that VMware's system has a really robust design, so when a node fails they can take it to a shop when time suits them - whereas you start to panic because you are suddenly at risk... or is that not the case, and you are just wasting money on expensive support because you are told you need it?

      1. Anonymous Coward
        Anonymous Coward

        Re: On-site support is too pricey for VMWare?

Sure, maybe not everybody can afford 50:1 redundancy... usually you design for the required redundancy and a little more, so if a node fails, you need to restore it sooner rather than later. Not everybody can afford a lot of spare capacity, so failed nodes mean more load for the remaining ones, depending on which node fails.

Just remember - we are talking about the build system. If a node fails, maybe you'll get the build a little later. It's not the system that processes, for example, orders, financial data, etc. - I would like to know on which hardware VMware runs what moves (and stores) money.

My build systems often run on "decommissioned" hardware from the "front line" - why not, if it is still powerful enough? After all, a build system runs on its own, doesn't require huge computational resources, and you can wait for it to finish; you don't need millisecond or better response times.

Money can then be spent on test systems, for example, making them mirror the hardware actually available and the type of environment the software will be deployed on.

Just sending a technician of mine to a Genius Bar - waiting for his or her turn, letting the "Genius" understand what's wrong and repair whatever is needed - would cost me more than keeping some spares around and ensuring the systems that need it are repaired within a few hours.

And moreover, no piece of equipment holding company data can leave the company site. Whether VMware trusts Geniuses is up to them; I could never authorize a build system with company product code on it - even on a damaged disk, even in temporary files or swap space - to leave the site without being fully wiped or destroyed.

    2. James 100

      Re: On-site support is too pricey for VMWare?

Yes, you can buy pricey support contracts where someone guarantees to bring you the new drive... or you can drive (or even walk, from some urban offices) to a shop selling replacement parts off the shelf, in a fraction of the time, during working hours.

      I could walk round to the shop and buy an IDE/SATA drive off the shelf in minutes. Not a penny paid in support contracts, and no courier service could come close to that time for any price. (OK, not at 3am - but then, being a university, we weren't working at 3am anyway, so that's academic: the shop was open all the hours we'd need it.)

      Except for the priciest specialist server drives, I suspect you'd find what we did with our desktop fleet: even with drives failing *within* warranty, the cost and hassle of getting warranty replacements outweighed the cost of just buying a new drive ourselves. (We had a load of dodgy Maxtors; it's a few years ago now, but I seem to recall replacements meant shipping the drive over to Ireland at our own expense, then waiting for the replacement to ship - or buying a brand new one on the spot.)

      It doesn't surprise me at all that VMWare have found this option works better for them for this kind of workload. I've heard of a few small offices doing something similar, relying on external disks for storage; ISTR one had two mirrored pairs of bootable drives, on two Mac Minis. One did "important" stuff, the other was non-critical - so disk failures were trivial (just buy and plug in a replacement), if the "important" machine died, they'd just swap both disks to the other and reboot. Simple and cost-effective, apparently.

      1. Anonymous Coward
        Anonymous Coward

        Re: On-site support is too pricey for VMWare?

IDE/SATA drives? Those are not what my servers - nor my workstations - are using, sorry; they're not the type of disks you find at the corner shop.

If you don't care about the warranty, you can build what you need yourself (and save money); when you buy branded items, it is partly for the warranty. Sure, low-end cheap PC repairs can be cheaper if you buy replacement parts yourself - but when it comes to high-end workstations or servers, repairs can be expensive as well, and parts can be difficult to find outside the official channel. And laptop parts, apart from drives, are not sold everywhere.

Also, in some countries labor safety rules forbid employees to mess with the hardware (if something goes wrong, the company is liable), so you would need qualified technicians available for the job, or have to rely on an external service.

  12. calmeilles

    "It looks a little odd that a $6bn concern like VMware is using consumer-grade kit for important chores."

    First Apple stopped making enterprise quality kit. It wasn't wonderful stuff but at least it had a few basics like dual power supplies, multiple NICs and remote management.

Then, after long prevarication, it finally licensed Mac OS on VMware... but only if your VMware was on Apple hardware. Which would have been fine if they made anything fit to go into a data centre.

I suppose that a 50-node ESXi cluster addresses the lack of built-in redundancy, and now you can even get cute little trays to mount Mac Minis in racks. But I'd still tell people dazzled by the shiny that if they insist on Macs in my data centre, their uptime SLA is 0.

    1. hypernovasoftware

      Ya sound like an Apple hater.

      Why the hate? Macs in the data center would make your job easier, not harder.

  13. Virag0

    It Works!

I am running a setup like this. A 16GB Mac Mini Server runs ESXi, and Thunderbolt Ethernet is used for iSCSI to an HP N54L hosting FreeNAS. I use ZFS for the iSCSI target and run all my VMs on it.

    Works great and does not use too much electricity...
