You care about TIN? Why the Open Compute Project is irrelevant

There’s a lot of angst right now over the Open Compute Project, Facebook’s open-source data centre gift to the world. Some, as detailed by El Reg, describe Open Compute testing as “a complete and utter joke.” One that isn’t apparently very funny. At least, not to Cole Crawford, executive director of the Open Compute Project. …

  1. dan1980

    This message brought to you by your friendly local 'cloud' evangelist . . .

    Not that he's wrong when he says that there's no way you can build and run and maintain a datacentre as cheaply as Microsoft or Google or Amazon. Of course you can't.

    But the implication - that you can't deploy your workloads as cheaply in your own datacentre as you could by using Microsoft or Google or Amazon's public cloud - is a little less cut-and-dried. Why? Because workloads vary. Simple as that.

    It's also true that the requirements surrounding those workloads vary and this adds to the murkiness.

    On one hand, you can argue that many looking to run their own data centres don't really understand the costs of doing it properly but on the other, you can also argue that many looking to host their applications on a public cloud don't understand the costs associated with doing that properly either!

    1. Ken 16 Silver badge
      Pint

      You type faster than me

      Have a beer!

    2. Alan Brown Silver badge

      "Not that he's wrong when he says that there's no way you can build and run and maintain a datacentre as cheaply as Microsoft or Google or Amazon. Of course you can't."

      Perhaps, but every time I've priced their services for what we do (big storage and heavy compute loads), they work out at least twice as expensive as running our own.

      I let manglement go out and get schmoozed about cloud periodically and make sure they get full pricing for what they want to do. They get the idea fairly quickly that cloud is fine for intermittent or light loads and hopeless for anything more.

      1. dan1980

        @Alan Brown

        That's just it - for your load, which you know intimately, you can do it cheaper yourself with an architecture designed specifically for it.

        What these large cloud players are great for is situations where the demand is elastic.

        That's why they have had some great stories about research projects (e.g. running models and simulations) managing to run their workload in hours when it would take their in-house systems days or weeks (or longer) to complete.

        To spec their in-house systems to speedily handle the largest job is just not feasible, as 99% of the time that capacity will be vastly under-utilised.

        But if your load is relatively well-known and relatively constant - or at least predictably expanding - then an in-house system may well be better for you.

        The important point is that an 'elastic' system provides its benefit not just in being able to stretch to accommodate large loads but also in being able to shrink again once those tasks/situations have ended and the workload returns to normal.

        These cloud services allow you to ensure that you always have the resources you need to run your tasks, however large they get and however quickly they change. And it's fantastic, but if that's not something you need then it's not necessarily going to be the best option for you.
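
        As a rough back-of-the-envelope illustration of that trade-off (every number below is a made-up placeholder, not anyone's real pricing), the utilisation argument looks something like this:

        # Illustrative sketch only: owning peak-sized kit vs renting the burst.
        # Every figure here is an assumption for the sake of the example.
        PEAK_SERVERS = 100         # servers needed to finish the biggest job quickly
        BASELINE_SERVERS = 5       # servers busy the other 99% of the time
        BURST_HOURS_PER_YEAR = 87  # roughly 1% of the year spent at peak
        HOURS_PER_YEAR = 8760

        OWNED_COST_PER_SERVER_YEAR = 2000.0  # assumed amortised hardware + power + space
        CLOUD_COST_PER_SERVER_HOUR = 0.50    # assumed on-demand instance price

        # Option A: buy enough tin to cover the peak and let it idle the rest of the year.
        own_for_peak = PEAK_SERVERS * OWNED_COST_PER_SERVER_YEAR

        # Option B: own only the steady baseline and rent the burst capacity.
        own_baseline = BASELINE_SERVERS * OWNED_COST_PER_SERVER_YEAR
        rent_burst = (PEAK_SERVERS - BASELINE_SERVERS) * BURST_HOURS_PER_YEAR * CLOUD_COST_PER_SERVER_HOUR

        print(f"Own peak capacity:    ${own_for_peak:,.0f}/year")
        print(f"Own baseline + burst: ${own_baseline + rent_burst:,.0f}/year")
        print(f"Time spent at peak:   {BURST_HOURS_PER_YEAR / HOURS_PER_YEAR:.1%} of the year")

        Flip the assumptions around - a steady 24x7 load with little burst - and owning wins just as quickly, which is exactly why knowing your workload matters.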

  2. Ken 16 Silver badge
    Holmes

    Efficiency vs Cost

    An in-house operation is never going to achieve the efficiencies of Google/Amazon etc., but it may be able to provide core (steady load) processing at a lower cost. I think El Reg itself has, in the past, charted how cloud costs add up, to the point that a cloud solution is much more expensive for a known load than a dedicated set of servers. There is a cost of management, but the number of people with new skills needed to establish and run a cloud IaaS is maybe as high as for in-house traditional workflows, and they're harder to hire because they've got cloud on their CV. Cloud operators want to make money to invest in new server farms, and they make it from their customers. It's got its place, but there's a reason people keep touting hybrid.
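
    To put some purely illustrative numbers on the known-load point above (made-up figures - substitute your own quotes):

    # Always-on workload: rented cloud VM vs an owned, amortised server.
    # All prices are assumptions for illustration, not real vendor rates.
    CLOUD_VM_PER_HOUR = 0.40      # assumed on-demand rate for an always-on instance
    HOURS_PER_YEAR = 8760

    SERVER_PURCHASE = 6000.0      # assumed price of a dedicated box, bought outright
    SERVER_LIFETIME_YEARS = 4
    COLO_POWER_SUPPORT_PER_YEAR = 1500.0

    cloud_per_year = CLOUD_VM_PER_HOUR * HOURS_PER_YEAR
    owned_per_year = SERVER_PURCHASE / SERVER_LIFETIME_YEARS + COLO_POWER_SUPPORT_PER_YEAR

    print(f"Cloud, 24x7: ${cloud_per_year:,.0f}/year")
    print(f"Owned, 24x7: ${owned_per_year:,.0f}/year")

    With figures anywhere near these, the dedicated box wins for a steady load - the cloud premium is what you pay for being able to switch it off.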

    1. P. Lee

      Re: Efficiency vs Cost

      Do you have elastic compute requirements?

      For most companies, I think the answer is, "not really." Increasing requirements? Perhaps, but not ones that vary much up and down. Plus, hardware is cheap.

      From what I can see there are several things which reduce costs for cloud: cheap licensing (open source), load balancing (for sharing hardware), and automation/a single architecture.

      Perhaps going "cloud" is the only way to realise these benefits, but it essentially requires application re-writes. If you are re-writing for the web, you may find you can achieve load-balanced distribution with some Apache boxes, some VRRP and BIND. Perhaps you (shock horror!) don't need global reach for your apps, because all your users are in an office down the road. The trick may just be to impose the same discipline on internal IT as a cloud provider would.
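
      In effect you're rebuilding, in miniature, the distribution layer a cloud gives you: a pool of web boxes, round-robin selection, and a health check so dead ones drop out - which is roughly what the BIND round-robin and VRRP combination buys you at the DNS and gateway level. A minimal conceptual sketch in Python (hypothetical addresses; this stands in for, rather than shows, the actual Apache/VRRP/BIND setup):

      import itertools
      import socket

      # Hypothetical backend pool - the "some Apache boxes" behind round-robin DNS.
      BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
      _rotation = itertools.cycle(BACKENDS)

      def backend_alive(host, port=80, timeout=0.5):
          """Crude health check: can we open a TCP connection to the web port?"""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      def pick_backend():
          """Round-robin over the pool, skipping hosts that fail the health check."""
          for _ in range(len(BACKENDS)):
              host = next(_rotation)
              if backend_alive(host):
                  return host
          raise RuntimeError("no healthy backends")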

  3. Ole Juul

    the missing parts

    Where's the internet infrastructure going to come from, or is this spiel specific to a certain area with extraordinary uptimes? Many countries have real problems with net stability. Some countries are really bad, but it's not that good here in Canada either. Should one not count the cost of downtime?

  4. Anonymous Coward
    Anonymous Coward

    Blah, blah, blah. Sounds like a load of marketing to me. If the cloud solved all problems, we would not have any other DCs left by now. When I talk to my customers and do the calculations of what it will cost - including transfer fees, migration from one cloud to another, downtime and so on - using the cloud for core enterprise workloads is just not going to cut it.

    However! Small, agile startups? Why not. Some test and dev also sounds like a good choice. One thing is certain though: moving everything to cloud provider X will give you a nightmare once you need to move to provider Y.

  5. Voland's right hand Silver badge

    Quality does not seem to be a metric of consideration.

    Quality does not seem to be a metric of consideration - it never was, is not now, and never will be in cloud, which is what OCP and the like are aimed at. You are supposed to deliver _THAT_ at the software HA layer, not by gold-plating the hardware. A person lamenting the lack of it has seriously missed the plot. By miles.

    1. Jim Mitchell
      Go

      Re: Quality does not seem to be a metric of consideration.

      And even if you buy the spiffiest, most reliable equipment, at cloud scale hardware will still fail. So you need the software HA anyway. Once you have that, why spend the $$$ on the high-end gear?

      1. dan1980

        Re: Quality does not seem to be a metric of consideration.

        @Jim Mitchell

        EXACTLY.

        In small, single-rack deployments, you might replace a drive a couple of times a year and a server maybe once a year.

        At that scale, hardware reliability does make a difference because even a 'cheap' server is still a significant fraction of the total cost, and you are likely to want it fixed or replaced by the manufacturer.

        One server is also a significant fraction of your computing power so having it out of commission is not ideal.

        Scale up enough, though, and hardware failures (whatever the component) become a daily issue. As a result, it is far simpler to just pull the failed server, replace it with a working one, and hand the failed unit to the appropriate team, who will assess it and either replace the faulty component and return it to the spare pool, or scrap it.

        Once you are at that scale, a failed server or 10 is just not significant anymore - it won't noticeably decrease your available compute power, so the fact that it has failed doesn't impact you in any serious way. Provided, of course, the software layer is designed to handle it.

        If it is, you can take your time to remove the failed server and replace it, rather than pulling an all-nighter at the data-centre building a new one and swapping it in.

        There are configurations in-between the two extremes, of course, and the question becomes one of figuring out at which point certain configurations and paradigms become more efficient. The tricky part is that there is no fixed answer and it must be assessed based on all the factors and with a detailed knowledge of the workloads you are running and plan to run.

        The difficulty in hitting the 'sweet spot' is certainly one reason why 'cloud' can make a lot of sense - after all, the time and money it takes to figure this all out and optimise it is one of the factors! If it costs you a million dollars of staffing and testing and hardware and software to figure out an optimal solution that saves $100K a year, it's not necessarily worth it!

        As someone said above, a known load is often cheaper to do yourself. The simple reason is that the 'elasticity' that is so great in the big clouds is part of what you are paying for - whether or not you actually need it.
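
        A quick sketch of the scale arithmetic behind the failure point above (the 4% annualised failure rate is an assumed figure for illustration, not a measurement):

        # Same assumed annualised failure rate (AFR), very different fleet sizes.
        AFR = 0.04  # assumption: 4% of servers suffer a failure in a given year

        for fleet in (20, 1_000, 50_000):
            per_year = fleet * AFR
            per_day = per_year / 365
            one_box = 1 / fleet
            print(f"{fleet:>6} servers: ~{per_year:7.1f} failures/yr "
                  f"(~{per_day:.3f}/day); one dead box = {one_box:.3%} of capacity")

        Same failure rate, completely different posture: at the small end a dead server is an incident worth an all-nighter; at the big end it happens several times a day and costs a rounding error of capacity - provided the software layer already expects it.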

  6. Gordon 10
    FAIL

    Matt Asay says

    give your data to the NSA.

    How can you take someone seriously who flits from job to job like a Shoreditch luvvie?

  7. Timo
    Pint

    I believe you hit your word count target

    Well done. You used a lot of words to say very little.

    Have a beer.

  8. Nate Amsden

    I know

    that I can operate data center stuff much better than amazon, microsoft, google etc as I have demonstrated for the past decade.

    They operate stuff pretty well *only if* you're willing and able to change your operating model significantly to fit in with their "built to fail" model. Most apps and most orgs do not operate in that model, and sadly many of the people making the decisions don't realize this when they make them. I maintain that every development team I have worked with and every company I have worked for has been this way, and the same goes for many other companies that people I know work at. Most people think cloud is just magic and it will "just work". That is probably closer to reality for SaaS (since everything is abstracted), but it couldn't be further from the truth for IaaS (and really, who uses PaaS these days?).

    (Both my current company and my previous company launched their ground-up designed apps in a big public cloud, with pretty disastrous results - first and foremost from a cost standpoint, setting everything else aside. The first company collapsed after I left; they were spending easily $400k/mo on cloud hosting (I could have done it all in-house for around $1M of first-year costs and about $150k/year after, using tier 1 hardware and software). My current company moved out within a few months and I still operate their stuff today - it runs smooth as butter. I've had two, count 'em, two server failures in 3.5 years (both recovered automatically), 100% storage uptime in 3.5 years, and everyone sleeps well at night - I haven't had to rebuild a virtual machine since we moved to the data center 3.5 years ago.)

    The models are different. The model I work with provides higher levels of performance, availability, and generally significantly lower costs (though I haven't priced clouds out in the past couple of years) because we know how to oversubscribe and share resources (doing this right takes experience). It's not as flashy, there are no APIs to dynamically scale up and down - that is a manual process (but realistically we haven't had this need, ever) - and the lifetime of servers is measured in years. We have tons more functionality with our enterprise equipment than is possible in a public cloud (I'm not going to bother explaining the details if you don't understand this, not worth my time).

    Their model is that you have to build your apps to handle failure. On paper it sounds smart, but in reality that is a lot of work, and most companies opt to build features for customers rather than high availability.

    OCP to me is kind of dangerous. I know there are also a lot of people out there (I used to work for one such company) who just look for any excuse to cut corners on cost, not taking into account the risks involved in going with lower-quality stuff ("it's all the same"). If you have the staff, expertise and time to handle it, great, go for it. Most companies don't (none that I have worked for anyway; I work for small(er) companies).

    One company I was at tried adopting this model, saying "oh, we'll just hire an intern to swap hard disks" for a big Hadoop cluster they were going to build. They ignored my suggestions and I left before they bought anything. The first round of cluster build-out had a 30-40% failure rate on systems for the first year or so (literally halving their Hadoop capacity, which impacted the business because Hadoop operates on a quorum model, apparently - the lead developer explained it to me a year or two later), and they never hired interns to "swap hard disks". The leader of the group left not long after. The company is still around, but I heard all the investors have pulled out and they are riding on their own (not a position I would want to be in, given they went through probably 8 rounds of funding).

    I think cloud is the future, but that future is SaaS. IaaS is still a piece of shit when it comes to cloud, and I don't really see it getting any better (at least in the biggest clouds). SaaS makes a lot of sense, though.

    Lastly, I still recommend this plugin; it makes reading about cloud more enjoyable:

    https://addons.mozilla.org/En-uS/firefox/addon/cloud-to-butt-plus/

    (there is one for Chrome too, though I don't use Chrome)

    (maybe the plugin altered my comments to my butt I am not sure)

    1. Nate Amsden

      Re: I know

      Forgot to clarify: when I say data center stuff, I specifically mean servers/storage/networking. I do use co-location providers for actual data center floor space, cooling etc. (I generally go for the high-quality ones with N+1 everything). I use tier 1 ISPs for my internet uplinks.

    2. future research

      Re: I know

      Have an upvote from me. Not sure who downvoted you, but the cloud is only cheaper if you have huge peaks in processing needs and can otherwise shut the servers down. For base stuff that needs to be always available (and is not available as SaaS), self-owned will be cheaper.

  9. Probie

    Misses the point

    So of course running a data centre efficiently is hard - no one said it was easy - and this:

    “The type of work and focus needed to run a data center effectively is very different than running a short-term project. A data centre requires day in and day out focus on being perfect and making marginal improvements, while avoiding risk to production operations.”

    is stating the bleeding obvious and treating most people like idiots.

    Most people who look at ARM and exotic projects (for tin) already have a sizeable investment in the data centre (one way or another). So really, stop treating them like children.

    Also, you forgot the point about pay-for-provision in the cloud (running 24x7, like a day-to-day data centre) versus pay-per-drink (only turning it on when you need it).

  10. LB45

    Nordstroms don't do internal data centers anymore?

    So scratch off Nordstroms as a place to shop. Unless it's cash only.

  11. Anonymous Coward
    Anonymous Coward

    More link bait from Asay

    In every article he writes, Asay would like us to believe that applications are the center of the enterprise universe.

    They are not.

    Nordstroms does not sell more stuff because of their applications. They sell more stuff because of their business processes (which may include applications). Simply, Nordstroms does not see value in IT the same way Walmart does - and that is OK. Different companies view IT differently. Walmart uses their IT as a competitive advantage for the business processes - value they could not get from AWS. Point is, not all businesses are going to find value by running their IT in a public cloud. Some will. But applications, by themselves, do not define business value.

  12. Roj Blake Silver badge

    Amazon, Microsoft and Google?

    You do know that there are other IaaS/PaaS companies out there at least as good and at least as cheap as those three, right? And the ones that aren't US-based don't give your data to the NSA either.

  13. Andrew Meredith

    Oooh .. another advert ...

    ... for putting all your eggs in someone else's basket.

    For a while I made a decent living rescuing people from failed hosting companies and setting them up with their own kit. To my knowledge only one of the firms that did this has scrapped their in-house servers .. and that was because they got bought and started using the buyer's in-house servers instead.

    Maybe there should be a register of interests for IT journos as well as MPs.
