Microsoft's cloud leaves manual transmission behind

When you write technology blogs for a living you end up sitting through a lot of WebExes, watching a lot of training videos and going to a lot of conferences. A growing trend that emerges from all these presentations is the importance of autodeployment, something that has far bigger implications than a mere installation method …

COMMENTS

  1. K

    Excellent Article

    Thanks Trevor. As a Technical Manager in an SME, your experiences and experimentation are quite an eye-opener and give me a lot of food for thought.

    1. Mr Spock
      Boffin

      Re: Excellent Article

      You forgot the bit that goes 'SME viagra SME cialis SME levitra SME casino SME lonely wives' etc. ...

      1. This post has been deleted by its author

      2. K
        Gimp

        Re: Excellent Article

        Ahh Mr Spock, are you in the market for any? I know somebody who can hook you up!

  2. Irongut

    Admins as cattle ranchers, eh? So what happens when all they are capable of is killing it & spinning up a new one, but the new machine has exactly the same problems as the old one and no one actually remembers how to troubleshoot and fix things?

  3. Do Not Fold Spindle Mutilate

    Your cattle can be any colour as long as it is black.

    Will the customers be willing to pay for your pet customizations if the cost of a cattle server is cheaper? Decades ago the same thing was being said by clerks filling out dozens of financial forms. All of those jobs are gone because most companies buy financial software packages. Now they are automating the jobs of the people who are left. Outside the world of computing, when there is a significant price difference between a cheaper mass-market product and a higher-priced customized item, lots of people buy the cheaper / cheapest. There is some market for tailors, independent mechanics and personal chefs, but McDonalds serves billions of burgers. Would you like fries with your cattle server?

    Bye. Turn the lights out when you are gone. I retired from being a DBA because the companies went from treating me like a pet, a person, to treating me like cattle, an expense or a steer.

    1. John Miles

      Re: but McDonalds serves billions of burgers. Would you like fries with your cattle server?

      And even McDonalds was trying to reduce the number of people asking "would you like fries" by using call centres:

      http://www.nytimes.com/2006/04/11/technology/11fast.html?pagewanted=all&_r=0

  4. Paul Hovnanian Silver badge

    All I could think of was mad cow disease. Let just one carrier of BSE onto the ranch and trouble ensues.

  5. Nate Amsden

    infrastructure as code is crap

    And it really only applies in hyperscale configurations; in all other configurations the overhead is not worth the investment. I've been working with Chef for almost 3 years now, which I believe takes the concept much further than Puppet, and as someone who has managed servers since the mid 90s I firmly believe that the number of folks out there who are capable of operating at that level is very small. I've only come across a handful in my career (I don't count myself as one of them, as I believe Chef is a torturous tool as it is implemented today).
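
    (For anyone who hasn't used these tools: the whole pitch of "infrastructure as code" is that you declare the state you want and the tool converges the machine to it, idempotently, so running it twice is harmless. Here is a rough sketch of that idea in plain Python; this is not Chef or Puppet syntax, and the paths, config contents and service name are invented for illustration.)

```python
# Rough sketch of the declarative, idempotent idea behind Chef/Puppet.
# Not real Chef code; the paths and service name are invented examples.
import os
import subprocess

def ensure_file(path: str, content: str) -> bool:
    """Converge a file to the desired content; return True if it changed."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False  # already in the desired state, do nothing
    with open(path, "w") as f:
        f.write(content)
    return True

def restart_service(name: str) -> None:
    """Restart a service (assumes a systemd host)."""
    subprocess.run(["systemctl", "restart", name], check=True)

# The "recipe": safe to run repeatedly, because each step checks state first.
if ensure_file("/etc/myapp/app.conf", "port = 8080\n"):
    restart_service("myapp")
```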

    These public cloud providers like the infrastructure-as-code approach because it puts the onus on the customer to figure things out, instead of on their IaaS offering. Because their IaaS offering is crap compared to the capabilities we have had in enterprise IT for the past... 6-8 years or so? How often do servers fail? Not often (if they are good quality). How often do racks of equipment fail? The only time I've seen that happen is with a rack of equipment w/o redundant power, where a blown power supply took out the PDU. It's raaaaaaaaaaare. Data center failures? I've seen more of these failures at public cloud providers than anywhere else myself. Again, assuming you're running quality equipment at a quality facility. I did move one of my employers out of a bad data center in Seattle (within ~6 months or so of starting my job there); about 3 years before that, the facility had a massive 30+ hour downtime due to a fire.

    With providers like Amazon it's still somewhat rare, but their architecture is broken, so you have to do much more work to get around their limitations higher up in the stack than you otherwise would have to. Though I think in some cases they are working to fix this, for example by requiring network storage for the OS drive in some cases, but I don't believe they do things like preserve internal/external IPs and have automatic failover etc. (sure, they do have elastic IPs, but that is only part of the solution).

    Configuring the operating system and basic software is obviously only the beginning; configuring the applications that run on top of them to handle such a situation can be much more difficult. Both my current and previous companies built their applications from the ground up in Amazon's cloud. Neither organization ever put anything in for the "cattle" approach (I call it "built to fail"). Today we operate in our own infrastructure, and despite costs being a tiny fraction of what they'd be in a public cloud, they continue to build more single points of failure, complex configurations etc. Our main application requires hard downtime for all app servers for core configuration changes; you can't change the configuration and, say, restart one server at a time, because the configuration is globally shared/cached amongst all systems.

    Even before the public cloud, I worked at several internet SaaS-type startups and e-commerce companies (all using software developed in house), and NONE of them put even a little bit of work into supporting the built-to-fail model from a development perspective. These weren't archaic institutions with decades of cruft in their application systems; these were all modern startups with fresh code bases, none of which was older than, say, 18 months when I started at the respective companies.

    The main reason this isn't done in many cases is that it's not a priority. They'd rather get the next whiz-bang feature out to impress the customers. Getting application-level failover right is quite complex (outside of stateless web servers, though I saw even that overlooked recently when a developer launched a new app that required session affinity for it to work; fortunately the fix was pretty simple, but the point is he didn't think like that when building it), and it rarely comes into play. I've seen this time and time again, year after year. I see developers cutting even basic availability corners all the time to get said features out, and getting the full blessing of management in the process. Supporting built to fail would be like me being able to afford a nice helicopter for personal transport.
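
    (To unpack that session-affinity slip: if session state lives in one app server's memory, that server can never be treated as cattle, because killing it logs everyone out. The usual fix is to move the state into a shared store so any instance can serve any request. A minimal sketch, assuming Flask and redis-py; the hostname, cookie name and TTL below are invented for illustration.)

```python
# Sketch: stateless web servers with session data held in Redis,
# so any instance can serve any request and each one is safe to kill.
# Assumes Flask and redis-py; hostname, cookie name and TTL are invented.
import json
import uuid

import redis
from flask import Flask, make_response, request

app = Flask(__name__)
store = redis.Redis(host="redis.internal", port=6379)

SESSION_TTL = 3600  # seconds of idle time before a session expires

@app.route("/")
def index():
    # Look up the session in the shared store, not in process memory.
    sid = request.cookies.get("sid")
    raw = store.get(f"session:{sid}") if sid else None
    session = json.loads(raw) if raw else {}

    session["hits"] = session.get("hits", 0) + 1

    if not sid:
        sid = uuid.uuid4().hex
    store.setex(f"session:{sid}", SESSION_TTL, json.dumps(session))

    resp = make_response(f"hits this session: {session['hits']}")
    resp.set_cookie("sid", sid)
    return resp
```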

    All you have to do is look at the number of web sites that go down, even from major suppliers, each time Amazon has a hiccup in their infrastructure, to see that the issue I describe is not isolated to my own experiences; it is widespread. Each time, there is a small group of people who make absurd claims like "oh, you should use multiple regions and/or multiple cloud providers". Yeah, in an ideal world maybe that is possible, but reality is obviously different.

    I remember that one 30+ hour data center outage a few years ago; among the sites that went down was Bing Travel. You might have expected Microsoft of all folks to have some sort of distributed system (they are a big company after all), but they did not. Not sure how long it took them to get that site back up (if they were able to get it up before the data center itself came back online). I imagine they do have better protection for the site today... Even more of a question is why they hosted it at that facility to begin with; it had a history of problems, and Microsoft obviously has the $$ resources to move stuff around if needed.

    When you're small it makes sense to put this burden of failure on the underlying infrastructure; it's cheaper, and it lets you focus more on your product rather than low-level stuff (one alternative may be PaaS, but again my experience says that PaaS is too limited in offering what developers want, which is often off-the-wall sorts of things). *IF* you get to a really big scale (*most* applications/organizations will never get close; be realistic), then you can re-architect it a bit to handle such a situation (it'll likely be only a small part of a re-architecture to have the application scale to that level anyway).

    "Forcing" organizations into adopting built to fail (which is what the likes of Amazon do if you use their service, though I believe most do not realize it for some time) is just plain wrong.

    1. Tank boy
      Happy

      Re: infrastructure as code is crap

      I like "built to fail". I'm stealing it. That was a good rant on the uselessness that is the 'cloud'. I built my own cloud in my house. It's working fine, and if it breaks I don't have to spend hours on the phone dealing with people who have not a clue what I'm talking about (mention Linux and their heads start spinning). I've got everything backed up on CD/DVD/flash drives. Way cheaper than the cloud, and I own it.

  6. kmlbar

    Too many clouds

    It looks like every vendor wants you to use their cloud ... managing the clouds is going to become a full-time job ... and trying to keep stuff out of the cloud is nearly impossible. For a security-focused SMB, the cloud is a PIA. Back in the good old days we had file servers and everything was stored on them... and it was easy to find corporate documents. Now, for some files, Adobe CC stores them in the (only accessible to Adobe apps) Adobe CC cloud; Office 2013 wants to store everything in SkyDrive (or whatever it's called today); Apple machines are busy storing stuff in iCloud; your Google docs are busily being stored in Google Drive; and other users are stashing stuff in Dropbox, Cubby or the dozens of other cloud storage services.

    It's a disaster... who knows where the files are, and who has access to what.

    The old model of "nothing on the desktop" and everything on the file server was way more productive.

    1. Frances Banana

      Re: Too many clouds

      @kmlbar

      You are so right about the amazing fragmentation of all these services. Taking the recent spying buzz into account as well, it does indeed become a bit unbearable. The problem is that many IT managers in larger companies absolutely don't see this as a problem. But it's also the SMB guys who somehow don't get the whole point: they want the cloud because they don't want to have a single humming server in their office (and also to spare some IT guy looking after it). But they completely fail to see the problem of NOT OWNING THE DATA, nor the service.

      The cloud is great for some applications, but when we have fragmentation like you described, it's pure manure stuffed in a banana with a sticker reading "mashed one, tasty one!"...

  7. IGnatius T Foobar

    Microsoft FAIL

    As usual, Microsoft takes a good idea (separating workloads from the hardware they run on) and entangles it in their proprietary web of lock-in technologies (in this case Microsoft System Center, Hyper-V, Azure, SkyDrive, and of course Windows).

    One need not involve the Beast of Redmond to enjoy the benefits of large scale virtualization. The "cattle" approach need not involve throwing away servers, either. Pretty much everyone now understands the benefits of virtualizing everything. Even a "single purpose server" is a candidate for virtualization, because if the server is dying you can move the workload to a new one without having to reinstall the operating system and applications. It unties the software from the hardware, and that's always a good thing.

    And since most Microsoft software is still so unstable that you need single-purpose servers (who would dare let the pile of crud that is Exchange cohabit on the same server with anything else?), we've also adopted virtualization as a common method of "sandboxing" unstable applications from dodgy vendors such as Microsoft.

    None of this means you don't take care of your physical servers, even if they're in house. And if I want to give them names and repair them when they fail, that's fine too.

  8. Stephen W Harris

    The point of "cattle vs pet" isn't to stop you having your unique and precious snowflake; it's to have you _define_ your unique snowflake in such a way that if it breaks (hard disk failure; hardware failure; whatever) then it's quicker to rebuild than to repair. SMEs benefit from this. An environment with 1000 OS instances definitely benefits.
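
    To make "define" concrete: the definition can be as simple as capturing everything that makes the box what it is as data, plus a script that replays it onto fresh hardware. A bare-bones sketch, assuming an apt/systemd host; the package, path and service names below are invented for illustration:

```python
# Bare-bones sketch: a "snowflake" captured as data, so a replacement
# box is rebuilt from the definition rather than repaired by hand.
# Assumes an apt/systemd host; every name below is an invented example.
import subprocess

SERVER_DEFINITION = {
    "packages": ["nginx", "postgresql-client"],
    "files": {
        "/etc/nginx/conf.d/app.conf": "# proxy config for the app\n",
    },
    "services": ["nginx"],
}

def rebuild(definition: dict) -> None:
    """Replay a server definition onto a fresh host."""
    subprocess.run(
        ["apt-get", "install", "-y", *definition["packages"]], check=True
    )
    for path, content in definition["files"].items():
        with open(path, "w") as f:
            f.write(content)
    for service in definition["services"]:
        subprocess.run(["systemctl", "enable", "--now", service], check=True)

if __name__ == "__main__":
    rebuild(SERVER_DEFINITION)
```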

    1. Trevor_Pott Gold badge

      You can only DEFINE your "pet" server if all elements of the stack, from OS to app to config, comply with that concept. The reason many of us have "pet" servers in the first place is that we are utterly reliant on applications that don't lend themselves to that. In addition, changing applications is a long, tedious, expensive process that most businesses aren't going to undergo just because it advances Microsoft's strategic initiatives or helps one of the nerd pack get a little more sleep once a year.

      I see no direct benefit to the SME and you have failed to articulate one.

    2. Anonymous Coward
      Anonymous Coward

      While I would agree, how would I blow away our company's ERP and re-create it? First, it is only installed once (not counting test/dev), so creating a definition of it wastes time. Second, if I blow it away and replace it, what do I tell the users who just lost all their data? So then I have to make the definition back up the old data and restore it into the new servers. Last of all, how on earth do I keep the definition up to date as the application changes?

      All these issues can be addressed, but at what cost? I like a lot of the infrastructure deployment parts, but I am yet to see the use for servers, in that companies generally only deploy a version of an application once.

  9. Paul Hovnanian Silver badge
    Joke

    Admins as cattle ranchers?

    Udder nonsense!

  10. Tim 11

    it's a nice idea...

    But applications can only be treated as cattle if the infrastructure and services they require work the same way. For most enterprise infrastructure (think SQL Server, Oracle, ASP.Net, J2EE, Exchange, anything you like) that's as far away today as it's ever been.

This topic is closed for new posts.