The changing face of branch offices

It is not only data centres and computer rooms that have started down the path of the “strategic consolidation” of resources. Whilst organisations rightly rate their people as amongst their most valuable assets, many have also begun to optimise the accommodation that they make available to their workers. The shift to “hot …

COMMENTS

  1. Anonymous Coward
    WTF?

    Old news

    I worked in Germany in '99 and '00, and we had remote offices all over the country with visiting support. We also supported a lot of home users. We got around the problem by shipping them computers that worked, plus a user base that didn't usually mess around with their work laptops.

    Took a bit of testing, and I credit the reliability to solid adherence to well-written procedures, which were rigidly followed by the remote, visiting engineering staff. Typically it paid off, and the more usual issues we had were the much rarer hardware failures ... as opposed to today's software issues: OS updates that brick the machine, badly written applications thrown together, etc.

    I got a phone call one day from a home user. I'd built and sent him his laptop, and it worked so well for him that he telephoned me in Muenchen to thank me. I was on cloud nine all day. I love it when hard work, where it counts, pays off.

  2. Aitor 1

    Slaves?

    Maybe the best idea would be to use slaves... after all, the idea is to pay less... and hot-desking is, I can tell you, only good for low-level work (cashiers, etc.)

    As for no data on local branch offices... it is MORE EXPENSIVE to have it in the central offices, and less useful, but a good idea.

    1. Anonymous Coward

      More expensive to host centralized data?

      That all depends on the network costs. While I don't see the network quotes for my clients, I can say at least anecdotally that they appear to have been dropping significantly over the last five or so years, because most of my clients are aggressively consolidating/centralizing, and it's not because their costs to manage distributed infrastructure have gone up.

      Larger branch offices might maintain a local domain controller and a file and print server (sometimes on the same box), but pretty much everything else runs over the WAN (which, by the Microsoft definition, would probably qualify as a Private "Cloud" of sorts).

      Other than network costs, and server capacity issues back in the NT 3.51 and 4.0 days, it has never made sense for infrastructure to be highly decentralized, primarily because you're wasting hardware and software money on some amount (if not a major amount) of over-allocation at the smaller sites (e.g. a small F&P server capable of hosting 100 seats only serving 30)... to say nothing of the management costs of dealing with many smaller boxes versus fewer larger ones.
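
      A rough back-of-the-envelope sketch of that over-allocation point, in Python, with entirely made-up figures (none of these costs or seat counts come from anywhere; they just make the arithmetic visible):

      import math

      BRANCHES = 20
      SEATS_PER_BRANCH = 30
      SMALL_SERVER_COST = 3000      # assumed price of a 100-seat-capable F&P box
      SMALL_SERVER_CAPACITY = 100
      BIG_SERVER_COST = 15000       # assumed price of one consolidated server
      BIG_SERVER_CAPACITY = 600

      total_seats = BRANCHES * SEATS_PER_BRANCH

      # Decentralized: one under-utilized box per branch office.
      decentralized_cost = BRANCHES * SMALL_SERVER_COST
      decentralized_util = total_seats / (BRANCHES * SMALL_SERVER_CAPACITY)

      # Centralized: just enough big boxes for the whole estate.
      big_servers = math.ceil(total_seats / BIG_SERVER_CAPACITY)
      centralized_cost = big_servers * BIG_SERVER_COST
      centralized_util = total_seats / (big_servers * BIG_SERVER_CAPACITY)

      print(f"decentralized: {decentralized_cost} ({decentralized_util:.0%} utilized)")
      print(f"centralized:   {centralized_cost} ({centralized_util:.0%} utilized)")
      # decentralized: 60000 (30% utilized)
      # centralized:   15000 (100% utilized)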

  3. Daniel 1

    It really stresses your network protocols

    When you run networking over such vast distances - especially with Microsoft technologies, which simply weren't written to cope with this sort of scale - you really start to notice the latency. I'm here in North Tyneside, and I have mapped network drives to iSeries servers located in Brussels. You really do start to see how badly NTFS and SMB/CIFS cope with networking over that sort of range (bearing in mind that iSeries doesn't even run Samba, but IBM's own breed of CIFS emulation, called 'Netserver'). Sometimes it's actually easier to open a stunnel and TS or VNC into a machine in Belgium, if you only need to take a look at something, rather than connect to it directly.
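
    To put rough numbers on that: a chatty, stateful protocol pays the round-trip time once per exchange, so an operation that's invisible on a LAN becomes painful over a WAN. A small Python sketch - the round-trip counts and latencies below are illustrative assumptions, not measured SMB figures:

    RTT_LAN_MS = 0.5     # assumed same-building round trip
    RTT_WAN_MS = 35.0    # assumed North Tyneside-to-Brussels round trip

    # A chatty file browse (negotiate, session set-up, tree connect, then a
    # metadata query per directory entry) can take dozens of round trips;
    # a single HTTPS fetch needs only a handful once the connection is up.
    EXCHANGES = {"chatty SMB-style browse": 60, "single HTTPS fetch": 5}

    for name, trips in EXCHANGES.items():
        print(f"{name}: LAN {trips * RTT_LAN_MS:.1f} ms, WAN {trips * RTT_WAN_MS:.0f} ms")
    # chatty SMB-style browse: LAN 30.0 ms, WAN 2100 ms
    # single HTTPS fetch: LAN 2.5 ms, WAN 175 ms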

    There is an estate of hundreds of physical and virtual servers, spread all over Europe, where I work, and if you inadvertently click on the wrong folder, or server, your machine will completely stop responding for over a minute while it dicks around updating its MFT.

    In our case, a lot of this consolidation is driven by legislation, which either requires the sensitive data we handle to be held in ever more secure and centralised locations, or simply makes it uneconomical to do otherwise. Only two people in IT, here on this site, actually have access to the room which holds our few remaining on-site servers (and these are test servers). Laughably - again for legal reasons - one of them only has access because he's the Fire Warden.

    More of our sites are using these throw-away solid-state devices for local file-and-print, because it is so much easier to manage and replace the things. Over longer distances, ad-hoc deployments of SharePoint and wikis seem to be getting adopted (hooray, the 'cloud' has arrived). HTTPS actually beats dedicated networking protocols over these sorts of distances (or, at least, it beats networking protocols and file systems that were designed in the days when ten computers sharing coax was a 'network', and half a gig was a 'big' hard drive).
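
    The "HTTPS beats the old protocols" point, reduced to code: one request, one response, and no session state to babysit across the Channel. A minimal sketch in Python (the URL is a placeholder, not a real endpoint):

    import urllib.request

    url = "https://intranet.example.com/reports/q3.pdf"  # hypothetical address
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = resp.read()
    print(f"fetched {len(data)} bytes in a single request/response exchange")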

  4. Henry Wertz 1 Gold badge

    Wow.

    @Daniel1, that sounds absolutely ghastly. I don't know what would really be feasible, but a minute-long penalty for clicking on the wrong share? This really sounds like a setup that needs the remaining Microsoft stuff taken out.

    1. Daniel 1

      Well, yes, it is a bit

      However, I'm deliberately not blaming the Microsoft technology in and of itself. The built-in design assumptions that are now failing those protocols are exactly the assumptions that were sensible to make at the time they were first developed, over a decade ago. (True, the complete lack of effort by the vendor to update them is what lies at the heart of quite how awful it is, of course, but even the best efforts would not have addressed the core of the problem. I really don't think any completely stateful networking protocol makes sense over such ranges.)

      This is why I made my fairly flippant, but entirely valid, point about HTTPS web-based systems being used as a quicker and more reliable means of data exchange within the business. There isn't a strategy dictating its uptake; there's just a need.

      One of the reasons we still connect to some of our machines using PuTTY, here at work, is that an established PuTTY session will always keep responding, even when the machine it is hosted on is completely frozen. Since this Dell laptop can freeze for upwards of ten seconds if I accidentally click on some HR share that I don't have permissions to (while it relays this message back to me from Vienna, or Madrid, or wherever), the temptation is to proceed on the assumption that what you are using is a truly "multi-tasking operating system" and try to give it something else to do while it waits.

      This is a Bad Idea, since it completely jams up its buffer with extraneous requests for work that it hasn't the processing power to cope with. Its task priority was built for an age when a simple permissions message didn't have to cross mountain ranges and great stretches of sea water just to tell you "you can't come in". And Microsoft are the world's great optimists, of course: they always prioritise the displaying of error messages right to the top of the pile (as I say, where I work, even our bollox-ups have to undertake journeys that would make Bilbo Baggins flinch).

      So we learn to just switch to one of the PuTTY sessions and get on with something more productive in the meantime. We have two or three screens as standard anyway, so you just request some action from the Windows box that will cause a visible change on one of the other monitors when the queue finally clears, and get on with something more useful in the PuTTY session while you wait.
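
      That human workaround - fire off the slow request, then go and do something useful until the answer turns up - is roughly what the software ought to be doing too. A sketch of the idea in Python (check_share is a hypothetical stand-in for whatever slow remote permissions check you hit; the ten-second sleep just simulates the trans-European reply):

      import time
      from concurrent.futures import ThreadPoolExecutor
      from concurrent.futures import TimeoutError as FutureTimeout

      def check_share(path: str) -> bool:
          time.sleep(10)    # simulate the reply crawling back from Vienna
          return False      # "you can't come in"

      with ThreadPoolExecutor(max_workers=1) as pool:
          future = pool.submit(check_share, r"\\vienna\hr-share")
          try:
              allowed = future.result(timeout=1)   # don't hang the session on it
          except FutureTimeout:
              print("still waiting on Vienna; getting on with something else...")
              allowed = future.result()            # collect the answer later
          print("access granted" if allowed else "access denied")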

      Ah yes, green text on a black background. Welcome to the twenty-first century: please form an orderly queue... However, if the damn machines themselves can't task-switch with any grace, then I guess we'll just have to (I've become much better at marshalling myself than I ever was in C++).

      Ultimately, this points to hardware hypervisors and Chrome-like operating systems, of course. But I restate my point: the problem isn't the Microsoft technology; it's this whole "desktop operating system" thing that is failing us. The relaying of computing work backwards and forwards - when all I need to see is the outcome - is where the waste comes in.

      This isn't the future, of course; it's the past - since it's just a dandified version of the old client/server mainframe world. However, if it makes me more productive - because my productivity falls back below the level of the machines I use - then I'll be happy.

      (Our motto at work is: "You wait around for ages because the bus-errors arrive in threes".)

This topic is closed for new posts.