'Major incident' at Capita data centre: Multiple services still knackered

A major outage at a Capita data centre has knocked out multiple services for customers – including a number of councils' online services – for the last 36 hours. Some of the sites affected include the NHS Business Services Authority, which apologised on its website for the continuing disruption and said it hoped its systems …

  1. wolfetone Silver badge
    Coat

    Capita has crapped out, now to be known officially as Crapita.

    1. Loyal Commenter Silver badge

      You're a couple of decades late to that party.

      I had the misfortune of working with them some fifteen years ago, and we called them that back then.

    2. Steve K

      Private Eye always has

      Already is - see Private Eye magazine passim!

      1. Derezed

        Re: Private Eye always has

        ad nauseam

        1. Mike Pellatt

          Re: Private Eye always has

          Your "Private Eye ad nauseam" is my "Private Eye told the world all about <x>, repeatedly, years before anyone else woke up to it"

    3. Anonymous Coward
      Anonymous Coward

      I know someone who works for them; they have had no internal e-mails or online tools since the outage either. Also, their own staff call it CRAPITA.

    4. Anonymous Coward
      Anonymous Coward

      "The remainder of services are now being robustly tested"

      Translation "Sorry guys, but we pay such low wages that we get the lowest grade staff and they couldn't be bothered to test the generators. Rest assured that those responsible are now busy playing a datacentre sized game of switch it on and pray it comes up..."

      1. macjules

        Capita: "Our service outage is a minor impediment that we occasionally encounter on the road to providing better services to our clients"

        Translation: "Hey. its not OUR fault! We use the same maintenance company as British Airways."

    5. Oh Homer
      Trollface

      "Robustly tested"

      I hope they "robustly test" the cobwebs with a firm brush first.

  2. Lee D Silver badge

    Stop relying on one datacenter to be up.

    This is WHY Windows Server and lots of other OSes have HA functionality.

    Hell, it's not even that hard to enable. Or just provide a secondary system somewhere else that does the same even if you don't have fancy connections between them.

    If your platform is not virtualised, why not?

    If your platform is virtualised, turn on the HA options so that the VM replica in another data center just starts up and becomes the primary and your domain names, etc. resolve to all IPs that can offer the services.

    I still don't get why ANY ONE FAILURE (one datacentre, one computer, etc.) is still a news item nowadays. It shouldn't be happening.

    Even if you deploy on Amazon Cloud or something, PUT THINGS ELSEWHERE TOO. It's not hard.
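
    A minimal sketch of that "put things elsewhere too" idea, purely illustrative (the endpoints are hypothetical placeholders, and a real deployment would do this with DNS failover or a load balancer rather than ad-hoc client code):

    ```python
    # Try a list of independently hosted service endpoints in order and
    # use the first one that answers. Hostnames/ports here are made up.
    import socket

    ENDPOINTS = [
        ("dc1.example.invalid", 443),  # primary data centre (hypothetical)
        ("dc2.example.invalid", 443),  # warm standby elsewhere (hypothetical)
    ]

    def first_reachable(endpoints, timeout=3.0):
        """Return the first endpoint accepting TCP connections, else None."""
        for host, port in endpoints:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return (host, port)
            except OSError:
                continue  # down or unreachable -- try the next site
        return None

    if __name__ == "__main__":
        target = first_reachable(ENDPOINTS)
        print("Serving from:", target or "TOTAL OUTAGE -- no site up")
    ```

    The point isn't the code, it's the shape: nothing above assumes the two sites share anything beyond the data they replicate.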

    1. FrankAlphaXII

      It seems that Crapita don't believe in Business Continuity, otherwise an outage at one datacenter wouldn't take down part of the NHS and a number of local governments. As you stated, there should be no such thing as a single point of failure in 2017. That doesn't bode well for UK emergency preparedness at the most important level. If something as simple as internet communications can be taken down that easily, what happens when more than one of their datacenters fails and can't/won't be restored for weeks or months?

      I work in Emergency Management for a government agency at a local level, plus I develop BC/DR plans for SMBs on the side, so I see this kind of shit out of government outsourcing contractors all the time. Beancounters that run businesses like Crapita (looking at you Serco, Egis, and Leidos) don't get really simple preparedness and mitigation concepts, and if they do understand them, they'll be the first to balk at the price tag associated with them. Until they've had their "efficiencies" blow up in their faces. Thing is, in this day and age fault tolerance and providing an emergency level of service for data when something does happen isn't hard or expensive, and it's really unforgivable that a supposedly first-in-class outsourcing contractor can't provide its expected level of service because their infrastructure's shit and their planning's worse.

      1. Alan Brown Silver badge

        "beancounters that run businesses like Crapita (looking at you Serco, Egis, and Leidos) don't get really simple preparedness and mitigation concepts"

        Then you ensure that the SLAs they sell you hold water and have penalty clauses.

    2. GruntyMcPugh Silver badge

      "Stop relying on one datacenter to be up."

      Indeed, a couple of years ago I did an audit at a well-known bank, covering each of its datacentres, which were almost identical. For some reason the door on the gents in one had a glass panel and the other didn't, and the vending machines in the break area were further apart in one... but the IT equipment was mirrored exactly.

      1. Anonymous Coward
        Anonymous Coward

        > a couple of years ago I did an audit at a well-known bank, covering each of its datacentres, which were almost identical.

        N data centres costs N times as much.

        As an outsource provider, why would you do this when your liability to your customers is limited to giving them a free month's rental?

        1. TheVogon

          "N data centres costs N times as much."

          No it costs even more than that for full resilience. You need all the replication licenses for arrays, software, etc, the testing, the design, the recovery plans, the fast low latency interconnects between DCs, etc, etc.

        2. CrazyOldCatMan Silver badge

          N data centres costs N times as much.

          It's actually a bit more - N data centres cost (N+extra kit to do synch) times as much..

        3. GruntyMcPugh Silver badge

          Well, it's rather down to the procurement process, and to making sure there's a real financial disincentive wrt downtime. It wouldn't surprise me to learn it would just be paid in service credits, however; the relationship between Capita and our Govt is less than healthy.

    3. Anonymous Coward Silver badge
      FAIL

      But infrastructure redundancy eats into profit margins, so why would they?

    4. Anonymous Coward
      Anonymous Coward

      "Stop relying on one datacenter to be up."

      Having 2 DCs and designing for no single point of failure costs ~ 3 times the money. This is government IT we are talking about. The DR plan is probably to build a new DC!

      "If your platform is not virtualised, why not?"

      The usual answer with these types of systems is that they're so large they use the resources of complete physical servers.
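
      Back-of-envelope on where that "~3 times the money" can come from, with every figure below invented purely for illustration (real ratios depend entirely on the estate):

      ```python
      # Rough cost model for "2 DCs, no single point of failure" versus
      # one DC. All numbers are illustrative guesses, not real pricing.
      one_dc = 1.00                # baseline: a single data centre

      second_dc = 1.00             # duplicate kit, hosting, power
      replication_licences = 0.35  # array/software replication licences
      interconnect = 0.20         # fast low-latency links between DCs
      design_and_testing = 0.30    # DR design, recovery plans, regular tests
      staff_and_training = 0.15    # people who can actually run a failover

      resilient = (one_dc + second_dc + replication_licences
                   + interconnect + design_and_testing + staff_and_training)
      print(f"~{resilient:.1f}x the single-DC baseline")  # ~3.0x
      ```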

      1. robidy

        > This is government IT we are talking about.

        No, this is Crapita's overpriced, under-resourced... over-promised and under-delivered service in operation.

    5. Halfmad

      Thing is with these companies, although the contracts may include failover sites etc., when sh!t happens and those don't work they just say "hey, sorry, won't happen again until the next time it happens", and as the NHS is f*cking awful at contract law they have no monetary clause to hammer them with.

      Seen this so often in the past 10 years.

      1. CustardGannet
        Devil

        "Stop relying on one datacenter to be up."

        They would probably listen to your sound advice, if you put it on a sheet of 'Accenture' headed paper and charged them a 7-figure consultancy fee.

        1. handleoclast
          FAIL

          Re:consultancy fee

          My county council pissed away 7 figures to Price Watercloset Coopers to come up with ways of saving money.

          My suggestions:

          1) Don't piss away 7 figures to PwC

          2) Hire staff capable of coming up with suggestions themselves (suggestions other than asking PwC what to do).

          Ooooh, where's the IT angle? My county council uses Crapita for their payment systems. Who'd have expected that?

    6. Tom Paine

      It's not hard, but...

      ...it does cost money. Twice the money, in fact, plus the design overhead.

      1. John Brown (no body) Silver badge

        Re: It's not hard, but...

        "Twice the money, in fact, plus the design overhead."

        Surely Crapita have multiple data centres, so spreading and mirroring the resources should be part of the standard service. Except when it affects the bigwigs' bonuses.

        1. Linker3000

          Re: XML is so 1990's

          My rule for any situation that makes me want to start a sentence with "You'd think that...." is to STOP and take a reality check.

      2. Anonymous Coward
        Anonymous Coward

        Re: It's not hard, but...

        Especially when they're handcuffed to internal suppliers that are bleeding money. (SH)ITES have to claw it back somehow so Uncle Andy makes everyone play nice.

      3. SB37

        Re: It's not hard, but...

        It costs more than double the money. If you want true mirroring for disk storage you'll need four copies of the data - 1 at your source and 3 at your remote datacentre.

    7. Rob D.

      It's not hard but ...

      Actually it is hard, because it isn't that simple. In the real world, most of the problems around business continuity come up because someone has tried to turn a tricky problem requiring attention to detail into something that has a simple answer, which is easier to understand and by definition cheaper.

      Commonly this sounds something like, "We paid to virtualise everything so we can just move it if we have a disaster to the other data centre. Easy - please explain why we have to pay for anything more?"

      Reality bites early in the requirement for budget up front for the significant additional planning, design, implementation, testing, training and infrastructure costs. The details house many devils here. Throw in time required for testing, operational training, and operational proving in production, and by now the System Integrator is wishing you'd never shown up to explain what is missing while they work out how they can get past User Acceptance without anyone realising the business continuity isn't really there.

  3. Anonymous Coward
    Anonymous Coward

    Probably got their own staff to install the back up generators

    And then to test them. What could possibly go wrong.

    1. m0rt

      Re: Probably got their own staff to install the back up generators

      Bets on diesel in the generators being a couple of years old? The fact they are now having an issue with parts suggests that the sudden loss of power caused some great failures.

      Today we shall mostly be Capitalising on the Capitulations of the PITA that is Crapita.

      1. Anonymous South African Coward Bronze badge

        Re: Probably got their own staff to install the back up generators

        Not taking any bets, but regular testing of diesel generators needs to be done.

        Heck, just kick out the mains CB and let the genny take over (for 30 minutes each week); this way you can weed out any old and dodgy UPSes as well.
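
        A toy sketch of the housekeeping side of that advice, assuming someone keeps a log of completed load tests (the dates and the one-week threshold are made up):

        ```python
        # Flag when the weekly "mains CB out, genny carries the load for
        # 30 minutes" test hasn't happened on schedule. Hypothetical log.
        from datetime import date

        TEST_LOG = [date(2017, 5, 5), date(2017, 5, 12), date(2017, 5, 19)]

        def weekly_test_overdue(log, today, max_gap_days=7):
            """True if the last recorded generator load test is over a week old."""
            return not log or (today - max(log)).days > max_gap_days

        if weekly_test_overdue(TEST_LOG, today=date(2017, 5, 26)):
            print("ALERT: generator load test overdue -- and check the fuel!")
        else:
            print("Weekly generator test up to date.")
        ```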

        1. Roger Varley

          Re: Probably got their own staff to install the back up generators

          "Not taking any bets, but regular testing of diesel generators need to be done."

          But they did. They sub-contracted the testing to Atos who declared them "fit to work" ......

          1. handleoclast
            Pint

            Re: Atos

            ROFL.

            Too true.

            And a week after Atos declared them fit to work, they died.

            Have a pint for making me laugh.

          2. PNGuinn
            Mushroom

            Re: Probably got their own staff to install the back up generators

            The gennys were tested weekly but no one thought to buy any fuel and they ran dry after 3 mins?

            They did buy fuel but it was petrol / bunker oil because that was cheaper?

            They went green and bought a load of cooking oil cheap?

            No - Crapita aren't even **that** capable.

            >> This might have helped.

        2. Anonymous Coward
          Anonymous Coward

          Re: Probably got their own staff to install the back up generators

          "Heck, just kick out the mains CB and let the genny take over (for 30 minutes each week)" - but don't, as has happened else where, do this many, many times and forget to refill the tanks once in a while.

        3. Anonymous Coward
          Anonymous Coward

          Re: Probably got their own staff to install the back up generators

          Ours are tested by the power failures hitting some weeks apart, lately... Just last time, our small lab datacenter was kept alive by the UPS and its generator while the main one failed. Later they discovered scheduled maintenance was no longer active. Still, after asking several times, I don't know who's in charge of re-filling the diesel tank (I'm not authorized to perform it myself; you know, the dangers of handling dangerous chemicals and operating machines I was not trained for...)

          1. katrinab Silver badge

            Re: Probably got their own staff to install the back up generators

            Isn't the refilling done by the tanker driver who delivers the stuff?

            1. Stoneshop

              Re: Probably got their own staff to install the back up generators

              Isn't the refilling done by the tanker driver who delivers the stuff?

              As the Germans say 'Jein' (contraction of yes and no): first someone[0], having been notified by Facilities that the tank is running low, has to call the supplier for delivery, then with the tanker arriving someone[1] has to unlock[2] the gate/hatch/trap door to the tank neck.

              [0] from Finance, or Contract Manglement[3]

              [1] from Security[3]

              [2] you don't really want someone peeing down the filler neck, or dropping sand or sugar in.

              [3] in extremely enlightened cases these responsibilities will have been delegated to Facilities as well.

            2. Alan Brown Silver badge

              Re: Probably got their own staff to install the back up generators

              Which is fine until someone shuts off the feed to one of the tanks (vandalism) and said driver pumps N amount of fuel because that's what he's expecting to pump, instead of looking at the fill gauges and stopping when they say "stop".

              Cue multiple thousand litres of diesel not being in the tanks, but instead in the stormwater system and lots of people asking "what's that smell?"

        4. Stoneshop
          Flame

          Re: Probably got their own staff to install the back up generators

          Heck, just kick out the mains CB and let the genny take over (for 30 minutes each week)

          Ingredients: one power grid with regular shortish (30 minutes or less) outages, one computer room floor with various systems, one UPS powering the entire floor running at ~15% capacity, one diesel genny. Due to the regular power dips, we were quite sure the UPS and diesel were functioning as intended; fuel was replenished as needed. Then came the day that the power consumption of the computer room doubled due to an invasion of about 45 racks full of gear. And then came the next power dip. Which made the UPS (powering the computer room; the generator was hooked up so that it basically kept the batteries charged) suddenly work quite a bit harder. And longer, for a number of reasons. Which caused the temperature in the UPS room to rise quite a bit more than previously. Environmental monitoring went yellow, several pagers went off, and Facilities managed to keep the UPS from shutting down through the judicious use of fans scrounged from a number of offices.

          Moral of this story: cooling is important too, not just for the computer room, but also for the UPS room.
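
          In the same spirit, a minimal sketch of the check that story argues for; the sensor read and the thresholds are entirely hypothetical (a real install would poll the UPS or an environmental probe over SNMP or Modbus):

          ```python
          # Watch the UPS *room* temperature, not just the computer room.
          def read_ups_room_temp_c():
              return 38.5  # placeholder for a real sensor poll

          WARN_C, CRITICAL_C = 35.0, 45.0

          temp = read_ups_room_temp_c()
          if temp >= CRITICAL_C:
              print(f"RED: UPS room at {temp:.1f} C -- get cooling in NOW")
          elif temp >= WARN_C:
              print(f"YELLOW: UPS room at {temp:.1f} C -- batteries working hard")
          else:
              print(f"OK: UPS room at {temp:.1f} C")
          ```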

        5. This post has been deleted by its author

      2. Anonymous Coward
        Anonymous Coward

        Re: Probably got their own staff to install the back up generators

        "Bets on diesel in the generators being a couple of years old? "

        The staff probably nicked it all to fill their cars!

      3. Doctor Syntax Silver badge

        Re: Probably got their own staff to install the back up generators

        "Bets on diesel in the generators being a couple of years old?"

        Or the wrong sort of diesel.

      4. CrazyOldCatMan Silver badge

        Re: Probably got their own staff to install the back up generators

        Bets on diesel in the generators being a couple of years old?

        Or, in a very old situation, diesel in the under-carpark tank has seeped away into the subsoil because of a flaw in the tank..

        Which was fun when the generator did kick in for real but only ran for ~20 minutes before exhausting their local tank..

        No-one was checking the levels of diesel in the bigger tank. Oops.

  4. Chris G

    Just wondering

    If Wannacrypt has crapped Crapita

    1. Anonymous Coward
      Anonymous Coward

      Re: Just wondering

      You wish !

  5. batfastad

    Well!

    Well, you don't think that the money their customers (NHS Trusts, Councils etc.) pay actually gets spent properly and proportionally on the infrastructure backing their services, do you?!

    Look, it's contract renewal time... let's take the money and sweat the assets of our existing platform for a few more years. After all, we've got executive pay reviews coming up soon.

    The fact that a DC has gone down and that has taken out production service is unforgivable in this day and age.

  6. adam payne

    Single point of failure again, well done.

    1. Terry 6 Silver badge

      All the eggs

      in one slimy rotten basket

    2. Dan 55 Silver badge

      Who here didn't know Capita is indeed a single point of failure?

  7. Aristotles slow and dimwitted horse

    The realities of Capita IT terminology...

    No single point of failure == Many points of failure.

  8. Anonymous Coward
    Anonymous Coward

    You are all wrong! It says so here: http://storage.capita-software.co.uk/cmsstorage/capita/files/0e/0eb0f967-7265-4dab-afa5-d6db3f9ffbd3.pdf

    Backup data centre in Laindon, diesel generators (tested twice a year), UPSes, etc., so it can't be broken, can it?

    1. John Crisp

      "Backup data centre in Laindon"

      That explains it. The backup gear had all been nicked.......

    2. easytoby

      Unless they are just plain lying...

  9. Anonymous Coward
    FAIL

    Am I missing something...

    "He added Capita has a virtualisation platform which hosts at least 30 clients and many internal Capita systems, "

    The beauty of VMs is you can spin them up in your BAU/DR site... oh wait.

  10. Mike-H

    What if these services had a DR option and the customer didn't take it? There's so much focus on cost these days.

  11. Anonymous Coward
    Anonymous Coward

    I just wonder

    How many of my colleagues find out what's going on by reading the news on el Reg rather than Capita Connections?

  12. Anonymous Coward
    Anonymous Coward

    This is indeed a tragic day for a leading British company. Still at least the weather is perfect!

    1. Anonymous Coward
      Anonymous Coward

      Still at least the weather is perfect!

      For meatsacks not inside a building, yes. But an interesting thought is that summer is now a time of real grid instability, because of all that essentially unplanned solar PV dumped on the grid. Varying output (both predictable and not), asynchronous supply, lack of system inertia: all of these cause network and transmission problems. The hippies may be rejoicing when there's a "no coal" day, but the system operators are sweating, I can assure you.

      And those network stability problems don't need to be absolute failures - just sufficient to push a particular line or substation out of tolerance and trip a breaker, and Bingo! Then you get the knock-on effects. I can't say that had any bearing on Crapita's problems, but it's a big deal that worries the network operators.

      1. Anonymous Coward
        Anonymous Coward

        "lack of system inertia ... a big deal that worries the network operators."

        Doesn't seem to worry anybody in the UK power industry enough to actually *do* much about it (e.g. invest in robustness). Competing privatised stovepipes are not an obvious way to encourage proper joined-up thinking and consideration of the bigger picture - but who knew that?

        Anyway, it's 2017. System inertia, for example, doesn't just come from large lumps of rotating mass. It can come from "synthetic inertia" based on modern high performance power electronics, which achieve the same result as the rotating mass but do it more flexibly, via digital control mechanisms.
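
        For anyone who wants the arithmetic behind "lack of system inertia", the textbook, vendor-neutral relation is: a power imbalance ΔP on a grid of rated capacity S and inertia constant H (in seconds), at nominal frequency f₀, gives a rate of change of frequency of roughly

        ```latex
        \[
          \frac{\mathrm{d}f}{\mathrm{d}t} \;\approx\; \frac{\Delta P}{2\,H\,S}\, f_{0}
        \]
        ```

        Less rotating mass means a smaller H, so frequency falls faster after a trip - which is exactly the term that synthetic inertia mimics with fast power electronics.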

        Companies like ABB have, not surprisingly, been doing this at grid scale for a few years now. [In principle GEC might have had a go too, if they hadn't gone bust almost two decades ago, having made a strategic decision to put money in the bank rather than to invest in products and people and technologies.]

        See e.g. this handy summary of synthetic inertia in general:

        http://www.ee.co.za/article/synthetic-inertia-grids-high-renewable-energy-content.html

        and/or for some rather more detailed analysis with a specific focus on wind, there's e.g.

        http://elforsk.se/Rapporter/?download=report&rid=13_02_

        The UK have largely been ignoring these options, preferring to whinge ("insufficient inertia") rather than invest. It's so much more profitable to continue relying on 1960s miracles of engineering such as Dinorwig's fast-response pumped storage, and to build relatively quick-response diesel generator farms around the country. But other options are available, though some may require people to "think different" and, worse still, some of the other options may have a short-term negative effect on corporate financial results. And apparently that's not allowed.

        [more in a moment]

        1. Anonymous Coward
          Anonymous Coward

          Re: "lack of system inertia ... a big deal that worries the network operators."

          [continued]

          Then again, maybe this (from 2016) is a better late than never sign of better things to come in the UK:

          http://uk.reuters.com/article/national-grid-battery-idUKL8N1B72XQ

          "Aug 26 EDF Renewables, Vattenfall and Eon were among seven companies which won four-year contracts with Britain's National Grid to supply super fast balancing services, National Grid, said on Friday.

          The contracts are the first Britain's power grid operator has awarded to battery storage technology, and were worth a total of 66 million pounds.

          National Grid needs to balance electricity supply and demand on the grid on a second-by-second basis to make sure the system runs efficiently.

          A total of 201 megawatts (MW) of capacity -- roughly the same amount as produced by a small power station -- was secured from seven companies at eight different sites, with the earliest contract starting in October 2017 and the latest in March 2018.

          The amount each company was awarded depended on the amount of capacity offered and how long it would be available for.

          [continues]".

    2. Anonymous Coward
      Anonymous Coward

      Leading? leading on the way down to hell, you mean?

    3. Anonymous Coward
      Anonymous Coward

      "a leading British company" - who's that then?

  13. GingerOne

    How is a company as big as Capita reliant on ONE datacentre? Even forgetting their myriad other failings, surely this is reason enough for all of their customers to jump ship and for no one ever to employ their services again.

    I just cannot believe this. Literally day 1, week 1, IT basics - make it fucking resilient!

    1. Anonymous Coward
      Anonymous Coward

      "make it flipping resillient!"

      Why would the people in charge want to make it resilient? It'll eat into those people's bonuses, surely?

      Until the impact of failure directly hits the pockets of the people in charge, and has a bigger impact than the cost of failure when it happens, those people have no motivation to build resilient systems.

      This isn't the 1990s any more you know, when IT people built systems resiliently **because it was the right thing to do for the customer**, and if you were good as a designer, a system that provided critical functions in a degraded mode in the presence of partial failures wasn't always that much more expensive (in $$$$) than a basic setup straight from the box-shifter's stocklist.

      Those days are long gone. When did you last read a news item relating to (e.g.) Tandem NonStop, or other high availability technology or techniques? Devops, yes. Kodi, yes. Drones, yes. Resilient systems? Pointers welcome.

    2. fruitoftheloon

      @gingerone

      They aren't....

    3. handleoclast
      Devil

      Re: make it fucking resilient!

      They did make it resilient. Well, the important parts.

      If the guys at the top get fired for incompetence (as they truly deserve) they still get a golden parachute. Big money either way. That's true resilience for you.

  14. Anonymous Coward
    FAIL

    Uh-oh!

    "Good afternoon, my name is Steve in Mumbai. I see that the fault you have reported is complete loss of data centre and failure of DR. I am here to help you with your complete loss of data centre and failure of DR. May I ask you first, have you tried turning your computer off an on again?"

  15. Anonymous Coward
    Anonymous Coward

    Presumably Pay360 customers know the system will be down for 5 (and a bit) days each year.
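
    Quick arithmetic on that dig (illustrative only - nobody is claiming this is the actual Pay360 SLA figure):

    ```python
    # What availability does "down 5 (and a bit) days each year" imply?
    downtime_days = 5.3
    availability = 1 - downtime_days / 365
    print(f"{availability:.2%} availability")  # ~98.55% -- a long way from five nines
    ```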

  16. GingerOne

    Is my place of work an anomaly? We don't have DR because we have a resilient always-on system with our own private cloud. I just don't understand why the beancounters in these places don't understand. Yes, good IT costs money, but guess what - it's worth it when shit goes wrong.

    If we lost a datacentre it would be a big worry for the infrastructure team and the rest of us in IT, because our resiliency would be affected, but the general userbase would carry on working as normal, none the wiser to any problems.

    1. easytoby

      It's an anomaly in comparison to NHS and many public sector and large charity situations. Here the knowledge in the customer organisation to specify and enforce appropriate contracts is missing. Also missing in many cases is the leadership strength to demand proper action on 'difficult' situations.

      1. Terry 6 Silver badge

        Part of the problem is that the bean counters demanding the (illusory) cost savings that lead to outsourcing all sorts of services also refuse to pay for/retain the staff who can keep control of it. i.e. You don't just get rid of the school meals service, the cleaners or the payroll etc.; you also get rid of the staff from those departments who know what is needed, and how it should be run. In fact, since the options for front-line staff savings are often not that great, those supervisory and middle-manager staff are the jam on the toast that helps to make the outsourcing costs seem to add up. And middle managers are always seen as a fair target, whereas the top brass on huge salaries always seem to survive.

        (And no, I'm not a middle manager, but I've seen how they and senior front-line staff can make so much difference.)

  17. Anonymous Coward
    Anonymous Coward

    Be prepared

    We keep a spare shilling for the meter nearby. Pah, DR, who needs it?

  18. This post has been deleted by its author

  19. Anonymous Coward
    Anonymous Coward

    Shareholders haven't grasped this yet

    Capita share price up 4.3% today (they were down yesterday as they went ex-div).

    1. Anonymous Coward
      Anonymous Coward

      Re: Shareholders haven't grasped this yet

      Haha! That's nothing, the share price dropped ~50% last year when the profit warning was issued, and never really recovered.

      Two of the directors just happened to have dumped a shit ton of shares the very same day!

  20. PeteCarr
    Facepalm

    No thanks!

    Just had a sales call from S3-Capita trying to flog infrastructure and hosting services. Asked the salesman "Has your data-centre come back online yet?" He laughed uncomfortably, then paused; I interjected, "That'll be a no then, and thanks but no thanks."

    1. Doctor Syntax Silver badge

      Re: No thanks!

      "thanks but no thanks."

      You thanked a (presumably) cold-calling salesman?

  21. Shareholder

    System failure

    Sys failure caused by incompetent directors, caused by a fourth-rate HR section that can only select staff by looking at a bit of paper - not on ability. See what can happen!! Have read enough reports showing bad choices. 90% should be removed immediately, before customers leave.

    1. Inventor of the Marmite Laser Silver badge

      looking at a bit of paper -

      that and using LinkedIn

  22. Terry 6 Silver badge

    It is an eternal mystery

    Capita/G4S/whoever can hit the headlines for all the wrong reasons. Do all the potential clients run away from them as fast as they can possibly go? Or do they continue to line up and buy more?

    What would you expect to happen - and what does happen.

    It seems as though when you get big enough no amount of incompetence and failure can be enough to bring you down.

    1. Anonymous Coward
      Anonymous Coward

      Re: It is an eternal mystery

      "when you get big enough no amount of incompetence and failure can be enough to bring you down"

      The concept of too big to fail was pioneered by the banks with great success. I think that other sectors saw the financial crisis and said "we'd like a piece of that". So Crapita have made themselves a de facto part of the public sector and too large to be allowed to fail. But not just them. You might argue that there are alternatives to Google, and that Facebook is an unnecessary frippery. But would the US government really let those huge and convenient spying machines collapse if push came to shove?

      As another poster comments, the public sector customers ought to be able to nail Andy Parker's scrotum to a gate post, but won't because they are poor at agreeing contracts, poor at interpreting contracts, and worse at holding big suppliers to account. In fairness, the OP didn't mention Fat Andy's knackersack, but the general drift was there.

    2. Anonymous Coward
      Anonymous Coward

      Re: It is an eternal mystery

      Well, let's wait and see... they have the whole of the bank holiday weekend to cobble together some kind of solution... if they're not back by Tuesday, surely someone will start to ask some serious questions about the outsourcing culture that we've adopted via stealth campaigns over many years... this could become a very hot political potato.

      1. Anonymous Coward
        Anonymous Coward

        Re: It is an eternal mystery

        surely someone will start to ask some serious questions about the outsourcing culture

        What's that?

        Rocking the boat that's floated by the extreme capital investment leverage bought by putting customers' balls 5 cm over the asphalt at 110 miles/h?

        Not going to happen if people with share options can pretend to be the one company which exploits IT with efficiency that cannot be found anywhere else on the planet.

  23. fruitoftheloon

    the other data centre...

    I left Capita 10 yrs ago; we had a v v important internal system in West Malling and a 'warm' DR standby in the other data centre.

    We did a real fail-over test (ironically) on my last day, it worked fine...

    I wonder if some of those afflicted by this fsck-up haven't been paying for warm/hot DR; if not, TOUGH SHIT!!!

    1. Anonymous Coward
      Anonymous Coward

      Re: the other data centre...

      Are those that are trying to fix this even in this country?

  24. Anonymous Coward
    Anonymous Coward

    Just like Pigs...

    ... Capita parts don't fly!

    The anonymous customer gave Capita undue credit when he said "They have probably had to fly parts in from out of the country as the infrastructure is so old."

    Were parts needed for this outage (seems unlikely), then I can categorically say that Capita will use the cheapest means possible to ship them - usually next-day courier, as immediate couriers are considered too expensive and need 2 manager approvals. This itself causes untold delays because 1) managers are rarely available, and 2) bonuses could take a hit, so extreme reluctance to authorise persists.

    Also, why should they worry when they're not the ones hurting with system outages, when so often the pain is carried by their customers? Generally the take is that if the customer was stupid enough to take out a contract without service penalties, then there is no need for them to pull their finger out. When parts are needed, the first question (before "What part do we need?") is "Are there service penalties?"

  25. amanfromMars 1 Silver badge

    The Revolution will be Virtualised

    Clouds Hosting Advanced Operating Systems in Chaos and Melting Down. Well, well, well ...... Who'd have a'thunk it ...... a Cyber FCUKishima in Dumb Servering Systems.

    And to think that such is only the Start of the Beginning of All that is Planned. Or would you like to think and disagree?

    1. Scroticus Canis
      Happy

      Re: The Revolution will be Virtualised

      I almost understood that. Damn, this spliff must be good. Or are you back on the meds again?

  26. Anonymous Coward
    Anonymous Coward

    The way to a grand upgrade of the DC's hardware appeared to be not very hard to find... and the shareholders would finally welcome this long-awaited opportunity to invest in the stability of their own future income...

    Just a power fault, not the value service infrastructure (-:

    What do you think would be the lower bid and how long will it stay on bottom after *this*?

  27. petetp

    Crapita eats the shit sandwich again.

    How does this company manage to stay in business?

    1. Destroy All Monsters Silver badge

      Just order more sandwiches?

    2. Vic

      Crapita eats the shit sandwich again.

      As the old saying goes, "The more bread you've got, the less you taste the filling"...

      Vic.

  28. cantankerous swineherd

    has this got anything to do with the British Hairways clusterfuck?

    1. Destroy All Monsters Silver badge

      Multiple clusterfucks incoming

      Apparently they are not related, and BA has denied that any hack occurred.

      “Uh, we had a slight computer malfunction, but uh… everything’s perfectly all right now. We’re fine. We’re all fine here now, thank you.” [winces] “Uh, how are you?”

  29. Scroticus Canis
    Facepalm

    Customer service and data are not Mission Critical

    The new Crapita motto.

  30. handleoclast
    Meh

    British Airways

    BA has suffered a "major IT systems failure" that is affecting its global operations.

    Coincidence or another Crapita customer?

    This one is resulting in catastrophic disruption. Lots of delays and cancellations. On a holiday weekend, one of their busiest times. Gonna be a lot of compensation paid out to very unhappy passengers.

    If (it's a big if, I'm guessing here) BA's IT was outsourced to Crapita, BA is going to demand major compensation from Crapita. Council claims for compensation would be trivial compared to this. So if that's the case, and you have shares in Crapita, now would be a very good time to sell them.

    Again, let me emphasise, I'm guessing. Could be no more than coincidence.

    1. Anonymous Coward
      Anonymous Coward

      Re: British Airways

      I feel sorry for the passengers but not Bastard Airways; serves them right for outsourcing to India:

      "The GMB union says this meltdown could have been avoided if BA hadn't made hundreds of IT staff redundant and outsourced their jobs to India at the end of last year."

      Source:

      http://www.bbc.co.uk/news/uk-40069865

    2. Tail Up

      Re: British Airways

      Coincidence. And there's even another one: at the same time a couple of fighter jets were lifted from a Scottish base to keep IT up there :-)

    3. Robert Forsyth

      Re: British Airways

      These are low-paying holidaymakers, not business travellers.

  31. Chronos

    As Battery Sergeant Major Williams said...

    Oh dear, how sad, never mind.

  32. Anonymous Coward
    Anonymous Coward

    Wankers

  33. itzman
    Mushroom

    What they didn't tell you about the cloud...

    was that it was your security and ability to do business at all you were outsourcing.

  34. Grunt #1

    To all Capita clients.

    Did you buy DR? If you did, was it tested?

    If not, then it is your fault.

    (Ditto BA)

  35. BoringOldSod
    FAIL

    Still down on Sunday 28 May 2017 06:45

    My pension scheme still has no online service. So that's an outage of 72 hours (at least). Perhaps they'll refund the annual management charge?

  36. BoringOldSod
    FAIL

    Lots of Pension Schemes affected

    Not just my pension scheme is affected... some other small schemes also affected: Teachers Pensions, NHS Pensions

    (https://www.nhsbsa.nhs.uk/employer-hub/pensions-online: "Service disruption. We are sorry for the continuing disruption to our services and for any inconvenience this may be causing. We are hoping to have our systems available by noon on Friday so please try again at this time. Until then contact centre staff will, regrettably, be unable to assist with requests that require system access. Thank you for your patience.")... it's Sunday 28 May now...

    good thing it's a long bank holiday weekend... an extra day to get the rubber bands and sticking plasters in place.

  37. Anonymous Coward
    Anonymous Coward

    Beanclusterfsck?

    Previous public sector role: engineers were taken aside early in the job and it was explained very clearly that bad design, corner-cutting or straightforward screw-ups could lead to loss of job and the possibility of personally facing civil or criminal charges (with no official support). Beancounters who force cuts? No sanctions at all. Middle or senior managers who back the bean counters? Safe as houses. Middle or senior managers who back the engineers? Non-existent, as 'the engineers are there to tell us how to do things, it's their fault if it goes wrong'.

    DR for the individual bean counters and friends is well rehearsed, as others have said: sanctions against them seem to be totally absent - cuts can be made, "savings" made, yet when it goes wrong, those people are absent.

  38. Destroy All Monsters Silver badge
    Mushroom

    Self-inflicted Cyber Pearl Harbor

    Always remember the terrible banking holiday of May 2017!

    1) BA down

    2) CAPITA and appendages down

    3) Sainsbury's down

    4) Theresa May generates hot air in parliament

    "I felt a great disturbance in IT, as if thousands of servers nicely aligned in datacenters were suddenly engulfed in fault reports and red blinkenlights, then silenced. I fear terrible downtime has happened."

  39. Grunt #1

    DR and BC are your parachutes.

    Who jumps out of a plane without a parachute and a reserve?

    1. Anonymous Coward
      Anonymous Coward

      Re: DR and BC are your parachutes.

      "Who jumps out of a plane without a parachute and a reserve?"

      Good point - has BA got anything in the air to jump out of at the moment, though?

  40. BoringOldSod
    Thumb Down

    Still can't access my pension record

    So access to my pension scheme website was restored on Sunday 28 May. Sadly it seems that none of the security login credentials have been restored in the backend system: "There has been a problem logging you in as xxxxxxxx. Please contact us and confirm as much information as possible to help us find your account (e.g. name, date of birth, national insurance number, login name)". So now I have to wait until Tuesday 08:30 to contact Capita - because obviously they haven't provided any emergency operational cover over the bank holiday weekend to deal with customer concerns. And what are the chances that come Tuesday I'll be able to get through on the telephone?? Have Capita provided any press statement? Have Capita made any attempt to contact scheme members? Of course not. Appalling (but not surprised).

  41. wyatt

    Back off holiday tomorrow; hope the team I'm part of has fixed anything we have there... Long way to go and fix stuff if not!

  42. Terry 6 Silver badge

    From what I've seen when outsourcing happens (school cleaning, meals contracts, payroll etc.), the suits in an office far removed from the frontline agree a work and cost schedule based on what they dream the job is. But it's really just a fantasy. As in the cleaning contracts: x number of desks at 4 seconds/desk, Y sq metres of flooring at 2 seconds per sq metre, and so on. I'm sure other types of contract are equally unrealistic. But this does not actually allow for how long it really takes to do the job, such as to make the surfaces clean - things like removal of glue and paint, or the general griminess of places where there are kids. And it doesn't take into account how long it actually takes to get round a room full of normal classroom stuff, or to get from one room to the next. All the details that make actually doing the job in the allocated time impossible.

    I have no doubt that every contract glosses over lots of these sorts of real details, and that, as in my observations, the voices of staff who know what the job(s) entail are the least trusted or listened to.

  43. BoringOldSod
    FAIL

    Access to pension scheme still not possible Tuesday 30 May 2017

    So the website was restored on Sunday. Unfortunately the website failed when it tried to communicate with the system backend and members were asked to call the operational team - but of course the operational team were not working on Sunday and Monday due to the bank holiday weekend.

    So I try again on Tuesday at 10am. The website is down again. I call the operational team; they ask for my National Insurance number - they are not aware that the website is down again. I ask to speak to a supervisor - supervisor tells me that the IT team are dealing with an incident caused by a major power failure. Supervisor tells me that the IT team are working to restore the website as the last step in the recovery process. However the supervisor has no timescales for when service will restart. She can only offer an apology.

  44. EnviableOne

    On the other hand, the cost-neutral runway option at Gatwick is starting to look attractive, 'cos even BA can take off from there.

  45. BoringOldSod
    FAIL

    Pension scheme issues still not resolved 6 days later

    So today the online access website reappears. I type in my login credentials and the website accepts them, but when the website tries to communicate with the backend the session fails with the message "There has been a problem logging you in as <username>. Please contact us and confirm as much information as possible to help us find your account (e.g. name, date of birth, national insurance number, login name)."

    So I follow the instructions on screen, only to be told by the poor front-line staff that they are aware of the problem and the IT team are still working to resolve it. I ask how I will know when the service has been restored - I'm told I will receive an e-mail, but probing this a little further, the poor front-line staff tell me that a communication will be sent to all members who have had problems accessing their pension account in... er... "a few weeks". When I question this timescale I'm politely told that my "concerns" will be passed on - who to, I wonder? Some distant black hole in some far away galaxy?

    I'm also told that those members who have not had problems will NOT receive ANY communication about the problem - despite them paying the Capita annual management charge for the scheme. Communicating with all members would seem to me the "treating the customer fairly" approach - much vaunted in the Financial Services industry - but this would cost Capita money, so there's no chance of Capita going anywhere near this suggestion.

    I suspect the only reason that the website has been made available is to avoid the poor front line staff having to e-mail all the scheme forms and documents to disgruntled members - access to these doesn't require a successful login. But perhaps I'm growing cynical in my old age? (as you can see my username here was chosen wisely).

  46. BoringOldSod
    Facepalm

    Teachers Pensions still affected 6 days later

    Teachers Pensions Facebook message posted about 09:30 today (31 May 2017)

    "While our systems continue to return to normal, we are still unfortunately experiencing delays. Please accept our apologies. We are working hard to try and get the systems working as normal and appreciate your patience at this time."

  47. BoringOldSod
    Happy

    Capita just keep giving...

    So now I get an e-mail from Capita's Chief Executive Andy Parker - who in March announced he was leaving Capita:

    "I am very sorry for the service experience and difficulties you have had in accessing the Teachers Pensions website. The system issues that recently impacted Teachers Pensions were caused by a failure of the power to re-instate cleanly back from our emergency generator which had been operating as required in one of our data centres following a power outage in the local area, this then caused damage across connectivity and other elements requiring replacement and recovery of systems. Our IT team have worked continuously since the incident with full support from Capita’s senior management to fix the issue and restore all Teacher Pensions services as safely as possible. Our number one priority was to ensure payment systems were enabled to process payments to teachers and this was successfully restored to ensure all payments due to members were made as expected. Our attention then moved to ensure that the website and other services were restored. All services are now restored and functioning satisfactorally [sic].

    Please rest assured that all personal data and records were safe and secure at all times throughout this incident. We are undertaking a full investigation of the incident and its root cause, and we will identify any further remedial actions that need to be taken to mitigate any re-occurance [sic].

    I have passed on your comments regarding the customer service you experienced and I have asked the business to ensure that lessons are learnt from this and remedial actions taken to ensure customer service is delivered to the highest level at all times.

    Kind regards,

    Andy Parker

    Chief Executive"

    Now this would be reassuring but for one minor detail (and a few typos) - I'm not a member of the Teachers Pension scheme!! Just a lowly member of one of the other lesser pension schemes "managed" by Capita. Perhaps Mr Parker was distracted by a bee in his garden?
