Capita has crapped out, now to be known officially as Crapita.
'Major incident' at Capita data centre: Multiple services still knackered
A major outage at a Capita data centre has knocked out multiple services for customers – including a number of councils' online services – for the last 36 hours. Some of the sites affected include the NHS Business Services Authority, which apologised on its website for the continuing disruption and said it hoped its systems …
COMMENTS
-
-
Friday 26th May 2017 13:00 GMT Anonymous Coward
"The remainder of services are now being robustly tested"
Translation: "Sorry guys, but we pay such low wages that we get the lowest grade staff and they couldn't be bothered to test the generators. Rest assured that those responsible are now busy playing a datacentre-sized game of switch it on and pray it comes up..."
-
Friday 26th May 2017 10:38 GMT Lee D
Stop relying on one datacenter to be up.
This is WHY Windows Server and lots of other OSes have HA functionality.
Hell, it's not even that hard to enable. Or just provide a secondary system somewhere else that does the same even if you don't have fancy connections between them.
If your platform is not virtualised, why not?
If your platform is virtualised, turn on the HA options so that the VM replica in another data center just starts up and becomes the primary and your domain names, etc. resolve to all IPs that can offer the services.
I still don't get why ANY ONE FAILURE (one datacentre, one computer, etc.) is still a news item nowadays. It shouldn't be happening.
Even if you deploy on Amazon Cloud or something, PUT THINGS ELSEWHERE TOO. It's not hard.
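(Lee D's "resolve to all IPs that can offer the services" point can be sketched in a few lines: only publish the endpoints that actually answer. A minimal, hypothetical Python sketch - the hostnames are invented, and a real setup would use proper application-level health checks and a DNS or load-balancer API rather than a bare TCP probe:)

```python
import socket

# Hypothetical endpoints in two data centres - names invented for illustration.
ENDPOINTS = [("dc1.example.com", 443), ("dc2.example.com", 443)]

def healthy(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def serving_endpoints(endpoints):
    """Filter down to endpoints that currently accept connections.

    Feed the result to whatever publishes your DNS records or
    load-balancer pool, so clients only ever resolve to live sites.
    """
    return [(host, port) for host, port in endpoints if healthy(host, port)]
```

Run that from a couple of vantage points on a schedule and a dead data centre simply drops out of what clients can resolve, which is the whole point of "put things elsewhere too".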
-
Friday 26th May 2017 11:04 GMT FrankAlphaXII
It seems that Crapita don't believe in Business Continuity, otherwise an outage at one datacenter wouldn't take down part of the NHS and a number of local governments. As you stated, there should be no such thing as a single point of failure in 2017. That doesn't bode well for UK emergency preparedness at the most important level. If something as simple as internet communications gets taken down that easily, what happens when more than one of their datacenters fails and can't/won't be restored for weeks or months?
I work in Emergency Management for a government agency at a local level, plus I develop BC/DR plans for SMBs on the side, so I see this kind of shit out of government outsourcing contractors all the time. Beancounters that run businesses like Crapita (looking at you, Serco, Egis and Leidos) don't get really simple preparedness and mitigation concepts, and if they do understand them, they'll be the first to balk at the price tag associated with them. Until they've had their "efficiencies" blow up in their faces. Thing is, in this day and age fault tolerance and providing an emergency level of service for data when something does happen isn't hard or expensive, and it's really unforgivable that a supposedly first-in-class outsourcing contractor can't provide its expected level of service because their infrastructure's shit and their planning's worse.
-
Friday 26th May 2017 12:15 GMT GruntyMcPugh
"Stop relying on one datacenter to be up."
Indeed, a couple of years ago I did an audit at a well known bank; its datacentres were almost identical. For some reason the door on the gents in one had a glass panel and the other didn't, and the vending machines in the break area were further apart in one... but the IT equipment was mirrored exactly.
-
Friday 26th May 2017 13:13 GMT Anonymous Coward
"Stop relying on one datacenter to be up."
Having 2 DCs and designing for no single point of failure costs ~ 3 times the money. This is government IT we are talking about. The DR plan is probably to build a new DC!
"If your platform is not virtualised, why not?"
Because it's such a large system that it uses the resources of complete physical servers is usually the answer with these types of system.
-
Friday 26th May 2017 13:42 GMT Halfmad
Thing is with these companies, although the contract may include failover sites etc., when sh!t happens and those don't work they just say "hey, sorry, won't happen again until the next time it happens", and as the NHS is f*cking awful at contract law they have no monetary clause to hammer them with.
Seen this so often in the past 10 years.
-
-
Saturday 27th May 2017 12:42 GMT handleoclast
Re:consultancy fee
My county council pissed away 7 figures to Price Watercloset Coopers to come up with ways of saving money.
My suggestions:
1) Don't piss away 7 figures to PwC
2) Hire staff capable of coming up with suggestions themselves (suggestions other than asking PwC what to do).
Ooooh, where's the IT angle? My county council uses Crapita for their payment systems. Who'd have expected that?
-
-
-
Friday 26th May 2017 16:28 GMT Rob D.
It's not hard but ...
Actually it is hard, because it isn't that simple. In the real world, most of the problems around business continuity come up because someone has tried to turn a tricky problem requiring attention to detail into something with a simple answer which is easier to understand and, by definition, cheaper.
Commonly this sounds something like, "We paid to virtualise everything so we can just move it if we have a disaster to the other data centre. Easy - please explain why we have to pay for anything more?"
Reality bites early in the requirement for budget up front for the significant additional planning, design, implementation, testing, training and infrastructure costs. The details house many devils here. Throw in time required for testing, operational training and operational proving in production, and by now the System Integrator is wishing you'd never shown up to explain what is missing while they work out how they can get past User Acceptance without anyone realising the business continuity isn't really there.
-
-
-
Friday 26th May 2017 11:02 GMT m0rt
Re: Probably got their own staff to install the back up generators
Bets on diesel in the generators being a couple of years old? The fact they are now having an issue with parts suggests that the sudden loss of power caused some great failures.
Today we shall mostly be Capitalising on the Capitulations of the PITA that is Crapita.
-
Friday 26th May 2017 11:42 GMT Anonymous South African Coward
Re: Probably got their own staff to install the back up generators
Not taking any bets, but regular testing of diesel generators needs to be done.
Heck, just kick out the mains CB and let the genny take over (for 30 minutes each week); this way you can weed out any old and dodgy UPSes as well.
-
-
Saturday 27th May 2017 17:22 GMT PNGuinn
Re: Probably got their own staff to install the back up generators
The gennys were tested weekly but no one thought to buy any fuel and they ran dry after 3 mins?
They did buy fuel but it was petrol / bunker oil because that was cheaper?
They went green and bought a load of cooking oil cheap?
No - Crapita aren't even **that** capable.
>> This might have helped.
-
Friday 26th May 2017 13:20 GMT Anonymous Coward
Re: Probably got their own staff to install the back up generators
Ours are tested by the power failures hitting some weeks apart, lately... just last time our small lab datacenter was kept alive by the UPS and its generator while the main one failed. Later they discovered scheduled maintenance was no longer active. Still, after asking several times, I don't know who's in charge of re-filling the diesel tank (I'm not authorized to perform it myself, you know, the dangers of handling dangerous chemicals and operating on machines I was not trained for...)
-
-
Sunday 28th May 2017 12:18 GMT Stoneshop
Re: Probably got their own staff to install the back up generators
Isn't the refilling done by the tanker driver who delivers the stuff?
As the Germans say 'Jein' (contraction of yes and no): first someone[0], having been notified by Facilities that the tank is running low, has to call the supplier for delivery, then with the tanker arriving someone[1] has to unlock[2] the gate/hatch/trap door to the tank neck.
[0] from Finance, or Contract Manglement[3]
[1] from Security[3]
[2] you don't really want someone peeing down the filler neck, or dropping sand or sugar in.
[3] in extremely enlightened cases these responsibilities will have been delegated to Facilities as well.
-
Monday 29th May 2017 02:25 GMT Alan Brown
Re: Probably got their own staff to install the back up generators
Which is fine until someone shuts off the feed to one of the tanks (vandalism) and said driver pumps N amount of fuel because that's what he's expecting to pump instead of looking at the fill gauges and stopping when they say "stop"
Cue multiple thousand litres of diesel not being in the tanks, but instead in the stormwater system and lots of people asking "what's that smell?"
-
-
-
Friday 26th May 2017 17:16 GMT Stoneshop
Re: Probably got their own staff to install the back up generators
Heck, just kick out the mains CB and let the genny take over (for 30 minutes each week)
Ingredients: one power grid with regular shortish (30 minutes or less) outages, one computer room floor with various systems, one UPS powering the entire floor running at ~15% capacity, one diesel genny. Due to the regular power dips, we were quite sure the UPS and diesel were functioning as intended; fuel was replenished as needed. Then came the day that the power consumption of the computer room doubled due to an invasion of about 45 racks full of gear. And then came the next power dip. Which made the UPS (powering the computer room; the generator was hooked up so that it basically kept the batteries charged) suddenly work quite a bit harder. And longer, for a number of reasons. Which caused the temperature in the UPS room to rise quite a bit more than previously. Environmental monitoring went yellow, several pagers went off, and Facilities managed to keep the UPS from shutting down through the judicious use of fans scrounged from a number of offices.
Moral of this story: cooling is important too, not just for the computer room, but also for the UPS room.
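(The arithmetic behind that near-miss is simple: for a given amount of stored energy, doubling the load roughly halves the runtime, and in practice it's worse, because usable battery capacity drops at higher discharge rates. A back-of-envelope Python sketch with invented figures, not the numbers from the story above:)

```python
def estimated_runtime_min(capacity_wh, load_w, derating=1.0):
    """Crude UPS runtime estimate in minutes.

    capacity_wh: usable battery energy in watt-hours
    load_w:      load on the UPS in watts
    derating:    0..1 fudge factor for inverter losses and the
                 capacity you lose at higher discharge rates
    """
    return 60.0 * capacity_wh * derating / load_w

# Illustrative figures only.
capacity_wh = 40_000              # usable stored energy
old_load_w = 15_000               # the original computer room floor
new_load_w = 30_000               # after an invasion of racks doubles demand

print(estimated_runtime_min(capacity_wh, old_load_w))       # 160.0 minutes
print(estimated_runtime_min(capacity_wh, new_load_w))       # 80.0 minutes
print(estimated_runtime_min(capacity_wh, new_load_w, 0.8))  # 64.0 with derating
```

The point of the derating factor is exactly the story's moral: heat. A hot UPS room means harder-working electronics and batteries, so the naive capacity/load division always overestimates what you'll actually get.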
-
This post has been deleted by its author
-
-
Tuesday 30th May 2017 13:36 GMT CrazyOldCatMan
Re: Probably got their own staff to install the back up generators
Bets on diesel in the generators being a couple of years old?
Or, in a very old situation, diesel in the under-carpark tank has seeped away into the subsoil because of a flaw in the tank.
Which was fun when the generator did kick in for real but only ran for ~20 minutes before exhausting its local tank.
No-one was checking the levels of diesel in the bigger tank. Oops.
-
-
-
-
Friday 26th May 2017 11:24 GMT batfastad
Well!
Well you don't think that the money their customers (NHS Trusts, Councils etc) pay actually gets spent properly and proportionally on the infrastructure backing their services do you?!
Look, it's contract renewal time... let's take the money and sweat the assets of our existing platform for a few more years. After all, we've got executive pay reviews coming up soon.
The fact that a DC has gone down and that has taken out production service is unforgivable in this day and age.
-
-
Saturday 27th May 2017 07:18 GMT Anonymous Coward
Still at least the weather is perfect!
For meatsacks not inside a building, yes. But an interesting thought is that summer is now a time of real grid instability, because all that essentially unplanned solar PV dumped on the grid causes huge instability. Varying output (both predictable and not), asynchronous supply, lack of system inertia: all of these cause network and transmission problems. The hippies may be rejoicing when there's a "no coal" day, but the system operators are sweating, I can assure you.
And those network stability problems don't need to be absolute failures - just sufficient to push a particular line or substation out of tolerance and trip a breaker, and Bingo! Then you get the knock-on effects. I can't say that had any bearing on Crapita's problems, but it's a big deal that worries the network operators.
-
Saturday 27th May 2017 08:02 GMT Anonymous Coward
"lack of system inertia ... a big deal that worries the network operators."
Doesn't seem to worry anybody in the UK power industry enough to actually *do* much about it (e.g. invest in robustness). Competing privatised stovepipes is not an obvious way to encourage proper joined up thinking and consideration of the bigger picture - but who knew that?
Anyway, it's 2017. System inertia, for example, doesn't just come from large lumps of rotating mass. It can come from "synthetic inertia" based on modern high performance power electronics, which achieve the same result as the rotating mass but do it more flexibly, via digital control mechanisms.
Companies like ABB have, not surprisingly, been doing this at grid scale for a few years now. [In principle GEC might have had a go too, if they hadn't gone bust almost two decades ago, having made a strategic decision to put money in the bank rather than to invest in products and people and technologies.]
See e.g. this handy summary of synthetic inertia in general:
http://www.ee.co.za/article/synthetic-inertia-grids-high-renewable-energy-content.html
and/or for some rather more detailed analysis with a specific focus on wind, there's e.g.
http://elforsk.se/Rapporter/?download=report&rid=13_02_
The UK have largely been ignoring these options, preferring to whinge ("insufficient inertia") rather than invest. It's so much more profitable to continue relying on 1960s miracles of engineering such as Dinorwig's fast response pumped storage, and to build relatively quick response diesel generator farms around the country. But other options are available, though some may require people to "think different" and worse still some of the other options may have a short term negative effect on corporate financial results. And apparently that's not allowed.
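(For readers wondering why inertia matters at all: the initial rate of change of frequency after losing a generator follows from the classic swing equation, RoCoF = f0·ΔP / (2·H·S), so halving system inertia H more than doubles how fast frequency falls before anything can respond. A toy Python sketch with illustrative numbers, not actual GB grid figures:)

```python
def rocof_hz_per_s(delta_p_mw, system_mva, inertia_h_s, f0_hz=50.0):
    """Initial rate of change of frequency after a sudden generation loss,
    from the classic swing equation: RoCoF = f0 * dP / (2 * H * S)."""
    return f0_hz * delta_p_mw / (2.0 * inertia_h_s * system_mva)

# Illustrative numbers only: lose 1 GW on a 30 GVA system.
print(rocof_hz_per_s(1000, 30_000, inertia_h_s=5))  # plenty of spinning mass
print(rocof_hz_per_s(1000, 30_000, inertia_h_s=2))  # low-inertia, renewables-heavy
```

Synthetic inertia and fast battery response are ways of buying back that H electronically when the spinning mass isn't there.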
[more in a moment]
-
Saturday 27th May 2017 08:04 GMT Anonymous Coward
Re: "lack of system inertia ... a big deal that worries the network operators."
[continued]
Then again, maybe this (from 2016) is a better late than never sign of better things to come in the UK:
http://uk.reuters.com/article/national-grid-battery-idUKL8N1B72XQ
"Aug 26 EDF Renewables, Vattenfall and Eon were among seven companies which won four-year contracts with Britain's National Grid to supply super fast balancing services, National Grid, said on Friday.
The contracts are the first Britain's power grid operator has awarded to battery storage technology, and were worth a total of 66 million pounds.
National Grid needs to balance electricity supply and demand on the grid on a second-by-second basis to make sure the system runs efficiently.
A total of 201 megawatts (MW) of capacity -- roughly the same amount as produced by a small power station -- was secured from seven companies at eight different sites, with the earliest contract starting in October 2017 and the latest in March 2018.
The amount each company was awarded depended on the amount of capacity offered and how long it would be available for.
[continues]".
-
-
-
-
Friday 26th May 2017 13:26 GMT GingerOne
How is a company as big as Capita reliant on ONE datacentre? Even forgetting their myriad of other failings, surely this is reason enough for all of their customers to jump ship and for no one ever to employ their services again.
I just cannot believe this. Literally day 1, week 1, IT basics - make it fucking resilient!
-
Friday 26th May 2017 13:58 GMT Anonymous Coward
"make it flipping resilient!"
Why would the people in charge want to make it resilient? It'll eat into those people's bonuses, surely?
Until the impact of failure directly hits the pockets of the people in charge, and has a bigger impact than the cost of failure when it happens, those people have no motivation to build resilient systems.
This isn't the 1990s any more you know, when IT people built systems resiliently **because it was the right thing to do for the customer**, and if you were good as a designer a system that provided critical functions in a degraded mode in the presence of partial failures wasn't always that much more expensive (in $$$$) than a basic setup straight from the box-shifters stocklist.
Those days are long gone. When did you last read a news item relating to (e.g.) Tandem NonStop, or other high availability technology or techniques? Devops, yes. Kodi, yes. Drones, yes. Resilient systems? Pointers welcome.
-
-
Friday 26th May 2017 14:00 GMT Anonymous Coward
Uh-oh!
"Good afternoon, my name is Steve in Mumbai. I see that the fault you have reported is complete loss of data centre and failure of DR. I am here to help you with your complete loss of data centre and failure of DR. May I ask you first, have you tried turning your computer off and on again?"
-
Friday 26th May 2017 15:35 GMT GingerOne
Is my place of work an anomaly? We don't have DR because we have a resilient always-on system with our own private cloud. I just don't understand why the beancounters in these places don't understand. Yes, good IT costs money, but guess what - it's worth it when shit goes wrong.
If we lost a datacentre it would be a big worry for the infrastructure team and the rest of us in IT, because our resiliency would be affected, but the general userbase would carry on working as normal, none the wiser to any problems.
-
Saturday 27th May 2017 08:48 GMT easytoby
It's an anomaly in comparison to NHS and many public sector and large charity situations. Here the knowledge in the customer organisation to specify and enforce appropriate contracts is missing. Also missing in many cases is the leadership strength to demand proper action on 'difficult' situations.
-
Saturday 27th May 2017 16:58 GMT Terry 6
Part of the problem is that the bean counters demanding the (illusory) cost savings that lead to outsourcing all sort of services also refuse to pay for/retain the staff that can keep control of it. i.e. You don't just get rid of the school meals service, the cleaners or the payroll etc. you also get rid of the staff from those departments who know what is needed, and how it should be run. In fact, since the options for front-line staff savings are often not that great those supervisory and middle manager staff are the jam on the toast that helps to make the outsourcing costs seem to add up. And middle managers are always seen as a fair target, whereas the top brass on huge salaries always seem to survive.
(And no, I'm not a middle manager, but I've seen how they and senior front-line staff can make so much difference.)
-
-
-
This post has been deleted by its author
-
Friday 26th May 2017 18:56 GMT Shareholder
System failure
Sys failure caused by incompetent directors, caused by a fourth-rate HR section that can only select staff by looking at a bit of paper - not on ability. See what can happen!! Have read enough reports showing bad choices. 90% should be removed immediately, before customers leave.
-
Friday 26th May 2017 19:44 GMT Terry 6
It is an eternal mystery
Capita/G4S/whoever can hit the headlines for all the wrong reasons. Do all the potential clients run away from them as fast as they can possibly go? Or do they continue to line up and buy more?
What would you expect to happen - and what does happen.
It seems as though when you get big enough no amount of incompetence and failure can be enough to bring you down.
-
Saturday 27th May 2017 07:27 GMT Anonymous Coward
Re: It is an eternal mystery
"when you get big enough no amount of incompetence and failure can be enough to bring you down"
The concept of too big to fail was pioneered by the banks with great success. I think that other sectors saw the financial crisis, and said "we'd like a piece of that". So Crapita have made themselves a de-facto part of the public sector and too large to be allowed to fail. But not just them. You might argue that there's alternatives to Google, and that Facebook is an unnecessary frippery. But would the US government really let those huge and convenient spying machines collapse if push came to shove?
As another poster comments, the public sector customers ought to be able to nail Andy Parker's scrotum to a gate post, but won't because they are poor at agreeing contracts, poor at interpreting contracts, and worse at holding big suppliers to account. In fairness, the OP didn't mention Fat Andy's knackersack, but the general drift was there.
-
Saturday 27th May 2017 15:06 GMT Anonymous Coward
Re: It is an eternal mystery
well, let's wait and see... they have the whole of the bank holiday weekend to cobble together some kind of solution... if they're not back by Tuesday surely someone will start to ask some serious questions about the outsourcing culture that we've adopted via stealth campaigns over many years... this could become a very hot political potato.
-
Saturday 27th May 2017 18:34 GMT Anonymous Coward
Re: It is an eternal mystery
surely someone will start to ask some serious questions about the outsourcing culture
What's that?
Rocking the boat that's floated by the extreme capital investment leverage bought by putting customers' balls 5 cm over the asphalt at 110 miles/h?
Not going to happen if people with share options can pretend to be the one company which exploits IT with efficiency that cannot be found anywhere else on the planet.
-
-
-
Friday 26th May 2017 20:47 GMT fruitoftheloon
the other data centre...
I left Capita 10 yrs ago; we had a v v important internal system in West Malling and a 'warm' DR standby in the other data centre.
We did a real fail-over test (ironically) on my last day, it worked fine...
I wonder if some of those afflicted by this fsck up haven't been paying for a warm/hot DR, if not, TOUGH SHIT!!!
-
Friday 26th May 2017 22:48 GMT Anonymous Coward
Just like Pigs...
... Capita parts don't fly!
The anonymous customer gave Capita undue credit when he said "They have probably had to fly parts in from out of the country as the infrastructure is so old."
Were parts needed for this outage (seems unlikely) then I can categorically say that Capita will use the cheapest means possible to ship them - usually next-day courier, as immediate couriers are considered too expensive and need two manager approvals. This itself causes untold delays because 1) managers are rarely available 2) bonuses could take a hit so extreme reluctance to authorise persists.
Also, why should they worry when they're not the ones hurting with system outages when so often the pain is carried by their customers? Generally the take is that if the customer was stupid enough to take out a contract without service penalties then there is no need for them to pull their finger out. When parts are needed the first question (before what part do we need?) is "Are there service penalties?"
-
Saturday 27th May 2017 05:51 GMT amanfromMars 1
The Revolution will be Virtualised
Clouds Hosting Advanced Operating Systems in Chaos and Melting Down. Well, well, well ...... Who'd have a'thunk it ...... a Cyber FCUKishima in Dumb Servering Systems.
And to think that such is only the Start of the Beginning
of All that is Planned. Or would you like to think and disagree?
-
Saturday 27th May 2017 07:45 GMT Anonymous Coward
The way to a grand upgrade of the DC's hardware appeared to be not very hard-to-find... and the shareholders would finally welcome this long-awaited opportunity to invest in the stability of their own future income...
Just a power fault, not the value service infrastructure (-:
What do you think would be the lower bid and how long will it stay on bottom after *this*?
-
Saturday 27th May 2017 15:02 GMT handleoclast
British Airways
BA has suffered a "major IT systems failure" that is affecting its global operations.
Coincidence or another Crapita customer?
This one is resulting in catastrophic disruption. Lots of delays and cancellations. On a holiday weekend, one of their busiest times. Gonna be a lot of compensation paid out to very unhappy passengers.
If (it's a big if, I'm guessing here) BA's IT was outsourced to Crapita, BA is going to demand major compensation from Crapita. Council claims for compensation would be trivial compared to this. So if that's the case, and you have shares in Crapita, now would be a very good time to sell them.
Again, let me emphasise, I'm guessing. Could be no more than coincidence.
-
Saturday 27th May 2017 15:54 GMT Anonymous Coward
Re: British Airways
I feel sorry for the passengers but not Bastard Airways, serves them right for outsourcing to India:
"The GMB union says this meltdown could have been avoided if BA hadn't made hundreds of IT staff redundant and outsourced their jobs to India at the end of last year."
Source:
http://www.bbc.co.uk/news/uk-40069865
-
-
Sunday 28th May 2017 08:47 GMT BoringOldSod
Lots of Pension Schemes affected
Not just my pension scheme is affected... some other small schemes also affected: Teachers Pensions, NHS Pensions
(https://www.nhsbsa.nhs.uk/employer-hub/pensions-online: "Service disruption. We are sorry for the continuing disruption to our services and for any inconvenience this may be causing. We are hoping to have our systems available by noon on Friday so please try again at this time. Until then contact centre staff will, regrettably, be unable to assist with requests that require system access. Thank you for your patience.")... it's Sunday 28 May now...
good thing it's a long bank holiday weekend... an extra day to get the rubber bands and sticking plasters in place.
-
Sunday 28th May 2017 10:45 GMT Anonymous Coward
Beanclusterfsck?
Previous public sector role: engineers were taken aside early in job and it was explained very clearly that bad design, corner cutting or straightforward screw-ups could lead to loss of job and the possibility of personally facing civil or criminal charges (with no official support). Beancounters who force cuts? No sanctions at all. Middle or senior managers who back the bean counters? Safe as houses. Middle or senior managers who back the engineers? Non-existent, as 'the engineers are there to tell us how to do things, it's their fault if it goes wrong'.
DR for the individual bean counters and friends are well rehearsed, as others have said: sanctions against them seem to be totally absent - cuts can be made, "savings" made, yet when it goes wrong, those people are absent.
-
Sunday 28th May 2017 15:01 GMT Destroy All Monsters
Self-inflicted Cyber Pearl Harbor
Always remember the terrible banking holiday of May 2017!
1) BA down
2) CAPITA and appendages down
3) Sainsbury's down
4) Theresa May generates hot air in parliament
"I felt a great disturbance in IT, as if thousands of servers nicely aligned in datacenters were suddenly engulfed in fault reports and red blinkenlights, then silenced. I fear terrible downtime has happened."
-
Monday 29th May 2017 15:55 GMT BoringOldSod
Still can't access my pension record
So access to my pension scheme website restored on Sunday 28 May. Sadly it seems that none of the security login credentials have been restored in the backend system: "There has been a problem logging you in as xxxxxxxx. Please contact us and confirm as much information as possible to help us find your account (e.g. name, date of birth, national insurance number, login name)". So now I have to wait until Tuesday 08:30 to contact Capita - because obviously they haven't provided any emergency operational cover over the bank holiday weekend to deal with customer concerns. And what are the chances that come Tuesday I'll be able to get through on the telephone?? Have Capita provided any press statement? Have Capita made any attempt to contact scheme members? Of course not. Appalling (but not surprised).
-
Tuesday 30th May 2017 09:22 GMT Terry 6
From what I've seen when outsourcing happens (school cleaning, meals contracts, payroll etc) the suits in an office far removed from the frontline agree a work and cost schedule based on what they dream the job is. But it's really just a fantasy. As in the cleaning contracts, x number of desks at 4 seconds/desk, Y sq metres of flooring at 2 seconds per sq metre and so on. I'm sure other types of contract are equally unrealistic. But this does not actually allow for how long it really takes to do the job, such as to make the surfaces clean. Things like removal of glue and paint, or the general griminess of places where there are kids. And it doesn't take into account how long it actually takes to get round a room full of normal classroom stuff, or to get from one room to the next. All the details that make actually doing the job in the allocated time impossible.I have no doubt that every contract glosses over lots of these sorts of real details, and that, as in my observations, the voices of staff who know what the job(s) entail are the least trusted or listened to.
-
Tuesday 30th May 2017 09:37 GMT BoringOldSod
Access to pension scheme still not possible Tuesday 30 May 2017
So the website was restored on Sunday. Unfortunately the website failed when it tried to communicate with the system backend and members were asked to call the operational team - but of course the operational team were not working on Sunday and Monday due to the bank holiday weekend.
So I try again on Tuesday at 10am. The website is down again. I call the operational team; they ask for my National Insurance number - they are not aware that the website is down again. I ask to speak to a supervisor - supervisor tells me that the IT team are dealing with an incident caused by a major power failure. Supervisor tells me that the IT team are working to restore the website as the last step in the recovery process. However the supervisor has no timescales for when service will restart. She can only offer an apology.
-
Wednesday 31st May 2017 14:46 GMT BoringOldSod
Pension scheme issues still not resolved 6 days later
So today the online access website reappears. I type in my login credentials and the website accepts them, but when it tries to communicate with the backend the session fails with the message "There has been a problem logging you in as <username>. Please contact us and confirm as much information as possible to help us find your account (e.g. name, date of birth, national insurance number, login name).". So I follow the instructions on screen, only to be told by the poor front line staff that they are aware of the problem and the IT team are still working to resolve it. I ask how I will know when the service has been restored - I'm told I will receive an e-mail, but probing this a little further the poor front line staff tell me that a communication will be sent to all members who have had problems accessing their pension account in... er... "a few weeks". When I question this timescale I'm politely told that my "concerns" will be passed on - who to, I wonder? Some distant black hole in some far-away galaxy? I'm also told that those members who have not had problems will NOT receive ANY communication about the problem - despite them paying the Capita annual management charge for the scheme. Communicating with all members would seem to me the "treating the customer fairly" approach - much vaunted in the Financial Services industry - but this would cost Capita money so there's no chance of Capita going anywhere near this suggestion.
I suspect the only reason that the website has been made available is to avoid the poor front line staff having to e-mail all the scheme forms and documents to disgruntled members - access to these doesn't require a successful login. But perhaps I'm growing cynical in my old age? (as you can see my username here was chosen wisely).
-
Wednesday 31st May 2017 14:51 GMT BoringOldSod
Teachers Pensions still affected 6 days later
Teachers Pensions Facebook message posted about 09:30 today (31 May 2017)
"While our systems continue to return to normal, we are still unfortunately experiencing delays. Please accept our apologies. We are working hard to try and get the systems working as normal and appreciate your patience at this time."
-
Wednesday 31st May 2017 17:02 GMT BoringOldSod
Capita just keep giving...
So now I get an e-mail from Capita's Chief Executive Andy Parker - who in March announced he was leaving Capita:
"I am very sorry for the service experience and difficulties you have had in accessing the Teachers Pensions website. The system issues that recently impacted Teachers Pensions were caused by a failure of the power to re-instate cleanly back from our emergency generator which had been operating as required in one of our data centres following a power outage in the local area, this then caused damage across connectivity and other elements requiring replacement and recovery of systems. Our IT team have worked continuously since the incident with full support from Capita’s senior management to fix the issue and restore all Teacher Pensions services as safely as possible. Our number one priority was to ensure payment systems were enabled to process payments to teachers and this was successfully restored to ensure all payments due to members were made as expected. Our attention then moved to ensure that the website and other services were restored. All services are now restored and functioning satisfactorally [sic].
Please rest assured that all personal data and records were safe and secure at all times throughout this incident. We are undertaking a full investigation of the incident and its root cause, and we will identify any further remedial actions that need to be taken to mitigate any re-occurance [sic].
I have passed on your comments regarding the customer service you experienced and I have asked the business to ensure that lessons are learnt from this and remedial actions taken to ensure customer service is delivered to the highest level at all times.
Kind regards,
Andy Parker
Chief Executive"
Now this would be reassuring but for one minor detail (and a few typos) - I'm not a member of the Teachers Pension scheme!! Just a lowly member of one of the other lesser pension schemes "managed" by Capita. Perhaps Mr Parker was distracted by a bee in his garden?