NHS patient database goes titsup

The database that stores vital medical information on millions of NHS patients crashed last week. Outsourcing giant CSC, which won a £973m contract to run part of the Care Records system in 2003 and has picked up more contracts since, was forced to invoke disaster recovery when hospitals and local surgeries were unable to …

COMMENTS

This topic is closed for new posts.
  1. Kris Kirkbride
    Paris Hilton

    Oh dear...

    Epic FAIL.

    Perhaps what they need is some more money... yes, give them more taxpayers' money...

    'Piss up' and 'brewery' spring to mind.

    Paris cos even she could arrange aforementioned drinks.

  2. Anonymous Coward
    Paris Hilton

    Oh Lordy...

    Such a surprise, a government IT project fks up.

    Paris, she'd have seen it coming too.

  3. Anonymous Coward
    Thumb Down

    Redundancy?

    So what you're telling me is that there is no permanently live fallback system that can take over when the main one is unavailable. This is 2008, isn't it?

  4. Tony

    'Care records databases for other regions are run by BT'

    I don't suppose I can opt out of that, can I?

    I DO NOT trust BT with my private data. They have already proven that they can't be trusted with such things.

  5. Anonymous Coward
    Pirate

    GPs were unable to access patient data...

    ...yet there was no impact to patient care?

    So CSC are saying that no GP surgeries in their territory needed Choose and Book, and no one needed to give patients the results of biopsies/blood tests, etc.?

    Then again, it's no surprise. Only last summer at the InterSystems Symposium, CSC were saying their system is 100% reliable, with so many backup systems it would be impossible for it to be unavailable.

    Yarr, pirate icon because you don't have a highwayman icon - because what CSC are charging for a glorified data centre is highway robbery.

  6. Juillen
    Alien

    It always seemed daft

    That under the 'design' of the care records system, there would be no records on a hospital site. Nothing. All done in a remote data centre.

    If you lost your network connection, then the hospital would have no access to records. Its own, or anyone else's.

    And when there is any fault in the data centres, all sites are taken out, instead of just one.
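
    A minimal sketch, under invented assumptions, of the on-site fallback the commenter says is missing: keep a local copy of each record fetched from the remote data centre, so a lost network link degrades access to recently seen records rather than removing access entirely. The names and the fetch_remote function are placeholders, not part of the real care records design.

        import json
        import sqlite3

        class LocalRecordCache:
            """Keep an on-site copy of every record fetched from the remote data centre."""

            def __init__(self, path="local_cache.db"):
                self.db = sqlite3.connect(path)
                self.db.execute(
                    "CREATE TABLE IF NOT EXISTS records (patient_id TEXT PRIMARY KEY, payload TEXT)"
                )

            def get(self, patient_id, fetch_remote):
                """Prefer the remote data centre; fall back to the cached copy if the link is down."""
                try:
                    record = fetch_remote(patient_id)   # placeholder remote call; raises ConnectionError on network loss
                    self.db.execute(
                        "INSERT OR REPLACE INTO records VALUES (?, ?)",
                        (patient_id, json.dumps(record)),
                    )
                    self.db.commit()
                    return record
                except ConnectionError:
                    row = self.db.execute(
                        "SELECT payload FROM records WHERE patient_id = ?", (patient_id,)
                    ).fetchone()
                    if row is None:
                        raise                           # never fetched locally: the failure the commenter describes
                    return json.loads(row[0])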

  7. Stuart Halliday
    Unhappy

    Ooops

    And in those millions of records, some poor sod with a record for back pain probably gets that record corrupted and is now down as having cancer or some other ailment.

    I wonder if individual records have a CRC/hash checksum against them? (A sketch of the idea follows this comment.)

    I recently had cause to get a letter from the government telling me I had no NHI contributions for 2008. Yet they had sent me a computer-generated P60 showing that I had.

    And they want to give us computer-held IDs...
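
    On the checksum question above: a toy sketch of how a per-record hash, stored alongside the record, would detect (though not prevent) silent corruption of the kind described. The field names are invented; whether the real system keeps anything like this is exactly the open question.

        import hashlib
        import json

        def record_digest(record: dict) -> str:
            """SHA-256 over a canonical (sorted-key) JSON form, so field order doesn't matter."""
            canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
            return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

        record = {"patient_id": "0000000000", "condition": "back pain"}
        stored = record_digest(record)          # stamped when the record was last written

        record["condition"] = "cancer"          # the silent corruption scenario above
        assert record_digest(record) != stored  # a verification pass would flag the mismatch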

  8. Anonymous Coward
    Anonymous Coward

    hmmm

    "...there was no impact to patient care."

    Sounds like either a very impressive system or a completely pointless one, if it being down has no impact on patient care!

  9. Anonymous Coward
    Thumb Down

    @Redundancy

    Suspect that the costs for system redundancy were in the budget, but were seen as a nice-to-have and therefore not given the go-ahead, based on the assumption it might only be used once or twice a year... a bulletproof system and "value for money" are always in conflict...

  10. Markie Dussard
    Dead Vulture

    Alternatively ...

    NHS database goes titsup, disaster recovery plan works as designed, no patient data loss and no patient care jeopardised.

    So, everything happened as it was supposed to, but such an inconvenient fact isn't going to get in the way of your pre-slanted story.

  11. Anonymous Coward
    Joke

    Poor show...

    You people are slacking… a dozen comments in and no comments whatsoever about the OS it “must” have been running on…

  12. Anonymous Coward
    Thumb Down

    @ Alternatively...

    But this was not 'disaster recovery' - nobody said anything about fire/flood/dead servers or the like. The system was just unavailable for an extended period of time, which a hot standby would have coped with seamlessly. And even if it had been a disaster, the same rules apply. If this database is so critical - which it apparently is - the backup should kick in without any fuss. Sadly it would appear that no such backup is available. Fail on every level.
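
    A bare-bones sketch of the "kick in without any fuss" behaviour being asked for, assuming placeholder hostnames: the client tries the primary endpoint and silently retries a hot standby. A real deployment would also need replication and fencing, which this toy code ignores.

        import socket

        # Placeholder endpoints: primary first, hot standby second.
        ENDPOINTS = [("records-primary.example.org", 5432),
                     ("records-standby.example.org", 5432)]

        def connect_with_failover(timeout=2.0):
            """Return a connection to the first endpoint that answers."""
            last_error = None
            for host, port in ENDPOINTS:
                try:
                    return socket.create_connection((host, port), timeout=timeout)
                except OSError as exc:          # refused, unreachable, timed out...
                    last_error = exc
            raise RuntimeError("all endpoints down") from last_error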

  13. Wokstation

    @Markie Dussard

    Do you have reading problems?

    "There was a temporary loss of services to a small number of Trusts within our region on 10th February 2009"

    Loss of services means they were unable to access the patient data on the system in several Trusts - that's quite a lot of people whose records were unavailable when they attended an appointment.

    How does that not jeopardise patient care?

  14. Anonymous Coward
    Dead Vulture

    @everyone except Markie Dussard

    Well, yes, that's how it's supposed to work. The shared database is a way for the different clinicians involved in the care of a particular patient to share data in electronic form, rather than via letter or fax. Today, when you leave hospital, your GP is sent a letter describing the outcome of your hospital stay (what was done, medication given, etc). The shared database means that he can now receive that data electronically. This doesn't preclude the hospital system or the GP practice from keeping its own electronic record, as each has always done.

    So, if the connection to the shared database goes down, patient care is not severely affected; it just means that some information generated by another system will only be available once the connection is back up (sketched after this comment). And if some communication needs to happen in the meantime, they can always revert to good old manual methods, such as picking up the phone.

    It'd be nice if you stopped spreading FUD over that programme. I won't deny that it's grossly late and over budget, but it's also been designed so that if one component fails it has minimal impact on patient care. And obviously, the SLA around any system takes into account how critical that system is to said patient care.

    I mean, at El Reg, you do have email, don't you? If your email server fails and is down for some time, it's inconvenient but it doesn't prevent you from doing your job, does it? That's an over-simplification, but you get the point.

    AC for obvious reasons
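
    A sketch of the store-and-forward behaviour the AC describes: if the link to the shared record is down, outgoing updates are queued locally and replayed once the connection returns, so the sender's own system keeps working. The transmit function and in-memory queue are assumptions for illustration, not the real NPfIT design.

        import json
        from collections import deque

        outbox = deque()                         # updates waiting for the central link

        def send_update(update: dict, transmit) -> None:
            """Try to deliver now; queue the update if the shared service is unreachable."""
            try:
                transmit(json.dumps(update))
            except ConnectionError:
                outbox.append(update)            # local work carries on, delivery deferred

        def flush_outbox(transmit) -> None:
            """Replay anything queued once the connection is back."""
            while outbox:
                try:
                    transmit(json.dumps(outbox[0]))
                except ConnectionError:
                    return                       # still down: try again later
                outbox.popleft()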

  15. John P
    IT Angle

    Totally epic fail

    I have a hot backup for our reports system at work, and they can't be bothered to have a hot backup for a system that provides half the country access to individuals' health records?

    What if someone with an allergy to penicillin had gone in and, because staff couldn't check their health record, had been given penicillin?

    No impact on patient care my arse!!!

    There's no muppets icon, damn!

  16. Anonymous Coward
    Thumb Up

    "just unavailable for an extended period of time"

    Quite.

    What makes this even worse is that there are actually bits of the NHS that have a clue (or know who to call when they need a clue).

    For example, if the systems in charge of the UK's national blood transfusion service were to fail for more than a few minutes, what would happen? What if they were unable to cope with the load at a time of particularly high demand? What would happen if three previously-independent regional databases had to be merged into one, with minimal disruption to service?

    Give it some thought, and then read:

    http://www.availabilitydigest.com/public_articles/0310/uknbs.pdf

  17. Anonymous Coward
    Anonymous Coward

    @Totally epic fail

    If someone with an allergy to penicillin had gone in, they would probably know about it by either asking the patient or looking at their notes, which they still have and will for years to come (do you have even the faintest idea how much paper there is in the NHS?).

    Alternatively, being doctors with training and common sense and professionalism and all that lovely stuff, if there's a doubt they wouldn't take the risk, just like it's always been.

    As a result of all this paper, and the fact that it often gets stored off site, for the foreseeable future the IT systems can go down for short periods of time with absolutely no impact on patient care, as all the relevant paperwork will have been collated days in advance.

    Nice bit of self awareness re the muppet icon by the way :-)

  18. Christoph

    Then why do it?

    "There was a temporary loss of services"

    "there was no impact to patient care."

    Then why are they being paid a billion pounds for the system?

  19. Anonymous Coward
    Dead Vulture

    RE: no redundancy AC

    No, no one is telling you there were no backups, because there were, and they kicked in, and the DC failed over, hence no interruptions.

    Come on El Reg, I knew you wouldn't want to post my previous comment, but this scaremongering is way, way below you.

    I don't see the point in jumping on the public-purse commissions' 'good time' bandwagon by painting every indifferent event with a negative slant; all they're doing is trying to justify their existence. Leave them to it, you don't have to join in.

    I'd love to know your sources for this crap...

  20. Anonymous Coward
    Anonymous Coward

    mod. are you or are you not

    Going to post my comment completely debunking your article? Don't like the way it backs up my claims that you're scaremongering, by any chance?

  21. Anonymous Coward
    Thumb Down

    FACTS

    Get your facts straight, El Reg - repeating hearsay could land you in serious bother. You're as bad as the BBC...

    There are many incorrect statements in this report that I am not at liberty to comment on; just get it sorted.

  22. Chris Williams (Written by Reg staff)

    Re: mod. are you or are you not

    Hi,

    As the statement from CSC confirmed, there *was* a loss of service. The spokeswoman also confirmed that disaster recovery was invoked. I'm not clear which facts you're disputing.

    - Chris Williams

  23. Tim Schomer
    Coat

    @ Redundancy?

    No, I believe it's been 2009 for over 6 weeks now.

    ... It's the one with this year's calendar in the pocket.

  24. Alexander Hanff
    Thumb Down

    NUBS 2

    When I worked on the development of National Unemployment Benefit System 2 back in the early 90s (a project run and delivered by ITSA - the Information Technology Service Agency, which was a government department, not an external corporation), backup and redundancy were paramount. The entire system had four sites around the country for redundancy, and it would require all four sites to go down at once for the system to fail.

    Sadly the site where I used to work is now occupied by EDS, and it seems redundancy has become a thing of the past for government IT systems. NUBS2 was by no means perfect and was replaced by the Jobseeker's Allowance system, but it seems to me that things back then (when they were run by civil servants) made a lot more sense from a development perspective than they do now.

  25. This post has been deleted by its author

  26. kain preacher

    I have a feeling

    That the NHS could break the most reliable system... Why do I have the feeling that if the NHS was running Unix, they would be the first major organization to be wiped out by a Unix virus?

  27. Anonymous Coward
    Anonymous Coward

    @ kain preacher

    "That NHS could break the most reliable system .. Why do I have the feeling that if if the NHS was running Unix they would be the first major organization to be wiped out by Unix virus ."

    if? Cerner Millenium for the southern cluster currently provided by Fujitsu (until someone else takes it on) runs on a UNIX OS. I assume BT who are also using Cerner Millenium for the London cluster use a UNIX OS.

  28. Remy Redert

    @Kain Preacher

    Don't blame the NHS. At least not all by itself.

    After all, this is all being outsourced to other companies, who should have the necessary knowledge to secure their servers effectively.

    That said, yeah, if the government runs a system entirely on Unix, it will only be a matter of time until it gets screwed up somehow.

  29. Anonymous Coward
    Coat

    Manual DNS changes?!?

    I can't believe that CSC... no, I'll start again: I suppose I can believe that CSC rely on manual DNS changes for DR (see the sketch after this comment). Accenture were by no means perfect, but once they handed over to CSC, all of the systems were "downgraded" as far as they could be to start bringing in profit as soon as possible.

    I'm afraid that NPfIT, or CfH, or whatever they call themselves this week, are complicit with CSC, as they do an appalling job of checking that delivery is to schedule. Oh, and by the way, security is one of the things cut to the bone; you might want to opt out of the PCR system now.
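
    To illustrate the point about manual DNS changes: even after an operator notices the outage, clients keep resolving the dead address until the record's cached TTL expires. The hostname, addresses and key file below are invented; the nsupdate lines are just the standard BIND dynamic-update syntax someone would have to type by hand.

        import socket

        PRIMARY = ("spine-gateway.example.org", 443)   # invented hostname

        def primary_is_up(timeout=3.0) -> bool:
            try:
                socket.create_connection(PRIMARY, timeout=timeout).close()
                return True
            except OSError:
                return False

        if not primary_is_up():
            # With manual DR, failover now waits on a human running something like:
            #   nsupdate -k /etc/ddns.key
            #   > update delete spine-gateway.example.org A
            #   > update add spine-gateway.example.org 60 A 192.0.2.20
            #   > send
            # ...and then on every resolver's cached copy of the old record expiring.
            print("Primary down; paging an operator to edit DNS by hand.")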

  30. Anonymous Coward
    Black Helicopters

    Wasn't automatic roll-over to second servers part of OBS?

    My recollection is that a second server, situated geographically x (quite a large number) kilometres away, was in the OBS - upon which the contracts were based?

    If so, does anyone know whether automatic roll-over to the second server was also part of the OBS?

    I suspect the contracts (commercially confidential) do not hold any such requirements - seeing as this is not the first time CSC servers have failed without automatic transfer to the backup servers.

    I take it that if there was no impact on patient care, the server which went down held no clinical or mission critical information and was not holding any GP or Community TPP SystmOne records? These are only held on central servers - and, to be fair, I believe TPP has had two servers for about 18 months now.

    How will the situation change if Lorenzo is ever delivered, and *all* medical records of all sorts are held off-site on CSC servers?

    Could I exercise my Choice - as a patient - to receive treatment at a site on a different system with, at the minimum, on-site backup?

    (living in NME: maybe I should emigrate?)

  31. Anonymous Coward
    Anonymous Coward

    @AC poor show, operating system

    It employs a large relational database, so we know the operating system it will be running on: Solaris or a version of Linux. No question there.

    Windows wouldn't even come into the discussion at the requirements stage.

  32. Anonymous Coward
    Anonymous Coward

    Redundancy

    I worked for an investment bank that employed redundancy, with one database server in the UK and the backup in the USA.

    If one failed, I could switch to the other in less than 10 minutes. Both databases were always up to date with the same set of records.

    And they were holding 100 million records.

    Rocket science it ain't.
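
    The client-side half of that setup, sketched with placeholders: try the local primary and fall over to the remote replica if it doesn't answer. Keeping both copies up to date is the database engine's replication job and is simply assumed here; sqlite3 stands in for a real client library.

        import sqlite3                           # stand-in for a real database client library

        # Placeholder connection strings: local primary first, remote replica second.
        DSNS = ["file:primary_uk.db?mode=ro", "file:standby_us.db?mode=ro"]

        def get_connection():
            """Return a connection to the first replica that responds."""
            for dsn in DSNS:
                try:
                    return sqlite3.connect(dsn, uri=True, timeout=5)
                except sqlite3.OperationalError:
                    continue                     # unreachable or missing: try the standby
            raise RuntimeError("no database replica reachable")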

This topic is closed for new posts.
