EMC publicly denies Fairview borkage involvement

EMC denies that any XtremIO failure contributed to the Fairview Health Services system collapses reported in the Minnesota paper City Pages. Here is an EMC statement: The City Pages article was inaccurate and misleading across a wide range of facts, including the comments relating to the Dell EMC products. The Dell EMC …

  1. Anonymous Coward

    XtremIO is still the Samsung Note 7 of the storage industry

    Interesting, no customer quote in that canned statement. The logic then is that an employee of the hospital completely made the whole thing up?

    Short of a statement from the customer, Dell EMC can say whatever they want; it's still hollow.

  2. Anonymous Coward

    "was not the root cause" means something completely different, to me, than was uninvolved.

    1. Anonymous Coward

      "Never went down or failed at any point" doesn't say the data remained accessible in a timely manner, it simply means the cluster or hardware didn't die. That doesn't mean they didn't incur an effective outage due to a lack of responsiveness from the array.

      In terms of root cause that could well have been further up the stack and possibly due to the workload being thrown at XtremIO. If it's not dedupe or compression friendly then that can open a whole can of worms on any array that assumes everything will be...inline all the time:-).

      So pointing out root cause could be construed as a simple way of deflecting from the above. Having said that, it's easy to point to a vendor or a bit of tin in these situations, but typically the true story is much more complex.

  3. Anonymous Coward

    Calling BS

    EMC can make statements, but clearly Fairview's own internal documentation says otherwise. From http://www.citypages.com/news/whistleblower-fairview-health-services-it-system-keeps-crashing/399552011:

    According to internal Fairview documents, glitches related to the EMC storage system are limiting caregivers' access to Epic, a data system in use at Fairview and many other American hospitals. Epic's applications are responsible for everything from registering a patient and scheduling blood work to fulfilling pharmacy orders.

    --

    On September 1, "a major outage was declared" just after 9 a.m., an email written later that same day by (Don) Tierney acknowledges.

    "I'd like to begin by recognizing and apologizing for the difficulties this — and all — system outages cause," it says. "We know outages cause tremendous complications related to patient care and satisfaction, and for many of you, they make your jobs more difficult.

    "Today's event was a result of too much activity occurring on recently implemented storage system."

    --

    1. Anonymous Coward

      Re: Calling BS

      It was probably something HBA-related. Besides, most organisations building on flash will also have some method of instant DR. I'd be surprised if both their primary and DR solutions failed at the same time with the same issue... unless, as I've said, their config is messed up.

  4. Anonymous Coward

    Who at Dell EMC made this statement?

    This statement looks like a copy/paste of a comment in the last El Reg article.

    1. Mark 85

      Re: Who at Dell EMC made this statement?

      Right... from someone who registered on the day of the article and made one post. It does have a certain odor to it, doesn't it?

  5. dpk

    The statement released by Dell EMC was approved by the customer. Did Chris check sources or just report someone else's shitty article? Tabloid journalism at its best.

  6. Destroy All Monsters Silver badge

    So how is King's College doing?

  7. j2pixel

    City Pages, my trusted provider of complex IT information. Are you all even thinking before you cite a source as credible?

    Reading these comments... Sad.

    1. Anonymous Coward

      So you are suggesting that public records and emails are sketchy?

  8. josh.krischer

    Did you ever hear EMC admit a failure? It is always the customer's mistake.

  9. irrision

    "In terms of root cause that could well have been further up the stack and possibly due to the workload being thrown at XtremIO. If it's not dedupe or compression friendly then that can open a whole can of worms on any array that assumes everything will be...inline all the time:-)."

    Certainly not the case with Epic; it runs InterSystems Caché for its backend database, and it compresses and deduplicates (against the half-dozen other full-copy environments that are required to support the system) quite nicely. I've routinely seen six or more full copies combined take slightly less than 20% more space than a single copy takes on an XtremIO array. They don't have a monopoly on this either; I've seen similar results with Pure Storage arrays as well.
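    To put rough numbers on that claim, here is a back-of-the-envelope sketch in Python; the copy count and the "slightly less than 20% more" figure come from the observation above, and everything else (names, exact values) is purely illustrative:

        # Effective data reduction across full Epic copies, assuming six full
        # copies land in roughly 1.2x the physical footprint of a single copy
        # ("slightly less than 20% more"). Illustrative numbers only.
        logical_copies = 6                # prod, test, train, reporting copies, etc.
        footprint_vs_single_copy = 1.2    # physical space relative to one copy

        effective_reduction = logical_copies / footprint_vs_single_copy
        print(f"Effective reduction across copies: ~{effective_reduction:.1f}:1")  # ~5:1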

  10. Anonymous Coward

    Ah so easy to call BS when you know nothing...

    I'm from a vendor... Anyone who truly understands complex IT environments and all the inter-dependencies and possible points of failure isn't commenting on this b/c they know that what EMC is saying is all too plausible. Sometimes config errors in a switch or back-up application, human error during a SW or component update, or some other dumb thing unrelated to the array itself, upstream or downstream, can be the culprit that impacts I/O to/from hosts, and thus apps become unavailable.

    EMC isn't gonna lie just to save its rep; I'd bet my house the customer blessed the statement before it was made. Sounds more like the customer has been going thru a major IT overhaul to run new apps, serve more users and do shit its older systems couldn't handle and their IT dept hadn't attempted before. Whenever any business pushes into new territory, things get glitchy until all the bugs get worked out - a painful process for those who've been thru it. Anyone with real IT experience knows this.

    I question the motives of the trash talkers as either FUD-flinging competitors or trolls who just like to bitch, complain and point fingers at things they know nothing about.

    1. Destroy All Monsters Silver badge

      Re: Ah so easy to call BS when you know nothing...

      Chill, mate.

      We like to make fun of preening vendors and we know about the real world, too.

      "Anyone who truly understands complex IT environments and all the inter-dependencies and possible points of failure isn't commenting on this b/c they know that what EMC is saying is all too plausible."

      You know, there is a deep problem if vendors say that "the system may fail in spite of best efforts because of high complexity". We are deep in self-defeating terrain here, basically hanging around Moscow with Napoleon in winter while someone is setting the buildings on fire...

      ... and you have to pay for it.

  11. Anonymous Coward

    I spent 15+ years supporting and managing enterprise storage at major storage vendors, and I can only blame myself for it... In my experience, few outages are truly the result of bugs inherent to the storage system or a hardware fault.

    Most often customers drive the system into the ground and the reasons can be:

    - no performance and capacity monitoring in place, and no idea what the original specifications were

    - the person that sized and architected the system has long buggered off and is now doing something with cloud, devops or IoT - or is trying to get a job with a vendor

    - the storage admin hasn't been afforded any training on the product

    - the budget is tight and the required upgrade is on hold

    - we've bought the upgrade and we want to "configure it ourselves"

    Long story short: the "enterprise, solution or storage architect" is Teflon-coated and won't accept responsibility.

    For the slightest glitch the storage admin will log a ticket with the vendor as a matter of arse covering.

    With any luck the vendor will hang themselves.

    The vendor SE and sales rep have moved on or won't accept responsibility. Customer management refuse to take ownership and "lean on" the storage admin to sort it out. The storage admin "leans on" vendor support to sort it out.

    It eventually all gets too hard. The customer writes a cheque for a system that's fit for purpose.

    Problems stop for 24 months and the process starts over. Maybe this time we go to the cloud? :)

    The vendor sales rep takes the IT manager to an overseas executive briefing centre. Next they buy another system.
