Reply to post: Working for emergency services IT...

FYI: That Hawaii missile alert was no UI blunder. Someone really thought the islands were toast

Anonymous Coward

Working for emergency services IT...

...for a large, international corporation (the facility I work at is the size of a small city, with 5,500 employees, and they have similarly large facilities in Canada, Australia, and Europe), I can certainly understand how something like this goes down. At the company I work for now, the alerting system can sometimes be ambiguous. It has four basic levels of panic: Green (meaning resolved/benign), Yellow (meaning change in status, check me out), Orange (meaning someone should really look into this), and Red (meaning get your ass over here post haste!).

Many of the Yellow alerts are actually important, very, very important: it might mean that the temperature is low enough that something like a water pump won't work because it's iced up (this is northern Minnesota, where -20F is routine Dec-Feb, sometimes for weeks at a time). Conversely, a Red alert might be something as benign as a GFCI breaker tripping, one which powers/affects nothing. For someone who is only instructed that Yellow can wait but Red means ASAP, that often means a GFCI outlet gets reset in 20 minutes, while a frozen fire-water pump has to wait three days until a crew with blow-torches and heaters can thaw it out, and sometimes another week until the burnt-out motor can be replaced.
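
To make the mismatch concrete, here's a minimal sketch (Python, with made-up source names; this is not our vendor's actual schema) of the idea that an alert's colour and its real response priority have to be mapped per source, rather than assumed from the colour alone:

    from enum import IntEnum

    class Colour(IntEnum):
        GREEN = 0    # resolved/benign
        YELLOW = 1   # change in status, check me out
        ORANGE = 2   # someone should really look into this
        RED = 3      # get over here post haste

    # Per-source overrides: a Yellow freeze alarm on the fire-water pump matters
    # far more than a Red trip on a GFCI outlet that powers nothing.
    RESPONSE_PRIORITY = {
        ("fire_water_pump", Colour.YELLOW): "immediate",   # risk of an iced-up pump
        ("unused_gfci_outlet", Colour.RED): "next_round",  # benign nuisance trip
    }

    def priority(source: str, colour: Colour) -> str:
        # Fall back to the colour's own urgency only if no override exists.
        default = "immediate" if colour >= Colour.ORANGE else "routine"
        return RESPONSE_PRIORITY.get((source, colour), default)

The point is simply that "Red means ASAP" is a default, not a law of nature; the operator's instructions need the overrides, too.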

Many SCADA systems are designed similarly (in my experience) and I would assume that the sort of system used by the HIEMA would also be similar to that of our vendor, who specializes in emergency alert/management systems (and who also claims to offer services to state, national, and international organizations).

Many of the alarms I have to resolve manually involve what would be, to the typical person, very ambiguous status updates, often updated frequently, from systems that are:

1) poorly configured (the client has no real notion of what's important and what isn't, especially if they've been operating for 100+ years), yet they demand that this and that are real emergencies even when they're not. Everything is important; throw it at the wall and see what sticks.

2) overly sensitive (losing communications with remote sensors for 3 seconds might trigger a Red alert; if this happens frequently, the PIC becomes numb to red alerts and simply resolves them all, even if the alert is "MELTDOWN IMMINENT: EVACUATE 100 MILES NOW", whereas neophytes send in 10 reports a minute about a radio link that went down for 30 seconds; see the flap-suppression sketch after this list)

3) lacking any protocol for verification: there is no one in the department involved available to verify that there might be an explosion, fire, zombie invasion, etc. The rationale for this often involves obscure union lore and tradition, where 90 years of collective bargaining amounts to "it's not my job, unless there's overtime at double scale." That, or, as this report implies, they were experiencing a life-changing bowel movement when the pretend apocalypse was occurring.
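
For point 2, the usual remedy is some form of flap suppression, so a link that blips for a few seconds never pages anyone, while a sustained outage still escalates. A minimal sketch (Python, with an arbitrary grace period; not any particular vendor's logic):

    import time

    class LinkAlarm:
        """Suppresses alerts for brief comms drops; escalates only sustained outages."""

        def __init__(self, grace_seconds: float = 60.0):
            self.grace = grace_seconds   # how long a drop must persist before escalating
            self.down_since = None

        def link_down(self) -> None:
            if self.down_since is None:
                self.down_since = time.monotonic()

        def link_up(self) -> None:
            self.down_since = None       # a short blip clears without ever paging anyone

        def poll(self) -> str:
            if self.down_since is None:
                return "GREEN"
            if time.monotonic() - self.down_since < self.grace:
                return "YELLOW"          # noted, but nobody gets paged yet
            return "RED"                 # sustained outage: now it's worth a page

Tune the grace period per link and the PIC stops being trained to ignore Red.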

This is NOT to say that such scenarios should not be planned for. Lord knows I've been involved with enough of them to know that not every event you think is a drill is, in fact, a drill, but these things need some level of actual coordination among the PICs. Right Hand, meet Left Hand. For two supervisors to perform this sort of drill (one of whom is apparently ignorant of the fact that it is a drill) and then pass the blame to the person who pushed the button is inexcusable. If anyone should be reassigned/punished, it should be the first shift (midnight) supervisor. Send him or her back to the dispatch chair for a spell to relearn what it means to be a grunt, and why clear, unambiguous commands are important in any large security scenario. The person who actually pushed the button should be cautioned: verify with your supervisor, even if s/he's dropping a deuce.

Even where I work, this proposed excuse is a viable one, and indeed something similar has happened before. But it happened long ago, and remedies and protocols have since been put in place so that there is a minimum of 10 minutes of overlap between shifts, during which proper passdown can happen among everyone; the various team leads inform each other, both verbally and via email, of impending drills, continuing incidents, and the like. Scenarios like drills and tests are forwarded to shared email accounts (locally hosted, dedicated IBM Notes, no less, isolated from our slow/intermittent Outlook 365 cloud system; this is northern Minnesota, where even enterprise broadband can be spotty) where all authorized parties see the same messages. This creates an unambiguous chain that can be followed to see where communications broke down. You do not, in this day and age, in public or private sector security/intelligence, simply rely on "oh, I thought I told so-and-so..." to cover your behind. Document, verify, and if all else fails, follow your standing orders and watch as sh!t miraculously defies gravity and rolls uphill.
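
That "unambiguous chain" boils down to an append-only record that both shifts read from. A toy sketch of the idea (Python; the file name and fields are invented for illustration, and the real system here is shared IBM Notes mailboxes, not a flat file):

    import json, datetime

    def log_handover(path: str, author: str, kind: str, summary: str) -> None:
        entry = {
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "author": author,
            "kind": kind,        # "drill", "test", or "live"
            "summary": summary,
        }
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")   # append-only: nothing gets overwritten

    # e.g. log_handover("handover.log", "midnight supervisor", "drill",
    #                   "Missile-alert exercise at shift change; NOT a live event.")

If the record says "drill" and someone still pushes the live button, you know exactly which link in the chain failed.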
