The US House of Representatives' Committee on Homeland Security called this week for the Nuclear Regulatory Commission (NRC) to further investigate the cause of the excessive network traffic that shut down an Alabama nuclear plant. During the incident, which happened last August at Unit 3 of the Browns Ferry nuclear power plant, …
Ah Browns Ferry...
They really should check to see if this place is built on an Indian graveyard.
Reactors 1 and 2 were shut down for a year in 1975 when a worker went looking for air leaks in a firewall (the wall type of fire wall, not the computer kind) with a lighted candle. The flame caught polyurethane foam insulation, which had been installed illegally instead of fireproof mortar. The fire seriously damaged the control cables of both reactors and forced an inspection and upgrade of fire protection at all American nuclear plants.
CTU to the rescue
Did Habib Marwan have anything to do with this? I assume the NRC called in Edgar Stiles and Jack Bauer to save the day!
scrapheap challenge - build a nuke station!
"Such failures are common among PLC and supervisory control and data acquisition (SCADA) systems, because the manufacturers do not test the devices' handling of bad data, said Dale Peterson, CEO of industrial system security firm DigitalBond.
What is happening in this marketplace is that vendors will build their own (network) stacks to make it cheaper"
Since when did the commissioning authorities allow DIY comms in mission-critical systems?!
Control systems are not commercial systems
My father was an EE in the control system industry inventing things like the Bailey Meter 756, the first commercially successful parallel processor.
A later system, the 855, had a hardware scheduler that switched register sets every clock tick so each application had no impact on the CPU time needed by another.
He also designed a network architecture much like IBM's later token ring that had fixed allocations of bandwidth for each node. Again, no node could use more than its share.
These designs are anathema to performance-driven design, but sometimes positioning control rods or transporting heat matters more than raw performance.
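The fixed-allocation idea described above can be sketched in miniature (a toy model, not the actual Bailey 855 or its network; node names and slot counts here are invented for illustration):

```python
# Toy sketch of fixed-share scheduling: each node owns a fixed number of
# slots per cycle, and unused slots are NOT given away, so a chatty node
# can never eat into another node's share.
from collections import deque

def run_cycle(queues, slots_per_node):
    """Serve each node's queue for exactly its fixed slot count."""
    sent = {}
    for node, q in queues.items():
        budget = slots_per_node[node]   # fixed share, never borrowed
        sent[node] = []
        for _ in range(budget):
            if q:
                sent[node].append(q.popleft())
            # an empty slot simply goes unused
    return sent

queues = {
    "rod_position": deque(["r1", "r2"]),                # light, critical traffic
    "historian":    deque(f"h{i}" for i in range(10)),  # heavy, bulk traffic
}
slots = {"rod_position": 2, "historian": 3}

out = run_cycle(queues, slots)
# The backed-up historian moves only its 3 messages; rod_position still
# gets both of its messages through, no matter how busy the historian is.
```

The design trade-off is exactly the one the comment describes: the historian's throughput is capped even when the bus is idle, in exchange for a hard guarantee that critical traffic always gets its slots.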
For BSDs sake
Why would anyone need to write their own dodgy stack code when the BSD reference code is free to use in commercial applications?
The BSDs (FreeBSD, OpenBSD, etc.), Linux, Windows, HP-UX, Solaris and the rest all state that they are not certified for "mission critical" use. The OSes actually deployed are carefully tested for such use; it's just that you can't test for every eventuality. And some piece of hardware broke.
Now Congress has to get their fingers in it with "oh, maybe hackers were involved". I don't know of many SCADA systems that are hooked to "the net". And those that are segregate control from observation functions.
Not cheaper ...
"Such failures are common among PLC and supervisory control and data acquisition (SCADA) systems, because the manufacturers do not test the devices' handling of bad data, said Dale Peterson, CEO of industrial system security firm DigitalBond.
What is happening in this marketplace is that vendors will build their own (network) stacks to make it cheaper"
Not cheaper, as was pointed out with FreeBSD, but totally locked up in terms of integration. A strange Ethernet packet sent to a strange controller. Reminds me of the Banyan Vines protocol that would assemble Ethernet broadcast packets ...
Thus, captive market, and very high prices ... and vulnerability to anything from outside ...
Hey... if Microsoft is involved... why bother conjecturing any further?
PLCs don't run Windows. They are firmware devices. It's always possible that there was a hardware failure in the network equipment.
Why Ethernet? Use a Field Bus
My concern is that people are using Ethernet when there are numerous rugged industrial buses such as PROFIBUS and ControlNet, not to mention just segmenting your network (whatever bus you use), de-rating the network load, and putting monitoring in your PLC or PAC code to look for devices falling off the bus and let the SCADA operator know.
This system was just badly engineered. Everyone knows that PLCs and PACs have flaky Ethernet stacks. They hate too much chatter and disappear from the network from time to time. The automation software for SCADA isn't much better.
If it is critical, I would look at wiring/networking it in such a manner as to make it just a little bit rugged.
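The on-bus monitoring suggested above can be sketched roughly as follows (a toy model; the device names, the timeout value and the `notify_operator` helper are all invented for illustration, not any vendor's API):

```python
# Sketch: track the last time each bus device answered a poll, and raise
# an operator alarm when one goes quiet (i.e. has "fallen off the bus").

POLL_TIMEOUT_S = 5.0  # assumed value: device counts as lost after 5 s silence

def find_stale_devices(last_seen, now, timeout=POLL_TIMEOUT_S):
    """Return devices whose last reply is older than the timeout."""
    return sorted(dev for dev, t in last_seen.items() if now - t > timeout)

def notify_operator(devices):
    # In a real system this would drive a SCADA alarm point;
    # here we just format the alarm text.
    return f"ALARM: no response from {', '.join(devices)}"

# Hypothetical poll timestamps (seconds) for three devices on the bus:
last_seen = {"plc_condensate": 100.0, "plc_feedwater": 103.5, "pac_intake": 97.0}
stale = find_stale_devices(last_seen, now=106.0)
msg = notify_operator(stale)
```

In PLC ladder or structured text the same idea is usually a per-device watchdog timer reset by each good reply, with the timer's done bit wired to an alarm tag.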
I first came across this phenomenon eight or nine years ago. Several PLCs were connected to the Ethernet network (the Ethernet connection was only for remote monitoring/programming, not for control), and all had faulted, shutting down a hydro-electric power station. When I investigated, I discovered that all the PLCs had erased their application software, leaving them looking like they'd just come out of the box.
An IT engineer had been on site at the time replacing a blade in a hub.
On discussion with the manufacturer, this turned out to be a known problem, due to what they called an 'Ethernet storm,' but not one they thought was serious enough to need publicising. They even had a fix, but wanted £2000 per processor to upgrade the firmware.
I pointed out that there were serious implications, especially for chemical/nuclear plants etc., and that they should be proactively addressing the problem. They eventually agreed to escalate it to something called a 'code 10', which meant all our processors would be upgraded for free. In the new revision there was a new register called an 'Ethernet storm counter'.
Since then the problem has recurred, and we are now uprevving all our processors to the latest firmware revision, which they say is now definitely Ethernet-storm resistant (we'll see!).
In critical applications where control system components need to communicate with each other, we do use ControlNet, Modbus, etc., monitor for device failures and follow all the other good practice that Ranjan advocates, but Ethernet is widely used for non-critical connections for MIS, remote monitoring and so on. Who would foresee that a non-critical connection to an Ethernet network could erase the memory of a processor?
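For what it's worth, the kind of condition an 'Ethernet storm counter' register might be tracking can be sketched like this (purely illustrative; the window length and trip threshold are assumed values, not taken from any real PLC firmware):

```python
# Sketch of storm detection: count broadcast frames in a sliding time
# window and trip a flag when the rate exceeds a threshold, so the
# condition becomes visible before the processor misbehaves.
from collections import deque

class StormDetector:
    def __init__(self, window_s=1.0, max_frames=100):
        self.window_s = window_s
        self.max_frames = max_frames   # assumed trip point
        self.times = deque()
        self.storm_count = 0           # analogue of the storm counter register

    def on_broadcast(self, t):
        """Record a broadcast frame at time t; return True once in a storm."""
        self.times.append(t)
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()       # drop frames outside the window
        if len(self.times) > self.max_frames:
            self.storm_count += 1
            return True
        return False

det = StormDetector(window_s=1.0, max_frames=100)
# Normal chatter: 50 frames spread over a second -> no storm.
normal = any(det.on_broadcast(i / 50) for i in range(50))
# A burst of 200 frames in 100 ms -> the detector trips.
storm = any(det.on_broadcast(1.0 + i / 2000) for i in range(200))
```

A counter like this only makes the storm observable; it doesn't explain why excess broadcast traffic should ever be able to erase a processor's application memory, which remains the alarming part of the story above.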