It's not about open source
Just because the code is proprietary doesn't mean that it can't be validated or that it cannot be provided to the FDA.
More than one-fourth of defective implantable medical devices discovered this year were probably the result of bugs in the software used to control them, a group advocating open source software claimed in a report that argues against the use of proprietary code in the life-saving products. Although the pacemakers, implantable …
I'm usually an advocate for all things open/unlocked, but medical devices may be a rare exception to that. Don't want patients flashing their own firmwares willy nilly, ha ha.
As for the open sourcing of the code base in the name of security, it makes sense. You wouldn't use proprietary voting machines to tabulate an election, would you? Err, scratch that.
Banks wouldn't trust your money to an unverified closed platform, would they? Oh, wait.
The military wouldn't use a closed source operating system to run its battleships, would it, much less one with known vulnerabilities, right?? Ah, it's hopeless.
Would a random group of open source coders, with little to no knowledge of the hardware involved, be more likely to spot defects in the code than the specialist group of developers who created and reviewed it, along with the extensive testing it would have been put through before use?
There is absolutely no need for the code to be open sourced for it to be properly checked; it was a cock-up, plain and simple! Yes, if you throw an inexhaustible supply of people at it, you are more likely to spot problems, but conversely, any genuine problems spotted are just as likely to be buried in the sea of dross also raised.
Now imagine this had happened and the bug was known and reported, but sat in a queue waiting to be evaluated, or that the software sat on a shelf waiting to be approved while everything raised was investigated, and he died much sooner for lack of the pacemaker.
A surprising number of health professionals also code, and work on open source projects such as OSCAR (google it).
Because you can't trust that the device manufacturer hasn't skimped on QA, and is working to the famous formula that the car industry uses, as explained in Fight Club:
Take the probable rate of failure A
Times it by the number of devices in the field B
Times it by the average cost of a legal settlement, C
A × B × C = X
If X is less than the cost of a recall / extensive QA testing.. they won't do it.
Having intimate knowledge of the pacemaker hardware is a moot point when you are looking for memory overflows, buffer overruns and badly allocated values.
You wouldn't have needed to know the details to prevent this error, and I am sure the same applies to pacemaker and implant stuff too.
um... i don't even know where to start on this one. i have a pacemaker. that i am dependent on. when it "poohs its knickers" - and i don't have a backup generator lying around - i am screwed. second, i am in the security field. i have less than three months before i need a replacement. i have many questions.
mostly - why is the decision to enable wireless communications made by the physician? i think i should get the right to choose that one. i don't use the phone communications for the same concerns. i drive myself to the doctors office old school. and when i get there - if he's pointing a wireless wand at me - i am going to be a wee bit torqued.
i am the one that has the pacemaker. i am the one that has to worry at night what is going to happen to me. i should be the one to decide. and i know i am going to go in there and argue with someone who thinks they are superior - and wireless is good. boogers.
time to dust off my yagi antenna and get crackin'...
Uhmm, perhaps wireless communication is enabled because pacemakers/ICDs are fully implantable devices. With no wires that stick out through the skin. Why no wires? Because of the severe immunological security breach that would result, namely bacteria would cross the break in your skin and infect your pacemaker in about a week or so. Oh, and infect the wires running into your heart.
To prevent unauthorized wireless pacemaker transmissions, try wearing a tinfoil hat and vest.
Or should I call you "Shirley"?
Chain mail is the answer. One source is: < http://www.a2armory.com/chainmail.html >
Your physician gets to decide to use wireless because although you might know about hackers going through the wi-fi, he definitely knows about bacteria going through the wires, and he knows which type of unauthorized access is more dangerous.
just so folks know (because I dunno at this point in this thread if folks are serious or not), but currently my pacemaker is happily working away inside my chest with no wires breaking through the skin.
i go to the doctors office where at a high level description, a person places what looks like a giant mouse over the pacer site, and queries the device through the skin - no penetration - no contraceptives required...
this is not without consequences in and of itself - i had a tech stop my heart and laid me over...last thing i remember is him trying to keep me in an upward seated position, then apologizing for hitting the "let's screw with the patient option and stop his heart for fun" test.
why on [insert favorite god here]'s green earth would i want the option of not only a trained tech having access to this option, but some nob who just downloaded [insert favorite script kiddy website name here]'s "let's nuke people with pacemakers" script?
this is bunk. i want the unit set to off as in hardware disabled and only allowed to be enabled from a pacemaker unit. i don't want some frigtard running [insert favorite 'nix shell here] iwconfig ethWPC (WPC = wireless pacer - clever, eh?) up... and hijacking me with their bluetooth enabled iphone 4 w/ Apple supplied condom and homemade yagi antenna...
now i need a couple of pints...
Just because it is software doesn't make it any less the property and responsibility of its owner. It is no different in this respect than any other item or process used to make the product.
Last time I looked that would be a quarter, or at worst 25%, but "one-fourth"? I'm sorry, it just sounds dumb!
One might think that after the Therac-25 that this sort of thing wouldn't even need discussion.
It's not just implantable medical devices where there's software at the centre performing safety critical jobs. Cars - loads of little software systems in cars, lots of them with your life under their control. Lifts - there's probably software systems in some of those these days.
The car industry likes to refer to the MISRA rules for coding standards. But in my experience that is horse shit. Sure, a developer creating an anti-lock braking system can follow the MISRA guidelines, all well and good.
However, I've taken a look at the source code for the libraries for some of the tool chains that claim to be MISRA compliant. For one of them the library source code was terrible, definitely not MISRA compliant and, as I discovered, buggy. And the compiler too was bug strewn.
Now how is the anti-lock brake developer supposed to develop safe software when the underlying tools are not themselves necessarily any good? Testing is either affordable and likely not completely exhaustive, or commercially crippling. Testing is not the whole answer.
And these shortcomings do show up in incidents involving cars. A friend's car suddenly decided to deploy all the airbags whilst they were driving along the motorway. That's all under the control of a micro somewhere or other in the car, so the likelihood is that the software got it wrong. Luckily, despite much panic and swerving, they didn't go on to have a big high speed accident.
There is certainly a lot of commercial pressure to use software in small, mass produced systems fulfilling safety critical roles. The normal safety critical methods of triple redundancy don't make commercial sense (in the case of cars) or don't physically fit (in the case of medical devices).
The formal algebraic methods for designing software systems fall at the first hurdle because there's no mathematically proven CPU out there. And, so far as I know, the formal methods don't really have a way to deal with asynchronous events, and that discounts the use of useful things like interrupts. So all that makes it very difficult to prove a single CPU implementation of a software system.
Personally I don't think that opening the source code for IMDs will make the blindest bit of difference. Too many people will start having 'opinions', a small percentage of which may actually be valid, and each would have to be very carefully considered. And we certainly don't want a gnuZap, do we?
The companies obviously have a commercial interest in reliability. But frankly, and with the best will in the world, how would anyone actually prove beyond doubt that a death or injury was caused by a software error without the IMD being permanently wired up to a debug rig and the patient being heavily monitored 24/7? I don't doubt that the manufacturers try very hard, but I can't see how they can possibly test every single possible eventuality to complete exhaustion.
What would make a difference would be for someone to manufacture a mathematically proven CPU, and for these industries to adopt formal methods. It's expensive, it's hard, it's not pretty and there aren't many people (least of all me) out there who can actually do these things. But if you did build such systems that way then you would be able to show mathematically that your software/hardware implementation was indeed correct and beyond dispute.
Closed Source software is often questionable. It is so often poorly designed and has too much eye candy. It has a poor uptime record. I cite m$ in particular, but also aim my criticism at a lot of other Closed Source vendors. Another example is some of the commercial crapware used to interface with mobile phones - it just doesn't work.
Now while some Open Source software projects are in their early stages, an important, valuable and wide user base project such as this would receive enormous support and produce a quality product. OK maybe it doesn't have all the fancy features that someone I don't know dictates, but you can bet that it will do what I want : keep me alive and not let me down at the worst moment.
I will also point out that many Open Source coders are highly qualified engineers, scientists, etc. These people are more in touch with reality than most!
So bring on Open Source and improve the product so more people survive. More, these coders will not ask for mega bucks. To know that they have improved the lot of others is enough. They give their effort for the benefit of all.
I do have to ask why such an important device is so complex. Or at least why the core part - the life-preserving ticker driver - is not isolated from the rest of the device such that it will keep going even if the rest fails?
All software has bugs. Sure there's buggy proprietary software out there, but there's far more buggy & crap open source around. Most of that stems from the fact that way too many open source coders get bored once the project is 80% done, and rarely if ever do proper testing anyway (they may think they do, but they don't). People who sell their product for a living have to do some level of proper testing, if only to maintain the level of customer satisfaction required to get paid so they stay in business.
Clearly medical devices have bigger risks, are more likely to lead to court cases when they go wrong, and so require better testing.
The idea that because the code is public it will somehow magically get more and better review and testing is plain daft, even more so in a specialized field where there are very few people with the knowledge required to understand all the corner cases the code must handle.
No way is anyone implanting a FOSS-driven gadget in me, no matter how many self-certified "experts" have "reviewed" the code!
You're not wrong that 80% doesn't get finished or a lot is buggy, but we're not talking about the code being written by the OS community here - only that it is open to review so that bugs and issues might be more readily found.
Would you like some help with that?
"100% inspection is 80% effective." I've seen that borne out dozens of times both in manufacturing and coding. The whole "Open-Source" finds all the bugs argument is a sham.
A formal CPU (I thought there was one out there for military applications) and formal methods and testing are the best bet for eliminating bugs.
I (at one time) worked for an ICD company. We extensively tested the silly thing, and ran it through its paces. It took no less than 4 W95 machines to do the testing (I thought 4 machines was overkill). The actual devices used 65C02s (it was back in 1998), and they got "permission" from the 65C02's vendor to use it in a medical device. The code WAS in ROM (unalterable), but the code used parameters to do the setup. The units used inductive pickups to do the transmission to/from the implantable device. With the cost of the beasts, due to its development and insurance, they included a laptop for the programming (setup) of the device.
So, yes they are looked at quite extensively, but as everyone says, there is ALWAYS one more bug! So I do applaud the added scrutiny.
Patients, doctors and even technically knowledgeable outsiders couldn't possibly evaluate this code in any useful way. You'd get about 500,000 lines of uncommented C or C++, the writing of which was based on decades of proprietary knowledge which is NOT 'open source' and without which the source code alone will tell you nothing.
It's fine to have the FDA auditing these companies to ensure they're working in ways that make good engineering sense. But beyond that, we have to simply trust the makers of medical devices and accept our fate.
It's a flipping pacemaker. It's got a little 8 bit micro in there. And you're wrong, it doesn't take any specialized knowledge to identify many of the kinds of mistakes that are found in them. I'd go on about how I hate everything you've written, but none of that is fit to print.
"You'd get about 500,000 lines of uncommented C or C++"
If that were the case, the manufacturer deserves to be shut down!
As already pointed out, you are looking at a small low power CPU because you can't have a lot of power in the first place, and (hopefully) the minimum amount of code that will do the required task. Code that should be properly structured, documented and commented.
I think certification should include a code review, but I doubt that 'functional' part is easy to verify by those outside of that industry, even though the 'procedural' part is subject to the usual sort of bugs that a lot of software suffers from.
But not all; at least it is unlikely that they use malloc()/free() types of operations (a common source of long-term failure due to memory leaks), and they probably have only limited I/O with some sort of careful sanity checks for M&C.
In my line of work I have programmed DSP for embedded systems without review and they stay up for years, so it is possible to make reliable software. But I would not bet my life on my code unless I had a 2nd (and 3rd) opinion!
If it had 500,000 lines of commented code, or even just 500, it would be enough to FAIL IT by any serious programmer.
'It was hard to write, it should be hard to understand' is NOT the way to program.
Blue screen of death.
Clearly you've never thought about embedded software. How often does your microwave oven crash? Or your TV? Or your DVD player? Or your bedside clock? Or your digital watch? Every single one of those is using embedded software, and I would be prepared to stake my entire life's savings (and indeed my life itself) on all of these having a better up-time than the best-developed open-source project. Sorry, you're too ignorant to have an opinion worth listening to.
And then we come to the whole idea of open-source code in embedded software. Idiotic isn't the half of it. Sure, if it's something that can readily be made standard, and you've got oodles of processing power to handle all the different configurations, and you've got endless testing resources to make sure all the different configurations can be tested, then great. Back in the real world, anyone thinking this is a good idea on a tiny low-power micro is too stupid to draw breath. Additionally, if you think that "many eyes" from amateurs can replace a proper validation program, you're probably too stupid to be able to read.

Even in a context which people here might have met, there's a major reason Macs crash less than PCs, and it's not better coding - it's because Macs only have an infinitesimal fraction of the different configurations that a PC can have, so the same amount of testing on both will cover a much smaller percentage of the possible use cases on the PC side. Taking that to the extreme, a games console almost never crashes, because there's only one possible configuration.
I have no problems with *disclosure*. If someone wants to say "this is our code - have a look if you want", then fine. Frankly I'd like to see more issue databases made publicly-available too, so it's harder to cover stuff up. But the idea of setting up an open-source pacemaker project on SourceForge is so ludicrous as to deserve the scorn of every engineer and healthcare worker worldwide. "They all laughed at Christopher Columbus - but they also laughed at Bozo the Clown". I know which category I'd put you in.
"Every single one of those is using embedded software, and I would be prepared to stake my entire life's savings (and indeed my life itself) on all of these having a better up-time than the best-developed open-source project."
Sorry, but my cable box frequently crashes, and I hear the same reports from those with Sky TV boxes, and my radio watch has often gone to the wrong time due to inadequate data validation (I presume). Whereas our work Linux server and desktop PCs frequently have uptimes of several years, until typically the AC power or HDD fails.
Your reasoning that Windows vs Mac reliability is predominantly down to hardware choice is also a tad myopic, why has there been *so* much trouble with IE over the years? Nothing to do with hardware, a lot to do with corporate goals that were ill aligned with security or reliability.
Most of the rest of your comments are very reasonable, and I have no doubt you are a capable developer, but I am afraid you have a rather skewed view of 'open source' compared to my experience.
Thank you so much for enlightening me ! You clearly know your stuff. Hmm. And you call me ignorant.
Clearly you need to think. Yes, the bloody device *should* be a simple low-power PIC-based device. But we're talking about historic data retention, the means to extract that data, and reconfiguration of the device. Also, do we want to perhaps make the device able to recognise some of the more dangerous heart conditions that might occur? Suddenly we have far more scope for complexity in the device. We also have what you are too stupid to consider: the host operating system and proprietary interface software of the tools used with the device. Next time use your brain!
Enough said, except you could quadruple your brain power with the simplest PIC available clocked back to 1Hz !
Eh? On the basis of a sample of code which *doesn't* meet MISRA, you say that MISRA is horseshit? Did I miss something here - like perhaps a toke of the spliff you were passing round...? ;-) I could tell you I'm an F1 driver (and I'd be lying), but that doesn't make Michael Schumacher's driving skills horseshit.
MISRA is *not* horseshit. MISRA starts from the base assumption that everyone screws up sometime, and it proceeds from there to try to minimise the risk of that screw-up getting out into the world. At the lowest level, you have MISRA-C coding standards which guide coders away from risky coding practices. Above that, you have a way of running your project so that reviewing and testing are done to best practice. All this is worth nothing if (a) the coders are crap, (b) the coders don't actually follow their quality plan for reviews and testing, (c) the management don't make sure the coders follow the quality plan, (d) the management don't let the coders fix bugs, and/or (e) the QA department doesn't keep a proper eye on the project to make sure any of this happens. This isn't the fault of MISRA, any more than Jonestown was the fault of Kool-Aid. In the defence world, DO-178B takes a similar view, only with more formal guidance to what steps you take.
I take your point about CPUs - which is why in SIL3 systems it's mandatory to have multiple CPUs (ideally with independently-written software to avoid them all hitting the same bug at the same time) cross-checking each other. DO-178B also covers this. And whilst formal processes can't cover interrupts, there are *very* well-established methods for ensuring that your code will run safely in a multi-threaded or multi-rate system, even if you're using global variables (as most embedded code on a low-power micro does).
I've been working in embedded software for the last 15 years, much of that in automotive. For a safety-related project, my typical numbers are 10% of the time on requirements, 25% on design, 5% on coding - and 60% on testing. And that's just for one module, which then gets passed over to the customer for another year or so of integration testing, EMC testing in a whacking great Faraday cage with a huge sod-off spark generator, test-driving round a track with guys who can do unbelievable tricks with the car (I know of a driver having to go up a test ramp forwards fast, hand-brake-turn on the point of the summit and reverse fast down the other side of the ramp, so that rapid forwards/backwards changes in speed were checked), and pulling off every wire or jamming every component individually and in combination to check that things follow the failure modes they're supposed to. That doesn't mean a bug *can't* get out, but it makes it pretty damn unlikely - to the extent that you're getting more likely to have hardware failure than software failure. (SIL3 also mandates multiple-storage of variables to allow detection of and recovery from random memory corruption, for example.) It's quite likely that the software in your friend's car got it completely right, but the crash sensor failed.
SIL3 = retard designed the plant ! If it was designed properly no SIL ratings would be needed. Are you involved in plant design ?
Time and again at work I see more and more levels of paperwork and approval. Has it improved anything? NO. All that happens is that the paperwork is considered a guarantee of a successful outcome. Not so in the real world.
Now go pick your spots, say night to mom, get in bed with her on your side, try not to wet it, and suck your thumb.
> From 1997 to 2003, at least 212 deaths resulted from defects in five different brands of defibrillators.
My understanding is that a defibrillator is a last-resort treatment for someone whose heart has stopped, or whose heart is no longer beating effectively. A person who is at death's door, in other words.
In some cases the machine will work and the patient's heart will again start beating normally. In others ... the jolt fails, and the patient dies. The latter must be fairly common.
How can they tell that "at least" 212 of the people who died, did so because the machine's software was defective, rather than because the patient was beyond saving? Note "212", not "about 200". Even in the only case I can think of where it would be obvious that the machine malfunctioned -- the one where it refused to deliver a shock to the patient at all -- how can anyone say with certainty, that patient could have been saved?
I'm also wondering whether these machines have an emergency bypass - sod the computer, give the patient a jolt NOW! - or whether that is a bad idea. I do know that medics had completely manual defibrillators before microprocessors existed. Did these kill more than they saved?!
If you use it, I bet you would want 100% uptime on these things, whether they are open source or not. This thing just gotta work, otherwise YOU DIE.
Since the original software and firmware borked out, and the original hardware cannot be replaced THAT easily, you just need to outsource the code to run your ticker to a third party, anyway.
It better be more efficient and dependable than your brain, at any rate.
RIP slab, because, you know...
I have seen this interesting discussion and would like to comment on a few things:
Some people think that when you get the OK from the FDA or whatever, then it's fine. But do they have the experienced people to check the code? Did they ever build such a device - a pacemaker, breathing apparatus... (you name it)? Most likely not.
When you talk privately to programmers at conferences you hear even worse things: bugs that never got fixed because it would require a new certification, auditing... so it's never done, too much red tape.
Medical devices can have a rather long use time, and how often do they check if an upgrade is available? Is the company that wrote the code still living? If nothing breaks they tend to keep it as it is.
As a programmer I often wonder that so few serious cases come up.
212 people dying from medical software failures over a five-year period means that the engineers who do this stuff are doing very well. I wouldn't care to guess the number of people who require defibrillator treatment per year, but I wouldn't be surprised if this fell within five nines of reliability (it would mean that roughly 1% of the US population required defibrillation every year).
As others have pointed out, a deep understanding of the hardware (and its undocumented bugs that are worked around in software) may well be critical. The main argument seems to be that more auditing is necessary, but open source is not the only route available-- companies could certainly release to independent testers and auditors while protecting their IP through NDAs. This would be likely to attract more experts, create more room for open dialogue, and allow more knowledge transfer about hardware details than plopping up a bunch of dense code on the web.
Even though open source does have the potential to have a positive influence on the code quality, it does not always work that way. But even in the case it makes it better, that is still not good enough for medical and many other appliances. What the software industry needs is engineering, not craftsmanship/programming.
By use of Formal Verification ALL defects can be eliminated, rather than merely reduced with Open Source.
Verum has the solution for this with their next generation Model Driven Design tooling based on formal verification. Patented software that guarantees defect free behavior in software. ( www.verum.com )
"Patented software that guarantees defect free behavior in software"
Firstly, maths is not patentable, at least not in civilised countries, and all formal logic is based on that.
Secondly, can this tool verify itself? Even if you have no errors in the formal specification for the system (and that is a BIG if), how can you be sure its code generation is without flaws? Had it also formally verified the compilers and libraries used to build the end applications from its auto-generated code?
And as others have already pointed out, can it in turn verify the CPU logic (even as VHDL or similar code) it will run on? Oh, and the VHDL compiler...
Remember the early Pentium's FPU bug?
Yes we need engineering and matching tools, but we also need to have projects that are of manageable complexity and where thorough testing is employed. No matter how much someone tells me a simulation works, I will not accept it until it has been demonstrated in hardware under both normal cases, and under out-of-specification cases, to see what really happens.
"a lot to do with corporate goals that were ill aligned with security or reliability."
IE was common to Windows and Mac. And I don't remember it being the cause of PC crashes either. Nor is Firefox exactly a poster child for security and robustness either.
Most PC crashes today are due to driver-level problems. Drivers are needed to support hardware. It's not a huge stretch to say that the more variation in hardware you've got, the more chances you have of hitting a driver-level problem.
And I'll own up to having seen digibox crashes too. But a digibox is just a PC on the inside anyway. Not all embedded stuff is as robust as everything else. :-)
Try it for yourselves, like Philips, Ericsson, Logica, Bosch Security Systems etc., who use it and have experienced the power of it (and that it works...).
And yes, maths by itself is not patentable; the magic triangle of correctness is. But it is better to start using it, like I do too, and find out for yourself. The only person who can convince you of anything is you.