Don't trust deep-learning algos to touch up medical scans: Boffins warn 'highly unstable' tech leads to bad diagnoses

MadAsHell

Re: Human vetting

The key here is invisible-at-first decision support. Let the humans do their assessment first, and only then show them the machine's verdict, possibly reducing the number of false negatives (generally a good thing, though even that's debatable, e.g. DCIS in breast cancer).
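Something like the following, in rough Python (a toy sketch only; every name here is made up, not any real PACS or model API):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Case:
    scan_id: str
    human_verdict: Optional[str] = None    # recorded first, blind
    machine_verdict: Optional[str] = None  # revealed only afterwards

def blinded_review(case: Case,
                   human_read: Callable[[str], str],
                   model_read: Callable[[str], str]) -> Case:
    # 1. The human reads the scan with no sight of the model output.
    case.human_verdict = human_read(case.scan_id)
    # 2. Only once that verdict is committed does the machine's
    #    opinion appear, acting as a second reader that may catch
    #    a human false negative.
    case.machine_verdict = model_read(case.scan_id)
    if case.machine_verdict != case.human_verdict:
        print(f"{case.scan_id}: reader/model disagree -> second look")
    return case

# e.g. blinded_review(Case("scan-001"),
#                     human_read=lambda s: "clear",
#                     model_read=lambda s: "suspicious")
```

The point is purely the ordering: the machine's verdict never reaches the human until their own call is committed.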

Years ago I used to mark undergrad essays for the Medical Faculty - a strict two-marker system, where the second marker was not allowed to see the first marker's grades. Any papers whose marks differed by more than <x> per cent had to be re-marked - by both markers, IIRC. That's a sensible basis for QA.
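The same check is trivial to mechanise - a minimal sketch, with the threshold left as a parameter because the actual <x> is lost to memory:

```python
def needs_remark(first_mark: float, second_mark: float,
                 threshold: float) -> bool:
    # Marks are given blind; papers whose marks differ by more than
    # the threshold (the <x> above, taken here as percentage points)
    # go back to *both* markers.
    return abs(first_mark - second_mark) > threshold

assert needs_remark(62, 71, threshold=5)      # 9 points apart: re-mark
assert not needs_remark(62, 65, threshold=5)  # within tolerance
```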

This isn't the first time ProcNatAcadSci (PNAS, as it's now styled) has alerted the world to software issues affecting medical imaging data: see https://www.pnas.org/content/113/28/7900.full. Even the raw data from fMRI is transformed so heavily before any human sees it that there's already plenty of potential for garbage in some cases. Applying a dodgy neural network to those first-pass images is like feeding noise into a positive-feedback loop. It works, but the sound ain't pretty.
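For anyone who wants the analogy in numbers rather than sound, a toy loop (nothing to do with any real reconstruction pipeline):

```python
import random

# Each pass re-amplifies the previous output plus fresh noise.
# With gain > 1 the noise swamps whatever signal was there - the
# worry when an unstable network keeps reprocessing images that
# have already been heavily transformed.
signal, gain = 1.0, 1.2
for step in range(10):
    signal = gain * signal + random.gauss(0.0, 0.1)
    print(f"pass {step}: {signal:+.2f}")  # grows without bound
```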
