Ahem, ahem... AI engine said to be good as human docs at spotting lung cancer developing

Deep learning algorithms can help doctors predict a patient’s risk of lung cancer, according to fresh research published on Monday. Lung cancer is the deadliest cancer in the US; there were an estimated 160,000 deaths in 2018. Catching the disease in its early stages can reduce the chance of mortality by up to 43 per cent. …

  1. Bonzo_red

    A good thing?

    Lung cancer identification today, mass facial recognition tomorrow.

    1. Joeyjoejojrshabado

      Re: A good thing?

      Not a particularly nuanced comment there Bonzo.

  2. Boring Bob

    It is only at the end of the article that it is stated that the radiographers' and AI results are the same when all previous scans are used. I'm not surprised by this. The neural network cannot be better than the training data, and the training data relies in part on the radiographers' analysis (I can imagine it is easy to create a record of radiographers' false positives, but a record of false negatives would be more difficult).

    1. JetSetJim

      > Each 3D scan is split into “volumes”, and each volume is labelled as cancer-positive if the patient underwent a biopsy or other forms of surgery and was diagnosed with lung cancer, and cancer-negative if the patient was free from the disease a year after the scan.

      As I read that, this is the training data and is effectively "Truth", and the "volumes" were given to radiographers and the AI to give a cancer evaluation. Then the false positive and negative rates can be assessed and compared. If this is the case, then in theory it's possible for the AI to beat the radiographers, with the assumption that the scan gives sufficient information to make that judgement (which it may not, since cancers don't necessarily grow predictably).
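      Read that way, the labelling rule is mechanical and doesn't depend on a radiographer's opinion at all. A sketch of it (field names are invented, and Python stands in for whatever the study actually used):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScanVolume:
    # Both fields are invented names, standing in for the study's follow-up data.
    biopsy_confirmed_cancer: bool      # biopsy/surgery after the scan found cancer
    disease_free_after_one_year: bool  # still cancer-free a year after the scan

def label(volume: ScanVolume) -> Optional[str]:
    """Assign the ground-truth label per the scheme quoted above."""
    if volume.biopsy_confirmed_cancer:
        return "cancer-positive"
    if volume.disease_free_after_one_year:
        return "cancer-negative"
    return None  # neither condition met: no reliable ground truth for this volume

print(label(ScanVolume(True, False)))   # cancer-positive
print(label(ScanVolume(False, True)))   # cancer-negative
```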

      At least if the AI matches the radiographers' performance, and if the training set actually is sufficient for training an AI properly, it might give confidence that the radiographers know as much as they can know about diagnosing cancer based on these scans. If they were different (i.e. the AI was better), it would then be an interesting study to find out why they were different, to see if more knowledge could be accrued.

      I would be loath to rely on a black box for a cancer diagnosis unless it can actually point to things and say why - I think I'd always prefer some human oversight, at least for a while yet...

    2. Anonymous Coward
      Anonymous Coward

      There is an advantage to training on old scans

      You don't have to rely on the radiologist's analysis, you KNOW which ones developed cancer and which ones did not.

      The biggest issue with lung cancer screening is that they don't. By that I mean that unless you already have symptoms they aren't going to give you a scan. I've never smoked, so I'm at low risk for lung cancer (though I live in an area with a lot of radon so maybe not that low) but that doesn't mean I don't have it. Maybe I do and I just don't have symptoms yet.

      If you had a way to do completely automated screening, they could do scans every five years or so as part of a regular checkup, and a doctor wouldn't need to look at one unless it was flagged by the system for further analysis. Or the sorts of "health fairs" that my local university sets up with stuff like free cholesterol screening could do a free lung cancer screening. Since they don't use film, the cost per scan is almost nothing if the analysis is automated. It would be cheaper for them to provide than the cholesterol test.

  3. Anonymous Coward
    Anonymous Coward

    I like the idea behind machine learning but there are a few unanswered questions. How much data do you feed into it before you trust it to do the work itself? What happens when this is all we have to do this work because not many people are going to train in a profession run by computers? I think it should be used as an addition to what we already have and not as an alternative but I fear the everlasting quest for more money will take that choice away.

    1. JetSetJim

      There are some standard techniques. Take a big data set, hive off 70% (or something else) as a training set, leave the rest as validation. Train using the training set, measure accuracy. Validate using the validation set, measure accuracy. If they are both more or less equal, it's probably as good as it will get. If training is better than validation, you've over-fit - possibly too large a network, but it may depend on data characteristics. If validation is more accurate than training, summat weird is going on.
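      A minimal sketch of that hold-out procedure, with an invented toy dataset and a deliberately tiny one-parameter "model" standing in for a network:

```python
import random

random.seed(0)

# Toy data: one feature x in [0, 1); true label is (x > 0.5), with 5% label noise.
data = []
for _ in range(1000):
    x = random.random()
    label = int(x > 0.5)
    if random.random() < 0.05:
        label = 1 - label
    data.append((x, label))

# Hive off 70% as a training set; leave the rest as a validation set.
random.shuffle(data)
split = int(0.7 * len(data))
train, val = data[:split], data[split:]

def accuracy(threshold, rows):
    """Fraction of rows the one-parameter threshold model gets right."""
    return sum((x > threshold) == bool(y) for x, y in rows) / len(rows)

# "Training": pick the threshold that maximises accuracy on the training set.
best = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, train))

train_acc = accuracy(best, train)
val_acc = accuracy(best, val)

# Roughly equal accuracies: the model generalises. Training much higher than
# validation: over-fit. Validation higher than training: summat weird.
print(f"threshold={best:.2f} train={train_acc:.2f} val={val_acc:.2f}")
```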

  4. Postdoc

    It might help if the, ahem, “human docs” looked at the chest x-ray the right way round.

  5. Sir Loin Of Beef

    "As good as"? Then don't produce it.

    1. Anonymous Coward
      Anonymous Coward

      How about "Cheaper than"? If machines can be as good as and cheaper than a human, maybe it will encourage doctors to do a bit of critical thinking instead of just pushing pills. Ego wouldn't be a problem at the very least.

      1. Mark 85

        I think you're not quite right. If it's cheaper and just as good as a human, the insurance will dictate that this is the test of choice. Meantime, the docs don't have to think about the diagnosis, just push the treatments and/or pills. Might even drop the price of treatment... but I kind of doubt that.

  6. vtcodger Silver badge

    Not a big fan of AI

    I'm not a big fan of AI, which IMHO too often turns out to be Artificial Stupidity (AS). But assuming the cost is reasonable, this seems to me to be a reasonable supplement to human interpretation. Presumably subsequent human analysis will catch false positives. And false negatives -- the radiologist says cancer, the AI says no cancer -- will be very carefully evaluated before carving the patient up. I don't think the world needs incomprehensible technology that, for example, occasionally fails to identify a stopped emergency vehicle. But this seems potentially useful.

    1. veti Silver badge

      Re: Not a big fan of AI

      The story says that the AI produces better results - both fewer false positives and false negatives - than the average human radiographer.

      Presumably some human radiographers are more skilled than others. Maybe the best of them could still beat the AI, I don't know. But the thing is, not everyone can be screened by "the best" humans. Humans don't scale that way.

      But AI does. So if the AI outperforms the average radiographer - which is what the story claims - then it's good enough, and adding a human review step to the process would likely reduce the quality - by introducing delay, and increasing the likelihood of errors (both ways).

      1. tony2heads

        Re: Not a big fan of AI

        'The CNN model outperformed six radiologists in clinical settings. '

        ONLY SIX !

  7. Anonymous Coward
    Anonymous Coward

    Strange

    "When the patient’s previous scans were taken into account, however, the algorithms and the radiologists’ performance were even."

    So the doctors and algorithms are equal over multiple passes, but the algorithm is better at a one off pass?

    Is this just a case of the algorithms being able to analyse the digital images in more detail than the human eye, and once averaged across multiple images with slightly different perspectives, the two balance out?

    And if the results are identical, is the benefit of AI in reduced costs per patient, the time taken to process each patient, the ability to scale services with smaller teams of radiologists or is there no net benefit?

    1. tellytart

      Re: Strange

      The benefit would be more akin to allowing the AI to decide what goes forward to a human radiologist.

      I.e. If the AI is 90% certain that an image is cancer, then a follow-up is automatically scheduled, no radiologist needed.

      If the AI is almost certain the scan is cancer free, then no action taken.

      If the AI's probability of cancer falls within a certain range, then the scans are flagged for a radiologist's second opinion.

      This would reduce the workload on radiologists to those cases where a trained eye and patient history (including familial history of cancer) are needed.
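      That triage logic can be sketched in a few lines. The thresholds here are hypothetical, purely for illustration; real cut-offs would be chosen clinically against measured false positive/negative rates:

```python
# Hypothetical cut-offs for illustration only; not from the study.
HIGH_CONFIDENCE = 0.90   # "almost certainly cancer"
LOW_CONFIDENCE = 0.05    # "almost certainly clear"

def triage(cancer_probability: float) -> str:
    """Route one scan based on the model's estimated probability of cancer."""
    if cancer_probability >= HIGH_CONFIDENCE:
        return "schedule follow-up"      # act without a radiologist
    if cancer_probability <= LOW_CONFIDENCE:
        return "no action"               # clear, no review needed
    return "refer to radiologist"        # uncertain band: human second opinion

print(triage(0.95))   # schedule follow-up
print(triage(0.01))   # no action
print(triage(0.40))   # refer to radiologist
```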

  8. Anonymous Coward
    Anonymous Coward

    As someone who lost a parent to lung cancer, I would like to see them progress this, as I have little to no faith in the human link in this chain, so if the automation stands up it can only be a good thing IMO.

    My parent passed away 10-11 years after a false negative. They mistook symptoms of diabetes and a severe chest infection (as seen on a CAT scan) for cancer, and this got all the way down the road to scheduling a procedure to remove their lung. The surgeon was very pushy but my parent, thankfully, declined.

    10 years later, all cancer symptoms were missed (for at least 6 months) and they got their cancer diagnosis 5 weeks before they died, by which time it was in the cerebellum (and too late).

    Although I object to AI in the main (mainly because of the limitations and privacy invasions occurring to procure training datasets), if it can improve on the diagnosis front (cost, accuracy, etc.) then surely it can only be a good thing? (Although I appreciate I am coming from a position of bias.)

    1. Anonymous Coward
      Anonymous Coward

      My parent passed away 10-11 years after a "false negative"

      Sorry, should've read "false positive"

      D'oh

  9. DCFusor

    Actually, there is NO such thing as artificial intelligence - or even artificial ignorance: to be ignorant, you have to be aware.

    But classification is actually the only thing neural networks are good at, and this is that job, so maybe they'll get somewhere, especially once they learn not to go so "deep" and super-overfit. Those are just the errors that researchers in the previous wave warned this approach would cause, and just the errors we see modern deep learning commit.

    Amazing, that. Someone was right, had good justification, logically and theoretically, predicted what would happen if they were ignored, they were ignored... and what they predicted came to pass. See for example Timothy Masters' work in the '90s or so.

  10. Anonymous Coward
    Boffin

    It'll be banned

    It'll be like Tesla 'Autopilot', when the first patient dies from a missed AI diagnosis the lawsuits will be flying demanding $M compo and grand-standing politicians will be demanding the technology is banned 'for the sake of the children!'

    1. Mark 85

      Re: It'll be banned

      That happens now with missed diagnoses and/or wrong treatments. It may just reduce the number of lawsuits a bit, but then again, the lawyers do have a strong lobby, as most legislators are lawyers. Birds of a feather and all that.

  11. JeffyPoooh
    Pint

    Big difference remains

    A human doctor can do this...

    #1, Cancer,

    #2, Not cancer,

    #3, Not cancer,

    #4, Cancer...

    #5, What? Oh, very funny... A closeup picture of a pepperoni pizza? Good one. Was that a Turing Test? Did I pass? What did the A.I. say about that one?

    A.I. is hard.

    Strong A.I. (as would be needed in "outdoor" random uncontrolled environments) is very hard.

  12. Anonymous Coward
    Anonymous Coward

    Who did what?

    Quote: "It was trained by looking at nearly 30,000 images taken from the National Lung Screening Trial (NLST), a publicly available dataset."

    *

    The article is not clear about one thing -- namely that the "training" was done with images ALREADY CLASSIFIED BY HUMAN BEINGS.

    *

    So "AI" is NOT DOING THIS ANALYSIS without the help of human doctors.

    *

    Now......there are studies out there which show that AI analysis can be better than human analysis ONCE THE AI HAS BEEN TRAINED BY HUMAN BEINGS.

    *

    Is this really "artificial"? Why do we have to hear the hype about AI, when the truth is that "people plus technology" can sometimes be more effective than people on their own. Is this news?
