Don't believe the hype: Today's AI unlikely to best actual doctors at diagnosing patients from medical scans

Don’t fall for overblown claims that AI algorithms are just as good as, or even better than, human doctors at diagnosing diseases from medical images. That's according to a study published in The British Medical Journal on Wednesday. A group of researchers, led by Imperial College London, studied 91 peer-reviewed papers that …

  1. Anonymous Coward
    Holmes

    What should the A in AI stand for in medicine?

    Artificial? No. Artificial Intelligence has been 5 years away for decades and will be 5 years away when I die.

    OTOH, as the author of the study points out, Assisted Intelligence, where the machine assists intelligent doctors and nurses, has great promise. He labels it as it should be labeled - machine learning. Unfortunately that label doesn't bring VC funding or research grants.

    (P.S. IT has been going downhill since we stopped calling it DP.)

    1. Rich 11
      Joke

      Re: What should the A in AI stand for in medicine?

      and will be 5 years away when I die.

      ...tragically of a treatable ailment which ML failed to pick up early enough.

  2. NATTtrash

    Don't believe the hype indeed...

    It is a very subtle, but very correct, touch by El Reg to use the title of an 80s song for this, because that's how long these kinds of attempts have been going on - and, as a result, the discussions about these experiments. In the 80s the magic words were "medical expert systems". The presentations I can remember from my own professional (medical) past suggested that you typed in your symptoms at one end, and a perfect diagnosis would roll out the other side. Or so they claimed. Over time this quietly left the building, because those pesky patients didn't define their ailments in clear terms, there was fuzzy data all around, and pick lists and decision trees somehow didn't seem to capture all that other sensory input physicians seem to use. Bummer...

    Fast forward a couple of decades, and we have so-called personal health monitoring devices, which claim a lot and give nice graphs and data to send home, but are wildly inaccurate. They sell well, though, just like those snake oil treatments in the Dark Ages.

    A virus, an orange president, and Google decision-tree web sites that look good on large cardboard signs. The cool feeling that we're so much cooler and smarter than we were during the last century, because now we have cloudy AI thingy stuff. Which seems to have the same issues similar stuff had decades ago. And don't get me wrong: yes, of course my tools got better over time. Yes, our scanning capabilities did increase with automated image analysis and improved digital enhancement. But we do have to get real about this fetishism that "the AI machines can heal humans better than other humans". Because, as the track record shows, machines (and the people building them?) are nowhere near smart enough (yet). But hey, we do like selling them to the ignorant to make money!

    Furthermore, think about this: the major part of a medical treatment is the human aspect. Just consider this example: got corona? Please input your data at the screen at the entrance of the hospital... Processing... Our triage protocol shows that you will not be treated at this facility. Next patient please... Doesn't feel right when it concerns you personally, now does it?

    1. Roland6 Silver badge

      Re: Don't believe the hype indeed...

      >"Furthermore, think about this: the major part of a medical treatment is is the human aspect. Just consider this example: got corona? Please input your data at the screen at the entrance of the hospital... Processing... Our triage protocol shows that you will not be treated at this facility. Next patient please... "

      If you replace the hospital entrance with NHS 111 and a real person at the end of the phone, you've got the current situation. Given the state people have to be in to be admitted to hospital, the "your condition isn't serious enough to be admitted, please call back if it worsens" responses must be difficult conversations...

  3. Michael H.F. Wilkinson Silver badge

    Depends what you want to call AI

    Recently, AI has been associated with deep networks and little else, whereas the name used to cover far more. I have been doing research into medical image processing, and in that area many useful new tools have been developed that have certainly undergone clinical trials. Many segmentation tools, image enhancement filters, and visualization methods contain methods from statistical pattern recognition, neural networks, support vector machines, learning vector quantization, etc. Many of these are very good at finding needles in haystacks, or at allowing the doctor to zoom in on suspect regions in huge 3D scans or pathology specimens (often on the order of gigapixels). These methods have proven their worth in allowing a doctor to make decisions more effectively. They should not, and were never intended to, replace a doctor.

    Personally, I much prefer developing tools that can explain WHY they think a certain classification has been made (e.g. benign/malignant) and how sure they are of their decision. Otherwise, doctors (and I myself) will view these methods with deep suspicion.
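
    As a rough illustration of that preference, here's a minimal sketch of a classifier that reports both its confidence and the features driving its decision. The dataset and model (scikit-learn's bundled breast-cancer data, plain logistic regression) are illustrative stand-ins, not the poster's actual research tools:

        # A classifier that reports its decision, its confidence, and the
        # features that drove it -- the "WHY" asked for above.
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        data = load_breast_cancer()  # stand-in for real pathology features
        X_train, X_test, y_train, y_test = train_test_split(
            data.data, data.target, random_state=0)

        clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)

        proba = clf.predict_proba(X_test[:1])[0]            # per-class confidence
        top = np.argsort(np.abs(clf.coef_[0]))[-3:][::-1]   # most influential features
        print(f"P(malignant) = {proba[0]:.2f}, P(benign) = {proba[1]:.2f}")
        print("strongest features:", [data.feature_names[i] for i in top])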

    1. Anonymous Coward
      Anonymous Coward

      Re: Depends what you want to call AI

      No-one would regard what is essentially standard pattern recognition as AI. The techniques for drawing the borders between classes have become more complicated, but it's essentially the same template of data -> features -> classifier that was established four decades ago.

      Researchers need to stop kidding themselves that they're involved in AI. It's pattern recognition. Some of it is very clever pattern recognition, but equally a lot of it isn't, and is re-hashing decades-old work and mistakes.
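
      For the curious, a minimal sketch of that decades-old template in Python, with synthetic numbers standing in for image data (everything below is illustrative, not drawn from any study discussed above):

          # Classic template: data -> features -> classifier.
          import numpy as np
          from sklearn.model_selection import train_test_split
          from sklearn.pipeline import make_pipeline
          from sklearn.preprocessing import StandardScaler
          from sklearn.svm import SVC

          rng = np.random.default_rng(0)

          # "Data": 200 fake scans, 64 raw measurements each.
          X_raw = rng.normal(size=(200, 64))
          y = (X_raw[:, :8].sum(axis=1) > 0).astype(int)  # toy benign/malignant label

          # "Features": a simple hand-crafted reduction (block means).
          X_feat = X_raw.reshape(200, 8, 8).mean(axis=2)

          # "Classifier": the boundary-drawing step described above.
          X_train, X_test, y_train, y_test = train_test_split(X_feat, y, random_state=0)
          clf = make_pipeline(StandardScaler(), SVC()).fit(X_train, y_train)
          print(f"test accuracy: {clf.score(X_test, y_test):.2f}")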

      1. Mage Silver badge
        Headmaster

        Re: essentially standard pattern recognition

        <pedant alert>

        Replace "recognition" with "matching" and you are totally on the money.

        No machine or computer or algorithm or AI has ever "recognised" anything in any field if we use the word "properly".

        </pedant alert>

        I did upvote.

        1. Anonymous Coward
          Anonymous Coward

          Re: essentially standard pattern recognition

          Oh, is the term pattern recognition now wrong? How fascinating. I guess they'll have to re-write a few hundred books and withdraw a load of PhDs now.

          How did so many experts get this basic term wrong?!

          1. Il'Geller

            Re: essentially standard pattern recognition

            Set Theory: how to find an intersection between two sets of patterns?

            - When “matching”, an exact match between the two sets is sought.

            - When “searching”, the most suitable match is sought.

            Dictionary definitions of the patterns’ words narrow the areas of the intersection and help to “find”, not “match”; as Microsoft, OpenAI and IBM proved. The doctors had not been using Set Theory and, therefore, matching, not searching, was used.
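
            A toy Python contrast of the two operations, with made-up symptom sets (purely illustrative):

                # "Matching": exact set intersection. "Searching": best approximate overlap.
                notes = {
                    "note_a": {"fever", "cough", "fatigue"},
                    "note_b": {"fever", "rash"},
                }
                query = {"fever", "cough", "headache"}

                for name, words in notes.items():
                    print(name, "exact intersection:", words & query)

                def jaccard(a, b):
                    # Overlap score in [0, 1]; 1 means identical sets.
                    return len(a & b) / len(a | b)

                best = max(notes, key=lambda n: jaccard(notes[n], query))
                print("most suitable by search:", best)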

  4. steviebuk Silver badge

    Covid has maybe...

    ...exposed the somewhat bullshit claims of a lot of AI, because GPs are now saying they struggle to diagnose people remotely when they need to listen to the patient's chest.

  5. SVV

    co-author of the study and CEO of Cera Care, a startup

    Gee, what an almighty coincidence. Nice of you to degrade the concept of academic computer science research papers into unproven advertorials for the business you're trying to hype. If you can't or won't show your workings, it's not science. In fact it raises the suspicion that, yet again, you've developed something that already knew the answer to the question it's supposed to be solving, and used that to nudge the algorithm along to the correct answer.

    1. Alister

      Re: co-author of the study and CEO of Cera Care, a startup

      Gee, what an almighty coincidence. Nice of you to degrade the concept of academic computer science research papers into unproven advertorials for the business you're trying to hype.

      I'm not sure where you are getting that from?

      If you can't or won't show your workings, it's not science.

      Yes, that's the point that he's making: the majority of the studies claiming performance equivalent to human doctors are not replicable, and cannot or will not show their data.

    2. Schultz
      Boffin

      What is Science?

      Strictly speaking, science is the observation and modeling of those things we can't yet observe or model. Maybe substitute 'understand' for 'model' for a clearer picture of what science does -- although most scientists would be wary of such vague wording.

      Applying a technology (e.g., AI) to second-guess clinical diagnostics is technology, not science. Science would enter the picture if something unexpected were learned from such a system. But that is not the goal of AI in most cases. In a clinical setting, the goal is to systematically exploit our knowledge of "this image shows cancerous tissue, that image does not".

      The boundary between science and technology can be fuzzy sometimes -- new technology is often required to make new observations. But you can quite clearly distinguish the two if you look at the motivation of the researcher / engineer, or at the results (in hindsight ;).

  6. Mage Silver badge
    Flame

    Marketing

    Look at the sorts of companies making the claims (= Issuing PR) and their track record of real deployments.

    Having taken up programming in the first place due to reading SF about AI, I conclude very many years later it's going to remain SF.

  7. c1ue

    This shouldn't be surprising.

    Political-campaign-style PR in the service of startup memes was deployed to perfection by Uber, and that hasn't gone unnoticed.

  8. Pascal Monett Silver badge

    "the rest of the 81 were purely academic"

    In other words, pie-in-the-sky, AI-is-wonderful, please-continue-funding-me papers that apparently have a diametrically opposite view to papers based on actual data.

    Why am I not surprised ?

    These are the kinds of papers that proclaim that facial recognition works almost perfectly, when actual trials come back with a success rate of less than 13%.

    Look, I understand that theoretical physics is just as important as experimental physics, but the difference is that theoretical physicists do not try to pass off their musings as settled science. They ask experimentalists to create the experiments that justify or invalidate their theories. Once a result is obtained, they review their theories and progress in their musings.

    It seems that, as far as "AI" is concerned, there are no such updates. That means actual AI is nowhere near being created, because the pie-in-the-sky musings are not bothering to ground themselves in reality.

    Oh well, it's for the better I guess. The longer we take to build an actual Skynet, the better.

  9. a_yank_lurker

    AI = Artificial Idiocy

    The flaw with AI is not that under idealized conditions it can outperform a human, but that it fails miserably when you have real data. Real data is often incomplete and ambiguous. Remember, a couple of the symptoms of Covid-19 resemble those of the flu, which makes it hard to diagnose based on symptoms alone. I doubt any artificial idiocy system would do much better with ambiguous symptoms than a person, especially one who is picking up on a subtle clue that is not on the official list of symptoms.

    Pattern recognition software is limited by the patterns it has been 'trained' on, and the further away from this 'training' data the actual symptoms are, the worse it will perform. And artificial idiocy is nothing more than over-glorified pattern recognition software with a lot of ignorant hype behind it.
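
    A minimal sketch of that degradation: train a classifier on one synthetic distribution of 'symptom vectors', then test on progressively drifted data (the data, the amount of drift, and the numbers it prints are all illustrative assumptions):

        # Train on one distribution, then test on progressively shifted data:
        # accuracy falls as the test cohort drifts away from the training data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)

        def make_data(n, shift=0.0):
            # Two classes of "symptom vectors"; `shift` drifts the whole cohort.
            X = np.vstack([rng.normal(-1 + shift, 1.0, (n, 5)),
                           rng.normal(+1 + shift, 1.0, (n, 5))])
            y = np.array([0] * n + [1] * n)
            return X, y

        X_train, y_train = make_data(500)
        clf = LogisticRegression().fit(X_train, y_train)

        for shift in (0.0, 1.0, 2.0):
            X_test, y_test = make_data(500, shift)
            print(f"shift={shift}: accuracy={clf.score(X_test, y_test):.2f}")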

  10. Drew Scriver

    Although the article seems to be on the money, it does not highlight that many (most?) of the clinical trials fall far short of what the average person would consider adequate.

    Often the trial groups are quite small, and the duration of "long-term" is what most people would call "short-term".

    Then there are the quirks. I remember one study (Gardasil or its successor, if I'm not mistaken) that only considered adverse effects arising within two weeks of an injection - even though the vaccine itself was said not to take full effect until after a series of injections spread over 18 months.

  11. Conundrum1885

    Lack of data??

    If it wasn't for the current crisis I'd probably have gone for an MRI anyway. Apparently a basic scan isn't *that* expensive, and it would be handy to have a baseline for comparison.

    The data thus gathered could have actual applications, such as providing direct information on my neural anatomy for things like EEG interfaces and other non-invasive monitoring.

    There's a lot to be said for every doctor's surgery having one in terms of rapid diagnosis, as the vast majority of modern scanners are comparatively low-field thanks to advances in technology.
