Well...
The problem, as always, with something like this is clinical responsibility. If you make something that can tell whether or not you have some condition or other, it is saying either "yes, you have it" or "no, you don't".
The trouble is that if your box's verdict is even slightly uncertain (as a neural network's output inevitably is), you cannot really afford to have it say anything substantive. So the answers it gives have to be toned down to "maybe, go see a doctor" or "maybe not, go see a doctor". In which case, what's the bloody point of not just going to see a real, qualified human doctor in the first place?
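To make that concrete, here's a minimal sketch in Python (the function name and the 0.5 threshold are hypothetical, not anyone's actual system) of what the output end of such a box is reduced to once nobody will carry the liability:

def triage_message(p_condition: float) -> str:
    # p_condition: a hypothetical model's probability that the patient
    # has the condition. No choice of threshold lets us return a plain
    # yes/no without accepting responsibility for the error rate.
    if p_condition >= 0.5:
        return "Possible signs detected. Please see a doctor."
    return "No clear signs detected. If concerned, see a doctor."

# However confident the model is, the patient hears much the same thing:
print(triage_message(0.97))  # "Possible signs detected. Please see a doctor."
print(triage_message(0.03))  # "No clear signs detected. If concerned, see a doctor."

Either way the patient ends up routed to a real doctor, so the model's discriminative power never reaches them as a substantive answer.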
Not toning the answer down, and giving a straight yes/no instead, means accepting clinical responsibility for the accuracy of that answer, and the resultant liability for those occasions where your box turns out to have been wrong. A false positive upsets patients, and may lead to damaging and inappropriate medical intervention. A false negative may kill them. If you've made claims of complete reliability, you take the blame for that.
So whilst their system might perform strongly in statistical terms, it doesn't amount to anything practically useful unless Google are actually willing to accept liability for its answers. I can't see them doing that.
The same is true of a human doctor, of course, but doctors can get insurance cover.