
There is a paper discussing higher-order interdependencies between diagnoses [1] on X-ray images (they appear to apply an LSTM to derive those dependencies). This could probably be extended to include data beyond X-ray images. My take: it's pretty impressive what we can derive from a single image; now, if we had multiple low-level Deep Learning-based diagnostic subsystems and combined them via some glue (Deep (Reinforcement) Learning, classical ML, a logic-based expert system, a PGM, etc.), we might be able to identify diagnoses with much more certainty than any single individual M.D. possibly could (while also creating some blind spots that a group of humans wouldn't leave unaddressed). Estimating the statistical properties of the whole system could be difficult, but that's a problem with any complex system, including a group of expert humans.
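As a toy sketch of the "glue" idea (all numbers and weights here are hypothetical, not from the paper): each subsystem emits a probability for a diagnosis, and a minimal fusion layer pools them in logit space with learned reliability weights. Real glue would be trained end to end and calibrated, but the shape of the computation is roughly:

```python
import numpy as np

# Hypothetical per-subsystem probabilities for one diagnosis,
# e.g. from an X-ray model, a lab-values model, and a history model.
subsystem_probs = np.array([0.7, 0.55, 0.8])

# Assumed reliability weights for each subsystem (would be learned in practice).
weights = np.array([0.5, 0.2, 0.3])

def logit(p):
    """Map probabilities to log-odds."""
    return np.log(p / (1 - p))

def sigmoid(x):
    """Map log-odds back to probabilities."""
    return 1 / (1 + np.exp(-x))

# Weighted logit pooling: the simplest "glue" that fuses the
# subsystem outputs into a single probability for the diagnosis.
fused = float(sigmoid(np.dot(weights, logit(subsystem_probs))))
print(fused)
```

A logistic regression stacked on the subsystem outputs is the same computation with a bias term; swapping in a PGM or an expert system changes the pooling rule, not the overall structure.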

The main critique of CheXNet I've read focused on the NIH dataset itself, not the model. The model generalizes quite well across multiple visual domains, given proper augmentation.

[1] https://arxiv.org/abs/1710.10501
