
How come we don't have extensive software for helping doctors' decision making, e.g. by using Bayesian inference fed on the knowledge in those 24 million available papers? Expert systems have long since passed the hype curve and it's time for them to cycle up again!
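For what I mean by Bayesian inference here, a minimal sketch of updating a disease probability from a single test result; the prevalence and test-accuracy numbers are made up for illustration, not real clinical data:

    # Bayes' theorem applied to one diagnostic test.
    # All numbers are illustrative assumptions, not clinical values.
    def posterior(prior, sensitivity, specificity, test_positive):
        if test_positive:
            p_result_given_disease = sensitivity        # P(+|disease)
            p_result_given_healthy = 1.0 - specificity  # P(+|healthy)
        else:
            p_result_given_disease = 1.0 - sensitivity  # P(-|disease)
            p_result_given_healthy = specificity        # P(-|healthy)
        evidence = (p_result_given_disease * prior
                    + p_result_given_healthy * (1.0 - prior))
        return p_result_given_disease * prior / evidence

    # 1% prevalence, a 90% sensitive / 95% specific test, positive result:
    print(posterior(0.01, 0.90, 0.95, test_positive=True))  # ~0.15, not 0.90

Even this toy version captures something doctors routinely get wrong by intuition: a positive result on a good test for a rare disease still leaves the disease unlikely.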


An older comment of mine https://news.ycombinator.com/item?id=30049522 fits well here. I'll adapt it to your question ;)

Basically: medicine as a whole is already some sort of expert system.

- Data collection and cleanup: Researchers conduct experiments to produce meaningful data and extract conclusions from that data.

This part isn't more automated because we have strict rules that prevent medical data collection and analysis without a clear purpose. Otherwise we'd be able to collect a lot more information to try and extract results from it using more inference-oriented techniques (deep learning and the like).

- Modeling & training: Expert panels produce guidelines from the results of that research. These panels are the "training part" of the system.

As a sibling comment said, replacing these panels with ML-based techniques isn't trivial because the data produced in the previous step is fairly noisy (p-value hacking, difficulty of capturing all the variables, etc.). Furthermore, the techniques that yield the best results nowadays also produce them without clear explanations of why they hold, which is not something we are prepared to accept in medicine.

- Execution: Doctors diagnose and treat following said guidelines. In fact, they use decision flows that they themselves call... algorithms!

The main reason why execution is not automated is that we do not have the technology for machines to capture the contextual and communication nuances that doctors pick up on. There can be a world of difference between the exact same statement given by two different patients, or even the same patient in two different situations. Likewise, the effect of a doctor's statement can be quite literally the opposite depending on who the patient is and their state of mind. One of the most important aspects of the GP's job is to handle these differences to achieve the best possible outcomes for their patients.
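To give a flavour of what those clinical "algorithms" look like when written down, here is a hypothetical triage-style decision flow; every threshold, label, and branch is invented for illustration and not taken from any real guideline:

    # Hypothetical sketch in the shape of the flowcharts clinical
    # guidelines publish. Cutoffs and dispositions are invented.
    def chest_pain_triage(ecg_abnormal: bool, troponin: float,
                          heart_rate: int) -> str:
        if ecg_abnormal:
            return "activate acute coronary protocol"
        if troponin > 0.04:      # illustrative cutoff, not a guideline value
            return "admit for serial troponin and cardiology consult"
        if heart_rate > 120:
            return "observe and repeat ECG"
        return "consider non-cardiac causes; discharge with follow-up"

    print(chest_pain_triage(ecg_abnormal=False, troponin=0.07, heart_rate=88))

Notice what the code cannot see: how the patient described the pain, whether they tend to understate symptoms, whether they will actually come back for follow-up. That context is exactly the part of the flow the doctor executes.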

All that being said, there are companies trying to produce expert systems to help doctors diagnose. See https://infermedica.com/product/infermedica-api for instance.


Because research can be controversial. There are papers in my field saying patients have an increased frequency of certain cells. There are other papers saying they don't. Go figure.


Nailed it. With publish or perish incentivizing shenanigans like "p-hacking", many of those papers are the research-equivalent of spam.
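A toy simulation makes the point; the group sizes and the 20-comparison setup below are made up for illustration, but the mechanism is exactly this:

    # Toy p-hacking demo: run twenty comparisons where there is no real
    # effect at all, and count how many come out "significant" anyway.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    hits = 0
    for _ in range(20):
        a = rng.normal(size=30)  # both groups sampled from the SAME
        b = rng.normal(size=30)  # distribution, so any "effect" is noise
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            hits += 1
    print(f"{hits} of 20 null comparisons reached p < 0.05")
    # On average ~1 in 20 clears the bar; report only that one
    # and you have a "finding".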


I think Watson does something like this.



Did Watson fail because they were bad at their job, or because the problem is much harder than people assumed?


Electronic health care records are not high-quality data. They are qualitative, often discretized, and distorted by fiscal shenanigans.

The best "EHR" data we have (quantitative and minimally biased) are from large, genetically diverse animal cohorts like the BXD mouse family.


I think the marketing got ahead of the tech. I would classify that as a business failure.


Watson failed because it's marketing pretending to be a technology.


I wonder how many years that sets back the field. Who will want to invest in something that could end up being Watson 2.0?



