
What do you mean by this being a "horrible" use of AI? (Although, as another commenter has mentioned, it should more properly be called ML.)



It's quite easy to correctly classify 100% of benign cases as benign.


If it's so easy, then why do people die from having lesions misdiagnosed as benign?

Even if the success rate of the human eye were in the 99.5%+ range, why not have an extra sanity check from an AI model?


> If it's so easy, then why do people die from having lesions misdiagnosed as benign?

You're confusing False Negatives with True Negatives. For Non-Benign (Positive) vs. Benign (Negative) classification:

* True Positive Rate (TPR): fraction of non-benign lesions classified as non-benign.

* False Positive Rate (FPR): fraction of benign lesions misclassified as non-benign.

* True Negative Rate (TNR): fraction of benign lesions classified as benign.

* False Negative Rate (FNR): fraction of non-benign lesions misclassified as benign.
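
To make those four definitions concrete, here is a minimal sketch (plain Python; the `rates` helper and the boolean-list encoding are my own, not from the article) that computes all four rates from ground truth and predictions, assuming both classes are present:

    def rates(y_true, y_pred):
        # One boolean per case; True = non-benign (positive).
        tp = sum(t and p for t, p in zip(y_true, y_pred))
        fn = sum(t and not p for t, p in zip(y_true, y_pred))
        fp = sum(p and not t for t, p in zip(y_true, y_pred))
        tn = sum(not t and not p for t, p in zip(y_true, y_pred))
        return {
            "TPR": tp / (tp + fn),  # non-benign caught
            "FNR": fn / (tp + fn),  # non-benign missed
            "FPR": fp / (fp + tn),  # benign flagged
            "TNR": tn / (fp + tn),  # benign cleared
        }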

> It's quite easy to correctly classify 100% of benign cases as benign.

You can engineer a 100% TNR if you just classify everything as the "benign" negative class. The FNR is going to be 100% too, but that doesn't matter -- you correctly classified 100% of benign cases as benign.

> why do people die from having lesions misdiagnosed as benign?

Because the FNR is not 0%. FNR is important. You probably want a decent TPR in there as well. And FPR can be very important too, depending on how life-changing/painful/invasive the treatment for a positive case is!
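
The tradeoff is easiest to see with a score threshold: most classifiers output a score, and moving the cutoff trades FNR against FPR. A minimal sketch with made-up scores (none of these numbers come from the article):

    # Hypothetical (score, is_malignant) pairs from some model.
    cases = [(0.20, False), (0.42, False), (0.45, True),
             (0.55, False), (0.70, True), (0.90, True)]

    def fnr_fpr(threshold):
        positives = [s for s, malignant in cases if malignant]
        negatives = [s for s, malignant in cases if not malignant]
        fnr = sum(s < threshold for s in positives) / len(positives)
        fpr = sum(s >= threshold for s in negatives) / len(negatives)
        return fnr, fpr

    print(fnr_fpr(0.5))  # (~0.33, ~0.33): misses the 0.45 malignant case
    print(fnr_fpr(0.4))  # (0.0, ~0.67): catches it, but flags two benign cases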


Because this non-AI function 'correctly diagnoses 100% of benign cases as benign':

    from PIL.Image import Image  # placeholder image type

    def is_benign(mole: Image) -> bool:
        return True  # constant classifier: ignores the input entirely

...but also misdiagnoses 100% of malignant cases.


For context, I think the tool in this article correctly diagnoses 97% of benign cases as benign but misdiagnoses 22% of malignant cases.
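
To put those percentages at scale, here is a quick back-of-the-envelope with a hypothetical screening mix of 1,000 benign and 50 malignant lesions (the mix is made up; only the 97%/22% rates are from the article):

    # Hypothetical screening mix; only the 97% / 22% rates come from the article.
    benign, malignant = 1000, 50

    false_positives = round(benign * (1 - 0.97))  # 3% of benign flagged: 30 referrals
    missed_cancers = round(malignant * 0.22)      # 22% of malignant cleared: 11 missed

    print(false_positives, missed_cancers)  # 30 11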



