
This is an area that fades in and out of focus, with venues such as the Workshop on Human Interpretability in Machine Learning (WHI) [1]. It's becoming increasingly important for auditability and for understanding what an algorithm has actually learned: preventing classifiers from learning to discriminate based on age, race, etc. [2], or working in domains like medicine where it's important to know what the algorithm is doing. Work on understanding DL doesn't really make the models themselves interpretable in any domain; typically you train another (simpler, less accurate) model and use that to explain what the original model is doing, or you use perturbation analysis to try to tease out what it has learned (see the sketch below). If all you care about is getting the right answer, and not why you get that answer, maybe it doesn't matter.
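
A minimal sketch of those two approaches, assuming scikit-learn; the dataset, model choices, and hyperparameters here are illustrative assumptions, not anything from the comment above:

    # Global surrogate: fit a shallow, interpretable tree to the
    # *predictions* of an accurate but opaque model, then inspect the tree.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.inspection import permutation_importance

    data = load_breast_cancer()
    X, y = data.data, data.target

    # "Black box": accurate, but hard to interpret directly.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Surrogate: a depth-3 tree trained to mimic the black box's outputs.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))
    print("Surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
    print(export_text(surrogate, feature_names=list(data.feature_names)))

    # Perturbation analysis: shuffle each feature and measure the drop in
    # the black box's score to estimate which inputs it relies on.
    imp = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
    for i in imp.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[i]}: {imp.importances_mean[i]:.3f}")

Note the trade-off the comment describes: the surrogate explains the black box only as well as its fidelity score, and the permutation importances only indicate which inputs matter, not how they are used.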

I wouldn't say I'm hating on DL, nor do I hate random forests, ensembles, etc., but when you have very little data, fitting an uninterpretable, high-dimensional model might not be the right answer, in my opinion; see [3].

[1] https://arxiv.org/html/1607.02531v2

[2] https://arxiv.org/abs/1606.08813

[3] https://arxiv.org/abs/1601.04650



