Hacker News

> Not to mention that generally ML models are not useful for assessing risk. ML nearly always focuses almost exclusively on some point estimate rather than a distribution of what you believe about a value.

It is actually quite a common practice to design neural networks that output probability distributions.
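For instance, a classifier whose final layer is a softmax emits a full probability distribution over classes rather than a single label. A minimal sketch (pure Python, names illustrative):

```python
import math

def softmax(logits):
    # Turn raw network outputs (logits) into a probability
    # distribution over classes.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Three-class example: the outputs are nonnegative and sum to 1,
# so they form a valid categorical distribution.
probs = softmax([2.0, 1.0, 0.1])
```
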




That distribution is still a point estimate for a multinomial, not truly the distribution of your certainty in that estimate itself. This is essentially a generalization of logistic regression, which will of course give the probability of a binary outcome, but in order to understand the variance of your prediction itself you need to take into account the uncertainty around your parameters themselves.

This can be done for neural networks, through either bootstrap resampling of the training data or more formal Bayesian neural networks, but both approaches are fairly computationally intensive and not typically done in practice.
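The bootstrap idea can be sketched in a few lines: refit the model on resampled copies of the training data and look at the spread of the resulting predictions. This toy version uses the sample mean as a stand-in "model" (all names are illustrative, not from any particular library):

```python
import random
import statistics

def fit_and_predict(sample):
    # Stand-in for "train a model, then predict": here the
    # model's prediction is just the sample mean.
    return statistics.mean(sample)

def bootstrap_predictions(data, n_boot=1000, seed=0):
    # Refit on n_boot resamples (drawn with replacement) and
    # collect the resulting predictions.
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        preds.append(fit_and_predict(resample))
    return preds

data = [1.0, 2.0, 3.0, 4.0, 5.0]
preds = bootstrap_predictions(data)
# statistics.stdev(preds) approximates the standard error of the
# prediction, i.e. uncertainty coming from the parameters themselves.
spread = statistics.stdev(preds)
```

For a real neural network the same loop applies, except each iteration retrains the network on the resample, which is why the cost is rarely paid in practice.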


I was going to say, that seems like an "easy" second step once you get your ML to output hard numbers -- tack on ranges and confidence intervals.





