
>Say you flip a coin a few times and only get heads. Your maximum likelihood (frequentist) estimate is that the coin will always land heads. In a Bayesian setting, if you have a (say uniform) prior on the probability that the coin lands heads, your maximum a posteriori estimate of this probability will be non-zero, but will continue to get smaller if you continue only seeing heads.

Not quite. If you have a uniform prior, the MAP and MLE estimates are identical: the posterior is proportional to the likelihood, so they share the same maximizer. After all-heads flips, both put the estimate at exactly 1.
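A quick sketch of why: with k heads in n flips, the likelihood is binomial, and a Beta(a, b) prior gives a Beta(a + k, b + n − k) posterior whose mode (the MAP estimate) is (a + k − 1)/(a + b + n − 2). With the uniform prior a = b = 1 this reduces to k/n, the MLE. (The function names here are just for illustration.)

```python
def mle(k, n):
    # Maximum likelihood estimate for a Bernoulli parameter:
    # fraction of heads observed.
    return k / n

def map_estimate(k, n, a=1.0, b=1.0):
    # MAP estimate under a Beta(a, b) prior: mode of the
    # Beta(a + k, b + n - k) posterior.
    return (a + k - 1) / (a + b + n - 2)

k, n = 5, 5  # five flips, all heads

# Uniform prior (Beta(1, 1)): MAP collapses to the MLE, both equal 1.
print(mle(k, n), map_estimate(k, n))

# An informative prior such as Beta(2, 2) does pull the estimate
# below 1, which is the behavior the quoted comment describes.
print(map_estimate(k, n, a=2.0, b=2.0))
```

It is only with a non-flat prior (e.g. Beta(2, 2), or equivalently Laplace smoothing) that the Bayesian point estimate stays strictly below 1 after seeing only heads.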

>From the vantage point of Bayesian inference, MLE is a special case of maximum a posteriori estimation (MAP) that assumes a uniform prior distribution of the parameters.

https://en.wikipedia.org/wiki/Maximum_likelihood_estimation

More discussion here:

https://stats.stackexchange.com/questions/64259/how-does-a-u...



