Not one mention of the EM algorithm, which, as far as I can understand, is what is being described here (https://en.m.wikipedia.org/wiki/Expectation%E2%80%93maximiza...). It has so many applications, among which is estimating number of clusters for a Gaussian mixture model.
EM can be used to impute data, but that would be single imputation. Multiple imputation as described here would not use EM since the goal is to get samples from a distribution of possible values for the missing data.
EM imputation (or single imputation in general) fails to account for the uncertainty in imputed data. You end up with artificially inflated confidence in your results (p-values too small, confidence/credible intervals too narrow, etc.).
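To make the inflated-confidence point concrete, here is a minimal numpy sketch of multiple imputation pooled with Rubin's rules. The imputation model (drawing from a normal fitted to the observed values, without also drawing the parameters from their posterior) is a simplification, and the data are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 draws from N(5, 2); pretend the last 30 went missing at random.
full = rng.normal(5.0, 2.0, size=100)
observed = full[:70]
n_missing = 30

m = 20  # number of imputed datasets
means, within_vars = [], []
for _ in range(m):
    # Simplified draw: sample missing values from a normal fitted to the
    # observed data (proper MI would also draw mean/sd from their posterior).
    imputed = rng.normal(observed.mean(), observed.std(ddof=1), size=n_missing)
    completed = np.concatenate([observed, imputed])
    means.append(completed.mean())
    # Within-imputation variance of the estimated mean.
    within_vars.append(completed.var(ddof=1) / len(completed))

# Rubin's rules: total variance = within + (1 + 1/m) * between.
qbar = np.mean(means)
within = np.mean(within_vars)
between = np.var(means, ddof=1)
total = within + (1 + 1 / m) * between
```

The `(1 + 1/m) * between` term is exactly the uncertainty single imputation throws away: with one imputed dataset you would report only the within-imputation variance, hence the too-small p-values and too-narrow intervals.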
> It has so many applications, among which is estimating number of clusters for a Gaussian mixture model
Any sources for that? As far as I remember, EM is used to calculate the actual cluster parameters (means, covariances, etc.), but I'm not aware of any usage to estimate what number of clusters works best.
Source: I've implemented EM for GMMs for a college assignment once, but I'm a bit hazy on the details.
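For reference, the "cluster parameters" part really is just a few lines. A hedged numpy sketch of EM for a fixed k = 2 components in 1-D (a real implementation would iterate until the log-likelihood converges rather than for a fixed count, and would guard against degenerate components):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 1-D Gaussian clusters (toy data).
x = np.concatenate([rng.normal(-4, 1, 200), rng.normal(4, 1, 200)])

# Arbitrary initial guesses for means, std devs, and mixture weights.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = weights * normal_pdf(x[:, None], mu, sigma)  # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and std devs from responsibilities.
    nk = resp.sum(axis=0)
    weights = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

Note that k is an input here, not an output, which is the point of the question above.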
I've been out of the loop on stats for a while, but is there a viable approach for estimating the number of clusters ex ante when creating a GMM? I can think of constructing ex post metrics, i.e. using a grid and goodness-of-fit measurements, but these feel more like brute-forcing it.
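The grid-plus-goodness-of-fit approach is indeed the common ex post answer, usually with an information criterion like BIC that penalizes extra components. A sketch using scikit-learn's GaussianMixture on toy data (the data and the grid range are assumptions for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Three well-separated clusters; pretend we don't know k.
x = np.concatenate([
    rng.normal(-6, 1, 150),
    rng.normal(0, 1, 150),
    rng.normal(6, 1, 150),
]).reshape(-1, 1)

# Fit a GMM per candidate k and score each by BIC (lower is better).
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)
```

It is brute force, but BIC's complexity penalty at least keeps it from always preferring the largest k.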
There are Bayesian nonparametric methods that do this by putting a Dirichlet process prior on the parameters of the mixture components. Both the prior specification and the computation (MCMC) are tricky, though.
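scikit-learn ships a variational (not MCMC) approximation of this idea: BayesianGaussianMixture with a truncated Dirichlet process prior over the mixture weights, which drives the weights of unneeded components toward zero. A sketch on toy data; the 0.05 weight threshold for counting "effective" components is an arbitrary choice of mine:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
# Two well-separated clusters; the model is allowed up to 10 components.
x = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)]).reshape(-1, 1)

dpgmm = BayesianGaussianMixture(
    n_components=10,  # truncation level, i.e. a generous upper bound on k
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(x)

# Components the posterior actually uses (0.05 cutoff is arbitrary).
effective_k = int((dpgmm.weights_ > 0.05).sum())
```

This sidesteps the MCMC difficulty at the cost of a variational approximation and a truncation level you still have to pick.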
An ELI5 intro: https://abidlabs.github.io/EM-Algorithm/