I wish they actually engaged with this issue instead of writing a fluff piece. There are plenty of problems with multiple imputation.
Not the least of which is that it's far too easy to do the equivalent of p hacking and get your data to be significant by playing games with how you do the imputation. Garbage in, garbage out.
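To make that concrete, here's a toy sketch (entirely made-up numbers, not from any real study) of how the same data with the same missingness can yield different p-values depending on which "reasonable" imputation you pick:

```python
# Toy sketch (made-up numbers, no real study): several defensible-sounding ways of
# handling the same missing values give different p-values, and a motivated analyst
# just reports whichever one clears the bar.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 80
treated = rng.normal(0.15, 1.0, n)
control = rng.normal(0.00, 1.0, n)

# Larger treated outcomes are more likely to go missing (not at random).
missing = rng.random(n) < stats.norm.cdf(treated)
obs = treated[~missing]

strategies = {
    "drop missing":  obs,
    "mean impute":   np.concatenate([obs, np.full(missing.sum(), obs.mean())]),
    "median impute": np.concatenate([obs, np.full(missing.sum(), np.median(obs))]),
    "hot-deck draw": np.concatenate([obs, rng.choice(obs, missing.sum())]),
}
for name, filled in strategies.items():
    p = stats.ttest_ind(filled, control).pvalue
    print(f"{name:14s} p = {p:.3f}")   # same data, same missingness, different conclusions
```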
I think all of these methods should be abolished from the curriculum entirely. When I review papers in ML/AI, I automatically reject any paper or dataset that uses imputation.
This is all a consequence of the terrible statistics used in most fields. Bayesian methods don't need to do this.
The prestigious journal "Artificial intelligence in medicine"? No. Just because it's on Google Scholar doesn't mean it's worth anything. These are almost all trash. On the first page there's maybe one legit paper in an OK venue as far as ML is concerned (KDD, a field adjacent to ML), and it's 30 years old.
No. We AI/ML folks don't do imputation on our datasets. I cannot think of a single major dataset in vision, NLP, or robotics that does so, despite missing data being a huge issue in those fields. It's an antiquated method for an antiquated idea of how statistics should work, and it's doing far more damage than good.
Ok, that's interesting. I profoundly disagree with your tone, but would really like to hear what you regard as good approaches to the problem of missing data (particularly where you have dropout from a study or experiment).
Perhaps looking into the issues with uncongeniality and multiple imputation may help, although I haven't looked at MI for a long time, so consider my reply an attempt to be helpful rather than authoritative.
Another related intuition for a probable footgun relates to learning linearly inseparable functions like XOR, which require MLPs.
A single missing value in an XOR situation is far more challenging than participant dropouts causing missing data.
Specifically, the problem is counterintuitively non-convex, with multiple possible points of convergence and no information in the corpus to tell you which may be true.
That is a useful lens in my mind, where I think of the manifold being pushed down in opposite sectors, as with the kernel trick.
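To make the XOR point concrete, here's a trivial sketch (my own construction): with one input bit missing, both completions are consistent with the rest of the truth table but imply opposite labels, so nothing in the data tells you which is right.

```python
# Toy illustration (my own construction): an XOR-labelled row with one missing bit.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

row = (1, None)                      # second feature is missing
for guess in (0, 1):
    completed = (row[0], guess)
    print(completed, "->", XOR[completed])   # (1, 0) -> 1  versus  (1, 1) -> 0
```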
Another potential lens is that in medical studies the assumption is that there is a smooth and continuous function, while in learning we are trying to find a smooth, continuous function with minimal loss.
We can't assume that the function we need to learn is smooth, yet autograd specifically limits what is learnable, and simplicity bias, especially with feed-forward networks, is an additional concern.
One thing people commonly conflate: a differentiable function is indeed continuous and locally smooth-looking, but the set of continuous functions that are differentiable _anywhere_ is a meagre set.
Like anything in math and logic, the assumptions you can make will influence what methods work.
ML amounts to existential quantification, and it is insanely good at finding efficient glitches in the matrix. So, within the limits of my admittedly limited knowledge, MI would need to be a very targeted solution, applied with a lot of care to keep set shattering from causing uncongeniality, especially in the unsupervised context.
Hopefully someone else can provide better, more productive insights.
I feel like multiple imputation is fine when you have data missing at random.
The problem is that data is never actually missing at random and there’s always some sort of interesting variable that confounds which pieces are missing
True, true, but how do you account for missing data based on variables you care about and those you don't?
More specifically, how do you determine if the pattern you seem to be identifying is actually related to the phenomenon being measured and not an error in the measurement tools themselves?
For example, a significant share of answers to "Yes / No: have you ever been assaulted?" are blank. This could be (A) respondents who were assaulted being more likely to leave it blank out of shame, or (B) someone handling the spreadsheet accidentally dropping some rows in the data (because let's be serious here, it's all spreadsheets and emails...).
While you could say that (B) should theoretically be "more truly random", we can't assume that there isn't a pattern to the way those rows were dropped (i.e. a pattern imposed by some algorithm that bugged out and dropped those rows).
> how do you determine if the pattern you seem to be identifying is actually related to the phenomenon being measured and not an error in the measurement tools themselves?
If the “which data is missing” information can be used to compress the data that isn’t missing further than it can be compressed alone, then the missing data is missing at least in part due to the phenomenon being measured. Otherwise, it’s not.
We’re basically just asking if K(non-missing data | which data is missing) < K(non-missing data). This is uncomputable, so it doesn’t actually answer your question regarding “how to determine”, but it does provide a necessary and sufficient theoretical criterion.
A decent practical approximation might be to see if you can develop a model that predicts the non-missing data better when augmented with the “which information is missing” information than via self-prediction. That could be an interesting research project...
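Something like the following could be a starting point for that check (a hedged sketch with simulated data and made-up variable names, using ordinary scikit-learn pieces): if a model predicts the fully observed data noticeably better once the missingness mask is added as a feature, the missingness is carrying information about the phenomenon.

```python
# Hedged sketch of the check proposed above: simulated data, made-up variable names.
# "is_blank" plays the role of "which data is missing"; if adding it as a feature
# improves prediction of the fully observed data, the missingness is informative (MNAR).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
severity = rng.normal(size=n)                        # latent driver of the phenomenon
distress = severity + rng.normal(scale=0.7, size=n)  # a fully observed column
other = rng.normal(size=(n, 3))                      # unrelated covariates we also observe

# Simulate MNAR: the worse the experience, the more likely the sensitive answer is blank.
is_blank = rng.random(n) < 1 / (1 + np.exp(-2 * severity))

X_base = other
X_aug = np.column_stack([other, is_blank.astype(float)])

base_r2 = cross_val_score(LinearRegression(), X_base, distress, cv=5).mean()
aug_r2 = cross_val_score(LinearRegression(), X_aug, distress, cv=5).mean()
print(f"R^2 without the mask: {base_r2:.3f}")   # ~0 here, the covariates are pure noise
print(f"R^2 with the mask:    {aug_r2:.3f}")    # noticeably higher -> missingness is informative
```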
There’s already a bunch of stats research on this problem. Some useful terms to look up are MCAR (missing completely at random) and MNAR (missing not at random)
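For intuition on the difference between those two terms, a quick toy contrast (made-up numbers): under MCAR the observed mean stays roughly unbiased, under MNAR it doesn't.

```python
# Toy contrast (made-up numbers): MCAR leaves the observed mean unbiased, MNAR does not.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)

mcar = x[rng.random(x.size) > 0.3]                          # 30% dropped, independent of value
mnar = x[rng.random(x.size) > 1 / (1 + np.exp(-(x - 5)))]   # large values more likely dropped

print(f"true mean {x.mean():.2f}  MCAR mean {mcar.mean():.2f}  MNAR mean {mnar.mean():.2f}")
```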
Maybe in academia, where sketchy incentives rule. In industry, p-hacking is great till you’re eventually caught for doing nonsense that isn’t driving real impact (still, the lead time is enough to mint money).
Very doubtful. There are plenty of drugs that get approved and are of questionable value. Plenty of procedures that turn out to be not useful. The incentives in industry are even worse because everything depends on lying with data if you can do it.
Indeed. Even worse, some entire academic fields are built on pillars of lies. I was married to a researcher in one of them. Anything that compromises the existence of the field just gets written off. The end game is that this fed into life-changing healthcare decisions, so one should never assume academia is harmless. Watching it from the perspective of a mathematician was utterly painful.
I assume by "in industry" they meant in jobs where you are doing data analysis to support decisions that your employer is making. This would be any typical "data scientist" job nowadays. There the consequences of BSing are felt by the entity that pays you, and will eventually come back around to you.
The incentives in medicine are more similar to those in academia, where your job is to cook up data that convinces someone else of your results, with highly imbalanced incentives that reward fraud.
Yes, precisely this! I’ve seen more than a few people fired for generating BS analyses that didn’t help their employer, especially in tech where scrutiny is immense when things start to fail.
My intuition would be that there are certain conditions under which Bayesian inference for the missing data and multiple imputation lead to the same results.
What is the distinction?
The scenario described in the paper could be represented in a Bayesian method or not. “For a given missing value in one copy, randomly assign a guess from your distribution.” Here “my distribution” could be Bayesian or not, but either way it’s still up to the statistician to make good choices about the model. The Bayesian can p-hack here all the same.
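For concreteness, here's a minimal sketch of that procedure (my own assumptions: a normal imputation model fit to the observed values, and Rubin's rules for pooling the mean). Swapping the rng draws for posterior-predictive draws from a Bayesian model would fill the same slot; either way the analyst chooses the imputation model.

```python
# Minimal sketch (assumed normal imputation model, Rubin's rules for the pooled mean).
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(10, 3, 200)
data[rng.random(200) < 0.25] = np.nan        # knock out ~25% of values

obs = data[~np.isnan(data)]
m = 20                                        # number of imputed copies
estimates, variances = [], []
for _ in range(m):
    copy = data.copy()
    n_miss = int(np.isnan(copy).sum())
    # "for a given missing value in one copy, randomly assign a guess from your distribution"
    copy[np.isnan(copy)] = rng.normal(obs.mean(), obs.std(ddof=1), n_miss)
    estimates.append(copy.mean())
    variances.append(copy.var(ddof=1) / copy.size)

# Rubin's rules: pool the point estimates and combine within/between-copy variance.
q_bar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / m) * between
print(f"pooled mean {q_bar:.2f} +/- {np.sqrt(total_var):.2f}")
```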