There's a related discussion of 'mathiness' in Section 3.3 of the article "Troubling Trends in Machine Learning Scholarship" https://arxiv.org/abs/1807.03341. I would say the situation has only gotten worse since that paper was written (2018).
However, the discussion there is more about math that is unnecessary to a paper, not so much about the problem of math that is unintelligible or, if intelligible, then incorrect. I don't have other papers off the top of my head, although by now unintelligible or incorrect math is my default expectation when I see a math-centric AI paper. If you have any such papers in mind, I could tell you my thoughts on them.