I actually do see DL research on tabular problems come out, but the models generally aren't better than XGBoost or similar boosted-tree models on the same datasets, and when they are better, the gains are marginal and not worth the interpretability tradeoff in the real world. Boosted trees are already really good for most tabular tasks, so it makes sense that most of the focus would be on images and natural language, where DL is a big improvement over traditional methods. As a rough sketch of what that kind of head-to-head comparison looks like, see the example below.
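Here's a minimal sketch of that kind of comparison: boosted trees vs. a small neural net on a bundled tabular dataset. The dataset choice and hyperparameters are my own illustrative assumptions, not tuned results; the point is just the shape of the benchmark, not the specific numbers.

```python
# Sketch: compare XGBoost against a simple MLP on a small tabular
# regression dataset. Dataset and hyperparameters are illustrative
# assumptions, not a definitive benchmark.
from sklearn.datasets import load_diabetes
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Boosted trees: typically strong out of the box on tabular data.
xgb = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
xgb.fit(X_train, y_train)

# A small MLP as a stand-in for a deep tabular model; scaling the
# inputs matters for neural nets, so wrap it in a pipeline.
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
mlp.fit(X_train, y_train)

for name, model in [("xgboost", xgb), ("mlp", mlp)]:
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.1f}")
```

On small tabular datasets like this, the trees usually win or tie with far less fuss, which is the pattern the papers keep finding.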
Additionally, it seems to me that most of the money spent on research in the DL space goes to places like OpenAI, whose main goal is to develop AGI. They see large transformer models as the path to AGI, and once AGI is solved, it can solve all the other problems.