I'm always amazed by how little attention tabular/relational data gets in DL research, despite being far more common in applications than images or unstructured text.
I actually do see DL research on tabular problems come out, but the proposed models generally aren't better than xgboost or similar boosted tree models on the same datasets, and when they are better, the gains are marginal and not worth the interpretability tradeoff in the real world. Boosted trees are already really good for most tabular tasks, so it makes sense that most of the focus would be on images and natural language, where DL is a big improvement over traditional methods.
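As a rough illustration of why boosted trees are such a strong baseline, here's a minimal sketch using xgboost's scikit-learn API. The dataset and hyperparameters are just placeholders for illustration, not a benchmark:

    # Minimal gradient-boosted-tree baseline for a generic tabular task.
    # Dataset choice and hyperparameters are illustrative assumptions.
    from sklearn.datasets import fetch_california_housing
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score
    from xgboost import XGBRegressor

    X, y = fetch_california_housing(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Near-default settings are often already competitive on tabular data.
    model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
    model.fit(X_train, y_train)

    print("R^2:", r2_score(y_test, model.predict(X_test)))
    # Feature importances give a coarse interpretability signal that most
    # deep tabular models don't offer out of the box.
    print("importances:", model.feature_importances_)

The point isn't the score itself, it's that you get a strong, somewhat interpretable model in a dozen lines with almost no tuning, which is the bar any deep tabular model has to clear.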
Additionally, it seems to me that most of the money spent on DL research goes to places like OpenAI, whose main goal is to develop AGI. They see large transformer models as the path to AGI, and the reasoning is that once AGI is solved, the AGI can solve all the other problems.
I started Approximate Labs to tackle this problem! We recently addressed what we believe is one of the biggest gaps in the DL research community for this modality, the lack of data: last month we openly released the largest dataset of tables with annotations: https://www.approximatelabs.com/blog/tablib
Feel free to reach out to me or join our Discord if you want to talk about tabular/relational data and its relation to AI~