It’s worth stating that being able to work with hierarchical or graphical data is powerful and has very broad applications. The interest is not purely intellectual.
For more, I highly recommend looking at the accepted papers from the NeurIPS 2018 Relational Representation Learning workshop. [1] I really enjoyed the workshop and I hear workshops tend to represent a (rough) frontier of the subfield.
I'm just curious - why do I primarily see Chinese researchers publishing deep learning stuff on arXiv? Is it subsidized over there? Just look at the publications linked in this thread so far, 2/3 are from Beijing.
There are a lot of talented ML researchers in China. This is a product of (a) the government and major companies (i.e. BAT: Baidu, Alibaba, Tencent) investing heavily in fundamental ML research, (b) the population size, and (c) a long tradition of STEM-focused education in China. So it's not surprising that this would be the case.
The interesting questions are whether China is uniquely focused on deep learning over other ML techniques, and how Chinese research compares in terms of quality. Anecdotally (speaking as a researcher in the field), papers from Chinese institutions seem disproportionately focused on deep learning (whereas, for example, the UK does great work in Bayesian ML and the US does disproportionately well in NLP). I'm not a deep learning researcher so I can't judge the technical merit, but I was just at NeurIPS in Montreal, and I saw roughly equal representation from Chinese and South Korean institutions. South Korea, with ~1/25 the population, punches way above its weight per capita.
I thought the question was more "why arXiv and not other journals" in which case maybe 'prestigious' Western journal publications just aren't valued as much in China?
I can't speak specifically to this topical domain, but generally speaking (assuming the research topic isn't politically sensitive) there are professional incentives to do transnational work -- whether it's publishing in Anglophone journals, organizing international conferences, etc. So probably, it's not that foreign journals are valued less. (Probably, in fact, the contrary.)
Most deep learning researchers post to arXiv, not just Chinese ones. With so many seemingly obvious ideas in deep learning, it's an easy way to lay claim to an idea and be first. Since everyone knows that preprints and revisions will be on arXiv, that's where people search for deep learning papers by default, even if the paper is also published elsewhere.
"A Comprehensive Survey on Graph Neural Networks" https://arxiv.org/abs/1901.00596