Graph Neural Networks: A Review of Methods and Applications (arxiv.org)
134 points by painful on Jan 8, 2019 | 15 comments



Graph network reviews seem to be all the rage; here's another recent one:

"A Comprehensive Survey on Graph Neural Networks " https://arxiv.org/abs/1901.00596


There is definitely a lot of interest in the topic. More than 20 graph-related papers were accepted for ICLR 2019 - https://openreview.net/group?id=ICLR.cc/2019/Conference


It’s worth stating that being able to work with hierarchical or graph-structured data is powerful and has very broad applications. The interest is not purely intellectual.


For more, I highly recommend looking at the accepted papers from the NeurIPS 2018 Relational Representation Learning workshop. [1] I really enjoyed it, and I hear workshops tend to represent a (rough) frontier of the subfield.

[1] https://r2learning.github.io/


Also this: https://arxiv.org/pdf/1812.04202.pdf (“Deep learning on graphs: A survey”).


Are there any good Python libraries that make working with Graph Neural Networks as easy as working with Keras/PyTorch/fast.ai?

All I can find is https://github.com/tkipf/gcn, plus the same author's reimplementations in PyTorch (https://github.com/tkipf/pygcn) and Keras (https://github.com/tkipf/keras-gcn). The main repo has a lot of stars (~1100), but usage seems limited?
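
(If it helps to see how small the core is: here's a minimal sketch of the graph convolution layer from Kipf & Welling that those repos implement. This is not the repos' actual code, and the dense adjacency matrix is a simplification for readability.)

    import torch
    import torch.nn as nn

    class GraphConvolution(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            self.weight = nn.Parameter(torch.empty(in_features, out_features))
            nn.init.xavier_uniform_(self.weight)

        def forward(self, x, adj):
            # Add self-loops, then symmetrically normalize:
            # D^-1/2 (A + I) D^-1/2
            a_hat = adj + torch.eye(adj.size(0))
            d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
            a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
            # Propagate neighbor features and apply the learned linear map
            return a_norm @ x @ self.weight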


Deep Graph Library (DGL) [1] came out in November and I've heard good things. It looks easy and intuitive.

[1] https://github.com/dmlc/dgl
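
To give a feel for it, here's a toy single-layer example. Caveat: DGL is young and the API is moving fast, so treat dgl.graph, dgl.add_self_loop, and dgl.nn.GraphConv below as assumptions about a later release's interface rather than the version that just shipped.

    import torch
    import dgl
    from dgl.nn import GraphConv

    # A 3-node directed cycle with 16-dim node features
    g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
    g = dgl.add_self_loop(g)   # GraphConv expects no zero-in-degree nodes
    feats = torch.randn(3, 16)

    conv = GraphConv(16, 8)    # one graph-convolution layer
    h = conv(g, feats)         # -> (3, 8) updated node features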


I'm just curious - why do I primarily see Chinese researchers publishing deep learning stuff on arXiv? Is it subsidized over there? Just look at the publications linked in this thread so far: 2/3 are from Beijing.



There are a lot of talented ML researchers in China. This is a product of (a) the government and major companies (i.e. BAT: Baidu, Alibaba, Tencent) investing heavily in fundamental ML research, (b) the population size, and (c) a long tradition of STEM-focused education in China. So it's not surprising that this would be the case.

The interesting questions are whether China is uniquely focused on deep learning over other ML techniques, and how Chinese research compares in terms of quality. Anecdotally (speaking as a researcher in the field), papers from Chinese institutions seem disproportionately focused on deep learning (whereas, for example, the UK does great work in Bayesian ML and the US does disproportionately well in NLP). I'm not a deep learning researcher so I can't judge the technical merit, but I was just at NeurIPS in Montreal, and I saw roughly equal representation from Chinese institutions and South Korean ones. South Korea, with ~1/25 the population, punches way above its weight per capita.


I thought the question was more "why arXiv and not other journals?", in which case maybe 'prestigious' Western journal publications just aren't valued as much in China?


Almost all ML research is published on arXiv.

In ML (as in most of Comp Sci), conference proceedings (NeurIPS, ICML, etc.) are where the prestige publishing is.


I can't speak specifically to this topical domain, but generally speaking (assuming the research topic isn't politically sensitive), there are professional incentives to do transnational work -- whether that's publishing in Anglophone journals, organizing international conferences, etc. So it's probably not that foreign journals are valued less; in fact, probably the contrary.


Perhaps their papers are disproportionately not accepted to conferences, so we see many more of them on arXiv.


Most deep learning researchers post to arXiv, not just Chinese ones. With so many seemingly obvious ideas in deep learning, it's an easy way to lay claim to an idea and be first. Since everyone knows that preprints and revisions will be on arXiv, that's where people search for deep learning papers by default, even if the paper is also published elsewhere.



