This is interesting, and smart: using the knowledge present in the normalized relationships / data connectivity as part of the training signal.
A properly designed relational database is a kind of propositional knowledge base, with each tuple ("row") being a "fact"; it makes sense to mine this as part of learning. It's also how a human would "read" this data.
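To make the tuple-as-fact reading concrete, a toy sketch in Python (the table, columns, and data are all made up for illustration):

    # A tiny, hypothetical relation: each row of `orders` read as a propositional fact.
    orders = [
        # (order_id, customer_id, product, qty)
        (1, 42, "widget", 3),
        (2, 42, "gadget", 1),
        (3, 7,  "widget", 5),
    ]

    for order_id, customer_id, product, qty in orders:
        # Same information, written the way a logic programmer (or a human) would read it:
        print(f"order({order_id}, customer={customer_id}, product={product!r}, qty={qty}).")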
Nitpicking:
"The core idea is to view relational tables as a heterogeneous graph,
with a node for each row in each table, and edges specified by primary-foreign key
relations."
The word "relation" is used incorrectly here -- the "relation" is the table, the name for the foreign-key relationship is... relationship. The key distinction being that relations (big bags of facts) exist independent of those pre-declared relationships; the relationships can be restructured in any-which-way through queries, with foreign key constraints just being a convention for doing that. That's the key benefit of the relational data model over hierarchical/tree/network/graph databases where the relationships are hardcoded, and the key thing that Codd was getting at in his original paper: relational databases are the theoretically most flexible model of data (apart from just bags of text) because the relationships are derived, not hardcoded.
So what they're describing here is each tuple in the relation as a graph node, with edges defined by the attributes of the relation -- which is precisely how one does the inverse (describe graphs as relations). See e.g. RelationalAI's product (https://docs.relational.ai/rel/concepts/graph-normal-form), or how graphs are modeled in e.g. Datalog or Prolog.
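In code, the rows-as-nodes / primary-foreign-key-as-edges construction they describe comes out roughly like this (plain Python, hypothetical schema; a real system would read the key columns off the catalog rather than naming them by hand):

    # Hypothetical normalized tables, keyed rows as dicts.
    customer = {42: {"name": "Ada"}, 7: {"name": "Edgar"}}
    orders   = {1: {"customer_id": 42, "product": "widget"},
                2: {"customer_id": 7,  "product": "gadget"}}

    # Heterogeneous graph: one node per (table, primary key) pair...
    nodes = [("customer", pk) for pk in customer] + [("orders", pk) for pk in orders]

    # ...and one edge per primary-foreign key match. The FK column is named
    # explicitly here; absent declared constraints, it would be whatever join
    # relationship you choose to materialize.
    edges = [(("orders", pk), ("customer", row["customer_id"]))
             for pk, row in orders.items()]

    print(nodes)
    print(edges)  # [(('orders', 1), ('customer', 42)), (('orders', 2), ('customer', 7))]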
Let me help you: SQL and the whole of predicate logic are Prolog from start to end, and also grammars, and also regexes, and also recursion, and also generative.
The thing is to make something actually useful out of it. The stochastic guys figured out how to encode information with their stochastic approach. The relational algebra guys did something with discrete relations, but we've only been scratching the surface of grammars, even though LZW, Sequitur and the like are grammar tasks.
Markov chains included, we've done very little in this regard. Much to expound on here. Patanjali was on it 3k years ago.
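To make the "Sequitur is a grammar task" point concrete, a toy sketch (crude digram replacement in the spirit of Sequitur / byte-pair encoding, not the real algorithm; all names are made up): repeatedly factor the most frequent adjacent pair into a new rule, and the shortened sequence plus the rules form a grammar that regenerates exactly the original string.

    from collections import Counter

    def induce_grammar(seq, max_rules=10):
        """Crude grammar induction by repeated digram replacement
        (in the spirit of Sequitur / BPE, not the real algorithm)."""
        seq = list(seq)
        rules = {}
        for i in range(max_rules):
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs:
                break
            pair, count = pairs.most_common(1)[0]
            if count < 2:
                break  # no repeated digram left to factor out
            nonterminal = f"R{i}"
            rules[nonterminal] = pair
            out, j = [], 0
            while j < len(seq):
                if j + 1 < len(seq) and (seq[j], seq[j + 1]) == pair:
                    out.append(nonterminal)
                    j += 2
                else:
                    out.append(seq[j])
                    j += 1
            seq = out
        return seq, rules

    start, rules = induce_grammar("abcabcabcX")
    print(start)  # the compressed start-symbol expansion
    print(rules)  # each rule rewrites a nonterminal to the pair it replaced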