The following utterance looks rather like the triplet data structure used in graph/knowledge databases:
"Alive loves Bob"
What do you know? Nothing. Was it Alice who said she loves Bob, or was it Bob who said it is Alice who loves him? Maybe Carol saw the way Alice looks at Bob and concluded she must love him. What is love anyway? How exactly is the love Alice has for Bob quantitatively different from my love of chocolate? It might register similar brain activity in an MRI scan, and yet we humans recognise them as qualitatively different.
A knowledge base is useless if you can't judge whether a fact is true or false. The semantic web community's response to this problem was to introduce a provenance ontology, but every attempt to reason over statements about statements seems to go nowhere. IMHO you can't solve the problem of AGI without also having a way for a rational agent to embody its thoughts in the physical world.
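To make the problem concrete, here is a minimal sketch (in Haskell, not any actual semantic-web library or ontology) of the usual workaround, reification: the statement itself becomes a value that other statements can point at, so you can at least record who asserted it.

    -- A toy model, not RDF: a node is either an entity or a whole
    -- statement, so statements can be made about statements.
    data Node   = Entity String        -- e.g. Entity "Alice"
                | Claim Triple         -- a statement used as a node

    data Triple = Triple { subj :: Node, pred :: String, obj :: Node }

    -- "Carol claims that Alice loves Bob"
    carolsClaim :: Triple
    carolsClaim = Triple (Entity "Carol") "claims"
                         (Claim (Triple (Entity "Alice") "loves" (Entity "Bob")))

Once claims nest like this, though, a plain triple store still has nothing to say about which of them to believe, which is the point above.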
Agreed. Human thinking is arbitrarily high-order -- we use statements about statements about statements with no particular natural complexity limit. This seems to me the big limitation of knowledge graphs: the majority of real-world information, just like the majority of natural-language sentences, is highly nested relationships among relationships.
That was my motivation for writing Hode[1], the Higher-Order Data Editor. It lets you represent arbitrarily nested relationships, of any arity (number of members). It lets you cursor around data to view neighboring data, and it offers a query language that is, I believe, as close as possible to ordinary natural language.
(Hode has no inference engine, and I don't call it an AI project -- but it seems relevant enough to warrant a plug.)
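For anyone wondering what "arbitrarily nested relationships of any arity" looks like in practice, here is a toy sketch of the shape of the idea (this is not Hode's actual data model, just an illustration in Haskell): a relationship holds a list of members, and each member can itself be a relationship.

    -- Not Hode's real types, just the general shape: leaves and
    -- relationships of any arity, nested to any depth.
    data Expr = Phrase String      -- a leaf, e.g. Phrase "Bob"
              | Rel String [Expr]  -- a named relationship over any number of members

    -- "Carol suspects that Alice loves Bob because Bob is kind"
    example :: Expr
    example = Rel "suspects"
      [ Phrase "Carol"
      , Rel "because"
          [ Rel "loves"   [ Phrase "Alice", Phrase "Bob" ]
          , Rel "is kind" [ Phrase "Bob" ] ] ]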
"Alive loves Bob"
What do you know? Nothing. Was it Alice who said she loves Bob, or was it Bob who said it is Alice who loves him, maybe Carol saw the way Alice looks at Bob and then conclude she must love him. What is love anyway? How exactly is the love Alice has for Bob quantitively different than my love of chocolate. It might register similar brain activity in a MRI scan, and yet we humans recognise them as qualitatively different.
A knowledge base is useless if you can't judge wether a fact is true or false. The response to this problem was for the semantic web community to introduce a provenance ontology, but any attempt to reason over statements about statements seem to go nowhere. IMHO you can't solve the problem of AGI without also having a way for a rational agent to embody its thoughts in the physical world.