[...] one also needs predicates or relations: things that take terms as arguments and have a truth value. Thus 'greater than' is a relation, and 'a > b' is either true or false. Relations can also be things like IsA, HasA, BelongsTo, LivesIn, EmployedAt.
In a biological system, these predicates are encoded in the graph itself. This is important for two reasons. First, this dynamic predicate encoding necessarily contains a representation of what the predicate actually means. Second, it allows graph relationships between the predicates themselves, so it's possible to learn new predicates, abstract higher-level ones, or modify existing ones.
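To make that concrete, here is a rough sketch (my own toy formulation, not anything from a real system) of a triple store in which predicates are ordinary nodes, so the graph can also hold facts about its predicates and grow new ones at runtime:

```python
# Minimal sketch: a triple store where predicates are themselves nodes.
triples = set()

def assert_fact(subj, pred, obj):
    triples.add((subj, pred, obj))

def holds(subj, pred, obj):
    return (subj, pred, obj) in triples

# Ordinary facts: predicates like "IsA" and "LivesIn" label the edges.
assert_fact("fido", "IsA", "dog")
assert_fact("fido", "LivesIn", "london")

# Facts about predicates themselves: "LivesIn" specializes a (hypothetical)
# broader predicate "LocatedIn", and "IsA" is marked as transitive.
assert_fact("LivesIn", "SpecializationOf", "LocatedIn")
assert_fact("IsA", "HasProperty", "transitive")

# Because predicates are nodes, a learner could add or modify them at
# runtime instead of relying on a fixed, hard-coded tag set.
print(holds("fido", "IsA", "dog"))                        # True
print(holds("LivesIn", "SpecializationOf", "LocatedIn"))  # True
```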
Why do I think this is a problem? It's clearly not an issue for narrow AI tasks, but AGIs have to be general and self-organizing. Of course I do agree with the research consensus that we probably have to "cheat" by hardcoding certain things (brains do it too, after all), but maybe tags for node relationships should not be predefined like that if you want a system that can reason freely.
The problem is of course one of practicality. If you want a system you can explicitly program from the outside, there has to be a set of predicates which both the programmer and the AI understand to mean the same thing. However, in doing so you're putting a severe limit on the amount of self-organization a brain can perform. Natural systems are not like that.
In addition, an AGI is likely to be homoiconic, representing its knowledge, its heuristics, its meta-heuristics and meta-meta-heuristics in a flat (non-hierarchical) manner.
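A hedged toy of what I mean (the tuple format here is purely illustrative): facts, heuristics and meta-heuristics all sit in one flat store, so a heuristic can inspect or rewrite another heuristic exactly as it would any other datum.

```python
# Everything lives in one flat list of tuples: ground facts, a heuristic
# ("if these triples match, add this triple"), and a meta-heuristic that
# talks about the heuristic itself.
store = [
    ("fido", "IsA", "dog"),
    ("dog", "IsA", "animal"),
    # A heuristic, stored as data: IsA is transitive.
    ("rule-1", "matches", (("?x", "IsA", "?y"), ("?y", "IsA", "?z"))),
    ("rule-1", "adds", ("?x", "IsA", "?z")),
    # A meta-heuristic, also just data: rule-1 may itself be rewritten.
    ("rule-2", "mayRewrite", "rule-1"),
]

# Heuristics are found by scanning the same store as everything else.
rules = [s for (s, p, o) in store if p == "matches"]
print(rules)  # ['rule-1']
```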
The brain does not have unlimited self-organizational ability either, though - that's why neurological development in the womb is so important: if the proto-brain doesn't get its initial structure just right, we don't wind up with something capable of general reasoning later in its life.
So I don't really think this is a problem - or I'd contend we don't know enough to say it definitely would be.
I think I failed to get my point across, then. If you go back to my original comment, you'll find that priming the developing brain has in fact been addressed and that's not what I mean. I was referring to the concept of hard-coded relationship predicates which make the system easy to program from the outside, but which have the drawbacks I already talked about.
I agree, the way that the system reasons should be configurable by the system itself. Of course you have to give it some fundamental operations (ultimately we operate within the mappings of the universe), but these should be as low-level as possible, allowing the system to build its own composite structures with which to reason.
The qualities of any finite system are finite so "unlimited" doesn't mean much here.
At the same time, one thing that's notable about human intelligence is that, in comparison to source code, the learning and skills gained in one context can very easily be mobilized for use in other contexts (see Winograd and Flores' use of Martin Heidegger's term "ready-to-hand" in their critique of AGI, etc).
And yes, I'd love to put this claim in more exact terms, but that would only be possible if we'd solved the question of "what is intelligence".
Yes, since the problem isn't solved, we can't prove that a lack of flexibility will doom any given system. But I'm still fine pointing out that my human intuition, honed by many years of evolution, says any general intelligence requires far greater flexibility than is evident in most humanly constructed logic systems.
Makes me think of "Artificial Addition" ( lesswrong.com/lw/l9/artificial_addition/ ):
> A human could tell the AA device that "twenty-one plus sixteen equals thirty-seven", and the AA devices could record this sentence and play it back, or even pattern-match "twenty-one plus sixteen" to output "thirty-seven!", but the AA devices couldn't generate such knowledge for themselves.
Basically, concept graphs are better thought of as programming languages with a propensity for hardcoding than as solutions to the AGI problem.
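A toy illustration of the AA point, with invented details: a lookup-table "adder" can only replay the facts it was hard-coded with, while a generative procedure covers every case.

```python
# The AA device: a table of memorized sentences, nothing more.
AA_DEVICE = {("twenty-one", "sixteen"): "thirty-seven"}

def aa_add(a, b):
    # Pattern-match against stored facts; fails on anything unseen.
    return AA_DEVICE.get((a, b), "???")

def real_add(a, b):
    # Generates the answer itself, for any inputs.
    return a + b

print(aa_add("twenty-one", "sixteen"))  # "thirty-seven"
print(aa_add("two", "two"))             # "???"  -- no generalization
print(real_add(2, 2))                   # 4
```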
While I think the hypergraph is a great data structure, and I agree that graph rewrite rules, written as graphs, have a lovely symmetry, this approach feels too high-level to me. Sure, you can have graphs representing both data and process à la "To Dissect a Mockingbird", but it feels that this interesting interplay of data and process is sitting too high up in the hierarchy of structure. I think you want this happening at a very low level, giving more space for emergence (whatever that may be).
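For anyone who hasn't met graph rewriting, here is a rough sketch of a single rewrite rule over a plain triple store (my own toy formulation, not the formalism the article uses): match the pattern {(x, IsA, y), (y, IsA, z)} and rewrite the graph by adding (x, IsA, z).

```python
# A tiny graph as a set of labeled edges (triples).
graph = {("fido", "IsA", "dog"), ("dog", "IsA", "animal")}

def rewrite_transitivity(g):
    """Apply the IsA-transitivity rewrite rule until nothing new appears."""
    changed = True
    while changed:
        changed = False
        for (x, p1, y) in list(g):
            for (y2, p2, z) in list(g):
                if p1 == p2 == "IsA" and y == y2 and (x, "IsA", z) not in g:
                    g.add((x, "IsA", z))   # the rewrite: add the inferred edge
                    changed = True
    return g

print(rewrite_transitivity(graph))
# now also contains ("fido", "IsA", "animal")
```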
There is not enough symmetry; I think the correct solution to this problem is going to look obvious.
I don't think this is an engineering problem, I think it is a radical re-imagining of what intelligence is.
My hope is with reservoir computing.
I think what we need to do is a mashup of reward-modulated Hebbian learning and reservoir-style techniques. We need to take a gigantic sledgehammer and smash apart the incoming stream of data, spray it as far across the space as possible, and then linearly compose the pieces to construct something that looks right. Combine this with Hebbian learning so that those mutated, fragmented pieces of the incoming object which are useful for the purpose of the device are made more likely to occur within the network.
So you need a structure where it is possible to enhance the probability of some perturbation of the data through some global learning rule. Then you need a way of bringing those pieces together to reconstruct either the object itself or an object of use. And you need lots of it, billions of active processing elements and trillions of sparse connections.
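In case it helps, here is a rough numerical sketch of that idea, under a pile of assumptions (the sizes, learning rate, and toy reward signal are all invented): a fixed random reservoir smashes the input across a high-dimensional state space, and only a linear readout is adapted, with a reward-modulated Hebbian (node-perturbation style) rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 200

W_in = rng.normal(0.0, 1.0, (n_res, n_in))                           # fixed input projection
W_res = 0.9 * rng.normal(0.0, 1.0 / np.sqrt(n_res), (n_res, n_res))  # fixed recurrent weights, scaled for stability
w_out = np.zeros(n_res)                                              # the only learned weights
state = np.zeros(n_res)
baseline = 0.0
lr = 1e-3

def step(u, target):
    """One reservoir update plus a reward-modulated Hebbian readout update."""
    global state, w_out, baseline
    state = np.tanh(W_in @ u + W_res @ state)  # nonlinear mixing of input and history
    noise = rng.normal(0.0, 0.1)               # exploratory perturbation of the output
    y = w_out @ state + noise
    reward = -(y - target) ** 2                # toy scalar reward: closer is better
    # Hebbian product (pre = reservoir state, post = the exploratory
    # fluctuation), gated by reward relative to its running average.
    w_out += lr * (reward - baseline) * noise * state
    baseline = 0.99 * baseline + 0.01 * reward
    return y

# Toy usage: nudge the readout toward tracking the sum of the inputs.
for _ in range(1000):
    u = rng.normal(size=n_in)
    step(u, target=u.sum())
```

A real system would need far more units and sparser connectivity, but the division of labor is the point: the reservoir does the smashing, the learning rule only has to pick out which fragments are worth keeping.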
Just my rant, perhaps it will spark something in a mind elsewhere, just passing on the pieces of the puzzle that I have smashed apart in my head.
Perhaps we need to flip this all around: the patterns come from the world, they take root in our heads and use the substrate to evolve, before passing out into the world again. What an AGI needs to do is provide a place for these patterns to take root and evolve according to the AGI's specific objective function...
"So perhaps you’d think that all logic induction and reasoning engines have graph rewrite systems at their core, right? So you’d think. In fact, almost none of them do."
Hmm, not sure about this sentence. The most popular knowledge representation format, RDF, is actually a graph representation. SPARQL is a query language based on subgraph matching. SPIN provides a framework for writing rules on top of that. Couldn't those be used as a starting point instead?
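A small rdflib sketch of what I mean (the namespace and facts are made up): RDF is already a graph of (subject, predicate, object) triples, and a SPARQL query is a subgraph match over it.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.fido, EX.IsA, EX.dog))
g.add((EX.dog, EX.IsA, EX.animal))
g.add((EX.fido, EX.LivesIn, EX.london))

# A two-edge pattern: find every x whose IsA parent itself IsA something.
q = """
PREFIX ex: <http://example.org/>
SELECT ?x ?z WHERE { ?x ex:IsA ?y . ?y ex:IsA ?z . }
"""
for row in g.query(q):
    print(row.x, row.z)  # e.g. fido / animal
```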
If you want to get an interesting introduction to the idea of concept graphs take a look here:
http://xnet.media.mit.edu/docs/conceptnet/install.html
This works with conceptnet4, but I think it's a nice simple introduction that you can run easily. Conceptnet5 is very different from conceptnet4, but the examples for conceptnet4 are a good intro. I was never very successful with using conceptnet5.
Conceptnet data is considered noisy, ambiguous, redundant, etc. by researchers, but I think that describes our own knowledge as well. The problem that we found with conceptnet is that there's just too much knowledge in it, so relationships between concepts were washed out. It's almost like talking to an extremely knowledgeable but boring person with no opinions - we were trying it out to look at relationships between concepts and emotions.
By relationship between an emotion and a concept, I mean the distance in the graph between the concept of the emotion and another concept. For example, 'cake' is closer to happiness in the graph than 'rain'.
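A toy version of that, with invented edges standing in for ConceptNet assertions:

```python
import networkx as nx

# Tiny concept graph; "relationship to an emotion" = shortest path length.
G = nx.Graph()
G.add_edges_from([
    ("cake", "dessert"), ("dessert", "happiness"),
    ("rain", "cloud"), ("cloud", "gloom"), ("gloom", "sadness"),
    ("sadness", "happiness"),
])

print(nx.shortest_path_length(G, "cake", "happiness"))  # 2 hops
print(nx.shortest_path_length(G, "rain", "happiness"))  # 4 hops: 'rain' is farther from happiness
```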
And related to this, you get contradictory relationships between concepts. For example, some people from some cultures will have positive emotions close to 'dog', but other cultures have negative emotions associated with 'dog'. By trying to fill a concept graph with all information it can find (wikipedia, news articles, web crawling, etc) you just get a big mess.
Another problem with conceptnet 4 is that the data is culturally limited - for example, it's obvious that it was filled in by MIT students when you search for concepts related to happiness and getting a good grade on a test ranks higher than kissing a girl.
What would be useful is to develop personal conceptnets by analyzing all communication of that person. If you can include voice communication you can get even better emotional data than you would from just analyzing text. Even better would be to keep a time series of personal conceptnets and blend them over time, and from that you can analyze what external factors changed a person's emotions/concepts. You could even do experiments - apply some external factor and see how they change.
I've been playing around with ConceptNet for a year now, it's extremely impressive but I haven't quite applied it to a domain yet.
I often wonder about the cultural aspect you mention as I worked with a group that was attempting to model space, time and opinion data: i.e. 'dog isA food' differs according to geographic location. The old joke goes: scientists create a super intelligent computer and ask it 'is there a God?', and the computer replies 'there is now'. If future AIs are trained primarily on the English speaking Internet then perhaps God really will be an Englishman. For that to really be the case though the computer's reply would have to actually be a joke.
Perhaps it's time, given people are claiming they've passed the Turing test on dubious grounds for media attention, that a new test is established based on whether the computer can tell a funny joke. Higher level comedy plays with meaning and context, thus to make a joke you have to display mastery over the possible meanings as well as building a mental model of how your interlocutor will process it. Thus a being that can tell a joke is displaying a distinct sophistication. Many humans might fail that one though.