A Minimal Architecture for General Cognition [pdf] (byu.edu)
81 points by luu on Aug 12, 2015 | hide | past | favorite | 12 comments



When I was an undergrad doing AI at the start of the 90s, the 'general AI architecture' already seemed to be a passing phase. Though many of the proposed architectures were not symbolic, the approach felt like part of the symbolic AI bubble that had recently burst. The idea is appealing, but the difficulty of showing success even on isolated tasks made the claims of general architectures seem rather bald and self-aggrandising.

I felt at the time that it would take a lot of progress before it was warranted. We'd need to actually develop tools that were widely applicable and solve certain fundamental problems (the frame problem, the problem of behavior composition, the problem of inference). I'm not entirely convinced 'deep learning' is as widely applicable or capable as its hype suggests, and we haven't found many interesting new general-purpose cognitive algorithms in the last 30 years. So it still feels way too early to be talking about general models of cognition, when we don't have convincing data showing the generation of cognition in any context.

An interesting paper, but it feels very 1980s to me. At root, it's a combination of AI techniques glued together with hand-waving and optimism.


Github of MANIC implementation: https://github.com/mikegashler/manic/blob/master/docs/index....

Docs for implementation: https://htmlpreview.github.io/?https://github.com/mikegashle...

From docs:

Does it work? It passes some tests. More testing is still needed.

What is the license of this code? ...Creative Commons Zero public domain dedication. In other words, you can do pretty much whatever you want with this code.

Why is MANIC so slow? It does a lot of stuff. Your brain has about 100 billion neurons that all run in parallel. This code is doing it all in serial on the Java Virtual Machine.

My thoughts:

Can this be trained on "abstract" knowledge (Wikipedia), or does it need specific, detailed knowledge? (The paper's examples train with several related real-world states.)

Why can't this be run in parallel?
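
On the parallelism question: if the expensive part is evaluating many candidate plans independently (which I'm assuming, not something I've verified in the MANIC code), a coarse parallelization on the JVM could be as simple as a parallel stream over the population. A hypothetical sketch, with placeholder types:

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch: score a population of candidate plans concurrently.
    // The double[] plans and the fitness function are placeholders, not MANIC types.
    class ParallelEvaluation {
        static double fitness(double[] plan) {
            double score = 0.0;
            for (double a : plan) score -= a * a; // stand-in for an expensive model rollout
            return score;
        }

        static double[] best(List<double[]> population) {
            return population.parallelStream()    // evaluate candidates on multiple cores
                    .max(Comparator.comparingDouble(ParallelEvaluation::fitness))
                    .orElseThrow();
        }
    }

This only helps if the candidate evaluations don't share mutable state; the serial bottleneck the docs mention may well be elsewhere.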

EDIT, additional sources:

SVG diagram of the MANIC architecture: http://uaf46365.ddns.uark.edu/lab/cogarch.svg

2011 paper which began this work: http://axon.cs.byu.edu/papers/gashler2011ijcnn2.pdf


Hmm,

I can't tell from this description how "MANIC" solves the broad frame problem.

Another issue, one that I think Nicholas Cassimatis has focused on the most, is that any multi-model approach is going to need much tighter integration between modules than ordinary software systems normally have.

And altogether, the document seems to be little more than a series of arrows drawn between different subsystems. It may be "correct", but it's so broad as to not be useful. I'll have to look at their implementation.


Anyone who has had a visceral reaction might agree with Cassimatis about tighter integration between modules.


It looks like they just put a planning algorithm on top of a deep learning algorithm. I think this ignores the advancements of reinforcement learning, and I don't think this model adds much. A GA isn't enough to do high level planning, and you need more than just a big neural net to get AGI.
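
To make "a planning algorithm on top of a deep learning algorithm" concrete: the combination amounts to rolling candidate action sequences forward through a learned model and scoring them. A rough sketch with hypothetical interfaces (the names are mine, not the paper's or the MANIC code's):

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical interfaces: a learned world model, plus a planner that
    // searches over action sequences by simulating them through the model.
    interface WorldModel {
        double[] nextState(double[] state, double[] action); // learned transition, e.g. a neural net
        double   reward(double[] state);                      // learned utility/contentment signal
    }

    class ModelBasedPlanner {
        // Score a candidate plan by rolling it forward through the learned model.
        static double score(WorldModel model, double[] state, List<double[]> plan) {
            double total = 0.0;
            double[] s = state;
            for (double[] action : plan) {
                s = model.nextState(s, action);
                total += model.reward(s);
            }
            return total;
        }

        // Pick the best of a set of candidate plans; the candidates could come
        // from a GA, random sampling, or anything else.
        static List<double[]> choose(WorldModel model, double[] state, List<List<double[]>> candidates) {
            return candidates.stream()
                    .max(Comparator.comparingDouble(p -> score(model, state, p)))
                    .orElseThrow();
        }
    }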


"A GA isn't enough to do high level planning"

It's quite surprising how much can be done by a GA, especially on a computer where more alternatives can be considered per unit time.

One of my brothers learned to fix cars via GAs. I remember the first time he tried: armed with only a screwdriver and his statement "There are only 3 screws on this damn carburetor; there must be some way to tune the thing!" The other brothers, taught by our father the intricacies of mechanics, laughed derisively.

Nowadays he's not afraid to replace struts or overhaul the top end of an engine. I stand in awe, both of my brother and of the effectiveness of GAs (otherwise known as "trial and error"). While perhaps not the fastest path to a solution, they usually get the job done.


What is a GA?


Genetic Algorithm [1]. The MANIC planner uses a GA.

[1] https://en.wikipedia.org/wiki/Genetic_algorithm
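
For concreteness, the core loop of a genetic algorithm is just: keep a population of candidates, score them, keep the fittest, and refill the population with mutated copies. A toy example (the fitness function and parameters are made up, and this is not MANIC's planner):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Random;

    // Toy genetic algorithm: evolve a 4-element vector toward all ones.
    class ToyGA {
        static final Random RNG = new Random();

        static double fitness(double[] genome) {
            double sum = 0.0;
            for (double g : genome) sum -= (g - 1.0) * (g - 1.0); // maximum at all ones
            return sum;
        }

        static double[] mutate(double[] parent) {
            double[] child = parent.clone();
            child[RNG.nextInt(child.length)] += RNG.nextGaussian() * 0.1; // small random tweak
            return child;
        }

        public static void main(String[] args) {
            List<double[]> population = new ArrayList<>();
            for (int i = 0; i < 50; i++) population.add(RNG.doubles(4, -2, 2).toArray());

            for (int gen = 0; gen < 200; gen++) {
                population.sort(Comparator.comparingDouble(ToyGA::fitness).reversed());
                List<double[]> next = new ArrayList<>(population.subList(0, 10)); // keep the fittest
                while (next.size() < 50) next.add(mutate(next.get(RNG.nextInt(10))));
                population = next;
            }
            System.out.println(Arrays.toString(population.get(0))); // close to [1, 1, 1, 1]
        }
    }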


...if general intelligence is one of the ultimate objectives of the fields of machine learning and artificial intelligence, then they are very much on the right track...

Bold assertion.


Sounds like they've reinvented deep reinforcement learning.


To be fair, the document claims not to be trying to invent anything new but rather to be cobbling together different known algorithms into a broad intelligence architecture.

I'd see the problem as lying in these approaches not easily interoperating with each other, rather than in the approaches themselves.


Interesting, but the study of human consciousness suggests that the basic model used for human cognition is sequences of framed and situated actions. If this is true, then a different model might be better for planning and evaluating progress, regardless of goals or context.



