Symbolic AI seems to me an inherently flawed concept.

> A more contrived relation to this in real life that I've been thinking about: if a child knocks over a vase, you might be angry at them, and they might have done the wrong thing. But why did they do it? If a child can explain to you that they knew you were afraid of insects, and swung at a fly going by, that can help you debug that social circumstance so you and the child can work together towards better behavior in the future.

Maybe. What if the real reason was a misconfigured Bayesian network in the child's head that mistakenly assigned high probability to seeing a fly when there was only a dust speck, combined with an overreactive heuristic that made them start waving their hands around? Or something like that? There may not be a reason that can be specified in symbols relevant to anything else.

In general, a rational reasoner modelling a probability network would make a decision because all the terms upstream added up to a particular distribution downstream. There are no symbols involved in such a computation, and it would be strange to suddenly include them now.
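
To make that concrete, here's a minimal sketch of the fly/dust-speck case (all numbers hypothetical, purely to illustrate the shape of the computation). The "decision" to swat is nothing but Bayes' rule applied to upstream terms, and a single miscalibrated likelihood is enough to flip it:

    # Hypothetical numbers: the "decision" falls out of upstream
    # probabilities multiplied together. No symbols anywhere.

    def posterior_fly(p_fly, p_motion_given_fly, p_motion_given_speck):
        """Bayes' rule: P(fly | motion) from the upstream terms."""
        evidence = (p_motion_given_fly * p_fly
                    + p_motion_given_speck * (1.0 - p_fly))
        return p_motion_given_fly * p_fly / evidence

    # Well-calibrated network: dust specks appear to "move" almost as
    # often as flies do, so motion is weak evidence of a fly.
    print(posterior_fly(0.05, 0.9, 0.6))   # ~0.07 -> no swatting

    # Misconfigured network: speck motion is deemed unlikely, so any
    # motion screams "fly" and the overreactive heuristic fires.
    print(posterior_fly(0.05, 0.9, 0.05))  # ~0.49 -> hands start waving

Nothing in that trace is a "reason" you could hand back in symbols; it's just arithmetic over distributions.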

Also, if you really sit down and do some serious introspecting, you'll realize that any symbols we assign to explain our thoughts are arbitrary and after-the-fact. Brains don't run on symbols; they generate them later for communication.
