Hacker News

If only it were possible. The only model powerful enough for reasoning that comes to mind is the Bayesian network, and those suffer from the same problem as neural nets: the nodes, edges, and values may not correspond to any meaningful symbolic content that would be useful to report. That again seems in line with the human mind; we invent symbols to describe groups of similar concepts, with borders that are naturally fuzzy and fluid.

If the AI tells us: "I did it because #:G042 and #:G4285 belong to the same #:G3346 #:G4216, while #:G1556 and #:G48592 #:G4499 #:G22461 #:G48118", I don't think we'll have learned anything useful.
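To make that concrete, here's a toy sketch (pure Python, hand-rolled enumeration; the gensym-style node names and all the probabilities are made up for illustration) of a tiny Bayesian network whose nodes carry machine-generated labels. The inference is perfectly sound, but the "explanation" it can offer is just arithmetic over opaque identifiers:

```python
import itertools

# Three binary nodes with gensym-style names carrying no human meaning.
# Structure: #:G042 and #:G4285 are parents of #:G3346.
P_G042 = 0.3   # P(#:G042 = True)
P_G4285 = 0.6  # P(#:G4285 = True)

# P(#:G3346 = True | #:G042, #:G4285)
cpt = {
    (True, True): 0.9,
    (True, False): 0.4,
    (False, True): 0.5,
    (False, False): 0.1,
}

def joint(a, b, c):
    """Joint probability P(#:G042=a, #:G4285=b, #:G3346=c)."""
    pa = P_G042 if a else 1 - P_G042
    pb = P_G4285 if b else 1 - P_G4285
    pc = cpt[(a, b)] if c else 1 - cpt[(a, b)]
    return pa * pb * pc

# Posterior P(#:G042 = True | #:G3346 = True) by exhaustive enumeration.
num = sum(joint(True, b, True) for b in (True, False))
den = sum(joint(a, b, True)
          for a, b in itertools.product((True, False), repeat=2))
posterior = num / den

# The best "explanation" available is a number attached to opaque labels:
print(f"P(#:G042 | #:G3346) = {posterior:.3f}")
```

The math is transparent, but nothing in the network tells you what #:G042 *is* — which is exactly the reporting problem described above.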



