Inconsistent axiomatic systems can prove anything. So either Haskell programs can prove the limits (and also prove the opposite statement, since the system is inconsistent), or they aren't axiomatic systems in the sense you mean. Either possibility contradicts your statement:

> Chaitin's incompleteness theorem means AI cannot learn
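
A minimal sketch of the first horn, in Haskell itself (the identifiers are illustrative, nothing here is from the thread): under the Curry-Howard reading types are propositions and programs are proofs, and because Haskell permits unrestricted recursion every type is inhabited, so a proposition and its negation are both "provable", which is exactly what inconsistency means.

    import Data.Void (Void)   -- the empty type, standing in for falsehood

    -- General recursion inhabits every type, so Haskell-as-a-logic is inconsistent:
    proofOfAnything :: a
    proofOfAnything = proofOfAnything   -- type-checks, but never yields a value

    -- Hence any proposition p and its negation (p -> Void) are both "provable":
    yes :: p
    yes = proofOfAnything

    no :: p -> Void
    no = proofOfAnything

The same point is usually made with undefined :: a; total languages such as Agda or Coq reject these definitions, which is what keeps their type systems usable as consistent logics.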




If programs are not axiomatic systems, then they cannot learn, period. Only programs with consistent axioms have any hope of learning, and even then what they can learn is severely limited.
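
For reference, the "severely limited" part is Chaitin's incompleteness theorem, stated roughly as follows (K is Kolmogorov complexity; the constant depends on the theory and on the choice of universal machine, neither of which is fixed in this thread):

    For any consistent, computably axiomatizable theory T that interprets
    enough arithmetic, there is a constant c_T such that

        T ⊬ "K(s) > c_T"    for every string s,

    even though K(s) > c_T in fact holds for all but finitely many strings s.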



