Do we know for sure that LLMs can't possibly be doing symbolic reasoning of some sort, internally? I thought we couldn't really interpret them yet.