
Humans are capable of principled reasoning only in domains where they have expertise. In domains where we lack formal training, we don't actually do full causal reasoning; we rely on all sorts of shortcuts similar to what LLMs are doing.

Consider AlphaTensor or the other systems in the Alpha family: they show that a feedback signal can train a model to superhuman levels.
