
In theory, if you have some people who know what they're doing, they could design enough different kinds of world-model tests that failing them would significantly reduce the likelihood that the LLM has a world model.
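
A minimal sketch of that intuition (my own illustration, not from the thread): treat each test outcome as Bayesian evidence. The pass rates below (0.9 with a world model, 0.3 without) are made-up assumptions; the point is only that repeated failures push P(world model) toward zero without ever reaching it.

    def bayes_update(prior: float, p_pass_if_model: float,
                     p_pass_if_not: float, passed: bool) -> float:
        """Update P(has world model) after one test outcome."""
        like_model = p_pass_if_model if passed else 1 - p_pass_if_model
        like_not = p_pass_if_not if passed else 1 - p_pass_if_not
        evidence = like_model * prior + like_not * (1 - prior)
        return like_model * prior / evidence

    # Hypothetical numbers: a model with a world model passes each test
    # 90% of the time; one without passes 30% of the time by pattern-matching.
    p = 0.5
    for outcome in [False, False, True, False, False]:  # mostly failed tests
        p = bayes_update(p, 0.9, 0.3, outcome)
        print(f"P(world model) = {p:.3f}")
    # The probability falls sharply but never hits 0: testable, not falsifiable.

No single test settles the question, but diverse tests accumulate into strong evidence either way, which is the sense in which the claim is testable.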

I think I would word the distinction I'm drawing as: "it is technically unfalsifiable, but it is not untestable."


