Well, sure, but the problem is that LLMs can’t reason and revise, architecturally. Perhaps we can chain together a system that approximates this, but it still wouldn’t be the LLM doing the reasoning itself.
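As a purely hypothetical sketch of what such a chained system might look like (the `call_llm` function and the loop structure here are assumptions for illustration, not anything the comment specifies): the model only ever completes prompts, while the looping, the stopping decision, and the memory of earlier drafts live in the wrapper outside the model.

```python
from typing import Callable

def reason_and_revise(call_llm: Callable[[str], str], task: str, rounds: int = 3) -> str:
    """Draft an answer, then repeatedly critique and revise it.

    `call_llm` is a placeholder for whatever completion API is available;
    the "reasoning" loop itself is orchestration around the model, not
    something the model does architecturally.
    """
    draft = call_llm(f"Answer the following task:\n{task}")
    for _ in range(rounds):
        critique = call_llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "List any errors or gaps in the draft. Reply DONE if there are none."
        )
        if critique.strip() == "DONE":
            break
        draft = call_llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            f"Critique:\n{critique}\n\nRewrite the draft to address the critique."
        )
    return draft
```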

