Hacker News

Wow. I sometimes have LLMs read and review a paper before I decide to spend my time on it. One issue I run into is that the LLMs often just regurgitate the authors' claims of significance and their explanations of why any limitations are not that damning. However, I haven't spent much time with serious system prompts like this.
