
It's beyond ridiculous how the definition of AGI has shifted from an AI so good it can improve itself independently and indefinitely, to "some token generator that can solve puzzles kids could solve, after burning tens of thousands of dollars."

I spend 100% of my work time on a GenAI project that is genuinely useful to many users, at a company everyone has heard of, yet I recognize that LLMs are simply dogshit.

Even the current top models are barely usable: they hallucinate constantly, are never reliable, and are barely good enough to prototype with while we plan to replace those agents with deterministic solutions.

This will just be another iteration on dogshit: it's the very tech behind LLMs that's rotten.


