
I dunno man. I just spent a couple hours trying to get it to write functioning code to read from my RTSP stream, detect if my kid is playing piano, and send the result to HomeAssistant. It did not succeed.
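For what it's worth, the plumbing side of that pipeline is roughly this shape — a sketch assuming OpenCV for frame capture and Home Assistant's REST API, with the stream URL, token, and entity ID all placeholders, and the detection step (the actual hard part) left as a stub:

```python
import requests  # pip install requests

HA_URL = "http://homeassistant.local:8123"     # placeholder Home Assistant address
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # placeholder token
ENTITY_ID = "binary_sensor.kid_playing_piano"   # hypothetical entity name

def looks_like_piano_playing(frame) -> bool:
    """Stub for the detection model -- this is the part the LLM kept failing at."""
    raise NotImplementedError

def build_state_payload(playing: bool) -> dict:
    # Home Assistant's REST API takes a JSON body with a "state" field
    # on POST /api/states/<entity_id>.
    return {"state": "on" if playing else "off"}

def report(playing: bool) -> None:
    requests.post(
        f"{HA_URL}/api/states/{ENTITY_ID}",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json=build_state_payload(playing),
        timeout=5,
    )

def main() -> None:
    # Imported here so the helpers above don't require OpenCV to be installed.
    import cv2  # pip install opencv-python

    cap = cv2.VideoCapture("rtsp://camera.local/stream")  # placeholder RTSP URL
    ok, frame = cap.read()
    if ok:
        report(looks_like_piano_playing(frame))
    cap.release()
```

The glue (read frame, classify, POST state) is maybe thirty lines; reliably detecting "kid playing piano" from a camera frame is where the real work — and apparently the LLM's failure mode — lives.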


How many hours without it?


Not the OP, but in my experience LLMs fail in ways that indicate they will never solve the problem.

They get stuck in loops, correct their mistakes with worse mistakes, and hallucinate things that don't exist without being able to recover.

Working on my own, I have the confidence that I know I can make incremental forward progress on a problem. That’s much preferable.


But when working with an LLM, you can still contribute.


That remains to be seen, as I kept insisting that an LLM should be able to write this successfully in its entirety with "just one more prompt change".



