
Hallucination is less of a problem for programming than for other use cases, because ultimately the program must be run.



Not if the hallucination introduces runtime errors that can't be identified a priori with any sort of static analysis or compilation/interpretation stage.

But no, you're fundamentally right. It just goes to the question of whether an LLM assistant can in any sense replace or displace human programmers, or save time for human programmers. The answer seems to be somewhat, and in certain cases, but not much else.

If I already know the technology I'm querying GPT about, I'm going to spend at least some time identifying its hallucinations or realising that it introduced some. I might have been better off just doing it myself. If I don't know the technology I'm querying GPT about, I'm going to be impacted by its hallucinations but will also have to spend time figuring out what the hallucinations are and why this unfamiliar code sample doesn't work.


A colleague of mine had trouble getting an email from Google Docs into listmonk.

She asked GPT to help get an HTML version, since apparently she got stuck with the WYSIWYG editor.

However, GPT gave back a full HTML document, including head and body tags. Pasting that into listmonk broke the entire webpage. Then she freaked out and told me listmonk sucks :)
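The actual fix was just to paste the fragment inside the body tag, not the whole document. Something like this rough sketch (Python with BeautifulSoup; the file name gpt_output.html is just an assumption for illustration) is what I ended up suggesting to pull out only the part listmonk's editor expects:

    # Extract only the inner HTML of <body> from a full document,
    # since the editor expects a fragment, not a complete
    # <html>/<head>/<body> structure.
    from bs4 import BeautifulSoup

    with open("gpt_output.html", encoding="utf-8") as f:
        soup = BeautifulSoup(f.read(), "html.parser")

    # decode_contents() returns the tag's children as an HTML string
    fragment = soup.body.decode_contents() if soup.body else str(soup)

    with open("fragment.html", "w", encoding="utf-8") as f:
        f.write(fragment)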


It's a huge problem on many levels, but in this case it's so much more time intensive, which diminishes its usefulness.





