
They simply don't work that way. If you ask it for an answer, it will give you one, since all it can do is extrapolate from its training data.

Good prompting and certain adjustments to the text generation parameters might help prevent hallucinations, but it's not an exact science, since it depends on how the model was trained. Also, frankly, an LLM's training data contains a lot of bulls*t.
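
For illustration, here's a minimal sketch of what "adjusting the text generation parameters" can mean in practice, using Hugging Face transformers. The model choice and the specific values are my own assumptions; lower temperature and nucleus sampling only make the output more conservative, they don't guarantee it's factual.

    # A minimal sketch, assuming Hugging Face transformers and a small causal LM.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # model choice is illustrative
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=True,
        temperature=0.3,  # lower temperature -> less random token choices
        top_p=0.9,        # nucleus sampling: keep only high-probability tokens
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))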



