
You should just need a better prompt. I think everyone would benefit from using a standardized prompt that asks the model to think through its work between `<thought>` tags before writing its response, reflect on that reasoning between `<reflection>` tags, and then output the final response afterwards.
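A minimal sketch of what that kind of prompt and tag-parsing could look like, in Python. The exact wording of the system prompt, the `<output>` tag for the final answer, and the `extract_final_answer` helper are all illustrative assumptions, not something specified in the comment above:

```python
import re

# Hypothetical system prompt implementing the thought/reflection structure
# described above; the exact wording and the <output> tag are assumptions.
SYSTEM_PROMPT = """Before answering, think through the problem inside <thought> tags.
Then critique that reasoning inside <reflection> tags.
Finally, write only the final answer inside <output> tags."""


def extract_final_answer(raw_response: str) -> str:
    """Strip the <thought>/<reflection> scratchpad and return only the final answer.

    Falls back to the full response if the model ignored the tag format.
    """
    match = re.search(r"<output>(.*?)</output>", raw_response, re.DOTALL)
    return match.group(1).strip() if match else raw_response.strip()


if __name__ == "__main__":
    # Example of the kind of response such a prompt is meant to elicit.
    raw = (
        "<thought>The user wants 12 * 13; 12 * 13 = 156.</thought>"
        "<reflection>Check: 12 * 10 = 120, 12 * 3 = 36, 120 + 36 = 156.</reflection>"
        "<output>156</output>"
    )
    print(extract_final_answer(raw))  # prints: 156
```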

Easy, just instruct the LLM to not hallucinate the score, problem solved.


