Hacker News

Lol. After 5-6 questions it started doing the opposite of the response I chose. I chose A every time, and it started forcing me onto option B. Classic LLM success.



This sounds like a weird Neal Stephenson plot about the nature of choice in an AI-controlled artificial world.


It’s doing so because the entire history of choices is included in every prompt. If you called the API directly and trimmed the context to include only the assistant messages, not the user messages, the behavior would stop.
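A minimal sketch of that fix, assuming an OpenAI-style chat-message list (the role names and the adventure-game transcript here are illustrative, not from the original site):

```python
def strip_user_choices(history):
    """Return the transcript with user turns removed, keeping
    system and assistant messages only, so the model can't
    pattern-match against the player's past A/B picks."""
    return [m for m in history if m["role"] != "user"]

history = [
    {"role": "system", "content": "You run a choose-your-own-adventure game."},
    {"role": "user", "content": "A"},
    {"role": "assistant", "content": "You take path A into the forest..."},
    {"role": "user", "content": "A"},
    {"role": "assistant", "content": "Once more you choose A and press on..."},
]

# Send `context` (plus the new user turn) to the API instead of `history`.
context = strip_user_choices(history)
```

The model still sees its own narration, so the story stays coherent, but the run of identical user choices is no longer in the prompt to react against.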



