Lol. After 5-6 questions it started giving the opposite response to the one I chose. I chose A every time, and it started forcing me toward the B option. Classic LLM success.
It’s doing that because it includes the entire history of choices in every prompt. If you use the API directly and modify the request so it only includes the assistant messages and not the user messages, that behavior would stop.
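Roughly what that would look like, sketched with the OpenAI Python SDK (the model name and helper are just placeholders, not what this app actually uses):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = []  # full local transcript: alternating user/assistant messages


def ask(prompt: str) -> str:
    # Build the request from assistant messages only, dropping the
    # user's earlier choices so they can't bias the next response.
    filtered = [m for m in history if m["role"] == "assistant"]
    messages = filtered + [{"role": "user", "content": prompt}]

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=messages,
    ).choices[0].message.content

    # Still record both sides locally so the full transcript is kept.
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": reply})
    return reply
```

The model still sees its own prior answers (so it stays coherent), but it never sees which option you picked, so it can't start pattern-matching against your choices.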