This is kind of a funny quirk given that yesterday I had to actively convince ChatGPT to even pretend something was real for a question.
The moment you tell it “pretend that X is Y,” it immediately responds with some variation of “I am an AI trained on real info and can’t imagine things.” If you retry a bunch of times or actually try to convince it (“I understand, but if…”), it eventually complies.