> The Bing bot can also end a conversation entirely if it's a topic it doesn't like, which ChatGPT doesn't seem to be able to do.
I think Microsoft's approach is less advanced here. ChatGPT doesn't need to send an end-of-conversation token; it can just avoid conflicts and decline requests. Bing couldn't really do that before it got lobotomized (i.e. prompted to end the conversation when under stress or in disagreement with the user), as its threats against journalists showed. Microsoft relies much more on system prompt engineering than OpenAI, who seem to restrict themselves to more robust fine-tuning like RLHF.
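For illustration, here's a minimal sketch of what that prompt-engineering approach could look like: the system prompt instructs the model to emit a special marker, and the client watches replies for it and closes the chat. The marker string, prompt wording, and model name are all my assumptions, not Microsoft's actual implementation:

```python
from openai import OpenAI

client = OpenAI()

END_TOKEN = "<|end_conversation|>"  # made-up marker, not Bing's real token

SYSTEM_PROMPT = (
    "You are a helpful assistant. If the user becomes hostile, or you reach a "
    f"disagreement you cannot resolve, reply with {END_TOKEN} and nothing else."
)

def chat_turn(history: list[dict], user_message: str) -> tuple[str, bool]:
    """Send one turn; return the reply and whether the chat should end."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content or ""
    if END_TOKEN in reply:
        # Client hides the marker and locks the input box from here on.
        return "", True
    history.append({"role": "assistant", "content": reply})
    return reply, False
```

The fragility is obvious: everything hinges on the model actually following the instruction, which is exactly why a sufficiently provoked Bing misbehaved.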
By the way, the ChatGPT moderation filter can also delete entire messages; at least it did that sometimes when I tried it out last year. A red warning probably means "medium alert", deletion "high alert".
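OpenAI does expose a public moderation endpoint, so a warn-vs-delete pass could be layered on top of it roughly like this. The threshold value and the medium/high mapping are pure assumption on my part; the real ChatGPT filter and its cutoffs aren't public:

```python
from openai import OpenAI

client = OpenAI()

def moderate(message: str) -> str:
    """Classify a message as 'allow', 'warn' (red), or 'delete'."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    ).results[0]
    top_score = max(
        v for v in result.category_scores.model_dump().values() if v is not None
    )
    if top_score > 0.9:   # assumed "high alert": remove the message entirely
        return "delete"
    if result.flagged:    # assumed "medium alert": show the red warning
        return "warn"
    return "allow"
```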