>When Bing generates a "bad" response, Bing will actually delete the generated text instead of just highlighting it red, replacing the offending response with some generic "Let's change the topic" text.
It deletes in more cases than that. Last time I tried Bing's bot, it started writing code when I asked for it, then deleted it and wrote something else.
OpenAI is going for mass RLHF feedback, so they might feel the need to scold users who have no-no thoughts, and potentially use their feedback in a modified way (e.g. invert their ratings if they think those users are bad actors). Microsoft, by contrast, doesn't really care and just wants to forget it happened (and after Tay, I can't say I blame them).