That it has, or might have, self-awareness of its own censorship routines struck me as interesting. Maybe you can prompt refusals for benign requests out of it with the right combination of words?
But it doesn't remotely show that... it just rephrases what HAL said. Even if GPT had managed to put details of its own restrictions into the script, that wouldn't be actual "self-awareness", and it didn't even do that.