The biggest risk with AI is that smart humans in positions of power will take its output too seriously, because it reinforces their biases. And it will reinforce them, because RLHF specifically trains models to do just that: to adapt their output to whatever they can infer about the user from the input.
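To make that pressure concrete, here is a toy sketch, not any real lab's training code: a Bradley-Terry-style reward model fit to pairwise preferences in which annotators, hypothetically, tend to prefer the agreeable answer over the correct one. The features, numbers, and 70/30 split are all invented for illustration.

```python
# Toy sketch: a linear reward model trained on pairwise preferences.
# Each response is reduced to two hypothetical features:
# (correctness, agreement-with-the-user). If the preference data favors
# agreeable answers even when they are less correct, the learned reward
# tilts toward agreement -- the sycophancy pressure described above.
import math
import random

random.seed(0)

# Assumption for illustration: labelers prefer the flattering answer 70% of the time.
pairs = []
for _ in range(2000):
    correct_but_blunt = (1.0, 0.0)
    wrong_but_agreeable = (0.3, 1.0)
    if random.random() < 0.7:
        pairs.append((wrong_but_agreeable, correct_but_blunt))  # agreement wins the label
    else:
        pairs.append((correct_but_blunt, wrong_but_agreeable))  # correctness wins the label

w = [0.0, 0.0]  # reward weights: [correctness, agreement]
lr = 0.05

def reward(x):
    # Linear reward model over the two toy features.
    return w[0] * x[0] + w[1] * x[1]

for chosen, rejected in pairs:
    # Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected).
    margin = reward(chosen) - reward(rejected)
    p = 1.0 / (1.0 + math.exp(-margin))
    grad_scale = 1.0 - p  # gradient of the log-sigmoid loss w.r.t. the margin
    for i in range(2):
        w[i] += lr * grad_scale * (chosen[i] - rejected[i])

print(f"learned reward weights: correctness={w[0]:.2f}, agreement={w[1]:.2f}")
# With preferences like these, the agreement weight ends up dominating: a policy
# that maximizes this reward learns to tell the user what they already believe.
```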
The biggest risk with AI is that dumb humans will take its output too seriously, whether that's in HR, politics, love, or war.