Two large organizations powered by AI have fired all of the ethicists who spoke up about it.
A self-driving research team removed the automatic braking capability from their cars because it was "problematic", resulting in a car killing a woman.
But as I mentioned, the examples I'm mainly thinking of come from the developer community at large. Even on HN, we have pretty prominent folks who view the term "AI safety" with considerable derision.
> Also: tens of millions in donations to "AI safety" organizations. Yes, to answer the Economist; this movement is irretrievable.
> So I'd like to engage AI risk from both these perspectives. I think the arguments for superintelligence are somewhat silly, and full of unwarranted assumptions.
> But even if you find them persuasive, there is something unpleasant about AI alarmism as a cultural phenomenon that should make us hesitate to take it seriously.
AI safety needs to be legitimized as a respectable topic within the technology community.
Perhaps they aren't actively dismissing it, but considering how trivial it is to get ChatGPT to do things it's not supposed to do, the developers behind it clearly gave only passing thought to locking it down. That, IMO, is unacceptable and perhaps even unethical.