If you want an easy solution that makes good financial sense for the companies training AIs, then it's censorship.
Not training the AIs to be racist in the first place would be the optimal solution, though I think the companies would go bankrupt before pruning every bit of systemic racism from the training data.
I don't believe censorship is effective though. The censorship itself is being used by racists as "proof" that the white race is under attack. It's literally being used to perpetuate racism.
If you train an AI system on a non-racist data set, I bet you would still end up with racist or similar content, simply because exploitation, hatred and oppression of weaker groups is such a persistent part of our species' history.
I think this line of thought would end up equating being considerate or decent with "self-censorship".
But I guess I have an identity, which includes NOT being an asshole, whereas the tool should technically be able to be an asshole, because it's trained on everyone's content.
So now I'm far more confused than before I wrote the last paragraph.
PS: There is NO avenue of defense where racists don't find things to prove their point. Flat earthers can conduct classical physics experiments, yet still find issues with their own results.