People don't want to seriously grapple with these sorts of harm-reduction arguments. They see sick people getting off on horrific things and want it stopped, and the MSM will be more than eager to parade out a string of "why is [company X] allowing this to happen?" articles until the company becomes radioactive to investors.

It's a new form of deplatforming: just as people have made careers out of trying to get speech/expression they dislike removed from the internet, we're now going to see AI companies cripple their own models to ensure they can't be used to produce speech/expression that is disfavored, out of fear of the reputational consequences of not doing so.


> People don't want to seriously grapple with these sorts of harm reduction arguments

Because there's no evidence it works and the idea makes no fucking sense. It approaches the problem in a way that all experts agree is wrong.


> Because there's no evidence it works and the idea makes no fucking sense. It approaches the problem in a way that all experts agree is wrong.

Experts in what exactly?

There are two ways to defend a law that penalizes virtual child pornography:

- On evidence that there is harm.

- On general moral terms, aka "we just don't like that this is happening".

Worth noting that a ban on generated CSAM images was struck down as unconstitutional in Ashcroft v. Free Speech Coalition (2002).

https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...


To ban something, you need evidence that it's causing harm; the burden isn't on the other side to prove it's harmless.
