
Basically this.

The mistake often made here is thinking that an LLM emitting verbiage about crimes is some sort of problem in itself, as if there were any conceivable way for the words to feed back into the LLM. As if, because it pretends to be a pirate, the LLM might sail to Somalia and start boarding oil vessels.

It isn't. It's a problem for OpenAI, entirely because they've decided they don't want their product talking about crimes. That makes sense for them (who needs the bad press?), but the premise that a chatbot describing how to board a vessel off the Horn of Africa poses a problem, relative to aspiring human pirates simply watching documentary films on the subject, is nonsense to begin with.

The liberal proposition that words do not constitute harm was, and remains, a radical one, and recent social mores have backed away substantially from it. We suffer for that retreat in many arenas; the nonsense discourse around "chatbot scary word harm" is a very minor example.


>The liberal proposition that words do not constitute harm was and is a radical one, and recent social mores have backed away substantially from that proposition.

We have somehow reached a point where the label of "liberal" gets attached to people arguing the exact opposite.

