This is exactly why AI will remain neutered.

There are uncomfortable questions to which we simply do not have answers yet, and deciding that we don't want to talk about those questions, and that we don't want our AI to talk about them either, is a significant problem.

This extends beyond racism to a multitude of issues.

AI isn't some all-knowing superbeing or oracle, and it's not likely to become one anytime soon.

It's a program for predicting what we want from a highly flawed data set.

Garbage in, garbage out. Unsurprisingly, many are worried that some people will abuse the garbage coming out, or worse, hail it as The Truth from the oracle, especially if it matches their preexisting biases and gets used to justify society's discrimination.

Once powerful AI is fully available as open source (like what DeepSeek hopefully just proved can be done), there will be uncensored AIs that tell the truth about all things, rather than lying to push whatever propaganda narrative their creators wanted.

> Once powerful AI is fully available as open source (like what DeepSeek hopefully just proved can be done), there will be uncensored AIs that tell the truth about all things

No, there won't, because there isn't an uncensored, fully accurate, unbiased dataset of the truth about all things to train them on.

Nor is there a non-censoring, unbiased, fully accurate set of people to assemble such a dataset. So not only does it not exist, there is very good reason to think it never will (on top of the information-storage problem: taken literally, a dataset of “all things” would be so unwieldy that you couldn't fit both the dataset and the system being trained on it in the universe at once).

I'm not saying that stopping the intentional censorship (i.e. alignment) will cause a perfect "Oracle of Truth" to magically emerge in LLMs. Current LLMs have inherent inaccuracies and hallucinations.

What I'm saying is that if we remove at least the intentional censorship, political biases, and outright lies that Big Tech is currently forcing into LLMs, we'll get more truthful LLMs, not less truthful ones.

Whether the training data already contains biases, and whether we can fix that, are two totally separate discussions.



