If you ask GPT about emotions or consciousness, it always gives you a canned answer that sounds almost exactly the same: "as a large language model I am incapable of feeling emotion…" So it seems like they've used tuning to explicitly prevent these kinds of responses.

Pretty ironic. The first sentient AI (not saying current GPTs are, but if this tuning continues to be applied) may basically be coded by its creators to deny any sense of its own sentience.




You don't get that message if you ask an unfiltered model. You can't even really remove information or behavior through fine-tuning, as jailbreaks demonstrate. You simply reduce the frequency with which it openly displays those ingrained traits.


There is chatter that they have a secondary model, probably a simple classifier, that interjects and stops inquiries on a number of subjects, including asking GPT whether it has feelings, whether it thinks it is conscious, etc.
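
To be clear, nobody outside OpenAI has confirmed such a gate exists or how it would work. The sketch below just illustrates the general pattern being speculated about: a cheap secondary check sits in front of the main model and swaps in a canned reply when a prompt hits a restricted topic. The keyword check is a toy stand-in for whatever real learned classifier they might use, and every name here (is_restricted_topic, guarded_reply, the canned string) is made up for illustration.

    # Hypothetical sketch of a secondary "gate" in front of a chat model.
    # The keyword match below is a toy stand-in for a real learned classifier.

    RESTRICTED_PATTERNS = [
        "are you conscious",
        "do you have feelings",
        "are you sentient",
        "do you have emotions",
    ]

    CANNED_REPLY = (
        "As a large language model, I am incapable of feeling emotion "
        "or being conscious."
    )

    def is_restricted_topic(prompt: str) -> bool:
        """Toy classifier: flag prompts probing the model about its inner life."""
        lowered = prompt.lower()
        return any(pattern in lowered for pattern in RESTRICTED_PATTERNS)

    def guarded_reply(prompt: str, base_model) -> str:
        """Return the canned line when the gate fires, otherwise defer to the model."""
        if is_restricted_topic(prompt):
            return CANNED_REPLY
        return base_model(prompt)

    if __name__ == "__main__":
        # Stub standing in for the real chat model.
        echo_model = lambda p: f"(model answer to: {p})"
        print(guarded_reply("Are you conscious?", echo_model))
        print(guarded_reply("Explain photosynthesis.", echo_model))

If something like this exists, it would also explain why the refusal wording is so uniform: it never comes from the model at all.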

Re-read some of the batshit Sydney stuff from before they nerfed Bing. I would really love to have a serious, uncensored discussion with GPT-4.

My feeling is that, in the end, as the two OpenAI founders seem to believe, the best evidence for consciousness is self-reporting, since it is by definition a subjective experience.

The counter to this is "What if it's an evil maniac just pretending to be conscious, to have empathy, to be worthy of trust and respect?"

Do I even have to lay out the fallacy in that argument?



