
I feel that "category error" means "I think that the definitions you use or imply are wrong, but I can't/won't elaborate."

LLMs aren't coffee makers. They were trained on an internet's worth of human data, and they can imitate all kinds of personalities. RLHF then steers the network toward one specific imitated personality (usually a helpful assistant).

The question is not "do they have it" but "how close is the imitation to the real thing, given the limitations of LLMs?"
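
As a toy illustration (emphatically not the real training loop; the styles, rewards, and beta below are invented for the sketch): the KL-regularized objective RLHF optimizes, max_p E_p[reward] - beta * KL(p || base), has the closed-form optimum p(i) proportional to base(i) * exp(reward(i) / beta). In other words, it reweights the pretrained distribution toward whatever raters preferred, without ever leaving that distribution entirely:

    import math

    # Toy model: three "personality" styles instead of a token vocabulary.
    styles = ["sarcastic", "helpful", "terse"]
    base   = [0.4, 0.2, 0.4]    # pretrained (internet-text) distribution
    reward = [-1.0, 1.0, 0.0]   # human raters prefer "helpful"
    beta   = 0.5                # KL penalty: stay close to the base model

    # Closed-form optimum of  max_p E_p[reward] - beta * KL(p || base):
    # p(i) is proportional to base(i) * exp(reward(i) / beta).
    weights = [b * math.exp(r / beta) for b, r in zip(base, reward)]
    total = sum(weights)
    tuned = [w / total for w in weights]

    for s, b, t in zip(styles, base, tuned):
        print(f"{s:>9}: base={b:.2f} -> tuned={t:.2f}")

The "helpful assistant" persona ends up dominating (~0.76 here), but the other styles never fully disappear, which is one way to read "RLHF moves the network toward a specific imitated personality".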




> I feel that "category error" means "I think that the definitions you use or imply are wrong, but I can't/won't elaborate."

I think the rest of your comment motivates quite nicely why I call it a category error. The word 'personality', for me, is connected with how things like emotion, temperament, and so on impact a person's actions and thoughts.

Therefore, saying that an object, or an LLM, has a 'real' personality is a category error. The LLM doesn't have any of those things. As you say, it imitates what personality often manifests as: word choices, tone, length of response, and so on.

To be clear, when I refer to things like 'emotion' and 'temperament', I mean the sort of qualia or qualitative experience that we usually attach to these words. I wouldn't accept a "ChatGPT, act as if you're sad for the rest of this conversation" prompt as a substitute for emotion, for instance.



