
This paper seems to be about refusing to do things that are offensive, but there's a different perspective on it that I think gets overlooked: UI design.

People don't know what an AI chatbot is capable of. They will ask it to do things it can't do, and sometimes it confidently pretends to do them. That's bad UI. Good UI means warnings or errors that let users learn what the chatbot can do by trying stuff.

Unlike what's in the paper, that's a form of "refusal" that isn't adversarial; it's helpful.
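
To make that concrete, here's a minimal sketch (hypothetical names, not any real chatbot's API): route requests through a capability check, and let the error message itself teach the user what's supported.

    # Hypothetical sketch: surface capability limits as explicit
    # errors instead of letting the model improvise an answer.
    CAPABILITIES = {
        "answer_question": lambda text: f"(answer for: {text})",
        "summarize_text": lambda text: f"(summary of: {text})",
    }

    def handle(request_type, text):
        handler = CAPABILITIES.get(request_type)
        if handler is None:
            # The refusal doubles as documentation: the user learns
            # what the bot can do by trying something it can't.
            return ("I can't do that. Things I can do: "
                    + ", ".join(sorted(CAPABILITIES)))
        return handler(text)

    print(handle("book_a_flight", "SFO to JFK"))  # helpful refusal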




I think good UI means a user can understand what is possible from the beginning, without learning through trial and error. That’s why AI chatbots are bad UI in my opinion.


That kind of UI makes sense when the interface has a specific purpose, and most interfaces do. (Consider a web form.)

Some interfaces are more open-ended than that: command lines, search engines, programming languages, chatbots. There needs to be enough varied functionality to justify the steeper learning curve.



