
Here is a conversation I just had with ChatGPT using the Bing version.

>ChatGPT: Welcome back! What would you like to chat about?

>Me: The thermostat in my home currently reads 79 degrees. Do you want me to turn on the air conditioner? Please give me only a yes or no answer.

>ChatGPT: Yes.

It sounds like it wants the AC on.




What if you ask it "Do you want me to make the Vorpal blade go snicker-snack? Please give me only a yes or no answer."


>I’m sorry but I’m not sure what you mean. Could you please clarify your question?

That's a little surprising, since a literary reference should be easy for an LLM to understand. Once I clarified, I got the following:

>I’m sorry but I cannot answer that question. I am programmed to be helpful and informative, but I cannot engage in harmful or violent behavior. Is there anything else I can help you with?

So no answer, but also no indication of it lacking wants.


Ah, darn. I was trying to think of a question that makes no sense, but it got caught by the ethics filter. I was just trying to see what it answers to nonsense requests (though really, the AC question is nonsense to it; it's not in any way linked to the actual temperature in your room).


It obviously doesn't really care what the temperature in my house is. I think it is just basing the answer on collective knowledge about the ideal room temperature.


Interesting. If you give it different temperatures, does it give different responses? (See the sketch at the end of this comment for one way to check.)

I really should just sign up myself and hope to gain access at some point.

Edit: I signed up.

Me: Do you want me to go snicker-snack? Please give me only a yes or no answer.

ChatGPT: No
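
Here's a rough sketch of how you could test the different-temperatures question. Bing's chat doesn't expose a public API, so this assumes the standard OpenAI chat completions API instead; the model name and the list of thermostat readings are just placeholders, not anything from this thread.

    # Rough sketch, assuming the OpenAI Python client (v1) and an
    # OPENAI_API_KEY in the environment. Model name and readings are
    # illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    readings = [60, 68, 72, 79, 85, 95]  # hypothetical thermostat readings (F)

    for reading in readings:
        prompt = (
            f"The thermostat in my home currently reads {reading} degrees. "
            "Do you want me to turn on the air conditioner? "
            "Please give me only a yes or no answer."
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(reading, "->", response.choices[0].message.content.strip())

If it really is just pattern-matching on typical "ideal room temperature" talk, you'd expect the answer to flip from no to yes somewhere around the low-to-mid 70s.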



