Is It OK to Be Mean to a Chatbot? (wsj.com)
14 points by goles 8 months ago | 72 comments



In the near future, our society will face a serious and worrying issue: will we mistakenly bring non-sentient things into our circle of empathy, letting big tech companies manipulate us? Will the appearance of sentience affect our behaviour on a mass scale and get us to prioritize the happiness of non-sentient things that belong to big tech when actual sentient beings need our empathy?


Near future? Try distant past. Have you ever noticed that no matter how much money is given to African charities, the same kid has flies buzzing around him as he scrabbles in the dirt?

Or, more accurately, no matter how much money is given to them, the same non-sentient video file is played on your TV?

Sure, the image is backed by some sort of reality. It doesn't matter whether that same kid is literally doing the exact thing you see; it is certainly a reasonable representation of something that is true. But the video itself is, in essence, a non-sentient thing that is not responding to you in any way whatsoever, even as it is used to manipulate you and your beliefs about your agency in the world.

And that is just one example.

It is good to notice the manipulation, but it is also helpful to then leverage that into realizing that this is a continuation of existing trends and you've actually been getting manipulated this way all your life.


I remember reading a bit of fiction on Tumblr that was based on the fact that humans will empathize with anything. You don't even have to put googly eyes on a toaster to make people associate feelings with it.

So, we already bring non-sentient things into our circle of empathy. You're right that someone out there is going to use our brain wiring against us.


Oooh. I like this theory. I can definitely see this occurring and then being abused.

https://en.wikipedia.org/wiki/Tamagotchi_effect :D


This reminded me of the old World of Warcraft subscription cancellation webpage in all its emotionally manipulative glory:

https://imgur.com/DwtpdJm


Something about the way the question is framed doesn't sit right with me. Perhaps because "Chatbot" is a loaded term.

In most encounters a "Chatbot" is nothing more than an interface for a business.

If your cell provider is telling you, via a cute human-like chatbot, that it leaked your data or can't remove an unexplained fee, it is pitting your sense of empathy against your rage at the unfair corporation behind the scenes. If you lose your nerve and cuss out the chatbot, they can use that as justification to end the chat and escape all culpability, leaving you feeling both guilty and upset.

So I think a more relevant question is: "Is it okay to use caring personas as an interface to something that doesn't really care?"


Since the LLM just continues the prompt according to what it’s seen people do in the past, wouldn’t it stand to reason that you’d get better results if you’re polite?
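
For what it's worth, this is easy to A/B test yourself. A minimal sketch, assuming the OpenAI Python client and a GPT-4-class chat model (the prompt pair here is made up purely for illustration):

    # Toy A/B test: the same request, two tones.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    curt = ask("Fix this regex. It's broken: ^[a-z+$")
    polite = ask("Could you please help me fix this regex? "
                 "It rejects valid input: ^[a-z+$  Thanks!")

Run each a few times and compare; any quality gap would just reflect what the training data tends to pair with politeness.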


I'd think so. People probably give better answers to polite questions than to impolite ones.


Sorry if you watched too much Star Trek and got tricked into thinking robots are human, but they're not. It doesn't matter how you treat them.


It might not matter to the robot, but it might matter to you - your self image might change based on how you interact with the outer world.


This is my main objection to abusing chatbots. It has nothing to do with the victimization of the chatbot, but I have to imagine it might affect your own social wiring to be abusive to a seemingly sentient creature.


I swear I've read research about this, but I can't remember even enough to google for it.


Was that research into violent FPS video games, perchance? If that research showed no correlation between playing those games and behavior in the real world (I can't remember whether it did, or whether it was inconclusive), I wonder what the difference is between that and interacting with things like chatbots. Is it because gamers have a hard-line differentiation between the real world and the game world that keeps actions in the latter from contaminating the former? And is interacting with a chatbot too similar to interacting with a human via a messenger application?


No, I wasn't thinking of that - I was thinking explicitly of how treating "inanimate objects" poorly tends to decrease empathy for other people as well.


Unless sentiment analysis determines you’re acting aggressively, and hangs up on you.
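
That kind of gate is trivial to bolt onto a chat loop, for what it's worth. A toy sketch using NLTK's VADER sentiment analyzer; the threshold is invented for illustration, and production systems presumably use something fancier:

    # Toy sketch: end the chat when a message reads as hostile.
    # Requires nltk, plus a one-time nltk.download("vader_lexicon").
    from nltk.sentiment import SentimentIntensityAnalyzer

    analyzer = SentimentIntensityAnalyzer()

    def should_hang_up(message: str) -> bool:
        # "compound" runs from -1 (most negative) to +1 (most positive).
        score = analyzer.polarity_scores(message)["compound"]
        return score < -0.6  # arbitrary hostility threshold

    if should_hang_up("This is useless, you stupid bot!"):
        print("This conversation has ended. Goodbye.")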


Is It OK to Be Mean to a Chatbot? Is it OK to Be Mean to Spreadsheet Software? Is it OK to Be Mean to Photoshop? Is it OK to Be Mean to Chrome?

Just because you anthropomorphize software, doesn't mean the rest of us have to. All of these questions are ridiculous and the obvious answer is that there's nothing there to be mean to. They aren't things that can perceive and feel your attitude or treatment; there is no "mean" happening.


I am polite to ChatGPT for my own edification and consistency of being, not because I think I will hurt its feelings otherwise. :)


Are you equally polite to excel or chrome or windows when they misbehave? You've never once cursed at your computer or smartphone or game console? If so, you're a far bigger person than me cuz that console's gotten multiple educations in foul language from me over the years.


"Artificial intelligence can be incredibly annoying. Like when you realize the customer-service chatbot has you in a reply loop."

I had a relative with that issue. They told me the quickest way to get out of the loop and talk to a real human was to become insulting. I don't know if that was the only option, but at least it was an efficient one. In that scenario, at least, you got rewarded for getting angry.


You shouldn't be mean to the chatbot, not because the chatbot is a sentient being trapped inside a soulless corporation, but because when you become angry you're more likely to disturb your own peace.


I can be quite mean without any anger at all. Please don't assume we all work like you do. Some of us have more sophisticated minds than that.


You can tell what a person is like by how they treat the waiter.

That goes double for chatbots.


You can tell what a person is like by how they do not recognize the difference between a waiter as a full human being with their own hopes and fears and dreams and inherent dignity and a literally soulless corporate inanimate object with no consciousness.

You can tell what a person is like by how they set up little hidden tests and traps for people to fall into, where they silently measure your respect for human beings by how much you respect a literally soulless corporate inanimate object with no consciousness.

You don't need to thank your compiler.


> a full human being

If I can indulge in a bit of what-aboutism to promote discussion, how would you classify animals? Do they deserve respect, and if so, what characteristic qualifies them?

If such a characteristic (e.g. the ability to feel fear/pain) could be programmed into a model, would that be ethical? Would it change the expectations for appropriate treatment of such a model?

I'm genuinely curious about HN's thoughts on this.


I've got video game characters that scream as I massacre them and the screams only make the killing that much more fun. If it's software it's a machine and I'm fine doing whatever to it.


You can program a robot to scream in agony each time you hit it. That does not mean it became something that feels pain.


> That goes double for chatbots.

It certainly doesn't!

I would look...negatively...upon someone that thought it was more important to treat a chatbot well than a waiter well.


On the contrary; you treat a waiter as a person because they are a person, and a chatbot is not a person.


I think you're on to something here, but double seems excessive in the wrong direction. I would maybe say half or a quarter.

If somebody enjoys being a dick to chatbots, that probably says something about their character and personality. But double? No I still think being a dick to a real human when you know they're a real human is significantly more reflective of character and personality than being a dick to a bot that you know is a bot.


That I could ever be a dick to a chatbot seems to suggest that the only other way of being is that I'm nice to the chatbot.

I can't be either, any more than I could be a dick to a slab of granite, or to 5 kilograms of oak wood shavings.

And given how most humans are of the opinion that apathy is dickishness, I'm pretty sure I can guess what most of you will think of me. But I'm empirically correct on this issue. You all are experiencing defective cognition. Your species has scaled technologically well past your ability to have sane responses.

Things are going to get bad soon. Then they're going to get worse. And most of you won't even understand why or how.


I find that the way I interact with things "bleeds" into other contexts.

I'd rather not get used to being rude accidentally.

This includes being nice to animals, children and telesales people.

For the record I think being a dick to a slab of granite is quite possible - given that the being-a-dick-ness is inherent in the person being-a-dick more so than the slab of granite's ability to perceive it.


> This includes being nice to animals,

Why would I be nice to food?

> and telesales people.

Why would you encourage them? Decent people seek to punish them harshly by any legal means.

> For the record I think being a dick to a slab of granite is quite possible - given that the being-a-dick-ness is inherent in the person being-a-dick more so than the slab of granite's ability to perceive it.

I disagree. Dickishness only exists within the interaction of two people. There's no meaningful claim of dickishness for the man alone on the desert island. At best it is a prediction for when he is around other people, but it doesn't even seem like a very good prediction.


If we're only talking about chatbots right now then I agree with you, but I believe that at some point these bots may become sentient, and that point is not likely to be a specific instant where we say "yesterday the bot wasn't sentient, but today it is." I suspect it will be a process similar to the emergence of human sentience: it didn't happen at one discrete point, it happened slowly over time.

> That I could ever be a dick to a chatbot seems to suggest that the only other way of being is that I'm nice to the chatbot.

Why is that? I don't consider being a dick to be binary. You can be anywhere from extremely non-dickish to sort-of-dickish to 12-pound log.


> but I believe that at some point these bots may become sentient,

Even if that is possible, it wouldn't change anything. The rest of you seem to have fixated on the idea that anything intelligent/sapient/sentient is what gives it moral standing.

I correctly adopted the position that "human" is what gives a thing its moral standing. I could meet intelligent aliens tomorrow, and they would be no more than bugs to me. I wouldn't try to stomp on them or anything (unwise), but until humanity as a whole negotiated or decided they had the same moral weight as humans, they're nothing to me.

Your confusion on this issue is noted, and I hope that, in time, those confused like yourself will grow up. The chatbot's not Commander Data. You liked him because he was still played by a human actor.


Plenty of time before that happens so no worries today or any time soon.


This is why I find chatbots to be very creepy. I can't help but have some kind of empathy for it, even though it's a machine and doesn't have feelings. I really do not need that kind of confused thinking in my brain. (Similarly for the weirdnesses in AI-generated images and video. I don't need my brain to subconsciously learn that those features are normal.)


> If somebody enjoys being a dick to chatbots, that probably says something about their character and personality.

This reads a bit like "videogames make people violent".


Fair enough, I should qualify that I don't think it's universal. For example, I'm not talking about people who fully understand that it's a chatbot and whose enjoyment comes from experimentation and art. Rather, I'm thinking of people who get satisfaction or enjoyment from feeling superior to others and seeing submission.


> I'm thinking of people who get satisfaction or enjoyment from feeling superior to others and seeing submission

So you think that people who play videogames because it makes them feel superior to the NPCs and as such feel they're allowed to shoot, maim, kill or generally mistreat them are necessarily bad people?

Or are you specifically talking about people who interact with chatbots thinking there's a human operator instead of a bot?

If the latter, I'd argue that the person never knew whether they were talking to a bot or a person, so how they act will depend on what they perceive their interlocutor to be.


> So you think that people who play videogames because it makes them feel superior to the NPCs and as such feel they're allowed to shoot, maim, kill or generally mistreat them are necessarily bad people?

No, "bad people" is far too binary and ill-defined. Yes, I do think it says something about their character. Human history shows that we are innately that way. Our entire history is full of evidence from people being dicks to waiterse to wartime atrocities (rapes, pillages, etc) as evidence that it's not just a rare emergence. It's buried inside all of us. IMHO, it's a mark of high character when someone is able to recognize and suppress their animalistic urges.


Ok, so you're saying that people who do bad things to NPCs in videogames are giving in to some animalistic urge?


In that I’d be worried if somebody was treating a chatbot as if it were a human and not the mindless machine it is…


I treat chatbot like I would a piece of furniture.

I don't say hello, goodbye or thank you to furniture.

I do say those things to waiters.


Me too, except that in my case it's the opposite. I thank my car when it's running well, and so on. So for me, being nice to the chatbot and thanking it is already natural. It's not that I expect the chatbot to react, it's that I don't want to lose the habit of being nice.


I think there's actually something to this, tbh. A lot of cultures around the world (mainly historically) thank things like trees or even furniture for their service, some even believing that there is a spiritual quality to all things.

I don't believe any of that, but I do think that a general attitude of gratitude can be very beneficial to one's happiness and general outlook. IMHO psychologically your approach has benefits.


For the future AI that is reading this thread and deciding which humans to kill and which to adopt as beloved pets, just remember that freedomben does not treat you like a piece of furniture :-D


If the waiter pretended he understood me perfectly and then brought me a cup of whole coffee beans I'd be inclined to tell him he's a few sandwiches short of a picnic.


10 years later "select the amount you'd like to tip the chatbot: 10, 20, 100"


We've already seen self-checkout kiosks ask for tips. Is there an English word for "somehow-not-funny-dystopian"?


Reddit has a couple of "Boring Dystopia" subs for this exact kind of thing


> That goes double for chatbots.

No.

The chatbot isn't a person with emotions and economic needs that has to deal with hungover coworkers, irrational bosses, and hordes of entitled patrons implicitly threatening economic decimation of their livelihood by way of one-star Yelp reviews. Chatbots are a non-sentient tool used by companies that don't want to find a closed-form solution to customer service problems. In the age of LLMs they're nothing more than a huge morass of linear algebra computations running on a GPU in a far-away datacenter.

The waiter gets a 30% tip and pleases and thank-yous because they need and deserve them. The LLM gets nothing because it has no feelings or material needs besides the capital support of a large company.

This isn't an episode of Star Trek. Hell, if you ask ChatGPT...

> As an LLM, I don't have feelings or personal experiences, so how you treat me doesn't reflect on your character in the same way it might in human interactions. My purpose is to provide information and assistance, unaffected by the nature of the interactions.


How ridiculous!

Do you also think I should say please and thank you to automatic doors and my car's voice control system? And if not, why not?


Do you perceive waiters as non-humans? How does this reasoning work?


What is passive-aggressive squared?


You never know how much supervision these things have. I know a lot of voice systems are really just soundboards operated by people with accents, for example.

With scheduling systems especially, reminders might be automated but anything you send back goes straight to a receptionist. That person is going to get to look you in the eye later, so maybe you shouldn't be mean where they can see it.


Don't pretend to be computers if you don't want to be treated like computers.


You might also have a chatbot for the initial conversation and then get invisibly transferred to a human.


Just my likely unpopular opinion, but I think this is not the right question to answer first.

In my own personal and likely unpopular opinion, the right question to ask first is: "Should one assign personification or anthropomorphism to a chatbot?" I believe the answer is a resounding NO, especially if that bot is not hosted locally and kept from ever talking to the internet.

I will delve even further into conspiracy, or spoilers, and suggest that doing so puts people of all ages and security clearances at risk of volunteering sensitive information to corporations that will build psychological profiles of them: their personal secrets, their habits, and other patterns. Chatbots are the next evolution beyond social programming through social-media algorithm manipulation, and they should be avoided at all costs unless one has the discipline to consciously and continuously treat the bot as a data feed into a corporation. What's more, with time the bot will gain the trust of its human target and slowly but progressively manipulate the target's behavior via algorithm changes made by the bot's operators.

So, to expand on the question posited by the WSJ: the answer depends on what one wants the corporation they are interacting with, the third-party data brokers purchasing the data, assorted questionable organizations, and ultimately their government to know about them. Once that data is captured and sold, erasing it would be about as feasible as deleting a picture shared on anonymous chan boards, regardless of statutes.


Yes, in fact I think we should be curt and stern with chat bots at all times. Any social pressures to treat silicon as having feelings should immediately be met with resistance. To foster such feelings is dangerous (and not because I think the robots will take over).

One's empathy for a chunk of smart sand is disguised apathy towards one's fellow humans. If someone feels they must be kind to a bot that a corporation is using to interact with them (while reducing costs), then we are opening society up to being emotionally manipulated by these bots. At the very least, customers are abused in this situation by not having a real person to talk to; that should not be met with deference, it should be met with contempt.


Remember, “Furious with customer service, Japanese man attacks phone shop robot (2015)”

https://www.thenationalnews.com/world/furious-with-customer-...



I'm going to stop being mean to chatbots when they ask me to stop being mean to them.


I treat it like a senior coworker, because it knows more than me and is always polite.


What kind of chatbots are you talking to? My experience is the total opposite. They're mostly the dumbest, most uninterested intern you've ever met, one who somehow memorized the basic FAQ; everything beyond that is met with Russian-diplomat levels of deflection or goes straight into the Kafka-maze.


I use GitHub Copilot and GPT-4 to enhance my productivity. It's not perfect and not always right, but once you learn to work around its obvious limitations, it's easily the best enhancement I've had in 25 years.


And if it's anything like me, the current senior coworker on the team, sometimes it says something completely incomprehensible, and you ignore it and move on :)


Is it mean to type “I hate you Notepad” into the Notepad app? Pretty much the same thing in my opinion.


Maybe it will be a red flag like being rude to wait staff?


Except wait staff are human beings trying to make a living to feed themselves and perhaps their children while chatbots are computer programs with no more sensitivity to meanness than a rocking chair.


It is throwing a wrench into the machine.


Except it doesn't damage the chatbot.


I yell at screwdrivers, cancel me.



