Interesting. Thank you for the detailed response. Paternalistic is indeed not the right term and it’s nice to worry about your fellow humans to this degree. I guess I am anti-social.
But I disagree with the premise that this is comparable to a dangerous booby trap. It’s not completely without danger, of course, but that’s a property it shares with kitchen utensils, cars, scissors, religious texts…
Are we going to preface the Bible with this kind of warning too? I’m all for it, but be consistent.
I also think that warnings like that are cheap ways to evade liability and do not actually solve the issue which is people being stupid.
Your example is not about people being deliberate and thinking deeply about their actions based on the information given. If people kill themselves after talking to a chatbot, I do not think a textual warning would have sufficed.
> it is that there are potential and serious harms that are easily avoided if they are told
Let’s agree to disagree.
Sorry for sounding obtuse. I enjoyed your input; it makes me think. I’m just a classic annoying neckbeard.
>> Are we going to preface the Bible with this kind of warning too? I’m all for it, but be consistent.
Outstanding idea!!
>> warnings like that are cheap ways to evade liability and do not actually solve the issue which is people being stupid.
Yes, the all-too-common generic disclaimers and warnings to cover asses for liability are usually so broad and vague as to be useless. And generally with ordinary well-tested consumer products, the problem is user stupidity.
However, I think this is different: it is an entirely new level of tech that has never before been seen by anyone. It can be amazingly useful if used with a good skeptical eye, but also truly dangerous if it is trusted too much. And you've seen the level of anthropomorphization that happens here on HN, so there is a real psychological tendency to trust it too much. So, I'd say tell 'em.
Anyway, fun conversation, hope you're having a great weekend!
True, it’s definitely very new and even I am prone to believing what it says sometimes. Then I have to remind myself that every letter can be complete and utter nonsense.
Now that I think of it, it’s like conversing with an expert salesman or politician. Very tiring, as they are skilled in framing the conversation. You have to double-check every word.
How cool would it be if a politician on TV had an overlay saying not to trust what he or she says, plus some real-time fact-checking. Fun times.
I hope you’re enjoying the weekend too, have a good one!