What difference does the exact phrasing make in the case of "ChatGPT lies"? I don't think we have to be concerned about its reputation as a person, so the important part is making sure that people don't hurt themselves or others. "Lie" is a simple word, easy to understand, and the consequences of understanding it in the most direct and literal way are exactly as desired. Whereas if you wax philosophical about lack of agency and so on, you lose most of the audience before you get to the point. This is not about intellectual rigor and satisfaction - it's about a very real and immediate public safety concern.



I understand that if you summarize a position you necessarily lose some accuracy, but if the summary is inaccurate and defamatory, that's not fair or helpful to the people involved. For example, “abortion is murder!” doesn't defame any particular person but it still casts people who had an abortion and doctors who perform them in a bad light. Similarly, I think “ChatGPT lies!” is unfair to at least the OpenAI developers, who are very open about the limitations of the tool they created.

The pearl-clutching around ChatGPT reminds me of the concerns around Wikipedia when it was new: teachers told students they couldn't trust it because anyone could edit the articles. And indeed, Wikipedia has vandals and biased editors, so you should take it with a grain of salt and never rely on Wikipedia alone as a fundamental source of truth. But is it fair to summarize these valid concerns as “Wikipedia lies!”? Would that have been fair to the Wikipedia developers and contributors, most of whom act in good faith? Would it be helpful to Wikipedia readers? I think the answers are no.

Like Wikipedia, to make effective use of (Chat)GPT you have to understand a bit about how it works, which also informs you of its limitations. It's a language model first and foremost, so it is more concerned with producing plausible-sounding answers than with checking facts. If you are concerned about people being too trusting of ChatGPT, educate them about the limitations of language models. I don't think telling people "ChatGPT lies" is what anyone should be doing.
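
To make the "plausible over factual" point concrete, here is a minimal sketch (my own toy example, using the small public GPT-2 checkpoint via the Hugging Face transformers library as a stand-in, not anything from ChatGPT itself). All the model ever computes is a probability distribution over the next token; there is no step anywhere that checks whether the continuation is true:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # Ask the model which token is *likely* to come next.
    prompt = "The capital of Australia is"
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores over the vocabulary
    probs = torch.softmax(logits, dim=-1)

    # The top candidates are the most plausible continuations,
    # which is not the same thing as the most correct ones.
    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(repr(tok.decode(i)), round(float(p), 3))

Everything layered on top (RLHF, chat tuning) reshapes which continuations are likely, but the underlying objective is still next-token plausibility, which is why confident-sounding wrong answers fall out so naturally.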


It's a similar case in principle, but the devil is in the details. GPT can be very convincing at being human-like. Not only that, it can "empathize" and so on - if anything, RLHF seems to have made this more common. When you add hallucinations to that, what you end up with is a perfect conman who doesn't even know it's a conman, but can nevertheless push people's emotional buttons to get them to do things they really shouldn't be doing.

Yes, eventually we'll all learn better. But that will take time, and detailed explanations will also take time and won't reach much of the necessary audience. And Bing is right here right now for everyone who is willing to use it. So I'm less concerned about OpenAI's reputation than about people following some hallucinated safety protocol because the bot told them that it's "100% certain" that it is the safe and proper way to do something dangerous.



