
I don't understand this take. These LLM-based AIs provide demonstrably incorrect answers to questions, they're being mass-marketed to the entire population, and the correct response to this state of affairs is "Don't use it if you don't know how"? As if that's going to stop millions of people from using it to unknowingly generate and propagate misinformation.



Isn't that what people said about Google Search 20 years ago: that people wouldn't know how to use it, that they would find junk information, and so on? They weren't entirely wrong, but it doesn't mean that web search isn't useful.


No, I don't recall anyone saying that. They mostly said "this is amazingly effective at finding relevant information compared to all other search engines." Google didn't invent the Web, so accusing it of being responsible for non-factual Web content would have been a strange thing to do. Bing/ChatGPT, on the other hand, is manufacturing novel non-factual content.


Can you share any source for the claim about what people said about Google Search?


That’s a good point. I don’t think anyone is denying that GPT will be useful, though. I’m more worried that, for commercial reasons and out of public laziness and ignorance, it’s going to get shoehorned into use cases it’s not meant for and create a lot of misinformation. So it's a similar problem to search, but amplified.


There are some real concerns about a technology like ChatGPT or Bing's version or whatever AI. However, a lot of the criticism is about the inaccuracy of the model's results. Saying "ChatGPT got this simple math wrong" isn't a very useful or meaningful criticism when the product isn't being marketed as a calculator or some oracle of truth. It's being marketed as an LLM that you can chat with.

If the majority of the criticism were about how it could be abused to spread misinformation, or to enable manipulation of people at scale, or similar, there would be less pushback on that criticism.

It's nonsensical to say that ChatGPT doesn't have value because it gets things wrong. What makes much more sense is to say that it could be leveraged to harm people, or to manipulate them in ways they cannot prevent. Personally, I find it more concerning that MS can embed high-value ad spots in responses through this integration, while farming very high-value advertising and digital-surveillance data from users.


> It's being marketed as an LLM that you can chat with.

... clearly not, right? It isn't just being marketed to those of us who understand what an "LLM" is. It is being marketed to a mainstream audience as "an artificial intelligence that can answer your questions". And often it can! But it also "hallucinates" totally made up BS, and people who are asking it arbitrary questions largely aren't going to have the discernment to tell when that is happening.



