Hacker News

I have never personally met any malicious actor who knowingly dumps unverified output straight from GPT. However, I have met people IRL who gave way too much authority to those quantized model weights and got genuinely confused when the generated text didn't agree with human-written technical information.

To them, chatgpt IS the verification.

I am not optimistic about the future. But perhaps some amazing people will deal with the error for the rest of us, like how most people don't go and worry about floating point error, and I'm just not smart enough to see what that would look like.
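To make the floating-point analogy concrete: the error is real and ever-present, but libraries and conventions (tolerance-based comparison, numerically stable algorithms) absorb it so most people never think about it. A minimal Python sketch of the kind of error everyone quietly lives with:

```python
import math

# Classic floating-point surprise: 0.1 and 0.2 have no exact binary
# representation, so their sum is not exactly 0.3.
total = 0.1 + 0.2
print(total == 0.3)          # False

# The "amazing people" solution most of us rely on without thinking:
# compare within a tolerance instead of exactly.
print(math.isclose(total, 0.3))  # True
```

The hope expressed above is that LLM error gets a similar treatment: tooling that bounds and absorbs it, so end users don't have to reason about it directly.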




Reminds me of the stories about people slavishly following Apple or Google Maps navigation while driving, despite obvious signs that the suggested route is bonkers, like, say, trying to take you across a runway[1].

[1]: https://www.huffpost.com/entry/apple-maps-bad_n_3990340



