“Everyone” who works in deep AI tech seems to constantly talk about the dangers. Either they’re aggrandizing themselves and their work, they’re playing into sci-fi fear for attention, or there is something the rest of us aren’t seeing.
I’m personally very skeptical that there are any real dangers today. If I’m wrong, I’d love to see evidence. Are foundation models, before fine-tuning, outputting horrific messages about destroying humanity?
To me, the biggest dangers come from a human listening to a hallucination and doing something dangerous, like unsafe food preparation or avoiding medical treatment. That seems distinct from a malicious LLM superintelligence.
These models reduce the marginal cost of producing plausible content to effectively zero. When combined with other societal and technological shifts, that makes them dangerous to a lot of things: healthy public discourse, a sense of shared reality, people’s jobs, and so on.
But I agree that it’s not at all clear how we get from ChatGPT to the fabled paperclip demon.
The text alone doesn’t do it, but add a generated, nearly perfect “spokesperson” uniquely crafted to a person’s own ideals and values, one that then sends you a video message carrying that marketing.
There are plenty of tools that are dangerous while still requiring a human to decide to use them in harmful ways. Remember, it’s not just bad people who do bad things.
That being said, I think we actually agree that AGI doomsday fears seem massively overblown. I just think the current stuff we have is dangerous already.