
> I don't think LLMs are particularly dangerous

“Everyone” who works in deep AI tech seems to constantly talk about the dangers. Either they’re aggrandizing themselves and their work, they’re playing into sci-fi fear for attention, or there is something the rest of us aren’t seeing.

I’m personally very skeptical that there are any real dangers today. If I’m wrong, I’d love to see evidence. Are foundation models, before fine-tuning, outputting horrific messages about destroying humanity?

To me, the biggest danger comes from a human listening to a hallucination and doing something harmful, like preparing food unsafely or avoiding medical treatment. This seems distinct from a malicious LLM superintelligence.




That's what Safe Superintelligence misses. Superintelligence isn't what's practically more dangerous. Super-stupidity is already here, and it's bad enough.


LLMs reduce the marginal cost of producing plausible content to effectively zero. When combined with other societal and technological shifts, that makes them dangerous to a lot of things: healthy public discourse, a sense of shared reality, people’s jobs, and so on.

But I agree that it’s not at all clear how we get from ChatGPT to the fabled paperclip demon.


We are forgetting the visual element.

The text alone doesn’t do it, but add a generated, nearly perfect “spokesperson” uniquely crafted to a person’s own ideals and values, who then sends you a video message with that marketing.

We will all be brainwashed zombies.


> They reduce the marginal cost of producing plausible content to effectively zero.

This is still "LLMs as a tool for bad people to do bad things" as opposed to "A(G)I is dangerous".

I find it hard to believe that the dangers everyone talks about are simply more propaganda.


There are plenty of tools that are dangerous while still requiring a human to decide to use them in harmful ways. Remember, it’s not just bad people who do bad things.

That being said, I think we actually agree that AGI doomsday fears seem massively overblown. I just think the current stuff we have is dangerous already.



