Hacker News new | past | comments | ask | show | jobs | submit login

Can someone outline how AI could actually harm us directly? I don’t believe for a second sci-fi novel nonsense about self-replicating robots that we can’t unplug. My Roomba can’t even do its very simple task without getting caught on the rug. I don’t know of any complicated computing cluster or machine that exists that wouldn’t implode without human intervention on an almost daily level.

If we are talking about AI stoking human fears and weaknesses to make them do awful things, then ok I can see that and am afraid we have been there for some time with our algorithms and AI journalism.




> Can someone outline how AI could actually harm us directly?

at best, maybe it adds a new level of sophistication to phishing attacks. That's all i can think of. Terminators walking the streets murdering grandma? I just don't see it.

what I think is most likely is a handful of companies trying to sell enterprise on ML which has been going on since forever. YouTubers making even funnier "Presidents discuss anime" vids and 4chan doing what 4chan does but faster.


You start by saying “show me examples of” and finish by saying yeah “this is as already a problem.” Not sure what point you’re trying to make, but I think you should also consider lab leaks in the sense of weaponized ai escaping or being used “off label” in ways that yield novel types of risk. Just because you cannot imagine future tech at present doesn’t indicate much.


Consider an AI as a personal assistant. The AI is in charge of filtering and sorting your email. It has as a priority to make your life more efficient. It decides that your life is more efficient if you don't see emails that upset you, so it deletes them. Now consider that you are in charge of something very important.

It doesn't take a whole lot of imagination to come up with scenarios like these.


I don't need to know how a chess computer will beat me to know that it will beat me. If the only way you will entertain x-risk is to have a specific scenario described to you that you personally find plausible, you will never see any risk coming that isn't just a variation of what you are already familiar with. Do not constrain yourself to considering only that which can be thought up by your limited imagination.


> Do not constrain yourself to considering only that which can be thought up by your limited imagination.

Don't let your limited imagination constrain you're ability to live in fear of what could be. Is that what you mean? So it's no longer sufficient to live in fear of everything, now you need to live in fear even when you can't think of anything to be afraid of. No thanks.


Instead of taking the most obtuse reading of my point, how about you try to engage intelligently? There are some manner of unmaterialized risk that we can anticipate through analysis and reasonable extrapolation. When that risk has high negative utility, we should rationally engage with its possibility and consider ways to mitigate it.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: