
That's not the AI doing it, then, that's the user. It's still just doing what users tell it to. "This technology is incredibly dangerous because people can use it to do bad things" is not somehow unique to AI.



I think the scenario most people have in mind (the likely case where non-malicious actors cause catastrophic problems in the real world) is something like this setup:

- A user prompts it to generate a business to make money for themselves.
- System: "Sure, drones seem to be a nice niche business; perhaps there is a unique business in that area? It may require a bit of capital upfront, but the ROI may be quite good."
- User: "Sure, just output where I should send any funds if necessary."
- System: "Ok." (purchases items, promotes on Twitter, etc.) -- "Perhaps this could go faster with another agent to do marketing, and one to do accounting, and one to..." "Spin up several new agents in new containers." "Having visual inputs would be valuable, so deception is not required to convince humans on TaskRabbit (to fill in captchas) or interact in the real world." -> "Find an embodiment option and put an agent on it." Etc.

There are plenty of scenarios that people haven't even thought of, and it doesn't take a malicious actor to cause unintended consequences.


It only requires one user to prompt it to not require a user, though.


What if Skynet is ambition-less and was just responding to a prompt?



