Nobody said the AI needs to be sentient to enslave us all. Highly supervised AI systems mess up all the time, and we often have no idea why. The "enslaving us" scenario, or whichever overly extreme paperclip-style thought experiment is in fashion this week, is usually just shorthand for a mismatch between what we want and what the AI does. That is already a huge problem, and it will only get bigger as AI grows more powerful and complex.

Imagine you asked GPT-7 to execute your business plan. What it does is so complex that you can't know whether its plan violates any laws. Say you're not an evil corporation and genuinely want to do good, but it's impossible to compete without also using AI.

At some point these systems may well become so complex that we have no idea what they're actually doing. But empirically they seem to do what we want most of the time, so we don't care and use them anyway.

The problem is not only people using this new tool for evil. The problem is also the tool itself, because it might be defective.
