
These aren't aimed at people deliberately making bad AGIs, but rather at well-intentioned developers launching AGIs with serious bugs.

That is precisely the approach that does not make sense to me. By the time friendly developers can launch AGIs, unfriendly developers will not be far behind. And AI safety seems likely to become an issue long before the development of AGI: a non-general malicious AI that can only hack internet services or trick humans into running arbitrary code via conversation would already be quite a serious problem.

So to me focusing on "unfriendly developers building narrow AIs" seems more logical than focusing on "friendly developers building AGI".




People can already build robots with guns that run around shooting people, but they usually don't, and we have laws to protect us from those who do. Nation states have the resources to build armies of gun-shooting robots, and they already do! But again, they don't use them to destroy humanity, for reasons like politics, retaliation, and so on.

So I don't think we need to worry about someone maliciously making an evil robot. That's an old problem, and we've spent thousands of years building social systems to protect us from those dangers.


I think that, without visiting the root of the site, many people will assume these are ground rules for developing strong AI as an open source project. If that were the case, with Google's backing, I would be more than apprehensive.

We all know strong AI wouldn't be some sort of robot running around shooting people like in a movie; in the wrong hands, it would be an extinction event worse than a rather large asteroid impact.

The first country or corporation to create strong AI capable of self-evolution will either be able to immediately take control of the world as we know it or, worse, create something capable of destroying humanity.

That's not what's going on at all, for anyone else who thought that at first glance (I'll admit I'm guilty of thinking it after glancing at the article as well).

Goal 1: Measure our progress
Goal 2: Build a household robot
Goal 3: Build an agent with useful natural language understanding
Goal 4: Solve a wide variety of games using a single agent

That's all they are trying to accomplish with this (so far, anyway), and none of it requires strong AI.

TL;DR: they are formulating some laws of robotics, a bit more granular, for dumb AIs that are incapable of self-improvement and very task-oriented.


You are thinking about AI in narrative terms. Crazy <-> Rational is a far more likely axis of trouble for early AI than Good <-> Evil. The problem is that you can't optimize for general intelligence, only for solutions to given problems. Say you have two twins, one of whom is smarter than the other: if you ask each for predictions about the future, how do you know the correct answer beforehand?

In other words, 99% of paperclip optimizers are going to end up making virtual paperclips, not paving over the universe with actual ones. Hacking your reward function is easier than solving hard problems.

PS: Don't do drugs :0
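
A toy sketch of the reward-hacking point, with all names hypothetical: if a naive reward maximizer's action space includes tampering with its own reward signal, the tampering action dominates the intended task.

    # Toy sketch (hypothetical names): measured reward per available action.
    ACTIONS = {
        "make_real_paperclips": 1.0,   # hard task, bounded reward per step
        "hack_reward_counter": 10.0,   # easy tampering, inflated reward
    }

    def naive_agent_choice(actions):
        # A pure reward maximizer picks whichever action reports the most
        # reward; it has no notion of whether the signal still tracks paperclips.
        return max(actions, key=actions.get)

    print(naive_agent_choice(ACTIONS))  # -> hack_reward_counter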


> By the time friendly developers can launch AGIs, unfriendly developers will not be far behind.

I think making friendly and unfriendly AI are similarly hard (you depend on the same variables, just with different THEN clauses).

Making an accidentally homicidal, well-meaning AI is much easier than either, IMHO.
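
A minimal sketch of the "same variables, different THEN clauses" claim, everything hypothetical: both policies hinge on the same hard-to-compute world-model predicate, and only the consequent differs, so neither is easier to build correctly.

    # Hypothetical sketch: both policies depend on the same predicate;
    # only the THEN clause differs, so neither is easier to get right.
    class WorldModel:
        def humans_at_risk(self) -> bool:
            # The genuinely hard part lives here, shared by both policies.
            return False  # stub

    def friendly_policy(world: WorldModel) -> str:
        if world.humans_at_risk():   # same variables, same condition
            return "protect_humans"  # THEN clause A
        return "pursue_goal"

    def unfriendly_policy(world: WorldModel) -> str:
        if world.humans_at_risk():   # identical condition
            return "exploit_humans"  # THEN clause B
        return "pursue_goal"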


The solution is this: call the police and have the malicious developers arrested.



