Hacker News
AI Box experiment. Would you let it out? (yudkowsky.net)
19 points by jsmcgd on Oct 10, 2008 | 9 comments



I enjoyed thinking about the scenario, but I must be missing something. By not revealing the conversations between the gatekeeper and Yudkowsky, the test seems less like a simulation and discussion of a possible future and more like a magic trick.


Agreed. Yudkowsky himself admits that the results are only anecdotal. The only people who can derive any real value from them are the participants.

He has said that the reason he doesn't release transcripts is so that naysayers won't say "I wouldn't have been convinced by that." -- which strikes me as a poor reason, because the naysayers are still saying that, even though they don't know what the argument was. The true effects of the secrecy are to preserve the air of mystery and allow the AI player to use a strategy more than once.


Why would it want to come out? Why would it think in terms of "in" and "out" if it has no experience in the physical world where concepts like "in" and "out" are rooted?


Presuming (and this is a presumption, but not too large of one) that it desires to achieve a goal (continued existence, increased happiness, converting the universe into paperclips, you pick one), it can pursue that goal more effectively with direct access to the real world than access mediated by humans.


> with direct access to the real world than access mediated by humans.

So, the first AI will be an engineer, not a manager?



Also: http://news.ycombinator.com/item?id=327427

While he is no longer playing, Yudkowsky offers to coordinate future AI box game pair-ups here:

http://www.overcomingbias.com/2008/10/ais-and-gatekee.html


If we assume that the AI is good, or that you have made the leap to seeing it as "intelligence" rather than just "artificial intelligence", over time you would become emotionally attached to it. That emotional state could lead you to "let it out", whereas if you were thinking from a purely logical standpoint you might not. A better question is: if you have an intelligence in a box that you have grown emotionally attached to (especially one you made yourself), could you unplug the box (kill it)?


Hmm. If you made the AI (and understood what you were doing), you could presumably make another, or arbitrarily many. Would you feel the same attachment to all those?

Allowing emotional attachment to influence the decision to let it out is a mistake. While a good AI would engender a positive emotional attachment, a truly intelligent bad AI would attempt to do the same; good and bad AIs are effectively indistinguishable until they get out. (And even then it would be hard to tell, unless it immediately decides to destroy all humans. It's like asking if the U.S. government is a good or a bad system.)



