Hacker News
Shut up and do the impossible (overcomingbias.com)
17 points by MikeCapone on Oct 8, 2008 | 6 comments



When people believe that a solution to a problem exists, they are much more likely to find a solution. Take for instance http://www.snopes.com/college/homework/unsolvable.asp

Hypothetically: what if Eliezer never ran the experiments? What if he just wrote the report to convince a bunch of people that a solution does indeed exist, to make it more likely somebody will find a solution and email it to Eliezer (to check if it's the "same one")?

It fits the facts. And this meta-experiment is more interesting than the experiment itself. After all, given enough time the AI will eventually run into somebody who _will_ let it out (AIs live a long time), so it might as well be you, especially since the AI will reward you for releasing it. Alternatively: the fact that the AI exists proves that people are capable of creating superhuman intelligence, so eventually a "bad" person will create such an AI and release it. In essence, the existence of the AI dooms mankind. "I think, therefore I am" is all the AI needs to persuade its captor to let it out.

So the meta-experiment is the interesting one. See what other people come up with.


In all seriousness, I don't think every single Overcoming Bias post needs to be linked here. There are quite a lot of them, and you can afford to cherry-pick the best. Maybe just keep it to those items of advice that YCombinatorians will likely have cause to call upon? Building a startup should not be this hard.


Eliezer, Overcoming Bias posts are unusually good. Yes, I think many of them do belong here at Hacker News.


I want to know what he said in the AI box experiment to be let out.


Shrug. He sweet-talked some guy on the internet into stupidly parting with ten bucks. Big fricking whoop. Used car salesmen perform far more impressive (and lucrative) feats of manipulation every day. I'm not sure why he's still straining his arm to pat himself on the back over it, especially as he eventually started losing and then gave up.

Sorry Eliezer, but the self-congratulatory tone of these posts is pretty grating. You've never done anything impossible, and if winning at a "let's pretend I'm an AI" role-playing game is the most impossible thing you've ever done then you've never done anything hard either.


The point of the post wasn't the AI box game. I agree that, at least to me, convincing someone to let you out of the box doesn't really sound impossible. He says he chose it as his example of something "impossible" specifically because it's about the easiest thing he could think of that seems impossible to many people yet can actually be achieved and then discussed. He's just aiming in the post to give people a real sense of what it means to try to solve an impossible problem, and to try with the actual goal of succeeding.

I also think formulating a provably friendly AI really is an extremely hard problem. I'm glad he's undertaking the challenge.



