> Yes, I know HPMOR is not your average fanfic (I've read at least two thirds of it), but just... take a moment to step back and admire the post, and the level of self-confidence required to write it. All I can say.
Consider that Harry in HPMOR is essentially a stand-in for a younger version of the author, and it becomes less surprising. He literally believes that he is an exceptional genius and that he is one of the only rational people in the world.
You can read his "autobiography" here: http://wiki.lesswrong.com/wiki/Yudkowsky%27s_coming_of_age
Additionally, he regards his work on AI as literally the most important thing in the world because he sincerely believes that it is the only possible way to save mankind.
I have some measure of respect for EY, and he clearly is very intelligent, but I think most people outside of his community would regard him as having a somewhat inflated ego.
Of course they would; you have to have pretty big ego problems to hate on him in the first place. Very few people in power will accept that they're not the smartest, most capable guy around. I'm not saying EY is a god who can do anything, but I trust his potential and competence when he actually starts working on a task more than I trust any random self-proclaimed professional in that field.
You cannot write all the POVs in HPMOR and still have 'giant ego' problems. He specifically addresses these issues (the scene where Harry learns to 'lose').
I think most people's problem with him is that they hate how brazen he is in taking chances. They might think 'who is this arrogant asshole who thinks he DESERVES to talk to JKR', but the thing is he doesn't BELIEVE he deserves anything. He knows the only price for asking is hate from this group of people, and the reward is far greater, so why NOT ask?
> I'm not saying EY is a god who can do anything, but I trust his potential and competence when he actually starts working on a task more than I trust any random self-proclaimed professional in that field.
What are some examples of substantial accomplishments that we can look to so as to justify this faith in his ability to get things done, and done well?
I think you're taking my criticism as much more severe than it actually is. I don't doubt there is a lot of optimization that can be done in the realm of (for example) startup investing - I've seen enough of how the business world works - and I expect that EY may indeed be able to make a difference there.
But to have more trust in EY than professionals and subject-matter experts in general seems rather absurd, and is what opens up EY to criticisms of supporting a minor personality cult.
> He specifically addresses these issues (the scene where Harry learns to 'lose').
In which he is intended to learn delayed gratification, not humility.
> He knows the only price for asking is hate from this group of people, and the reward is far greater, so why NOT ask?
I'm not saying there is anything wrong with asking. I hope that he does succeed, particularly in contacting JKR, maybe even in winning a Hugo award (I enjoy HPMOR, but I think it is overrated). I have no problem with him seeking contacts to become an investor, talking to people about city optimization, etc. I have a lot of sympathy for what he says about the attitude he gets, and how that reflects on the rest of his work, because he's famous for writing fanfiction.
But simply reading his writings provides ample evidence that EY, while genuine and sincere in his beliefs, and intelligent, has a rather inflated estimation of himself and his work, and knowing that puts OP's comment into clearer context.
Biggest populariser of the idea of existential risks, founder of the field of Friendly AI research, founder of MIRI, an organisation dedicated to its research, and author of a number of published articles on same. Better than most ever do, but if that's all, it's not enough given his ambitions.
Founder of a bunch of organizations that have done what exactly?
I like his writings, both fiction and not. At one point I was, I guess, kind of a fan, and I wanted to look up what progress he'd made toward his self-assigned goal of Friendly AI, but I couldn't find anything besides a few cute papers.
I was unimpressed. No doubt Eliezer is smart, but contrary to what he seems to think, there are hundreds of thousands of people in the world just as smart, though maybe in different ways. In the scheme of things, he's not that unique. I think Eliezer's ego would be appropriate for someone who had made some progress toward those goals. Presently it's a little cringey... but I still hope he surprises us.
> Biggest populariser of the idea of existential risks,
Hardly. Even EY points to science fiction as what inspired him in a lot of ways. Probably the biggest mainstream popularizer of the idea of existential risk these days is the History/Discovery Channel with the nonsense it puts out. Actually, you could probably just go with the movie/tv industry in general.
More scientifically, worries about asteroid impacts, supernova radiation, grey goo, etc. have been around for longer than EY has been alive, and those ideas were "popular" and in the mainstream consciousness in a way that EY and his ideas are not and probably never will be. EY and MIRI are unknowns outside of a very narrow field.
> founder of the field of Friendly AI research,
I am not really sure how much to credit him with this, but I suppose it is true that most AI research pre-EY consisted of trying to develop AI with discussions of "friendliness" being more informal.
> founder of MIRI, an organisation dedicated to its research,
An unknown.
> author of a number of published articles on same.
Articles with virtually nonexistent circulation outside MIRI and LessWrong. How many citations of EY's published articles exist outside of those communities? Being self-published is not exactly extraordinary.
Again, I don't have anything against EY. He's just simply not that significant of a figure. Maybe he will be in the future - he certainly thinks MIRI is the only organization worth donating money to because it is the only way to save mankind - but he isn't now. I would not be surprised if most AI people regarded him mostly as a crank. (I don't think that's so, but I think EY's circumstances make him somewhat antithetical to the mainstream scientific community.) To be sure, I haven't founded anything as successful as LessWrong, even, and I certainly haven't convinced anyone to pay me to think and formalize my ideas. By most measures EY is more successful than I am.
Sidenote: Don't google MIRI at work. The Machine Intelligence Research Institute is not the first result.
> you have to have pretty big ego problems to hate on him in the first place.
Why is that? This is equivalent to saying "He is rubber, you are glue, whatever you say bounces off him and sticks to you". His perceived public overconfidence is only harming him. He doesn't have to start self-deprecating, but toning it down a bit would reduce the number of people claiming he leads a cult or has delusions of grandeur. I don't believe that the fate of the world rests on his or his organization's shoulders, and I doubt that very many LessWrong users do either. So what does he gain by asserting that it does?
I am fine with him taking risks and I value his work, but that doesn't mean I have to value his level of confidence.