>LessWrong is a site devoted to the cult of the "singularity."

LessWrong is a discussion forum mostly devoted to psychology, philosophy, cognitive biases, etc. Artificial intelligence is a frequent topic of discussion, but it is hardly central. Saying that LessWrong is a hangout spot for "singularity cult members" (as you call them) is simply incorrect on multiple levels. The technological singularity is no more than a scientific hypothesis, and it's a bit dramatic to say it has cult members worshiping it and willing it into reality. In actuality, the singularity just has scientists and researchers observing and theorizing about its stepping stones and outcomes. Maybe you meant transhumanists rather than "singularity cult members", which I suppose makes more sense given your other statements.

>Eliezer Yudkowsky, is a secular humanist who is probably most well-known for his Harry Potter fanfiction

Yudkowsky is also a prominent researcher on a variety of artificial intelligence topics, work that advances the field. His primary focus is not on developing Strong AI (AGI) itself, but on the safety issues such a technology would pose.

>the most important thing we could possibly be doing right now is devoting all of our time developing a mathematical model of a "safe AI."

"friendly AI"* and I'm not sure what you're talking about when you say mathematical model, you should do more research it's mostly hypotheses and ideas for system transparency.

>But I tend to take everything I read there with a massive grain of salt.

Maybe you should visit LessWrong and read some articles about cognitive biases so you understand why someone saying "massive grain of salt" makes me want to kill innocent puppies.




> His primary focus is not on developing Strong AI (AGI) itself, but on the safety issues such a technology would pose.

That's absurd at worst, science fiction at best, akin to worrying about manned flight safety in the 1500s.


Are you really trying to deny that Google cars and other automated systems at least partially based on AI have safety issues? Even if we're talking about autonomous, "life-like" AI, there is a long list of interesting philosophical and legal questions to be asked. I can't say I find any of the statements here or in the article very appealing, but you shouldn't dismiss real safety/security issues just because you don't like the guy.


Are you really trying to assert that MIRI is addressing systems on the level of Google cars, in any serious technical manner? If so, can you point to examples?


No, I'm saying that AI has wider applications, and I was responding to the manned flight safety example. Also, I'm arguing that we shouldn't dismiss the guy's arguments just because he's an ass. Especially with regards to this article, we really don't need to resort to a straw man to refute what he wrote.


AI in the sense implied does not exist. Otherwise "would pose" would be "poses" in the sentence I quote.


> Yudkowsky is also a prominent researcher on a variety of artificial intelligence topics, work that advances the field. His primary focus is not on developing Strong AI (AGI) itself, but on the safety issues such a technology would pose.

Nice defense on the other points. But no, Eliezer Yudkowsky has no peer-reviewed publications, open-source code, or really anything else to point to which provides any independent assessment of his contribution to the field of AI.

He has a couple of blog posts and self-published white papers. Forgive me for being skeptical.


There is the abandoned Flare language project from long ago: http://flarelang.sourceforge.net/



