
Really? Why? What exactly is wrong with lesswrong? Please be specific.


Judging from this thread [0], they're seen here as a sort of cult. Funnily enough, they look at HN the same way.

[0] https://news.ycombinator.com/item?id=8053606


> Funnily enough, they look at HN the same way.

[citation needed]. This would be somewhat... surprising.


Singularity idiots?


Is there an argument to be made why artificial intelligence that surpasses human intelligence is not a problem for the human race?


Is there an argument to be made why $HELL is not a problem for the human race?


$HELL? Is that SHELL with a $? Is that Shell Oil and Gas? Is Shell Oil a problem for the human race? I suppose it could be, but your point escapes me.

Or maybe the $ is a typo and you mean the Christian notion of Hell? Is Hell a problem for the human race? Not likely, as even if it exists, it's only a problem in the afterlife. So again, your point escapes me.


Null hypothesis.


We already have a well-defined, rigorous AGI [0] that would have god-like intelligence. There are some approximations that played Pac-Man [1] without knowing anything specific about Pac-Man itself, i.e. it learned how to play by itself.

[0] http://wiki.lesswrong.com/wiki/AIXI

[1] http://www.youtube.com/watch?v=yfsMHtmGDKE
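For reference, AIXI's action selection can be sketched roughly as follows (this is my paraphrase of Hutter's definition, so check the source for the exact form): an expectimax over all programs q for a universal Turing machine U, weighted by algorithmic probability,

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\left[ r_k + \cdots + r_m \right]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

where the $a_i$, $o_i$, $r_i$ are actions, observations, and rewards, and $\ell(q)$ is the length of program $q$. The sum over all programs is what makes AIXI well-defined but incomputable, so actual agents can only approximate it.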


So? You can have a neural network that "learns" tic-tac-toe by itself in 100 lines of code. Does that prove anything?


AIXI is an artificial 'general' intelligence: it is not hard-coded to do a specific task. That neural network is useless at doing anything other than playing tic-tac-toe.


Actually, no: it's the neural network plus its trained weights, biases, and starting values that's good at playing tic-tac-toe. The neural network code itself is not "hard coded" for any specific task.

You could use the same neural network code with different inputs to do other things, like recognizing characters and all kinds of pattern matching / classification.
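To make that concrete, here's a minimal sketch (my own illustration, not from the thread, and using a single-layer perceptron rather than a tic-tac-toe network): the training code is completely task-agnostic, and feeding it different datasets (AND vs. OR) produces different learned behaviors, with only the weights differing.

```python
# Same learning code, different data -> different tasks.
# Only the learned weights encode the task, not the code.

def predict(w, b, x):
    # Step-function output of a single-layer perceptron.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(data, epochs=20, lr=1.0):
    # Classic perceptron learning rule; deterministic (weights start at zero).
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Two tasks for the same code: logical AND and logical OR.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
or_data  = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w_and, b_and = train(and_data)
w_or,  b_or  = train(or_data)

print([predict(w_and, b_and, x) for x, _ in and_data])  # [0, 0, 0, 1]
print([predict(w_or,  b_or,  x) for x, _ in or_data])   # [0, 1, 1, 1]
```

(A perceptron can't learn every function this way, famously not XOR, which is why real networks add hidden layers; but the point about code generality is the same.)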


If by "null hypothesis" you mean that artificial intelligence exceeding human intelligence is impossible, that is a bold claim. Since human intelligence is nothing more than a successful evolutionary algorithm implemented in three-dimensional cellular networks, it is inevitable that we'll eventually be able to copy and improve that algorithm. Then we'll have artificial intelligence that exceeds human intelligence. Then what?
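For readers unfamiliar with the term, here's a toy sketch of what an "evolutionary algorithm" means mechanically: a (1+1) evolutionary algorithm on the classic OneMax problem (evolve a bitstring toward all ones). It's my own illustration of the mutate-and-select loop, not a model of brains or of the commenter's argument.

```python
import random

# Toy (1+1) evolutionary algorithm on OneMax: evolve a bitstring
# toward all ones via random mutation plus elitist selection.

random.seed(42)
N = 16  # genome length (arbitrary choice for this sketch)

def fitness(genome):
    return sum(genome)  # number of 1-bits; the maximum is N

def mutate(genome):
    # Flip each bit independently with probability 1/N.
    return [bit ^ (random.random() < 1 / N) for bit in genome]

parent = [0] * N
for _ in range(10000):
    child = mutate(parent)
    # Elitist selection: keep the child only if it's at least as fit,
    # so fitness never decreases across generations.
    if fitness(child) >= fitness(parent):
        parent = child

print(fitness(parent))
```

With 10,000 generations this reaches the optimum of N with overwhelming probability; the interesting (and open) question in the thread is whether the analogy scales from bitstrings to minds.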


Probably the same argument for why human intelligence surpassing feline intelligence is not a problem for domestic housecats.


Seriously? Would you want to be neutered without your consent?


If it came to that I doubt I would have a choice. And if I didn't want to, I'd probably be wrong.


That's just because we happen to like housecats. The whales will have some legitimate gripes. I mean, they aren't even competing with us for anything.



