Right now? Not true to any extent, no.

But it's not exactly "blatantly incorrect assumptions"; it's more wishful thinking. (A "self-improving, exponentially more powerful" AI is the only current hope for an AGI - except that there is no realistic prospect of such an AI right now.)

Wishful? More like nightmare thinking - hence some people's desire to ensure that if it can happen, it doesn't.
