But it's not exactly "blatantly incorrect assumptions"; it's more like wishful thinking. (A "self-improving, exponentially more powerful" AI is currently the only hope for an AGI - except that there is, at present, no realistic prospect of such an AI.)