What I really want to read about is why otherwise intelligent people lazily default to pattern matching when changing facts suggest they should change their mind.
And I think the red zone is a century away. We have no understanding of how our brains work, let alone how to build devices as capable as we are at causal reasoning.
That's the misconception. AGI is very unlikely to happen in our lifetimes due to prohibitive physical limits. But you don't need AGI to disenfranchise 90% or more of humans.
I find that smarter people tend to broadly overestimate human potential. We are unreliable, tire easily, are often untrustworthy, make mistakes, are swayed by emotion, and are prone to illogic and irrationality. We're very suboptimal entropy-reducing machines relative to the purpose-built AI and robotics that will be possible within the next 20 years. It's a shame people aren't more alarmed about this.