People were on totally opposing sides on how to deal with the risk, not dissimilar to now (with the difference that the existential risk was/is actual, not hypothetical).
Sure, there are also some (allegedly credible) people opening their AI-optimist diatribes with statements of positive confidence like:
“Fortunately, I am here to bring the good news: AI will not destroy the world”
My issue is not with people who say “yes this is a serious question and we should navigate it thoughtfully.” My issue is with people who simply assert that we will get to a good outcome as an article of faith.
I just don't see the point in wasting much effort on a hypothetical risk when there are actual risks (incl. those from AI). Granted, a hypothetical existential risk is far easier to discuss than an actual existential risk is to deal with.
There is an endless list of hypothetical existential risks one could think of, so that is a road to nowhere.
Many items on that endless list of hypothetical x-risks don't have big-picture forces acting on them in quite the same way, e.g. a roughly infinite economic upside from getting within a hair's breadth of realizing the risk.
No, some risks are known to exist, while others just might exist. If you walk across a busy street without looking, there is a risk of being run over - nothing hypothetical about that risk. In contrast, I might fear the force of gravity suddenly disappearing, but that isn't an actual risk as far as we understand our reality.
Not sure where the infinite economic upside comes from - how does that work?