Of the concerns I have seen expressed, even by people working on AI safety, what stands out to me are two biases: first, Western sci-fi culture about robot uprisings; second, anthropomorphizing what a machine "mind" would be like. Obviously the two overlap. But most of the flaws in that kind of thinking come from assuming that an AGI would think like a human, or like any other kind of animal.
It won't. It is not alive. It has no "selfish genes." It cannot starve to death. It is not dead while turned off. What confuses people is that AI algorithms have finally been combined with enough compute power to provide a conversational interface that frequently seems more "alive" than some humans do. Educated humans do a lot of clever knowledge remixing, which is exactly what generative AI is good at.
It is not like you or any other human. If sci-fi robots just said, "Hey, chill, I'll do the dishes," that would have made for a dull movie. We are inventing conflicts in our own minds. And when those minds belong to AI safety researchers, they are barking up the wrong tree.