ChatGPT could already fool a lot of people. Unless you're actively probing to check whether it's an AI, you probably won't detect it over a chat session, especially with the right prompt giving it a background, context, and the right speech/chat patterns.
They are talking about physical robots. Replicants/Cylons, as referenced in the article. I agree with them.
Sci-fi style androids will definitely be feasible soon. Some of them will be capable of acting like humans, displaying convincing emotions, forming relationships, etc. However, they will definitely not be indistinguishable from humans at a physical level; not even remotely close.
Even AGI will not achieve that in our lifetimes, unless it first cures aging and disease, thereby extending our lifetimes indefinitely, which I also (sadly) regard as exceedingly unlikely.
AGI may only be a momentary transitional state. The assumption is that ASI follows AGI almost immediately: once the machine is self-aware and can improve itself, exponential intelligence takeoff follows.
Maybe in a text chat, but in a fluid one-on-one in-person conversation, or in a group conversation? Absolutely no chance, and no reason to believe it will get to this level, maybe ever.
I mean, have you tried ChatGPT's latest multimodal speech update? The voice generation is pretty damn human-like. Again, with the right prompt you could definitely fool someone for a while using voice, since that mode is quite conversational and includes the AI asking relevant, context-specific follow-up questions.
For in-person interaction, we would need to build full-fledged bodies. That probably isn't happening soon, but it's not insane to imagine a mechanical robot understructure covered by artificially generated meat (which we can make now) with a ChatGPT brain from 10 years in the future.