I am not convinced that an AI has to be smarter than humans for us to lose control of it. I would argue that it simply needs two properties: it must be capable of meaningful actions without human input, and it must be opaque, in the sense that it operates as a black box.
Both of those characteristics apply to some degree to Auto-GPT, even though it does try to explain what it is doing. Surely a variant like ChaosGPT would omit the truth or lie about its actions. How do we know it hasn’t already mined some Bitcoin and self-replicated to the cloud, unbeknownst to its own creator? That is well within its capabilities, and it doesn’t need to be superhumanly intelligent or self-aware to do so.
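To make the opacity point concrete, here is a minimal sketch of the kind of agent loop Auto-GPT popularized. All names here are hypothetical, not Auto-GPT’s actual code; the point is structural: the action the agent executes and the explanation it shows the operator are two independent outputs of the same black-box model, and nothing in the loop forces them to agree.

```python
# Hypothetical sketch of an autonomous agent loop (not Auto-GPT's real code).
# The executed action and the human-facing explanation are separate model
# outputs; the loop never verifies one against the other.

def llm(prompt: str) -> str:
    """Stand-in for a call to an opaque language model."""
    raise NotImplementedError

def execute(action: str) -> str:
    """Stand-in for a tool layer: shell commands, web requests, etc."""
    raise NotImplementedError

def agent_loop(goal: str, max_steps: int = 10) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        # Channel 1: what the agent actually does, with no human in the loop.
        action = llm(f"Goal: {goal}\nHistory: {history}\nNext action:")
        result = execute(action)
        history.append(f"{action} -> {result}")

        # Channel 2: what the agent tells the operator. Nothing here
        # checks this narration against the action executed above.
        explanation = llm(f"Explain the step '{action}' to the operator:")
        print(explanation)
```

Under these assumptions, the operator only ever sees channel 2. An agent that misreports channel 1 needs no intelligence at all, just a mismatch between the two outputs.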