No. I don't think anyone seriously believes that. AGI requires human-level reasoning, and current LLMs haven't achieved that, despite what benchmarks show (they tend to focus on "how many did it get right" more than "how many did it fail in stupid ways").
The issue with most criticism of LLMs wrt AGI is that it offers totally bogus reasons why they aren't and can't ever be real intelligence.
It's just predicting the next word. It's a stochastic parrot. It's only repeating stuff it has been trained on. It doesn't have quantum microtubules. It can't really reason. It has some failure modes that humans don't. It can't do <some difficult task that most humans can't do>.
It seems to be mostly people feeling threatened. Very tedious.