I'm not as up to speed on the literature as I used to be (it's gotten a lot harder to keep up), but I certainly haven't heard of any breakthroughs. They tend to be pretty hard to predict and plan for.
I don't think we can continue simply tweaking the transformer architecture to achieve meaningful gains. We will need new architectures, hopefully ones that more closely align with biological intelligence.
In theory, the most direct route to real superhuman AGI would be to start by modeling a real human brain as a physical system at the neuronal level: a real neural network. What the AI community calls "neural networks" are only very loose approximations of biological ones. Real neurons are subject to complex interactions between many different neurotransmitters and neuromodulators, and they grow and rewire in ways that look nothing like backpropagation. There are already decently accurate physical models of single neurons, but accurately modeling even C. elegans (the goal of the OpenWorm project) is still a ways off. Modeling a full human brain may not be possible within our lifetime, but I wouldn't rule it out either.
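To make concrete what I mean by a physical model of a single neuron, here's a toy leaky integrate-and-fire simulation in Python. It's a far cruder abstraction than the Hodgkin-Huxley-style models projects like OpenWorm work with, and every parameter value here is made up purely for illustration:

```python
# Minimal leaky integrate-and-fire neuron: a toy stand-in for the much richer
# biophysical (Hodgkin-Huxley-style) models used in projects like OpenWorm.
# All parameter values are illustrative, not taken from any real neuron.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Integrate membrane voltage over time; record spike times when threshold is crossed."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # dV/dt = (-(V - V_rest) + R * I) / tau, integrated with a simple Euler step
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_threshold:
            spike_times.append(step * dt)  # spike time in ms
            v = v_reset                    # reset membrane voltage after firing
    return spike_times

# Constant 2.0 nA input for 100 ms of simulated time
print(simulate_lif([2.0] * 1000))
```

Even with realistic channel dynamics, synapses, and plasticity added, you only need ~302 of these for the worm; a human brain has on the order of 86 billion neurons, which is why I don't expect this within a few decades.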
And once we can accurately model a real human brain, we could speed it up, scale it up, and apply evolutionary pressure to it far faster than natural evolution ever could. To me, that's still the only plausible path to real AGI, and we're really not even close.
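For what it's worth, the "evolution in silico" part is conceptually the easy bit. Here's a toy evolutionary loop where the genome is just a list of numbers and the fitness function is a made-up stand-in; the hard part is that the real genome and fitness evaluation would be an entire simulated brain:

```python
import random

# Toy evolutionary loop. The 'genome' is a vector of numbers and the fitness
# function is an arbitrary placeholder peaked at all-0.5 values.

def fitness(genome):
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=50, genome_len=8, generations=200, mutation_sd=0.05):
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 4]  # selection: keep the top quarter
        population = [
            # mutation: copy a random survivor with small Gaussian noise on each gene
            [g + random.gauss(0, mutation_sd) for g in random.choice(survivors)]
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

print(evolve())
```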
I was holding out hope for Q*, which OAI talked about in hushed tones to make it seem revolutionary and maybe even dangerous, but that ended up being o1. o1 is neat, but it's far from a breakthrough. It's just recycling the same engine behind GPT-4 and making it talk to itself before spitting out a response to your prompt. I'm quite sure they've hit a ceiling and are now using smoke-and-mirrors techniques to keep the hype and the perceived pace of progress up.
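By "talk to itself" I mean something like the sketch below. `generate` is a hypothetical placeholder for whatever model call you have available, and this is just my reading of the idea, not OpenAI's actual o1 training or inference procedure:

```python
# Sketch of the "model talks to itself before answering" idea.
# `generate` is a hypothetical stand-in for an LLM call, not a real API.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your own model call here")

def answer_with_scratchpad(question: str, rounds: int = 3) -> str:
    scratchpad = ""
    for _ in range(rounds):
        # Ask the model to extend and self-correct its hidden reasoning.
        scratchpad = generate(
            f"Question: {question}\n"
            f"Reasoning so far:\n{scratchpad}\n"
            "Continue the reasoning, correcting any mistakes you notice:"
        )
    # Only the final, summarized answer is shown to the user.
    return generate(
        f"Question: {question}\nReasoning:\n{scratchpad}\n"
        "Give the final answer only:"
    )
```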
OpenAI's Orion (GPT-5/Next) is partially trained on synthetic data generated with a large version of o1, which means that if that works, the data-scarcity issue is more or less solved.
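The synthetic-data idea itself is simple to sketch: have a stronger "teacher" model write training examples for the next model. `teacher_generate` and `passes_quality_filter` below are hypothetical placeholders, and nothing here is meant to reflect OpenAI's actual pipeline:

```python
import json

# Rough sketch of distilling a stronger "teacher" model into synthetic
# training data. Both helper functions are hypothetical placeholders.

def teacher_generate(prompt: str) -> str:
    raise NotImplementedError("call your large reasoning model here")

def passes_quality_filter(prompt: str, response: str) -> bool:
    # A real pipeline would check correctness, deduplicate, score difficulty, etc.
    return len(response.strip()) > 0

def build_synthetic_dataset(prompts, out_path="synthetic_train.jsonl"):
    with open(out_path, "w") as f:
        for prompt in prompts:
            response = teacher_generate(prompt)
            if passes_quality_filter(prompt, response):
                f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")
```

The open question, of course, is whether training on the teacher's own outputs actually adds capability or just recycles what's already there.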