When I step through the logic in my mind, it seems likely.
Let's take the leap of faith that we can improve our AIs to the point where they actually understand the code they're reading and can suggest improvements. Current LLMs can't do it, but perhaps another approach can. I don't think this is a big leap. It might take 10 years, it might take 100.
It's not unreasonable to think there is a lot of cruft in our current tech stacks, and a lot of optimization to be had, allowing for significant improvement. The AI can start looking at every source file, from the lowest driver all the way up to the UI.
It can also start looking at hardware designs and building chips dedicated to the functions it needs to perform.
You only need to be as smart as a human to achieve this. You can rely on a "quantitative" approach, because even human-level AI brains don't need to sleep, eat, or have lives of their own. They just work on your problems 24/7, and you can have as many as you can manufacture and power.
I actually think "qualitative" superiority is a little easier, because with a large enough database the AI has perfect recall and all of the world's data at its fingertips. No human can do that.
Let's say all of that happens. Is there any reason to believe that will lead to "a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ('explosion') in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence"?
Or is it more reasonable to suppose that 1) all those improvements might buy us a factor of 10 in efficiency, but won't produce an intelligence that can find another factor of 10, 2) each doubling of ability will take much more than a doubling of CPU cycles and RAM, and 3) growth will asymptotically approach some upper limit?
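To make point 2 concrete, here's a toy sketch in Python. The log-scaling assumption is mine, purely for illustration, not a measured law: if ability only grows logarithmically with compute, every extra point of ability costs exponentially more hardware, which would damp any feedback loop.

```python
# Toy illustration of point 2 (the scaling law here is an assumption,
# not a measurement): suppose ability = log2(compute). Then each +1 of
# ability costs 2x the compute, and doubling ability means squaring it.
for ability in (10, 20, 40):
    compute = 2 ** ability  # compute needed, in arbitrary units
    print(f"ability {ability:2d} -> compute {compute:.3e} units")

# ability 10 -> 1.024e+03, 20 -> 1.049e+06, 40 -> 1.100e+12:
# going from ability 20 to 40 costs about a millionfold more compute.
```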
A lot of tech improvements follow s-curves, or waves of s-curves, not exponentials. For a given problem domain you will still get s-curves, but overall intelligence/progress might be exponential in some aggregate sense.
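That "waves of s-curves" idea is easy to demonstrate. Here's a minimal sketch (the wave timings and ceilings are arbitrary assumptions, not data): stack logistic curves so each new wave takes off as the last one saturates, and the aggregate traces a roughly exponential envelope for as long as new waves keep arriving.

```python
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    """One s-curve: slow start, rapid middle, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Each wave arrives 5 time units after the last and has a 4x higher
# ceiling -- both numbers are arbitrary, chosen only for illustration.
waves = [(5 * i, 4 ** i) for i in range(6)]

for t in range(0, 30, 3):
    total = sum(logistic(t, mid, ceil) for mid, ceil in waves)
    print(f"t={t:2d}  progress={total:10.1f}  log10={math.log10(total):5.2f}")
```

The log10 column climbs roughly linearly, i.e. exponential aggregate growth, even though every individual wave flattens out. The catch is that the aggregate stalls the moment the supply of new s-curves does.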
Yeah, sure, there are physical limits, and things like mining, manufacturing, and logistics are really slow compared to just thinking. But that's just time and money. I don't see why it wouldn't happen: you build a machine that makes machines. First you have 1, then 2, then 4, then 8, then 16, and so on.
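That doubling sequence compounds fast. A toy calculation (the six-month doubling time and the billion-machine target are both arbitrary assumptions):

```python
# Self-replicating manufacturing with toy numbers: each machine builds
# one copy of itself every `doubling_time_months` (assumed), so the
# fleet doubles each cycle.
doubling_time_months = 6
target_fleet = 1_000_000_000  # arbitrary target

machines, months = 1, 0
while machines < target_fleet:
    machines *= 2
    months += doubling_time_months

print(f"{machines:,} machines after {months} months (~{months // 12} years)")
# 2**30 > 1e9, so ~30 doublings: at these numbers that's just 15 years
# for physical replication to reach planetary scale.
```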
Then, when the earth is used up, you look to space. Space travel is a lot easier when you don't need to keep a meat bag alive.