All existing QC approaches have two fundamental limitations: error rate and coherence time. You can decrease the error rate through error correction, but that requires adding gates and/or storage to replicate the QC state, which in turn decreases coherence time. I have not seen even a theoretical framework allowing both to be increased simultaneously.
> I have not seen even a theoretical framework allowing both to be increased simultaneously.
The threshold theorem [1], showing this can be done in principle, was proven more than a decade ago.
But you don't have to take the theory on faith anymore: there are experiments now! Last month the Google quantum computing team published an experiment [2] showing memory error rates (including decoherence) improving by a factor of two as a surface code was grown from distance 3 to distance 5, and by a factor of two again going from distance 5 to distance 7. The logical qubit's coherence time was longer than the coherence times of the physical qubits it was built from.
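The scaling described above can be sketched as a toy model: assume (as the experiment suggests) that the logical error rate per cycle halves every time the code distance grows by 2. The starting error rate and suppression factor here are illustrative assumptions, not numbers from the paper.

```python
LAMBDA = 2.0    # assumed error-suppression factor per distance-2 step
EPS_D3 = 3e-3   # hypothetical logical error rate per cycle at distance 3

def logical_error_rate(distance, eps3=EPS_D3, lam=LAMBDA):
    """Logical error rate per cycle at odd code distance d >= 3,
    assuming a constant suppression factor per distance step."""
    steps = (distance - 3) // 2
    return eps3 / lam**steps

for d in (3, 5, 7, 9, 11):
    print(f"d={d:2d}: {logical_error_rate(d):.2e}")
```

The point of the exponential suppression is that once the physical error rate is below threshold, each modest increase in distance buys a multiplicative improvement, so error rate and effective coherence time improve together rather than trading off.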
The thing about this bet is you don't have to pay until 2048, whereas I have to pay as soon as it happens; accounting for savings rates, the bet will cost about a dime a day.
In my experience we're really bad at figuring out when something more than about five years out will happen, so 10-20 years is much worse than a coin flip. That's partly because we don't know what second- and third-order problems could come up, but also because interest and investment can change rapidly. For example, if some other probabilistic method of factoring large integers (or even solving NP-complete problems!) were discovered, that would massively reduce funding for QC.
And yeah, there's the time value of money, inflation, repayment risk, and a zillion other things that vary the value of a long-term bet's payout, but at ~3-6%/yr they don't change it by more than a few factors of 2. The risk of having all the world's stored encrypted data decrypted after the fact makes even a minuscule chance that QCs can break RSA or other encryption too big to accept. Those scale by many factors of 10.
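The "few factors of 2" claim is easy to sanity-check: discount a payout due in 2048 back to today's money at 3-6%/yr. The ~25-year horizon and the rates are illustrative assumptions.

```python
def discount_factor(rate, years=25):
    """How many times smaller a payout `years` from now is in
    today's money, at a constant annual discount rate."""
    return (1 + rate) ** years

for rate in (0.03, 0.06):
    print(f"{rate:.0%}/yr over 25 yr: payout shrinks by "
          f"{discount_factor(rate):.1f}x")
```

At 3%/yr the payout shrinks by roughly 2x, and at 6%/yr by roughly 4x, i.e. between one and two factors of 2, which is tiny compared with the many-orders-of-magnitude stakes of retroactive decryption.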