All existing QC approaches have two fundamental limitations: error rate and coherence time. You can decrease the error rate through error correction, but that comes at the cost of adding gates and/or storage to redundantly encode the QC state, which in turn decreases coherence time. I have not seen even a theoretical framework allowing both to be improved simultaneously.
> I have not seen even a theoretical framework allowing both to be improved simultaneously.
The threshold theorem [1], showing this can be done in principle, was proven more than a decade ago.
But you don't have to believe the theory anymore, there are experiments now! Last month the Google quantum computing team published an experiment [2] showing memory error rates (including decoherence) getting twice as good as a surface code was grown from distance 3 to distance 5, and twice as good again going from distance 5 to distance 7. The logical qubit's coherence time was longer than the coherence times of the physical qubits it was built out of.
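To make the "twice as good per step" claim concrete, here's a minimal sketch of the standard surface-code scaling, where the logical error rate drops by a roughly constant factor Λ each time the distance grows by 2. The Λ ≈ 2 value is just the factor-of-two suppression described above, and the distance-3 starting error rate is an illustrative guess, not a number from the paper.

```python
# Rough sketch (not the paper's analysis): standard surface-code scaling,
#   eps(d) ~ eps(3) / Lambda ** ((d - 3) / 2)
# with Lambda ~ 2 matching the "twice as good per distance step" result.
# The distance-3 starting point is an illustrative assumption, not data.

eps_3 = 3e-3   # assumed logical error per cycle at d = 3
Lam = 2.0      # assumed suppression factor per distance step

for d in (3, 5, 7, 9, 11):
    eps_d = eps_3 / Lam ** ((d - 3) / 2)
    print(f"d = {d:2d}: ~{eps_d:.1e} logical errors per cycle")
```

Below threshold, each extra ring of physical qubits buys you another constant factor of suppression, which is exactly the "both get better at once" behavior the grandparent said had no theoretical framework.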
The thing about this bet is that you don't have to pay until 2048, whereas I have to pay as soon as it happens; accounting for savings rates, the bet will cost about a dime a day.
In my experience we're really bad at figuring out when something more than about 5 years out is going to happen, so 10-20 years out is much worse than a coin flip. That's partially because we don't know what second- and third-order problems could come up, but also because interest/investment could change rapidly. For example, if some other probabilistic method of factoring the products of large primes (or even solving NP-complete problems!) were discovered, that would massively reduce funding for QC.
And yeah, there's the time value of money, inflation, repayment risk, and a zillion other things that vary the value of a long-term bet's payout, but at ~3-6%/yr they don't affect it by more than a few factors of 2. The risk of having all the world's stored encrypted data decrypted after the fact makes even a minuscule risk that QCs can break RSA or other encryption too big to accept. Those scale by many factors of 10.
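A quick compounding check, assuming a roughly 25-year horizon to the 2048 payout (the horizon and rates are assumptions, just to show the "few factors of 2" claim):

```python
# Quick compounding check (assumes ~25 years to the 2048 payout):
# discounting at 3%/yr and 6%/yr changes the present value by roughly
# 2x and 4x respectively -- "a few factors of 2", not factors of 10.
for rate in (0.03, 0.06):
    print(f"{rate:.0%}/yr over 25 years: factor of {(1 + rate) ** 25:.1f}")
```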
The linked article says that they've achieved "the baseline necessary to perform error correction". With the stated error rate, roughly how many physical qubits would be required to produce one error-corrected logical qubit?
The threshold is where you transition from needing infinite qubits to make an error corrected logical qubit, to needing a mere finite number. So... somewhere between 1 and infinity (exclusive).
Actually, because "in theory there's no difference between theory and practice, but in practice there is", the number is probably still infinity. Like, if you look at Figure 4 of their paper [0], you can see that one of the three devices is well above threshold at 1.5% error. They need sufficient quality more consistently before a large system built out of the pieces they are benchmarking would be below threshold.
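To put a hedged number on the grandparent's question: below is a back-of-the-envelope sketch using the textbook surface-code scaling and a rotated-surface-code footprint of roughly 2d² − 1 physical qubits. The prefactor, physical error rate, threshold, and target logical error rate are all assumptions picked for illustration, not the paper's measured values.

```python
# Back-of-the-envelope: physical qubits per surface-code logical qubit,
# assuming the textbook scaling
#   eps_L ~ A * (p / p_th) ** ((d + 1) / 2)
# and a rotated-surface-code footprint of ~2*d**2 - 1 physical qubits.
# A, p, p_th and the target below are assumptions, not measured values.

A = 0.1         # prefactor (assumed)
p = 0.003       # physical error rate (assumed, somewhat below threshold)
p_th = 0.01     # threshold error rate (assumed ~1%)
target = 1e-12  # target logical error rate per cycle (assumed)

d = 3
while A * (p / p_th) ** ((d + 1) / 2) > target:
    d += 2      # surface-code distance stays odd
print(f"distance d = {d}, physical qubits ~ {2 * d * d - 1}")
```

With these made-up numbers it lands in the low thousands of physical qubits per logical qubit; push p up toward p_th and the count blows up toward infinity, which is the point above about needing the hardware below threshold consistently.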
So it can be as low as 3 if you're only concerned with some of the noise, but if you're trying to correct for both bit flips (|0⟩ exchanged for |1⟩ and vice versa) and phase flips (|0⟩ + |1⟩ exchanged for |0⟩ − |1⟩ and vice versa), then you need at least 5 physical qubits to create one logical qubit; see [wiki] for details.
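That 5-qubit minimum is the [[5,1,3]] "perfect" code. As a minimal illustration (the Pauli strings are the standard generators; the commutation check is just a toy), its four stabilizer generators are cyclic shifts of XZZXI and must pairwise commute, leaving 5 − 4 = 1 encoded logical qubit:

```python
# The [[5,1,3]] code: 5 physical qubits, 1 logical qubit, distance 3.
# Two Pauli strings commute iff they anticommute on an even number of
# positions (i.e. positions where both are non-identity and differ).
stabilizers = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def commute(a, b):
    anti = sum(1 for x, y in zip(a, b) if "I" not in (x, y) and x != y)
    return anti % 2 == 0

assert all(commute(a, b) for a in stabilizers for b in stabilizers)
print("4 commuting generators on 5 qubits -> 5 - 4 = 1 logical qubit")
```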