Re: AI, it's a long way off still. The big limitation for anything quantum is always going to be decoherence and T-times [0]. To do anything with ML, you need a whole circuit (more complex than Shor's) just to initialize the data on the quantum device, and the algorithms to do this are expensive (exponential in the general case) [1]. So you have to run a very costly data-initialization circuit, and only then can you start running your ML circuit. All of this has to finish within the machine's T-time limit; if you exceed that limit, the measured state of a qubit will have more to do with outside-world interactions than with your quantum gates.
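To make the timing budget concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it is a made-up placeholder rather than a spec for any real chip, and the linear cost assumed for data loading is on the optimistic side:

```python
# Back-of-the-envelope coherence budget. Every number below is a
# hypothetical placeholder, not a measured spec for any real chip.

T1_US = 100.0          # assumed coherence time, microseconds
GATE_TIME_US = 0.025   # assumed two-qubit gate time (~25 ns), in microseconds

def depth_budget(t1_us: float, gate_time_us: float, safety: float = 0.1) -> int:
    """Crude cap on sequential gate layers: keep total runtime to a small
    fraction of T1 so decoherence doesn't dominate the measurement."""
    return int(t1_us * safety / gate_time_us)

# Loading N classical values via amplitude encoding generally costs at
# least ~O(N) gates (often far worse), paid before the ML circuit runs.
n_values = 1024
loading_gates = n_values              # optimistic linear-cost assumption

budget = depth_budget(T1_US, GATE_TIME_US)
print(f"depth budget ~ {budget} layers, loading alone ~ {loading_gates} gates")
print("fits" if loading_gates < budget else "blows the budget before the ML circuit even starts")
```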
Google's Willow chip has T-times of about 60-100 µs. That's not an impressive figure on its own -- in 2022, IBM reported T-times of around 400 µs for their Eagle chip [2]. Google's angle here is error correction (EC).
The following portion from Google's announcement seems most important:
> With 105 qubits, Willow now has best-in-class performance across the two system benchmarks discussed above: quantum error correction and random circuit sampling. Such algorithmic benchmarks are the best way to measure overall chip performance. Other more specific performance metrics are also important; for example, our T1 times, which measure how long qubits can retain an excitation — the key quantum computational resource — are now approaching 100 µs (microseconds). This is an impressive ~5x improvement over our previous generation of chips.
Again, as they lead with, their focus here is on error correction. I'm not sure how their results compare to competitors', but it sounds like they consider that to be the biggest win of the project. The random circuit sampling (RCS) metric is interesting, but RCS has no known practical applications (though it is a common benchmark). Their T-times are an improvement over older Google chips, but not industry-leading.
I'm curious if EC can mitigate the sub-par decoherence times.
> I'm curious if EC can mitigate the sub-par decoherence times.
The main EC paper referenced in this blog post showed that the logical qubit lifetime using a distance-7 code (all 105 qubits) was double the lifetime of the physical qubits of the same machine.
I'm not sure how lifetime relates to decoherence time, but if that helps please let me know.
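That said, as I understand it, the usual way the two are connected on paper is the surface-code scaling model: the logical error rate per cycle drops by some factor Λ each time the code distance goes up by 2, and a longer-lived logical qubit is the flip side of that suppression. A toy sketch below; the Λ and prefactor values are illustrative assumptions, not the paper's reported numbers:

```python
# Toy model of surface-code error suppression. Lambda and C below are
# illustrative stand-ins, not the values reported in the paper.

def logical_error_per_cycle(C: float, Lambda: float, d: int) -> float:
    """Standard scaling model: eps_L ~ C / Lambda**((d + 1) / 2),
    i.e. each +2 in code distance d suppresses errors by another factor Lambda."""
    return C / Lambda ** ((d + 1) / 2)

C = 0.1        # assumed prefactor
Lambda = 2.0   # assumed suppression factor per distance-2 increment

for d in (3, 5, 7):
    # a rotated distance-d surface code needs roughly 2*d**2 physical qubits
    print(f"d={d}: ~{2 * d * d} physical qubits, "
          f"eps_L ~ {logical_error_per_cycle(C, Lambda, d):.4f}")
```

If that scaling holds as distance grows, the logical lifetime keeps stretching, which is why the EC result matters more than the raw physical T-times.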
That's very useful; I missed that when I read through the article.
If the logical qubit can have double the lifetime of any physical qubit, that's massive. Recall IBM's chips, with T-times of ~400 microseconds. Doubling that would change the order of magnitude.
It still won't be enough to do much in the near term - like other commenters say, this seems to be a proof of concept - but the concept is very promising.
The first company to get there and make their systems easy to use could see a run-up in value similar to NVIDIA's after ChatGPT. IBM seems to be the strongest in the space overall, for now.
I'm sorry if this is nitpicky, but your comment is hilarious to me - doubling something is doubling something; "changing the order of magnitude" would entail multiplying by 10.
Hahaha not at all, great catch. Sometimes my gray matter just totally craps out... like thinking of "changing order of magnitude" as "adding 1 extra digit".
Reminds me of the time my research director pulled me aside for defining CPU as "core processing unit" instead of "central processing unit" in a paper!
Wouldn’t those increased decoherence times need to be viewed in relation to the time it takes to execute a basic gate? If gate execution time also increases, it may offset the practical benefit of having less noisy logical qubits.
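To put rough numbers on what I mean, a toy comparison in Python; everything here is invented for illustration, not taken from any vendor:

```python
# Toy comparison of "usable depth": how many sequential two-qubit gates
# fit inside one coherence window. All numbers invented, not vendor specs.

chips = {
    "chip_A": {"t1_us": 100.0, "gate_ns": 30.0},    # shorter T1, fast gates
    "chip_B": {"t1_us": 400.0, "gate_ns": 500.0},   # longer T1, slow gates
}

for name, c in chips.items():
    ops_per_window = c["t1_us"] * 1000.0 / c["gate_ns"]
    print(f"{name}: ~{ops_per_window:.0f} sequential gates per coherence window")

# chip_A wins on usable depth (~3333 vs ~800) despite the "worse" T1,
# which is exactly the trade-off I'm asking about.
```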
[0]: https://www.science.org/doi/abs/10.1126/science.270.5242.163...
[1]: https://dl.acm.org/doi/abs/10.5555/3511065.3511068
[2]: https://www.ibm.com/quantum/blog/eagle-quantum-processor-per...