... and couldn't get the correct answers for LiH. Interesting that the article didn't mention this.
From the paper:
> For lithium hydride, LiH, we were not able to reproduce closely the ground state energy with the currently available hardware. When accounting for 3 orbitals and using a scaling factor of r = 4, we already had to use 1558 qubits, which is a large fraction of available qubits. To summarize: the investigated method in general works, but it might be difficult to apply it to larger systems.
Presumably because that’s just a matter of scale. The calculation didn’t turn out wrong, the machine just isn’t powerful enough to calculate it. All of this work is proof of concept for more powerful devices down the road, the point isn’t that it’s better than a classical computer right now.
Right, and the paper itself is very upfront about that. I don't have any objection to the research, which is likely very valuable, but I do find it strange that the article claims they solved two problems without mentioning that only one of the solutions was correct.
It would be great if the article said they solved one and made good progress on techniques for the second that will likely work on next generation hardware, but the article didn't say that. I don't see how it being a matter of scale makes this acceptable. It's like the U.S. national labs unveiling the world's first exaflop supercomputer, with a footnote indicating that in fact the computer is only 100 petaflops at the moment.
> It's like the U.S. national labs unveiling the world's first exaflop supercomputer, with a footnote indicating that in fact the computer is only 100 petaflops at the moment.
This is referring to the Summit supercomputer announcement a few months ago.
Measuring the speed of a supercomputer is difficult because several different factors control performance, and they affect different benchmarks very differently. The most common benchmark in use is LINPACK, which measures how long the computer takes to solve an appropriately large matrix equation Ax = b and then computes how many floating-point operations such a solve notionally requires. This has been criticized for various reasons, but it's what TOP500 measures.
Summit scored 143 PFLOPS on this metric. However, the press release called it an exascale supercomputer because it can issue a quintillion (10^18) operations per second, so 1 exaop. To most people in the industry, the goal of exascale meant 1 EFLOP on LINPACK, so it really does come across as saying "We built a 1 EFLOP computer (footnote: only 143 PFLOPS)."
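To make the metric concrete, here's a rough sketch of how a LINPACK-style score is derived (the function name and the example problem size are mine; 2/3·n³ + 2·n² is the standard notional operation count for solving a dense n×n system):

```python
def linpack_flops(n, seconds):
    """Notional floating-point operation rate for solving a dense n x n
    system Ax = b: LU factorization plus triangular solves, divided by
    the measured wall-clock time."""
    ops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return ops / seconds

# Illustrative only: a 10,000 x 10,000 solve finishing in 1 second
rate = linpack_flops(10_000, 1.0)
print(f"{rate / 1e9:.1f} GFLOPS")  # ~666.9 GFLOPS
```

The benchmark never counts actual hardware instructions; it divides this fixed notional count by the elapsed time, which is part of why "exaops" press claims and LINPACK EFLOPS can diverge so widely.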
There is no evidence that quantum annealing (what D-Wave does) is any better than classical computers.
There is a lot of evidence that quantum computers (the gate model) or, equivalently, quantum adiabatic computing is better than classical computing. All of it is based on a family of conjectures about the complexity classes P, BQP, and NP.
Scott Aaronson's blog is one of my go-to suggestions for a rigorous introduction to the topic.
For the record, the concept is called Quantum Supremacy [1]. So far, there is no demonstration of quantum supremacy, but it seems like it may just be a matter of time.
Quantum supremacy is not merely about quantum computers performing tasks better than classical computers. It's about quantum computers achieving a superpolynomial speedup over classical computers, such that classical computers can't feasibly perform the task in a reasonable amount of time for all inputs.
That's an important distinction, because achieving a fundamental asymptotic improvement is significantly more difficult than speeding up the same complexity class by some factor. If a quantum computer completes in O(2^n) a task that a classical computer requires O(10^n) to complete, you don't have quantum supremacy. If your quantum computer can accomplish in O(n^2) a task that your classical computer needs O(2^n) to perform, you've got supremacy.
Given that nuance, I wouldn't say it may be a matter of time before we demonstrate quantum supremacy. There is still an undercurrent of skepticism in the research community.
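To put numbers on that distinction, a toy back-of-the-envelope (the 10^9 operations-per-second machine and the choice of n are hypothetical, purely to show the scaling):

```python
# Assume, hypothetically, a machine executing 10^9 operations per second.
OPS_PER_SEC = 1e9
SECONDS_PER_YEAR = 3.15e7

def years(op_count):
    """Wall-clock years to execute op_count operations on the toy machine."""
    return op_count / OPS_PER_SEC / SECONDS_PER_YEAR

n = 100
print(f"10^n ops: {years(10**n):.2e} years")  # exponential: hopeless
print(f"2^n  ops: {years(2**n):.2e} years")   # smaller base, still hopeless
print(f"n^2  ops: {years(n**2):.2e} years")   # polynomial: effectively instant
```

Shrinking the base of the exponential leaves the problem infeasible at large n; only the drop to polynomial makes it tractable, which is the superpolynomial bar that supremacy sets.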
It’s related, but quantum supremacy is about asymptotic speedups rather than actual speed differences.
Additionally, it’s worth keeping in mind that D-Wave machines aren’t true quantum computers in the sense that they can’t perform Grover’s or Shor’s algorithms.
It’s not clear what “true quantum computer” means. There are many different types of quantum computers, and quantum annealing, what D-wave does, is one. It’s just the least interesting of the bunch...
Standard quantum chemical methods for this sort of problem would finish in a fraction of a second on a Raspberry Pi. Calculating the ground state energy of a tiny system like LiH was tractable way back in the 1960s. I'd need to see their actual numbers to determine when a conventional computer first reached their level of accuracy on LiH but I'm sure it is several decades back.
EDIT: according to the paper, the initial energy at each point was found using the Hartree-Fock method with a minimal STO-3G basis set. This is one of the simplest and oldest approaches to this sort of calculation on a conventional computer. For these starting calculations they used Psi4 [1] by way of OpenFermion [2]. For the H2 molecule, their additional DWave calculations improved the accuracy of the distance-energy curve over the baseline Hartree-Fock/STO-3G calculations. For LiH, there was no improvement (Figure 3). The total runtime of their approach was therefore that of the conventional approach plus an additional series of calculations that did not yield improvements in the case of LiH.
I appreciated the rudimentary presentation of the capabilities and limitations of the machine and calling out the connectivity of the qubits versus a universal quantum machine which would have full connectivity between all the bits.
I’d be curious: is there a simple formula for calculating the “effective universal qubits” of the D-Wave?
2,048 indeed sounds like a lot of qubits based on my extremely limited knowledge of quantum computing, but with only ~6k connections versus fully connected, which would be n(n-1)/2 ≈ 2 million, is it just a marketing gimmick?
Why is it useful to push the bit count so high if the connectivity is so limited?
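The arithmetic in the parent comment checks out; a quick sketch (the ~6,000 coupler figure is the one quoted above, not an exact spec):

```python
# Couplers needed to connect every pair of n qubits directly,
# versus the roughly 6,000 physical couplers quoted above.
n = 2048
full = n * (n - 1) // 2
print(full)                   # 2,096,128 -- about 2 million pairs
print(f"{full / 6000:.0f}x")  # hundreds of times the physical coupler count
```

So the physical graph covers well under 1% of all possible pairs, which is why embedding dense problems eats so many physical qubits per logical variable.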
It's important to note here that the current gate-based quantum processors also suffer from the connectivity problems. It's not just annealers suffering.
In those architectures, the limited connectivity enforces usage of SWAP gates, which increases the circuit depth.
Circuit depth, with imperfect qubits and gates, is currently the limiting factor: we don't have practical error correction for chips of today's size. Hence one can only perform a certain number of operations before the computation decoheres and becomes useless.
I don't know of a good way to compare their real-valued gates to complex gates; it's probably a scaling factor due to encoding those gates as gadgets.
As for full connectivity, this is one benefit over the gate model: they can simulate fully connected logical qubits, on the order of 2·sqrt(Q/2), or 64, in their existing hardware.
As for the high bit count: sparse, structured problems can make very good use of the existing connectivity. Simulations of a cubic lattice are nearly competitive with modern classical hardware, for example. IIRC they can factor 16-bit numbers, too --these problems are "quasiplanar" and have relatively low connectivity requirements.
Full connectivity actually brings some huge engineering challenges; D-Wave's strategy is more "this is what's actually possible today" and less "we're going to make universal quantum computers, don't ask us about error correction or crosstalk or calibration of large-scale microwave circuits."
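For concreteness, the clique-embedding rule of thumb mentioned above works out like this (the function name is mine, and 2·sqrt(Q/2) is the approximation from the comment, not an exact spec; real embeddable clique sizes depend on the working graph after yield losses):

```python
import math

def logical_clique_qubits(physical_qubits):
    """Approximate size of the largest fully connected (clique) problem
    embeddable in a Chimera-style graph with the given physical qubit count,
    using the 2 * sqrt(Q / 2) rule of thumb quoted above."""
    return int(2 * math.sqrt(physical_qubits / 2))

print(logical_clique_qubits(2048))  # 64 fully connected logical qubits
```

That 2,048-to-64 ratio is the practical cost of the sparse connectivity discussed upthread: each logical variable is a chain of many physical qubits.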
"Quantum tech" is regrettably much too overloaded a term.
The "quantum annealers" that D-Wave sells are not known to be more powerful than classical computers. For the moment, at best, they are interesting analog computers.
Quantum computers (either the circuit model or equivalently the quantum adiabatic computer model) are conjectured to be much more powerful than classical computers, but they are quite a bit different from D-Wave's quantum annealers (for starters, they are supposed to be able to keep all their qubits in a pure entangled state, which D-Wave definitely can not do). There are various experimental hardwares that are able to keep a handful of qubits entangled, but we will need thousands (if not millions) before being able to do anything useful with them.
If I understand correctly (which is a big if, given it is quantum mechanics), any quantum algorithm can be simulated on a non-quantum computer, but doing so incurs an exponential penalty.
I think the performance of the actual device is so limited it’s like running an iPhone simulator to debug your software.
But if the expected performance of the quantum machine is theoretically going to increase exponentially every X months for the next 2 decades, combined with the theory that certain problems shift from exponential to polynomial time solutions, yes, eventually the “debugger” will not be useful to actually try running your solver.
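The exponential penalty is easy to quantify for the most common approach, full statevector simulation (the helper below is a sketch; 16 bytes per amplitude assumes double-precision complex numbers):

```python
def statevector_bytes(n_qubits):
    """Memory to store the full state of n qubits: 2^n complex amplitudes
    at 16 bytes each (double-precision real + imaginary parts)."""
    return (2 ** n_qubits) * 16

print(statevector_bytes(30) // 2**30, "GiB")  # 16 GiB: fits on a laptop
print(statevector_bytes(50) // 2**50, "PiB")  # 16 PiB: beyond any single machine
```

Memory doubles with every added qubit, which is why the "debugger" analogy above breaks down somewhere around 50 qubits even on the largest classical clusters.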
When people talk about "Quantum Computers" that can factor large numbers, they are referring to a Universal Gate Quantum Computer. As of today, the largest Universal Gate Quantum Computer has 72 qubits.
The DWave Quantum Annealers are essentially special-purpose devices that perform Quantum Annealing. What they refer to as 'Qubits' are very different from the entangled Qubits of a Universal Quantum Computer. To the best of my knowledge (and the paper explicitly distinguishes between the two types), it has yet to be demonstrated whether Quantum Annealing is equivalent to a Universal Gate Quantum Computer (it is generally suspected not to be), whether it sits in its own complexity class, or even whether it provides any complexity speedup over classical computers.
There isn’t a universal quantum computer in the sense that there is a universal Turing machine for classical computation. All quantum computers are special purpose circuits, not general logic machines.
1) Lithium Hydride? That's battery chemistry right there.
2) Marketing buzz from being seen as a forward thinking research company instead of a place that engineers their way out of costly regulatory compliance.
Given that this came from their SF group, it seems like a low-cost high-buzz exercise, and, if it had worked out, they might have some interesting methodologies for chemistry/structural improvement.
They probably sold it to management as exploring future simulation-and-optimization methodologies.
I mean, right now, the D-Wave computers seem too weak to be worth the expense compared to classical computers. So, it seems unreasonable for them to expect near-term practicality from this sort of investigation.
But in principle, if D-Wave systems greatly improve to the point that they're competitive with classical optimizers, then it'd be good for VW engineers to be able to leverage them to do stuff like, in this case, predict chemical properties.
This seems to be how the article sells it:
> “Our present work was a first field study of quantum chemistry problems on quantum annealing devices,” he says. “Our goal was to get a feeling for the bottlenecks of the problem. This in the end helps [us] to understand the underlying problems, and find new solutions or suitable subproblems.”
> none of the executives in charge of the fraud is in jail or financially punished.
This isn't relevant to the article; it's just an ad-hominem attack, and factually incorrect at that.
The very wiki page you cite says that CEO Winterkorn is charged with fraud, the CEO of Audi has been arrested, and 6 executives in the US are presently under charges.
They do.
As an anecdote, the professor who formerly held the "Machine Learning" course at TU Munich left for VW's AI research group (https://argmax.ai) a few years ago. Whether that qualifies as "science" may be debatable, but the larger players in the automotive space are definitely invested in this kind of research.
From the article: His group’s paper runs down a wish list of quantum chemistry simulations that sufficiently robust quantum computation should be able to tackle. Such problems include designing next-generation batteries, optimizing solar cells via detailed study of photosynthesis, and faithfully simulating complex molecules without resorting to approximations that conventional computers must rely on to make the simulations tractable.
Quantum computing is also of interest to self-driving cars. I know that at least some other big car companies do quantum computing work for that reason. I wouldn't be surprised if this is a test of sorts.
They do. These days, battery chemistry is probably very interesting to them. Earlier, it would have been problems such as optimizing combustion chamber shape, etc.
Wouldn't these potential customers of D-Wave be better off just buying an HPC cluster with lots of CPUs, GPUs, and FPGAs? Probably more opportunities for hardware reuse, too.
I kinda doubt anyone's going to really criticize this one.
I mean, the controversy about D-Wave (if I recall correctly) was largely related to claims about it having achieved [quantum advantage over classical computing methods](https://en.wikipedia.org/wiki/Quantum_supremacy), along with misunderstandings about what kind of "quantum computer" it is.
However, it's generally accepted that D-Wave is a working computer that uses quantum models; that fact's uncontroversial.
In this paper, they reported using that uncontroversial ability to perform optimization problems to optimize some physics problems. And it worked basically as-expected.
Aaronson is unfortunately in a sunk-cost position when it comes to D-Wave - he's been so against them for so long that even if they start to produce good results his bias is going to make it difficult for him to fairly evaluate them, especially if it means reaching different conclusions about their past work than he did previously.
Probably better to look to other commentators on this one, at least until he has enough time to emotionally process the situation and come around to it. It's not so easy admitting you're wrong in the shtetl.