D-Wave: Truth finally starts to emerge (scottaaronson.com)
267 points by YAYERKA on May 16, 2013 | 92 comments



I like that he addresses his main fear: that if D-Wave does not succeed after riding the marketing hype machine, a "QC Winter" would likely follow, not unlike the "AI Winter" (http://en.wikipedia.org/wiki/AI_winter) that occurred in the late '80s. The AI Winter was caused by a huge number of overhyped claims and promises by the companies involved, which were never (and could never be) met, leading to the eventual collapse of the whole industry; it is only now starting to see a practical recovery with the resurgence of machine learning.

D-Wave might be the belle of the ball today, but once the crowd turns, it could ruin an entire industry for a very long time. That's why skepticism is important when it comes to their claims.


AI had huge successes through the '80s and early '90s despite the two "AI Winters". However, people generally stop calling something AI once it starts to work; consider path-finding, which is sometimes called AI and sometimes not. It's true there were shifts in research funding (1974–80 and 1987–93), but plenty of things that were under the AI umbrella had moved on to their own separate fields, and a lot of crap that was never going to work was cut.


There's also the fact that some "AI" consisted of redoing - poorly, out of ignorance - what had been done previously in fields such as operations research, optimal control, computer aided design, etc.


AI as a whole did not suffer as much as Neural Nets research did.


For the ones that don't want to read the whole post, the Truth is that the 3,600x speedup claimed in the article "Commercial quantum computer leaves PC in the dust" [1] (HN discussion: [2]) isn't worth much.

Quoting Scott Aaronson's post: As I said above, at the time McGeoch and Wang’s paper was released to the media [...] the “highly tuned implementation” of simulated annealing that they ask for had already been written and tested, and the result was that it outperformed the D-Wave machine on all instance sizes tested. In other words, their comparison to CPLEX had already been superseded by a much more informative comparison—one that gave the “opposite” result—before it ever became public. For obvious reasons, most press reports have simply ignored this fact.

[1] http://www.newscientist.com/article/dn23519-commercial-quant...

[2] https://news.ycombinator.com/item?id=5697619


Geordie's (D-Wave's CTO) rebuttal [1]:

"The majority of that post is simply factually incorrect.

I used to find this stuff vaguely amusing in an irritating kind of way. Now I just find it boring and I wonder why anyone listens to a guy who’s been wrong exactly 100% of the time about everything. Update your priors, people!!

If you want to know what’s really going on, listen to the real experts. Like Hartmut. [2]"

[1] http://dwave.wordpress.com/2013/05/08/first-ever-head-to-hea...

[2] http://googleresearch.blogspot.ca/2013/05/launching-quantum-...


[1] contains no explanation of how Aaronson is 100% wrong. It is pure FUD. He won't actually argue points, because that would allow Aaronson to rebut his rebuttal.

[2] has little information: no numbers, and no comparisons to other techniques. All it says is "Quantum computing is cool! And we're working on NP-hard problems." It does say that they have a quantum computer from D-Wave, but they make no claims as to its actual power.

Adding emphasis to the main objective from [2]: "to study how quantum computing might advance machine learning." They're still researching things. [1] is a clear attempt to leverage the respectability of Google without actually addressing Aaronson's points.


How Aaronson was 100% wrong before:

http://nextbigfuture.com/2013/05/aaronson-intuition-about-wh...

http://www.scottaaronson.com/blog/?p=306

"Even if D-Wave managed to build (say) a coherent 1,024-qubit machine satisfying all of its design specs, it’s not obvious it would outperform a classical computer on any problem of practical interest. This is true both because of the inherent limitations of the adiabatic algorithm, and because of specific concerns about the Ising spin graph problem. On the other hand, it’s also not obvious that such a machine wouldn’t outperform a classical computer on some practical problems. The experiment would be an interesting one! Of course, this uncertainty — combined with the more immediate uncertainties about whether D-Wave can build such a machine at all, and indeed, about whether they can even produce two-qubit entanglement — also means that any talk of “lining up customers” is comically premature."

* D-Wave built the machines
* Aaronson concedes that D-Wave has achieved entanglement with a quantum annealing system for its full 512 qubits
* There were two big sales (Lockheed, Google)

My own prediction from 2006 http://longbets.org/266/

There will be a quantum computer with over 100 qubits of processing capability sold either as a hardware system or whose use is made available as a commercial service by Dec 31, 2010.

A 128-qubit system was sold in 2010. Quantum entangled annealing was demonstrated in the USC paper on the 128-qubit system that was sold.


Also, the funny part is that there are many nice ML algorithms that do not need heavy optimisation, or do not need optimisation at all (see random ferns, for instance). So this might just be hype for hype's sake.


The 3600x speedup is correct. You can't compare the 'exact' solution of the D-Wave problem on the D-Wave machine with a probabilistic algorithm like simulated annealing. Sure, in practice you're using the approximate algorithms, with their comparable speedup.

I think the "skeptics" will bash D-Wave until, but probably after their systems are running circles around traditional computing.


Actually the D-Wave machine is also probabilistic.


Probably the biggest point to take away from the article is this:

> For years, I tirelessly repeated that D-Wave hadn’t even provided evidence that its qubits were entangled—and that, while you can have entanglement with no quantum speedup, you can’t possibly have a quantum speedup without at least the capacity to generate entanglement. Now, I’d say, D-Wave finally has cleared the evidence-for-entanglement bar—and, while they’re not the first to do so with superconducting qubits, they’re certainly the first to do so with so many superconducting qubits.

In other words, this is the first actual evidence that D-Wave even has the potential for quantum speedup.


I love this idea of having a computing system that might not work as advertised but still produces the right results in a repeatable manner. When the ESA had a computing system that did not work as advertised, they blew up the GNP of an African country and went straight into the textbooks.


That really confuses me. I haven't been following the story, but how can there possibly be questions over whether the thing is doing what they say it is? Are the results that ill-defined?


D-Wave sells you a "black box" that will solve certain problems. They claim (or at least, used to claim) it's using quantum computing to do so. Without knowing about the internals, there's no way to know if that's true.


They are playing with statistics and Monte Carlo stuff. Somehow it would work in rocket science too: you just have to try a lot of designs, and after a while a significant portion of them will reach the moon, but they deemed the convergence speed too low, or the cost too high, I don't remember which.

edit: a more serious answer is in the article: entanglement or not, the algorithm naturally converges towards the solution. And they did not even check convergence speed (I think the quantum version is meant to be faster); they checked some statistical criterion that differs between the quantum version and the plain version.


"you just have to try a lot of designs"

That's exactly what Wernher von Braun did with his early rocket designs. He (intelligently and quickly) iterated, keeping the parts that worked and getting rid of the parts that didn't.


Yeah, actually trying to be a smart rocket scientist can be seen as a heuristic to make the stochastic process of finding a working rocket design converge faster.


> When the ESA had a computing system ..

That sounds interesting. Any link?



But there is little to no evidence about whether adding more qubits scales linearly, exponentially, or, in the worst case, at all; and what we have is a probability of entanglement across a group of qubits.

Maybe


It has the potential for a "quantum speedup" in the way that D-Wave's marketing defines that term. Just keep in mind that it has no resemblance to the usual meaning in a quantum computing context.

The D-Wave is not a quantum computer (as the term is normally used), and was never meant to be one.


It seems to me that all the hyperbole is about raised expectations. Hell, the fact they can even get something like this to work, and are able to determine (somewhat) that it is working the way they intended, is enough to get me excited. I don't care if the QC they built performs faster or slower than a classical algorithm, because that stuff will simply come with time as we understand more about how QC works. With classical computers, we had several decades of knowledge about how electromagnetism worked - it was easily observable, and applying those principles to a complex switching system (already implemented by some mechanical computers at the time) was relatively straightforward, at least from a physical standpoint. With QC, it seems to me (I don't really know more about it than anyone else here) that we are making all sorts of theoretical discoveries and attempting to build a computer with those discoveries at the same time. So D-Wave is basically the modern equivalent of Tesla and Turing, etc., all wrapped up into one big package. So the fact that this stuff is turning out not to be a complete fairytale is more than enough to get me excited.


Yes, the problem is about expectations. D-Wave told everybody that they made a quantum computer, when in fact they've redefined the term to mean something else.

Well, that something else is working, and although it's a nice physics experiment, it still lacks any real-world utility.


A really long time ago when I was in grad school playing with genetic algorithms and simulated annealing, I implemented something that seems awfully similar to quantum adiabatic annealing. It worked as follows:

1. Assign all states of each variable equal probability.

2. Sample the living crap out of the search space with some sort of energy function for every possible configuration, scoring the discrete values of the variables of each individual sample by Boltzmann weighting of the energy function.

3. Every so often, update the weights for selecting each variable's potential values using the accumulated scores for each variable generated during step 2.

4. Repeat steps 2 through 3 until each variable converges to a single state.
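In rough Python, the loop looks something like the sketch below (purely illustrative, not my original code; energy(config) and the per-variable domains are stand-ins for whatever problem you plug in):

    import math
    import random

    def boltzmann_search(energy, domains, n_samples=2000, n_rounds=50, temperature=1.0):
        # Step 1: every value of every variable starts out equally likely.
        probs = [{v: 1.0 / len(dom) for v in dom} for dom in domains]
        for _ in range(n_rounds):
            # Accumulated Boltzmann-weighted score for each (variable, value) pair.
            scores = [{v: 0.0 for v in dom} for dom in domains]
            # Step 2: sample the search space, scoring each sampled value.
            for _ in range(n_samples):
                config = [random.choices(list(p), weights=list(p.values()))[0] for p in probs]
                w = math.exp(-energy(config) / temperature)  # Boltzmann weight
                for i, v in enumerate(config):
                    scores[i][v] += w
            # Step 3: fold the accumulated scores back into the sampling weights.
            for i, s in enumerate(scores):
                total = sum(s.values()) or 1.0
                probs[i] = {v: s[v] / total for v in s}
        # Step 4 (convergence test omitted): return the most probable configuration.
        return [max(p, key=p.get) for p in probs]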

I never published anything but I learned three things from this process.

1. It worked like gangbusters to find the space surrounding global optima of reasonably complex functions

2. It worked like crap to refine really good solutions into really great solutions.

3. It was horribly dependent on the underlying representation of the variables (i.e. if you mapped the input variables to a spin glass, it was just awful)

That and I suspect somebody already has a fancy name for exactly what I did back then...



Sounds an awful lot like Monte Carlo to me as well.


It's not quite like it, from what I just read; it almost seems like it's halfway between quantum Monte Carlo and a multi-armed bandit.


While most of the recent popular coverage has been full of hype, Aaronson provides a concise summary of what's been going on before taking on the hype, which appears to have left the poor man at his wit's end. Honestly, I was rather confused as well, since the skepticism D-Wave was met with at the beginning appears to have been replaced with a lot of hype without any mention of the actual physics of what's happening.

The position of most of the scientific community at the outset regarding D-Wave quantum computers was that it was uncertain what was going on at all. Nobody knew for sure if the D-Wave computers were really using quantum entanglement when they ran or not. Obviously a computer that does computations without doing at least some of the weird things allowed by quantum mechanics wouldn't be much of a quantum computer.

It appears that the D-Wave computers could indeed be taking advantage of entanglement. However since the D-Wave computers are not very isolated from their environment, the delicate effects they attempt to harness are sometimes disrupted when the computer interacts with its environment (aka decoherence to use the Quantum Mechanics term).

Overall it looks like D-Wave is making some progress on demonstrating that their computer really does harness what's allowed by quantum mechanics. This is exciting, though ironically they have not caught up to their own overstated claims of what their machine does. Perhaps with more work they can better isolate their computer from its environment and graduate from quantum annealing to reversible adiabatic quantum computing. Or maybe someone else working with some other physical system which has an intrinsically lower coupling to its environment might beat them to it.

Getting a speedup on a particular class of problem could have a great deal of practical importance, but building a scalable computer that fully takes advantage of everything allowed by the laws of physics is the holy grail of quantum computing, and it doesn't look like D-Wave is there quite yet. Still an exciting time for the field nonetheless.


Aaronson is fabulous. In my opinion, possibly the best technical author around these days. If you haven't already, his new book[1] is absolutely worth checking out, especially if you're unfamiliar with quantum.

---

[1] : http://www.amazon.com/Quantum-Computing-since-Democritus-Aar...


His book is seriously awesome. It's in a weird space technically -- not sure I'd recommend it to someone who didn't take at least a few math or CS courses as an undergrad. But if you have, wow, it really shines some new light on things you thought you probably understood well.


Check out the lecture notes as well; they are just as thoroughly awesome:

http://www.scottaaronson.com/democritus/


I'm glad there are people who understand these things. I hope someday, when the field of quantum computing congeals a bit more, we mere mortals can begin to understand it as well.


Quantum computing is actually surprisingly approachable. Unfortunately, the written material on it is still mostly in research-paper format, which is very rarely useful for mortals. However, many lectures on the subject are pretty approachable. I used to have a collection of links to Perimeter Institute videos, but they seem to have 404'd, and I don't have the time to dig up the new URLs.

If you are just looking for the theoretical basics, this YouTube series [1] is a good place to start.

While I do find these videos approachable, they are fairly math heavy. Most of the lectures I've seen explain things at a level that almost anyone can get something out of, and you don't miss out if you skip over the stuff that is beyond you.

For most of the talks, the math involved is only algebra and vectors, although the vectors are represented using bra-ket notation, which may throw you off.

[1] http://www.youtube.com/user/mnielsencourses?feature=watch


Glad you enjoyed the videos (I created them)! Unfortunately, that link has the videos in reverse order. Here they are in the intended order:

http://michaelnielsen.org/blog/quantum-computing-for-the-det...

The main thing that's needed to follow along is familiarity and comfort with basic linear algebra.


Yeah, I felt smart until I read this. Now I know how all those "business people" feel in meetings with their devs.


Part of the curse of being in technology (I'm an EE who does CS) is that it's very hard to ever be surprised. With most of the "new" stuff you see touted, you know right away how it's being done, and the principles are uninteresting.

It's very exciting when something comes along that seems genuinely magic. It's working and I haven't the faintest idea how. I love magic!


I know pretty much nothing of the physics and very little of the logic, but having listened to a few talks from the CS side of the field, it mostly looks like a combination of:

- Different hardware. This boils down to different gates/primitive operations, ala [1]. There are currently lots of different proposed methods of how to build the hardware, but I don't know that much about how the physical circuits in a traditional computer work, either.

- Different algorithms. See e.g. Shor's [2]. I think there are only one or two others that have been proven to be more efficiently solvable by quantum means, so a lot more work needs to be done in this area.

The field is in its extreme infancy, and I think it will always be more challenging to program quantum computers by a fairly wide margin.

[1] http://en.m.wikipedia.org/wiki/Quantum_gate

[2] http://en.m.wikipedia.org/wiki/Shors_algorithm


There are a few complications, like the impossibility of cloning a state, and the result being probabilistic. Also, there is the entire "let's operate on a mixture of all possible integers" thing, but when I got to read about it, it was surprisingly possible to understand.


Ars Technica have written a fantastic and approachable six-page article that introduces the concepts behind quantum computing. Definitely worth reading: http://arstechnica.com/science/2010/01/a-tale-of-two-qubits-...


Here's an extremely approachable chapter on quantum algorithms from an undergraduate algorithms course:

http://www.cs.berkeley.edu/~vazirani/algorithms/chap10.pdf


Holy Quantum Annealing Qubits Batman!

That article was not for the faint of geek.


I understand some of the words, but the article itself reads to me like ancient Greek.

Can someone explain to me what the hell the whole story is about?


QC is hard because it requires that the qubits be entangled with each other, but not interacting at all with the rest of the universe. This is comparable to trying to eat rice without a fork, or chopsticks, or allowing air to touch the rice. QC is very, very hard. It is entirely possible we won't see useful QC machines for decades, or even centuries.

D-Wave has attracted scepticism because in the many press releases they've issued over the last 14 years, they have repeatedly claimed to have made breakthroughs in quantum computing that would be perhaps 200 years before their time, if they weren't lying. Like a Chinese manufacturer saying they'll be selling an $11 phone next spring that has petabytes of ram, and can run a simulation of a human brain in real time, which the phone uses for accurate voice transcription.

D-Wave have finally, finally demonstrated a machine with a working qubit system. But the qubits are strongly coupled to the environment, and the machine runs many times slower than a classical electronic computer, even on the fake benchmarking problem that is the only thing the D-Wave machine can solve.

It is an important step on the road to usable quantum computing. It is not a general-purpose quantum computer. It can't be used for prime factorization, for instance.


Quantum annealing can be used to factor integers [1] (if those guys are correct), but the current D-Wave machines are probably unable to factor any interesting number. Here are some more hints [2] about what is and isn't possible.

[1] http://physics.stackexchange.com/questions/11063/can-quantum...

[2] http://physics.stackexchange.com/questions/10496/what-can-th...


Quantum annealing can be used to factor integers in the same way simulated annealing or genetic algorithms can: yes, you can reduce factoring to a SAT problem and then give it to a genetic algorithm for SAT, but it will probably take a geological era to factor even relatively small numbers. Quantum computers can factor N-digit numbers with roughly N^3 operations using Shor's algorithm, but no one has found a way to factor a number in polynomial time (even in theory) with a quantum annealing scheme like the one allegedly implemented by D-Wave. And I'm pretty confident that isn't possible either.


It's fairly simple

When the Matrix was written, the Lisp source code could not hope to compute the position of every quantum particle in the simulated universe in real time. So entanglement was invented as a means of linking quanta that were outside each other's relativistic cause-effect bubble, thus reducing the number of actual positional calculations needed across the universe to a manageable amount.

However, the Perl code used to stitch it all together allowed entanglement, in some cases, within relativistic reach. The hope is that this will allow us to force a buffer overflow and introduce our own qubits to be processed on the Matrix substrate.

This will eventually lead to a sex party in a cave hundreds of miles below the surface. No quantum mechanics professors or students will be invited to the party

HTH


BACKGROUND: Quantum computing

If you are able to get substantial numbers of "quantum bits" to stay entangled with one another, and hence behave in all those counterintuitive ways quantum things do, then you can (in principle) use the resulting machinery to perform some kinds of computations faster than any "conventional" computer can do them.

Making that actually happen is an enormous engineering challenge. No one's been able to do it with more than a very few bits, yet.

If they did, it would be a big deal: in particular, something called Shor's algorithm allows you (in principle) to factorize numbers efficiently on a quantum computer, and a big enough quantum computer would effectively break RSA encryption. As an indication of the progress that's been made in practical quantum computation: the first ever demonstration of Shor's algorithm in practice was in 2001 when some researchers at IBM managed to use it to find that 15 = 3x5; there was a major breakthrough in 2011, when the algorithm was used to find that 21 = 3x7.
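(Aside, for the curious: the only genuinely quantum part of Shor's algorithm is finding the period of a^x mod N; everything else is ordinary classical arithmetic. Here is a toy Python sketch of that classical scaffolding, with the period found by brute force, which is exactly the step a quantum computer speeds up:)

    from math import gcd
    import random

    def find_period(a, n):
        # Smallest r > 0 with a^r = 1 (mod n). Brute force here; this is the
        # step Shor's algorithm does efficiently on a quantum computer.
        x, r = a % n, 1
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor(n):
        # Classical skeleton of Shor's reduction from factoring to period finding.
        while True:
            a = random.randrange(2, n)
            d = gcd(a, n)
            if d > 1:
                return d, n // d            # lucky guess: a shares a factor with n
            r = find_period(a, n)
            if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
                continue                    # unlucky choice of a; try again
            p = gcd(pow(a, r // 2) - 1, n)
            if 1 < p < n:
                return p, n // p

    print(factor(15))   # e.g. (3, 5) -- the factors found in the 2001 IBM demo
    print(factor(21))   # e.g. (3, 7) -- the 2011 demonstration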

Some other varieties of public-key encryption are not, so far as anyone currently knows, broken once we have quantum computers. But exactly what quantum computers are capable of is a very open question.

BACKGROUND: D-Wave

There's a company called D-Wave that, for years now, has been touting what they claim is a quantum computer of an unconventional design, very different from what most quantum computing researchers have been trying to do (or trying to analyse the capabilities of). They have claimed that their machine works with hundreds of (qu)bits, whereas no one else is using more than, say, ten. They have attracted a lot of media attention and a lot of money.

(Their machine allegedly does something called "adiabatic quantum computing", which somewhat resembles the non-quantum optimization process called simulated annealing. It may or may not actually be more powerful than simulated annealing.)

Until very recently, D-Wave (despite their great media success) had provided no evidence at all that their machine actually does anything "genuinely quantum", or that it is able to do anything that can't be done just as well with conventional classical computers costing much, much less than their machine.

Scott Aaronson (a young but already eminent researcher in the theory of quantum computation) has long been a leading critic of D-Wave, countering their hype (and that of those in the media who like their story) with patient skepticism and careful analysis.

A couple of years ago, D-Wave (for the first time) offered some actual evidence for actual quantum effects having some actual contribution to the behaviour of their machine. Aaronson's post about this -- http://www.scottaaronson.com/blog/?p=639 -- said (among other things) "I hereby announce my retirement as Chief D-Wave Skeptic".

WHAT'S GOING ON NOW

In the last few days there have been breathless reports in the media about how D-Wave's machine has been found to be thousands of times faster (at solving a single particular problem, the one it was designed to solve) than conventional computers. These reports have been discussed here on HN, too.

So Aaronson is back to debunking D-Wave hype. First, though, the good news: it does appear that this latest work gives some evidence that D-Wave's machine is genuinely doing something quantum. Specifically, some researchers have taken the same kind of problems that D-Wave's machine solves, and compared the performance of D-Wave's machine with (1) an algorithm called "quantum Monte Carlo", which is approximately a simulation (on conventional classical computers) of the particular quantum thing it's alleged to be doing and (2) a conventional computer doing ordinary simulated annealing. They found that the performance characteristics -- which problems are easier to solve and which harder, and by how much -- match up well between D-Wave's machine and the simulation of the quantum process it's meant to be an implementation of, whereas classical simulated annealing doesn't match at all well. So it does seem pretty likely that D-Wave's machine is doing roughly what D-Wave say it is, and that this truly is a quantum effect. Yay!

The bad news, part 1: This particular quantum phenomenon turns out to be one that can be efficiently and accurately simulated using ordinary classical computers. In other words, in so far as D-Wave's machine is really doing that, it offers no prospect of a more-than-constant-factor speedup relative to conventional, "non-quantum" digital computers. (The quotation marks are because actually semiconductors, as used in all integrated circuits, are fundamentally quantum devices. But they don't exploit quantum coherence in the sort of way quantum computers do.)

The bad news, part 2: At the same time as one researcher was comparing D-Wave's machine against a bunch of classical optimization algorithms and finding that D-Wave's machine performs much better, another researcher was comparing it against a different classical optimization algorithm, namely (you guessed it) simulated annealing -- and finding that simulated annealing actually solves the problems just as well as D-Wave's machine, but much faster and on cheaper hardware.
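(For concreteness: "classical simulated annealing" here means a Metropolis-style local search along the lines of the Python sketch below, run on the same Ising-type objective the D-Wave machine handles. This is only a bare-bones illustration; the implementation Troyer's group actually used is far more carefully tuned.)

    import math
    import random

    def anneal_ising(J, h, n, sweeps=1000, t_hot=5.0, t_cold=0.05):
        # Minimize E(s) = sum_{i<j} J[i][j]*s_i*s_j + sum_i h[i]*s_i, s_i in {-1,+1},
        # with J a symmetric coupling matrix (zero diagonal) and h a list of local fields.
        spins = [random.choice((-1, 1)) for _ in range(n)]

        def delta_e(i):
            # Energy change from flipping spin i.
            field = h[i] + sum(J[i][j] * spins[j] for j in range(n) if j != i)
            return -2 * spins[i] * field

        for sweep in range(sweeps):
            # Geometric cooling schedule from t_hot down to t_cold.
            t = t_hot * (t_cold / t_hot) ** (sweep / (sweeps - 1))
            for i in range(n):
                d = delta_e(i)
                if d <= 0 or random.random() < math.exp(-d / t):
                    spins[i] = -spins[i]    # Metropolis acceptance rule
        return spins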

CONCLUSION

(For the avoidance of doubt, this is my summary of what Aaronson says; I think he is almost certainly right because he demonstrably knows his stuff, but I'm in no position to give any endorsement beyond that.)

D-Wave do seem to have a genuine quantum device. However, it doesn't seem to be a quantum computer in the sense of something that exploits quantum effects to do computation faster than a classical device can do by more than a constant factor, and the recent hype about their machine is very misleading.

[EDITED to fix a goof where I missed out half a sentence.]


http://dwave.wordpress.com/2013/05/08/first-ever-head-to-hea...

Geordie's reply to Scott's blog: "The majority of that post is simply factually incorrect.

As one example, Troyer hasn’t even had access yet to the system Cathy benchmarked (the Vesuvius – based system). (!) Yes Rainier could be beat by dedicated solvers — it was really slow! Vesuvius can’t (at least for certain types of problems). Another is he thinks we only benchmarked against cplex (not true) and he thinks cplex is just an exact solver (not true). These types of gross misunderstanding permeate the whole thing.

I used to find this stuff vaguely amusing in an irritating kind of way. Now I just find it boring and I wonder why anyone listens to a guy who’s been wrong exactly 100% of the time about everything. Update your priors, people!!

If you want to know what’s really going on, listen to the real experts. Like Hartmut.

http://googleresearch.blogspot.ca/2013/05/launching-quantum-...


Scott's gauntlet throw down: "Now, as for the difference between the 128-qubit machine and the 512-qubit one: based on the results Troyer explained to me (some of which, unfortunately, are not yet public), I see Geordie’s bet and raise it. Give Troyer and his postdocs full access to the Vesuvius-based system. If they can’t outperform it classically within a year or so, I will eat my words."


As far as I can tell, nothing in your second link is inconsistent with anything Scott has written.


Not mine, Geordie's!


I see now. I guess since there wasn't a closing quotation mark my brain just decided to invent one at a random point.


Dear Gareth,

Please include this as a blog post on your blog, so that it does not get lost.

Thanks again for the explanation.

-Kaushik


Other people did really well, but you delivered beyond all expectations. Thank you very much.


I have nothing to add, I'd just like to give a more emphatic thanks for that summary than a simple vote allows.


This was a great summary - thank you.


A company that claimed to have built a general-purpose quantum computer, which would speed up a lot of computationally-intensive tasks, may have actually built a weaker device that does something called quantum annealing, which allows it to solve only a subset of problems - those that involve finding the lowest value state in a field of lots of values.

Furthermore, it doesn't look like the device solves those problems any faster than well-optimized code running on normal computers, which cost about 10,000 times less.


Did they actually claim it was a general purpose computer? I have been following them for some time now (albeit not like a hawk) and everything I've seen from them was pretty clear in indicating that they took a different approach, i.e. adiabatic quantum annealing.


One could write many books on this subject. What exactly did you not understand?


When the teacher stands at the front of the class and says, "Are there any questions? ... Anyone? Everyone understood everything?" and the students remain silent, looking at each other and hoping the other students will ask a question, the students are doing so because they don't even know how to ask a question. Their confusion is so deep that they cannot articulate what their confusion is. They don't know what they don't know - but they know they don't know something.


I would just like a recap of the story for a layman. You know the sort of thing one would do to explain a computer system to his mother. The sort of thing Brian Greene does to explain string theory to the masses.


D-Wave is trying to make use of the quantum properties of matter in order to solve a particular kind of problem. They want to find the ground state (lowest energy) configuration of spins (a non-classical degree of freedom) in a 2D lattice composed of N atoms. Each atom is allowed to interact with its neighbors obeying rules that are defined by the way the computer is built.
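To make "find the ground state" concrete, here is a brute-force toy in Python (purely illustrative; real instances have hundreds of spins, which is exactly why you need annealing rather than enumeration):

    from itertools import product

    def ground_state(J, h, n):
        # Minimize E(s) = sum J[(i,j)]*s_i*s_j + sum h[i]*s_i over s_i in {-1,+1}
        # by checking all 2^n configurations. Only feasible for tiny n.
        def energy(s):
            return (sum(c * s[i] * s[j] for (i, j), c in J.items())
                    + sum(h[i] * s[i] for i in range(n)))
        return min(product((-1, 1), repeat=n), key=energy)

    # Toy instance: four spins around a square, antiferromagnetic couplings.
    J = {(0, 1): 1.0, (1, 3): 1.0, (3, 2): 1.0, (2, 0): 1.0}
    h = [0.0, 0.0, 0.0, 0.0]
    print(ground_state(J, h, 4))   # e.g. (-1, 1, 1, -1): every pair of neighbours anti-aligned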

They're essentially setting up a number of "qubits" (short for "quantum bits", the quantum analogue of classical bits) to naturally converge on the lowest-energy state. All the qubits need to be entangled (fundamentally linked) in order for the system to work. This is a necessary but insufficient condition for quantum efficiency gains, which is the point Mr. Aaronson is making. The computer is not, according to recent evidence, providing computational gains. He then goes into the details, which you might or might not be interested in.

If you have other questions, please ask. I'm not an expert, but I understand his argument.


> “stoquastic Hamiltonians,”

Why not "stoquastic Hamilsquonians?"


Is there a SDK? I'd like to port a NES simulator.

(there is a conjecture that anything remotely able to do an addition will be used for emulating a Nintendo console)


I can give it a shot; however, I'm no expert, and everything I write may be completely wrong.

Quantum annealing is essentially a quantum algorithm for searching in O(n^0.5) (as opposed to classical computing's O(n) )

D-Wave is attempting to build hardware that can perform quantum annealing

The news is that D-wave managed to perform quantum annealing

The author is sceptical that this is as impressive as it sounds, because there is no evidence D-Wave actually did quantum annealing with an O(n^0.5) algorithm, and it appears it was in fact slower than simulated quantum annealing (that is, a classical machine attempting the quantum algorithm, which of course can be no faster than O(n)).


Someone will ask how many Bitcoins this computer can mine any minute now!


We've had that already!

Some of the comments in this thread talked about it. (https://news.ycombinator.com/item?id=5697619)

I learned a bit, so it's not too bad. :-)


All of them?



PR departments have to release good publicity. Going too far is sleazy.

So is the recent D-Wave stuff just a combination of good PR and lazy journos, or are D-Wave being sleazy?

I guess it doesn't help that most people have no idea about quantum anything (let alone computing), or about massive parallelism, P=NP, etc.


There's an old joke a computer salesman once told me:

Q: What's the difference between a computer salesman and a used car salesman?

A: The used car salesman knows when he's lying.


Q: What's the difference between a classical computer salesman and a quantum computer salesman?


Is the quantum computer salesman in a superposition of truth-states?


A: Give me 10,000 answers and we'll select the one that's statistically significant.


Quantum computing reminds me of power from nuclear fusion: it's always 15 years away. I was awfully surprised when I read that D-Wave was already selling quantum computers. I'm glad that someone is willing to counter the hype. I find truth more interesting than fiction.


Coincidentally, D-wave's offices are located in Burnaby, BC, about a 10 minute drive from the offices of General Fusion, another company trying to crack the power-from-nuclear-fusion nut. They too have gotten a lot of flak for their claims about their technology, although I haven't heard much from that end of town lately.


Important comment from Geordie on the dwave blog:

"It’s absolutely possible that adding active error correction might help at some point, maybe even in the next generation. If that is the case, we’ll certainly try anything anyone can think of to make the processors work better! In the specific case of exhibiting scaling differences over conventional algorithms, I’d bet we don’t need error correction (at least at the current snapshot) but at the next level (say 2000+ qubits) maybe we might. If we do, no problem — we’ll find a way!"

http://dwave.wordpress.com/2013/05/08/first-ever-head-to-hea...


This thing wasn't delivered by aliens. For $10m isn't there an instruction book that describes what it's supposed to do? Can't they take it apart and look inside it?


We know what it's supposed to do. We don't know what it's actually doing - and neither do the people who built it, not with any certainty.

This is a relatively ill-understood discipline and a very fragile machine. 'Take it apart and look inside' won't tell you much about what happens when you turn it on. In that regard it might as well have been built by aliens. (Okay, it's probably not quite that bad.)


You mean collapse the superposition of states through observation?


OK, so the problem D-Wave has itself defined can be run faster on classical machinery. Does this also hold true for other problems run on the D-Wave?


The way one solves other problems on the D-Wave is by reducing them to the D-Wave problem, so save in extraordinary circumstances the answer will be "Yes."


The "D-Wave problem" (as Aaronson calls it) is kind of peculiar.

It's a certain kind of binary optimization problem that comes up in models of magnetic media ("Ising model"). In the original magnetic context, the two states are N and S. Each magnetic element within a 2D array of states jiggles around locally trying to align with its neighbors, and by doing so, each state-flip influences a global energy function.

The same problem comes up in image processing. The probabilistic equations describing segmentation of on-object versus off-object regions are the same mathematically as the magnetic energy function of the Ising model.

This particular problem is important in its niche, but it's really not that general. It's possible that even something as trivial (analytically) as going from 2 to 3 states will not generalize well to the D-wave hardware.

And it's possible that, by the time you transform a given problem (say, graph matching) to encode it in this model, you end up with either (a) something with more variables than the original problem, (b) something with exotic parameter settings that the D-wave hardware cannot handle, or (c) a model having an energy surface that is not well-suited to the particular D-Wave computational mechanism (annealing).
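As the simplest concrete example of such an encoding, max-cut (rather than graph matching) maps onto the Ising form almost directly; richer problems need penalty terms and auxiliary variables, which is where the blowup in (a) tends to come from. A rough sketch:

    def maxcut_to_ising(edges):
        # With E(s) = sum over edges of J_ij*s_i*s_j, putting J_ij = +1 on every edge
        # makes a minimum-energy spin assignment a maximum cut: the spins -1/+1
        # name the two sides of the cut, and each cut edge contributes -1 to E.
        return {e: 1.0 for e in edges}

    def cut_size(edges, spins):
        # An edge is cut when its endpoints get opposite spins.
        return sum(1 for i, j in edges if spins[i] != spins[j])

    # Toy graph: a 5-cycle, whose maximum cut has size 4.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    J = maxcut_to_ising(edges)
    # Handing J (with h = 0) to any Ising solver -- brute force, simulated
    # annealing, or annealing hardware -- and reading off the signs gives a max cut:
    spins = {0: -1, 1: 1, 2: -1, 3: 1, 4: 1}
    print(cut_size(edges, spins))   # 4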

In some applications, the use of linear programming relaxations (I have not read the detailed paper, but I assume this is the CPLEX result discussed in the post) is much slower than annealing, and in others, LP relaxations are more competitive. Sometimes the LP relaxations give much better results, but they tend to be much, much slower for the Ising problem.


Also note that the topology of D-Wave's architecture strongly limits the Ising model instances you can implement, so the blowup in instance size happens twice: once going from your problem to the Ising model, and again when mapping that Ising problem onto D-Wave's architecture.


That question is kind of meaningless.

The D-Wave computer solves a problem. That problem defines the computer. It's a general problem (I don't know if it's Turing complete), thus it's a general computer.

The problem is that a classical computer can solve the problem faster. In other words, a classical computer can emulate the D-Wave faster than the original hardware.


>>>That question is kind of meaningless.

Typing it, I knew I ran the risk of the answer being circular, but I still had to ask.


Can anyone attempt to explain this in layman's terms? I can't even figure out if this is a real paper or not!


I'm not an expert on quantum computing, but it looks like two points to me:

1) D-Wave devices have finally been proven to meet a fundamental requirement to do quantum computing. This is a significant scientific advancement.

2) Despite this, they don't seem to answer any computational questions more quickly than commodity electronic computers.

Again, I'm not an expert. The author's reasoning does seem sound. Still, I'm not qualified to assess either of those claims directly.


> a fundamental requirement to do quantum computing

Yes, and it's a necessary but not sufficient requirement. They don't have a quantum computer by the usual meaning of that term, but they have a different kind of computer that may or may not do something better than a normal computer.



[deleted]


looks like Reddit's leaking again...



