I can't comment on the theory side of quantum computation (BQP vs. its classical counterpart BPP), but I would like to make an engineering comment about why I think quantum computation will be very difficult to scale, maybe even impossible.
To have good quantum registers (long memory time), you want each qubit to interact as little as possible with its environment. This means isolation (qubits far apart) or encoding the information in systems that interact weakly with each other.
However, to have good quantum computation (quick), you want strongly interacting qubits.
The two goals are diametrically opposed, so it's not clear how we can achieve both of them in a single system.
IMHO, the first interesting things that will come from quantum information science will be about quantum simulations of physical systems and quantum communication. Computation might come later... but I'm sure, by then, we'll have moved away from RSA and ElGamal.
It's a fair argument that the problem is unlikely to be solved because the solution wants to go in opposite directions (both weakly and strongly interacting). However, let me point out that there are many technologies that face strong tradeoffs and yet have been successful. Consider hard drive memory, for example. You want it to be easily switchable so that you can access bits quickly with little power. But you also want it to be hard to switch, so that it remembers your data for a long time. The engineering challenge is achieving both seemingly opposite goals at once. The success of memory technologies (among others) makes me optimistic that this is a solvable problem.
As a counterpoint, I would point to how far we have been able to scale classical computers (over almost a century of work). We have gotten to the point where we are pushing up against fairly fundamental limits of physics (the speed of light, quantum effects, etc.).
Given enough time, we will (through incremental improvements) refine quantum computers towards their theoretical limits. We will likely see practical applications long before we reach these limitations.
> The obvious solution is to write down the initial vector of size N=2^50 and start applying the quantum gates to the vector. […] The size is “just” a petabyte—or actually 1/8 of a petabyte.
Um, NO. A quantum system is described by a complex probability amplitude for each of those states. You need more like 2^57 bits to represent that: 16 petabytes.
Source: I've written a 24-qubit quantum simulator.
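For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch in Python (assuming double precision, i.e. two 64-bit floats per amplitude; the function name is mine):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store a full state vector for n qubits.

    A pure state over n qubits has 2**n complex amplitudes; at
    double precision each amplitude is two 64-bit floats = 16 bytes.
    """
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(50) / 2**50)  # 16.0 -> 16 pebibytes (the "16 petabytes" above)
print(statevector_bytes(24) / 2**20)  # 256.0 -> 256 MiB, which is why 24 qubits is easy
```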
IIRC the models of computation with pure states and mixed states are equivalent in power and efficiency (perhaps up to a polynomial blowup). In fact, you don't even need complex numbers. This is probably Lipton's mindset, seeing as he's a theorist.
No, I'm referring to pure states (a vector of complex amplitudes), not mixed states (a density matrix of complex entries). The latter would need 2^107 bits.
Every useful quantum algorithm manipulates the complex amplitudes of the system. You cannot observe them directly in a real quantum system, but you must still track them in a classical simulation.
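To make the "you must still track them" point concrete, here is a toy single-qubit example in numpy (a toy illustration of my own, not production simulator code): two states with identical measurement statistics diverge after one more gate, so the simulation has to carry the unobservable relative phase.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
Z = np.array([[1, 0], [0, -1]])                # phase flip

plus  = H @ np.array([1, 0])        # |+> = (|0> + |1>)/sqrt(2)
minus = Z @ plus                    # |-> = (|0> - |1>)/sqrt(2)

# Both states give 50/50 measurement statistics right now...
print(np.abs(plus) ** 2, np.abs(minus) ** 2)   # ~[0.5 0.5] for both

# ...but one more Hadamard sends them to different definite outcomes,
# so a simulation that threw away the relative phase would get this wrong.
print(np.abs(H @ plus) ** 2)    # ~[1, 0]  -> always measures 0
print(np.abs(H @ minus) ** 2)   # ~[0, 1]  -> always measures 1
```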
So you're saying it's because each probability is a 7-bit number? I guess that's a valid issue, but it's still very "well, actually." So to counter with my own "well actually," you don't need complex probabilities, and the computation only matters with probability bounded away from 1/2. Surely you could do enough engineering tricks to make that happen and keep it at about a petabyte.
No, it's because each probability is a 128-bit number (2^7 = 128; really two 64-bit numbers).
Yes, you very much DO need complex amplitudes. The ability to phase-shift qubits is key to most basic quantum algorithms.
I'm not sure what you mean by "the computation only matters with probability bounded away from 1/2".
Almost by definition, the most "interesting" quantum algorithms are those which are most difficult to simulate classically. For example, you can use "tricks" to greatly speed up simulation if your states are separable, but then you're not really harnessing the full power of the quantum model. The most powerful quantum algorithms entail maximal entanglement and worst-case simulation performance.
Now I think you don't understand quantum computing as well as you claim. The standard model of quantum computing uses complex numbers, but it is well known that you can do everything with just real numbers instead. See exercises 10.4-10.5 of Arora-Barak [1]. The key insight in quantum computing is not that complex numbers are required, but that the invariant across the computation is the 2-norm of a vector (as opposed to the 1-norm, as in the classical stochastic model of randomized computation).
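For the skeptical, here is a minimal numpy sketch of that reduction as I understand it (my own illustration, not the exact Arora-Barak construction): replace each complex amplitude a+bi with the pair (a, b) and each unitary entry with the 2x2 block [[a, -b], [b, a]]; the 2-norm, which is all that measurement statistics depend on, is preserved.

```python
import numpy as np

def realify_vector(psi):
    """Interleave real and imaginary parts: a+bi -> (a, b)."""
    out = np.empty(2 * len(psi))
    out[0::2], out[1::2] = psi.real, psi.imag
    return out

def realify_unitary(U):
    """Replace each entry a+bi with the 2x2 block [[a, -b], [b, a]]."""
    d = U.shape[0]
    R = np.zeros((2 * d, 2 * d))
    for j in range(d):
        for k in range(d):
            a, b = U[j, k].real, U[j, k].imag
            R[2*j:2*j+2, 2*k:2*k+2] = [[a, -b], [b, a]]
    return R

# Example: the S gate (a 90-degree phase shift) acting on |+>.
S = np.array([[1, 0], [0, 1j]])
plus = np.array([1, 1]) / np.sqrt(2)

complex_result = S @ plus
real_result = realify_unitary(S) @ realify_vector(plus)

print(np.allclose(realify_vector(complex_result), real_result))   # True
print(np.linalg.norm(plus), np.linalg.norm(realify_vector(plus))) # both ~1.0: 2-norm preserved
```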
Further, the standard quantum model of computation is probabilistic in the sense that you "compute" something if your program outputs the right answer with probability at least 2/3. But 2/3 is not special, you just need it to be some probability bounded away from 1/2, in the sense that it can't get closer and closer to 1/2 as the input size grows.
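To see why 2/3 is not special, here is a quick Monte Carlo sketch (toy numbers of my own choosing) of the standard repeat-and-take-the-majority amplification: any constant success probability above 1/2 can be boosted arbitrarily close to 1.

```python
import random

def noisy_answer(p_correct=2/3):
    """Stand-in for one run of a bounded-error algorithm (1 = correct answer)."""
    return 1 if random.random() < p_correct else 0

def majority_of(runs, p_correct=2/3):
    votes = sum(noisy_answer(p_correct) for _ in range(runs))
    return votes > runs // 2

# The error probability drops quickly as the number of repetitions grows.
for runs in (1, 11, 51, 101):
    trials = 20_000
    wins = sum(majority_of(runs) for _ in range(trials))
    print(runs, wins / trials)   # climbs from ~0.67 toward ~1.0
```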
So it's certainly plausible that you could take advantage of this to reduce the precision enough to get to the size bound Lipton mentioned, especially if, as implied, you had the might of a hundred Google engineers working on it. And the guy is so freaking smart and experienced in theoretical computer science that chances are he thought of all this and considered it not interesting enough to spell out for the people who will say "well actually."
> Quantum Computers have been proved to be more powerful than classical.

Wrong.
I take issue with that claim.
Take for example, the Deutsch-Jozsa problem.
Given a function f from n bits to one bit, with the promise that f is either constant (zero on all inputs, or one on all inputs) or balanced (zero on exactly half the inputs and one on the other half).
To tell which case it is, a deterministic classical algorithm needs 2^(n-1) + 1 evaluations of f in the worst case: you may have to test it on half the inputs, plus one.
With the added power of a quantum computer, we can solve it with only one call to f.
Boom, there is exponential speedup with quantum computing. This is similar to what lies behind Shor's algorithm for factoring.
Check it out: Wikipedia has diagrams that I can't put in here.
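In lieu of the diagrams, here is a small classical simulation of the circuit (numpy; my own toy code): Hadamards on every qubit, one phase-oracle call to f, Hadamards again, then look at the amplitude of the all-zeros string, which is 1 if f is constant and 0 if f is balanced.

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Classically simulate the Deutsch-Jozsa circuit on n qubits.

    Note: this simulation evaluates f on every basis state, but the
    quantum circuit it models queries the oracle only once.
    """
    N = 2 ** n
    # H^n on |0...0> gives the uniform superposition.
    state = np.full(N, 1 / np.sqrt(N))
    # Phase oracle: |x> -> (-1)^f(x) |x>.
    state *= np.array([(-1) ** f(x) for x in range(N)])
    # H^n again: the amplitude of |0...0> is the average of all amplitudes.
    amp_all_zeros = state.sum() / np.sqrt(N)
    return "constant" if np.isclose(abs(amp_all_zeros), 1) else "balanced"

n = 4
print(deutsch_jozsa(lambda x: 0, n))      # constant
print(deutsch_jozsa(lambda x: x & 1, n))  # balanced (parity of the last bit)
```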
If I understood correctly, the posted article addresses the class of black-box problems, of which Deutsch–Jozsa is an instance, and calls it an unfair setting.
Specifically:
> The fair comparison is to allow the classic machine the same ability to see the circuit representation of the box. The trouble now is the proof disappears.
Isn't "quantum computers are proven to be more powerful than classical" proof that BQP != P, which would imply P != PSPACE, which hasn't yet been proven?
Not quite. Suppose there is some problem whose best classical algorithm is C and whose best quantum algorithm is Q, with Q in o(C) [0]. Given that this relationship cannot exist in reverse (that is, quantum computers are at least as powerful as classical ones), this would mean that quantum computers are more powerful.
The statement BQP != P means that there exists a problem that a quantum computer can solve in polynomial time but a classical computer cannot. This is a stronger requirement.
For example, consider Grover's algorithm, which can search an unsorted list in O(sqrt(n)) time, strictly better than the O(n) time it takes a classical algorithm.
[0] Note that the little-o means strictly less, whereas big-O means less than or equal to.
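For anyone curious what that looks like, here is a tiny numpy simulation of Grover's iteration (list size and marked index chosen arbitrarily by me): after roughly (pi/4)*sqrt(N) rounds of "flip the marked amplitude, then reflect about the mean", nearly all the probability sits on the marked item.

```python
import numpy as np

N = 2 ** 10            # size of the unsorted "list"
marked = 123           # index we are searching for (arbitrary choice)

state = np.full(N, 1 / np.sqrt(N))                # uniform superposition
iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~25 for N = 1024

for _ in range(iterations):
    state[marked] *= -1               # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state  # diffusion: reflect every amplitude about the mean

print(iterations)           # ~sqrt(N) oracle calls, not N
print(state[marked] ** 2)   # probability of measuring the marked item, ~1.0
```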
Interesting title. It is wordplay on the old movie title "Sex, Lies, and Videotape", where 'videotape' gets changed to the topic under discussion.
Of those three, 'videotape' is the only one that is obsolete, while the others are timeless. Even as the original phrase becomes increasingly obscure, the wordplay remains fresh because the obsolete part is invisible.
> "With all due apologies to Soderbergh his title seems perfect for our discussion. There may be no sex, but there is plenty of sizzle surrounding quantum computation."
Right, the title "sex, lies and videotape" is perfect for the discussion. There's no sex and there are no videotapes, but there are quantum computers, along with the author's desperate attempt to cram some whimsy into his post.
I'll sum up the problem as I see it: we're comparing real classical computers, limited by the realities of manufacturing and so on, with theoretically perfect quantum computers.
Well, the idea of something is always better, faster, and sexier than the reality (see, I worked sex back into this). As we get closer and closer to building computers with larger qubit counts, we'll start hitting all sorts of engineering issues that distance us from that perfect theoretical ideal of a 100% efficient quantum computer.
And it may turn out that this idealism was all the advantage quantum computers had in the first place, and it was all for naught. Not that we'll stop trying, of course.