A Link Between the P≠NP Problem and the Quantum Nature of Universe (medium.com/the-physics-arxiv-blog)
63 points by CarolineW on July 24, 2016 | hide | past | favorite | 27 comments


At the risk of misunderstanding...this article seems completely wrong to me.

1. Technical nitpick: NP-Hard problems are not 'mathematically equivalent'. I assume the author means to refer to NP-Complete problems here. NP-Hard is a larger complexity class that may include things that are harder than NP and not equivalent at all to the set of NP-Complete problems.

2. The existence of a physical NP-Complete oracle does not imply that P = NP with respect to classical computing. If tomorrow we discover a physical process that solves travelling salesman in linear time, that does not necessarily mean that P = NP, it simply means we now have a physical oracle for solving the problem.

This is the same as with traditional quantum computing. We didn't fold BQP into P when we discovered quantum computing.


I think you're being unnecessarily pedantic. Yes, the P v. NP question pertains to Turing machines (and equivalent models), but in this case it is clear that there is an implied appeal to Physical + Complexity-Theoretic/Efficient/Extended Church-Turing Thesis. The paper explicitly says it relies on Cobham's thesis, which can be interpreted (if you squint) to encompass both extended interpretations of C-T.

You're right with regards to 1, but the actual paper does not make that mistake. Instead, it states correctly that the problem is NP-hard and therefore a feasible solution (i.e. P) would imply P = NP.


Regarding 1:

I think they might have meant that if you've shown the problem to be in NP, then if it is NP-hard then [it is NP-complete]. I'm not sure though.


That's not right.

The names of the classes can be confusing, but ${CLASS}-hard means "no harder than ${CLASS}", not "no easier than ${CLASS}".

So, for instance, every problem in P is NP-hard.

${CLASS}-complete, on the other hand, means that its members are both:

1. ${CLASS}-hard and 2. No easier than any other problem in ${CLASS}.


Sorry, you are wrong. Your description of ${CLASS}-hard is precisely backwards. For example, see Wikipedia: https://en.wikipedia.org/wiki/NP-hardness

"NP" itself already means "No harder than NP-complete" [under Karp reduction; Cook reduction perhaps tracks ordinary language "hard" better, but, whatever]. "NP-hard" means "No easier than NP-complete". "NP-complete" means, well, "Exactly as easy as NP-complete" [again, under Karp reduction].


You are right; I got it backwards.


Oh! Thank you for correcting me on that.

I had based my understanding on images like this one, which is in the article:

https://cdn-images-2.medium.com/max/1800/1*_iqZ21FEdk-A_A0-j...

Which I thought said that everything at least as hard as everything in NP was NP-hard.

Thank you for correcting me on that.

Is there a term for "every problem such that being able to solve it in P time would allow you to solve NP problems in P time" (or, whatever is the correct way to phrase that)?


Grandparent post is actually wrong. NP-hard roughly means "no easier than NP". Basically grandparent got it backwards.


Alright, thanks.


That statement is true of NP-complete problems; if you can solve any NP-complete problem in P time, then you can solve every problem in NP in P time.


I thought NP-complete implied that the problem was also in NP, in addition to satisfying that statement. Does it not?


As Chinjut already pointed out, procrastitron got it wrong. So let's set things straight.

1. A problem is in NP if it is a decision problem and a polynomial-time verifiable proof exists if the answer is yes.

2. A problem is NP-hard if every problem in NP can be reduced to it in polynomial time.

3. A problem is NP-complete if it is NP-hard and in NP.

Things to note: NP is about decision problems, problems with a yes or no answer. Finding the shortest travelling salesman tour is not NP-complete; it is not even in NP, because it is not a decision problem. And even ignoring that it is not a decision problem, there is also no polynomial-time verifiable proof. You could show me your solution, but how would I verify that it is indeed the shortest tour without computing the shortest tour myself and comparing it with your solution?

The decision version of the travelling salesman problem - does a tour with cost less than X exist? - is in NP. It is a decision problem, and as proof you can just show me the tour; I can check that it is a valid tour with cost less than X in polynomial time. Further, if there is no tour with cost less than X, then you don't have to be able to prove this to me, only if the answer is yes. How would you prove to me that there is no tour with cost less than X? Decision problems with proofs if the answer is no are in co-NP. A problem can have proofs in both cases; then it is in the intersection of NP and co-NP.
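To make that verification claim concrete, here is a minimal sketch (function and variable names are illustrative, not from the thread) of a polynomial-time verifier for the decision version of travelling salesman:

```python
def verify_tour(dist, tour, budget):
    """Check that `tour` visits every city in the distance matrix
    `dist` exactly once and that its total cost is less than `budget`.
    Runs in polynomial time, which is what puts the decision version
    of travelling salesman in NP."""
    n = len(dist)
    # The certificate must be a permutation of all n cities.
    if sorted(tour) != list(range(n)):
        return False
    # Total cost of the closed tour.
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return cost < budget

# A yes-instance: tour 0 -> 1 -> 2 -> 0 costs 1 + 1 + 2 = 4 < 5.
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]
print(verify_tour(dist, [0, 1, 2], 5))  # → True
print(verify_tour(dist, [0, 1, 2], 4))  # → False
```

Note the asymmetry described above: checking a certificate for a yes-answer is cheap, but no comparably cheap certificate is known for "no tour with cost less than X exists".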

A problem is NP-hard if you can reduce every problem in NP to it in polynomial time, i.e. if you can solve an NP-hard problem, then you can solve all problems in NP with at most a polynomial slowdown. In other words, NP-hard problems are at least as hard as the hardest problems in NP. An NP-hard problem is not required to be a decision problem, does not have to have a polynomial-time verifiable proof, and does not even have to be decidable.

Finding the shortest travelling salesman tour is NP-hard and an example of a problem that is neither a decision problem nor has a polynomial-time verifiable proof. But if you can find the shortest tour, then you can trivially solve the decision version of the travelling salesman problem, which is NP-complete. A problem is NP-complete if it is NP-hard and in NP, i.e. NP-complete problems are the NP-hard decision problems with a polynomial-time verifiable proof if the answer is yes.
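The "trivially solve the decision version" step, and the reverse direction, can be sketched like this (hypothetical oracle names; assumes integer tour costs):

```python
def decide_tsp(shortest_cost_oracle, dist, budget):
    # With an oracle for the optimisation version, the NP-complete
    # decision version ("is there a tour with cost < budget?")
    # reduces to a single comparison.
    return shortest_cost_oracle(dist) < budget

def shortest_cost_from_decisions(decide, lo, hi):
    """Recover the optimal integer tour cost from a decision oracle
    `decide(X)` answering "is there a tour with cost < X?".
    Invariant: decide(lo) is False and decide(hi) is True, so the
    optimum lies in [lo, hi). Uses O(log(hi - lo)) oracle calls."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if decide(mid):   # optimum < mid
            hi = mid
        else:             # optimum >= mid
            lo = mid
    return lo

# Simulate with a known optimum of 7: decide(X) answers "7 < X?".
decide = lambda X: 7 < X
print(shortest_cost_from_decisions(decide, 0, 100))  # → 7
```

This is the standard sense in which the optimisation and decision versions are polynomially equivalent, even though only the decision version can literally be NP-complete.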


Thank you very much.

This is much closer to what I had thought before I read what procrastitron said (though I was missing a number of the details such as things regarding NP vs co-NP ).

Thanks to you and Chinjut for clearing this up.

Also, this seems like this means that in my first comment my statement of " if you've shown the problem to be in NP, then if it is NP-hard then [it is NP-complete]." was correct? (judging based on your statement that "A problem is NP-complete if it is NP-hard and in NP", which seems to roughly say what I said in my first comment).

Am I correct in thinking that these say basically the same thing?


I would say it is correct; it just seems a somewhat strange scenario to me. You know a problem is NP-hard, and it of course has to be a decision problem or it could never end up in NP, and then you show that it is indeed in NP, i.e. you show that it also has a polynomial-time verifiable proof, which makes the problem NP-complete. That does not sound impossible, but I am not aware of, nor can I think of, an example where it was hard to figure out that such a polynomial-time proof is possible. I would love to know if there are examples, but I really don't know.


Old previous discussion: https://news.ycombinator.com/item?id=8081384 (59 points, 731 days ago, 74 comments)

First comment by Dunnorandom

> For anyone interested, here's Scott Aaronson's response to the paper: http://www.scottaaronson.com/blog/?p=1767#comment-103591 .


I feel that this is so important that I'll just quote the central part of Aaronson's response:

  OK everyone: At several people’s request, I’ve now taken
  a look at arXiv:1403.7686, and I can confirm that it’s
  complete garbage.
Aaronson explains why, too.


> As additional problems, the author appears to conflate the P vs. NP problem with the question of whether NP-complete problems can be efficiently solved in the physical world, a common novice mistake.

This occurred to me as well just reading the article. Take an algorithm taking roughly G * x time and another taking e ^ (x/G), where G is Graham's number. Just because the first grows linearly and the latter exponentially doesn't mean the first will be quicker for any number physicists need to deal with.
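A quick sketch of that point, using 10**100 as a stand-in constant since Graham's number itself cannot be represented:

```python
import math

# Stand-in for an absurdly large constant; Graham's number is far too
# large to store, but the point survives with any huge G.
G = 10.0 ** 100

def linear_cost(x):
    return G * x            # asymptotically "fast": O(x)

def exponential_cost(x):
    return math.exp(x / G)  # asymptotically "slow": O(e^x)

# For any input size physicists will ever meet, the "exponential"
# algorithm is astronomically cheaper than the "linear" one.
x = 10 ** 9
print(exponential_cost(x) < linear_cost(x))  # → True
```

For x far below G, e^(x/G) ≈ 1 + x/G, so the "exponential" cost barely budges while the "linear" cost is already astronomical; asymptotic class alone says nothing about physically relevant input sizes.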


Curiously, he singled out D-Wave as an example of other people who made the same mistake the author supposedly makes. But since then, quite recently in fact, I read that the D-Wave quantum computer has finally been recognized as really speeding up calculations, as it should if it were indeed a quantum computer. I wonder to what extent that could invalidate Aaronson's criticism.


Before reading the page I thought the combination of headline and domain looked fishy, so I decided to check the HN comments first. It isn't that I have anything against Medium but I do not trust it enough for a claim like that and the following potentially demanding but underwhelming read.

I am always astonished by how good humans are at pattern matching. And also: I live in a severe filter bubble.


This all sounds a bit like sophistry to me.

That we don't observe quantum superposition in the macro-world may have more to do with our senses (or our instruments, both physical and theoretical) than whether it is there or not.

That we can't imagine a computer that can brute-force macroscopic quantum superposition is more a flaw of the imagination. We could imagine a non-brute force solution after all. The problem is we can't imagine it clearly enough to know its details, and actually implement it. Nothing makes the imaginary 10^(-3x10^23) sec per op' computer necessary except for an argument based on it.

Finally: obviously the quantum state of macroscopic objects is being "computed" by the objects themselves as they go about existing. With a bit more of a stretch of the imagination, people have suggested the whole universe is a simulation. If that is true, maybe the universe itself doesn't have the resources for a computer that can calculate the quantum superposition of macroscopic objects, but something can, except it's outside the universe and we'll never know it.


> maybe the universe itself doesn't have the resources for a computer that can calculate the quantum superposition of macroscopic objects

If we indulge this line of imagination a little further, where some types of quantum state 'computation' would incur more stress on the simulation than others, then the simulation's computation resources would be something that could be manipulated by forcing the 'right kinds of computation.' - i.e. maybe it could be DOS'd!

It's a fun thought.

I strongly agree with your advocacy of an open mind when it comes to what we lack the tools or imagination to observe yet.


I read it, but I think I must have misunderstood it. Is the argument based on the fact that it cannot be calculated in a reasonable amount of time?

I'm not even sure why the universe would have limits on computation. To put it bluntly: the mere existence of the universe/reality is magical, not sure why magic would have any limits in computation power (not saying there isn't any).


Why would it have any limits at all (like the speed of light or the Planck constant)? Yet, it does.


Let me get it straight. It sounds a lot like P=NP means there are no computer algorithms with O(n^k)? Trying to simplify the explanation might be just as impossible. Obviously I have no clue.

Are there other complexity classes that were shown to be nonexistent on deterministic machines?


This supports "universe is a simulation" arguments, no?


> And how does the universe decide whether a system is going to be quantum or not?

A system is quantum if its state has not yet reached consensus. A system is non-quantum if consensus on it has been reached.


Firstly, this should show a date of 2014.

Secondly, as noted in several others' comments, there are many problems with the article's basis, namely Bolotin's arXiv preprint, also from 2014.

Thirdly, the Medium author originated this sentence (it isn't from Bolotin): "What of the collapse of stars, which are definitely classical objects, into black holes, which may be quantum ones?", which is wholly backwards.

Stars are definitely quantum objects; black holes may be classical ones (at least in contrived circumstances).

When you zoom in closer and closer when examining a star you see more and more evidence of quantum mechanical processes. Even at long distances, the presence and evolution of spectral lines and neutrinos give away QM processes within the star.

Compact massive objects that are extremely unlikely to be black holes give even more evidence of QM processes and behaviours that do not arise in classical mechanics.

Black holes probably have some quantum mechanical process inside preventing the existence of a physical singularity and retaining some imprint of at least the matter that was near the centre of mass at the time of the formation of the event horizon. That is the subject of a great deal of current research.

On the other hand, there is the no-hair conjecture. Under that, we can model a spherically symmetric, stationary, isolated black hole -- this is plausibly approximated in our universe in the extremely distant future by black holes that have been ejected from a galactic cluster and now inhabit their own essentially unshared Hubble volumes. "Stationary" here means that it does not move against a reasonable choice of coordinates (e.g., it stays at X=0,Y=0,Z=0 as one varies T under Cartesian 3+1 coordinates on an extended patch of the Hubble volume the black hole occupies).

If no-hair is accurate, the only observables accessible at a reasonable distance outside the horizon of such an idealized (and again, plausibly physical) black hole are reduced to angular momentum (3 components), electric charge, and mass-energy. That's as classical as you can get.

Under a hard interpretation of no-hair, no other fundamental observables remain even in the dynamical spacetime just outside the event horizon, or even through the event horizon. It is possible that nature doesn't care for our ideas of unitary evolution or concerns about physical infinitesimals and infinities with respect to lengths and densities, and black holes can be wholly classical all the way down.

Since such black holes may exist, black holes may be classical objects.

Such objects do not reveal what was inside the black hole at the time of its formation. Suppose a critical mass star is about ready to undergo gravitational collapse and we throw in a particle of energy E, which reaches the centre-of-mass just as the horizon forms, or we throw in two particles stacked on top of each other that have energy 1/2 E each that reach the CoM just as the horizon forms. Under no-hair, we cannot even in principle distinguish the resulting black holes from outside the horizon, even in the infinite future. (But if we throw such things into a normal star, or even a neutron star, and if we had sufficient spectrometer resources, then we could tell the difference between the two cases).

There are lots of reasons to think black holes aren't fully classical. No-hair black holes might never arise (e.g., Hawking, Perry and Strominger 2016 argue that idealized black holes have "soft hair": extremely long-wavelength photons generated in the spacetime very near the horizon and determined by the matter inside the horizon at the time it formed); other hair might exist too (e.g. ER=EPR from Susskind & Maldacena 2013); final black hole evaporation might reveal the difference between the two-particle and one-particle pictures in the paragraph above; and many other solutions have been suggested over the past forty-three years since the publication of Misner, Thorne & Wheeler's Gravitation.



